Hacker News | alaties's comments

I think Haskell is very akin to C++ in terms of learning curve.

There are a ton of concepts you have to learn up front to even get started. The compiler is generally unhelpful if you've never dealt with it before. There are a lot of approaches to solving a problem, and the language does not make it clear which approach is preferred when. Unless you're a masochist, these qualities can be quite off-putting.

I think a language like OCaml is a good entry point into functional programming. The compiler errors are really useful. The language pushes you towards a sane functional approach. The learning curve is pretty smooth, too.

My alma mater seems to agree and teaches FP with OCaml in its upper-level CS intro course (https://www.seas.upenn.edu/~cis120/current/index.shtml).


Built an automatic outdoor watering system using a zwave-controlled power outlet, a 120v AC solenoid valve, and a simple linux server running openHAB. I trigger the watering system using a script that checks the weather and sunset time for that day. If there was no rain and it's 2 hours past sunset, the script turns the watering system on for a set period.

Makes supporting a large garden while frequently traveling doable.


What solenoid did you use? I was looking into this and struggled to find an affordable valve for the water, the zwave switch is simple and I love that part of this solution!


I've had good luck with the 3/4" brass valves from both U.S. Solid and HFS. Both are 110V AC and run $20-30 US on Amazon. You'll have to solder a male plug onto both of these, but that's pretty straightforward. I also recommend waterproofing the wiring with some liquid electrical tape.

I've been messing with a 12V DC valve by SparkFun as well. It runs around $8, but it doesn't hold up well under high pressure. Great for a low pressure/gravity fed system though.

Also, you don't have to use zwave for the plug if you don't want. There are all sorts of remote controlled outlets now for whatever wireless standard/protocol you want to do your automation over.


Thanks! My entire house is Smartthings / Zwave...so the plug in this case will be perfect for me. Sounds like a fun project overall, appreciate the reply!


Brendan Gregg's [linux perf](http://www.brendangregg.com/linuxperf.html) posts.

They're a great reference for Linux tools and how to use them.

They are also a great guide to deductive reasoning when spelunking through a system to find issues.


As lobster_johnson stated, Prometheus storage retention is a command line launch option.

Prometheus is intentionally not designed as a long-term cold-storage option for metrics. You _can_ store metrics for as long as your storage allows, but Prometheus will not replicate or manage that data to prevent long-term degradation. Depending on your long-term needs, the preferred pattern is to roll data off Prometheus into a metrics store that better handles data over the scope of months. Rolling data off of Prometheus is done with an exporter and documented [here](https://prometheus.io/docs/operating/integrations/#remote-en...).

In most operational use cases I've come across, we've only needed about a month of data, so keeping it in Prometheus was kosher. YMMV.


What if you want to keep time-series data for accounting purposes forever?

Right now I use graphite but would love something that also handles replication/redundancy with a good query language & enough performance to also use it to fetch the underlying data used to render front-end graphs for users.


The answer to these kinds of questions is very, very dependent on requirements and available resources. And there are rarely easy, out-of-the-box answers.

The big questions to ask are:

- what data is of interest?
- what are the expected ingest patterns (how is data getting into this system? How frequently? What rate of ingest is expected?)
- what are the expected query patterns (who is doing queries? What do those queries look like specifically? Are people querying over unbounded time ranges or doing more focused queries? Do queries regularly involve aggregations or not?)
- what is the required SLA (i.e. how much partial downtime is okay? How much full downtime is okay? What kind of query response time do you need to target?)
- what resources are/will be available (money, manpower, compute, storage, tech on hand)?

It's possible that a time series data store might not be the correct system choice once these questions are answered. It's not unusual to see data split or copied into multiple systems to answer all requirements.


The most important consideration is scale. A low-scale system will not require an optimal solution, merely a correct one.

So I would say: the system ingests about 1 million metrics every 5 minutes. This is spread out over half a dozen locations, each of which currently records into its own silo.

An aggregate metric is calculated with a 5-minute lag, which reads all the just-written data points and aggregates them into sum totals that are themselves stored and cached in one place. This is another million metrics, basically stored separately from the rest.

But it doesn't really change the character of the system. In the end I'm trying to:

- write batches of mostly numeric data
- query over time against those numbers
- aggregate data over different blocks of time: 5 minute, hourly, daily, monthly
- store it efficiently
- ensure redundancy and integrity

Seems like a simple and common enough problem to have been reasonably "solved" for orders of magnitude higher scale than I'm operating at.


From what I've seen, each solution has been pretty specific to the situation. I've rarely seen one tech stack prevail at low to reasonable scale.

At reasonable scale, I've seen people get really far with pure Graphite setups by utilizing tools like [carbon-c-relay](https://github.com/grobian/carbon-c-relay), [carbonate](https://github.com/graphite-project/carbonate), and high-integrity filesystems like ZFS underneath. It's a very hands-on operation though. Things like growing the cluster are hard to do without downtime.

If constant growth and uptime are concerns, something like OpenTSDB might be a great choice. The complexity of setting up a Hadoop + HBase cluster is a pretty big upfront cost, but man is this thing the cockroach of time series data stores. Adding storage is just growing the HBase cluster. Querying across years of data is pretty simple and straightforward. For the complexity involved, OpenTSDB is worth it.


Feel free to reach out to me if you want to talk through it. alex@laties.info


> What if you want to keep time-series data for accounting purposes forever?

Prometheus is not recommended for use cases where 100% accuracy is required; see https://prometheus.io/docs/introduction/overview/#when-does-...?

I'd also be wary of using Graphite for such a use case. For billing, a more traditional database is probably best.


We store 18-24 months of data in OpenTSDB with good performance and compression, but it can be a pain to manage if you don't have experience working with HBase. InfluxDB has commercial cluster support, and there are quite a few other options. See https://blog.outlyer.com/top10-open-source-time-series-datab... for a good comparison list.


Depends on the compiler.

'a++' (postfix) and '++a' (prefix) have slightly different meanings in terms of return values. Postfix will return the value of 'a' before the increment, while prefix will return 'a' after the increment operation.

In a simple compiler, the postfix form stores the previous value of 'a' in case it needs to be assigned somewhere. This results in a superfluous store instruction in most cases.

Of course, a smarter compiler could see the lack of assignment and throw away the store instruction during an optimization pass.


I’ve used crappy compilers where there’s essentially no optimisation, and moderately complex expressions produce absurd code, so if you want reasonable output, you end up writing the assembly you want in the notation of the higher-level language, like:

    /* a = (b * 2) + 1 */

    a = b;    /* mov a, b */
    a <<= 1;  /* shl a, 1 */
    a |= 1;   /* or a, 1 */


TI used to have an assembler that was exactly like that; I don't recall for which processor though.


Hacker Olympics! The organizer for it moved to San Diego and the event moved with him sadly :(


Yup! That was it!!


Would you mind explaining or linking to a good document on "C++98 style runtime polymorphism"?


You can do a Google search for this information.


How do I test this stuff out? I'd like to try this out, but I see no download links or documentation anywhere on the linked page... Am I missing something obvious?


https://developer.novelsphere.jp — the documentation is here, but the English edition is still in preparation, sorry.

