Hacker News | ZeljkoS's comments

OCaml got experimental algebraic effects in version 5.0 (December 2022). Here is a tutorial: https://github.com/tanelso2/ocaml-effects-tutorial

Also from the tutorial: "Unlike Eff, Koka, Links, and other languages that support effect handlers, effects in Multicore OCaml are unchecked currently. A program that does not handle a performed effect fails with a runtime error."
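What "unchecked" means is easy to see in a small OCaml 5 sketch (my own, not from the tutorial; the effect name `Ask` is made up). If no handler for `Ask` is installed, `perform Ask` raises `Effect.Unhandled` at runtime; the type system does not force you to handle it:

```ocaml
open Effect
open Effect.Deep

(* Declare a new effect: performing Ask yields an int. *)
type _ Effect.t += Ask : int Effect.t

(* try_with installs a handler around the thunk. Without it,
   `perform Ask` would fail with the runtime error Effect.Unhandled. *)
let result =
  try_with
    (fun () -> 1 + perform Ask)
    ()
    { effc = fun (type a) (eff : a Effect.t) ->
        match eff with
        | Ask -> Some (fun (k : (a, _) continuation) -> continue k 41)
        | _ -> None }

let () = Printf.printf "%d\n" result  (* prints 42 *)
```

Note the contrast with Koka or Eff, where a missing handler would be a type error rather than a runtime failure.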


I upstreamed all of my changes, so really we should be linking to https://github.com/ocaml-multicore/ocaml-effects-tutorial instead of my repo.




Tim Harford argues he was in fact a comedic genius:

https://www.pushkin.fm/podcasts/cautionary-tales/genius-stil...


You can actually find small Android phones via the excellent GSMArena phone finder: https://www.gsmarena.com/search.php3?nYearMin=2023&fDisplayI...

A quick search on display size alone finds these 10 phones released after 2023: https://www.gsmarena.com/results.php3?nYearMin=2023&fDisplay...


My very lax criteria yield only 4 phones released since 2023:

* A phone, not a watch

* Android 14 or later OS

* Thickness: 9mm max

* Height: 150mm max

* Width: 71mm max

and three of them are overpriced Samsung Galaxy S phones. Only 7 have been released since 2020:

https://www.gsmarena.com/search.php3?nYearMin=2020&nHeightMa...

and they are Samsung Galaxy S models, a couple of Asus ZenFones, and the Google Pixel 5.

If you're willing to add another 5mm, there are also a couple of Sony Xperias, a Sharp Aquos, and the Google Pixel 8. And if you cap the height at 145mm, it's just the Google Pixel 5.


The Flix FAQ (https://flix.dev/faq/) starts out normal, but becomes increasingly hilarious toward the end :D

Some gems:

---

Q: Wait, division by zero is zero, really?

A: Yes. But focusing on this is a bit like focusing on the color of the seats in a spacecraft.

---

Q: "This site requires JavaScript"

A: People who have criticized the website for using JavaScript: [1], [2], [3], [4], [5].

People who have offered to help refactor the site to use static html: 0.

---

Q: I was disappointed to learn that Flix has feature X instead of my favorite feature Y.

A: We are deeply sorry to have let you down.

---

Q: This is – by far – the worst syntax I have ever seen in a functional language. Semicolons, braces, symbolic soup, et al. It is like if Scala, Java and Haskell had a one night stand in the center of Chernobyl.

A: Quite an achievement, wouldn't you say?



It is interesting that the HN community discussed why it wouldn't work all the way back in 2010: https://news.ycombinator.com/item?id=1701724


Interesting to note that (according to the author, DHH) this article was removed by LinkedIn: https://www.linkedin.com/posts/david-heinemeier-hansson-374b...


We have a partial understanding of why distillation works: it is explained by the Lottery Ticket Hypothesis (https://arxiv.org/abs/1803.03635). But if I am understanding correctly, that doesn't mean you can train a smaller network from scratch. You need a lot of randomness in the initial large network for some neurons to end up in "winning" states. Then you can distill those winning subnetworks into a smaller network.

Note that a similar process happens in the human brain; it is called synaptic pruning (https://en.wikipedia.org/wiki/Synaptic_pruning). Relevant quote from Wikipedia (https://en.wikipedia.org/wiki/Neuron#Connectivity): "It has been estimated that the brain of a three-year-old child has about 10^15 synapses (1 quadrillion). This number declines with age, stabilizing by adulthood. Estimates vary for an adult, ranging from 10^14 to 5x10^14 synapses (100 to 500 trillion)."
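The pruning step the Lottery Ticket paper describes (train, keep the largest-magnitude weights, rewind the survivors to their initial values) can be sketched with toy numpy arrays; the sizes, scales, and the fake "training" step here are all made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical initial weights and a fake "trained" version of them.
w_init = rng.normal(size=(4, 4))
w_trained = w_init + rng.normal(scale=0.5, size=(4, 4))

# Keep the 50% of weights with the largest trained magnitude...
k = w_trained.size // 2
threshold = np.sort(np.abs(w_trained).ravel())[::-1][k - 1]
mask = (np.abs(w_trained) >= threshold).astype(float)

# ...and rewind the survivors to their ORIGINAL initialization.
# This masked, rewound network is the "winning ticket".
winning_ticket = mask * w_init

print(int(mask.sum()))  # 8 of 16 weights survive
```

The key point this illustrates: the winning ticket keeps the original random initialization of the surviving weights, which is why a fresh small network (with fresh random init) is not equivalent.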


So, can a distilled 8B model (say, DeepSeek-R1-Distill-Llama-8B or whatever) be "trained up" to a larger 16B-parameter model after distillation from a superior model, or is it forever stuck at 8B parameters that can only be fine-tuned?


So more "mature" models might arise in the near future with fewer parameters and better benchmarks?


That's been happening consistently for over a year now. Small models today are better than big models from a year or two ago.


"Better", but not better than the model they were distilled from, at least that's how I understand it.


I think this is how the "child brain" works too. The better the parents and the environment are, the better the child develops :)


Not at all: how many people were geniuses whose parents were not? I can name several, and I'm sure with a quick search you can too.


How is that relevant? A few examples do not disprove anything. It's pretty common knowledge that the more successful/rich etc. your parents were, the more likely you'll be successful/rich etc.

This does not directly prove the theory your parent comment posits, namely that better circumstances during a child's development improve the development of that child's brain. That would require success to be a good predictor of brain development, which I'm somewhat uncertain about.


They might also be more biased and less able to adapt to new technology. Interesting times.

