"Moving preconditions up" means moving the code that checks the precondition up. The precondition still needs to be documented (in the type system is ideal, with an assertion otherwise, in a comment if necessary) close to where it's assumed.
Does it matter? MM-DD-YYYY is used in the US, which makes DD-MM-YYYY ambiguous, but as far as I know nobody uses YYYY-DD-MM, so ISO 8601 should be perfectly fine, especially if users are trained. Besides, if you're not used to it, starting with the year forces you to think, which is desirable if you want to avoid human error.
I couldn’t have named the standard and never read it before today, but I’ve used YYYY-MM-DD for naming my own folders & files for a couple of decades, for the simple reason that it sorts correctly in chronological order.
You do it, I do it, probably many programmers and other systematically thinking people do it. But how big a share of the general public are we? Even many programs don't use it in their default file names when saving. I find myself renaming often enough.
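The sorting property mentioned above is easy to demonstrate: with ISO 8601 dates, plain lexicographic string comparison already gives chronological order, whereas MM-DD-YYYY dates interleave the years. A small illustration (the sample dates are made up):

```ocaml
(* ISO 8601 (YYYY-MM-DD) dates sort chronologically as plain strings;
   MM-DD-YYYY dates do not, because the year comes last. *)
let iso = ["2023-12-01"; "2024-01-15"; "2023-02-28"]
let us  = ["12-01-2023"; "01-15-2024"; "02-28-2023"]

let () =
  (* Lexicographic sort of ISO dates = chronological order. *)
  assert (List.sort compare iso
          = ["2023-02-28"; "2023-12-01"; "2024-01-15"]);
  (* The same sort on MM-DD-YYYY puts January 2024 first. *)
  assert (List.sort compare us
          = ["01-15-2024"; "02-28-2023"; "12-01-2023"]);
  print_endline "ok"
```

This is exactly why the format works for file and folder names without any date-aware tooling.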
I don't think so really. It might help lure people over from haskell or rust or whatever but ocaml has its own solutions to the problems solved by ad hoc polymorphism. Usually in the module system, sometimes a ppx.
There's nothing comparable to ocaml's module system that I know of in any other mature language, so no one really arrives in ocaml with well-developed intuition for what sorts of things are possible and worth doing with it. But once you've put the time in, it's usually the correct answer to "how do I do <haskell type system thing> in ocaml."
I'm not a type systems expert though and it's likely there are some novel & unique situations where it's the best or only solution. Just maybe not as much as people assume from outside. Anyway I think there is a proposal for it so it'll probably work its way in eventually.
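To make the "module system instead of typeclasses" point concrete, here's a toy sketch (module and function names are my own): where Haskell would write a constraint like `Ord a`, OCaml typically passes a module satisfying a signature to a functor.

```ocaml
(* A signature playing the role of Haskell's Ord constraint. *)
module type ORDERED = sig
  type t
  val compare : t -> t -> int
end

(* A functor: code parameterised over any ordered type. *)
module MakeMax (O : ORDERED) = struct
  let max_of = function
    | [] -> None
    | x :: xs ->
      Some (List.fold_left
              (fun a b -> if O.compare a b >= 0 then a else b)
              x xs)
end

(* Instantiate explicitly, once per type. *)
module IntMax = MakeMax (struct type t = int let compare = compare end)
module StrMax = MakeMax (String)  (* String already matches ORDERED *)

let () =
  assert (IntMax.max_of [3; 1; 4; 1; 5] = Some 5);
  assert (StrMax.max_of ["pear"; "apple"; "quince"] = Some "quince");
  print_endline "ok"
```

The trade-off is exactly the one discussed in this thread: instantiation is explicit rather than resolved implicitly by the compiler, which is what modular implicits aim to smooth over.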
Haskell's "mixin modules", a.k.a. Backpack, are I think the closest thing you're going to get outside of SML (they're a different trade-off in the design space, it's not 1-to-1), but they have languished in the broader community due to a couple of social factors. They are excellent for many of the same things you use modules for, though, and work just fine in my experience.
On the same note, the lack of some kind of module system along these lines (broadly, anything defined in terms of "holes" in a compilation unit which are "plugged up" later at instantiation time) is one of my biggest complaints about Rust. The extensive use of traits in the broader ecosystem results in this phenomenon (identical to Haskell) where global coherence means your dependency tree gets tied up with the location of types and their trait impls. In other words, you cannot abstract over your dependencies in a truly modular way.
Global coherence requirements are fine (preferable even!) for many things but if it's the only hammer you have, well, you know the rest of the metaphor.
> Anyway I think there is a proposal for it so it'll probably work its way in eventually.
Modular implicits have been shelved indefinitely from what I understand, if that's what you're referring to.
> Modular implicits have been shelved indefinitely from what I understand, if that's what you're referring to.
They have not — people are still working on it. There's been a paper at the OCaml Workshop at ICFP this year, and a PR on the OCaml repo. The student who worked on this is now starting a PhD on modular implicits in the lab that created OCaml.
This looks interesting, it's great to see more machine learning efforts in typed languages.
I'm a bit surprised to see no mention of Owl (https://github.com/owlbarn/owl an older project for scientific computing in OCaml that was resurrected recently), I wonder how they compare.
The Raven README mentions:
> We prioritize developer experience and seamless integration.
so maybe that's one difference — I used Owl on a course project about a decade ago, and while it got the job done, I remember the experience being rather painful compared to Numpy (even though I was more experienced with OCaml than with Python at the time).
Wasn't there something about Owl being "concluded" about a year ago because the 2 developers no longer had time for the project? Is Raven the successor to Owl?
> Afraid to close the page because they wont find it in their history or bookmarks? Is this more an issue with bookmarks and history not being as useful as they could be?
I think tabs are just the better user interface.
It's not that I'm afraid I won't find the page in my history and bookmarks, it's that I don't want to have to, because it's painful. History is full of irrelevant pages. Bookmarks make me lose my flow, constantly wondering whether a page is worth bookmarking (and in which directory!).
Tabs have a very simple workflow with low cognitive overhead. Everything is preserved by default (middle click/ctrl click is my default click in a browser), unless I'm clearly in a linear workflow where I don't want to keep the page (left click). Self-organizing due to the way they open, but very easy to manually reorder (or close) if needed. Kept in memory so going back to a (recent) tab is instantaneous.
They just... get out of the way and let me work. Tabs make browsing feel like one continuous task, where history/bookmarks feel like constant interruptions.
Code in this "named-pipeline" style is already self-documenting: using the same variable name makes it clear that we are dealing with a pipeline/chain. Using more descriptive names for the intermediate steps hides this, making each line more readable (and even then you're likely to end up with `dataStripped = data.map(&:strip)`) at the cost of making the block as a whole less readable.
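The original example is Ruby, but the same contrast shows up anywhere. As a hedged illustration in OCaml (my own toy data), reusing one name per step versus making the chain explicit with the pipe operator:

```ocaml
(* "Named-pipeline" style: the repeated name signals a chain of steps. *)
let clean lines =
  let data = List.map String.trim lines in
  let data = List.filter (fun s -> s <> "") data in
  let data = List.map String.lowercase_ascii data in
  data

(* The same chain with |>, where the pipeline structure is syntactic. *)
let clean' lines =
  lines
  |> List.map String.trim
  |> List.filter (fun s -> s <> "")
  |> List.map String.lowercase_ascii

let () =
  let input = ["  Foo "; ""; " BAR"] in
  assert (clean input = clean' input);
  assert (clean input = ["foo"; "bar"]);
  print_endline "ok"
```

In both versions, inventing a distinct descriptive name for each intermediate result would obscure that the block is one pipeline.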
Those are equivalent, I think. If you can replace an expression by its value, any two expressions with the same value are indistinguishable (and conversely a value is an expression which is its own value).
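A minimal illustration of that equivalence, assuming pure code (the function here is my own toy example):

```ocaml
(* In pure code, an expression can be replaced by its value, and hence
   by any other expression with the same value, without changing the
   program's result. *)
let f x = x * x

let a = f (2 + 3)   (* an expression as the argument *)
let b = f 5         (* its value substituted in its place *)

let () =
  assert (a = b);
  print_endline "ok"
```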
Machine learning is not magic -- no algorithm, machine learning or otherwise, can process an arbitrary-length sequence in constant time.
The actual problem is also more complex than fixed word widths due to hyphenation and justification - from what I recall, Knuth's paper (IIRC there are two, and the second one is the one to read) on TeX's layout gives a good overview of the problem and an algorithm that's still state of the art. I think the Typst people might have a blog post about their modern take on it, but I'm not sure.
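For a sense of the core idea (heavily simplified; this is not the full algorithm from the paper): choose breakpoints by dynamic programming, minimising a badness measure such as the sum of squared slack per line. Real TeX also handles hyphenation, stretchable glue, and penalties, none of which this toy version does.

```ocaml
(* Toy DP line breaking: minimise total squared slack. Assumes every
   word fits on a line by itself and a single space between words. *)
let break_lines width words =
  let n = Array.length words in
  (* Cost of placing words i..j-1 on one line. *)
  let line_cost i j =
    let len = ref (j - i - 1) (* inter-word spaces *) in
    for k = i to j - 1 do len := !len + String.length words.(k) done;
    if !len > width then infinity
    else let slack = float_of_int (width - !len) in slack *. slack
  in
  let best = Array.make (n + 1) infinity in
  let prev = Array.make (n + 1) 0 in
  best.(0) <- 0.0;
  for j = 1 to n do
    for i = 0 to j - 1 do
      let c = best.(i) +. line_cost i j in
      if c < best.(j) then (best.(j) <- c; prev.(j) <- i)
    done
  done;
  (* Walk the breakpoints backwards to reconstruct the lines. *)
  let rec collect j acc =
    if j = 0 then acc
    else
      let i = prev.(j) in
      let line =
        String.concat " " (Array.to_list (Array.sub words i (j - i))) in
      collect i (line :: acc)
  in
  collect n []

let () =
  break_lines 12 [| "the"; "quick"; "brown"; "fox"; "jumps" |]
  |> List.iter print_endline
```

A greedy first-fit breaker optimises each line in isolation; the DP (like TeX) trades a locally tighter line for a better paragraph overall.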