Hacker News: datacynic's comments

I like this Tufte quote from https://www.edwardtufte.com/notebook/book-design-advice-and-...:

It is also notable that the Feynman lectures (3 volumes) write about all of physics in 1800 pages, using only 2 levels of hierarchical headings: chapters and A-level heads in the text. It also uses the methodology of sentences which then cumulate sequentially into paragraphs, rather than the grunts of bullet points. Undergraduate Caltech physics is very complicated material, but it didn’t require an elaborate hierarchy to organize.

I think about it a lot when reading bullet-point-heavy, markdown-feature-driven writing, or when catching myself producing it.


Writing documentation for LLMs is strangely pleasing because you get very linear returns on every bit of effort you spend improving its quality, and the feedback loop is very tight. When writing for humans, especially internal documentation, I’ve found that these returns diminish quickly or even turn negative: it’s difficult to know whether people read it at all, whether they understood it, or whether it was incomplete.


DuckLake is more comparable to Iceberg and Delta than to raw Parquet files. Iceberg requires a catalog layer too, a file-system-based one at its simplest. For DuckLake, any RDBMS will do, including file-based ones like DuckDB and SQLite. The difference is that DuckLake uses that database, with all its ACID goodness, for all metadata operations, so there is no need to implement transactional semantics over a REST or object storage API.
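To make the point concrete, here is a toy sketch (emphatically not DuckLake's actual schema; the `snapshot` and `data_file` tables and the `commit_snapshot` helper are made up for illustration) of why keeping lakehouse metadata in an RDBMS is simpler: a new snapshot and its list of data files commit in one ACID transaction, with no compare-and-swap protocol over object storage.

```python
import sqlite3

def commit_snapshot(db_path, table, added_files):
    # Toy model: one snapshot row plus the data files it contains,
    # written in a single ACID transaction. Either the whole new
    # table version becomes visible, or none of it does.
    con = sqlite3.connect(db_path)
    con.execute("CREATE TABLE IF NOT EXISTS snapshot (id INTEGER PRIMARY KEY, tbl TEXT)")
    con.execute("CREATE TABLE IF NOT EXISTS data_file (snapshot_id INTEGER, path TEXT)")
    with con:  # the context manager commits all inserts atomically
        cur = con.execute("INSERT INTO snapshot (tbl) VALUES (?)", (table,))
        snapshot_id = cur.lastrowid
        con.executemany(
            "INSERT INTO data_file (snapshot_id, path) VALUES (?, ?)",
            [(snapshot_id, p) for p in added_files],
        )
    con.close()
    return snapshot_id
```

Readers then see a consistent table version by joining `data_file` on the latest `snapshot` id, which is exactly the kind of bookkeeping that is awkward to do transactionally against a plain object store.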


https://www.sirlin.net/articles/designing-defensively-guilty...

This is probably it. As the article says, it’s also a cleverly layered mechanic if a player correctly predicts when their opponent will use it.


Yup, that’s the one. Idk why I always forget about Sirlin, but it’s good stuff.


I was also nerd-sniped into trying this and found that, after extracting the features array into a newline-delimited JSON file, DuckDB finishes the example query in 500 ms (M1 Mac), querying the 1.3 GB JSON file directly with read_json!
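For anyone reproducing the preprocessing step, a minimal sketch of the extraction, assuming the source is a single JSON document with a top-level "features" array that fits in memory (file names are placeholders):

```python
import json

def extract_features_ndjson(src_path, dst_path):
    # Load the whole document, then write each element of the
    # top-level "features" array as one JSON object per line
    # (newline-delimited JSON), which DuckDB can scan directly.
    with open(src_path) as src:
        doc = json.load(src)
    with open(dst_path, "w") as dst:
        for feature in doc["features"]:
            dst.write(json.dumps(feature) + "\n")
```

DuckDB can then query the output with something like `SELECT ... FROM read_json('features.ndjson')`, since read_json treats each line as a row.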

