Hacker News | LodeOfCode's favorites

Nice article, which also explains the mapping of the puzzle to an exact cover problem and how those can be solved with dancing links as in Knuth's Algorithm X.
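Algorithm X itself is compact enough to sketch. Below is the well-known dict-of-sets formulation in Python rather than Knuth's pointer-based dancing links structure; the example instance is the one from his paper:

```python
def solve(X, Y, solution=None):
    """Algorithm X: X maps each element to the set of rows covering it,
    Y maps each row name to the list of elements it covers."""
    if solution is None:
        solution = []
    if not X:                                 # every element covered exactly once
        yield list(solution)
        return
    c = min(X, key=lambda e: len(X[e]))       # element with fewest candidate rows
    for r in list(X[c]):
        solution.append(r)
        removed = select(X, Y, r)
        yield from solve(X, Y, solution)
        deselect(X, Y, r, removed)            # backtrack: restore columns
        solution.pop()

def select(X, Y, r):
    cols = []
    for e in Y[r]:
        for r2 in X[e]:
            for e2 in Y[r2]:
                if e2 != e:
                    X[e2].remove(r2)          # row r2 conflicts with chosen row r
        cols.append(X.pop(e))                 # element e is now covered
    return cols

def deselect(X, Y, r, cols):
    for e in reversed(Y[r]):
        X[e] = cols.pop()
        for r2 in X[e]:
            for e2 in Y[r2]:
                if e2 != e:
                    X[e2].add(r2)

# Knuth's example: rows B, D, F form the unique exact cover of {1..7}.
Y = {"A": [1, 4, 7], "B": [1, 4], "C": [4, 5, 7],
     "D": [3, 5, 6], "E": [2, 3, 6, 7], "F": [2, 7]}
X = {e: {r for r in Y if e in Y[r]} for e in range(1, 8)}
print(next(solve(X, Y)))  # ['B', 'D', 'F']
```

Dancing links gets the same effect with O(1) unlink/relink operations on a doubly-linked sparse matrix; the set version above trades that constant factor for readability.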

Wilson's algorithm is based on so-called loop-erased random walks [1].

[1] https://en.wikipedia.org/wiki/Loop-erased_random_walk
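Wilson's algorithm is short enough to sketch. Here is an illustrative Python version that samples a uniform spanning tree of a small grid graph; the graph construction and names are mine, purely for the example:

```python
import random

def wilson_spanning_tree(vertices, neighbors, seed=0):
    """Sample a uniform spanning tree via Wilson's algorithm: repeated
    loop-erased random walks from unvisited vertices to the growing tree."""
    rng = random.Random(seed)
    vertices = list(vertices)
    in_tree = {vertices[0]}            # root the tree at an arbitrary vertex
    tree_edges = []
    for v in vertices[1:]:
        if v in in_tree:
            continue
        # Random walk from v until it hits the tree, recording the step
        # taken from each vertex; revisiting a vertex overwrites its
        # successor, which performs the loop erasure implicitly.
        succ = {}
        u = v
        while u not in in_tree:
            succ[u] = rng.choice(neighbors[u])
            u = succ[u]
        # Retrace the loop-erased path and graft it onto the tree.
        u = v
        while u not in in_tree:
            tree_edges.append((u, succ[u]))
            in_tree.add(u)
            u = succ[u]
    return tree_edges

# 4x4 grid graph
n = 4
verts = [(i, j) for i in range(n) for j in range(n)]
nbrs = {(i, j): [(i + di, j + dj) for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1))
                 if 0 <= i + di < n and 0 <= j + dj < n]
        for i, j in verts}
edges = wilson_spanning_tree(verts, nbrs)
print(len(edges))  # a spanning tree of 16 vertices has 15 edges
```

The remarkable property is that the resulting tree is uniformly distributed over all spanning trees of the graph, which is why it shows up in maze generation.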


In safety-critical systems, we distinguish between accidents (actual loss, e.g. lives, equipment, etc.) and hazardous states. The equation is

hazardous state + environmental conditions = accident

Since we can only control the system, and not its environment, we focus on preventing hazardous states, rather than accidents. If we can keep the system out of all hazardous states, we also avoid accidents. (Trying to prevent accidents while not paying attention to hazardous states amounts to relying on the environment always being on our side, and is bound to fail eventually.)

One such hazardous state we have defined in aviation is "less than N minutes of fuel remaining when landing". If an aircraft lands with less than N minutes of fuel on board, it would only have taken bad environmental conditions to make it crash, rather than land. Thus we design commercial aviation so that planes always have N minutes of fuel remaining when landing. If they don't, that's a big deal: they've entered a hazardous state, and we never want to see that. (I don't remember if N is 30 or 45 or 60 but somewhere in that region.)

For another example, one of my children loves playing around cliffs and rocks. Initially he was very keen on promising me that he wouldn't fall down. I explained the difference between accidents and hazardous states to him in children's terms, and he slowly realised that he cannot control whether or not he has an accident, so it's a bad idea to promise me that he won't have an accident. What he can control is whether or not bad environmental conditions lead to an accident, and he does that by keeping out of hazardous states. In this case, the hazardous state would be standing less than a child-height from the edge of a ledge when there is nobody below ready to catch him. He can promise me to avoid that, and that satisfies me a lot more than a promise not to fall.


The Miami stock exchange (MIAX) has their matching engines colocated in Equinix's NY4 data center in Secaucus NJ, much like many other exchanges. I would not be surprised if TXSE does the same.

Many trading firms already have their trading engines in that data center and I would assume TXSE would want quick access to that order flow and this might be easier if they are in NY4.

Of course, they may want to have their colo facilities in TX in their own data center, that way they can rent out space and make some extra revenue, but then they'd have to build that out.


Though at the limit, it would have to be at least O(sqrt(n)) thanks to the Bekenstein bound [0]. And of course, as mentioned in TFA, you can always do better if you can get away with local random access in parallel, rather than global random access.

[0] https://en.wikipedia.org/wiki/Bekenstein_bound


Why don't they take the timings from the closed captions of the original Japanese broadcast?

Here is the original study published in Nature Microbiology.

https://www.nature.com/articles/s41564-025-02142-0

Wanted to share what I thought were the interesting parts, from the university press release:

"To date, AI has been leveraged as a tool for predicting which molecules might have therapeutic potential, but this study used it to describe what researchers call “mechanism of action” (MOA) — or how drugs attack disease.

MOA studies, he says, are essential for drug development. They help scientists confirm safety, optimize dosage, make modifications to improve efficacy, and sometimes even uncover entirely new drug targets. They also help regulators determine whether or not a given drug candidate is suitable for use in humans... A thorough MOA study can take up to two years and cost around $2 million; however, using AI, his group did enterololin’s in just six months and for just $60,000.

Indeed, after his lab’s discovery of the new antibiotic, Stokes connected with colleagues at MIT’s Computer Science and Artificial Intelligence Lab (CSAIL) to see if any of their emerging machine learning platforms could help fast-track his upcoming MOA studies.

In just 100 seconds, he was given a prediction: his new drug attacked a microscopic protein complex called LolCDE, which is essential to the survival of certain bacteria.

“A lot of AI use in drug discovery has been about searching chemical space, identifying new molecules that might be active,” says Regina Barzilay, a professor in MIT’s School of Engineering and the developer of DiffDock, the AI model that made the prediction. “What we’re showing here is that AI can also provide mechanistic explanations, which are critical for moving a molecule through the development pipeline.”


It is always very humbling to read such notes - the self-written farewell letters posted after passing away. There have been others on Hacker News, and they always make me think about life, about what matters to me the most and of course, what matters to others.

What struck me most about this one is that it spoke much more about the professional life than about the personal one. I would imagine that if I were ever to write one (which I won’t, cause I’ll live forever) it would be more on the side of outside-work experiences.

Life is a beautiful gift, and it’s worth remembering that every day. Do what you love, do a lot of it, be kind to others, hug your cherished people, laugh, enjoy, smile…breathe.

I love you all, and hope you’re enjoying every moment of this incredible journey through the Universe on this floating space rock.


I’ve recently read the third edition of Bjarne’s “A Tour of C++” (which is actually a good read). I feel the author of this post could benefit from doing so also.

I wonder what fraction of people today have literally no friends (perhaps this could be examined using a questionnaire and points system to determine who counts as a real friend vs mere acquaintance). The number of people with "zero" must surely be rising, especially among the young.

This is the second Economist article to mention the lethal trifecta in the past week - the first was https://www.economist.com/science-and-technology/2025/09/22/... - which was the clearest explanation I've seen anywhere in the mainstream media of what prompt injection is and why it's such a nasty threat.

(And yeah I got some quotes in it so I may be biased there, but it genuinely is the source I would send executives to in order to understand this.)

I like this new one a lot less. It talks about how LLMs are non-deterministic, which makes security holes in them harder to fix... but then argues that this puts them in the same category as bridges, where the solution is to over-engineer them and plan for tolerances and unpredictability.

While that's true for the general case of building against LLMs, I don't think it's the right answer for security flaws. If your system only falls victim to 1/100 prompt injection attacks... your system is fundamentally insecure, because an attacker will keep on trying variants of attacks until they find one that works.

The way to protect against the lethal trifecta is to cut off one of the legs! If the system doesn't have all three of access to private data, exposure to untrusted instructions and an exfiltration mechanism then the attack doesn't work.


Even if you never want to practice magic, I highly recommend buying a few Dani DaOrtiz lectures. The way his mind works is phenomenal, and the things he talks about (psychology, how people think, crafting experiences) are applicable across the board.

(I rarely perform his tricks... they're brilliant, but they're so perfectly suited for his style that I can't even come close to pulling them off without seeming like a confused idiot. But I love watching him explain what goes into each trick. This specific trick is available on Vanishing.)


One great piece of advice an informal mentor gave me long ago is that there is no information in a rejection.

That is to say that you cannot draw any conclusions about yourself or your interviewing technique or your skills or anything from the single accept==0 bit that you typically get back. There are so many reasons that a candidate might get rejected that have nothing to do with one's individual performance in the interview or application process.

Having been on the hiring side of the interview table now many more times than on the seeking side, I can say that this is totally true.

One of the biggest misconceptions I see from job seekers, especially younger ones, is to equate a job interview to a test at school, assuming that there is some objective bar and if you pass it then you must be hired. It's simply not true. Frequently more than one good applicant applies for a single open role, and the hiring team has to choose among them. In that case, you could "pass" and still not get the job and the only reason is that the hiring team liked someone else better.

I can only think of one instance where we had two great candidates for one role and management found a way to open another role so we could hire both. In a few other cases, we had people whom we liked but didn't choose and we forwarded their resumes to other teams who had open roles we thought would fit, but most of the time it's just, "sorry."


For anyone wanting to go deeper, Knuth's Concrete Mathematics covers the discrete calculus topics mentioned here (and much more).
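For a taste of what "discrete calculus" means here, a quick sketch of the forward-difference operator and its falling-factorial power rule, which Concrete Mathematics develops at length (the Python names are mine, not from the book):

```python
def delta(f):
    """Forward difference operator: (delta f)(x) = f(x+1) - f(x),
    the discrete analog of the derivative."""
    return lambda x: f(x + 1) - f(x)

def falling(x, m):
    """Falling factorial x(x-1)...(x-m+1), the discrete analog of x**m."""
    r = 1
    for k in range(m):
        r *= x - k
    return r

# Discrete power rule: delta of the falling power x^(3) is 3 * x^(2),
# mirroring d/dx x^3 = 3x^2.
f = lambda x: falling(x, 3)
assert delta(f)(5) == 3 * falling(5, 2)

# Discrete fundamental theorem: summing delta f telescopes,
# just as integrating a derivative recovers the function.
g = lambda x: x * x
assert sum(delta(g)(k) for k in range(2, 7)) == g(7) - g(2)
print("discrete calculus identities hold")
```

This correspondence (differences for derivatives, sums for integrals, falling powers for ordinary powers) is the machinery behind closed forms for sums like sum of k^2.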

This is why it’s important for us to develop C4 alternatives to existing C3-using food staples.

https://c4rice.com/the-science/engineering-photosynthesis-wh...


Even better solutions which are interesting to visualize were proved optimal in 2007.

https://chris-lamb.co.uk/posts/optimal-solution-for-the-bloc...


I'm surprised i2p torrents are still not popular enough to be offered as an option by sites like this.

I'd assume there are many people who don't help out purely because of legal fears, something i2p could help with.


> So why are we immoral then?

Mostly because it's easier than the alternative.

More seriously, you've just described the "moral" of the story of Adam and Eve. In that view of the world our problem is that we understand morality (thanks to that apple-eating strumpet) and therefore can make choices. Animals, infants, and the simple-minded have no such concept and therefore can't be held responsible for violating it.

This is, of course, all nonsense, and it implies a sort of paternalistic universe, but it was always going to be that way. Everything we know is defined by its relationship to something else we know, and we don't know God. We know Dad.

Fundamentally it's a way to make God seem like less of a child-abusing jerk. A sky-daddy who "punishes" those who understand morality, but grants slack to the cutie-patootie babies and puppies is just nicer for people to believe in.


David Kipping of Cool Worlds Lab just uploaded a video about TARS:

https://www.youtube.com/watch?v=MDM1COWJ2Hc


The main problem with the “Bitter Lesson” is that there’s something even bitter-er behind it — the “Harsh Reality”: while we may scale models on compute and data, simply inserting tons of data without any sort of curation yields essentially garbage models.

The “Harsh Reality” is that while you may only need data, the current best models and companies behind them spend enormously on gathering high quality labeled data with extensive oversight and curation. This curation is of course being partially automated as well, but ultimately there’s billions or even tens of billions of dollars flowing into gathering, reviewing, and processing subjectively high quality data.

Interestingly, at the time this paper was published, the harsh reality was not so harsh. For example, in things like face detection, (actual) next-word prediction, and other purely self-supervised models (not instruction-tuned or “Chat”-style models), data was truly all you needed. You didn’t need “good” faces. As long as it was indeed a face, the data itself was enough. Now, it’s not. In order to make these machines useful and not just function approximators, we need extremely large dataset-curation industries.

If you learned the bitter lesson, you better accept the harsh reality, too.


For whatever reason, evolution decided those wavelengths should be overlapping. For example, M cones are most sensitive to 535 nm light, while L cones are most sensitive to 560 nm light. But M cones are still stimulated quite a lot by 560 nm light—around 80% of maximum.

The reason is simple: genes coding the long wave opsins (light-sensitive proteins) in these cones have diverged from copies of the same original gene. The evolution of this is very interesting.

Mammals in general have only two types of cones: presumably they lost full color vision in the age of dinosaurs since they were primarily small nocturnal animals or lived in habitats with very limited light (subterranean, piles of leaves, etc.) Primates are the notable exception, and have evolved the third type of cone, enabling trichromatic color vision, as a result of their fruitarian specialization and co-evolution with the tropical fruit trees (same as birds, actually).

So, what's interesting is that New World and Old World primates evolved this cone independently. In Old World primates the third cone resulted from a gene duplication event on the X chromosome, giving rise to two distinct (but pretty similar) opsin genes, with sensitivity peaks at very close wavelengths. As a note, because these genes sit on the X chromosome, colorblindness (defects in one or both of these genes) is much more likely to happen in males.

New World primates have a single polymorphic opsin gene on the X chromosome, with different alleles coding for different sensitivities. So, only some (heterozygous) females in these species typically have full trichromatic vision, while males and the unlucky homozygous females remain dichromatic.

Decent wikipedia article on the subject: https://en.wikipedia.org/wiki/Evolution_of_color_vision_in_p...

Types of opsins in vertebrates: https://en.wikipedia.org/wiki/Vertebrate_visual_opsin


A Strange Loop talk by Mouse Reeve, years ago, looked at the Markovian structure of the "Gnossiennes" and then built an endless version. A beautiful talk and a really cool music website.

Music website: https://gnossiennes.mousereeve.com/ (slightly better on Desktop).

Talk: https://youtu.be/ANYMii3Sypg

Abstract: https://www.thestrangeloop.com/2019/minimalist-piano-forever...
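The Markov-chain idea behind an "endless" piece is small enough to sketch: learn which note follows each state in the score, then walk the chain forever. A toy Python version (the melody and names are illustrative, not taken from the talk):

```python
import random
from collections import defaultdict

def train(sequence, order=1):
    """Map each state (tuple of `order` preceding notes) to the notes
    that followed it anywhere in the training sequence."""
    model = defaultdict(list)
    for i in range(len(sequence) - order):
        model[tuple(sequence[i:i + order])].append(sequence[i + order])
    return model

def generate(model, start, length, seed=0):
    """Walk the chain to produce an 'endless' stream of notes."""
    rng = random.Random(seed)
    out = list(start)
    state = tuple(start)
    while len(out) < length:
        nxt = rng.choice(model[state])
        out.append(nxt)
        state = state[1:] + (nxt,)
    return out

# "Ode to Joy" opening as a toy training melody; wrapping the first
# note onto the end closes the chain so the walk can never dead-end.
melody = list("EEFGGFEDCCDEEDD")
model = train(melody + melody[:1], order=1)
print("".join(generate(model, ["E"], 20)))
```

A higher `order` makes the output hew closer to the original phrases; order 1 wanders more, which suits Satie's ambiguous, bar-line-free style.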


Patrick's "salary negotiation" piece has made the rounds here a number of times, and I always have the same reaction to it: It probably works, if you are already a software celebrity like Patrick, or some other highly sought-after Captain of the software engineering industry. For us rank-and-file employee number 12887's, I just don't see it working--I've never negotiated compensation with any significant success in my 25+ year career. Here's how salary negotiation usually goes if you're not someone like John Carmack:

Them: We'd love to have you join the team! Here's your compensation letter, please get back to us within the next 2 weeks and we'll move forward.

You: Thank you, I believe my background and skills would suggest a compensation of $1.5X instead of $X. Is this possible with your hiring budget?

Them: The offer is for $X and we believe it is appropriate for your level.

You: Maybe my level should be higher then?

Them: Ha Ha

You: Looking at [website A, B, C] I see the average compensation for this role is $1.25X. Surely there is room to move this upward.

Them: We don't agree with that data. The offer is for $X. Shall we move forward?

You: What about equity or bonus, is there any flexibility with those numbers, or vacation time, or...?

Them: The offer is for $X. If you don't want the job, we can move on to one of our other 20 candidates who are in the pipeline for the role.

That's basically how salary negotiations went for almost the entirety of my career. If your results have been different, I'm kind of envious!


A larger rocket mitigates the effects of the rocket equation.

The wet (loaded with propellant) to dry (empty of propellant) mass ratio is determined via the rocket equation to be the exponential of delta V divided by exhaust velocity.

Certain parts of the rocket, such as the external tank structure, scale sub-cubically with the rocket's dimension, as do aerodynamic forces; whereas payload and propellant mass scale cubically.

Hence if the rocket is smaller than a critical threshold size, the requisite vehicle structures are too large relative to its propellant capacity to permit the required wet:dry mass ratio to achieve the delta V for orbit.

At exactly this size, the rocket can reach orbit with zero payload.

As the rocket increases in size beyond this threshold, it is able to carry a payload which is increasingly large relative to the rocket's total mass.
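The mass-ratio relationship above can be checked numerically; the delta-v and exhaust-velocity figures below are illustrative round numbers of my choosing, not from the comment:

```python
import math

def wet_to_dry_ratio(delta_v, v_exhaust):
    """Tsiolkovsky rocket equation solved for the mass ratio:
    m_wet / m_dry = exp(delta_v / v_exhaust)."""
    return math.exp(delta_v / v_exhaust)

# Roughly 9.4 km/s to reach low Earth orbit including gravity and drag
# losses, with ~3.5 km/s effective exhaust velocity (kerolox-class).
ratio = wet_to_dry_ratio(9400, 3500)
print(round(ratio, 1))  # ~14.7: about 93% of liftoff mass is propellant
```

With a required wet:dry ratio near 15, only a few percent of liftoff mass is left for structure plus payload, which is why sub-cubic structural scaling decides whether any payload fits at all.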


I am yet to find a better introduction than Busby and Kolman's "Introductory Discrete Structures with Applications".

Beautifully written, concise, very accessible with the precise right amount of formalism.

http://books.google.com/books/about/Introductory_Discrete_St...


The article didn't nail down an exact reason. Here is my guess. The quote from Andy Hertzfeld suggests the limiting factor was the memory bandwidth not the memory volume:

> The most important decision was admitting that the software would never fit into 64K of memory and going with a full 16-bit memory bus, requiring 16 RAM chips instead of 8. The extra memory bandwidth allowed him to double the display resolution, going to dimensions of 512 by 342 instead of 384 by 256

If you look at the specs for the machine, you see that during an active scan line, the video is using exactly half of the available memory bandwidth, with the CPU able to use the other half (during horizontal and vertical blanking periods the CPU can use the entire memory bandwidth)[1]. That dictated the scanline duration.

If the computer had any more scan lines, something would have had to give, as every nanosecond was already accounted for[2]. The refresh rate would have had to be lower, or the blanking periods would have had to be shorter, or the memory bandwidth would have had to be higher, or the memory bandwidth would have had to be divided unevenly between the CPU and video, which was probably harder to implement. I don't know which of those things they would have been able to adjust and which were hard requirements of the hardware they could find, but I'm guessing that they couldn't do 384 scan lines given the memory bandwidth of the RAM chips, and the blanking times of the CRT they selected, if they wanted to hit 60Hz.
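A quick back-of-envelope check of the numbers in the quote (my arithmetic, not figures from the cited handbook):

```python
# 1 bit per pixel, 16-bit memory words.
width, height = 512, 342
frame_bytes = width * height // 8
print(frame_bytes)        # 21888 bytes for 512x342
print(384 * 256 // 8)     # 12288 bytes for the 384x256 alternative

# On a 16-bit bus, one scan line needs width / 16 = 32 video word
# fetches; interleaving those 1:1 with CPU accesses is the "video uses
# exactly half the bandwidth during active lines" arrangement.
print(width // 16)        # 32 video fetches per active line
```

So the 512x342 framebuffer alone is a third of a 64K address space, which backs up Hertzfeld's point that the 16-bit bus and the higher resolution went hand in hand.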

[1] https://archive.org/details/Guide_to_the_Macintosh_Family_Ha...

[2] https://archive.org/details/Guide_to_the_Macintosh_Family_Ha...


This is fiendishly clever, and really quite elegant.

I made some of my own notes on how this works here: https://simonwillison.net/2025/May/26/css-minecraft/


I've tried spaced repetition systems several times. The problem that I always discover is that I don't really have stuff that's worth memorizing. Things that are actually important I remember without trying and for the rest of the things, doing daily card reviews starts to feel like a pointless chore after a while.

I've read part of Nick Lane's other book, The Vital Question, cannot comment on this new one; TL;DR competent biochemist (from complete amateur standpoint at any rate), excellent science communicator; you can watch some of his talks online. e.g. the one linked on this new book's page is good: https://www.youtube.com/watch?v=vBiIDwBOqQA

He's really fascinated by the overall transformation process of inorganic matter -> organic matter, a sort of scientific fixation - which is always enjoyable when it's done by a competent scientist - and it's really captivating stuff. (The fact I haven't finished his previous book has nothing to do with the book material itself, if anything it really captivated me; it's just my not-amazing new habit of not finishing books...)


Strongly recommend this blog post too which is a much more detailed and persuasive version of the same point. The author actually goes and builds a coding agent from zero: https://ampcode.com/how-to-build-an-agent

It is indeed astonishing how well a loop with an LLM that can call tools works for all kinds of tasks now. Yes, sometimes they go off the rails, there is the problem of getting that last 10% of reliability, etc. etc., but if you're not at least a little bit amazed then I urge you to go and hack together something like this yourself, which will take you about 30 minutes. It's possible to have a sense of wonder about these things without giving up your healthy skepticism of whether AI is actually going to be effective for this or that use case.

This "unreasonable effectiveness" of putting the LLM in a loop also accounts for the enormous proliferation of coding agents out there now: Claude Code, Windsurf, Cursor, Cline, Copilot, Aider, Codex... and a ton of also-rans; as one HN poster put it the other day, it seems like everyone and their mother is writing one. The reason is that there is no secret sauce and 95% of the magic is in the LLM itself and how it's been fine-tuned to do tool calls. One of the lead developers of Claude Code candidly admits this in a recent interview.[0] Of course, a ton of work goes into making these tools work well, but ultimately they all have the same simple core.

[0] https://www.youtube.com/watch?v=zDmW5hJPsvQ
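The "LLM in a loop with tools" core really is about this small. A runnable sketch with a stubbed model standing in for the real API call; every name here (`fake_llm`, `list_files`, the message shapes) is hypothetical, not from any actual agent:

```python
import json

def list_files(path="."):               # one example "tool" (stubbed result)
    return ["main.py", "README.md"]

TOOLS = {"list_files": list_files}

def fake_llm(messages):
    """Stand-in for a model: requests a tool once, then answers."""
    if not any(m["role"] == "tool" for m in messages):
        return {"tool": "list_files", "args": {}}
    return {"answer": "The project contains main.py and README.md."}

def agent_loop(user_prompt, llm=fake_llm, max_steps=10):
    messages = [{"role": "user", "content": user_prompt}]
    for _ in range(max_steps):
        reply = llm(messages)
        if "answer" in reply:                           # model is done
            return reply["answer"]
        result = TOOLS[reply["tool"]](**reply["args"])  # execute requested tool
        messages.append({"role": "tool", "content": json.dumps(result)})
    raise RuntimeError("agent did not finish")

print(agent_loop("What files are in this project?"))
```

Swap `fake_llm` for a real chat-completions call that emits tool requests and this is, structurally, the whole agent; everything else in the commercial tools is prompt engineering, sandboxing, and UI.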

