
That sounds right, but it can be badly wrong, because it presupposes that you can debug what the AI gets very confidently wrong.

There are three legs to the stool: specification, implementation, and verification. Implementation and verification both take low-level knowledge and a sophisticated understanding of how things break.


As stated in the abstract, the anomalies occur more frequently within a window around a nuclear event.

This precise point has been challenged, FWIW. See https://arxiv.org/pdf/2601.21946.

32°N 80°W, altitude 1000 miles

That is certainly the myth that drives this.

There is also a fair bit of demographics at play. Many of the people writing these little applications grew up and imprinted before open source was much of a thing.


Sorta kinda.

TLDR: historical brine production and modern wetlands restoration.

https://en.wikipedia.org/wiki/San_Francisco_Bay_Salt_Ponds


That is merely medieval times.

In ancient times, floats were all 60 bits and there was no single precision.

See page 3-15 of this: https://caltss.computerhistory.org/archive/6400-cdc.pdf


I see their 60-bit float had the same exponent size (11 bits) as today's doubles; only the mantissa was smaller, 48 bits instead of 52.
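
For scale, here is a minimal C sketch (assuming only the bit counts above: a 48-bit mantissa for the CDC format versus 52 bits for an IEEE 754 double) comparing their relative precision:

    #include <math.h>
    #include <stdio.h>

    int main(void) {
        /* Machine epsilon is roughly 2^-m for an m-bit fraction; implicit-bit
           and normalization details are ignored in this order-of-magnitude
           comparison. */
        double eps_cdc  = ldexp(1.0, -48);  /* CDC 6000-series 60-bit float */
        double eps_ieee = ldexp(1.0, -52);  /* IEEE 754 binary64 (double)   */

        printf("CDC  (48-bit mantissa): eps = %.3e  (~%.1f decimal digits)\n",
               eps_cdc,  -log10(eps_cdc));
        printf("IEEE (52-bit mantissa): eps = %.3e  (~%.1f decimal digits)\n",
               eps_ieee, -log10(eps_ieee));
        return 0;
    }

Built with -lm, this prints roughly 14.4 versus 15.7 decimal digits, so the old format gave up only about one decimal digit of precision.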

That written document is prehistoric.

By definition, a document that is written is historic, not prehistoric.

Prehistoric information could be preserved by an oral tradition, until it is recorded in some documents (like the Oral Histories at the Computer History Museum site).


Julia has full IEEE 754 rounding mode support.
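
Julia aside, the underlying IEEE 754 rounding modes are the same ones C exposes via fenv.h; a minimal sketch (plain C, not Julia's API) of switching between them:

    #include <fenv.h>
    #include <stdio.h>

    /* Compute 1.0/3.0 under each of the four IEEE 754 rounding directions.
       (Strictly this wants "#pragma STDC FENV_ACCESS ON"; the volatile
       operands keep the division from being folded at compile time.) */
    int main(void) {
        const int   modes[] = { FE_TONEAREST, FE_DOWNWARD, FE_UPWARD, FE_TOWARDZERO };
        const char *names[] = { "to nearest", "downward", "upward", "toward zero" };

        for (int i = 0; i < 4; i++) {
            fesetround(modes[i]);
            volatile double x = 1.0, y = 3.0;
            printf("%-12s 1/3 = %.20f\n", names[i], x / y);
        }
        fesetround(FE_TONEAREST);  /* restore the default mode */
        return 0;
    }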

And none of that improves the throughput of clinical trials. It just decreases the cost of coming up with things to put into trials.

Actually, this AI compute is not very useful for physics, protein folding, or many other high-performance computing workloads.

The problem is that the connectivity required for much of AI is very different from that required for classic HPC (more emphasis on bandwidth, less on super-low-latency, small-payload remote memory operations), and the numeric emphasis is very different (lots of mixed precision and lots of ridiculously small numeric resolutions like fp8, versus almost all fp64 with some fp32).

The result is that essentially no AI computers reach the high end of the TOP500.

The converse is also true: classic frontier-scale supercomputers don't make the most cost-effective AI training platforms, because they spend a lot of the budget on making HPC programs fast.
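
To make the precision gap concrete, a rough sketch (assuming the E4M3 layout for fp8, i.e. 3 fraction bits, and the standard fraction widths for fp16/fp32/fp64):

    #include <math.h>
    #include <stdio.h>

    int main(void) {
        /* Relative precision is roughly 2^-m for an m-bit fraction; subnormals,
           exponent bias, and reserved encodings (e.g. E4M3's NaN handling) are
           ignored here. */
        struct { const char *name; int frac_bits; } fmts[] = {
            { "fp8 (E4M3)", 3 }, { "fp16", 10 }, { "fp32", 23 }, { "fp64", 52 },
        };
        for (int i = 0; i < 4; i++) {
            double eps = ldexp(1.0, -fmts[i].frac_bits);
            printf("%-10s  eps ~ %.1e  (~%4.1f decimal digits)\n",
                   fmts[i].name, eps, -log10(eps));
        }
        return 0;
    }

fp8 carries well under one decimal digit of relative precision, versus nearly sixteen for fp64, which is why hardware tuned for one workload serves the other poorly.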


AlphaFold (protein folding) was trained on Google's TPUs, which, true, are not GPUs, but they are very close.

Flow simulation also happens on GPUs rather than CPUs, though.

El Capitan is number 1 on the TOP500, and its flops ratio between CPU and GPU is nearly 1 to 100.


You should mark sarcasm as subtle as this.
