That sounds right, but it can be badly wrong, because it presupposes that you can debug what the AI gets very confidently wrong.
There are three legs to the stool: specification, implementation, and verification. Implementation and verification both require low-level knowledge and a sophisticated understanding of how things break.
There is also a fair bit of demographics at play. Many of the people writing these little applications grew up and imprinted before open source was much of a thing.
By definition, a document that is written is historic, not prehistoric.
Prehistoric information could be preserved by an oral tradition, until it is recorded in some documents (like the Oral Histories at the Computer History Museum site).
Actually, this AI compute is not very useful for physics, protein folding, or many other high-performance computing workloads.
The problem is that the connectivity required for much of AI is very different from that required for classic HPC (more emphasis on bandwidth, less on very-low-latency, small-payload remote memory operations), and the numeric emphasis is very different (lots of mixed precision and ridiculously small formats like fp8, versus almost all fp64 with some fp32).
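To make the precision gap concrete, here is a minimal sketch of why low-precision formats that are fine for AI training break down for HPC-style accumulation. It uses Python's stdlib `struct` with the IEEE half-precision (`'e'`) format as a stand-in for the low-precision end, since fp8 has no stdlib representation; the specific numbers are illustrative, not from the original comment.

```python
import struct

def fp16(x: float) -> float:
    """Round a Python float (fp64) through IEEE half precision and back."""
    return struct.unpack('e', struct.pack('e', x))[0]

# Sum 10,000 copies of 0.0001. In fp64 the result is ~1.0; in fp16 the
# running sum stalls once the addend falls below half an ulp of the sum,
# so further additions round away to nothing.
n = 10_000
s16 = 0.0
for _ in range(n):
    s16 = fp16(s16 + fp16(0.0001))

s64 = 0.0
for _ in range(n):
    s64 += 0.0001

print(s16, s64)  # fp16 sum stalls far below the true value of 1.0
```

Mixed-precision AI training tolerates this kind of error (and compensates with fp32 accumulators and loss scaling), while a typical fp64 HPC kernel simply cannot.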
The result is that essentially no AI computers reach the high end of the TOP500.
The converse is also true: classic frontier-scale supercomputers don't make the most cost-effective AI training platforms, because they spend a lot of the budget on making HPC programs fast.