Currently some programmers, and with time more of them, have to write, integrate, and debug LLMs; so for programming to end, other LLMs would have to be able to do that, too. LLMs successfully modifying other LLMs is, like, the singularity. In other words, the moment programming ends is the same moment we all die.
Microsoft IntelliTest (formerly Pex) [1] internally uses the Z3 constraint solver, tracing program data and control flow well enough to generate input values that reach a given statement in the code. It can even run the program to figure out runtime values, hence the technique is called Dynamic Symbolic Execution [3]. We have the technology for this; it just isn't applied widely yet.
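To make the idea concrete, here is a toy sketch of what such a tool does, not IntelliTest or Z3 themselves, just the concept: extract the path condition guarding a branch and find an input that satisfies it (a real tool hands the condition to a solver; this sketch brute-forces a small domain):

```python
# Toy illustration of the idea behind dynamic symbolic execution.
# NOT IntelliTest/Z3; just a minimal concept sketch.

def program(x):
    y = 2 * x - 7          # data flow a real tool would trace symbolically
    if y == 13:            # path condition: 2*x - 7 == 13
        return "target reached"
    return "other path"

# A real tool passes the path condition "2*x - 7 == 13" to a constraint
# solver like Z3; brute-forcing a small domain has the same effect here.
def solve(condition, domain):
    return next(x for x in domain if condition(x))

x = solve(lambda x: 2 * x - 7 == 13, range(-1000, 1000))
print(x, program(x))       # 10 target reached
```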
I would also like to be able to point at any function in my IDE and ask it:
- "Can you show me the usual runtime input/output pairs for this function?"
- "Can you show me the preconditions and postconditions this function obeys?"
There are plenty of research prototypes doing this (Whyline [4], Daikon [5], ...) but sadly, no tool usable for the daily grind.
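The Daikon-style part is surprisingly simple in principle: observe input/output pairs at runtime, then report which candidate invariants held on every observation. A minimal sketch (an illustration of the approach, not the real Daikon):

```python
# Toy Daikon-style invariant guesser: record a function's runtime
# input/output pairs, then keep the candidate postconditions that
# held on every observed call.

def observe(fn, inputs):
    return [(x, fn(x)) for x in inputs]

# A real tool instantiates a large grammar of candidate invariants;
# these three are just examples.
CANDIDATES = {
    "result >= 0":       lambda x, r: r >= 0,
    "result >= arg":     lambda x, r: r >= x,
    "result == arg*arg": lambda x, r: r == x * x,
}

def infer_invariants(trace):
    return [name for name, check in CANDIDATES.items()
            if all(check(x, r) for x, r in trace)]

trace = observe(lambda x: x * x, [-3, -1, 0, 2, 5])
print(infer_invariants(trace))
# ['result >= 0', 'result >= arg', 'result == arg*arg']
```

Of course the hard parts are scale and filtering out coincidental invariants, which is exactly where the research prototypes stop short of a daily-grind tool.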
> but what these people often need is someone to sit with them for a significant amount of time and demonstrate how one breaks a problem down, builds small pieces that demonstrate functionality and then put those pieces together into a solution.
And what if this is done repeatedly for the junior engineer, and yet any initiative they show afterwards is still negligible?
One generalization of this concept I see: instead of keeping a sequence of successive states, you only need the initial state and a function that computes the next state from the previous one.
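As a minimal sketch, the "initial state + transition function" view is just a lazy sequence:

```python
# The "initial state + transition function" view, as a lazy sequence:
# all successive states are derivable, none need to be stored.
from itertools import islice

def states(initial, step):
    s = initial
    while True:
        yield s
        s = step(s)

# Example: a counter where each state is the previous one plus 3.
first_five = list(islice(states(0, lambda s: s + 3), 5))
print(first_five)  # [0, 3, 6, 9, 12]
```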
You can also see a connection to a version control system like Git. Instead of keeping snapshots of all the contents of the repository after each commit, one can keep only the initial repository state and the changes in each commit. Then, to get to the N-th state, you say "apply the first N commits to the initial state".
In the bouncing DVD logo example, the "function to compute the next state" (or the "commit contents") happens to be so easy and regular that it can be expressed via simple math functions.
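In fact the regularity goes further than a step function: for the DVD logo you can write a closed form that gives the position after t steps directly, without replaying any intermediate states. A sketch for one axis, assuming speed 1 and a travel range of `width` pixels (a triangle wave):

```python
# Closed-form position of the bouncing logo along one axis after t steps:
# a triangle wave that ping-pongs between 0 and width.

def bounce(t, width):
    return width - abs(t % (2 * width) - width)

print([bounce(t, 3) for t in range(8)])  # [0, 1, 2, 3, 2, 1, 0, 1]
```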
> You can also see a connection to a version control system like Git. Instead of keeping snapshots of all the contents of the repository after each commit, one can keep only the initial repository state and changes in each commit.
Not to invalidate your point, but this is a common misconception with git. Yes, many git commands will treat a commit as a diff compared to its parent(s), but the actual commit object is storing the full state of the tree. This still works out to be fairly efficient because if a file is identical between two commits, it is only stored once, and both commits point at the same stored file.
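You can see the dedup directly in a throwaway repo: two commits that share an unchanged file point at the exact same stored blob object (e.g.):

```shell
# Demonstrate that an unchanged file is stored once and shared by commits.
set -e
dir=$(mktemp -d); cd "$dir"
git init -q
git config user.email you@example.com
git config user.name you
echo hello > file.txt
git add file.txt && git commit -qm "first"
echo extra > other.txt
git add other.txt && git commit -qm "second"
# Same object id resolved from both commits => file.txt stored exactly once.
git rev-parse HEAD~1:file.txt
git rev-parse HEAD:file.txt
```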
This is a common misconception with git in the other direction. Yes, conceptually git treats each commit as a complete tree, but the actual pack files store deltas, because anything else would be far too inefficient. Loose objects are only the initial format, used before there is enough data for pack files to make sense.
The concept is what's important here, since they were correcting the idea that git works mainly on diffs, which it doesn't. The deltas are merely a storage optimisation.
That's what I was thinking, too. Perhaps we already know the majority of what there is to know when it comes to fundamental concepts? There is probably plenty of work left in improving our tools and engineering solutions, like machine learning-based software, as well as in understanding the intricacies of biology. Maybe mathematics, too. Maybe improvements in these will cause another golden age of discovery. Like, understanding biology well enough to gain significantly extended lifespans, which would mean one individual can build up much more expertise.
I don't know how significant this is, but "not the intended target" and "civilian" could mean very different things. In particular, it seems likely that a strike targeted at a terrorist leader has a high chance of killing their aides or others in the car, who are probably members of the terror cell, even though they weren't the actual target of the strike.
> I define a few test cases and the AI writes code for a generalized solution
How about the AI never writing any code, and instead training a "mini AI" / network that implements the test cases, of course in a generalized way, the way our current AI systems work? We could keep adding test cases for corner cases until the "mini AI" is so good that we can no longer come up with a test case that trips it up.
In such a future, the skill of being a comprehensive tester would be everything, and the only "code" written by humans would be the test cases.
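A toy version of this workflow, with a least-squares line fit standing in for the trained "mini AI" (purely illustrative; a real system would train an actual network):

```python
# "The test cases are the program": supply input/output pairs and let a
# fitted model generalize from them. A least-squares line fit stands in
# for the trained network here.

def fit_line(cases):
    n = len(cases)
    sx  = sum(x for x, _ in cases)
    sy  = sum(y for _, y in cases)
    sxx = sum(x * x for x, _ in cases)
    sxy = sum(x * y for x, y in cases)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return lambda x: a * x + b

# "Test cases" for fahrenheit -> celsius; the model must generalize
# to inputs that appear in no test case.
cases = [(32, 0), (212, 100), (50, 10)]
model = fit_line(cases)
print(round(model(100), 1))  # 37.8
```

The catch, of course, is exactly the one in the comment above: the tester has to be comprehensive enough that no untested corner case hides a wrong generalization.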
This situation reminds me of low-background steel:
> Low-background steel, also known as pre-war steel, is any steel produced prior to the detonation of the first nuclear bombs in the 1940s and 1950s. Typically sourced from shipwrecks and other steel artifacts of this era, it is often used for modern particle detectors because more modern steel is contaminated with traces of nuclear fallout.