While I'm definitely concerned that AI is a massive driver of centralization of power, at least in theory being able to do far more things in the space of "things physics admits to be possible" is massively wealth enhancing. That is literally how we have gotten from the pre-industrial world to today.
Controversially, I'd argue that there is likely an optimal and stable level of technological advancement which we would be wise not to cross. That said, we are human, so we will; I'd just rather it happened in a couple hundred years rather than a decade or two.
For example, it's hard to imagine an AI which gives us the capability to cure cancer but doesn't give us the capability to create targeted super-viruses.
I had Opus 4.6 start analyzing the binary structure of a parquet file because it was confused about the python environment it was developing in and couldn't use normal methods for whatever reason. It successfully decoded the schema and wrote working code afterwards lol.
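For anyone curious what decoding the structure by hand looks like: a Parquet file ends with the Thrift-encoded footer metadata, then a 4-byte little-endian footer length, then the magic bytes PAR1. A minimal sketch of locating that footer (hypothetical file path; actually parsing the Thrift-encoded schema is the harder part the model worked through on its own):

    import struct

    # Parquet layout: [data pages][Thrift footer metadata][4-byte footer length][b"PAR1"]
    def read_parquet_footer(path):
        with open(path, "rb") as f:
            f.seek(-8, 2)                     # last 8 bytes: footer length + magic
            footer_len, magic = struct.unpack("<I4s", f.read(8))
            if magic != b"PAR1":
                raise ValueError("not a parquet file")
            f.seek(-(8 + footer_len), 2)      # back up over the footer
            return f.read(footer_len)         # raw FileMetaData bytes, still Thrift-encoded

    # footer = read_parquet_footer("data.parquet")  # hypothetical path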
Didn't one of the PQC candidates get found to have a fatal classical vulnerability? Are we confident we won't find any future oopsies like that with the current PQC candidates?
The whole point of the competition is to see if anybody can cryptanalyze the contestants. I think part of what's happening here is that people have put all PQC constructions in one bucket, as if they shared an underlying technology or theory, so that a break in one calls all of them into question. That is in fact not at all the case. PQC is not a "kind" of cryptography. It's a functional attribute of many different kinds of cryptography.
The algorithm everyone tends to be thinking of when they bring this up has literally nothing to do with any cryptography used anywhere ever; it was wildly novel, and it was interesting only because it (1) had really nice ergonomics and (2) failed spectacularly.
SIKE made it all the way to round 3. It failed spectacularly, but it happened rather abruptly. In one sense it wasn't surprising because of its novelty, but the actual attack was somewhat surprising--nobody was predicting it would crumble so thoroughly so quickly. Notably, the approach undergirding it is still thought secure; it was the particular details that caused it to fail.
It's hubris to say there are no questions, especially for key exchange. The general classes of mathematical problems for PQC seem robust, but that's generally not how crypto systems fail. They fail in the details, both algorithmically and in implementation gotchas.
From a security engineering perspective, there's no persuasive reason to avoid general adoption of, e.g., the NIST selections and related approaches. But when people suggest not to use hybrid schemes because the PQC selections are clearly robust on their own, well then reasonable people can disagree. Because, again, the devil is in the details.
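For concreteness, the hybrid approach in question is roughly: run a classical exchange and a PQC KEM side by side, then feed both shared secrets through one KDF, so the session key survives a break of either half alone. A minimal sketch using the Python cryptography package, with the ML-KEM shared secret stubbed out by random bytes since I'm not assuming any particular PQC library (a real deployment would also bind the transcript/ciphertexts into the KDF input, as the TLS hybrid drafts do):

    import os
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
    from cryptography.hazmat.primitives.kdf.hkdf import HKDF

    # Classical half: ordinary X25519 exchange.
    alice = X25519PrivateKey.generate()
    bob = X25519PrivateKey.generate()
    ss_classical = alice.exchange(bob.public_key())

    # Post-quantum half: stand-in for the ML-KEM shared secret
    # (in practice this comes from an encapsulate/decapsulate pair).
    ss_pq = os.urandom(32)

    # Combine: both secrets go through one KDF, so the derived key holds
    # as long as at least one of the two exchanges remains unbroken.
    session_key = HKDF(
        algorithm=hashes.SHA256(), length=32,
        salt=None, info=b"hybrid-kex-demo",
    ).derive(ss_classical + ss_pq)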
The need to proclaim "no questions" feels more like a reaction to lay skepticism and potential FUD, for fear it will slow the adoption of PQC. But that's a social issue, and indulging that urge may cause security engineers to let their guard down.
What's your point? SIKE has literally nothing to do with MLKEM. There is no relationship between the algorithms. Essentially everybody working on PQC, including Bernstein himself, has converged on lattices, which, again, were a competitor to curves as a successor to RSA --- they are old.
SIKE: not lattices. Literally moon math. Do you understand how SIKE/SIDH works? It's fucking wild.
I'm going to keep saying this: you know the discussion is fully off the rails when people bring SIKE/SIDH into it as evidence against MLKEM.
You may not have any questions about the security of ML-KEM, but many people do. See, for example, DJB's compilation of such doubts from the IETF WG: https://blog.cr.yp.to/20260221-structure.html
These doubts may not be the kind curious onlookers have in mind, but to say there are no doubts among researchers and practitioners is a misrepresentation. In fact, you're flatly contradicting what DJB has said on the matter:
> SIKE is not an isolated example: https://cr.yp.to/papers.html#qrcsp shows that 48% of the 69 round-1 submissions to the NIST competition have been broken by now.
Unqualified assurances are what you hear from a salesman. You're trying to sell people on PQC. There's no reason to believe ML-KEM is a lemon, but you're effectively saying, "it's the last KEX scheme we'll ever need", and that's just not honest from an engineering point of view, even if it's what people need to hear.
I think you just gave away the game. To the extent I believe a CRQC is imminent, I suppose I am "trying to sell people on PQC". But then, so is Daniel Bernstein, your only cryptographically authoritative cite to your concern. Bernstein's problem isn't that we're rushing to PQC. It's that we didn't pick his personal lattice proposal.
And, if we're on the subject of how trustworthy Bernstein's concerns are, I'll note again: in his own writing about the potential frailty of MLKEM, he cites SIKE, because, again, he thinks you're too dumb to understand the difference between a module lattice and a generic lattice.
Finally, I'm going to keep saying this until I don't have to say it anymore: PQC is not a "kind" of cryptography. It doesn't mean anything that N% of the Round 1 submissions to the NIST PQC Contest were cryptanalyzed. Multivariate quadratic equation cryptography, supersingular isogeny cryptography, and F_2^128 code-based cryptography are not related to each other. The point of the contest was for that to happen.
Yeah, I get that. What I'm really asking is this: in my own field I can quickly get a vibe as to whether certain new work is good or not so good, and where any bugaboos are likely to be. For those who know PQC like I know economics, do they believe at this point that the algorithms have been analyzed to a level comparable to DH or RSA? Or is this really gonna be a rush job under the gun because we have no choice?
Lattice cryptography was a contender alongside curves as a successor to RSA. It's not new. The specific lattice constructions we looked at during NIST PQC were new iterations on it, but so was Curve25519 when it was introduced. It's extremely not a rush job.
The elephant in the room in these conversations is Daniel Bernstein and the shade he has been casting on MLKEM for the last few years. The things I think you should remember about that particular elephant are (1) that he's cited SIDH as a reason to be suspicious of MLKEM, which indicates that he thinks you're an idiot, and (2) that he himself participated in the NIST PQC KEM contest with a lattice construction.
Bernstein's ego is at a level where he thinks most other people are idiots (not without some justification), that's been clear for decades. What are you hinting at?
I'm not saying anything about his ego or trying to psychoanalyze him. I'm saying: he attempted to get a lattice scheme standardized under the NIST PQC contest, and now fiercely opposes the standard that was chosen instead.
It's the same situation with classical encryption. It's not uncommon for a candidate algorithm to be found to be broken during the selection process.
The secrecy around this is precisely the opposite of what we saw in the 90s when it started to become clear DES needed to go. Yet another sign that the global powers are preparing for war.
What do you mean? For as long as I can remember (back to late 1994) people understood DES to be inadequate; we used DES-EDE and IDEA (and later RC4) instead. What "secrecy" would there have been? The feasibility of breaking DES given a plausible budget goes all the way back to the late 1970s. The first prize given for demonstrating a DES break was only $10,000.
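To put rough numbers on "plausible budget": a 56-bit keyspace is small enough that a parallel search machine gets through it in days. The sketch below assumes the roughly 90 billion keys/second that EFF's 1998 Deep Crack machine reportedly reached; scale the rate for earlier or later hardware.

    # Back-of-the-envelope on DES exhaustive key search.
    keyspace = 2 ** 56
    rate = 9e10                          # keys/second, assumed (Deep Crack ballpark)
    avg_seconds = (keyspace / 2) / rate  # on average the key turns up halfway through
    print(f"average search time: {avg_seconds / 86400:.1f} days")  # ~4.6 days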
Triple-key DES (DES-EDE) had already been proposed by IBM in 1979, in response to the criticism that the 56-bit keys of DES are far too short.
So practically immediately after DES was standardized, people realized that NSA had crippled it by limiting the key length to 56 bits, and they started to use workarounds.
Before introducing RC2 and RC4 in 1987, Ronald Rivest had, since 1984, used another method of extending the key length of DES, named DESX, which was cheaper than DES-EDE as it used a single block cipher invocation. However, like RC4, DESX was kept as an RSA trade secret until it was leaked (also like RC4) during the mid-nineties.
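The key-whitening trick behind DESX is simple enough to show in a few lines: XOR one extra 64-bit key into the block before the single DES call and another one after it, which is why it costs one block cipher invocation instead of DES-EDE's three. A minimal sketch, assuming pycryptodome is installed for the underlying DES primitive (toy keys, illustration only):

    from Crypto.Cipher import DES   # pycryptodome, assumed available

    def desx_encrypt_block(block, k, k1, k2):
        """DESX on one 8-byte block: k2 XOR DES_k(block XOR k1)."""
        whitened = bytes(a ^ b for a, b in zip(block, k1))    # pre-whitening
        core = DES.new(k, DES.MODE_ECB).encrypt(whitened)     # the single DES call
        return bytes(a ^ b for a, b in zip(core, k2))         # post-whitening

    ct = desx_encrypt_block(b"8bytes!!", b"A" * 8, b"B" * 8, b"C" * 8)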
IDEA (1992, after a preliminary version was published in 1991) was the first publicly described block cipher that was more secure than DES.
Was that the only thing wrong with it? The 90s were definitely before my time, but I was under the impression from reading about it that there were also fundamental flaws in DES which led to the competition that ultimately produced AES.
Yes, that was what was wrong with DES. I mean, it also had an 8-byte block size, which turns out to be inadequate as well, but that's true of IDEA and Blowfish as well.
My read of the recent Google blog post is that they framed it as cryptocurrency-related stuff just so they don't say the quiet part out loud. But lots of people "in the know" / working on this are taking it much more seriously than just cryptobros going broke. So my hunch is that there's more to it and they didn't want to say it / couldn't / weren't allowed to.
It should be noted that quantum computers are a threat mainly for interactions between unrelated parties which perform legal activities, e.g. online shopping, online banking, notarized legal documents that use long-term digital signatures.
Quantum computers are not a threat for spies or for communications within private organizations where security is considered very important, where the use of public-key cryptography can easily be completely avoided and authentication and session key exchanges can be handled with pre-shared secret keys used only for that purpose.
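As a sketch of what "pre-shared secret keys used only for that purpose" can look like in practice: both sides hold a PSK provisioned out of band, exchange fresh nonces, and derive a per-session key symmetrically. Nothing here relies on public-key math, so Shor's algorithm gets no foothold, and Grover's quadratic speedup is absorbed by the 256-bit key. (Illustrative only; the labels and key names are made up.)

    import hmac, hashlib, os

    psk = os.urandom(32)        # pre-shared key, provisioned out of band
    nonce_a = os.urandom(16)    # initiator's fresh nonce, sent in the clear
    nonce_b = os.urandom(16)    # responder's fresh nonce, sent in the clear

    # Both sides compute the same session key; an eavesdropper without the
    # PSK learns nothing useful from the nonces alone.
    session_key = hmac.new(psk, b"session-key" + nonce_a + nonce_b,
                           hashlib.sha256).digest()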
Most likely the NSA or someone else is ahead of the game and already has a quantum computer. If the tech news rumors are true, the NSA has a facility in Utah that can gather large swaths of the internet and process the data.
Is this also true for other TPM/snitching/DRM chips out there? I.e., will every existing device eventually become jailbreakable in the future, or will we unfortunately not even get that benefit from all this?
The timeline here is for when major governments have access to CRQCs. It will be much longer than that (barring an AI singularity or something) before you have access to one.
Claude Code is IMO the benchmark today. For all of the various contexts I’ve used it in, it has mostly one-shot the tasks I’ve given it and is very user-friendly for someone who is not a professional software engineer. To the extent it fails, I can usually figure out quickly why and correct it at a high level.
I think Codex is a better fit for professional software engineers. It's able to one-shot larger, more complex tasks than Claude and also does better context management which is really important in a large codebase.
On the other hand, I think Claude is more friendly/readable and also still better at producing out-of-the-box nice looking frontend.
I think this is where we might have differing opinions. I'm a CTO by profession and I know what bad code is, so it is quite easy for me, based on my professional experience, to point out when Claude generates bad code. And when you point it out, or ask it why it didn't take the correct/simpler approach, the response is always along the lines of "Oops, sorry!" or "You're absolutely right to question that..."
Yeah my CTO says similar things. I usually just tell him to add it to the backlog and move on. At the end of the day these tools save us 3-4 eng hires and that’s what the board cares about
One of the most robust findings in labor economics is that in the long run capital and labor are complements, not substitutes. What this means is that over time, as capital productivity has increased, demand for both capital and labor has increased, rather than demand for labor falling while demand for capital increased. I'm skeptical that AI will be different than all of the previous inventions of the industrial era in this regard.
I don't believe demand for labour has increased. We used to force children as young as 6 to enter the labour force, and people used to work 6.5 days per week. Demand for labour has been in free fall since the 1970s, evidenced by stagnant wages in most of the developed world. Furthermore, capital is accumulating at the top as the capital owners use their position to extract it from the people below them. AI will only accelerate this. We are in for some interesting times for sure.
Not nearly as much as you'd like to believe. It takes years for us to grow up, get an education, and become useful. Changing what we do can be quite difficult, especially with the time and monetary costs of doing so. Plus, inroads by technology can wipe out jobs quickly even if they'll eventually be replaced.
When you have a bunch of people scared that they'll starve tomorrow, society will fall apart (even more than it has). The rise of authoritarianism will lead to rather bad outcomes in the medium term.
Apollo put a lot more burden on the Service Module than Artemis plans to put on the Orion. Apollo put the CSM/LM into a low lunar orbit while Artemis plans to put Orion into a high lunar orbit and make the Starship carry a lot more delta-V to land from a much higher velocity (and then accelerate back up to that velocity when coming back).
On top of that, there weren't really practical solar panels for a crewed spacecraft in the 1960s, so the Service Module had to carry tons of chemicals to produce electricity, as well as extra fuel for all of that weight. As a result it was massively overbuilt compared to anything we’d try today, and even so it had to take an expedited 3-day flight path to the Moon in order to conserve operational lifetime. Artemis does not have nearly as severe constraints on either the Orion or the future Starship and so can afford to take a more fuel-efficient 5-day coast up to the Moon and make the design tradeoffs on Orion that that entails.