Only the asymmetric portion of the cryptography (which is only used in the handshake) needs to move to PQC algorithms. Symmetric crypto algorithms (AES/ChaCha20/SHA-*), which are used after the handshake, are not as badly affected by quantum computing (Grover's algorithm at best halves their effective security, which existing key sizes already have margin for), so they're not being replaced in the immediate term. I'm pretty sure that general-purpose CPUs do not have hardware acceleration for the asymmetric crypto anyways.
Post-quantum algorithms tend to be slower than existing elliptic curve algorithms and require more data to be exchanged to provide equivalent security against attacks run on non-quantum computers.
This page lists some figures for ML-KEM-768 (the PQ key exchange algorithm that's most widely deployed today): https://blog.cloudflare.com/pq-2025/#ml-kem-versus-x25519 It's actually about twice as fast as X25519 (a highly optimized ECC algorithm) but requires 1,184 bytes of data to be exchanged per keyshare vs 32 for X25519. In practice everyone today is using a hybrid scheme (where you do both ECC and PQ in case the PQ algorithm has an undiscovered weakness), so an ECC+PQ key exchange will be strictly slower than an ECC-only one.
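To make the hybrid idea concrete, here's a minimal sketch (not the actual TLS 1.3 key schedule, and the function names and the fixed salt are my own illustrative choices): both shared secrets are fed into one KDF, so the output stays secret as long as either component algorithm is unbroken.

```python
import hashlib
import hmac


def hkdf_extract(salt: bytes, ikm: bytes) -> bytes:
    """HKDF-Extract step (RFC 5869): HMAC the input keying material."""
    return hmac.new(salt, ikm, hashlib.sha256).digest()


def hybrid_secret(ecc_shared: bytes, pq_shared: bytes) -> bytes:
    # Concatenating both shared secrets before the KDF means an attacker
    # must break BOTH X25519 and ML-KEM-768 to recover the output.
    # The salt string here is illustrative, not from any spec.
    return hkdf_extract(b"hybrid-kex-demo", ecc_shared + pq_shared)


# Placeholder secrets: an X25519 shared secret is 32 bytes, and an
# ML-KEM-768 shared secret is also 32 bytes (its *keyshare on the wire*
# is the part that is 1,184 bytes).
ecc = b"\x01" * 32
pq = b"\x02" * 32
print(len(hybrid_secret(ecc, pq)))  # 32 (one SHA-256 output)
```

The point of the construction is the failure mode: if ML-KEM turns out to be weak, the X25519 input still keeps the derived secret safe, and vice versa.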
This page lists some numbers for different PQ signature algorithms: https://blog.cloudflare.com/another-look-at-pq-signatures/#t... Right now NIST has selected three different ones (ML-DSA, SLH-DSA, and Falcon a.k.a. FN-DSA), which each have different trade-offs.
SLH-DSA is slow and requires a large amount of data per signature, but it's considered the most secure of the three since it's based on the well-understood security properties of symmetric hash algorithms. It was selected primarily as a "backup" in case the other two algorithms are both broken, which is conceivable since they're both based on the same underlying mathematical structure (structured lattices).
ML-DSA and Falcon are both fairly fast (within an order of magnitude of Ed25519, the EdDSA signature scheme over Curve25519), but both require significantly larger keys (41x/28x) and signatures (38x/10x) compared to Ed25519. Falcon has the additional constraint that achieving the listed performance in that table requires a hardware FPU implementing IEEE-754 with constant-time double-precision math. CPUs without such an FPU must fall back to software emulation of the required floating-point operations (most phone, desktop, and server CPUs have such an FPU, but many embedded CPUs and microcontrollers do not).
The net result is that TLS handshakes with PQ signatures and key exchange may balloon to high single- or double-digit kilobytes in size, which will be especially impactful for users on marginal connections (and may break some "middle boxes" https://blog.cloudflare.com/nist-post-quantum-surprise/#dili...).
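Here's a rough back-of-envelope for where those kilobytes come from, using the published parameter sizes (ML-KEM-768 encapsulation key 1,184 B and ciphertext 1,088 B; ML-DSA-44 public key 1,312 B and signature 2,420 B). The "two signatures in the certificate chain" is my simplifying assumption; real handshakes also carry SCTs, OCSP staples, etc., so treat this as a floor:

```python
# Post-quantum material in a hypothetical all-PQ handshake (bytes).
pq_kex = 1184 + 1088            # ML-KEM-768: client keyshare + server ciphertext
pq_auth = 2 * (1312 + 2420)     # assumed 2 cert-chain public keys + signatures
print(pq_kex + pq_auth)         # 9736 bytes, i.e. ~9.7 kB

# The same roles filled classically with X25519/Ed25519.
classic = 2 * 32 + 2 * (32 + 64)  # keyshares + 2 keys + 2 signatures
print(classic)                    # 256 bytes
```

So the asymmetric material alone lands right in the "high single-digit kilobytes" range the handshake-size concern is about, versus a few hundred bytes today.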
Along similar lines, Mozilla recently updated their recommended server-side TLS configuration to enable the X25519MLKEM768 post-quantum key exchange now that it's making it into actually-deployed software versions: https://wiki.mozilla.org/Security/Server_Side_TLS At the same time they removed their "old client" compatibility profile as newer TLS libraries do not implement the necessary algorithms (or at least do not enable them by default) and slightly tweaked the "intermediate" compatibility profile to remove a fallback necessary for IE 11 on Windows 7 (now Windows 10 is the minimum compatible version for that profile).
> This obviously doesn't represent all of the billions of dollars spent on software like Salesforce, SAP, Realpage, Booking.com, etc. etc. (all notoriously buggy, slow, and complex software). You can't tell me with a straight face that all of the thousands of developers who develop these products/services care deeply about the quality of the product. They get real nice paychecks, benefits and put dinner on the table for their families. That's the market.
Those first three are "enterprise" or B2B applications, where the person buying the software is almost never one of the people actually using the software. This disconnect means that the person making the buying decision cannot meaningfully judge the quality of any given piece of software they are evaluating beyond a surface level (where slick demos can paper over huge quality issues) since they do not know how it is actually used or what problems the actual users regularly encounter.
Users care about quality, even if the people buying the software do not. You can't just say "well, the market doesn't care about quality" when the market incentives are broken for a particular type of software. When the market incentives are aligned between users and purchasers (such as when they are the same person), quality tends to become very important for the market viability of software (see Windows in the consumer OS market, which is perceptibly losing share to macOS and Linux following a sustained decline in quality over the last several years).
You literally just told me the market doesn't care about quality. I don't get what point you're trying to make?
> When the market incentives are aligned between users and purchasers (such as when they are the same person) quality tends to become very important for the market viability of software
Right, but this magical market you're talking about doesn't exist. That's my point.
Have you seen Facebook's code quality? Have you seen any big Chinese corporation's code? There are a lot of very profitable businesses in the world with endless amounts of tech debt. But tech debt is not necessarily a big deal in most scenarios. Obviously I'm not talking about mission-critical software, but for general consumer/business software, it's fine. The hard part is understanding where you can cut costs / add debt, and that comes from requirements gathering.
Every generation of the production Nissan Leaf has used lithium batteries. AFAIK no modern (~post-2000) mass-produced (>10k units sold) EV has ever used NiMH or lead-acid batteries.
Edit: Checking Wikipedia to verify my information, I found out that Nissan actually sold a lithium-battery EV in 1997 to comply with the same 90s CARB zero-emissions vehicle mandate that gave us the GM EV1: https://en.wikipedia.org/wiki/Nissan_R%27nessa#Nissan_Altra
EVs no, but I think some Toyota hybrids (which are of course not even PHEVs) still use NiMH. Toyota tends to be very tight-lipped about their batteries and their sizes (or rather, lack thereof).
Tends to be tight-lipped??? It is in the catalog[1]! It is more that American consumers aren't tech-obsessed than that Toyota is reluctant to share.
Even just looking at online media reports[2][3] clearly sourced from the exact same press event, it is obvious that the US English versions are much lighter in content than the Japanese ones. They're putting the information out; no one's reading it. It's just the type of information that doesn't drive clicks. The language barrier has an effect too, since Toyota is a Japanese company and the US is an export market, but it's fundamentally the same phenomenon as citizen-facing government reports that never get read and are often imagined as being "hidden and withheld from public eyes": just a communication issue.
I was looking up this year's Corolla a while ago, and likewise there was minimal info that I could see about the battery capacity, which I think I figured out was about 3 kWh.
> Are you saying that if I'm using D-without-GC, I can use any D library, including ones written with the assumption that there is a GC? If not, how does it not fracture the community?
"Are you saying that if I'm using Rust in the Linux kernel, I can use any Rust library, including ones written with the assumption they will be running in userspace? If not, how does that not fracture the community?"
"Are you saying that if I'm using C++ in an embedded environment without runtime type information and exceptions, I can use any C++ library, including ones written with the assumption they can use RTTI/exceptions? If not, how does that not fracture the community?"
You can make this argument about a lot of languages and the particular subsets/restrictions on them that are needed in specific circumstances. If you need to write GC-free code in D, you can do it. Yes, it restricts what parts of the library ecosystem you can use, but that's no different from any other language that has adoption across a wide variety of applications. It turns out that in reality most applications don't need to be GC-free (the massive preponderance of GC languages is indicative of this), and GC makes them much easier and safer to write.
I think most people in the D community are tired of people (especially outsiders) constantly rehashing discussions about GC. It was a much more salient topic before the core language supported no-GC mode, but now that it does it's up to individuals to decide what the cost/benefit analysis is for writing GC vs no-GC code (including the availability of third-party libraries in each mode).
The RTTI vs no-RTTI thing and the exceptions vs no-exceptions thing definitely does fracture the C++ community to some degree, and plenty of people have rightly criticized C++ for it.
> If you need to write GC-free code in D you can do it.
This seems correct, with the emphasis. Plenty of people make it sound like the GC in D is no problem because it's optional, so if you don't want GC you can just write D without a GC. It's a bit like saying that the stdlib in Rust is no problem because you can just use no_std, or that exceptions in C++ are no problem because you can just use -fno-exceptions. All these claims are naïve for the same reason: each option locks you out of most of the ecosystem.
Yeah, AV1 is primarily based on what Google was working on for their own successor to VP9, what would have been VP10, with technology contributions from Mozilla/Xiph's Daala and Cisco's Thor codecs.
Seriously. Also, less and less software supports 7. Importantly, Firefox ESR 115 is the last modern browser to support Windows 7, and it reaches EOL after this month[1]; Chrome dropped Windows 7 support in 2023[2].
Right now I wonder if a browser or the Chromium Embedded Framework (and then the applications that use it updating past the cut-off version) is the 'killer app' that motivates upgrades. The CEF cut-off for Win7/8/8.1 came when their extended support periods ended, and presumably they upgraded the underlying SDK to rely on features not present pre-Win10. Presumably Oct 2028 (+3 years) is when the same will happen again.
Just made a dirty hack and was able to run the latest LibreOffice 25.8 on Win7 despite its official "unsupported" status. The save dialog doesn't work, but luckily the software is cross-platform and has its own native dialog, which can be enabled in the options.
There is a port of Mbed TLS to Classic Mac OS[1] which has TLS 1.2 enabled, but per the README.md it probably doesn't enable the right cipher suites (only AES-CBC ciphers are on by default) to connect to servers configured per the widely-used Mozilla "intermediate" recommendations[2] (which require AES-GCM or ChaCha20 ciphers).
The Technology Connections video series on RCA's SelectaVision CED home video system touches on this quite a lot (it was a horribly mismanaged project that took more than a decade to commercialize, by which time it had already been superseded by VHS/Betamax and LaserDisc)[1]. His main source for the information on the development of the CED system was the book "The Business of Research: RCA and the VideoDisc" by Margaret B. W. Graham.