> Europe and the Gulf diversifying both their investments and defense purchases.
With what? The euro, the yuan? Or weapons from France?
I hate to admit it, but it's much less that the US is great because it's the reserve currency, and much more that the world reserve currency is the dollar because the US is what it is.
Weapons are expensive, and it only makes sense to buy them from a country that specializes in them. And a country that makes weapons at huge scale is likely to be big enough to tilt itself toward all the ugly things the modern US military-industrial complex is.
The US isn't delivering Patriot missiles to Switzerland. Switzerland froze payments. The US unilaterally took the money Switzerland escrowed to buy F-35s, put it toward Patriot missiles it won't deliver, and asked Switzerland to refill the F-35 escrow account. Hundreds of millions of dollars have been siphoned off.
I'm having trouble reconciling this comment with reports that US stockpiles are already being depleted by the Iran war. At this point the US weapons production seems relatively specialized and inefficient, not "huge scale." Someone more informed care to weigh in?
Raytheon is about half the size of PepsiCo, with about the same profit margin.
The supposed "Military Industrial Complex" that Ike warned about died years later, and the end of the Cold War buried what little remained. The F-35 is basically the only big military construction project we've had in a very long time, and it comes at a few hundred airframes per year.
In WW2, we were producing 10k+ rather advanced airframes every single year. In each category.
The company that designed and built the M1 Abrams tank doesn't really exist anymore, for example. We, like Russia, might not really have the capability to build 4,000 hulls in a short timeframe, which is table stakes if we are actually concerned about a war with China. We were able to do these things back in WW2 because we, through central planning (not a free market), reorganized something like 1/20th of the economy into building war assets. FDR decreed that we build 120k Shermans. We eventually managed 50k.
A lot of the supposed "graft" and pork of the defense industry is about giving it a lot of leeway just to stick around. Once you lose domain knowledge it's gone forever, you have to expend considerable resources to rebuild and recollect it. No, documentation doesn't count. Reading all of our notes hasn't fixed the fact that Russia and China can't build the exceptional jet engines we can.
I don’t think central planning is the only factor. Ukraine is leading a massively heroic drone innovation effort with very good results. In 2026 there are internet blackouts in Russia to hide the social chaos created by 1000+ drone strikes daily in deep Russian territory. It’s not a centrally planned war effort in Ukraine. It’s dozens of startups which started in people’s basements and bombed out ruins. The main factor is not central planning, but rather an existential threat which forced a massively heroic war effort.
> A lot of the supposed "graft" and pork of the defense industry is about giving it a lot of leeway just to stick around.
Even if they stick around, will they maintain an edge? Seems like their incentives are similar to a tenured professor's: do the minimum, collect a paycheck. Even if they are still creating cutting-edge weapons, could they scale up efficiently when needed? Just read Casey Handmer's extremely critical posts on Orion/SLS; I would not trust the side of Lockheed/Boeing he describes with critical national security capabilities.
I’m probably not more informed, but it seems to me that it can be both. The rate of expenditure in a medium-sized war will far outstrip peacetime production needs. Even if you’re arming half the planet’s militaries, your peacetime production rate will be much smaller than what’s being used now, even if you’re building a lot by non-wartime standards.
Ukraine butchers soldiers for cheap. The US drops a bomb through the Ayatollah’s bedroom window for not-cheap. It’s not clear to me which is more cost effective in the end. (Ignoring for a moment that US strategy in this war seems to be nonexistent. Imagine these capabilities were being used with some actual goal in mind besides “if we take their king then we win.”)
There is a big difference between defending against an unprovoked invasion vs assassinating an unsuspecting target.
Ukraine has shown themselves very capable of surgical offensive strikes using cheap drones deep inside enemy territory, so your comparison is not valid.
The US is defaulting on military orders to Europe, and Germany just announced a 1 trillion euro rearmament plan. Europe is manufacturing big time. The Gulf states as of yesterday are now buying from Ukraine, for fuck's sake.
> "If an AI had been in Stanislav Petrov’s position — the Soviet officer who prevented a potential nuclear war in 1983 — it would not have refused to launch." Academic, USA
For the record, Petrov made this decision based on a false assumption that the US wouldn't launch just a few missiles, but would instead send a lot, all at once. Except that one of the US plans was to send a few missiles to destroy critical targets, and then follow it up with a large-scale attack.
Petrov himself said that he might've acted differently had he been aware of this possibility. And even then, his initial hesitancy was basically a 50/50 gamble.
An AI would basically do the same thing if asked: roll a random number, launch the nukes if it falls below a threshold, and adjust the threshold based on some LLM evaluation of the situation if needed.
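A toy sketch of the decision rule that comment describes; the function names, the keyword heuristic standing in for the "LLM evaluation," and the thresholds are all invented for illustration and not any real system:

```python
import random

def evaluate_situation(report: str, base_threshold: float) -> float:
    """Stub for 'some LLM evaluation of the situation': here, just a
    keyword heuristic that nudges the launch threshold upward for
    escalatory-sounding reports. Purely illustrative."""
    bump = 0.2 if "massive" in report.lower() else 0.0
    return min(1.0, base_threshold + bump)

def decide_launch(report: str, base_threshold: float = 0.5) -> bool:
    """Roll a random number and 'launch' only if it falls below the
    (possibly adjusted) threshold, i.e. a weighted coin flip."""
    threshold = evaluate_situation(report, base_threshold)
    return random.random() < threshold
```

Which is the point: stripped of the prose, the "judgment" reduces to a weighted coin flip.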
It's why every integration basically tries to piggyback off of a subscription, and why Anthropic has to continuously play whack-a-mole trying to shut those services down.
I agree with Palmer that corporations shouldn't control governments.
But that's not what this is about. The US government is free to not use Anthropic's services.
The problem is that the government is using bullying tactics against a company exercising its right not to sell. Especially if they actually designate Anthropic as a supply chain risk - not only is that threat absolutely ridiculous, but actually doing so should be 9/10 on the danger scale.
WTF is even happening anymore? How did we get here that this is even up for debate???
Because they're not trying to build a leading coding agent - they're racing to AGI. That means cramming everything and anything, from every angle because general intelligence is happening next year and that'll connect all the dots, leading to hypergrowth and complete industry dominance, across all domains.
Seriously, if you listen to Dario talk, it's non-stop how there's a tsunami coming, people have no idea of what's ahead, how general intelligence or superintelligence is right around the corner. But also, this is super dangerous, and only Anthropic can save us from a doomsday scenario.
I am not saying this to be sarcastic - the problem is that people from OpenAI/Anthropic are saying things like superintelligence in 3 years, or Boris saying coding is solved and that 100% of his code is written by AI.
It's not good enough to just say that Oreo CEOs will, of course, say we need more Oreos.
There's a real grey area where these tools are useful in some capacity, and in that confusion we're spending billions. Too many people are saying too many conflicting things, and chaos is never good for clear long-term growth.
Either that 20 years is completely inapplicable to AI, or we're in for a world of hurt. There's no in between given the kinds of bets that have been made.
AI companies don’t have 20 years; they have 5 years at most in which they have to turn a profit.
They don’t have time to wait for all the companies to pick up use of AI tooling in their own pace.
So they lie and try to manufacture demand. Well, the demand is there, but they have to manufacture FOMO so that the demand materializes now and not in 10 or 20 years.
This outlook is as short-sighted as the 2000 fiber optic bust. Critics then thought overcapacity meant the end, yet that infrastructure eventually created the modern internet. Capital does not walk away from a fundamental shift just because of one market correction. While specific companies may fail, the long-term value of the technology ensures that investment will continue far beyond a five-year window.
The massive investment in power grids and data centers provides a permanent physical backbone that outlives any specific silicon generation. This infrastructure serves as a durable shell for the model design knowledge and chip architectural IP gained through each iteration. Capital is effectively funding a structural moat built on energy access and engineering mastery.
Seems like there’s a lot of resources being dumped into those data centers that will not be very useful. Saying it will all be worthwhile because we’ll have the buildings and the modest power grid updates (which are largely paid for by taxpayers, anyway) feels like saying a PS5 is a good long-term investment because the cords and box will still be good long after the PS5 has outlived its usefulness.
The "PS5" analogy fails to account for how "useless" hardware often triggers the next paradigm shift. For decades, traditionalists dismissed high-end GPUs as expensive toys for gamers, yet that specific architecture became the accidental engine of the AI revolution.
And you imagine these incredibly expensive-to-operate, environmentally damaging, highly specialized, years-outdated GPUs will trigger some sort of technological revolution that won’t be infinitely better served by the shiny new GPUs of the day that will not only be dramatically more powerful, but offer a ton more compute for the amount of electricity used?
The AI use of GPUs didn’t stem from a glut of outdated, discarded units with nearly no market value. All of those old discarded GPUs were, and still are, worthless digital refuse.
The closest analog I can think of to what you’re referring to is cluster computing with old commodity PCs, which got companies like Google and Hotmail off the ground… for a few years until they could afford big boy servers. And now all of those, and most current PCs on the verge of obsolescence, are also worthless digital refuse.
The big difference is that Google et al chose those PC clusters because they were cheap, commodity pieces right off the bat, not because they were narrowly scoped specialty hardware pieces that collectively cost hundreds of billions of dollars.
Your supposition fails to account for our history with hardware in any reasonable way.
Focusing exclusively on the physical decay and replacement cycle of hardware is a classic case of tunnel vision. It ignores the fact that the semiconductor industry’s true value lies in the evolution of manufacturing processes and architectural design rather than the lifespan of a specific unit. While individual chips eventually become obsolete, the compounding breakthroughs in logic and efficiency are what actually drive the technological revolution you are discounting.
Tunnel vision is ignoring the astonishing amount of money and environmental resources our society is dumping into these very physical, very temporarily useful chips and their housing because… of what we learn by doing that. We should have dumped 1/100th of that money into research and we’d have been further along.
This isn’t a normal tech expenditure: the scale of it threatens the economy in a serious way if they get it wrong. That’s 401ks, IRAs, pension plans, houses foreclosed on, jobs lost, surgeries skipped… if we took a tiny fraction of this race to hypeland and put it toward childhood food insecurity, we could be living in a fundamentally different-looking society. The big takeaway from this whole ordeal has nothing to do with semiconductors; it is that rich guys playing with other people’s money, singularly focused on becoming king of the hill, are still terrible stewards of our financial system.
Dismissing massive capital expenditure as "hypeland" ignores the historical reality that speculative bubbles often build the physical foundation for the next century. The Panic of 1873 saw a catastrophic evaporation of debt-driven capital, yet the "worthless" railroads built during that frenzy remained in the ground. That redundant, overbuilt infrastructure became the literal backbone of American industrialization, providing the logistics required for a global economic shift that far outlasted the initial financial ruin.
Divorcing research from "learning by doing" is a recipe for a bureaucratic ivory tower. If you only funnel money into pure research without the messy, expensive, and often "wasteful" reality of large-scale deployment, you end up with an economy of academic metrics rather than industrial power.
The most damning evidence against the "research-only" model is the birth of the Transformer architecture. It did not emerge from an ivory tower funded by bureaucratic grants or academic peer-review cycles; it was forged in the fires of industrial practice.
History shows that a fixation on immediate social utility or "rational" cost analysis can be a strategic trap. During the same era, Qing Dynasty bureaucrats employed your exact logic, arguing that the astronomical costs of industrialization and rail were a waste of resources better spent elsewhere. By prioritizing short-term stability over "expensive" technological leaps, they missed the industrial window entirely. Two decades later, they faced an industrialized Japan in 1894 and suffered a total collapse. The "waste" of one generation is frequently the essential infrastructure of the next.
How much capital was wiped out for it to be cheap after the bust? Someone is going to eat the exuberance loss in the near term, even if there is long term value.
It's a "Motte and Bailey" system [0], where the extreme "AI will do everything for you" claim keeps getting thrown around to try to get investors to throw in cash, but then somehow it transmutes into "all technologies took time to mature stop being mean to me."
To be fair, it isn't necessarily the same people doing both at once. Sometimes there are two groups under the same general banner, where one makes the big-claims, and another responds to perceived criticism of their lesser-claim.
> the problem is that people from OpenAI/Anthropic are saying things like superintelligence in 3 years
An even bigger problem is that people listen to them even after they say rationally implausible things. When even Yann LeCun is throwing up his hands and saying "this approach won't work," it's pretty bad.
Researchers looked at GPT-3 in 2023 and saw “sparks of AGI”. The saying “feel the AGI” became widespread not long after, if I’m remembering right. We’ve been saying AGI is right around the corner for a while now. And of course, if you predict the end of the world every day, you’ll eventually be right. But for the moment, what we have is an exceptionally powerful coding assistant that can also speed up entry-level work in various other white collar industries. That is earth-shattering, paradigm-shifting. But given how competitive and expensive the AI game has become, that is not enough, so it needs to be “superintelligence” - and it’s just not.
Ah, that’s my mistake. Thank you. I saw 2023, I thought GPT-3. Even still, people talk about GPT-4 today like it was a quaint little demo. It was a magnificent achievement, it scared the pants off of a lot of people, and sparked a new round of “is AI conscious?” discourse.
What does that mean? By what metric do you measure "AGI", whatever that means? Industry definitions are incredibly vague, perhaps intentionally so, with no benchmarks to define how a model, harness, or other technology might achieve "AGI". They have no intelligence, and can't even reason that you need to take your car to the car wash to have it washed[0].
If somehow Claude became sentient that would be sci-fi. One day it’s wrangling CSS and Spring Boot Controllers and the next it’s telling you opinions it developed through its own experiences on programming languages. Not sure that’s on the near horizon, but it’s definitely impressive technology.
> Superintelligence in 3 years doesn't really sound that crazy given how quickly I can write code with Claude. I mean we're 90%-95% of the way there already.
Yeah? So you must have a clear idea of where "there" is, and of the route from here to there?
Forgive me my skepticism, but I don't believe you. I don't believe that you actually know.
Advances to round 2/7 to be able to do a powerpoint presentation so that companies will, at best, be forced to put some pointless label as a legal loophole, that consumers will promptly ignore because everyone will have it and it'll be meaningless.
Because that would mean AI isn't going to replace entire industries, which is the only way to justify the, not billions, but trillions in market value that AI leaders keep trying to justify.