Relevant Tony Hoare quote: “There are two ways of constructing a software design: one way is to make it so simple that there are obviously no deficiencies, and the other way is to make it so complicated that there are no obvious deficiencies.”
I think this is so relevant, and thank you for posting this.
Of course it's trivially NOT true that you can defend against all exploits by making your system sufficiently compact and clean, but you can certainly have a big impact on the exploitable surface area.
I think it's a bit bizarre that it's implicitly assumed all codebases are broken enough that, if you attacked them sufficiently, you'd eventually find endlessly more issues.
Another analogy here is to fuzzing. A fuzzer can walk through all sorts of states of a program, but when it hits a password, it can't really push past that because it needs to search a space that is impossibly huge.
It's all well and good to try to exploit a program, but (as an example) if that program _robustly and very simply_ (the hard part!) says... that it only accepts messages from the network that are signed before it does ANYTHING else, you're going to have a hard time getting it to accept unsigned messages.
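To make that concrete, here is a minimal sketch of the "verify before doing ANYTHING else" shape in Python, using the `cryptography` package's Ed25519 API. The key handling, wire format, and process() hook are all invented for illustration:

    # Sketch only: verify-before-parse. Wire layout and process()
    # are placeholders, not any particular system's design.
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    # In reality the trusted public key ships with the program;
    # here we generate a keypair just so the sketch is self-contained.
    TRUSTED_KEY = Ed25519PrivateKey.generate().public_key()

    def process(payload: bytes) -> None:
        ...  # application logic; only ever sees verified input

    def handle_message(wire: bytes) -> None:
        sig, payload = wire[:64], wire[64:]  # 64-byte Ed25519 signature prefix
        try:
            TRUSTED_KEY.verify(sig, payload)  # raises before we touch the payload
        except InvalidSignature:
            return  # drop silently: no parsing, no state change
        process(payload)

This is also where the fuzzing analogy above bites: random bytes thrown at handle_message would have to forge a 512-bit signature before reaching process at all.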
Admittedly, a lot of today's surfaces and software were built in a world where you could get away with a lot more laziness than that. But I could imagine, for example, a state of the world in which we're much more intentional about what we accept and even bring _into_ our threat environment. Similarly to the shift from network to endpoint security. There are, for sure, uh, a million systems right now with a threat model wildly larger than it needs to be.
Note that there was a major press cycle about this in October / November of last year - a quick Google showed stories in the Guardian, The Intercept, and the Cornell Sun, as well as commentary on Reddit. Not inconceivable that they found out about it last October and had six months to leave and de-Googlify.
> Note that there was a major press cycle about this in October / November of last year
Fair point. However... the parent's comment is also fair, because the article does a poor job of raising this material fact. You have to click through to a sub-article.
It's almost like this article should be tagged (2025) because it's basically a replay of the author's account from 2025.[0]
Note that the judge is bound by precedent and law as to what "unreasonable" means; they can't just make it up as they go along unless there is no precedent. Otherwise the case can be reversed on appeal.
I was on a jury recently where we had to swap out judges in the last couple days of the trial. The reason was that the judge had been assigned another case where the defendant had not waived his right to a speedy trial. The judge wanted to finish his existing case first, the defense lawyers said "You can't do that", the judge looked it up and found that they were indeed right, so off he went to start the new case and handed off the existing one to a colleague. In my experience judges really do take the law seriously - that's how they get to be judges.
My understanding is that judges have certain specialties - one judge might be well versed in a particular area of law but not others. The case I was on was in an area where nobody in the district had expertise, and everybody (judge, prosecutor, defense, jury) was learning as they went. The new incoming case was in an area that our previous judge would normally handle. So it was assigned to him because it came in through his department, while the case I was on was sort of a free-floating orphan where not much was lost by having another judge handle it (and it was also already in the jury instructions phase, with testimony complete).
That's a terrible idea though. It means that anyone who selects the "Do not track me" option will find that they can't access the content at all, which will quickly train users to never select "Do not track me".
Somewhere around 2005-2007, when people were wondering if the Internet was done, PG was fond of saying "It has decades to run. Social changes take longer than technical changes."
I think we're at a similar point with LLMs. The technical stuff is largely "done" - LLMs have closer to 10% than 10x headroom in how much they will technologically improve; we'll find ways to make them more efficient and burn fewer GPU cycles, and the cost will come down as more entrants mature.
But the social changes are going to be vast. Expect huge amounts of AI slop and propaganda. Expect white-collar unemployment as execs realize that all their expensive employees can be replaced by an LLM, followed by white-collar business formation as customers realize that product quality went to shit when all the people were laid off. Expect the Internet as we loved it to disappear, if it hasn't already. Expect new products or networks to arise that are less open and so less vulnerable to the propagation of AI slop. Expect changes in the structure of governments. Mass media was a key element in the formation of the modern nation state; mass cheap fake media will likely lead to its fragmentation, as any old Joe with a ChatGPT account can put out mass quantities of bullshit. Probably expect war as people compete to own the discourse.
> Somewhere around 2005-2007, when people were wondering if the Internet was done
Literally who wondered that? Drives me nuts when people start off an argument with an obvious strawman. I remember the time period of 2005-2007 very well, and I don't remember a single person, at least in tech, thinking the Internet was done. I don't know, maybe some ragebait articles were written about it, but being knee-deep in web tech at that time, I remember the general feeling was that it was pretty obvious there was tons to do. E.g. we didn't necessarily know what form mobile would take, but it was obvious to most folks that the tech was extremely immature and that it would have a huge impact on the Internet as it progressed. That's just one example - social media was still in its nascent stages then, so it was obvious there would be a ton of work around that as well.
If you were in tech in 2005-2007 you were part of a small minority of the general population. It often didn't feel like a small minority because, well, you knew all those other people on the Internet, but that's a pretty strong selection bias.
There is, of course, the Paul Krugman quote from 1998 that by 2005 the Internet would be no more important than a fax machine. [1]
Here's Wired in 2007 saying, in reference to Facebook, "no company in its right mind would give it a $15 billion valuation". [2]
I remember, being at Google in ~2011, we used to laugh at the Wall Street analysts because they would focus on CPC numbers to forecast a valuation, which is important only if the number of clicks is remaining constant. We knew, of course, that total Internet usage was still growing quite rapidly and that queries had increased by roughly 4x over the 2009-2013 timeframe.
And a lot of people will say "If you're so smart, why aren't you rich?", and I'll point out that many people who assumed the Internet had lots of room to grow in 2005-2007 did end up very rich. Google stock has increased roughly 20x since 2007 (and 40x from its 2009 lows). Meta is now worth $1.6T, a 100x increase over the $15B valuation that everyone thought was insane in 2007. Amazon is also up about 100x. It would not be possible to take the other side of the trade and make these kinds of profits if the majority of people did not think the Internet was largely over.
> If you were in tech in 2005-2007 you were part of a small minority of the general population. It often didn't feel like a small minority because, well, you knew all those other people on the Internet, but that's a pretty strong selection bias.
Didn't we only pass 50% of households having a home PC in like... '00 or '01 or something? And I mean just in the US, which was way ahead of the curve.
> Here's Wired in 2007 saying, in reference to Facebook, "no company in its right mind would give it a $15 billion valuation". [2]
I actually think that's correct... if the smartphone hadn't taken off right after that. The "consumer" Internet and computing, the attention economy, etc., functionally is the smartphone. A desktop computer and even a laptop aren't in use when driving, at the store, at the park, every moment on vacation, etc. It'd still only be nerds lugging computers everywhere if nobody'd managed to make a smartphone capable enough and pleasant enough to use to expand the market beyond the set of folks who might have had a beeper in earlier years (the part of the market Blackberry was addressing). A gigantic proportion of the "GDP of the Internet", if you will, exists because smartphones exist.
I'm reminded of the quote that ATMs didn't put bank tellers out of work; smartphones did. While not owning a laptop may seem inconceivable to us here, smartphones are the only connection to the Internet for many.
The interesting question is: without Apple and the iPhone, would RIM/Blackberry have "figured it out"? Would we be on 2-way "pagers" with keyboards and stupidly expensive data plans that you have to order separately? Because while the original iPhone was a marvel in terms of hardware, I think the biggest contribution was the integration with AT&T for the cellphone plan, which only Steve Jobs had the clout to pull off.
> I don't know, maybe some ragebait articles were written about it, but being knee-deep in web tech at that time, I remember the general feeling is that it was pretty obvious there was tons to do
Almost definitely professional ragebaiters in Wired or Time or whatever, yeah.
I was also in tech at that time - in fact I worked for Google during that period - and people definitely thought the Internet had reached its peak. So many of the criticisms back then were not just about peak Internet but about all these companies blowing money on unproven business models: they were unsustainable, unprofitable, it was all just hype.
You also had numerous telecommunications companies going bust in one of the largest sector collapses in modern financial history: the largest bankruptcy in history (at that time) was WorldCom, followed by the second largest with Global Crossing... Lucent Technologies cratered, and Nortel, one of the largest telecom companies at the time, lost 90% of its value, eventually going bankrupt in 2009.
And then of course the Great Recession hit and tech companies took a massive blow: Microsoft, Google, Intel, Apple, and other tech giants lost 50% of their stock value in a matter of months. You don't lose 50% of your value because people think you have a promising future.
It wouldn't be until the explosive rise of smartphones and near-zero interest rates that sentiment turned around and tech companies ballooned in value in what would end up being the longest bull run in U.S. history.
I agree with the gist of your points, but not much with these two:
>followed by white-collar business formation as customers realize that product quality went to shit when all the people were laid off.
These will be rare boutique affairs. Based on how mass production and cheap shipping played out, most people value price over quality. The economy will rearrange itself around those savings, making boutique products and services expensive.
>mass cheap fake media will likely lead to its fragmentation as any old Joe with a ChatGPT account can put out mass quantities of bullshit.
We have this today. And that's not a "same as it ever was" dismissal. Today, there are a lot of terminally online people posting the equivalent of propaganda (and actual propaganda). Social media pushes hot takes in audiences' faces, a portion of them reshare it, and it spreads exponentially. The only limitation to propaganda today is how much time the audience spends staring at the "correct" content provider.
Managing a large-to-enterprise-sized code base, I experience the opposite: I can guarantee a much more homogeneous quality across the code base.
What I am seeing is the opposite of slop, and at a lower cost.
Today, I literally made a large and complex migration of all of our endpoints. Took AI 30 minutes, including all frontends using these endpoints. Works flawlessly, debt principal down.
All big tech companies are mandating employees to use AI for tasks. Unless there's a similar movement to open source that is AI-free, you're going to need to be tech-free if you want to avoid companies that use AI.
Look friend, I really hope you can realize how you sound in your post. You're extraordinarily confidently saying that you refactored some ambiguous endpoints in 30 minutes. Whenever I see someone act that confidently about refactoring, thousands of alarms go off in my head. I hope you see how it sounds to others. Like, at least spend longer than a lunch break on it with just a tad more diligence. Or hell, maybe even consider LYING about how much time you spent on it. But my point is that your shortcuts will burn you. If you want to go down that path, I'm happy to be a witness to the eventual schadenfreude.
My issue isn't with the fact that you used AI. My issue is with how confident you are that it worked well and exactly to spec. I'm very well aware of what these systems can do. Hell, I've been able to get postgres to boot inside linux inside postgres inside linux inside postgres recently with these tools. But I'm also acutely aware of the aggressive modes that these systems can break in.
So again, which company should we all avoid so that we can avoid your, specifically your, refactoring?
One point: yes, you're speaking from the power position. God-mode over a fleet of minions has always been an engineer's wet dream. That's not even bad per se. It's the collateral damage downstream that's at issue. Maybe you don't see any damage, but that's largely the point. Is it really up to you to say?
Let's not debate that it's possible to make very large very safe changes. It is possible that you did that.
This is about "slop bias". I'd wager that empowering everyone, especially power-positions to ship 50x more code will produce more code that is slop than not. You strongly oppose this because it's possible for you to update an API?
I'm stuck on the power-position thing because I'm living it. I'm pro-AI but there are AI-transformation waves coming in and mandating top-down. From their green-field position it's undeniable crush-mode killin' it. Maintenance of all kinds is separate and the leaders and implementors don't pay this cost. Maybe AI will address everything at every level. But those imposing this world assume that to be true, while it's the line-engineers and sales and customer service reps that will bear the reality.
> Maybe AI will address everything at every level.
I think this is the idea you need to entertain / ponder more on.
I largely agree with you; what I don't agree with is the weighting of the individual elements.
My point was that I could do a 30-minute cleanup in order to streamline hundreds of endpoints. Without AI I would not have been able to justify this migration for business reasons.
We get to move faster, also because we can shorten deprecation tails and generally keep code bases fit more easily.
In particular, we have dropped the external backoffice tool, so we have a single monorepo.
AI does tasks all the way from the infrastructure (setting policies on resources) down to the frontends.
Equally, if a resource is not referenced in our codebase, we know with 100% certainty it is not in use, and it can be cleaned up.
Unused code audits are done on a weekly schedule, like our security audits, robustness audits, etc.
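A toy version of such an audit, assuming (purely for illustration) that resources are declared one identifier per line in files under infra/ and referenced by name somewhere under src/:

    # Toy unused-resource audit; the file layout and names are
    # invented for this sketch, not our actual setup.
    from pathlib import Path

    def declared_resources(infra_dir: str = "infra") -> set[str]:
        names: set[str] = set()
        for f in Path(infra_dir).glob("*.resources"):
            names.update(ln.strip() for ln in f.read_text().splitlines() if ln.strip())
        return names

    def is_referenced(name: str, src_dir: str = "src") -> bool:
        # A resource counts as "in use" if its identifier appears in source.
        return any(name in p.read_text() for p in Path(src_dir).rglob("*.py"))

    for name in sorted(declared_resources()):
        if not is_referenced(name):
            print(f"unused: {name}")  # candidate for the weekly cleanup

The 100% guarantee of course only holds if resource names can't be constructed dynamically at runtime, which is the assumption baked into this sketch.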
Yeah, the more I debate the AI-lovers, the more I can empathize with the possibility that it may very well turn out that everything is an Agent. Encodable.
I'm not a doomer either, but I do think this arc is a human arc: there's going to be a lot of collateral damage. To your point, Agents with good stewardship can also implement hygiene and security practices.
It's important we surface potential counter metrics and unintended side effects. And even in doing so the unknown unknowns will get us. With that said, I like this positive stewardship framing, I'll choose to see and contribute to that, thanks!
I definitely don't identify as an AI lover. For me, year 0 of AI was February 6th, 2026, and the release of Opus 4.6.
Until that day we had roughly zero AI code in the code base (additions or subtractions). So in all reasonable terms I am a late adopter.
For code bases, AI does not concern me. We have for quite some time worked with systems that are too complex for a single person to comprehend, so this is a natural extension of abstraction.
On the other hand, I am super concerned about AI and society: the impact on human well-being of "easy" AI relations over difficult human connection, the continued human alienation and relational violation (I think the "woke" discourse will go on steroids).
I think society is going to be much less tolerant. And that frightens me.
I don't doubt it completed the initial coding work in a short time, but the fact that you've equated that with flawless execution is on the concerning-to-scary spectrum. I can only assume you're talking "compiles, runs, ship it".
The danger is not generating obvious slop, it's accepting decent and convincing outputs as complete and absolving ourselves of responsibility.
You are right, and it happens that the output looks decent.
Code idioms, or patterns if you will, are largely our solution.
We have small pattern/[pattern].md files throughout the code base where we explain how certain things should be done.
In this case, the migration was a normalization to the specific pattern specified in the pattern file for the endpoints.
Semantics were not changed and the transform was straightforward. Just not a task I would have been able to justify spending time on from a business perspective.
Now, the more patterns you have, and the more your code base adheres to them, the easier you can verify the code (as you recognize the patterns) and the easier you can call out faulty code.
It is easier to hear an abnormality in music than in atmospheric noise. It is the same with code.
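To give a feel for what a pattern can pin down (this example is invented, not one of our actual pattern files): say a pattern/endpoints.md requires that every endpoint validates with a typed request model, delegates to a service function, and returns a typed response model. A conforming endpoint in a FastAPI-style stack would look like:

    # Hypothetical endpoint conforming to an invented pattern/endpoints.md:
    # validate -> delegate -> typed response, no inline business logic.
    from fastapi import APIRouter
    from pydantic import BaseModel

    router = APIRouter()

    class RenameRequest(BaseModel):
        new_name: str

    class RenameResponse(BaseModel):
        ok: bool

    def rename_user(user_id: int, new_name: str) -> bool:
        ...  # service layer, stubbed for the sketch
        return True

    @router.post("/users/{user_id}/rename", response_model=RenameResponse)
    def rename_endpoint(user_id: int, body: RenameRequest) -> RenameResponse:
        return RenameResponse(ok=rename_user(user_id, body.new_name))

Once every endpoint has that shape, a handler that, say, opens a database connection inline is the abnormality in the music.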
Seeing plenty of this. The quality of agentic code is a function of the quantity and quality of adversarial quality gates. I have seen no proof that an agentic system is incapable of delivering code that is as functional, performant and maintainable as code from a great team of developers, and enough anecdotes in the other direction to suggest that AI "slop" is going to be a problem that teams with great harnesses will be solving fairly soon if they haven't already.
I take your point, but then it makes me think: is there no more value in diversity?
[Philosophy disclaimer] So in a code-base diversity is probably a bad idea, ok that makes sense. But in an agentic world, if everything is run through the Perfect Harness then humans are intentionally just triggers? Not even that, like what are humans even needed for? Everything can be orchestrated. I'm not against this world, this is an ideal outcome for many and it's not my place to say whether it's inevitable.
What I'm conflicted on is: does it even "work" in terms of outcomes? Like, have we lost the plot? Why have any humans at all? 1-person billion-dollar company incoming. Software aside, is the premise even valid? 1 person's inputs multiplied by N thousand agents -> ??? -> profit
Why have humans do work at all? We could have a radically better existence. It would mean that the few at the top of the pyramid lose their privileged position relative to the rest of us, but we could, actually, have that world of abundance for all.
Work in the current sense arguably isn't even desirable.
> Today, I literally made a large and complex migration of all of our endpoints. Took AI 30 minutes, including all frontends using these endpoints. Works flawlessly, debt principal down.
This is either a very remarkable or a very frightening statement. You're claiming flawless execution within the same day as the change.
If you're unable to tell us which product this is, can you at least commit to report back in a month as to how well this actually went?
The GPC spec does not say "no cookies will be set" [1], and does not mention cookies at all. It merely provides a way for the user to indicate their preference that their information not be shared or tracked. The spec even says:
> In the absence of regulatory, legal, or other requirements, websites can interpret an expressed Global Privacy Control preference as they find most appropriate for the given person, particularly as considered in light of the person's privacy expectations, context, and cultural circumstances.
The CCPA [2] also never explicitly mentions cookies or forbids them from being set. The relevant passages about opting out on the sale of personal information are:
> a) A business shall provide two or more designated methods for submitting requests to opt-out, including an interactive form accessible via a clear and conspicuous link titled “Do Not Sell My Personal Information,” on the business’s website or mobile application. Other acceptable methods for submitting these requests include, but are not limited to, a toll-free phone number, a designated email address, a form submitted in person, a form submitted through the mail, and user-enabled global privacy controls, such as a browser plug-in or privacy setting, device setting, or other mechanism, that communicate or signal the consumer’s choice to opt-out of the sale of their personal information
How would you respond to their claim that you are fundamentally misunderstanding GPC, and that the spec and the law do not mean you never set cookies, they mean that you must honor the preferences expressed by the header in backend processes that involve tracking or sale of personal information?
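For reference, the signal itself is just the `Sec-GPC: 1` request header (plus a JavaScript property). A backend honoring it in that sense might look roughly like this sketch - Flask is used for illustration, and record_opt_out / render_page are invented stand-ins for real data-sale machinery:

    # Sketch: honoring GPC server-side rather than suppressing cookies.
    from flask import Flask, request

    app = Flask(__name__)

    def record_opt_out(req) -> None:
        ...  # invented: persist a do-not-sell opt-out for this user

    def render_page(share_with_ad_partners: bool) -> str:
        ...  # invented: cookies may still be set here; GPC governs sale/sharing
        return "ok"

    @app.route("/content")
    def content():
        # Per the GPC spec, the signal is the `Sec-GPC: 1` request header.
        if request.headers.get("Sec-GPC") == "1":
            record_opt_out(request)
            return render_page(share_with_ad_partners=False)
        return render_page(share_with_ad_partners=True)

On that reading, cookies appearing with GPC on proves nothing by itself; what matters is whether the sale/share path is gated.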
To quote our report: At webXray we are experts in tracking technologies, and we work closely with in-house counsel, defense, plaintiff firms, and regulators. However, we are not lawyers ourselves, thus nothing in this report represents a legal conclusion. webXray was not founded to supplant the role of lawyers, courts, or judges. We were founded to provide clear, accurate, forensic data, without fear or favor. We believe that by filling this gap we can enhance outcomes for all consumers, businesses, and regulators.
---
We are filling the gap created by the absence of reliable facts. We did a scientifically controlled test with GPC on and off, and presented the results as technical findings along with general background.
We are not lawyers, and we are happy to help others perform their own audits: https://webxray.ai - we have no desire to be lawyers.
We are a hard-tech engineering outfit; we deliver scientific clarity on complex topics.
So you agree that you have no way to confirm whether those websites honor or do not honor the do-not-sell-my-info choice. You are simply checking whether they set cookies or not, without knowing whether the data is sold or not on the backend.
Your marketing should specifically say "We track cookies" (or if you wanna get punchy about it, "We track cookies so cookies don't track you") so potential customers know exactly what they're getting. For the purposes of legal compliance, this is pretty irrelevant. There may be people who want to know that the existing laws, and companies' compliance with them, don't actually stop the cookies from being sent, but your privacy report says "Our findings reveal major technology companies simply ignore globally defined opt-out signals, raising the spectre of industrial-scale non-compliance with California requirements", which is untrue and potentially opens you up to libel claims. They are not ignoring the laws; they are complying with the laws in a way that may or may not be what the consumer actually cares about.
Do you have any legal experience, evidence, or case history to support your perspective? You assert that the statement "Our findings reveal major technology companies simply ignore globally defined opt-out signals, raising the spectre of industrial-scale non-compliance with California requirements" is untrue -- how do you know? Do you think everything found in the discovery process would agree? Do you think a company with a history of privacy violations would actually go through with a lawsuit where they'd have to definitively prove they don't? What about proving malice, that webXray knew their statements were false or acted with reckless disregard for their truth? What about the risk of filing a suit where California's anti-SLAPP statute would probably apply?
Usually socialist revolutions fail because nobody can agree on who the new leaders should be. Workers seize control of the means of production...and then what? Who determines what they should do with it? Who do they look to for guidance? If you elect/appoint/select someone, now they are the new capitalist. If you don't, the machinery sits idle while various factions fight amongst themselves.
We saw this with Occupy Wall Street and the CHAZ in the U.S. - these protests didn't fail because they were crushed, they failed because local police basically let them win, and then once they won, different factions had different ideas of what to do next. We also see it at the state level with the Soviet Union (where a strong dictatorship did eventually emerge - the communist revolution didn't mean everybody was equal, it just meant some people were more equal than others) and in Vietnam (which became intensely capitalist less than 15 years after the communists won).
The function of the business owner, CEO, or other executive figure is simply to be a symbol of which direction the organization needs to go. They don't do any work themselves, and they are selected for their ability to look pretty and shout platitudes that other people follow. But that symbol is needed to actually get the people moving in one direction.
>Workers seize control of the means of production...and then what? Who determines what they should do with it? Who do they look to for guidance? If you elect/appoint/select someone, now they are the new capitalist. If you don't, the machinery sits idle while various factions fight amongst themselves.
And the "then what" is: you do what you would have done at work yesterday, today. Same job description as you had previously. Your manager? Same as yesterday too. Everything exactly the same, just some guy you never see is not getting their passive income. No machinery would sit idle, for the same reason no machinery sat idle yesterday: people showed up to run it.
This is sort of how it worked in Cuba. Factories were nationalized and people went from working for the man to working for the public. And then the man had no government that would listen to him either. The owners had to go to the US government and argue that this was some great taking that, if left unanswered, would surely happen all over the US and the rest of the world; a hasty invasion, designed by the US so these business owners could feign any political responsibility, was executed and repelled at the beachhead by the Cubans. Today the nation of Cuba remains sanctioned because those owners from decades ago and their descendants, who still represent a significant political influence in South Florida congressional districts, still feel like they were robbed by the people they were exploiting.
Iran's state media reported that the F-15 rescue mission was a cover to steal enriched uranium, something which fits the facts a lot more than them constructing an airstrip in enemy territory and blowing up at least two MC-130s just to rescue a pilot:
Also suspicious that Iran came to the negotiating table just a couple of days after the F-15 mission, having insisted for the previous 5 weeks that there would be no negotiating and that they were not even in contact with Washington.
I have my doubts. There was a previous BBC piece[0] which went into some of the challenges with such a mission. The first being: it is not publicly known where Iran is storing its uranium. There are many putative options, most of which are going to be hardened and underground. Isfahan is near the middle of the country - safely getting troops there would already be challenging, let alone digging up any from the collapsed tunnels.
Minor blurb from the article:
> Satellite imagery shows that the entrances to Isfahan and Natanz were badly damaged by US airstrikes. US forces would likely need heavy machinery to dig through rubble in order to locate the enriched uranium, which is believed to be stored in tunnels buried deep underground - all while facing potential counterattacks from Iran.
> "You've first got to excavate the site and detect [the enriched uranium] while likely being under near constant threat," Campbell said.
While it is hard to believe that someone in the US military thought such a uranium-extraction mission could succeed, it is at least equally hard to believe that the US military spent around half a billion dollars just to save 2 men, while also risking the lives of a very large number of other US combatants.
Saving your men is important, but it should have been possible to do that at a much lower cost and at much lower risk of additional personnel losses, if that had been the true mission goal.
I can certainly believe that the US values the assurance it gives its pilots that they will never be left behind, and the public demonstration of that assurance, in the billions of dollars.
It is also clear that if the mission was not purely a rescue mission, it would have taken a lot more equipment than what appears to have been used. Even for an escalade-style, high-risk, low-probability mission it would be inadequate.
I think the most likely version of the claim is that the Pentagon used the planning and execution of this mission as a valuable learning opportunity for a dedicated mission to extract uranium in a contested theatre. But even that is pushing it.
We know only approximately how much equipment was lost, and it is likely that much more equipment was used than was lost - i.e. many more transport airplanes than the 2 destroyed, and many more helicopters.
Nevertheless, I agree that a possible explanation is what you propose, i.e. that the mission could have been more a test of the Iranian defense than an incursion that was actually expected to succeed.
In any case, if it was a test it was also a failure, as the defense was stronger than they expected, leading to excessive equipment losses.