Hacker News | new | past | comments | ask | show | jobs | submit | history | ModernMech's comments

It reminds me of an Age of Empires campaign I played at a LAN party a long while back, where the game went on for 20 hours and ended in a stalemate between an atomic age player and a very primitive age player. The atomic player had total control of the map; they were carpet bombing the entire thing with nuclear weapons. But they could only create them so fast, while the primitive player was running around on horses, surviving just enough to prevent the other player from winning. The only reason the game ended was because I tripped over the power cord to one of the computers.

To me, that's what modern warfare looks like.


Ah, you mean Empire Earth. I loved that game, it had a great soundtrack.

Sounds like it indeed. The balance was... interesting; a single tank could not win against a dozen cavemen.

Weapons are designed with an opponent in mind, and guarded against the expected threat models from that opponent. Everything breaks down when the opponent does not do what you want them to.

I don't see how a single tank could win against 12 cavemen, but I digress. It's a video game.

Empire Earth slapped so hard. Both 1 and 2. Honestly, now that I'm thinking about it, I'm going to set aside some time this weekend and play it again!

Right, right, Empire Earth! My memory is a little fuzzy; it must have been 20 years ago.

I don't remember Age of Empires having an atomic age?

It was probably Rise of Nations or one of the other similar games.

If I had to guess I think they meant empire earth instead.

I noticed this about my Dad, he's 66. Growing up, he was always in front of the TV watching whatever show was on. Today he's never there; now he's always in front of his iPad watching YouTube videos of tractors excavating things.

There's something quite therapeutic about that stuff. I spend a while watching Cutting Edge Engineering fixing bits off bulldozers and the like.

I really like the ones where they take old rusted things and restore them.

Depends what you mean by better. It crashed more and there was a lot of data loss, but it wasn't explicitly evil so maybe on measure it was better.

lol yeah the people running the programming subreddit don't understand social context, who could have guessed?

Do you think maybe it's the disclosure about self promotion you added? You explicitly say the purpose of the blog post is to promote your consultancy, so that might be why they marked it blogspam. I know it feels like you're being forthright, but really that you're promoting yourself is implicit in the fact it's a personal blog, so you can leave that out and still be honest.

Yeah, maybe it's that, though I still wouldn't expect someone to categorize the post as blogspam, even if they just glance at it. (At least according to my definition of blogspam, but I guess each has their own.) But yes, pragmatically I should probably remove the disclaimer.

Yes, a retreading of the accidental vs. essential complexity discussion is in order here. I asked an AI agent to implement function calls in a programming language the other day. It decided the best way to do this was to spin up a new interpreter for every function call and evaluate the function within that context. This actually worked, but it was very, very slow.

The only way I was able to direct the AI to a better design was by saying the words I know in my head that describe better designs. Anyone without that knowledge wouldn't be able to tell the heavy interpreter architecture wasn't good, because it was fast enough for simple test cases which all passed.
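To make the contrast concrete, here's a minimal sketch of the two designs for a toy tree-walking interpreter. All names here (`Interpreter`, `Env`, the node tuples) are invented for illustration, not from any real project: the heavy design rebuilds interpreter state on every call, while the conventional design just pushes a new environment frame onto the existing scope chain.

```python
class Env:
    """A lexical scope: a dict of bindings plus a pointer to the parent scope."""
    def __init__(self, parent=None):
        self.vars = {}
        self.parent = parent

    def lookup(self, name):
        env = self
        while env is not None:
            if name in env.vars:
                return env.vars[name]
            env = env.parent
        raise NameError(name)


class Interpreter:
    def __init__(self):
        self.globals = Env()

    # Heavy design: spin up a whole new Interpreter per function call.
    def call_heavy(self, func, args):
        sub = Interpreter()                    # rebuilds all interpreter state
        sub.globals.vars.update(self.globals.vars)  # copies every global binding
        for name, val in zip(func["params"], args):
            sub.globals.vars[name] = val
        return sub.eval(func["body"], sub.globals)

    # Conventional design: push an O(1) frame that shares this interpreter.
    def call_light(self, func, args):
        frame = Env(parent=self.globals)
        for name, val in zip(func["params"], args):
            frame.vars[name] = val
        return self.eval(func["body"], frame)

    def eval(self, node, env):
        kind = node[0]
        if kind == "num":
            return node[1]
        if kind == "var":
            return env.lookup(node[1])
        if kind == "add":
            return self.eval(node[1], env) + self.eval(node[2], env)
        raise ValueError(f"unknown node: {kind}")
```

Both paths compute the same answer on simple test cases, which is exactly why the slow design looks fine until the call count grows: the heavy version pays for copying the global state on every call, while the light version only allocates one small frame.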

And you can say "just prompt better," but we're very quickly coming to a place where people won't even have the words to say without AI first telling them what they are. At that point it might as well just say "The design is fine, don't worry about it," and how would the user know any better?


The evidence that software development cannot be automated is that we already tried to do it in the 90s with OOP, UML, and outsourcing. It didn't work out for the same reasons vibe coding isn't working out — because building the system is the same as specifying it, and that is a creative, iterative process.

We are at the point where, sure, AI can write code, but we could always do that; a lack of code-writing ability was not what killed the OOP automation efforts. There was plenty of ability to code back then as well. The distinction of whether it's an offshore team in India or Claude writing the code doesn't change things as far as the larger picture of building the software.


This may be evidence that it's more difficult than evangelists first imagine, but it's not evidence of a technical obstacle. Generally, "automation failed" does not imply that "automation is impossible".

To your individual points:

- OOP and UML are domain-specific abstractions. Aside from still being very much used in expanding niches [1], they have failed to automate much work because their proponents failed to cover enough cases to have a useful general-purpose abstraction.

- Outsourcing is a labor strategy. There's nothing technical that prevents another similarly capable person from doing your job, at least in the next town, if not another country. The obstacles were/are social and political, and the WFH movement shows that. Also, outsourcing is not going anywhere, it's just reduced and converted to nearsourcing due to backlash.

- By contrast, software is a general-purpose abstraction [2]. Databases are a type of software. You can see LLMs [3] as schema-less databases that contain millions of abstractions connected to each other. You can get a UML model or Python code or text by querying the LLM's query engine in a language much more flexible than SQL.

Vibe coding makes it seem like the funny intermediate bullshit is the end result of using LLMs, but it's not. Sure, I agree that LLMs don't make sense to use when a calculator is enough, but I don't see any functional limitations to improving LLMs. Maybe new algorithms or combinations are needed, but no matter how slowly, quality is expected to reach at least human level for the majority of current tasks (on which many jobs depend).

Which leads to my point: we need political, social, philosophical reasons to limit or integrate automation in our civilization, not just watch and hope there's a big enough technical obstacle so we can keep our current jobs.

[1] For example, model-based software engineering is still growing; slowly, but growing.

[2] So is the organization of mechanical machines or analog computers, but it's faster to reorganize and orchestrate electrical signals.

[3] More precisely, foundation models, because it's far more than natural language processing.


I'm not saying there is a technical obstacle, I'm saying there's a practical obstacle that LLMs and vibecoding don't overcome.

The evidence isn't in that it failed but why it failed. It's not that the problem was more difficult than they anticipated, it's that they didn't comprehend the nature of the problem they were solving from the outset. Software development is an incremental creative activity that involves good taste, constantly shifting and vague requirements, and keen understanding of human factors and ergonomics. LLMs fail at all three. And in fact the shifting and ambiguous requirement element is fatal to the notion of automating the process.

Outsourcing and OOP and UML and LLMs and vibecoding --- they're all the same futile attempt at the same thing: abstracting the human out of a loop built for the benefit of humans. It's nonsensical, and the only people who want to do it are capitalists, so it's doomed to fail yet again.


Fair enough, I agree. The process of one or more people figuring out what is actually needed is a big part of the outcome, which I'd consider an important social obstacle or limit to automation.

But here's what's important to my point:

  > abstracting the [worker] human out of a loop built for the benefit of [customer] humans
This is now technically easier and more feasible for current workers [1], which makes it economically more desirable to employers, and customers won't really know or care what happens to the workers. There's no indicator that companies can't go much leaner, even if it means that you can't automate every worker.

So, rather than wait for a technical wall to save us, or legally protect functionally replaceable jobs, or wait until people's lives implode, we should pressure our respective governments to decouple [2] the person's ability to survive from the ability to hold uninterrupted full-time employment. That's the only collective way forward that I see.

[1] We can even constrain it to existing roles: if a team of one requirements engineer, one full-stack dev/architect and LLMs can do the same job as a bigger team of specialized roles and coders, why would anyone pick the latter? I'd be happy to hear a technical or economic reason.

[2] My order of preference, preferably multiple: UBI, UBS, increased part-time work options, conditional non-basic income, union contracts, automation pauses, retraining, severance, temporarily subsidized bullshit jobs.


It’s funny that everyone is coming around to understanding the rich elite are mostly socio- and psychopaths. People who were clued into them early were told they were rude for calling them out, but now they are just admitting it straight up.

I’m sorry to say it, but Musk, Thiel, Zuckerberg, Sama, and Bezos are clearly on that spectrum. And no, it’s not autism, it’s sociopathy — they view us as NPCs and call empathy a weakness and a scam. And if you think this is rude to say, I don’t, because the palpable lack of empathy at the highest echelons of power (from POTUS down) is becoming a real liability for humanity as a whole, given the amount of power they have amassed.


The double standard I see across all political and cultural lines is people that demand empathy for them while they demonstrate none for an arbitrary outgroup or one specific to their personal lived experience. This is basically using emotion to drive thoughts rather than emotions to inform thoughts. I have doubts this will go away anytime soon given it takes an incredible amount of effort for people to critically examine themselves. I liken it to debugging and reverse engineering your brain and nervous system.

Don't forget Lawnmower Ellison

Actually I would prefer to, lol. But yes, him especially. I think there is a sort of necessary degree of narcissism and megalomania that must be involved to run corporations of this scale. Which is fine and all, but when your product is a beta-quality robotic death machine and you want to run your tests on public roads, or it’s “let’s get teens addicted to our product on purpose, and who cares if they off themselves,” that’s when it crosses from “quirky control freak” (Jobs) to “dangerous megalomaniac” (the rogues’ gallery from the last post).

Agree

The dark triad personalities are overrepresented in the C-suite in general and in other positions of power and status (i.e. politics) because they value nothing else, and they are one of the greatest sources of human misery and atrocity there has ever been. Honestly, it's one of humanity's biggest unsolved challenges: how to structure society and institutions in ways that elevate benevolent competence and constrain or keep out the psychopaths.

When they get the power of the state, or state-like powers through technology, very bad things happen.


Alex Karp is missing from your list.

I can tell you that even on HN, most people have still not come around to it. If in a thread about Ellison or Thiel or their respective companies you point out that Bezos, Zuck, and Sama - and as a result the companies they lead and the use of their capital - are in the exact same spot on the sociopathy spectrum, it immediately invites hordes of downvotes from the (ex-)FAANG HNers who can't come to terms with the fact that their whole net worth is based on growing the power of leaders just as despicable as the ones they're busy chastising.

I have a cousin who owns a business that decided to contract with Oracle, and as he described this my first thought was “why tf would you ever sign an agreement with Oracle, don’t you know about them?”

But as he described the whole dealings (for some SEO product, I’m not sure of the specifics), it became clear to me they bamboozled him, pulled a bait and switch, and left him on the hook for a huge bill he never thought he’d have to pay.

So to answer your question I don’t think there is a value prop, I think it’s actually a giant grift.


> You treat your code as a means to an end to make a product for a user.

It isn’t that, though; the “end” here is making money, not building products for users. Typically, people who are making products for users care about the craft.

If the means-to-end people could type words into a box and get money out the other side, they would prefer to deal with that than products or users.

That’s why AI slop is so prevalent — the people putting it out there don’t care about the quality of their output or how it’s used by people, as long as it juices their favorite metrics: views, likes, subscribes, ad revenue, whatever. Products and users are not in scope.


Yeah, I'm not trying to defend slop.

I don't think all means-to-end people are just in it for the money; I'll use myself as an example. My team is working on a CAD tool for drug discovery, and the goal isn't to just siphon money from people; the goal is legitimately to improve computational modeling of drug interactions with targets.

With that in mind, I care about the quality of the code insofar as it lets me achieve that goal. If I vibe coded a bunch of incoherent garbage into the platform, it would help me ship faster but it would undermine my goal of building this tool since it wouldn't produce reliable or useful models.

I do think there's a huge problem with a subset of means-to-end people just cranking out slop, but it's not fair to categorize everyone in that camp this way, y'know?

