> I am watching it enable so many of my friends and family who have no idea how to code.
Be careful what you wish for; this is going to be a double-edged sword like YouTube is. YouTube allowed regular people without money or industry connections to make all sorts of quality, niche content. But for every bit of great content, there’s 1000 times as much garbage and outright misleading shit.
Giving people without any clue how computing works the ability to create software that interfaces with the outside world is likewise going to create some great stuff and 1000 times as much buggy and dangerous stuff. And allow untold numbers of scammers with no technical skill the ability to scam the wider world.
I'm aware, and I'll very much take those odds. This is just another problem for humanity to solve in its quest to empower itself.
I'm not sure how we're going to solve the obviously relevant problem of slop, but I would rather die trying than restrict access to knowledge and capability because of evil. I believe in the GOOD of humanity. We WILL find a way.
Well, we've sacrificed the precision of actual programming languages for the ease of English prose interpreted by a non-deterministic black box that we can't reliably measure the outputs of. It's only natural that people are trying to determine the magical incantations required to get correct, consistent results.
> How long do you think it'll take for the AI trend to mostly automate the parts of your job that still make you excited?
Yeah, no one ever thinks beyond "whoa, how cool, I cloned Slack in 15 minutes!"
Personally, the thing I find more depressing is turning a career that was primarily about solving interesting puzzles in elegant ways into managing a swarm of idiot savant chatbots with "OK, that looks good" or "no, do it better" commands.
The problem I'm trying to solve with agents is similar here: for instance, my comment likely made zero impression on you because I'm against both of the things that you are also against here.
> Hence why I've been quite bullish on software engineering (but not coding). You can easily set up 1) and 2) on contrived or sandboxed coding tasks but then 3) expands and dominates the rest of the role.
Why can't LLMs and agents progress further to do this software engineering job better than an actual software engineer? I've never seen anyone give a satisfactory answer to this. Especially the part about making mistakes. A lot of the defense of LLM shortcomings (i.e., generating crappy code) comes down to "well humans write bad code too." OK? Well, humans make mistakes too. Theoretically, an LLM software engineer will make far fewer than a human. So why should I prefer keeping you in the loop?
It's why I just can't understand the mindset of software engineers who are giddy about the direction things are going. There really is nothing special about your expertise that an LLM can't achieve, theoretically.
We're always so enamored by new and exciting technology that we fail to realize the people in charge are more than happy to completely bury us with it.
Who is better positioned to pilot the LLM than a domain expert?
"Software engineer" as a job title has included a lot of people who write near-zero-code, at least at the higher levels of the career ladder, for years prior to LLMs. People assuming the only, or even primary, function of the job is outputting code reveal a profound lack of understanding of the industry in my opinion. Beyond the first year or two it has been commonly accepted that the code is the easy part of the job.
> has included a lot of people who write near-zero-code, at least at the higher levels of the career ladder
This is something that I would have thought HN readers were pretty familiar with. LLMs can make my coding faster or more prolific, but with 30yoe I spend a fairly significant chunk of my work time doing anything but coding.
I'm occasionally reminded that HN's commenting base is much larger than my niche in the industry (VC-backed startups + large public tech companies is my background). I had a similar reaction to people thinking Peter Bailis going from CTO at Workday to "member of technical staff" at Anthropic was him trading a leadership position for closing Jira tickets.
> Why can't LLMs and agents progress further to do this software engineering job better than an actual software engineer?
Because a machine can never take accountability. If a software engineer throughout the entire year has been directing AI with prompts that created weaker systems, then that person is on the chopping block, not the AI. Compare that to another software engineer who directed prompts to expand the system and generate extra revenue streams.
> Because a machine can never take accountability.
A business leader can though.
> Compared to another software engineer who directed prompts to expand the system and generate extra revenue streams.
I think you're missing the point. Why can't an LLM advance sufficiently to be a REAL senior software engineer that a business person/product manager is prompting instead of YOU, a software engineer? Why are YOU specifically needed if an LLM can do a better job of it than you? I can't believe people are so naive to not see what the endgame is: getting rid of those primadonna software engineers that the C-suite and managers have nothing but contempt for.
If a 'business leader' is prompting out software through their agents, ensuring it works, maintaining it, and taking accountability... they're also a software engineer
By this definition, pre-LLM "business leaders" circa 2008 with not even an understanding of Excel were already "software engineers" this whole time - just prompting out software through their meatspace agents, instead of their silicon ones.
Dismissal of arguments as "just semantics" is high school level argumentation.
clearly not the same when they were abstracted from the realities of building software and.. directly taking accountability for it!
by semantics, i mean the definition and pool of tasks, responsibilities, and outcomes a job is comprised of is shifting so fast that the borders of what is a 'software engineer' and 'business person' are melding together. software engineers are business people in their own way
I don't understand why humans abstract a business leader away from the realities of building software, while LLMs do not.
If the rhetoric is to be believed, the set of responsibilities falling to the role of "software engineer" is shrinking to zero, and all engineers are being forcibly "promoted" to the managerial class of shepherding around agents.
i would say there's more nuance than that (disclaimer: i don't have a crystal ball)
software engineers who are comfortable doing business work - managing, working with different stakeholders, having product and design taste, being sociable, driving business outcomes - are going to be more desired than ever
likewise, business leads who can be technical, can decompose vague ideas into product, leverage code to prototype and work with the previous person will also be extremely high value.
i would be concerned if i was an engineer with no business acumen or a business lead with no technical acumen (not counting CEOs obviously, but then again the barrier to starting your own business as a SWE has never been lower)
It's funny, that's why COBOL was originally developed in 1960: so that business people could write software themselves without needing software engineers. And it sort of worked, to an extent. History repeats itself.
Between then and now, what ever happened to "no code development" or whatever they called it, where all of the world's APIs could be connected with lines in a diagram?
That's how things work already in every workplace where there's any real danger. The company construes its policies and paper trail in bad faith so that the employees are always operating contrary to policy/training, and then when something happens, blame can be shifted onto them.
It's funny how we see some people who claim to have "taste" walking around in public wearing horrible Balenciaga shoes. Are they really just tasteless, or are they doing it ironically to troll the rest of us? I guess we'll never know. Maybe someday AI robots will achieve the same level.
> It's why I just can't understand the mindset of software engineers who are giddy about this brave new world. There really is nothing special about your expertise that an LLM can't achieve, theoretically.
They’re stupid, or they’re already set up for success. The general idea seems to be that generalists are screwed and domain experts will be fine.
Many experienced software engineers will move into infrastructure or architect roles, if they haven't already. Experienced engineers are in the best position to use LLMs because they can validate the output as actually being correct, not just looking like it works. Newer folks are going to be in a bad spot.
The optimistic spin is, I think, that software developer as a career dies, just like sysadmin. But just like devops, a new to-be-named role (or set of roles) will arise.
Web front-end and backend developer as a career dies, probably desktop/mobile application development too. However, some of the more specialized software developer roles are likely to survive; none of the people on the Linux kernel team have anything to worry about and the same goes for the GCC folks.
I think these arguments tend to reach impasse because one gravitates to one of two views:
1) My experiences with LLMs are so impressive that I consider their output to generally be better than what the typical developer would produce. People who can't see this have not gotten enough experience with the models I find so impressive, or are in denial about the devaluation of their skills.
2) My experiences with LLMs have been mundane. People who see them as transformative lack the expertise required to distinguish between mediocre and excellent code, leading them to deny there is a difference.
I was at 2) until the end of last year, then LLMs/agents/harnesses had a capability jump that didn't quite bring me to 1), but it was a big enough jump in that direction that I don't see why I shouldn't believe we get there soonish.
So now I tend to think a lot of people are in heavy denial in thinking that LLMs are going to stop getting better before they personally end up under the steamroller, but I'm not sure what this faith is based on.
I also think people tend to treat the "will LLMs replace <job>" question in too much of a binary manner. LLMs don't have to replace every last person that does a specific job to be wildly disruptive, if they replace 90% of the people that do a particular job by making the last 10% much more productive that's still a cataclysmic amount of job displacement in economic terms.
Even if they replace just 10-30% that's still a huge amount of displacement, for reference the unemployment rate during the Great Depression was 25%.
Not sure that's what I was getting at. People in camp 2 don't think an LLM can take over the job of a real software engineer.
It's people in camp 1 that I wonder about. They're convinced that LLMs can accomplish anything and understand a codebase better than anyone (and that may be the case!). However, they're simultaneously convinced that they'll still be needed to do the prompting because ???reasons???.
One explanation is that some think we might be getting to the limits of what an LLM can reasonably do. There are a lot of functions in any job that are not easily translated to an LLM and are much more about interacting with people, or about critical thinking in a way LLMs can't do. I'm not sure if that's everyone's rationale, but that's my personal view of the situation. The jobs will change, but we likely won't be losing them to AI outright.
An enormous amount of domain expertise is not legible to LLMs. Their dependence on obtaining knowledge through someone else's writing is a real limitation. A lot of human domain expertise is not acquired that way.
They still have a long way to go before they can master a domain from first principles, which constrains the mastery possible.
People need to be careful about buying into the shorthand lingo with LLMs. They do not learn like we do. At the lowest level, they predict which tokens follow a body of tokens. This lets them emulate knowledge in a very useful way. This is similar to a time series model of user activity: the time series model does not keep tabs on users to see when they are active, it has not read studies about user behavior, it just reflects a mathematical relationship between points of data.
For an LLM and this "vague" domain expertise, even if none of the LLM's training material includes certain nuggets of wisdom, if the material includes enough cases of problems and the solutions offered by domain experts, we should expect the model to find a decent relationship between them. That the LLM has never ingested an explicit documentation of the reasoning is irrelevant, because it does not perform reasoning.
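The "predict which tokens follow a body of tokens" point above can be made concrete with a toy sketch. This is a hypothetical bigram frequency model for illustration only - real LLMs use learned neural networks over vastly larger contexts - but it shows how "knowledge" can be emulated purely as a mathematical relationship between observed data points, with no reasoning involved:

```python
from collections import Counter, defaultdict

def train_bigram(tokens):
    # For each token, count which token follows it and how often.
    # This is the entire "training": no understanding, just frequencies.
    following = defaultdict(Counter)
    for cur, nxt in zip(tokens, tokens[1:]):
        following[cur][nxt] += 1
    return following

def predict_next(model, token):
    # "Generate" by returning the most frequent follower. The model
    # reflects a statistical relationship in the corpus, nothing more.
    if token not in model:
        return None
    return model[token].most_common(1)[0][0]

corpus = "the expert fixed the bug and the expert shipped".split()
model = train_bigram(corpus)
print(predict_next(model, "the"))  # "expert": it followed "the" most often
```

The sketch emulates having "read" the corpus without encoding any of the reasoning behind it, which is the analogy being drawn to the time-series model.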
The domain expertise I'm referring to isn't vague, it literally doesn't exist as training data. There are no cases of problems and solutions to study that are relevant to the state-of-the-art. In some cases this is by intent and design (e.g. trade secrets, national security, etc) long before LLMs arrived on the scene.
We even have some infamous "dark" domains in computer science where it is nearly impossible for a human to get to the frontier because the research that underpins much of the state-of-the-art hasn't existed as public literature for decades. If you want to learn it, you either have to know a domain expert willing to help you or reinvent it from first principles.
>They still have a long way to go before they can master a domain from first principles, which constrains the mastery possible.
Mastery isn't necessary. Why are Waymos lacking drivers? Not because self-driving cars have mastered driving, but because self-driving works sufficiently well that the economics don't play out for the cab driver.
It's not about whether they make mistakes (they do! although the exact definition of a mistake is nuanced), but whether they can take accountability if the software fails and millions are lost or people die. A large part of the premium paid on software engineers is to take accountability for their work. If a "business person" directs their agent to build some software and takes accountability -- congrats! They are also now a software engineer :)
The lines between a software engineer / business person / product / design and everything else will blur, because AI increases the individual person's leverage. I posit that there will be more 'software engineers' in this new world, but also more product people, more business people, more companies in general.
> We don't know what models they use, how system prompt changes or what are the actual rate limits (Yet Anthropic will become 1 trillion dollars company in a moment).
Not just that, but there’s really no way to come to an objective consensus of how well the model is performing in the first place. See: literally every thread discussing a Claude outage or change of some kind. “Opus is absolutely incredible, it’s one shotting work that would take me months” immediately followed by “no it’s totally nerfed now, it can’t even implement bubble sort for me.”
I feel like if I start something from scratch, it gets what feels like 80% right, but then it takes a lot more time to do the last 20%. And if you decide to change scope afterward, or just be more specific, it's like it gets dumber the longer you work with it. If you can think truly modularly, spend a ton of time breaking your problem into small units, and then work on your units separately, then maybe what it does could be maintainable. But even there I am unsure. I spent an entire day trying to get it to do a node graph right - like the visual of it - and it is still so-so. But a single small script that does a specific small thing, yeah, that it can do. You still better make sure you can test it easily, though.
We find it incredibly hard to articulate what separates a productive and effective engineer from a below-average one. We can't objectively measure an engineer's effectiveness, so why would we think we could measure LLMs cosplaying as engineers?
> See: literally every thread discussing a Claude outage or change of some kind. “Opus is absolutely incredible, it’s one shotting work that would take me months” immediately followed by “no it’s totally nerfed now, it can’t even implement bubble sort for me.”
Funny: I’m literally, at this very moment, working on a way to monitor that across users. Wasn’t the initial goal, but it should do that nicely as well ^^
States that already have a voter ID law haven't had any issues. The bigger objections are to those who say that the ID you can use to drive, board an airplane, buy ammo, etc, aren't good enough for voting.
The states aren't very logically consistent on ID laws. Illinois requires an FOID to bear arms but not an ID to vote. Arizona requires an ID to vote but not one to bear arms. Vermont is probably the most consistent non-ID state, not requiring an ID to vote and also not requiring an ID even to conceal carry a gun.
I can sort of buy the ID argument from places like Vermont but the arguments in many/most states are just complete bullshit where they've worked backwards to rationalize it and that's why there is no consistency for ID gating of rights within even the same state.
> Wondering which of the 8 providers has your show is lame.
Yep, but there is a solution. Buy physical media and rip it. You get basically the best quality available _and_ a backup at the same time. You don't have to resort to piracy to avoid streaming services.
Not all TV shows get a physical media release. Even when they do, it's not uncommon for them to be lower quality than the streaming release. (For example, the streaming release of The Expanse was 4k HDR, but the Blu-Ray release is 1080p SDR.)
I expect this will only get worse in the future - physical media is an increasingly niche product.
Last time I tried to watch a dvd I rented it. I tried to watch it on a projector from my laptop. DRM prevented me. So I tried to watch in a standalone dvd player. It was region locked to a different region.
I’ve pirated ever since.
I subscribe to a fair few services but don’t watch stuff there. My *arrs provide a better experience. It’s more expensive with the various services and hardware, but it works.
My conscience is clear, I tried the ‘correct’ approach.
> Anecdotally, my kids' schools (sample size two, both high school) are quite anti-AI in the classroom.
Well, it's still early days. Wait until we truly are in a "learn AI skills or be left in the dust" world and AI will play a major role in the classroom. Just like those Chromebooks everyone has now. Because kids gotta have computer skills in order to be prepared for the working world!
It's not called a uniparty for nothing. Vote red, vote blue, we're all gonna end up in the same place eventually, the only difference is the timeline (pretty interesting that the first states pushing this stuff are California, Colorado, Illinois, etc. -- not exactly who you imagine being concerned with "think of the children", is it?). All the bickering between the two parties is pro wrestling kayfabe at the end of the day.
Details matter. The California law and the others that seem to be modeled after it involve no actual age verification and no presentation of identifying documents to anyone. They just require that devices include a system that lets parents, when setting up a child's device, specify an age range, and that things that need to check age use the range the parent specified.
This is the general approach that privacy advocates have said should be taken. It is just what I'd expect from a liberal state that has a record of trying to protect privacy but wants to address the issue of how to keep children from sites that are not suitable.
Yeah, part of me thinks the reason we know all their claims are bullshit is that you’d have to be pretty dense to think you could promise to eliminate >50% of jobs in many high-value sectors within 12-18 months and _not_ expect to create more than a few people who’d have nothing to lose…