Hacker News | sn0wflak3s's comments

The line is scope.

I'm not asking an agent to build me a full-stack app. That's where you end up babysitting it like a kindergartener and honestly you'd be faster doing it yourself. The way I use agents is focused, context-driven, one small task at a time.

For example: I need a function that takes a dependency graph, topologically sorts it, and returns the affected nodes when a given node changes. That's well-scoped. The agent writes it, I review it, done.
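The kind of well-scoped task described above could look something like this. A minimal sketch, assuming the graph is a dict mapping each node to the nodes it depends on (the real data shape would depend on the project); it collects the transitive dependents of a changed node and returns them in dependency-first order via Kahn's algorithm:

```python
from collections import deque

def affected_nodes(graph, changed):
    """graph: dict mapping node -> iterable of nodes it depends on.
    Returns the transitive dependents of `changed`, in topological
    (dependency-first) order."""
    # Invert the edges: dependency -> nodes that depend on it
    dependents = {n: [] for n in graph}
    for node, deps in graph.items():
        for d in deps:
            dependents.setdefault(d, []).append(node)

    # BFS to collect everything downstream of the changed node
    affected = set()
    queue = deque(dependents.get(changed, []))
    while queue:
        n = queue.popleft()
        if n not in affected:
            affected.add(n)
            queue.extend(dependents.get(n, []))

    # Kahn's algorithm, restricted to the affected subgraph
    indegree = {n: sum(1 for d in graph.get(n, ()) if d in affected)
                for n in affected}
    ready = deque(n for n, k in indegree.items() if k == 0)
    order = []
    while ready:
        n = ready.popleft()
        order.append(n)
        for m in dependents.get(n, []):
            if m in affected and m in indegree:
                indegree[m] -= 1
                if indegree[m] == 0:
                    ready.append(m)
    return order
```

Easy to verify: feed it a small graph, check the output set and the ordering. That verifiability is exactly what makes it agent-friendly.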

But say I'm debugging a connection pool leak in Postgres where connections aren't being released back under load because a transaction is left open inside a retry loop. I'm not handing that to an agent. I already know our system. I know which service is misbehaving, I know the ORM layer, I know where the connection lifecycle is managed. The context needed to guide the agent properly would take longer to write than just opening the code and tracing it myself.
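For readers who haven't hit this class of bug: the failure mode described above can be reduced to a toy sketch. This is not the real pool or ORM involved, just an illustrative stand-in showing how a retry loop that bails out on error can leave connections checked out:

```python
class Pool:
    """Toy connection pool: tracks how many connections are free."""
    def __init__(self, size):
        self.free = size

    def acquire(self):
        if self.free == 0:
            raise RuntimeError("pool exhausted")
        self.free -= 1
        return object()  # stand-in for a real connection

    def release(self, conn):
        self.free += 1

def flaky_query(conn, attempt):
    """Fails transiently on the first two attempts (hypothetical)."""
    if attempt < 2:
        raise TimeoutError("transient failure")
    return "ok"

def leaky_retry(pool):
    for attempt in range(3):
        conn = pool.acquire()
        try:
            result = flaky_query(conn, attempt)
        except TimeoutError:
            continue  # BUG: conn is never released on this path
        pool.release(conn)
        return result
    raise RuntimeError("all retries failed")

def safe_retry(pool):
    for attempt in range(3):
        conn = pool.acquire()
        try:
            return flaky_query(conn, attempt)
        except TimeoutError:
            continue
        finally:
            pool.release(conn)  # runs on return *and* on continue
    raise RuntimeError("all retries failed")
```

Under load, `leaky_retry` slowly drains the pool on every transient failure. Spotting this in a real codebase requires exactly the system context the comment describes: knowing which service retries, and where the connection lifecycle is managed.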

That's the line. If the context you'd need to provide is larger than the task itself, just do it. If the task is well-defined and the output is easy to verify, let the agent rip.

The muscle memory point is real though. I still hand-write code when I'm learning something new or exploring a space I don't understand yet. AI is terrible for building intuition in unfamiliar territory because you can't evaluate output you don't understand. But for mundane scaffolding, boilerplate, things that repeat? I don't. Life's too short to hand-write your 50th REST handler.


I wrote it myself. But the irony isn't lost on me. "Who did what" is kind of the whole point of the article. Appreciate the feedback.


FWIW I reported your post to the mods because it reads as completely AI-generated to me. My judgement was that it might have been slightly edited but is largely verbatim LLM output.

Some tells that you might wanna look at in your writing, if you truly did write it yourself without any LLM input, are these contrarian/pivoting statements. Your post is full of these and it is imo the most classic LLM writing tell atm. These are mostly variants of the "It's not X, but Y" theme:

- "Not whether they've adopted every tool, but whether they're curious"

- "I still drive the intuition. The agents just execute at a speed I never could alone."

- "The model doesn't save you from bad decisions. It just helps you make them faster."

- "That foundation isn't decoration. It's the reason the AI is useful to me in the first place."

- "That's not prompting. That's engineering"

It is also telling that the reader basically can't take a breather: each sentence tries to emphasize harder than the last. There is no fluff, though, no getting sidetracked. It reads as unnatural; humans do not usually think like this.


The LLMs are training "us" now.

First we develop the machines, then we contort the entire social and psychic order to serve their rhythms and facilitate their operation.


FWIW I thought it read fine and enjoyed the take. As I'm exploring more AI tooling I'm asking myself some of the same questions.


Yours is maybe the first good post on managing a team of AIs that I've read. There is no spoon.

I've been shifting from being the know-it-all coder who fixes all of the problems to a middle manager of AIs over the past few months. I'm realizing that most of what I've been doing for the last 25 years of my career has largely been a waste of time, due to how the web went from being an academic pursuit to a profit-driven one. We stopped caring about how the sausage was made, and just rewarded profit under a results-driven economic model. And those results have been self-evidently disastrous for anyone who cares about process or leverage IMHO. So I ended up being a custodian solving other people's mistakes which I would never make, rather than architecting elegant greenfield solutions.

For example, we went from HTML being a declarative markup language to something imperative. Now, rather than designing websites like we were writing them in Microsoft Word and exporting them to HTML, we write C-like code directly in the build product and pretend that's as easy as WYSIWYG. We have React where we once had content management systems (CMSs). We have service-oriented architectures rather than solving scalability issues at the runtime level. I could go on... forever. And I have, in countless comments on HN.

None of that matters now, because AI handles the implementation details. Now it's about executive function to orchestrate the work. An area I'm finding that I'm exceptionally weak in, due to a lifetime of skirting burnout as I endlessly put out fires without the option to rest.

So I think the challenge now is to unlearn everything we've learned. Somehow, we must remember why we started down this road in the first place. I'm hopeful that AI will facilitate that.

Anyway, I'm sure there was a point I was making somewhere in this, but I forgot what it was. So this is more of a "you're not alone in this" comment I guess.

Edit: I remembered my point. For kids these days immersed in this tech matrix we let consume our psyche, it's hard to realize that other paradigms exist. Much easier to label thinking outside the box as slop. In the age of tweets, I mean x's or whatever the heck they are now, long-form writing looks sus! Man I feel old.


Yeah, I came here to ask if you're Vibe Writing as well ;)

I wasn't quite sure though. Sometimes it's clearly GPT, sometimes clearly Claude, and this article was like a blend.


This is a fair point. The cognitive load is real. Reviewing AI output is a different kind of exhausting than writing code yourself.

Even when the output is "guided," I don't trust it. I still review every single line. Every statement. I need to understand what the hell is going on before it goes anywhere. That's non-negotiable. I think it gets better as you build tighter feedback loops and better testing around it, but I won't pretend it's effortless.


I get this. I don't think either of you is wrong. There's a real loss in not writing something from scratch and feeling it come together under your hands. I'm not dismissing that.

I have immense respect for the senior engineers who came before me. They built the systems and the thinking that everything I do now sits on top of. I learned from people. Not from AI. The engineers who reviewed my terrible pull requests, the ones who sat with me and explained why my approach was wrong. That's irreplaceable. The article is about where I think things are going, not about what everyone should enjoy.


Fair enough. I know how that reads. But when anyone with a laptop and a subscription can ship production software in a weekend, the architecture and the idea start to matter a lot more. The technical details in the post are real. I just can't share the what yet. Take it or leave it.


This has been a fallacy for as long as businesses have been built, and it will still be a fallacy in the AI era.

Ideas are cheap and don't need to be protected. Your taste, execution, marketing, UX, support, and all the 1000 things that aren't the code still matter. The code will appear more quickly now, but you still need to get people to use it or care about it.

I've found almost without fail that you have more to gain in sharing an idea and getting feedback (both positive and negative) before/while you build the thing than you do in protecting the idea with the fear that as soon as someone hears it they'll steal it and do it better than you.

(The exception I think is in highly competitive spaces where ideas have only a short lifetime -- eg High Frequency Trading / Wall Street in general. An idea for a trade can be worth $$ if done before someone else figures it out, and then it makes sense to protect the idea so you can make use of it first. But that's an extremely narrow domain.)


I've heard this a thousand times and I have not once seen a person give an example of this actually happening. I'm more likely to believe the crocodiles coming out of sewer pipes urban legend at this point.


I understand your concern. The copycat problem is real.

But if you come from a technical background and this is your first time building a product, you'll soon learn that it is so damn hard to get users, especially *paying* ones.

I was there. I built something, shared it, prayed people would notice. The truth is most of the time your product fails. Better explore the problem you are trying to solve first, share your idea if necessary, and collect feedback. You'll have a much clearer picture of what you need to do from there.


I don't think it's about ideas or even the code. It's about execution, marketing, talking to your customers and doing sales. This is something AI can't do...yet


This is the question I keep coming back to. I don't have a clean answer yet.

The foundation I built came from years of writing bad code and understanding why it was bad. I look at code I wrote 10 years ago and it's genuinely terrible. But that's the point. It took time, feedback, reading books, reviewing other people's work, failing, and slowly building the instinct for what good looks like. That process can't be skipped.

If AI shortens the path to output, educators have to double down on the fundamentals. Data structures, systems thinking, understanding why things break. Not because everyone needs to hand-write a linked list forever, but because without that foundation you can't tell when the AI is wrong. You can't course-correct what you don't understand.
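The hand-written linked list mentioned above is a good example of an exercise whose value is the understanding, not the artifact. A minimal version of what that exercise looks like (illustrative names, not any particular curriculum's API):

```python
class Node:
    """One cell of a singly linked list."""
    def __init__(self, value, next=None):
        self.value = value
        self.next = next

def push(head, value):
    """Prepend a value, returning the new head (O(1))."""
    return Node(value, head)

def to_list(head):
    """Walk the chain and collect values, head first."""
    out = []
    while head is not None:
        out.append(head.value)
        head = head.next
    return out
```

Writing this once by hand is how you learn why prepending is O(1) and traversal is O(n), which is the kind of instinct that lets you tell when AI output is wrong.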

Anyone can break into tech. That's a good thing. But if someone becomes a purely vibe-coding engineer with no depth, that's not on them. That's on the companies and institutions that didn't evaluate for the right things. We studied these fundamentals for a reason. That reason didn't go away just because the tools got better.


The K-shaped workforce point is sharp and I think you're right. The curious ones are a minority, but they've always been the ones who moved things forward. AI just made the gap more visible :)

Your Codex case study with the content creators is fascinating. A PhD in Biology and a masters in writing building internal tools... that's exactly the kind of thing I meant by "you can learn anything now." I'm surrounded by PhDs and professors at my workplace and I'm genuinely positive about how things are progressing. These are people with deep domain expertise who can now build the tools they need. It's an interesting time. Please write that up...

