The simplest example is that you can easily be wasteful in your use of threads. If you just write blocking code, you block the thread while waiting on I/O, and threads are a finite resource.
So avoiding that would mean a server can handle more traffic before running into limits based on thread count.
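A minimal Python sketch of that contrast, with a toy `delay` standing in for a real I/O wait (the handler and function names are hypothetical):

```python
import asyncio
import threading
import time

def blocking_handler(delay):
    # Blocking I/O: this thread is pinned for the entire wait.
    time.sleep(delay)
    return "done"

def serve_blocking(n, delay):
    # One thread per in-flight request: n concurrent waits need n threads.
    threads = [threading.Thread(target=blocking_handler, args=(delay,))
               for _ in range(n)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

async def async_handler(delay):
    # Non-blocking wait: the thread goes back to the event loop meanwhile.
    await asyncio.sleep(delay)
    return "done"

async def serve_async(n, delay):
    # A single thread services all n waits concurrently.
    return await asyncio.gather(*(async_handler(delay) for _ in range(n)))
```

A thousand concurrent `serve_async` waits cost one thread; the blocking version would need a thousand, which is where the thread-count limit bites.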
I do something similar with Claude Code. I say, "I ate a single serving of that Toasted Beef Ravioli that Aldi sells." Claude web searches, finds it, gets its nutrition info, then uses gspread to add it to the daily food log tab of my spreadsheet.
Much less hassle and a lower activation energy than with MyFitnessPal.
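A hedged sketch of what the gspread step might look like; the spreadsheet name, tab name, column layout, and nutrition numbers are all hypothetical, and the gspread calls assume a Google service-account credential is already configured:

```python
try:
    import gspread  # real library; requires a Google service-account credential
except ImportError:
    gspread = None  # keeps the sketch importable when gspread isn't installed

def food_log_row(date, item, calories, protein_g):
    # Pure helper: one row for the daily food log tab (columns are hypothetical).
    return [date, item, calories, protein_g]

def append_to_log(row, sheet_name="Food Log", tab="Daily"):
    # service_account() reads credentials from gspread's default config location.
    gc = gspread.service_account()
    ws = gc.open(sheet_name).worksheet(tab)
    ws.append_row(row, value_input_option="USER_ENTERED")
```

The agent only has to web-search the nutrition facts, build the row, and call `append_to_log`.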
Surprisingly, even small models can do this quite well. I had Sonnet, in a claw-like setup, generate something based on my emails, Airbnb receipts, and so on, and it was perfect: it could edit fields and whatnot. The Gemini tool, by contrast, can't do anything.
It's an interesting possibility worth considering, and only time will tell. But I disagree.
I think complex systems will still turn into a big ball of mud, and AI agents will get just as bogged down as humans when dealing with one. And even though rebuilding from scratch is cheaper than ever, it can't possibly be done cheaply while also preserving the millions of specific behaviors that users will have come to rely on.
Maybe if you pushed spec-driven development to the absolute extreme, but I don't think pushing it that far is easy or cheap. Just as going from 90% unit test coverage to 100% is hard and possibly not worth it, I expect a similar barrier around extreme spec-driven development.
Clarification: I'm advocating clean code in the generic sense, not Uncle Bob's definition.
But if you are saying that a human can instruct AI agents to refactor to prevent the big ball of mud, then you are saying that clean code *is* important.
I'm not OP but that's basically right. With NixOS, nix generates the system configuration as well as making sure the packages are available. If you pin your dependencies using something like nix flakes and rely on git as your source of truth, you can get GitOps for the operating system.
But it isn't necessary. You can certainly make a change and apply it without committing it to git or relying on a CI/CD pipeline to deploy it. Nor is input pinning required, though without it a rollback can turn into archaeological work at best. Most people recommend flakes nowadays, whose input pinning and purity rules should prevent any need for archaeology, provided you commit before applying.
Yes. That's why I'm using NixOS as well, despite all the terrible jank it has.
Automating my homelab config with coding agents not only hides the jank, it also makes NixOS feel like the agentic OS Microsoft wants, or at least an ad-hoc prototype of one. I literally just tell it what to do and describe issues if I have any. But again, I have written a ton of Nix previously and can verify what it does; most of the time it's correct, but it's not perfect.
Something that’s only peripherally interconnected is not a tapestry, but an LLM falling victim to these tropes will describe it as such for “punch”. Not just style with the absence of substance, but misleading substance.