I thought everybody did this... having a model create anything that isn't highly focused only leads to technical debt. I have used models to create complex software, but I do architecture and code reviews, and they are very necessary.
Absolutely. Effective LLM-driven development means you need to adopt the persona of an intern manager with a big corpus of dev experience. Your job is to enforce effective work-plan design, call out corner cases, proactively resolve ambiguity, demand written specs and call out when they're not followed, understand what is and is not within the agent's ability for a single turn (which is evolving fast!), etc.
The use case that Anthropic pitches to its enterprise customers (my workplace is one) is that you pretty much tell CC what you want to do, then tell it to generate a plan, then send it away to execute that plan. Legitimized vibe-coding, basically.
Of course they do say that you should review/test everything the tool creates, but in most contexts that's added as an afterthought.
I had to fall back to that to deliver anything recently - but the last two months were really comfy, with me just saying "do x", going on a walk, and coming back to a working project.
Claude is still useful now, but it feels more like a replacement for bashing on a keyboard than a thinking machine.
It's not so much Elastic saying it as a lot of people doing the supposedly wrong thing the advert-article describes.
I've seen some examples of people using ES as a database, which I'd advise against for pretty much the reasons TFA brings up, unless I can get by on YAGNI reasoning alone.
It will also depend a lot on the type of data: logs are an easy yes. Something that requires multi-document transactions (unless you're able to structure it differently) is a harder tradeoff. Though loss of ACKed documents shouldn't really be a thing any more.
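The multi-document tradeoff above can be sketched in a few lines. This is a toy in-memory "store" standing in for any document store without cross-document transactions (ES included); the account names and the `transfer` helper are purely illustrative:

```python
# Toy document store: each top-level entry is one "document", and each
# write to a document is durable on its own, with no way to group two
# writes into an atomic unit.
store = {"acct_a": {"balance": 100}, "acct_b": {"balance": 0}}

def transfer(amount, crash_between=False):
    """Move `amount` from acct_a to acct_b as two independent writes."""
    store["acct_a"]["balance"] -= amount   # write 1: committed by itself
    if crash_between:
        raise RuntimeError("node died between writes")
    store["acct_b"]["balance"] += amount   # write 2: never happens

try:
    transfer(30, crash_between=True)
except RuntimeError:
    pass

# The first write survived, the second never ran: totals no longer add up.
print(store)  # {'acct_a': {'balance': 70}, 'acct_b': {'balance': 0}}
print(sum(d["balance"] for d in store.values()))  # 70, not 100
```

With logs, every document stands alone, so a failure between writes loses at most one entry; with data like the above, it silently breaks an invariant, which is why restructuring (e.g. a single "transfer" document) or a transactional store is the safer call.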
Even most toy databases "built in a weekend" can be very stable for years if:
- No edge-case is thrown at them
- No part of the system is stressed (software modules, OS, firmware, hardware)
- No plug is pulled
Crank the requests to 11 or import a billion rows of data with another billion relations and watch what happens. The main problem isn't the system refusing to serve a request or throwing "No soup for you!" errors, it's data corruption and/or wrong responses.
To be fair, I think it is chronically underprovisioned clusters that get overwhelmed by log forwarding. I wasn't on the team that managed the ELK stack a decade ago, but I remember our SOC having two people whose full time job was curating the infrastructure to keep it afloat.
Now I work for a company whose log storage product has ES inside, and it seems to shit the bed more often than it should - again, could be bugs, could be running "clusters" of 1 or 2 nodes instead of 3.