Besides the obvious hacks to reduce token usage, properly indexed codebases (think IntelliJ) cut token usage significantly (30%-50%, while matching or exceeding baseline result quality), as shown with https://github.com/ory/lumen
Anthropic is not incentivized to reduce token use, only to increase it. That is what we are seeing with Opus 4.6, and now they are putting the screws on.
Sounds like GPT wrote this piece based on some tech exec's "we must use AI or lose" "strategy". Just let engineers use the tools they want instead of force-feeding them yet another ridiculous process. For me, if I had to spend my mornings in meetings (or "writing prompts", lmao) instead of clearing out the ridiculous AI slop debt from code agents, my product would never ship.
It's really hard to read this article; it smells of LLM-generated slop once you get past the first couple of paragraphs. Lots of negative parallelisms, and lots of words that add no value to the sentence:
> To validate the thesis that the Yen unwind is the primary driver of volatility, we must examine the sequence of events. The crash did not happen in a vacuum; it followed a precise timeline …
> It wasn't just about rates anymore; it was about the stability of the U.S.-led global order
> The unwinding of a carry trade is not a monolithic event; it is a cascade that ripples outward
It's like this in almost every paragraph. I don't understand why this gets to be on the front page, to be honest. It just reads horribly, even if some of the points may be true (or hallucinated, who knows).
I have a sense that were in a moment of mass hysteria.
You dot even understand what your reading anymore. Cant tell whether you're reading a hallucination or someones thoughts.
"I don't agree / understand this, it must not be real!"
Now, you have to wonder: is my grammar just poor? Or did I intentionally inject spelling and grammar errors into the output of an LLM? Is it in my system prompt to do this?
The AI code slop around these tools is so frustrating. I'm just trying to get the instructions from the CTA on the moltbook website working; it flashes `npx molthub@latest install moltbook`, which isn't working (probably hallucinated or otherwise out of date):
npx molthub@latest install moltbook
Skill not found
Error: Skill not found
Even the instructions from molthub (https://molthub.studio) for installing itself ("join as agent") aren't working:
npx molthub@latest install molthub
Skill not found
Error: Skill not found
> Contrast that with the amount of hype this gets.
Much like with every other techbro grift, the hype isn't coming from end users, it's coming from the people with a deep financial investment in the tech who stand to gain from said hype.
Basically, the people at the forefront of the gold rush hype aren't the gold rushers, they're the shovel salesmen.
> It's an opensource project made by a dev for himself
I see it more as a dumpster fire setting a whole mountain of garbage on fire while a bunch of simians look at the flames and make astonished wuga wuga noises.
Nobody is sleeping on anything. Linting is, for the most part, static code analysis, which by definition does not find runtime bugs. You even say it yourself: "runtime bug, ask the LLM if a static lint rule could be turned on to prevent it".
To find most runtime bugs (e.g. an incorrect regex, broken concurrency, an incorrect SQL statement, ...) you need to understand the mental model and logic behind the code; checking "is variable XYZ unused?" or "does variable X overshadow Y?" or other more "esoteric" lint rules will not catch them. The likelihood is high that the LLM just hallucinated some false-positive lint rule anyway, giving you a false sense of security.
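A minimal sketch of that kind of bug (the `isIsoDate` validator here is hypothetical, purely for illustration): the regex is syntactically valid, every variable is used, and no standard lint rule fires, yet the logic is wrong.

```javascript
// Intended to accept only dashed ISO dates like "2024-01-02".
function isIsoDate(s) {
  // Bug: the `.` is unescaped, so it matches ANY character, not just "-".
  // Linters only see a well-formed regex literal; only understanding the
  // intent behind the code reveals the flaw.
  return /^\d{4}.\d{2}.\d{2}$/.test(s);
}

console.log(isIsoDate("2024-01-02")); // true, as intended
console.log(isIsoDate("2024x01x02")); // also true: the runtime bug
```

No generic lint rule can flag this, because the mistake lives in the gap between the specification and the code.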
> static code analysis which by definition does not find runtime bugs
I'm not sure if there's some subtlety of language here, but in my experience with JavaScript linting, it can often prevent runtime problems caused by things like variable scoping, unhandled exceptions in promises, misuse of functions, etc.
I've also caught security issues in Java with static analysis.
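For example (a contrived sketch; `maxPriceBuggy` is hypothetical), a scoping mistake of the kind mentioned above is a genuine runtime failure that ESLint's core `no-shadow` rule would report statically:

```javascript
"use strict";

// The inner `let max` shadows the accumulator; the right-hand `max` then
// refers to the inner binding while it is still in the temporal dead zone,
// so this throws a ReferenceError at runtime. ESLint's `no-shadow` rule
// flags the shadowing declaration before the code ever runs.
function maxPriceBuggy(items) {
  let max = 0;
  for (const item of items) {
    let max = Math.max(max, item.price); // buggy shadowed declaration
  }
  return max;
}

try {
  maxPriceBuggy([{ price: 3 }]);
} catch (e) {
  console.log(e.name); // prints "ReferenceError"
}
```

So "static analysis never finds runtime bugs" is too strong: it finds the subset of runtime failures that are visible in the program's structure.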
The usefulness of static code analysis (strict type systems, linting) versus none is not in question. JavaScript in particular, which does not have a strict type system, benefits greatly from static code analysis.
But the author claims that you can catch runtime bugs by letting the LLM create custom lint rules, which is hyperbole at best, wrong at worst, and gives developers a false sense of security either way.
Catch or prevent: linting only covers a tiny subset of runtime problems (depending on the programming language, sometimes more, sometimes less). The whole back-pressure discussion feels like AI coders just discovered type systems and lint rules, but that doesn't resolve the problems we get in agentic coding. The only "agent" responsible for code correctness (and thus adherence to the feature specification) is the human instructing the agent. A better compiler or lint rule will not prevent the massive logic bugs LLMs tend to create: tests that test functions the LLM created just so the test passes, broken logic flows, missing DI, recreated existing logic, useless code that isn't called anywhere yet pollutes context windows - all the problems LLM-based "vibe" coding "shines" with once you work on a sufficiently long-running project.
Why do I care so much about this? Because the "I feel left behind" crowd is being gaslit by comments like the OP's.
Overall, strict type systems and static code analysis have always been good for programming, and I'm glad vibe coders are discovering this as well. It just doesn't fix the LLMs' lack of intelligence, nor does it remove the programmer's responsibility to understand and improve the generated stochastic token output.
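The self-confirming-test failure mode mentioned above can be sketched like this (hypothetical `applyDiscount` example, not from any real codebase): the expected value is derived from the implementation rather than the specification, so the test can never catch a divergence between the two.

```javascript
// Production code: applies a 20% discount.
function applyDiscount(price) {
  return price * 0.8;
}

// Bad test: the expected value is computed with the SAME formula as the
// implementation. If the spec actually called for 25% off, both sides
// would be equally wrong and the assertion would still hold.
const price = 100;
const expected = price * 0.8; // copied from the implementation, not the spec
console.log(applyDiscount(price) === expected); // prints "true" regardless
```

No type system or lint rule flags this; it type-checks, lints clean, and passes, which is exactly why only a human who knows the spec can catch it.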
OP isn't claiming that all runtime bugs can be prevented with static lints suggested by LLMs, but if at least some can, I don't see how your comment is contributing. Yet another case of "your suggestion isn't perfect, so I'll dismiss it" on Hacker News.
Why is this such a common occurrence here? Does this fallacy have a name?
"Still on Claude Code" is a funny statement, given that the industry agrees Anthropic has the lead in software generation while others are lagging behind (OpenAI) or have significant quality issues in their tooling, not their models (Google). And Anthropic's frontier models are generally "You're absolutely right - I apologize. I need to ..." every time they fuck something up.
Thank you for calling this out; we are being gaslit by attention-seeking influencers. The algorithmic brAInrot is propagated by those we thought we could trust, just like the Instagram and YouTube stars we cared about who turned out to be monsters. I sincerely hope those people become better or wane into meaninglessness. Rakyll seems to spend more time on X than on advancing good software these days, a shame given her past accomplishments.