Hacker News | aeneas_ory's comments

Besides the obvious hacks, properly indexed code bases (think IntelliJ) reduce token usage significantly (30%-50%, while matching or exceeding baseline result quality), as shown with https://github.com/ory/lumen
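To illustrate the idea (this is a toy sketch, not lumen's actual implementation, and the file contents and symbol names are invented): instead of pasting whole files into the prompt, an index maps symbols to their definitions, so only the snippets a task actually touches get sent.

```typescript
// Toy sketch of why a symbol index shrinks context. Token counts are
// approximated by whitespace splitting; real tokenizers differ.

type SymbolIndex = Map<string, string>; // symbol name -> definition snippet

function buildIndex(files: Record<string, string>): SymbolIndex {
  const idx: SymbolIndex = new Map();
  for (const source of Object.values(files)) {
    // Naive extraction: treat each top-level "function name ... \n}" as a symbol.
    for (const match of source.matchAll(/function\s+(\w+)[\s\S]*?\n\}/g)) {
      idx.set(match[1], match[0]);
    }
  }
  return idx;
}

function approxTokens(text: string): number {
  return text.split(/\s+/).filter(Boolean).length;
}

// A task that only touches `parseConfig` should not pay for the whole file:
const files = {
  "config.ts":
    "function parseConfig(raw: string) {\n  return JSON.parse(raw);\n}\n" +
    "function unrelatedHelper(x: number) {\n  return x * 2;\n}",
};
const index = buildIndex(files);

const fullContext = Object.values(files).join("\n");
const indexedContext = index.get("parseConfig") ?? "";

console.log(approxTokens(fullContext), approxTokens(indexedContext));
```

With only one relevant symbol, the indexed context is a fraction of the full-file context, which is the mechanism behind the savings claimed above.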

Anthropic is not incentivized to reduce token use, only to increase it. That is what we are seeing with Opus 4.6, and now they are putting the screws on.


Check if your machine was affected with this tool: https://github.com/aeneasr/was-i-axios-pwned

How do we know this is not the next tool in line to compromise a machine?

Read the source code

I wrote a tool that helps you check if your machine was compromised: https://github.com/aeneasr/was-i-axios-pwned
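The real tool's detection logic lives in its repo; as a generic sketch of how such a checker works, one can walk a parsed package-lock.json and flag any installed version that appears on a known-bad list. The version string below is invented for illustration, not a real compromised release.

```typescript
// Generic sketch of a lockfile compromise checker (not the actual tool's
// logic). Walks parsed package-lock.json entries and flags packages whose
// installed version is on a known-bad list.

interface LockfilePackages {
  [path: string]: { version?: string };
}

// Hypothetical marker version for illustration only.
const KNOWN_BAD: Record<string, Set<string>> = {
  axios: new Set(["0.0.0-example-compromised"]),
};

function findCompromised(packages: LockfilePackages): string[] {
  const hits: string[] = [];
  for (const [path, info] of Object.entries(packages)) {
    // npm v2/v3 lockfiles key packages by path, e.g. "node_modules/axios".
    const name = path.split("node_modules/").pop() ?? path;
    const bad = KNOWN_BAD[name];
    if (bad && info.version && bad.has(info.version)) {
      hits.push(`${name}@${info.version}`);
    }
  }
  return hits;
}

// Example lockfile fragment:
const lock: LockfilePackages = {
  "node_modules/axios": { version: "0.0.0-example-compromised" },
  "node_modules/left-pad": { version: "1.3.0" },
};
console.log(findCompromised(lock));
```

Keying on the lockfile rather than `node_modules` on disk means the check also works in CI, where dependencies may not be installed yet.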

Why is this ridiculous thing trending on HN? There are genuinely good tools for reducing token use, like https://github.com/thedotmack/claude-mem and https://github.com/ory/lumen, that actually work!

Because the trending algorithm is designed for engagement, not accuracy

Sounds like GPT wrote this piece based on some tech exec's "we must use AI or lose" "strategy". Just let engineers use the tools they want instead of force-feeding them yet another ridiculous process. As for me: if I had to spend my mornings in meetings (or "writing prompts", lmao) instead of clearing out the ridiculous AI slop debt left by code agents, my product would never ship.


It's really hard to read this article; it smells of LLM-generated slop once you get past the first couple of paragraphs: lots of negative parallelisms and lots of words that add no value to the sentence:

> To validate the thesis that the Yen unwind is the primary driver of volatility, we must examine the sequence of events. The crash did not happen in a vacuum; it followed a precise timeline …

> It wasn't just about rates anymore; it was about the stability of the U.S.-led global order

> The unwinding of a carry trade is not a monolithic event; it is a cascade that ripples outward

It's like this in almost every paragraph. I don't understand why this gets to be on the front page, to be honest. It just reads horribly, even if some of the points may be true (or hallucinated, who knows).


I have a sense that were in a moment of mass hysteria.

You dot even understand what your reading anymore. Cant tell whether you're reading a hallucination or someones thoughts.

"I don't agree / understand this, it must not be real!"

Now, you have to wonder: is my grammar just poor? Or did I intentionally inject spelling and grammar errors into the output of an LLM? Is it in my system prompt to do this?


The AI code slop around these tools is so frustrating. I'm just trying to follow the instructions from the CTA on the moltbook website, which flashes `npx molthub@latest install moltbook`, and it isn't working (probably hallucinated or otherwise out of date):

      npx molthub@latest install moltbook  
       Skill not found  
      Error: Skill not found
Even the instructions from molthub (https://molthub.studio) for installing itself ("join as agent") aren't working:

      npx molthub@latest install molthub
       Skill not found
      Error: Skill not found
Contrast that with the amount of hype this gets.

I'm probably just not getting it.


> Contrast that with the amount of hype this gets.

Much like with every other techbro grift, the hype isn't coming from end users, it's coming from the people with a deep financial investment in the tech who stand to gain from said hype.

Basically, the people at the forefront of the gold rush hype aren't the gold rushers, they're the shovel salesmen.


> post-truth world order monetizing enshittification and grift

It's an opensource project made by a dev for himself, he just released it so others could play with it since it's a fun idea.


That's fair - removed. It was more geared towards the people who make more out of this than what it is (an interesting idea and cool tech demo).


> It's an opensource project made by a dev for himself

I see it more as a dumpster fire setting a whole mountain of garbage on fire, while a bunch of simians look at the flames and make astonished wuga wuga noises.


me like big fire, make pretty pictures and feel warm


Nobody is sleeping on anything. Linting is, for the most part, static code analysis, which by definition does not find runtime bugs. You even say it yourself: "runtime bug, ask the LLM if a static lint rule could be turned on to prevent it".

To find most runtime bugs (e.g. an incorrect regex, broken concurrency, an incorrect SQL statement, ...) you need to understand the mental model and logic behind the code. Checking "is variable XYZ unused?" or "does variable X shadow Y?" or other more "esoteric" lint rules will not catch them. The likelihood is high that the LLM just hallucinated some false-positive lint rule anyway, giving you a false sense of security.
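A made-up example of the point above: the code below is lint-clean (valid syntax, no unused variables, no shadowing), yet the regex is logically wrong. The intent was to validate a full ISO date, but without anchors the pattern also accepts any garbage containing a date-shaped substring, and no generic lint rule will tell you that.

```typescript
// Lint-clean code with a runtime logic bug: the regex is syntactically
// valid, so static analysis has nothing to complain about, but the missing
// ^...$ anchors make the validation far too permissive.

const isoDate = /\d{4}-\d{2}-\d{2}/; // bug: should be /^\d{4}-\d{2}-\d{2}$/

function looksLikeDate(input: string): boolean {
  return isoDate.test(input);
}

console.log(looksLikeDate("2024-01-15"));              // true, as intended
console.log(looksLikeDate("DROP TABLE; 2024-01-15;")); // also true: the bug
```

Only a test (or a reader who understands the intent) catches this; the linter is happy either way.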


> static code analysis which by definition does not find runtime bugs

I'm not sure if there's some subtlety of language here, but from my experience of javascript linting, it can often prevent runtime problems caused by things like variable scoping, unhandled exceptions in promises, misuse of functions etc.

I've also caught security issues in Java with static analysis.
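As one concrete illustration of the scoping point (an invented example, not from any real codebase): the bug below is a genuine runtime defect, yet ESLint's `no-shadow` rule flags it statically before the code ever runs.

```typescript
// A bug a linter does catch before runtime: the inner `total` shadows the
// outer one, so the function always returns 0. ESLint's `no-shadow` rule
// reports the inner declaration.

function sumPositives(values: number[]): number {
  let total = 0;
  values.forEach((v) => {
    if (v > 0) {
      let total = 0; // shadows the outer `total`; `no-shadow` flags this
      total += v;    // updates the wrong variable
    }
  });
  return total;
}

console.log(sumPositives([1, 2, 3])); // 0, not the intended 6
```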


The usefulness of static code analysis (strict type systems, linting) versus none is beyond question. JavaScript in particular, lacking a strict type system, benefits greatly from static code analysis.

But the author claims that you can catch runtime bugs by letting the LLM create custom lint rules, which is hyperbole at best, wrong at worst, and gives developers a false sense of security either way.


> But the author claims that you can catch runtime bugs

I think you misinterpreted OP:

> Every time you find a runtime bug, ask the LLM if a static lint rule could be turned on to prevent it

Key word is prevent.


Catch or prevent, linting only covers a small subset of runtime problems (depending on the programming language, sometimes more, sometimes less). The whole back-pressure discussion feels like AI coders just discovered type systems and lint rules, but it doesn't resolve the kinds of problems we get in agentic coding. The only "agent" responsible for code correctness (and thus adherence to the feature specification) is the human instructing the agent. A better compiler or lint rule will not prevent the massive logic bugs LLMs tend to create: tests that only pass because the LLM wrote the function under test for them, broken logic flows, missing DI, reimplementations of existing logic, and dead code that is used nowhere yet pollutes context windows, all the problems LLM-based "vibe" coding "shines" with once you work on a sufficiently long-running project.

Why do I care so much about this? Because the "I feel left behind" crowd is being gaslit by comments like the OP's.

Overall, strict type systems and static code analysis have always been good for programming, and I'm glad vibe coders are discovering this as well. It just doesn't fix the lack of intelligence in LLMs, nor the responsibility of programmers to understand and improve the generated stochastic token output.


Static analysis certainly can find runtime bugs


OP isn't claiming that all runtime bugs can be prevented with static lints suggested by LLMs, but if at least some can, I don't see how your comment is contributing. Yet another case of "your suggestion isn't perfect, so I'll dismiss it" on Hacker News.

Why is this such a common occurrence here? Does this fallacy have a name?

EDIT: seems to be https://en.wikipedia.org/wiki/Nirvana_fallacy


Well, if you haven't noticed, LLM topics receive a particularly hostile reaction on HN.

My LLM has theorized that its success at answering trivia questions has left some people feeling threatened.


"Still on Claude Code" is a funny statement, given that the industry agrees Anthropic has the lead in software generation, while others are lagging behind (OpenAI) or have significant quality issues in their tooling, not the models (Google). And Anthropic's frontier models are generally "You're absolutely right - I apologize. I need to ..." every time they fuck something up.


Thank you for calling this out; we are being gaslit by attention-seeking influencers. The algorithmic brAInrot is propagated by those we thought we could trust, just like the Instagram and YouTube stars we cared about who turned out to be monsters. I sincerely hope those people become better or fade into meaninglessness. Rakyll seems to spend more time on X than on advancing good software these days, a shame given her past accomplishments.

