
Except they are fundamentally different companies now: they have no free cash flow and are extremely capital-intensive industrial businesses.

Another note is that this is on forward earnings. What may have just happened is that analyst expectations of forward earnings have caught up with what markets priced in earlier. Forward earnings generally lag pricing; this happens on the way up and on the way down.
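To put toy numbers on that lag (all figures invented for illustration, not a claim about any real company):

    # Forward P/E can "normalize" purely from analysts raising their
    # forward EPS estimates, with the price not moving at all.
    price = 100.0
    eps_forward_old = 4.0   # consensus forward EPS before revisions
    eps_forward_new = 5.0   # consensus after estimates catch up

    print(price / eps_forward_old)  # 25.0x before
    print(price / eps_forward_new)  # 20.0x after, with zero repricing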


I think Q1 is a weak quarter for sales, so it might be that inventory build is normal in this quarter. Economic weakness and high energy prices will not help sales, though.

It would start economically, and then one side would eventually resort to the meat grinder.

I wonder whether what is really behind this is that they can’t make a model without the safeguards, because it would require re-training.

They get to look good by claiming it’s an ethical stance.


So if we assume this is the future, the useful life of many semiconductors will fall substantially. What part of the semiconductor supply chain would have pricing power in a world of producing many more different designs?

Perhaps mask manufacturers?


It might not be that bad. “Good enough” open-weight models are almost there; the focus may shift to agentic workflows and effective prompting. The lifecycle of a model/chip will become comparable to that of smartphones, getting longer and longer, with orchestration software being responsible for the faster innovation cycles.


"Good enough" open weights models were "almost there" since 2022.

I distrust the notion. The bar of "good enough" seems to be bolted to "like today's frontier models", and frontier model performance only ever goes up.


The generation of frontier models from H1 2025 is the good enough benchmark.


Flash forward one year and it'll be H1 2026.


I don’t see why. Today’s frontier models are already two generations ahead of good enough. For many users they did not offer substantial improvement; sometimes things even got worse. What is going to happen within a year that will make users desire something beyond an already working solution? LLMs are reaching maturity faster than smartphones, which are now good enough to stay on the same model for at least 5-6 years.


Any considerable bump in model capability craters my willingness to tolerate the ineptitude of less capable models. And I'm far from being alone in this.

Ever wondered why those stupid "they secretly nerfed the model!" myths persist? Why users report that "model got dumber", even if benchmarks stay consistent, even if you're on the inference side yourself and know with certainty that they are actually being served the same inference over the same exact weights on the same hardware quantized the same way?

Because user demands rise over time, always.

Users get a new flashy model, and it impresses them. It can do things the old model couldn't. Then they push it, and learn its limitations and quirks as they use it. And then it feels like it "got dumber" - because they got more aggressive about using it and better at spotting all the ways in which it was always dumb.

It's a treadmill, and you pretty much have to keep improving the models just to stay ahead of user expectations.


> users report that "model got dumber"

I have seen this with ChatGPT's progression from 4o to 5.2. Old prompts applied to the newest model stop working reliably, different hallucination modes appear, etc.


If you’re running at 17k tokens/s, what is the point of multiple agents?


Different skills and contexts. Llama 3.1 8B has just a 128k context length, so packing everything into it may not be a great idea. You may want one agent analyzing the requirements and designing the architecture, one writing tests, another writing the implementation, and a fourth doing code review. With LLMs it matters not just what you have in context but also what is absent, so that the model will not overthink.

EDIT: just in case, I define an agent as an inference unit with a specific preloaded context; at this speed they don’t have to be async, they may run in sequence over multiple iterations. A rough sketch of what I mean is below.
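Toy version, assuming a hypothetical complete(system, prompt) function wired to whatever local inference server you run (the prompts and roles are made up for illustration):

    # Sequential "multi-agent" pipeline: each agent is just an inference
    # call with its own preloaded system prompt and a narrow context.
    # complete() is a placeholder, not a real API.
    def complete(system: str, prompt: str) -> str:
        raise NotImplementedError("wire this to your local model server")

    def pipeline(requirements: str) -> str:
        design = complete("Analyze requirements and design the architecture.",
                          requirements)
        tests = complete("Write tests for the given design. Output tests only.",
                         design)
        impl = complete("Implement code satisfying the design and tests.",
                        design + "\n\n" + tests)
        return complete("Review the implementation against design and tests.",
                        design + "\n\n" + tests + "\n\n" + impl)

At 17k tokens/s the four calls can simply run one after another; async orchestration buys you very little.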


It will be validated, but that doesn’t mean the providers of these services will be making money. It’s about the demand at a profitable price. The uncontroversial part is that the demand exists at an unprofitable price.


That really is the $800 billion elephant in the room.


This “It’s not about profits, man, it’s about how much you’re worth. The rules have changed. Don’t get left behind,” nonsense is exactly what a bunch of super wrong people said about investing during the .com bust. Even if we got some useful tech out of it in the end, that was a lot of people’s money that got flushed down the toilet.


But the survivors became some of the biggest and most profitable companies on the planet: Google, Amazon, eBay/PayPal. And of course, the people selling shovels always do well in a rush (Apple, Adobe, etc.).


I’m not talking about the health of the industry; I’m talking about the fallout for employees, anyone with any stake in the stock market, etc. A whole lot of retail investors, 401k holders, etc. got fucked, and a whole lot of other people lost their jobs. Careers were stunted. This was before we had preexisting-condition protection, so for people with cancer or other serious chronic health conditions, losing a job could be a death sentence, even if they got another job the very next day. The housing market got screwed up.

From The Big Short (and a bunch of introductory macroeconomics classes):

"For every 1% that unemployment rises, 40,000 people die."

There are consequences to people running big companies like they’re playing poker.


And the owners of those companies became mega-billionaires and turned into monsters. Maybe there's a lesson there.


good comment, but of course it's downvoted on hackernews


It’s not clear at all, because upfront model-training costs and how you depreciate them are big unknowns, even for deprecated models. See my last comment for a bit more detail.


They are obviously losing money on training. I think they are selling inference for less than it costs to serve those tokens.

That really matters. If they are making a margin on inference they could conceivably break even no matter how expensive training is, provided they sign up enough paying customers.

If they lose money on every paying customer, then building great products that customers want to pay for will just make their financial situation worse.
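To make the two branches concrete (all numbers hypothetical):

    # Positive inference margin: a fixed training cost can be recovered
    # with enough volume. Negative margin: volume only deepens the loss.
    training_cost = 1e9        # sunk training spend, invented figure
    price_per_mtok = 10.0      # revenue per million tokens served
    cost_per_mtok = 8.0        # serving cost per million tokens

    margin = price_per_mtok - cost_per_mtok   # +2.0 per million tokens
    if margin > 0:
        print(training_cost / margin, "million tokens to break even")
    else:
        print("each additional customer increases the loss")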


"We lose money on each unit sold, but we make it up in volume"


By now, model lifetime inference compute is >10x model training compute for mainstream models, and it is further amortized by things like base model reuse.
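If that ratio holds, the amortization arithmetic is forgiving. A toy sketch (the >10x figure is the only input, taken from the claim above; everything else is normalized):

    # With lifetime inference compute >= 10x training compute, amortized
    # training adds at most ~10% on top of per-token serving compute.
    training_compute = 1.0              # normalize training to 1
    lifetime_inference = 10.0           # the >10x claim
    overhead = training_compute / lifetime_inference
    print(f"training surcharge per token: <= {overhead:.0%}")  # <= 10%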


Those are not marginal costs.


I think actually working out whether they are losing money is extremely difficult for current models, but you can look backwards. The big uncertainties are:

1) How do you depreciate a new model? What is its useful life? (You only know this once you deprecate it.)

2) How do you depreciate your hardware over the period you trained this model? Another big unknown, not settled until you finally write the hardware off.

The easy thing to calculate is whether you are making money actually serving the model. And the answer is almost certainly yes from this perspective, but that misses a large part of the cost and is therefore wrong.
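A toy version of that calculation with both unknowns as explicit inputs (all figures invented; this sketches the accounting, not anyone's real numbers):

    # Whether the business looks profitable swings entirely on the two
    # depreciation assumptions: model useful life and hardware life.
    def annual_profit(train_cost, hw_cost, model_life_yrs, hw_life_yrs,
                      serving_margin_per_yr):
        model_dep = train_cost / model_life_yrs
        hw_dep = hw_cost / hw_life_yrs
        return serving_margin_per_yr - model_dep - hw_dep

    # Same positive serving margin, different assumed lives:
    print(annual_profit(3e9, 2e9, 3, 5, 1.6e9))  # +2.0e8: "profitable"
    print(annual_profit(3e9, 2e9, 1, 3, 1.6e9))  # -2.07e9: deep loss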


When you lend money at 1% but market rates are 30% then you are, in fact, losing money. Except under your definition you are not losing money.
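The arithmetic behind the analogy (rates from the comment above, principal invented):

    # Nominal gain is positive, but measured against the opportunity
    # cost of the market rate you are underwater.
    principal = 100.0
    nominal_gain = principal * 0.01    # +1: "making money" nominally
    market_gain = principal * 0.30     # +30: what the money could earn
    print(nominal_gain - market_gain)  # -29: the economic loss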

