Hacker News | malfist's comments

"safety considerations" don't matter. The main sticking point with LLMs is that they're built on blatant theft of everyone's copyrighted work, all while letting the bosses threaten your job. Blatant theft in service of a wealth transfer to the ultrawealthy.

I realized that one of my bigger issues with LLMs is actually that I worry they increase "information entropy" on average. Most tools help me reduce entropy - LLMs seem to increase it, on a global scale.

This is related to my observation that for thousands of years, written text has indicated a human author - this is no longer true, and I think this is going to be very difficult for us to wrap our human brains around fully.


there are some scientists and theorists who argue that entropy production is the ultimate sign of life (Jeremy England) and consciousness (Robin Carhart-Harris, Tom Froese)

Interesting take. Hadn't thought of it in terms of entropy, but it's true. Almost by definition, as the training process doesn't introduce anything novel beyond the scraped inputs and a randomly initialized network. From there, stochastic generation only adds randomness (and the prompt, of course).

Generally I think this is a legitimate issue, although:

> the training process doesn't introduce anything novel

This is not always the case. A compiler, linter, proof checker, tests, etc. can all lower entropy.


That might be the case from your position. But if you were a woman whose stalker was able to locate your photos with ease and generate deepfakes or emulate your voice to feed his obsession, you might think differently. If you were worrying about your kids surviving tomorrow because an AI system might target their school for the next round of bombings, then copyright infringement might not be your top concern.

What? Atlassian is not stack overflow.

R&D is literally what built Uber and Uber Eats. Research to determine the product and _development_ to build it.

That's like saying most home cooking in the country is the same because everything comes from Walmart/Kroger/Meijer.

No, you're missing the point. A lot of casual restaurants aren't even really cooking any more. They just heat up prepared food purchased from Sysco.

https://www.thenation.com/article/society/restaurant-consoli...


"A lot of" and "most" are different things.

Yes, a lot of places are not making their own jalapeño poppers. There's still plenty of stuff being made from raw ingredients all over the place.


Wait, why does your debit card involve forex fees and spreads?

Somebody's got to do the currency conversion. If you let the merchant do it, it's usually even worse.

(Implicit is that the OP is buying a bunch of stuff in currencies other than the one they earn in; probably only one of those is dollars)
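A rough sketch of why the conversion path matters. The spread percentages here are hypothetical, purely for illustration; real issuer and merchant (DCC) spreads vary widely:

```python
# Illustrative only: spread percentages are made up, not any bank's real rates.
def converted_cost(amount, mid_rate, spread_pct):
    """Cost in home currency when converting `amount` of foreign currency
    at the mid-market rate plus a percentage spread."""
    return amount * mid_rate * (1 + spread_pct / 100)

purchase = 100.0  # foreign-currency price
mid = 1.10        # hypothetical mid-market exchange rate

issuer = converted_cost(purchase, mid, 1.5)    # card issuer converts, ~1.5% spread
merchant = converted_cost(purchase, mid, 4.0)  # merchant-side DCC, often worse (~4%)
print(round(issuer, 2), round(merchant, 2))
```

Either way, somebody takes a cut on the conversion; the question is only how big a cut.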


With Wise it does not, but with my legacy bank it does, because the base currency is one of the non-euro European ones.

How is that different than any other payment processor? Interchange isn't free anywhere

Stripe's fees are well above interchange fees (especially in Europe). On top of that Stripe's pricing for other features (e.g. invoicing and subscriptions) is also a percentage, so you end up paying a ton for those features.
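To make the gap concrete, here is a sketch comparing a regulated interchange cap to a blended processor rate on a small charge. The 0.3% figure reflects the EU interchange cap on consumer credit cards; the blended rate and fixed fee are illustrative assumptions, not quoted pricing:

```python
# Illustrative comparison; the blended rate is an assumption, not quoted pricing.
def processor_fee(amount, pct, fixed):
    """Fee for one transaction: a percentage of the amount plus a fixed charge."""
    return amount * pct / 100 + fixed

charge = 50.0  # EUR

interchange = processor_fee(charge, 0.3, 0.0)  # EU consumer credit interchange cap
blended = processor_fee(charge, 1.5, 0.25)     # hypothetical blended processor rate
print(interchange, blended)  # the gap is the processor's margin plus scheme fees
```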

because Stripe hides fees on purpose, constantly asks you to try out new features, and then secretly charges you more than market price when you say yes. See Radar, managed payments, Stripe billing management, etc.

The people who want just a bicycle weren't going to buy Figma

Each of these GPUs pulls up to a kilowatt of power. The average commercial power cost is 13.4 ¢/kWh. That means running a single H100 full tilt 24/7 is a power operating cost of roughly $1,170 per card per year.

In three years the current generation of GPUs will be 50% or more faster. In six years you're talking more than 100% faster. For the same energy costs.

If you're running a GPU data center on six year old GPUs, your cost to operate per sellable unit of work is double the cost of a competitor.
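The arithmetic in that comment, sketched out. The power draw, electricity price, and 2x speedup are the comment's own assumptions:

```python
# Annual electricity cost for one GPU at full tilt, per the comment's assumptions.
POWER_KW = 1.0          # up to a kilowatt per card
PRICE_PER_KWH = 0.134   # average commercial rate, $/kWh
HOURS_PER_YEAR = 24 * 365

annual_cost = POWER_KW * PRICE_PER_KWH * HOURS_PER_YEAR
print(round(annual_cost))  # -> 1174, i.e. about $1,170/year per card

# If a newer GPU does 2x the work for the same power, the six-year-old card's
# electricity cost per unit of work is double the competitor's.
old_cost_per_unit = annual_cost / 1.0  # old card: 1 unit of work
new_cost_per_unit = annual_cost / 2.0  # new card: 2 units for the same power
print(old_cost_per_unit / new_cost_per_unit)  # -> 2.0
```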


One thing I'm not entirely sure about is whether there will be huge efficiency gains. Just looking at TDP (that is, power consumption) of, say, the 3090 vs the 5090, the increase is substantial; compare that to the performance lift and the gains stop looking that great...

A 3x increase in compute for a 1.5x increase in TDP is pretty good, considering the underlying process has barely changed. In any case, consumer GPUs aren't a good metric, as they operate under different economic constraints.

H100 to GB200 saw a 50x increase in efficiency, for example.


https://www.nvidia.com/en-us/data-center/gb200-nvl72/

Nvidia only advertises 25x efficiency. And that's taking their word for it...


Sure. But if that GPU is fully depreciated and, at $1,100/year in power, produces $20k of economic benefit, would you decommission it as long as there is demand?

If my data center has to sell a pflop at $5 because of our electricity use, and the data center a state over with newer GPUs sells it at $2.50/pflop, it doesn't matter how much economic benefit it generates: my customers are all going to the data center a state over.

I want to see math on how a single GPU will pull down that much revenue, because that seems like a dubious outcome.

Fair, I was hand waving to make a point. “If it generates more than $1100 + (resale price * WACC) + opportunity cost from physical space/etc” would have been more accurate.

But the point is — you don’t decommission profit generators just because a competitor has a lower cost structure. You run things until it is more profitable for you to decommission them.
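A minimal sketch of that decision rule, roughly the condition given two comments up ("more than $1100 + resale price * WACC + opportunity cost"). All of the numbers below are made up for illustration:

```python
# Hypothetical figures: keep a paid-off GPU while marginal revenue
# exceeds marginal cost, regardless of a competitor's cost structure.
def keep_running(revenue_per_year, power_cost, resale_value, wacc, other_costs=0.0):
    """True if running the card another year beats decommissioning it now."""
    opportunity_cost = resale_value * wacc  # return forgone by not selling the card
    return revenue_per_year > power_cost + opportunity_cost + other_costs

# A depreciated card earning $5,000/yr against $1,100 in power plus
# $2,000 resale value at an 8% WACC stays online, even if rivals undercut us.
print(keep_running(5000.0, 1100.0, 2000.0, 0.08))  # -> True

# If competition pushes its revenue down near its costs, decommission.
print(keep_running(1200.0, 1100.0, 2000.0, 0.08))  # -> False
```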


That all depends on if you're running your own hardware (unlikely) or renting.

You can already see people here saying the same stuff about Opus 4.7; I saw a comment claiming that Opus 4.7 on low thinking was better than 4.6 on high.

I'm not seeing that in my testing, but these opinions are all vibe based anyway.


> A lot of geolocation data on the market is anonymized

A lot isn't good enough.

