Hacker News | oldfuture's comments

The word "fake" draws attention, but I think the article obscures two real problems:

Training is missing from the analysis entirely (as someone else noted)

Inference water use is indeed minimal per prompt, no argument there. But training the old GPT-3 consumed roughly 5.4 million liters of water; LLaMA 3, around 22 million. These are huge, discrete events happening multiple times a year across the industry, and folding them into national averages seems like exactly the statistical simplification the article criticizes everyone else for doing…

"Small nationally" ≠ "fine locally"

The Dalles, Oregon is the clearest example. In 2012, Google used 12% of the city's water supply. Today it consumes about a third, around 1.19 million gallons per day, and a sixth data center comes online in the same area in 2026.

The city is now pursuing a $260 million reservoir expansion into a national forest (!), where 95% of the projected new water demand will be industrial, not residential. Residents are looking at a potential 99% rate increase by 2036 to fund infrastructure that may exist primarily to serve one company. The city even fought a 13-month legal battle just to keep those numbers secret. That is a community being reshaped around a single tenant.

In Hays County, Texas, residents who share the Edwards Aquifer with incoming data centers voted to block one. Memphis is watching xAI draw 5 million gallons per day. Bloomberg found that two-thirds of new U.S. data centers built since 2022 are sited in high water-stress zones. Cities in Arizona have already passed ordinances capping data center water use.

This, to me, looks like a problem in the making: AI water use isn't a national crisis for now, but local impacts are already real, training costs are systematically underreported, and the five-year trajectory in water-stressed regions deserves serious attention.


this should be known by everyone


We should fully own what we buy; things like this are essential.


Couldn't this be solved by adding an external NAS, for redundancy, plus an open-source application or extension that manages the syncing?
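A minimal sketch of that syncing idea in Python, assuming the NAS is mounted as a local path. The function names and the hash-based change detection are my own invention here; in practice a tool like rsync or Syncthing does this far more robustly:

```python
import hashlib
import shutil
from pathlib import Path

def file_digest(path: Path) -> str:
    """Content hash, so unchanged files are skipped regardless of timestamps."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def sync_to_nas(source: Path, nas_mount: Path) -> list[Path]:
    """One-way sync: copy any file that is missing or differs on the NAS.

    Returns the list of destination paths that were (re)copied.
    """
    copied = []
    for src in source.rglob("*"):
        if not src.is_file():
            continue
        dst = nas_mount / src.relative_to(source)
        if not dst.exists() or file_digest(src) != file_digest(dst):
            dst.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(src, dst)  # copy2 preserves mtime/metadata
            copied.append(dst)
    return copied
```

Running it twice in a row copies nothing the second time, which is the property you want from a background sync loop.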

Making self-hosting more seamless is key; in the long term we simply can't afford to depend on third parties for access to our own data.


If you already have a NAS, I'm not sure what this does for you that just getting a bigger NAS wouldn't?


If you don't have your data when you can't reach third parties, you don't own your data.

We should start switching to solutions like this to keep control and freedom.

You can find their open-source repos here: https://github.com/getumbrel


> you can find their opensource repos here: github.com/getumbrel

Not all Umbrel repos are OSS (and that's okay): https://blog.getumbrel.com/everything-you-need-to-know-about... / https://archive.vn/4M4xO


Behind Italy’s Parmigiano Reggiano is a radical tradition of cooperatives. In some areas, they make up nearly a fifth of GDP without leaving democracy outside the factory's doors.



Google's removal of the num=100 parameter last month makes it harder for third-party tools (e.g. ChatGPT, Perplexity) to access results beyond the first 10.

This is severely hurting the visibility of any new, emerging site, as we can already see in the data.


One thing worth stressing is that the witness + executor layer is the critical trust boundary here.

In classic Ethereum, bugs are noisy: if one client diverges, other clients complain, and consensus fails until fixed.

In zk Ethereum, bugs can be silent: the proof validates the wrong execution and everyone downstream accepts it as truth.

By "witness" I mean something like a transcript of everything the EVM touched while running a block: contract code, storage slots, gas usage, etc. You can replay the block later using only this transcript, without needing the full Ethereum state.

For security, that witness ideally needs to be cryptographically bound to the block (e.g., via Merkle commitments), so no one can tamper with it.
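A toy illustration of that binding, with made-up witness entries. Real Ethereum commits to state via Merkle-Patricia (and eventually Verkle) structures, not this simplified binary tree; this only shows why tampering with any entry is detectable:

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves: list[bytes]) -> bytes:
    """Merkle root over witness entries; changing any entry changes the root."""
    if not leaves:
        return h(b"")
    layer = [h(leaf) for leaf in leaves]
    while len(layer) > 1:
        if len(layer) % 2:
            layer.append(layer[-1])  # duplicate last node on odd-sized layers
        layer = [h(layer[i] + layer[i + 1]) for i in range(0, len(layer), 2)]
    return layer[0]

# A toy "witness": every storage slot / code blob / gas figure the EVM touched.
witness = [b"slot:0x00=42", b"code:0xdeadbeef", b"gas:21000"]
commitment = merkle_root(witness)  # imagine this living in the block header

# Anyone replaying the block first checks the witness against the commitment.
assert merkle_root(witness) == commitment
tampered = [b"slot:0x00=43"] + witness[1:]
assert merkle_root(tampered) != commitment
```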

The executor is the piece that replays that transcript deterministically. If it does so correctly, then you can generate a zk proof saying “this block really executed as Ethereum says it should.” But correctness here isn’t binary, it means bit-for-bit agreement with the Yellow Paper and all EIPs, including tricky cases like precompile gas rules. So the danger is in the details. If the witness omits even one corner case, or the executor diverges subtly, the zk system can still generate a perfectly valid proof, but of the wrong thing. zk proofs don’t check what you proved, only that you proved it consistently. In today’s consensus model, client bugs show up quickly when nodes disagree.
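A toy sketch of why such a bug is silent. The transcript format, the MODEXP mischarge, and the `prove` stand-in are all invented for illustration; a real zk prover is vastly more complex, but shares the key property that it attests to whatever the executor computed:

```python
# Toy "executor": replays a transcript of (opcode, gas) steps and returns
# total gas used. The two versions differ only in one corner case.

def execute_correct(transcript):
    return sum(gas for op, gas in transcript)

def execute_buggy(transcript):
    # Subtle divergence: mischarges one precompile-style opcode by 1 gas.
    return sum(gas if op != "MODEXP" else gas - 1 for op, gas in transcript)

def prove(executor, transcript):
    """Stand-in for a zk prover: it faithfully attests to whatever the
    executor computed; it cannot tell whether the executor itself is right."""
    return {"claimed_gas": executor(transcript)}

block = [("PUSH1", 3), ("MODEXP", 200), ("STOP", 0)]
honest = prove(execute_correct, block)   # claims 203 gas
silent = prove(execute_buggy, block)     # claims 202 gas, just as "provably"
```

Both results would verify fine downstream; nothing inside the proof system flags that 202 is wrong. Only comparing against an independent implementation would.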

So while the compilation and toolchain work here is impressive, the real challenge is making sure the witness and executor are absolutely faithful to Ethereum semantics, with strong integrity guarantees. Otherwise you risk building cryptographic certainty, but about the wrong computation. This makes the witness/executor correctness layer, in my view, the single point of failure where human fallibility can undermine mathematical guarantees. I'm looking forward to seeing how this problem will be tackled.


I guess one approach would be to have multiple independently developed provers and use them all for each proof. You'd spend more computation doing proofs, but you wouldn't slow the network down, since you could do it in parallel.


The comment you're replying to is worried about the opposite case: where the proof is good, but the computation being proved is faulty. The analog would be to have the same prover prove execution of multiple node implementations.
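A sketch of that cross-checking idea, with two invented toy "executors" standing in for independent node implementations. Results are only handed to the prover if all implementations agree, so a silent single-executor bug becomes a loud divergence, much like today's multi-client safety net:

```python
def exec_a(steps):
    # One independent implementation: sums the gas of each step.
    return sum(steps)

def exec_b(steps):
    # Another implementation, with a hypothetical off-by-one bug on a
    # specific corner case (a step costing exactly 7 gas).
    return sum(steps) + (1 if 7 in steps else 0)

def cross_check(executors, steps):
    """Run every executor on the same witness; refuse to prove on divergence."""
    results = [ex(steps) for ex in executors]
    if len(set(results)) != 1:
        raise RuntimeError(f"executor divergence: {results}")
    return results[0]  # safe to hand to the prover

assert cross_check([exec_a, exec_b], [1, 2, 3]) == 6  # agreement: proceed
```

On the corner case `[7]` the two implementations disagree (7 vs 8), and `cross_check` raises before any proof is generated.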


Thank you for highlighting this important tradeoff!

> In zk Ethereum, bugs can be silent: the proof validates the wrong execution and everyone downstream accepts it as truth.

Are there any write-ups by folks who have run into this scenario? Maybe Linea while developing their zkEVM?


You have more control, in theory, on a cellphone, and so do the people around you. With the glasses you really have no way to tell if they are listening or watching what you see. A phone's sensors are, most of the time, partially blocked by a bag or a pocket, so it really can't be compared with eyewear.

