Hacker News | Veedrac's comments

More than anything it's a supply limit. Solar is consistently scaling about as fast as any manufacturing industry scales. The TAM is just big.

I think the idea is that in an always-on display mode, most of the screen is black and the rest is dim, so circuitry power budget becomes a much larger fraction of overhead.

Ohh like property tax on a vacant building

There's a very simple solution to this problem here. Instead of wink-wink-nudge-nudge implying that 100% is 'human baseline', calculate the median human score from the data you already have and put it on that chart.

It's below 1% lmao

Where did you get this 1%?

Maybe I am just out of my depth, but I don't understand what problem quantum Darwinism is solving. The Schrödinger equation already explains why observers seem to agree: the ones that don't are separated from each other.

This article is making some pilot-wave-like claim on top of quantum Darwinism that while the Schrödinger equation is real, all the 'real realness' exists in some pointer to a specific location inside it. Why does it do this? Where does this claim come from? At least collapse theories allow that the thing the Schrödinger equation is modelling is actually real up until the part God gets out his frustum culler.


I think the claim is this: the wave function never collapses. However, the effect of the wave function on the environment quickly converges to only one of the two states. We cannot know the difference because we cannot directly observe the wave function. We can only see the result as it is magnified onto a macro scale by our observation equipment (or, lacking that, our eyes, which themselves turn a tiny microscopic phenomenon into macro signals). Once that particular outcome has been 'selected' for, the probability of the other outcome becomes vanishingly small very fast. Thus, all future outcomes are that outcome, even though the underlying reality is still that fully entangled state.

Photons (and other objects that seem to behave 'quantumly') do not seem subject to this (and thus we can use them to understand quantum behavior) because they have particular properties such that their behavior is not affected by these macroscopic drop-offs quite as badly.
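A toy numeric sketch of the 'vanishingly small very fast' claim above (the numbers are mine, not from the thread): if each environment particle that interacts with the system ends up in one of two conditional states with overlap c < 1, the interference between the two branches is suppressed by c**N after N interactions, which dies off exponentially.

```python
import numpy as np

# Assumed toy model: each environment qubit ends up in conditional
# state e0 or e1 depending on which branch the system took.
# Their overlap <e0|e1> = c is slightly less than 1.
theta = 0.3
e0 = np.array([1.0, 0.0])
e1 = np.array([np.cos(theta), np.sin(theta)])
c = abs(e0 @ e1)  # overlap contributed by a single environment qubit

# Interference between the two branches is suppressed by c**N once
# N environment qubits carry a record of the outcome.
for n in (1, 10, 100, 1000):
    print(f"N={n:5d}  branch interference ~ {c**n:.3e}")
```

Even with a generous per-particle overlap of ~0.95, a thousand environment records drive the cross term far below anything measurable, which is the sense in which the 'other' outcome becomes practically invisible while still being formally present.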


My confusion is that this is just Many Worlds / the Schrödinger equation, and Quantum Darwinism doesn't seem to add anything that wasn't already obvious by inspection. But after reading more, I think that's kind of the point? It's ultimately just an argument for why the Schrödinger equation produces these locally classical regions, plus a bunch of overly flowery prose and dressing up in invented jargon that can mostly be ignored. I think the article failed to ignore that second part and ended up confused.


Many worlds is not the Schrödinger equation. No, I don't think this is many worlds. The decision is made uniquely and then amplified.


Many worlds is just the claim that the Schrödinger equation holds in actuality.

I don't think QD makes decisions 'uniquely'. Take this quote,

> The step from the epistemic (“I have evidence of |π17〉”.) to ontic (“The system is in the state |π17〉”.) is then an extrapolation justified by the nature of ρSℰ: Observers who detected evidence consistent with |π17〉 will continue to detect data consistent with |π17〉 when they intercept additional fragments of ℰ. So, while the other branches may be in principle present, observers will perceive only data consistent with the branch to which they got attached by the very first measurement. Other observers that have independently “looked at” S will agree.

https://pmc.ncbi.nlm.nih.gov/articles/PMC9689795/

Emphasis on "the other branches may be in principle present" — the claim at least in this paper can't be that all branches agree, just that they agree locally.


Without defining what 'actuality' is, there's no meaning to 'the Schrödinger equation holds in actuality'. In their own way, all interpretations of quantum mechanics claim the Schrödinger equation holds in 'actuality'. Some view probability and potential as a claim on 'actuality'. Others dismiss this, view probability skeptically, and claim it must thus be true. This is an ontological argument, not a scientific one.


If you don't like the word 'actuality', I can rephrase. Many worlds is just the claim that physical reality materially evolves in correspondence with the Schrödinger equation.

If you want to quibble over what it means for something to be material, go ahead, but unless you can tie it to some specific claim being made about QD I don't really know what the exercise gets you.


This is missing the primary reasons insider trading is bad, which are that it's an information theft incentive against employers, and worse, that it's a sabotage incentive.


> From what I've seen, models have hit a plateau where code generation is pretty good...

> But it's not improving like it did the past few years.

As opposed to... what? The past few months? Has AI progress so broken our minds as to make us stop believing in the concept of time?


Yes a strange comment. Opus 4.5 is significantly better than before and Opus 4.6 is even better. Same with the 5.2 and 5.3 Codex models.

If anything, the pace has increased.

This may be one of the most important graphs to keep an eye on: https://metr.org/. It tracks well with my anecdotal experience.

You can see the industry did hit a bit of a wall in 2024, where improvements dropped below the log trend. However, in 2025 the industry is significantly _above_ the trend line.


Are you seeing any meaningful improvements to anything you use, though? Have self-driving cars become really cheap and commonplace? Has medicine improved? Is Netflix giving us an abundance of cheap, really good content to watch? How is your AI doctor?

The geeks are telling us the LLMs are great, but that's about it.

I'm seeing way more AI-generated YouTube thumbnails... I know you will say "give it time", but I'm pretty convinced the problems AI solves are not the hard problems required to boost an economy.


The wild thing is, that "plateau" link is from September 2025, aka two months before Opus 4.5.

Yeah, it's not a plateau.


I see these claims in a lot of anti-LLM content, but I’m equally puzzled. The pace of progress feels very fast right now.

There is some desire to downplay or dismiss it all, as if the naysayers are going to get their “told you so” moment and it’s just around the corner. Yet the goalposts for that moment just keep moving with each new release.

It’s sad that this has turned into a culture war where you’re supposed to pick a side and then blind yourself to any evidence that doesn’t support your chosen side. The vibecoding maximalists do the same thing on the other side of this war, but it’s getting old on both sides.


Yeah, I feel that too. It'd be great if people acknowledged the progress without turning it into polarized movements and numerous discussions about how we all lag behind...


What I feel is that people are claiming progress is being made, but on what front?

The machines might be producing more code at a faster rate, but what has that actually amounted to?


I mean, if you compare now to a year ago, then a year ago to two years ago, and then two years ago to three years ago, wouldn't you see whether or not there's a plateau in effectiveness?

I still have several projects I developed in mid-2024 where I felt the AI was really close but not quite good enough for production, and almost two years in, the models haven't gotten appreciably better to the point where I would be able to release an actual application.


This rule isn't internalizing an externality.


Have you been around a Waymo as a pedestrian? Used one recently? I have never felt as safe around any car as I do around Waymos.

It can feel principled to take the critical stance, but ultimately the authorities are going to have complete video of the event, and penalizing Waymo over this out of proportion to the harm done is just going to make the streets less safe. A 6mph crash is best avoided, but it's a scrap, it's one child running into another and knocking them over; it's not _face jail time_ territory.


> the companies owning and operating those AIs would go out of business as no one would be able to afford the products made by the AIs

What do you think money is...?

Money is a way to indirectly trade labour and goods. If a job is automated, that labour doesn't disappear into the aether; it's still in the tradable pot of total goods and services. You cannot empty a pot by filling it. A world where a company, through automation, has left nobody else to productively sell to is a world where _by definition_ it owns all the output it could otherwise have traded for.


> The volume of space from the ground to 50,000 feet is about 200x smaller than the volume from the Karman line to the top of LEO alone (~2,000 km).

Volume is the natural way to assume space scales, but it's incorrect. Two planes can fly parallel, side by side. Two satellites cannot orbit side by side.

In the limit, if Earth had a solid ring of infinitesimal width, it would take up zero volume but use up all orbits.
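For what it's worth, the quoted ~200x figure checks out to the right order of magnitude. A quick sanity sketch, using round numbers I'm assuming (Earth radius 6371 km, 50,000 ft = 15.24 km, Karman line at 100 km, top of LEO at 2,000 km):

```python
from math import pi

R = 6371.0  # assumed mean Earth radius, km

def shell_volume(h_lo, h_hi):
    """Volume of the spherical shell between altitudes h_lo and h_hi (km)."""
    return 4.0 / 3.0 * pi * ((R + h_hi) ** 3 - (R + h_lo) ** 3)

aviation = shell_volume(0.0, 15.24)    # ground to 50,000 ft
leo = shell_volume(100.0, 2000.0)      # Karman line to top of LEO
print(f"LEO/aviation volume ratio: {leo / aviation:.0f}x")
```

With these numbers the ratio comes out around 170x, in the same ballpark as the quoted ~200x, which is exactly why volume is the misleading intuition the comment is pushing back on.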

