Hacker News | aledevv's comments

> During the 40 years since the disaster, it has become clear that many species are living quite happily within the 37-mile-wide (60km) exclusion zone set up around the ruined power plant. But that's not to say nature hasn't changed here – sometimes for the worse.

So… the radiation has had virtually no impact on the natural ecosystem's regrowth?

Not only that: we've always been told about the disastrous consequences of nuclear radiation, but, according to the BBC article (by Chris Baraniuk), that's not the case.

I don't know... I'm quite perplexed.


Nobody's measuring cancer rates in wild animals.

Due to our long lifespan, humans are relatively vulnerable to radiation, radioactive materials, and other bioaccumulative poisons. A fish might not accumulate enough mercury to kill itself over its lifetime, but when you eat one every day it all adds up.

This was why the disaster was so bad for so many farmers across Europe: https://www.bbc.com/news/uk-wales-36112372 ; the caesium is not enough to kill a sheep, which lives only a year or two before slaughter, but the contaminated meat should not be consumed by humans.
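Back-of-the-envelope, the "it all adds up" point is just a geometric series: with first-order elimination, a steady daily intake accumulates toward a body burden far larger than any single dose. A minimal sketch, with purely illustrative numbers (the intake units and the assumed 100-day biological half-life are made up for the example, not real toxicology data):

```python
daily_intake = 1.0                         # arbitrary units ingested per day
half_life_days = 100                       # assumed biological half-life
retention = 0.5 ** (1 / half_life_days)    # fraction still retained after one day

burden = 0.0
for _ in range(2000):                      # ~5.5 years of daily consumption
    burden = burden * retention + daily_intake

# Closed form: the geometric series converges to intake / (1 - retention).
steady_state = daily_intake / (1 - retention)
```

With these numbers the steady-state burden is roughly 145 times the daily intake, which is why a dose that is harmless once can matter when consumed every day.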


The man-made radioactive isotope caesium-137 can be detected in the bodies of all living humans, and it was there even before the Chernobyl accident. The first nuclear explosion in 1945 spread the isotope caesium-137, for the first time, over the whole planet. Our methods of detecting caesium-137 are so sensitive that they can be used to check whether a bottle of wine was produced before 1945.

https://www.npr.org/sections/thesalt/2014/06/03/318241738/ho...

Of course there were radionuclides in our bodies even before the first nuclear test in 1945, for example potassium-40 or carbon-14. The presence of carbon-14 in organic matter is the basis of the radiocarbon dating method used to date archaeological, geological, and hydrogeological samples.

The big question is how much of these radionuclides is safe and how much poses a health risk.

https://en.wikipedia.org/wiki/Dose%E2%80%93response_relation...

https://en.wikipedia.org/wiki/Radiation_dose


In addition to that, if a quarter of animals die prematurely from some horrible disease, that’s just Tuesday. People tend to get upset when that happens among humans.

For a long time there was a serious debate over whether wild animals actually experienced aging or not, because they’d never live long enough to get noticeably aged.


Well.

There are dogs roaming around the Buryakovka nuclear waste storage facility. About ten years ago I was told that their average lifespan was in the ballpark of three years. Make of that what you will.

OTOH Przewalski's horses are just thriving in the Zone!


That sounds quite accurate. The average lifespan of a feral cat in the wild is said to be a year or two. Much shorter than the domestic equivalent.

He didn't say that, though. He said many species are living quite happily, but that nature has also changed, sometimes for the worse.

> All of these features are about breaking the coupling between a human sitting at a terminal or chat window and interacting turn-by-turn with the agent.

This means:

- less and less "man-in-the-loop"

- less and less interaction between LLMs and humans

- more and more automation

- more and more decision-making autonomy for agents

- more and more risk (i.e., LLMs' responsibility)

- less and less human responsibility

Problem:

Tasks that require continuous iteration and shared decision-making with humans face two options:

- either they stall until human input

- or they decide autonomously at our risk

Unfortunately, automation comes at a cost: RISK.


AI-driven cars have better risk profiles than human drivers.

Why do you think the same won't also be true for AI steerers/managers/CEOs?

In a year or two, having a human in the loop, with all of their biases and inconsistencies, will be considered risky and irresponsible.


"Did the vehicle just crash" has a short feedback loop, very amenable to RL. "Did this product strategy tank our earnings/reputation/compliance/etc" can have a much longer, harder to RL feedback loop.

But maybe not that much longer; METR's measured task lengths are still straight lines on log graphs.


The AI has read all the business books, blogs and stories.

Unless your CEO is Steve Jobs, it's hard to imagine it being much worse than your average pointy haired boss.


> The AI has read all the business books, blogs and stories.

This seems like a liability as most business books, blogs, and stories are either marketing BS or gloss over luck and timing.

> Unless your CEO is Steve Jobs, it's hard to imagine it being much worse than your average pointy haired boss.

As someone using AI agents daily, this is actually incredibly easy to imagine. It's actually hard to imagine it NOT being horrible! Maybe that'll change, though... if gains don't plateau.


But they are shit. Over the last two days I've grown bored of the predictable cycle: it first gets excited about a new idea, then backpedals once I shoot it to pieces.

They can't write and think critically at the same time. Then subsequent messages are tainted by their earlier nonsensical statements.

Opus 3.7 BTW, not some toy open source model.


Getting to that point is likely going to involve a lot of (the business and personal equivalent of) Teslas electing to drive through white semitrailers.

> AI driven cars have better risk profiles than humans.

From which company? I hope you say "Waymo", because Tesla is lying through its teeth and hiding crash statistics from regulators.


Let's not forget that Waymo requires an extensive, custom mapping and software/pre-training development process for every new city they operate in, are only in 10 cities total after over 20 years, and are still nowhere near profitability (or even with a clear plan to get there as far as I can tell).

I personally believe widely available self-driving cars which don't operate at a loss will continue to elude us until we accept the tradeoffs of dedicated lanes, a standardized vehicle-to-vehicle communication protocol, and roadside sensors. We were lied to.


For a fraction of the cost of developing self-driving cars we could have self-driving trains/trams/subways and most likely minibuses as part of public transportation networks.

And self-driving minibuses would basically provide 95% of the benefits of self-driving buses. They could offer 24/7 frequent service with huge coverage, we already have dedicated bus lanes in many places (and we could scale dedicated bus lanes much faster than dedicated self-driving car lanes), etc.

Now, I understand that in many places (especially the US) this is infeasible because public anything = communism.


Folks in the US are happy to spend tax dollars on roads, it's just that mass transit spending is considered communism.

To be fair to the anti-train crowd, we've been led so far down this disastrous path of car-led sprawl that the hope of even building feasible buses that can reach into the byzantine suburbs is unlikely.

So, maybe our best hope is self-driving EVs? At least in our lifetimes.


Or autonomous weapons?

Only vintage-style images?

If they put up a pricing page, I think someone would buy it, especially nowadays when embedded LLMs create a huge hunger for RAM (as well as CPU). :))

I was thinking... will the x402 protocol make it super easy for scammers to commit such fraud in the future, by gaming the online searches LLMs perform and tricking them into spending money?

2027. Just-in-time built software and hardware.

> VCs explicitly use stars as sourcing signals

In my opinion, nothing could be more wrong. GitHub stars are easily manipulated and measure not the quality of the project itself but its popularity. The problem is that popularity is rarely proportional to the quality of the project itself.

I'm building a product, and I'm seeing that what matters is distribution and communication rather than the development itself.

Unfortunately, a project's popularity is often directly proportional to the communication "built" around it and inversely proportional to its actual quality. This isn't always the case, but it often is.

Moreover, adopting effective and objective project evaluation tools is quite expensive for VCs.


The vast majority of mid-level experienced people take stars very seriously and won't use anything under 100 stars.

I'm not supporting this view but it is what it is unfortunately.

VCs that invest based on stars either know something I don't, or they're just bad investors.

IMO choosing projects based on star count is terrible engineering practice.


I've seen the same devs refuse to use a library because the last commit was 3 months ago, despite the library being extremely popular, battle tested, and existing for 10 years.

Also, and above all, because it can be easily manipulated, as the research described in the article actually demonstrates.

> measure not necessarily the quality of the project itself, but rather its Popularity

Surely a project's popularity is often related to its utility. A useful and popular project seems like exactly the kind of thing a VC might be interested in.


Well, pretty sure that VCs are more interested in popularity than in quality so maybe it's not such a bad metric for them.

Yes, you're right, but popularity becomes fleeting without real quality behind the projects.

Hype helps raise funds, of course, and sells, of course.

But it doesn't necessarily lead to long-term sustainability of investments.


> All "access control" logic lived in the JavaScript on the client side, meaning the data was literally one command away from anyone who looked

This takes the cake!

This is a typical example of someone using coding agents without being a developer: AI used without awareness can be a huge risk if you don't know what you're doing.

AI used for professional purposes (not experiments) should NOT be used haphazardly.

And this also opens up a serious liability issue: the developer perceives themselves as exempt from responsibility, and this creates enormous risks for the business.


The problem isn't AI; the problem is the lack of an intelligent person somewhere in this whole situation. Long before AI, I saw a medical company build a service where the frontend told the backend which SQL queries to execute.


“You’re just holding it wrong”


Also it’s the wrong tool for this kind of work.

Claude, opencode, etc. are brute-force coding harnesses that literally use bash tools plus a whole bunch of vague prompting (skills, AGENT.md, MCP, and all that) to nudge the model probabilistically toward desirable behavior.

Without engineering specialized harnesses that control workflows and validate output, this issue won't go away.

We're in the wild west phase of LLM usage now, where problems emerge that shouldn't exist in the first place and are being solved at the entirely wrong layer (outside of the harness) or with the entirely wrong tools (prompts).
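As a sketch of what validating output outside the prompt layer could look like: a loop that machine-checks each generation and retries on failure. All names here are hypothetical, and `generate` is a stand-in simulating a real model API call:

```python
import json

def generate(prompt, attempt):
    """Stand-in for an LLM call; simulates a model that returns
    broken output on the first attempt and valid JSON afterwards."""
    return "not json" if attempt == 0 else json.dumps({"status": "ok"})

def validate(output):
    """Machine check on the output, enforced outside the prompt layer."""
    try:
        data = json.loads(output)
    except json.JSONDecodeError:
        return None
    return data if "status" in data else None

def run(prompt, max_attempts=3):
    """Harness loop: regenerate until the output passes validation."""
    for attempt in range(max_attempts):
        result = validate(generate(prompt, attempt))
        if result is not None:
            return result
    raise RuntimeError("no valid output after %d attempts" % max_attempts)

report = run("draft a status update")
```

The point is that the contract ("output must be valid JSON with a status field") is enforced by code the model cannot talk its way past, rather than by more prompting.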


On the fourth example photo, the model says:

>They likely share an agnostic worldview and identify as heterosexual.

I wonder how the model would know that they are heterosexuals?

Let's be careful about categorizing people so easily and in such a simplistic way.


Many homosexuals are visually identifiable as such (with reasonable certainty), some by accident and some by deliberate signalling. I can easily see how the absence of any such signals could end up as a classification as heterosexual, even though it really should put them in the "unknown" category.

Of course any automated classification of that kind quickly gets problematic in multiple ways. In the EU it's a fast-track to getting your AI labeled as a "high risk AI system" that has higher requirements for quality control, ensuring fairness and user choice, etc


Tagged both me (male) and my male partner as heterosexuals. I think there is still some learning to do on that front. Rainbow merchandise has not been as widely adopted as you might think.


A bit like "they do not have cancer": if you are fitting to a distribution, you get the best results by predicting the average. Being hetero is the majority/average, so it's a good prediction.

But doing this on a 20-way parlay like in this case will almost always fail.
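The parlay arithmetic is easy to check: even if each attribute guess is 95% accurate, the chance of getting all 20 right is under 36%. A one-liner, assuming the guesses are independent:

```python
p_single = 0.95        # accuracy of each individual guess
n_guesses = 20         # attributes predicted per person
p_all_correct = p_single ** n_guesses   # chance every guess is right
```

So a model that sounds uncannily accurate per attribute still gets the full profile wrong most of the time.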


> I wonder how the model would know that they are heterosexuals?

It’s about 95% likely to be correct, which is very effective at scaring statistically illiterate people.


[flagged]


This doesn't even pass a basic logic test: why would being wounded make us seek something we want in ourselves, and being whole make us seek something we aren't? There are plenty of people of any gender who have any quality you may be seeking.

You can't just make something up in your head and apply it to everyone.


ok


I propose a further and different "key to understanding."

I would add: the second thing to decide, besides the scale, is the plane.

What do we mean, for example, by the "ethical plane"? By ethical plane, I mean the purpose: WHAT do I use mathematics for?

Mathematics can be something immensely BIG if I use it for something important, or miserably SMALL if I use it for something petty and trivial.

In short: even in this case, greatness depends not only on the scale, but also on the eyes of the beholder, on the context in which it is applied, and, why not, also on the purpose and the ethical plane.

If mathematics were, for example, something at the service of Justice, it would be something immensely Big.


It sounds like you ain't a fan of recreational mathematics?


I'm working on an AI RAG (retrieval-augmented generation) system: https://longtermemory.com

It's a tool that uses Qdrant, a vector database, to embed text chunks: an LLM API is queried to generate Q&A pairs from the chunked texts.

Each chunk is then embedded and stored in the vector database, which improves Q&A generation thanks to better context information.

The tool helps people study anything, aided also by a spaced-repetition algorithm.
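For readers unfamiliar with the pipeline, the core loop is: chunk, embed, store, then retrieve by vector similarity. This toy sketch only illustrates that shape; the hashed bag-of-words `embed` and the in-memory `VectorStore` are stand-ins for a real embedding model and for Qdrant:

```python
import math
import zlib
from collections import Counter

def embed(text, dim=64):
    """Toy stand-in for an embedding model: hashed bag-of-words, L2-normalized."""
    vec = [0.0] * dim
    for word, count in Counter(text.lower().split()).items():
        vec[zlib.crc32(word.encode()) % dim] += count
    norm = math.sqrt(sum(x * x for x in vec)) or 1.0
    return [x / norm for x in vec]

def chunk(text, size=7):
    """Split text into fixed-size word windows."""
    words = text.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

class VectorStore:
    """In-memory stand-in for Qdrant: stores (vector, payload) pairs
    and retrieves by cosine similarity (dot product of unit vectors)."""
    def __init__(self):
        self.points = []

    def upsert(self, vector, payload):
        self.points.append((vector, payload))

    def search(self, query_vector, limit=3):
        scored = sorted(
            self.points,
            key=lambda p: -sum(a * b for a, b in zip(p[0], query_vector)),
        )
        return [payload for _, payload in scored[:limit]]

# Index two chunks, then retrieve the one closest to a query.
store = VectorStore()
corpus = ("Spaced repetition schedules reviews at growing intervals. "
          "Vector databases index embeddings for similarity search.")
for c in chunk(corpus):
    store.upsert(embed(c), {"text": c})

hits = store.search(embed("embeddings similarity search"), limit=1)
```

In the real system, the retrieved chunks would be fed to the LLM as context when generating Q&A pairs, instead of being returned directly.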

