Hacker News | LeroyRaz's comments

I don't think things get better over time. What is your source for that? Here's an article (with sources) describing a massive down trend in literacy and reading comprehension: https://jmarriott.substack.com/p/the-dawn-of-the-post-litera...

In short, college students nowadays have lower reading comprehension than young children in the 1850s. That is not what I would call progress.

Speaking personally, I believe I would potentially have significantly worse critical reasoning abilities if I had grown up using LLMs. The temptation to use them as an ersatz for engagement and thought is very clear to me.

I think you are perhaps conflating technological progress (yes technology has improved) with demographic progress. Demographic progress is far from monotonically increasing (reading comprehension is newly plummeting, maths scores are dropping in America, science per scientist is stalling compared to 50 years ago, etc...)


Be careful. The models easily hallucinate problems and misdiagnose. For example, I had an issue with some GPU code, and the model assured me, with utter conviction, that my problem was caused by some subtle race condition ('a known issue') that it described in great detail, when the real issue was just a trivial typo - no race condition, no subtlety or complexity.


You are making a completely false equivalence.

A car person would be some kind of car-person hybrid if you read it literally. "Car person" is acting as shorthand for "car-obsessed person." Car is a noun, blind is an adjective, etc...


There's no such thing as capitalism without government. Depending on how the government regulates capitalism, you don't necessarily get wealth inequality.


Chat Control seems unabashedly evil.


This particular type of authoritarianism is primarily from the left in the west. The EU, the UK, Canada, it is the political left implementing these policies, often using them to censor right wing views (e.g., Objecting to immigration gets labelled racist -> justified censorship. Objecting to trans women in women only spaces gets labelled as hateful -> justified censorship, etc...).


I don't understand how removing right wing users is described as a necessary good to create a safe and inclusive space, while the removal of left wing users is decried as censorship. If you dismiss people who have genuine concerns about things as bigoted, you yourself are being bigoted! It is as though people don't understand the definition of the word.


[flagged]


So you can frame anyone you disagree with as "intolerant", and then you are free to be intolerant to them.

Also I can be intolerant to you, because you are intolerant (to people you think are intolerant).

This is neither smart nor moral. This is logically inconsistent and breeds conflict almost by definition.


>Also I can be intolerant to you, because you are intolerant (to people you think are intolerant).

It's never been more clear to me that you didn't click the link


Who defines intolerance though? What if you call me intolerant because I do not like your favourite programming language, and have dared criticize it - is that OK?

It's a recursive paradox: an intolerant person cannot define what intolerance is, but that's what often happens in reality.


There are quite a few solutions to the paradox of tolerance. One of which was even articulated by the guy that coined the term, which the wiki link covers.

I also recommend

https://medium.com/extra-extra/tolerance-is-not-a-moral-prec...


The article mostly reframes tolerance as a strategy for peace, one which will be given up when war is unavoidable. It's not so much a solution to the problem of tolerance as a description of the motivations of those who deploy it.

In effect it also rejects the concept of an intolerant person, as there are only people with different views about what constitutes an acceptable state of peace and what represents a real and present danger.

If the left view the right as jeopardizing safety, and the right view the left as jeopardizing safety, then existential conflict is inevitable and tolerance as a strategy is dropped by both.


That is a good point. If we analyze this article called “Tolerance Isn’t A Moral Precept” with the understanding that it is arguing under the assertion that it actually is a moral precept and then imagine in our minds an enormous number of people that have no concept of “clear and present danger” at all, it is very troublesome indeed. In a vein that has very similar relevance to the article and discussion at hand, the fact that there is a Popemobile in Cars 2 means that the failed 1981 assassination attempt of Pope John Paul II didn’t just theoretically happen in the Cars universe, it objectively happened in the Cars universe. Now, it is clear that we do not know if that pope was man or Car but we must posit tha


"clear and present danger" is a concept used by the legal system to define when speech is or isn't legal, and the bar used is generally fairly high. Still, we usually deploy a jury or a team of judges to determine this in order to really be sure to get it right. It is not something that is easily defined in an objective sense.

"clear, specific and concrete danger" is something that may be a better set of criteria. If one can not specify the danger, or if it is not concrete, then its arguable not very clear or present.


Exactly! If we imagine that people only exist on paper then we can pretend that these made up people that have no real-world counterparts don’t know what a clear and present danger is. With that in mind it is obvious that tolerance isn’t possible.

Now, once you consider that the people that I’m imagining are nearly-spherical but covered in spikes and all of them are in the bottom of a big slippery bowl, you can see how they’re always causing harm to one another. Also in this scenario they can only communicate in grunts and as such they rely on me - the dungeon master judge - to resolve conflicts through a complex legal system that I made up. Sadly I am powerless to do anything about the spikes or how slippery the bowl is, so you can guess how that works out.

To all the people that talk about a “social contract” I say “put that in your pipe and smoke it”!



Intolerance is the wrong framing. You should be understanding. People who throw the label "bigot" around are themselves bigots (convinced of the superiority of their own beliefs without engaging with others' views). A lot of people can have genuine disagreements with certain ideologies (e.g., they might think that transitioning minors is net negative for society) but also be open to dialogue. Such people are not bigots. Those who disagree with the left on any given point may or may not be wrong, but disagreement is not bigotry. There is no paradox there.

There is also a difference between speech and action. As a society, we should allow all speech (e.g., people questioning authority), but suppress certain actions (e.g., violence). Currently, the American left seems to believe that people voicing the wrong views justifies violence. That belief is abhorrent and fundamentally at odds with liberalism and a just and well functioning society.

Specifically, regarding tolerance of intolerance, I highly recommend a speech by Rowan Atkinson on exactly that topic. If you Google it you can probably find it. It is worth a watch. He is an incredibly intelligent and eloquent man.


"people who have genuine concerns about things", things like women, non-whites, LGBTIQ, especially trans, atheists, scientists, anti-fascists, even people who don't follow the MAGA party line, such as idolizing Charley Kirk, the list goes on. "Genuine concerns" which are expressed as undisguised or thinly disguised death threats in some cases.


To take the example of trans: yes. Multiple different groups have genuine concerns about trans issues. For example, you have trans people wanting to be respected and not victimized, you have women who fear men being in their spaces, you have parents who fear their children being harmed and encouraged to transition, you have female athletes concerned about unfair competition from people who went through male puberty, etc... None of these groups innately hate each other or are bigoted, they just have different opinions on what the best society should be like as shaped by their own experiences.

And all of these groups have a point. Obviously you do get examples of trans people being victimized, you do get examples of children regretting transitioning, you do get examples of unfair athletic competition, etc... It is a complete conspiracy theory to label all of the groups as made of hateful fake news spreading bigots. They are all just human beings, muddling through the world!


This is always how it goes, first it's just "genuine concerns", and soon enough you've designated a group who are to be the scapegoats with no right to live their own lives as they see fit.


Maybe I haven't worked on tricky enough problems, but I swear people massively hype up the difficulty of coding. The vast majority of the time there is little difficult problem solving involved.


The standard term is Machine Learning (ML). It's not artificial intelligence (AI) because there is no sense of attempting to manipulate concepts (e.g., the classic symbolic conceptual manipulation algorithms, or modern foundation models).


The article is misleading and badly written. None of the mentioned works seem to have used language or knowledge based models.

It looks like all the results were driven by optimization algorithms, and yet the writing describes AI 'using' concepts and "tricks". This type of language is entirely inappropriate and misleading when describing these more classical (if advanced) optimization algorithms.

Looking at the paper in the first example, they used an advanced gradient descent based optimization algorithm, yet the article describes "that the AI was probably using some esoteric theoretical principles that Russian physicists had identified decades ago to reduce quantum mechanical noise."

Ridiculous, and highly misleading. There is no conceptual manipulation or intuition being used by the AI algorithm! It's an optimization algorithm searching a human coded space using a human coded simulator.


Right - I hate that "AI" is just being used as (at best) a replacement term for ML, and very misleading for the public who is being encouraged to believe that some general purpose AGI-like capabiity is behind things like this.

The article is so dumbed down that it's not clear if there is even any ML involved or if this is just an evaluation of combinatorial experimental setups.

> The outputs that the thing was giving us were really not comprehensible by people,

> Adhikari’s team realized that the AI was probably using some esoteric theoretical principles that Russian physicists had identified decades ago to reduce quantum mechanical noise.

I'll chalk this one up to the Russians, not "AI".


This doesn't even seem to be ML, though.


I agree with your point, but I think it's worth noting that there's a real problem of language today both in popular and scientific communication. On the one hand, in popular understanding, there's the importance of clearly separating the era of "machine learning" as let's say Netflix recommendations from the qualitative leap of modern AI, most obviously LLMs. This article clearly draws on the latter association and really leads to confusion, most glaringly in the remark you note that the AI probably took up some forgotten Russian text etc.

However, scientifically, I think there's a real challenge to clearly delineate from the standpoint of 2025 what all should fall under the concept of AI -- we really lose something if "AI" comes to mean only LLMs. Everyone can agree that numeric methods in general should not be classed as AI, but it's also true that the scientific-intellectual lineage that leads to modern AI is for many decades indistinguishable from what would appear to be simply optimization problems or the history of statistics (see especially the early work of Paul Werbos where backpropagation is developed almost directly from Bellman's Dynamic Programming [1]). The classical definition would be that AI pursues goals under uncertainty with at least some learned or search‑based policy (paradigmatically but not exclusively gradient-descent of loss function), which is correct but perhaps fails to register the qualitative leap achieved in recent years.

Regardless -- and while still affirming that the OP itself makes serious errors -- I think it's hard to find a definition of AI that is not simply "LLMs" under which the methods of the actual paper cited [2] would not fall.

[1] His dissertation was re-published as The Roots of Backpropagation. Especially in the Soviet Union, important not least for Kolmogorov and Vapnik, AI was indistinguishable from an approach to optimization problems. It was only in the west where "AI" was taken to be a question of symbolic reasoning etc, which turned out to have been an unsuccessful research trajectory (cf the "AI winter").

[2] https://arxiv.org/pdf/2312.04258


"AI" is just a misleading and unhelpful term, exactly because it causes people to assume that there are properties we associate with intelligence (abstract thought, planning, motivations, emotions) present in anything given the term. That is easier to correct when someone is referring to a logistic regression. I think that "AI" has clung to LLMs because they specifically give the illusion of having those properties.


I would distinguish between:

- methods that were devised with domain knowledge (= numerical methods)

- generic methods that rely on numerical brute forcing to interpolate general behaviour (= AI)

The qualitative leap is that numerical brute forcing is at a stage where it can be applied to useful enough generic models.

There's a fundamental difference between any ML based method and, say, classic optimization. Let's take a simple gradient descent. This solves a very specific (if general) class of problems: min_x f(x) where f is differentiable. Since f is differentiable, someone had the (straightforward) idea of using its gradient to figure out where to go. The gradient is the direction of greatest ascent, so -grad(f) comes as a good guess of where to go to decrease f. But this is local information, only valid at (or rather in the vicinity of) a point. Hence, short of improving the descent direction (which other methods do, like quasi-Newton methods, which allow a "larger vicinity" of descent direction pertinence), the best you can do is iterate along x - h grad(f) at various h and find one that is optimal in some sense. How this is optimal is all worked out by hand: it should provide sufficient decrease, while still giving you some room for progression (not too low a gradient), in the case of the Wolfe-Armijo rules, for example.

These are all unimportant details, the point is the algorithms are devised by carefully examining the objects at play (here, differentiable functions), and how best to exploit their behaviour. These algorithms are quite specific; some assume the function is twice differentiable, others that it is Lipschitzian and you know the constant, in others you don't know the constant, or the function is convex...

Now in AI, generally speaking, you define a parametric function family (the parameters are called weights) and you fit that family of functions so that it maps inputs to desired outputs (called training). This is really meta-algorithmics, in a sense. No domain knowledge required to devise an algorithm that solves, say, the heat equation (though it will do so badly) or can reproduce some probability distribution. Under the assumption that your parametric function family is large enough that it can interpolate the behaviour you're looking after, of course. (correct me on this paragraph if I'm wrong)

To summarize, in my (classic numerics trained) mind, classic numerics is devising methods that apply to specific cases and require knowledge of the objects at play, and AI is devising general interpolators that can fit to varied behaviour given enough CPU (or GPU as it were) time.
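To make the contrast concrete, here is a minimal sketch (in Python with NumPy; the function names, constants, and polynomial family are my own illustrations, not taken from any paper discussed here). The first routine is classic numerics: a gradient descent hand-crafted around a known property of the objective (differentiability), with an Armijo-style backtracking line search. The second is the generic-interpolator style: pick a parametric family with no domain knowledge of the target and brute-force its weights to fit input/output pairs.

```python
import numpy as np

# Classic numerics: the method is designed around what we know about f
# (it is differentiable, so -grad(f) is a locally good descent direction).
def gradient_descent(f, grad, x0, steps=100):
    x = x0
    for _ in range(steps):
        g = grad(x)
        h = 1.0
        # Armijo-style backtracking: shrink the step until we observe
        # sufficient decrease of the objective.
        while f(x - h * g) > f(x) - 1e-4 * h * np.dot(g, g):
            h *= 0.5
        x = x - h * g
    return x

# Generic-interpolator style: a parametric family (here a cubic polynomial,
# an arbitrary choice) is fitted to (input, output) pairs by iterating
# gradient steps on a squared-error loss. Nothing here knows, or needs to
# know, what function generated the data.
def fit_parametric(xs, ys, degree=3, steps=2000, lr=0.5):
    w = np.zeros(degree + 1)        # the "weights" of the model family
    X = np.vander(xs, degree + 1)   # features: x^d, ..., x, 1
    for _ in range(steps):
        residual = X @ w - ys
        w -= lr * X.T @ residual / len(xs)  # gradient step on the loss
    return w

# Minimize f(x) = (x - 3)^2 the "classic" way:
xmin = gradient_descent(lambda x: (x - 3) ** 2, lambda x: 2 * (x - 3), 0.0)

# Interpolate y = x^2 the "generic" way, from samples alone:
xs = np.linspace(-1.0, 1.0, 50)
w = fit_parametric(xs, xs ** 2)
```

The point of the sketch is that `gradient_descent` encodes analysis of the problem class, while `fit_parametric` would be written identically whatever behaviour the data came from; only compute and the size of the family change.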

So, this article is clearly not describing AI as people usually mean it in academia, at least. I'll bet you $100 the authors of the software they used don't describe it as AI.


I think it's pretty clear that they suspect the mechanism underlying the model's output is the same as the mechanism underlying said theoretical principles, not that the AI was literally manipulating the concepts in some abstract sense.

I don't really get your rabid dismissal. Why does it matter that they are using optimisation models and not LLMs? Nobody in the article is claiming to have used LLMs. In fact the only mention of it is lower down where someone says they hope it will lead to advances in automatic hypothesis generation. Like, fair enough?


They write the "AI was probably using some esoteric theoretical principles." That is a direct quote of the article.

If it were an LLM based model this could be a correct statement, and it would suggest a groundbreaking achievement: the AI collated esoteric research, interpreted it correctly, and used that conceptual understanding to suggest a novel experiment. This might sound far fetched, but we already have LLM based systems doing similar things... Their statement is plausible given the current state of hype (and also plausible, though groundbreaking, given the current state of research).

In reality, the statement is incorrect. The models did not 'use' any concepts (and the only way to know that the article is wrong is to actually bother to consult the original paper, which I did).

The distinction matters: they implied something groundbreaking, when the reality is cool but by no means unprecedented.

Tldr: using concepts is not something classic ML algorithms do. The article thus erroneously implies a (groundbreaking) foundation model based (or similar) approach. I care because I don't like people being misled.


I think you're taking the statement way too literally. It's very clear to me what they are trying to communicate there - sure, you can read all sorts of things into a sentence like that if you try, but let's assume the best in people when there are unknowns, not the worst?

Again, the authors never said anything about language models. That's entirely on you.


What makes it clear to you that they don't mean what they explicitly write? What are you defending and why?

Philosophical discussions aside, it is entirely possible for current AI to use concepts (but the research they are describing does not employ that kind of AI).

I also think most lay people seeing the term AI are likely to think of something like ChatGPT.

What they write is a) literally incorrect and b) highly misleading to a lay person. Why are you defending their poor writing?


> What makes it clear to you that they don't mean what they explicitly write?

Because that's how language works - it's inherently ambiguous, and we interpret things in the way that makes the most sense to us. Your interpretation makes no sense to me, and requires a whole host of assumptions that aren't present in the article at all (and are otherwise very unlikely, like an AI that can literally work at the level of concepts).

> Why are you defending their poor writing?

I'm defending them because I don't think it's poor writing.


There are two ways to interpret the sentence we are discussing:

A: a grammatically incorrect statement, saying that "the AI used theory", when they mean that "the AI's design can be understood using theory" (or, more sloppily, "that the design uses the theory").

B: a grammatically valid if contentious-to-you statement about an LLM or knowledge graph based system (e.g., something like the AI Scientist paper) parsing theory and that parsing being used to create the experiment design.

As I have explained, B is a perfectly valid interpretation, given the current state of the art. It is also valid historically, as knowledge graph based systems have been around for a long time. It is also the likely interpretation of a lay person, who is mainly exposed to hype and AI systems like chatGPT.

Regardless, they a) introduce needless ambiguity that is likely to mislead a large proportion of readers. And b) if they are not actively misleading then they have written something grammatically incorrect.

Both findings mean that the article is a sloppy and bad piece of writing.

This particular sentence is also only a particular example of how the article is likely to mislead.


Okay - at this point I have nothing more to say. You think I'm misrepresenting Quanta's audience, and I think you're being needlessly pedantic. Doesn't seem like we're going to resolve this short of you "showing me the victim", so to speak. It didn't mislead me, it didn't mislead my partner, and it doesn't seem to have misled you either. So who are these "laypeople" who are injecting all this hidden meaning into the article?

Anyway, I don't think it's reasonable for me to ask you for evidence here, so let's just agree to disagree.


The thread is full of people debating what AI means and whether ML/optimization algorithms count as AI. Laypeople don't think of machine learning when they see AI, they think of chatbots. I would argue even for a techy magazine this is a bad term to use without spending two sentences to clarify the distinction.

Examples of people being confused:

rlt: The discovering itself doesn’t seem like the interesting part. If the discovery wasn’t in the training data then it’s a sign AI can produce novel scientific research / experiments.

wizzwizz4 in reply: It's not that kind of AI. We know that these algorithms can produce novel solutions. See https://arxiv.org/abs/2312.04258, specifically "Urania".

About a quarter of the comments here I just have to assume what definition of AI they're talking about, which changes the meaning and context significantly.


I deal with this at work all the time, and it drives me up the wall. Words have meaning! That's the point! If you cannot or will not say what you mean, your writing skills are poor. Precision matters. Lack of ambiguity matters. Perhaps in this case you were able to read between the lines and divine what the words truly meant, but forcing readers to do that is the mark of a bad writer. Not just because it's added time and mental effort for the reader, but because readers who failed to read between the lines will have no idea that you fed them a falsehood and now have an inaccurate understanding of whatever you were writing about.


It confused me sufficiently that I consulted the original paper to check what type of AI algorithm the research used! That was my immediate reaction to reading the sentence.

Out of curiosity, are you familiar with work like "the AI Scientist"? Having an LLM based AI suggest experiments based on parsing scientific literature is not outlandish.


This is the obvious consequence of funding agencies pouring money into “AI.”

