Hacker News | NateEag's comments

I think they meant "Nobody knows why LLMs work."


Same thing? The how is not explainable; this is just pedantic. Nobody understands LLMs.


Because they encode statistical properties of the training corpus. You might not know why they work, but plenty of people do: they understand the mechanics of approximating probability distributions w/ parametrized functions, even as it gets sold as a panacea for stupidity & the path to an automated & luxurious communist utopia.
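To make the "encodes statistical properties of the corpus" claim concrete, here is a toy sketch: a bigram model that estimates next-word probabilities from raw counts. This is obviously nothing like a real LLM (no neural network, no parameters), and the corpus and names are invented for illustration, but it is the statistical idea being gestured at.

```python
# Toy bigram model: estimate P(next word | current word) from counts.
# Purely illustrative; real LLMs approximate these distributions with
# parametrized functions rather than explicit count tables.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ran".split()

# Count how often each word follows each other word.
counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def next_word_probs(word):
    """Normalize the counts for `word` into a probability distribution."""
    total = sum(counts[word].values())
    return {w: c / total for w, c in counts[word].items()}

# "the" is followed by "cat" twice and "mat" once in this corpus.
print(next_word_probs("the"))
```

The point of contention in the thread is not whether this mechanism is understandable (it plainly is), but whether understanding the mechanism at this level explains the behavior of a model with billions of parameters.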


My goodness. Please introduce me to this "plenty of people". I'm in the field, and none of them work with me.

But I can tell you that statistics and parametrized functions have absolutely nothing to do with it. You're way out of your depth, my friend.


Yes, yes, no one understands how anything works. Calculus is magic, derivatives are pixie dust, gradient descent is some kind of alien technology. It's amazing hairless apes have managed to get this far w/ automated boolean algebra handed to us from our long forgotten godly ancestors, so on & so forth.


No, this is false. No one understands. Using big words doesn't change the fact that you cannot explain, for any given input-output pair, how the LLM arrived at the answer.

Every single academic expert who knows what they are talking about can confirm that we do not understand LLMs. We understand atoms, and we know the human brain is made 100 percent of atoms. We may know how atoms interact and bond, and how a neuron works, but none of this allows us to understand the brain. In the same way, we do not understand LLMs.

Characterizing ML as some statistical approximation or best-fit curve is just using an analogy to cover up something we don't understand. Heck, the human brain can practically be characterized by the same analogies. We. Do. Not. Understand. LLMs. Stop pretending that you do.


I'm not pretending. Unlike you, I do not have any issues making sense of function approximation w/ gradient descent. I learned this stuff as an undergrad, so I understand exactly what's going on. You might be confused, but that's a personal problem you should work to rectify by learning the basics.


omfg, the hard part of ML is proving back-propagation from first principles, and that's not even that hard. Basic calculus and application of the chain rule, that's it. Anyone can understand ML; not anyone can understand something like quantum physics.
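The "basic calculus plus the chain rule" claim can be shown in a few lines: fitting a single weight by gradient descent, with the gradient derived by hand via the chain rule. The data, learning rate, and step count below are arbitrary choices for illustration, not anything from the thread.

```python
# Minimal gradient descent: fit y = w * x to points drawn from y = 2x.
# The gradient is computed by the chain rule, not by autodiff:
#   L = (w*x - y)^2  =>  dL/dw = 2*(w*x - y) * x
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]
w = 0.0    # the single learnable parameter
lr = 0.05  # learning rate

for _ in range(200):
    grad = 0.0
    for x, y in data:
        pred = w * x
        grad += 2.0 * (pred - y) * x  # chain rule, summed over the data
    w -= lr * grad / len(data)        # gradient descent step

print(round(w, 3))  # converges to 2.0
```

Which is exactly the other poster's point: the update rule is trivial to state and verify, while the dispute is about whether that makes the trained artifact's behavior explainable.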

Anyone can understand the "learning algorithm", but the sheer complexity of its output is way too high, such that we cannot at all characterize how an LLM arrived at its answer to even the most basic query.

This isn't just me saying this. ANYONE who knows what they are talking about knows we don't understand LLMs. Geoffrey Hinton: https://www.youtube.com/shorts/zKM-msksXq0. Geoffrey, if you are unaware, is the person who started the whole machine learning craze over a decade ago. The godfather of ML.

Understand?

There's no confusion. Just people who don't know what they are talking about (you).


I don't see how telling me I don't understand anything is going to fix your confusion. If you're confused then take it up w/ the people who keep telling you they don't know how anything works. I have no such problem so I recommend you stop projecting your confusion onto strangers in online forums.


The only thing that needs to be fixed here is your ignorance. Why so hostile? I'm helping you. You don't know what you're talking about and I have rectified that problem by passing the relevant information to you so next time you won't say things like that. You should thank me.


I didn't ask for your help so it's probably better for everyone if you spend your time & efforts elsewhere. Good luck.


Well don't ask me to help you then. I read your profile and it has this snippet in there:

"Address the substance of my arguments or just save yourself the keystrokes."

The substance of your argument was complete ignorance about the topic, so I addressed it as you requested.

Please remove that sentence from your profile if that is not what you want. Thank you.


I don't see how you interpreted it that way so I recommend you make fewer assumptions about online content instead of asserting your interpretation as the one & only truth. It's generally better to assume as little as possible & ask for clarifications when uncertain.


There is no interpretation of that other than what I said. If you disagree, then that's a misinterpretation of the English language.

I am addressing the substance of your argument, and that substance is a lack of knowledge; there is no other angle from which to interpret it.


As I said previously, I don't think this is a productive use of time or effort for anyone involved so I'm dropping out of this thread.


You come across as ungrateful to someone who was just trying to help.


I've developed a new hobby lately, which I call "spot the bullshit."

When I notice a genAI image, I force myself to stop and inspect it closely to find what nonsensical thing it did.

I've found something every time I looked, since starting this routine.


Yep - it honestly reads like an LLM summary, which often misses critical nuances.


I know, especially with the bullet points.

The meat there is when not to use an LLM. The author seems to mostly agree with Masley on what's important.


> You get to start by dumping your raw unfiltered emotions into the text box and have the AI clean it up for you.

Anyone semi-literate can write down what they're feeling.

It's sometimes called "journaling".

Thinking through what they've written, why they've written it, and whether they should do anything about it is often called "processing emotions."

The AI can't do that for you. The only way it could would be by taking over your brain, but then you wouldn't be you any more.

I think using the AI to skip these activities would be very bad for the people doing it.

It took me decades to realize there was value in doing it, and my life changed drastically for the better once I did.


I think they do, and you missed some biting, insightful commentary on using LLMs for scientific research.


They're saying that if they completely refused to touch any system that has been touched by AI, they would be unable to find paying work.

Thus, they won't use it directly themselves, but are willing to work with people who do.


This is not wrong, but the comment you replied to implies the author of the comment understood that perfectly already.


Qt uses "slow as shit" JavaScript in its UI markup language:

https://doc.qt.io/qt-6/qtqml-javascript-expressions.html

Is your complaint with Electron, the "browser as local GUI app" framework that's been popular with SaaS vendors for their "native" apps?


Right, for small scripting, not for the majority of the app. All the backend interaction is in C++.

Like, Electron is fine, but it's orders of magnitude slower than it needs to be for the functionality it brings. That's just not ideal for many desktop applications or, especially, the shell itself.

Ultimately, people use Electron because they know HTML, CSS, and JS/TS. And, I guess, companies think engineers are too stupid to learn anything else, even though that's not the case. There is a strong argument for Electron, but not for Linux userland dev, where many developers already know Qt like the back of their hand.


As a longtime musician, I fervently believe in doing the best you can with the tools you have.

As a programmer with a philosophical bent, I have thought a lot about the implications and ethics of toolmaking.

I concluded long before genAI was available that it is absolutely possible to build tools that dehumanize the users and damage the world around them.

It seems to me that LLMs do that to an unprecedented degree.

Is it possible to use them to help you make worthwhile, human-focused output?

Sure, I'd accept that's possible.

Are the tools inherently inclined in the opposite direction?

It sure looks that way to me.

Should every tool be embraced and accepted?

I don't think so. In the limit, I'm relieved governments keep a monopoly on nuclear weapons.

The people saying "All AI is bad" may not be nuanced or careful in what they say, but in my experience, they've understood rightly that you can't get any of genAI's upsides without the overwhelming flood of horrific downsides, and they think that's a very bad tradeoff.

I agree with them.


The Corridor Crew [1] are luminaries in our field, and they are incredibly bullish on this tech.

They've made dozens of essays and done tons of experiments showing that they think AI is going to be great for our field:

https://www.youtube.com/watch?v=DSRrSO7QhXY (scrub through the timelines to the end of these videos to see)

https://www.youtube.com/watch?v=iq5JaG53dho

https://www.youtube.com/watch?v=mUFlOynaUyk

https://www.youtube.com/watch?v=GVT3WUa-48Y

Listen to them.

Our entire industry pays attention to them, and they're right!

[1] https://en.wikipedia.org/wiki/Corridor_Digital


The Corridor Crew [1] are luminaries in our field, and they are incredibly bullish on this tech.

They are literally "react" YouTubers who have never worked a single day as professional VFX artists.

This is like saying Jake Paul is the heavyweight boxing champion of the world.


If a year ago nobody knew about LLMs' propensity to encourage poor life choices, up to and including suicide, that's spectacular evidence that these things are being deployed recklessly and egregiously.

I personally doubt that _no one_ was aware of these tendencies - a year is not that long ago, and I think I was seeing discussions of LLM-induced psychosis back in '24, at least.

Regardless of when it became clear, we have a right and duty to push back against this kind of pathological deployment of dangerous, not-understood tools.


Ah, this was the comment to split hairs on the timeline, instead of discussing how AI safety should be regulated.

I think the good news about all of this is that ChatGPT would have actually discouraged you from writing that. In thinking mode it would have said "wow, this guy's EQ is like negative 20" before saying "you're absolutely right! what if you ignored that entirely!"


My minimalist version has a better domain name:

http://endinter.net/

