Because they encode statistical properties of the training corpus. You might not know why they work but plenty of people know why they work & understand the mechanics of approximating probability distributions w/ parametrized functions to sell it as a panacea for stupidity & the path to an automated & luxurious communist utopia.
Yes, yes, no one understands how anything works. Calculus is magic, derivatives are pixie dust, gradient descent is some kind of alien technology. It's amazing hairless apes have managed to get this far w/ automated boolean algebra handed to us from our long forgotten godly ancestors, so on & so forth.
No, this is false. No one understands. Using big words doesn’t change the fact that you cannot explain, for any given input/output pair, how the LLM arrived at the answer.
Every single academic expert who knows what they are talking about can confirm that we do not understand LLMs. We understand atoms, and we know the human brain is made 100 percent out of atoms. We may know how atoms interact and bond, and how a neuron works, but none of this allows us to understand the brain. In the same way, we do not understand LLMs.
Characterizing ML as some statistical approximation or best-fit curve is just using an analogy to cover up something we don’t understand. Heck, the human brain can practically be characterized by the same analogies. We. Do. Not. Understand. LLMs. Stop pretending that you do.
I'm not pretending. Unlike you, I have no trouble making sense of function approximation w/ gradient descent. I learned this stuff as an undergrad, so I understand exactly what's going on. You might be confused, but that's a personal problem you should work to rectify by learning the basics.
omfg the hard part of ML is proving back-propagation from first principles, and that's not even that hard. Basic calculus and an application of the chain rule, that's it. Anyone can understand ML; not anyone can understand something like quantum physics.
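To make the claim concrete: here is a minimal sketch of "function approximation w/ gradient descent," fitting a single weight so f(x) = w·x approximates y = 2x. The data, learning rate, and function are illustrative (not from this thread); the gradient line is literally one application of the chain rule, which is the mechanism backpropagation generalizes to deep networks.

```python
def fit(xs, ys, lr=0.01, steps=1000):
    """Fit f(x) = w * x to (xs, ys) by gradient descent on squared error."""
    w = 0.0
    for _ in range(steps):
        # loss = sum((w*x - y)^2)
        # d(loss)/dw = sum(2 * (w*x - y) * x)  <- chain rule, nothing more
        grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys))
        w -= lr * grad
    return w

xs = [1.0, 2.0, 3.0]
ys = [2.0, 4.0, 6.0]  # generated by y = 2x
w = fit(xs, ys)       # converges toward w = 2.0
```

Of course, understanding this mechanism is exactly what the other side of the thread concedes; the disagreement is about whether it explains the trained model's behavior.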
Anyone can understand the "learning algorithm," but the sheer complexity of the output of the "learning algorithm" is way too high, such that we cannot at all characterize how an LLM arrived at an answer to even the most basic query.
This isn't just me saying this. ANYONE who knows what they are talking about knows we don't understand LLMs. Geoffrey Hinton: https://www.youtube.com/shorts/zKM-msksXq0. Geoffrey, if you are unaware, is the person who started the whole machine learning craze over a decade ago. The godfather of ML.
Understand?
There's no confusion. Just people who don't know what they are talking about (you)
I don't see how telling me I don't understand anything is going to fix your confusion. If you're confused, then take it up w/ the people who keep telling you they don't know how anything works. I have no such problem, so I recommend you stop projecting your confusion onto strangers in online forums.
The only thing that needs to be fixed here is your ignorance. Why so hostile? I'm helping you. You don't know what you're talking about and I have rectified that problem by passing the relevant information to you so next time you won't say things like that. You should thank me.
I don't see how you interpreted it that way so I recommend you make fewer assumptions about online content instead of asserting your interpretation as the one & only truth. It's generally better to assume as little as possible & ask for clarifications when uncertain.
Right, for small scripting, not for the majority of the app. All the backend interaction is in C++.
Like, Electron is fine, but it's orders of magnitude slower than it needs to be for the functionality it brings. Which is just not ideal for many desktop applications or, especially, the shell itself.
Ultimately, people use Electron because they know HTML, CSS, and JS/TS. And, I guess, companies think engineers are too stupid to learn anything else, even though that's not the case. There is a strong argument for Electron. But not for Linux userland dev, where many developers already know Qt like the back of their hand.
As a longtime musician, I fervently believe in doing the best you can with the tools you have.
As a programmer with a philosophical bent, I have thought a lot about the implications and ethics of toolmaking.
I concluded long before genAI was available that it is absolutely possible to build tools that dehumanize the users and damage the world around them.
It seems to me that LLMs do that to an unprecedented degree.
Is it possible to use them to help you make worthwhile, human-focused output?
Sure, I'd accept that's possible.
Are the tools inherently inclined in the opposite direction?
It sure looks that way to me.
Should every tool be embraced and accepted?
I don't think so. In the limit, I'm relieved governments keep a monopoly on nuclear weapons.
The people saying "All AI is bad" may not be nuanced or careful in what they say, but in my experience, they've understood rightly that you can't get any of genAI's upsides without the overwhelming flood of horrific downsides, and they think that's a very bad tradeoff.
If a year ago nobody knew about LLMs' propensity to encourage poor life choices, up to and including suicide, that's spectacular evidence that these things are being deployed recklessly and egregiously.
I personally doubt that _no one_ was aware of these tendencies - a year is not that long ago, and I think I was seeing discussions of LLM-induced psychosis back in '24, at least.
Regardless of when it became clear, we have a right and a duty to push back against this kind of pathological deployment of dangerous, poorly understood tools.
ah, this was the comment to split hairs on the timeline, instead of discussing in what way AI safety should be regulated
I think the good news about all of this is that ChatGPT would have actually discouraged you from writing that. In thinking mode it would have said "wow this guy's EQ is like negative 20" before saying "you're absolutely right! what if you ignored that entirely!"