> This one raises a lot of important points about LLMs, but the only real conclusion it seems to make is "LLMs are bad! We should never build them!".
I think the point was never to bring a solution or show any essence of reality. The point was being polemical and signalling savviness through cynicism.
The author is still grieving, watching a civilisation-changing technology pass them by. Every single one of the problems they note applies to any technology that has ever existed.
The internet produced 4chan. It produced scammers. It produced fraud. It was instrumental in spreading child porn. It caused suicides; many people lost their lives to bullying on the internet. Many have developed gaming addictions.
To anyone who has given it some thought, any sufficiently advanced technology cuts both ways. It's obvious that something that increases degrees of freedom in one direction will do so in others. Humans then come in and align it.
There's social credit to be gained by being cynical and by signalling that cynicism. In the current social dynamics, cynicism gives you an edge and makes you look savvy. The optimists appear naive while the pessimists appear to truly understand the situation. But the optimists are usually correct in hindsight.
We know how the internet turned out despite pessimists flagging potential problems with it. I know how AI will turn out. These kinds of articles are a dime a dozen, and we will look at them the same way we now look at bygone internet pessimists.
This is a response not just to this article, but to a few others.
I think you underestimate people's grievances with technology. If you ran a poll, my guess is more than 50% of people would say the world was a better place pre-social media.
If AI tech keeps going in the direction it's going now, more and more people will start believing the world would be better off if the internet and the computer had never been invented.
You talk like the internet being a net positive is a given. It really isn't, especially now that it's been shown it doesn't democratize power (see the Arab Spring, and China, and the US, and everywhere else).
It's usually the educated and elite PMC types who hold grievances against technology. They secured their status and lucrative jobs mostly with the help of technology, and they are too scared to let anything threaten their position in society. It is highly hypocritical to behave this way, but they don't seem to have the self-awareness to observe it objectively.
Ask any poor person in India what their sentiment toward tech is: it is usually optimism.
> You talk like the internet being a net positive is a given. It really isn't, especially now that it's been shown it doesn't democratize power (see the Arab Spring, and China, and the US, and everywhere else).
The world is far more democratic now than before and I attribute it to technology because it reduces information asymmetry.
> The world is far more democratic now than before and I attribute it to technology because it reduces information asymmetry
That is fantasy. Information technology has created an unprecedented level of information asymmetry, and the gap widens every day as total computing capacity grows.
Before the information era, the ruling class was roughly as blind as the peasants. A population census took years, and was sometimes outright impossible. The opaqueness was two-way. Now it's one-way: people in power know everything about the citizens.
Take two countries: one with open access to information in the way you described, and another where the internet is not allowed. Which one do you think will be more democratic?
(hint: such examples already exist)
Without information, there is no way for a voter to know whom to vote for, or whether to believe them at all, and you become easily susceptible to manipulation.
It becomes clearer when you try to answer this hypothetical: if your objective were to bring more democracy to North Korea, would you let the global internet proliferate there if you could? According to your theory, doing so would only make things worse.
> We know how the internet turned out despite pessimists flagging potential problems with it.
A sludge of spyware and addiction machines which employ negative emotion and outrage to drive shareholder value?
"The internet" is a pretty big tent: everything from text messages to streaming video to online gaming to social media to encyclopedias. Fifteen years ago you could make a strong case that the internet was mostly a net positive; I think that case is much harder to make now. If governments are able to fully realise their plans for surveillance and control, it will almost certainly become a net negative, though of course with many positive aspects.
Likewise with AI, we should be careful not to repeat the mistakes we made with the internet, so that we can realise something that is mostly positive. We could absolutely have a world where AI is as beneficial as you believe it will be, but we don't get there through inaction; we get there by being deeply critical of AI's negative aspects and by ensuring that a small number of hyperscalers don't control our access to it.
“Exploit labour” is just outdated Marxism. No self-respecting economist believes this kind of rhetoric anymore; it persists only among West-coded leftists.
It’s a sort of cynical fatalism to think everything is exploitation — directly coming from Marx.
It’s not exploitation to mutually agree on a deal. Most of the population knows this, except Marxists!
You can't a) just hand-wave away the massive power and wealth differential involved in this "mutual agreement", and b) dismiss all discussions which recognize that fact... as "outdated Marxism".
Plenty of mainstream economists are capable of seeing the real world which you are pretending doesn't exist.
Even Marx meant the word "exploit" in relatively value-neutral terms, simply recognizing that in any economy built on private property we exploit humans the same way we do any other "resource". It's up to the reader whether they see that as carrying any moral connotation.
A massive power and wealth differential is simply a reason to be jealous. It is precisely this concept of mutual agreement (capitalism) that lifted most of humanity out of poverty.
>Plenty of mainstream economists are capable of seeing the real world which you are pretending doesn't exist.
Not really. A total of zero economic policies have been made by analysing the economy through the Labour Theory of Value or whatever other crap Marxists believe.
The above poster used "exploit" in non-value-neutral terms. Marx tried very hard to be value-neutral about it (though it's clear what the intentions were), but his readers don't play that game.
You are addressing something totally different from the original claim, which tried to say that capitalism is inherently exploitative of labour. That is just outdated Marxism.
To be frank, I thought twisting this into an argument about whether capitalism is inherently exploitative was a complete waste of time, and I replied as much. If you'll recall, what we were originally talking about here was "AI: should HN users be optimistic?"
That's a good idea and FWIW I agree that as a person who might lose their job to AI, you do deserve to feel apprehensive, even if it might lead to some good later.
This is incorrect because LLMs don’t have the same property as other tools. LLMs enjoy a compounding effect of intelligence which you can’t get by separating them. If what you said were right, we would see more open-weight domain models. We don’t, and there’s a reason why.
It's correct because I'm using relative sizing and you're using absolute sizing, while completely ignoring the "AI bloat" belief that we just need larger and larger models.
> Their models, such as Sarvam 2B and Sarvam-M, are fine-tuned for medical reasoning and symptom triage in local languages, without the need for high-end devices or constant internet. These systems can summarize patient notes, offer diagnostic guidance and even prioritize cases, functioning as low-cost, frugal AI assistants for overstretched healthcare workers.
Wow, bad idea. Domain-specific models simply don’t work. Ever. You should not be using some shoddy 3B model for medical purposes when you can spend just a few dollars extra and get GPT, which is miles and miles better. The local-language value proposition is also exaggerated.
This article keeps repeating the lie that network access is hard to find in India and that local models therefore win. This is on its face ridiculous to anyone who has been to India. Almost everyone has access to a smartphone with a 4G connection. What they don’t have is the ability to afford a phone that can run a good model. Why would I, as a poor farmer in India, use an extremely underpowered 3B model on my 100-dollar smartphone when I can use the free version of ChatGPT, which is miles ahead in every dimension?
My 1000-dollar iPhone can barely run Gemma, which is hardly usable for serious questions anyway.
I do get the need for the Indian ecosystem to build internal competency so that when the time comes they are prepared. But for now, pursuing a distillation strategy like China's looks better. Or build companies that specialise in local integration - something the big model companies don’t have expertise in.
Agreed. And now any action people don’t personally agree with that has a negative effect on someone (which is every action) can be labeled “structural violence”, so they can claim a moral justification for physical violence. It’s disgusting and breathtakingly stupid.