Code is still there, but humans are done dealing with it. We're at a higher level of abstraction now. LLMs are like compilers, operating at a higher level. Nobody programs in assembly language anymore, much less machine language, even though machine language is still down there in the end.
They certainly do, and I can't really follow the analogy you are building.
> We're at a higher level of abstraction now.
To me, an abstraction higher than a programming language would be natural language or some DSL that approximates it.
At the moment, I don't think most people using LLMs are reading paragraphs to maintain code. And LLMs aren't producing code in natural language.
That isn't an abstraction over the language; it's an abstraction over your use of the computer to produce code in that language. If anything, you are abstracting yourself away.
Furthermore, if I am following you, you are basically saying that you have to make a call to a (free or paid) model to explain your code every time you want to alter it.
I don't know how insane that sounds to most people, but to me, it sounds bat-shit.
I assume that you wrote that with AI, then. If so, I assume it’s not really your opinion. You provided some prompt, which is hidden from us.
I don’t know you, don’t trust you, and if you write with AI nobody else will get to know you or trust you, either, unless they fall for your false AI mask.
While your breakdown of LLM “benefits” is thorough, I think it glosses over—or outright ignores—some significant limitations and trade-offs that make the picture far less rosy. It’s easy to frame this technology as an unqualified upgrade to human writing, but that framing is misleading and potentially harmful. Let me go point by point through your categories and explain where the problems lie.
1. Enhanced Productivity
Yes, LLMs can produce text quickly, but speed is not synonymous with quality. Churning out a draft in seconds is only useful if that draft actually advances the writer’s ideas, rather than lulling them into outsourcing thought itself. What often happens is that people mistake “having words on a page” for “having meaningful ideas.” Productivity in writing is not about word count—it’s about clarity of thought, and clarity is something that an LLM cannot supply. It can rearrange existing patterns, but it cannot truly reason or generate original insight. A fast draft is worthless if it’s hollow.
2. Improved Writing Quality
This point assumes that grammar and surface-level polish are the essence of good writing. They are not. Good writing emerges from the writer’s voice, their personality, their quirks, even their mistakes. Grammar-correcting AI tends to standardize expression into a bland, middle-of-the-road prose style. The result is “correct,” but sterile. Moreover, “tone adjustment” and “clarity” are superficial facsimiles of understanding. Simplifying an idea is only valuable if you understand what makes it complex in the first place. AI doesn’t “understand” ideas—it flattens them into patterns of words that look simpler but may remove nuance in the process.
3. Creativity and Ideation
Here is where the hype is the most exaggerated. Brainstorming with an LLM often produces generic, cliché, or predictable results. If you ask for metaphors, you’ll get the most common ones floating around in its training data. If you ask for plots, you’ll get reheated versions of existing tropes. Calling this “creativity” misunderstands what creativity actually is: the human capacity to connect disparate, personal experiences into something novel. An LLM is bounded by statistical averages. It cannot be surprised by itself. Humans, on the other hand, can.
4. Language Versatility
Translation and localization are areas where LLMs seem promising, but again, nuance matters. Language is not merely about syntax or vocabulary; it is deeply cultural, contextual, and historically embedded. Machine translation may be “good enough” for casual use, but it consistently fails to capture subtext, irony, humor, idiom, or cultural resonance. Outsourcing too much of this to AI risks flattening linguistic richness into something utilitarian but impoverished.
5. Research Assistance
This one is especially dangerous. Yes, LLMs can summarize and generate context, but they are notorious for producing confident-sounding misinformation (“hallucinations”). Unless the user already has expertise in the topic, they will not know whether what they’re reading is accurate. This means that instead of empowering research, LLMs encourage intellectual laziness and misinformation at scale. The “citation help” is even worse: fabricated references, garbled bibliographic entries, and misleading formatting are common. Presenting this as a “benefit” is disingenuous without an equally strong warning.
6. Editing and Rewriting
Paraphrasing and consistency checks may sound helpful, but they too come at a cost. When you outsource the act of rewriting, you risk losing the friction that forces you to refine your own ideas. Struggling to find words is not a flaw—it’s part of thinking. Offloading that process to an algorithm encourages passivity. You end up with smoother sentences, but not sharper thoughts. “Consistency” is also a double-edged sword: AI can enforce bland uniformity where variation and individuality might have been more compelling.
7. Customization and Integration
This is just another way of saying “industrialization of writing.” The more writing is engineered through prompts and APIs, the more it shifts from being a human practice to being an automated pipeline. At that point, writing stops being about human connection or expression and becomes just another commodity optimized for scale. That’s fine for spam emails or ad copy, but disastrous if applied to domains where authenticity and trust actually matter (e.g., journalism, education, or literature).
8. Cost Efficiency
Framing this as a cost benefit—“reduces need for human writers”—is perhaps the most telling point in your list. This reduces writing to a purely economic function, ignoring its human and cultural value. The assumption here is that human writers are redundant unless they can outcompete machines on efficiency. That is not just shortsighted; it’s destructive. Human writers don’t merely “generate content”—they interpret, critique, and shape culture. Outsourcing all that to probabilistic models risks a future where the written word is abundant but devoid of depth.
The larger issue is that your entire framing assumes writing is merely a transactional process: input (ideas or tasks) → output (words on a page). But writing is not just about producing text. It is about thinking, communicating, and connecting. By presenting LLMs as a categorical improvement, you erase the most important part of the process: the human struggle to articulate meaning.
So yes, LLMs have uses, but they should be treated as narrow tools with serious limitations—not as the new standard for all writing. To present them otherwise is to flatten human expression into machine-mediated convenience, and to celebrate that flattening as “progress.”
Just because AlphaGo has existed for years, Go competitions haven't gone away. Just because ChatGPT exists these days doesn't mean it replaces our desire to learn things more interactively.
ChatGPT can answer many things, but it seems to miss the point if you use it in place of a website designed for learning.
Fr, what is the “other stuff” you all claim to have? I have heard many people who heavily use AI claim to save time for “other stuff”, but I have never heard what their “other stuff” actually is.
If what you actually want to do is not really related to shaders, there is a huge chance you have started your learning process in the wrong direction, which is a bit different from the definition of “being efficient”.
Does being able to solve basic computer graphics problems mean ChatGPT can solve all computer graphics problems? If not, does that mean advanced computer graphics problems fall into the bucket of "things that are worth learning"?
Because if the advanced problems are worth learning, but the basic problems are not worth learning, how are you supposed to jump straight to learning the advanced stuff while skipping the rest? You'll inevitably end up needing to learn the basics first anyway.
This mindset will be the great filter for kids these days.
Yes, LLMs are sort of competent in many, many areas, but if you refuse to learn stuff the LLM can do, you will fail miserably to spot when the LLM is incompetent.
The difference now is that people are actively trying to remove people (others and themselves) from software development work, so the robots have to have adequate instructions. The motivation is bigger. Dismantling all human involvement in software development is something that everyone wants, and they want it yesterday.
I thought this was going to be about libraries containing books, but it works either way.
Why own a book? Much less a building full of them? You can generate an entire book instantly using AI. You can generate thousands, millions, trillions of books using AI. More information than has ever been known before.
Same for software libraries. Same for software. Need a new virtual instrument of a symphony orchestra, broken down into violins 1, violins 2, cellos, etc.? Just ask AI to build it for you, and it's done. Need a C compiler? Ask AI to build it for you, and it's done.