
Excellent news. Hopefully all sports teams will do likewise.


> music

> window

> fancier attire

one of these things is not like the other


Right. Two of these are things you can see, but only one is a thing you can hear.


Code is still there, but humans are done dealing with it. We're at a higher level of abstraction now. LLMs are like compilers, operating at a higher level. Nobody programs assembly language any more, much less machine language, even though the machine language is still down there in the end.


> Nobody programs assembly language

They certainly do, and I can't really follow the analogy you are building.

> We're at a higher level of abstraction now.

To me, an abstraction higher than a programming language would be natural language or some DSL that approximates it.

At the moment, I don't think most people using LLMs are reading paragraphs to maintain code. And LLMs aren't producing code in natural language.

That isn't abstraction over language, it is an abstraction over your computer use to make the code in language. If anything, you are abstracting yourself away.

Furthermore, if I am following you, you are basically saying that you have to make a call to a (free or paid) model to explain your code every time you want to alter it.

I don't know how insane that sounds to most people, but to me, it sounds bat-shit.


At this point, it would be shameful to not write with LLMs. I don't want to spend time reading plain human text when improved AI text is an option.


> improved AI text

It is certainly your prerogative to believe that, but know your opinion is far from universal. It is a widespread view that AI-written text is worse.


> improved AI text

Why are you on hackernews and not talking to an LLM?


I assume that you wrote that with AI, then. If so, I assume it’s not really your opinion. You provided some prompt, which is hidden from us.

I don’t know you, don’t trust you, and if you write with AI nobody else will get to know you or trust you, either, unless they fall for your false AI mask.


That's a great point!

Large Language Models (LLMs), like GPT-4, offer numerous benefits for writing tasks across various domains. Here’s a breakdown of the key advantages:

1. Enhanced Productivity

Faster Drafting: Quickly generate drafts for essays, reports, emails, blog posts, and more.

24/7 Availability: Instant support with no downtime or fatigue.

Reduced Writer’s Block: Provides starting points and creative prompts to overcome mental blocks.

2. Improved Writing Quality

Grammar and Style: Corrects grammar, punctuation, and stylistic issues.

Tone Adjustment: Adapts tone to suit professional, casual, persuasive, or empathetic contexts.

Clarity and Conciseness: Helps simplify complex ideas and remove redundant language.

3. Creativity and Ideation

Brainstorming: Assists in generating titles, outlines, metaphors, and analogies.

Storytelling: Offers plot ideas, character development, and dialogue suggestions for creative writing.

Variations: Produces multiple versions of the same message (e.g., for A/B testing).

4. Language Versatility

Multilingual Support: Translates and writes in many languages.

Localization: Tailors content for different cultural contexts or regions.

5. Research Assistance

Summarization: Condenses large documents or articles into key points.

Information Retrieval: Provides background context on topics quickly (though should be fact-checked for critical work).

Citation Help: Assists in generating citations in formats like APA, MLA, or Chicago.

6. Editing and Rewriting

Paraphrasing: Rewrites text to avoid plagiarism or improve readability.

Consistency Checks: Maintains tone, terminology, and formatting across long documents.

Content Expansion: Adds detail to thin content or elaborates on underdeveloped points.

7. Customization and Integration

Prompt Engineering: Tailors responses for specific industries (e.g., legal, medical, technical).

API Integration: Can be embedded into writing tools, content platforms, or CMS systems.

8. Cost Efficiency

Reduces Need for Human Writers: Especially for repetitive or low-complexity tasks.

Scales Effortlessly: One model can serve multiple users or projects simultaneously.

Would you like a breakdown of how these benefits apply to a specific type of writing (e.g., academic, marketing, business)?


Yes please go on


This is AI bullshit.


It is improved bullshit.


That it is.


While your breakdown of LLM “benefits” is thorough, I think it glosses over—or outright ignores—some significant limitations and trade-offs that make the picture far less rosy. It’s easy to frame this technology as an unqualified upgrade to human writing, but that framing is misleading and potentially harmful. Let me go point by point through your categories and explain where the problems lie.

1. Enhanced Productivity

Yes, LLMs can produce text quickly, but speed is not synonymous with quality. Churning out a draft in seconds is only useful if that draft actually advances the writer’s ideas, rather than lulling them into outsourcing thought itself. What often happens is that people mistake “having words on a page” for “having meaningful ideas.” Productivity in writing is not about word count—it’s about clarity of thought, and clarity is something that an LLM cannot supply. It can rearrange existing patterns, but it cannot truly reason or generate original insight. A fast draft is worthless if it’s hollow.

2. Improved Writing Quality

This point assumes that grammar and surface-level polish are the essence of good writing. They are not. Good writing emerges from the writer’s voice, their personality, their quirks, even their mistakes. Grammar-correcting AI tends to standardize expression into a bland, middle-of-the-road prose style. The result is “correct,” but sterile. Moreover, “tone adjustment” and “clarity” are superficial facsimiles of understanding. Simplifying an idea is only valuable if you understand what makes it complex in the first place. AI doesn’t “understand” ideas—it flattens them into patterns of words that look simpler but may remove nuance in the process.

3. Creativity and Ideation

Here is where the hype is the most exaggerated. Brainstorming with an LLM often produces generic, cliché, or predictable results. If you ask for metaphors, you’ll get the most common ones floating around in its training data. If you ask for plots, you’ll get reheated versions of existing tropes. Calling this “creativity” misunderstands what creativity actually is: the human capacity to connect disparate, personal experiences into something novel. An LLM is bounded by statistical averages. It cannot be surprised by itself. Humans, on the other hand, can.

4. Language Versatility

Translation and localization are areas where LLMs seem promising, but again, nuance matters. Language is not merely about syntax or vocabulary; it is deeply cultural, contextual, and historically embedded. Machine translation may be “good enough” for casual use, but it consistently fails to capture subtext, irony, humor, idiom, or cultural resonance. Outsourcing too much of this to AI risks flattening linguistic richness into something utilitarian but impoverished.

5. Research Assistance

This one is especially dangerous. Yes, LLMs can summarize and generate context, but they are notorious for producing confident-sounding misinformation (“hallucinations”). Unless the user already has expertise in the topic, they will not know whether what they’re reading is accurate. This means that instead of empowering research, LLMs encourage intellectual laziness and misinformation at scale. The “citation help” is even worse: fabricated references, garbled bibliographic entries, and misleading formatting are common. Presenting this as a “benefit” is disingenuous without an equally strong warning.

6. Editing and Rewriting

Paraphrasing and consistency checks may sound helpful, but they too come at a cost. When you outsource the act of rewriting, you risk losing the friction that forces you to refine your own ideas. Struggling to find words is not a flaw—it’s part of thinking. Offloading that process to an algorithm encourages passivity. You end up with smoother sentences, but not sharper thoughts. “Consistency” is also a double-edged sword: AI can enforce bland uniformity where variation and individuality might have been more compelling.

7. Customization and Integration

This is just another way of saying “industrialization of writing.” The more writing is engineered through prompts and APIs, the more it shifts from being a human practice to being an automated pipeline. At that point, writing stops being about human connection or expression and becomes just another commodity optimized for scale. That’s fine for spam emails or ad copy, but disastrous if applied to domains where authenticity and trust actually matter (e.g., journalism, education, or literature).

8. Cost Efficiency

Framing this as a cost benefit—“reduces need for human writers”—is perhaps the most telling point in your list. This reduces writing to a purely economic function, ignoring its human and cultural value. The assumption here is that human writers are redundant unless they can outcompete machines on efficiency. That is not just shortsighted; it’s destructive. Human writers don’t merely “generate content”—they interpret, critique, and shape culture. Outsourcing all that to probabilistic models risks a future where the written word is abundant but devoid of depth.

The larger issue is that your entire framing assumes writing is merely a transactional process: input (ideas or tasks) → output (words on a page). But writing is not just about producing text. It is about thinking, communicating, and connecting. By presenting LLMs as a categorical improvement, you erase the most important part of the process: the human struggle to articulate meaning.

So yes, LLMs have uses, but they should be treated as narrow tools with serious limitations—not as the new standard for all writing. To present them otherwise is to flatten human expression into machine-mediated convenience, and to celebrate that flattening as “progress.”


Cool! I pasted some of the puzzles into ChatGPT and it solved them!


Congratulations! You just wasted 5 minutes of your life and got absolutely nothing in return.


So… what does it prove? Do you learn anything?

Just because AlphaGo existed back in the day, Go competitions didn't go away. Just because ChatGPT exists these days, it doesn't replace our desire to learn things more interactively.

ChatGPT can answer many things, but it seems to miss the point to use it on a website designed for learning.


ChatGPT can solve the problems for me. I can spend my time doing other things other than learning stuff it can do.


Fr, what is this "other stuff" you all claim to do? I have heard many people who heavily use AI claim it saves them time for "other stuff", but I have never heard what their "other stuff" actually is.

If what you actually want to do is not really related to shaders, there is a huge chance you have started your learning process in the wrong direction, which is a bit different from the definition of "being efficient".

So tell me what you want to do.


They will just write more AI prompts.

There is no point where a vibe coder will put down their glasses and say “I will now write this code by hand”.


Literally no reason to write code by hand, ever again


Does being able to solve basic computer graphics problems mean ChatGPT can solve all computer graphics problems? If not, does that mean advanced computer graphics problems fit into the bucket of "things that are worth learning?"

Because if the advanced problems are worth learning, but the basic problems are not worth learning, how are you supposed to jump straight to learning the advanced stuff while skipping the rest? You'll inevitably end up needing to learn the basics first anyway.


This mindset will be the great filter for kids these days.

Yes, LLMs are sort of competent in many, many areas, but if you refuse to learn stuff the LLM can do, you will fail miserably to spot when the LLM is incompetent.


you should read the MIT-published study showing that doing this actually makes you less smart over long periods of time


I don't think that's how academia works


It worked for Plato


How?


What's the point of following a tutorial series if you can't be bothered to learn the subject?


As they say, LLMs don't need to have good taste in music, they just need to have taste in music as good as typical humans.

With that, we no longer need humans to listen to music!


AI can review code. No need for human involvement.


For styling and trivial issues, sure. And if it's free, do make use of it.

But it is just as unable to properly reason about anything slightly more complex as it is when writing code.


The difference now is that people are actively trying to remove people (others and themselves) from software development work, so the robots have to have adequate instructions. The motivation is bigger. To dismantle all human involvement with software development is something that everyone wants, and they want it yesterday.


everyone? source?


It's sort of obvious. Humans cost more money than coding agents. The more you can have a coding agent do, the less you have to pay a human to do.

This aligns pretty clearly with the profit motive of most companies.


I thought this was going to be about libraries containing books, but it works either way.

Why own a book? Much less a building full of them? You can generate an entire book instantly using AI. You can generate thousands, millions, trillions of books using AI. More information than has ever been known before.

Same for software libraries. Same for software. Need a new virtual instrument for a symphony orchestra, broken down into violins 1, violins 2, cellos, etc.? Just ask AI to build it for you, and it's done. Need a C compiler? Ask AI to build it for you, and it's done.


Indirectly, of course, as all new fiction is written by AI, the "slush pile" problem will go away.

