dinfinity's comments

> If you're in tune with animals and spend time around a parrot, it's obvious there is a lot going on in their minds.

Not saying there isn't and somewhat offtopic, but if you apply this to LLMs those are much, much 'smarter' than all the animals people like to call intelligent (or something similar). If you disagree, please tell me for which task requiring intelligence you'd rather have an animal's wit than that of an LLM.

I really do feel we should be taking the current state of affairs as a starting point to recalibrate what counts as smart or worth 'protecting', whether it's our beloved animal friends or something inorganic. Simultaneously believing "birds are super smart" and "LLMs are just stochastic parrots" seems absurd.


> If you disagree, please tell me for which task requiring intelligence you'd rather have an animal's wit than that of an LLM.

Navigating your way to a location without colliding with anything. Finding food in the woods. The kind of thing animals can do that we have yet to get AI to do.


Moving a complex system of muscles so that they can just stand upright is already very complex, let alone intercepting a prey's movement mid-flight by controlling all those muscles.

People way overestimate the actually intelligent part of LLMs vs simply being good at recalling context-related stuff from the training data.


Complexity does not require intelligence. Modern computers (even without AI) and technological systems do incredibly complex things and I'm quite sure you would not call those systems (again, without AI) intelligent.

There is a difference between a problem being complex while you try to find a solution to it (hard), and a program being complex. The latter is trivial to execute, which is entirely different from analysing it.

So are animals trivially executing a complex program or are they 'analyzing' a complex problem?

LLMs can (more often) successfully find solutions for far more complex problems than animals can. So where does that leave us?


Neither of those are based in intelligence, but rather in dexterity, agility and sensing capabilities. Try again, and this time please read the question carefully and answer in good faith rather than trying to (unsuccessfully) look for a loophole.

Dogs can be trained for a variety of tasks that require wit. Such as helping the blind navigate. And sniffing for illegal drugs.

Sniffing for illegal drugs requires wit? Right.

And 'trained' clearly means it is not something based in intelligence, but in repetition and conditioning.

Answer the actual question.


If you think being a guide dog doesn't require intelligence, you're delusional.

> answer the actual question

I literally did. You asked for which task that requires intelligence I would rather use an animal over an LLM. I'd much rather have a dog as my guide dog than an LLM. It can use its innate intelligence to sense danger, navigate around obstacles it's never seen before, and even communicate with other humans through barking.

> trained clearly means it is not something based in intelligence, but in repetition and conditioning

I can't tell if you're trolling at this point. LLMs are also trained and therefore are based on repetition and conditioning.


> If you think being a guide dog doesn't require intelligence, you're delusional.

I see you dropped "sniffing out drugs" as a task requiring intelligence, that's a start.

> It can use its innate intelligence to sense danger

So sensing danger requires intelligence? Bacteria can sense danger.

> navigate around obstacles it's never seen before

Not intelligence, but dexterity. Only if it has to solve a puzzle does intelligence come into play. And dogs suck ass at solving puzzles. Some birds are somewhat decent at it, but still very far removed from what an LLM can do.

> communicate with other humans through barking

Yeah, Timmy fell down a well, right? Perfect example of 'intelligence' and something you'd prefer a dog over an LLM /s

> I can't tell if you're trolling at this point. LLMs are also trained and therefore are based on repetition and conditioning.

That is a fair point, but remember that your training examples were "sniffing for drugs" and "being a guide dog", both of which are very much in-distribution training (guide dogs only do a very specific very small set of things and require a lot of training to even be able to do those).

But for the sake of argument, let's say that there are some tasks requiring intelligence where you would prefer a dog over an LLM. Answer me this: Roughly what percentage of distinct tasks requiring intelligence would you prefer to have a dog over an LLM? For each task, imagine that failure to complete the task will cause serious harm to your loved ones, so the stakes are high.


Keychron keyboards are absolutely amazing. And very affordable for what they are.


Sadly, haven't found anything close to what the G13 does with Keychron.


The code is what it does. The comments should contain what it's supposed to do.

Even if you give them equal roles, self-documenting code versus commented code is like having data on one disk versus having data in a RAID array.

Remember: Redundancy is a feature. Mismatches are information. Consider this:

    // Calculate the sum of one and one
    sum = 1 + 2;

You don't have to know anything else to see that something is wrong here. It could be that the comment is outdated, which has no direct effects and is easily solved. It could be that this is a bug in the code. In any case it is information and a great starting point for looking into a possible problem (with a simple git blame). Again, without needing any context, knowledge of the project or external documentation.
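The same point can be made executable. A minimal sketch in Python (the function name and values here are illustrative, not from the original example):

```python
# "Redundancy is a feature": state the intent twice, once as a docstring
# and once as executable code, so that a mismatch becomes detectable.

def sum_of_one_and_one():
    """Calculate the sum of one and one."""
    return 1 + 2  # disagrees with the docstring: outdated doc, or a bug


# The stated intent (2) and the actual behavior (3) disagree; that
# mismatch is exactly the "information" the redundancy argument points at.
intent = 2
actual = sum_of_one_and_one()
print(actual != intent)  # prints True: something is wrong here
```

Neither line alone tells you there is a problem; only the disagreement between the two does, which is the whole value of keeping both.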

My take on developers arguing for self-documenting code is that they are undisciplined or do not use their tools well. The arguments against copious inline comments are "but people don't update them" and "I can see less of the code".


> Redundancy is a feature. Mismatches are information. Consider this:

Respectfully, if someone wrote code like this, I wouldn't want to work with them. I mean next step is "I copy paste code instead of writing functions, and in the comment above I mention all the other copies, so that it's easy to check that they are all doing the same thing redundantly".

> The arguments against copious inline comments are "but people don't update them" and "I can see less of the code".

Well no, that's not my argument. I have been navigating code for 20 years and in good codebases, comments are rare and describe something "surprising". Good code is hardly surprising.

My problem with "literate programming" (which means "add a lot of comments in the implementation details") is that I find it hard to trust developers who genuinely cannot understand unsurprising code without comments. I am fine with a junior needing more time to learn, but after a few years if a developer cannot do it, it concerns me.


You did not engage with my main arguments. You should still do so.

1. Redundancy: "The code is what it does. The comments should contain what it's supposed to do. [...] You don't have to know anything else to see that something is wrong here." and specifically the concrete trivial (but effective) example.

2. "My take on developers arguing for self-documenting code is that they are undisciplined or do not use their tools well. The arguments against copious inline comments are "but people don't update them" and "I can see less of the code"."

> Respectfully, if someone wrote code like this, I wouldn't want to work with them. I mean next step is "I copy paste code [...]

This is a nonsensical slippery slope fallacy. In no way does that behavior follow from placing many comments in code. It also says nothing about the clearly demonstrated value of redundancy.

> I have been navigating code for 20 years and in good codebases, comments are rare and describe something "surprising".

Your definition of good here is circular. You give no argument for why they are good codebases. Did you measure how easy they were to maintain? How easy it was to onboard new developers? How many bugs they contained? Note also that correlation != causation: it might very well be that the good codebases you encountered were solo projects by highly capable, motivated developers and the comment-rich ones were complicated multi-developer projects with lots of developer churn.

> My problem with "literate programming" [...] is that I find it hard to trust developers who genuinely cannot understand unsurprising code without comments.

This is gatekeeping code by making it less understandable and essentially an admission that code with comments is easier to understand. I see the logic of this, but it is solving a problem in the wrong place. Developer competence should not be ascertained by intentionally making the code worse.


You talk as if you had scientific proof that literate programming is objectively better, and I was the weirdo contradicting it without bringing any scientific proof.

Fact is, you don't have any proof at all, you just have your intuition and experience. And I have mine.

> It also says nothing about the clearly demonstrated value of redundancy.

Clearly demonstrated, as in your example of "Calculate the sum of one and one"? I wouldn't call that a clear demonstration.

> This is gatekeeping code by making it less understandable

I don't feel like I am making it less understandable. My opinion is that a professional worker should have the required level of competence (otherwise they are not a professional in that field). In software engineering, we feed code to a compiler, and we trust that the compiler makes sure that the machine executes the code we write. The role of the software engineer is to understand that code.

Literate programming essentially says "I am incapable of writing code that is understandable, ever, so I always need to explain it in a natural language". Or "I am incapable of reading code, so I need it explained in a natural language". My experience is that good code is readable by competent software engineers without explaining everything. But not only that: code is more readable when it is more concise and not littered with comments.

> and essentially an admission that code with comments is easier to understand.

I disagree again. Code with comments is easier to understand for the people who cannot understand it without the comments. Now the question is, again: are those people competent to handle code professionally? Because if they don't understand the code without comments, many times they will just have to trust the comments. If they used the comments to actually understand the code, pretty quickly they would be competent enough to not require the comments. Which means that at the point where they need it, they are not yet professionals, but rather apprentices.


"Nearly odorless"?

I call bullshit.


Honestly just a fairly mild earthy smell. Nothing terrible. When I was a kid my dad could render the bathroom unapproachable for 15 minutes. But he drank whiskey and smoked.


Exactly. It's not that the producers or distributors (of food, content, etc.) are not malicious/amoral/evil/greedy. It's that the real solution lies in fixing the vulnerabilities in the consumers.

You don't say to a heroin addict that they wouldn't have any problems if those pesky heroin dealers didn't make heroin so damn addictive. You realize that it's gonna take internal change (mental/cultural/social overrides to the biological weaknesses) in that person to reliably fix it (and ensure they don't shift to some other addiction).

I'm not saying "let the producers run free". Intervening there is fine as long as we keep front of mind and mouth that people need to take their responsibility and that we need to do everything to help them to do so.


Doesn't the government try to ban heroin? You have to live in the real world, not your ideal world, and in the real world people are not perfectly rational agents. They make mistakes. Each and every mistake could have been avoided if the individual just had a stronger will, was a little smarter, a little more prudent, or took a little more time to think. But the fact that mistakes can be avoided, and that some people are better at avoiding them than others, does not change the fundamental issue: drugs, tobacco, gambling, and TikTok are trying to increase the rate at which mistakes are made. Wouldn't you rather live in a society where they aren't out to get you?

I think there's an argument that can be made, like, "well maybe 10% of the time people consuming alcohol is a mistake, but I just use it recreationally. The government shouldn't prohibit all drinking!" And sure. If it is really the case that people would take the same actions even if they had more time to think things through and were in a good mental state, the government should probably not be intervening for the 10% of the cases that doesn't hold. But you have to draw the line somewhere.


> I'm not saying "let the producers run free". Intervening there is fine as long as we keep front of mind and mouth that people need to take their responsibility and that we need to do everything to help them to do so.


> You: It's that the real solution lies in fixing the vulnerabilities in the consumers.

> Me: Just because mistakes can be avoided and some people are better at avoiding them than others does not change the fundamental issue: drugs, tobacco, gambling, and TikTok are trying to increase the rate at which mistakes are made. Wouldn't you rather live in a society where they aren't out to get you?


> This is where algorithmic, ad-fueled social media leads a republic.

Only if we keep repeating things like this.

People have agency and there are many people who are not led by or actively abusing social media. You don't tell a heroin addict it's not their fault, that the presence and malice of dealers made their fate inevitable.


You’re asking for a hundred million people to personally overcome engineered manipulation rather than addressing the manipulators. What you say is true for the individual but it’s not a solution at scale for our societal issues.

It’s much akin to suggesting that poor people should not blame the system that keeps them poor and instead should focus on their education and getting themselves out of their current situation. Sure, it’s accurate for the individual, but it’s not an actual solution to the problem at scale.

Heroin addicts quit aided by the intense and direct efforts and support of the people around them. Whether that’s hospital staff or family. And you often do tell heroin addicts it’s not their fault. You tell them addiction is a disease. That their addiction is not a moral failing.


There's room for a yes-and approach, it's not either-or.

Some people can make the change, and since social media is social, that small vanguard causes others to switch as well.

One can blame the system for keeping them poor, while also doing as much as possible to change their own position within the current system, those are not in opposition to each other! In fact, discouraging people from getting educated because of the system is its own form of oppression.

Highlighting people's own agency to make changes for themselves also highlights how the engineered manipulation of social media is not inevitable. These are complementary things to do.


> In fact, discouraging people from getting educated because of the system is its own form of oppression.

Who is doing this? Did I suggest that addressing the systemic issues with the cycle of poverty precludes individuals pursuing education? No, I said it doesn’t on its own resolve the issue of poverty as a whole.

> Only if we keep repeating things like this.

I still think the answer to this problem will not hinge on individuals choosing to interact with social media less and more intelligently. The vast majority of us know social media is manipulating and dividing the population. We all know we should use it less. We have been having these discussions for years, and they have had very little actual effect on the overall situation we find ourselves in.

You are right in that it can be a yes-and type situation, but my worry is that bringing personal responsibility to the forefront of the conversation mostly serves to diminish the responsibility we face as a society to rein in these monstrous (in both ways) companies that are actively destroying our social fabric: in pursuit of profits in the most charitable view, or in pursuit of bringing us into a hellscape of a new world order, re Thiel et al.


We also need to leave platforms with highly manipulated and unclear algorithms. Especially the former Twitter, Facebook, and TikTok.

This is one of the great things about BlueSky, you can make your own feeds. The bad thing about BlueSky is that the default algorithms are pretty bad (except for the simple "following" feed). But choosing your own sets of feeds, each with their own algorithms, is a great way to keep up with highly focused news and also allow discovery of new information, without as much manipulation as you'll get on the past generation of social media.


> It is always the eternal tomorrow with AI.

ChatGPT is only 3 years old. Having LLMs create grand novel things and synthesize knowledge autonomously is still very rare.

I would argue that 2025 has been the year in which the entire world has been starting to make that happen. Many devs now have workflows where small novel things are created by LLMs. Google, OpenAI and the other large AI shops have been working on LLM-based AI researchers that synthesize knowledge this year.

Your phrasing seems overly pessimistic and premature.


That GP's comment is an advertisement for a subscription based closed source application that gets access to your credentials.


To be fair, most vending machine operators do not allow suggestions from customers on what products to stock, let alone extensive ongoing and intentional adversarial psychological manipulation and deception.

If it had just made stocking decisions autonomously and based changes in strategy on what products were bought most, it wouldn't have any of the issues reported.


Did you use AI to help you understand the code and what it was doing (incorrectly)?

