All kinds of worries are possible:
(1) It turns out all this AI-generated stuff is full of bugs, we go back to traditional software development, and the giant disinvestment triggers an economic downturn.
(2) Software quality drops so far that we can no longer produce reliable programs.
(3) Massive energy use makes it impossible to rely on sustainable energy sources, and we wreck the environment even more than we already are.
(4) AI ends up in the hands of a few big companies that abuse their power.
(5) AI becomes smarter than humans, decides humans are obsolete, and kills us all.
It obviously depends on how powerful AI is going to become. The scenarios are mutually exclusive: some assume AI turns out not to be very powerful, others assume it is extremely powerful. But I don't think it's at all unlikely that one of them happens.
1 and 2 are really only an issue if you vibe code. There's no reason to expect properly reviewed AI-assisted code to be any worse than human-written code. In fact, in my experience, using LLMs to do a code review is a great asset, if used in addition to human review.
For me it's always the fear of AI regurgitating something legally problematic directly from its training set: unintentionally introducing copyright and licensing issues even with no intention of doing so.
Obviously these issues existed before AI, but previously they required active deception. Regurgitating other people's code just becomes the norm now.
People have measurably lower levels of ownership and understanding of AI-generated code. Those using GenAI reap major savings in time and cognitive effort, but the task of verification is shifted to the maintainer.
In essence, we get the output without the matching mental structures being developed in humans.
This is great if you have nothing left to learn; it's not so great if you are a newbie or have low confidence in your skills.
> LLM users also struggled to accurately quote their own work. While LLMs offer immediate convenience, our findings highlight potential cognitive costs. Over four months, LLM users consistently underperformed at neural, linguistic, and behavioral levels.
While I agree with this intuitively, I also just can't get past the argument that people said the same thing when we switched from everyone using ASM to C/Fortran etc.
> "I also just can't get past the argument that people said the same thing when we switched from everyone using ASM to C/Fortran etc."
There was no "switch"; the transition took literally decades. Assembler and high level languages co-existed in the mainstream all the way until the 1990s because it was well understood that there was a trade off getting the best performance using assembler (e.g. DOOM's renderer in 1993) and ease of development and portability (something that really mattered when there were a dozen different CPU architectures around) using high level languages.
There is no need to get past the argument because it doesn't exist. Nobody said that.
There is a massive difference between an outright transformation of something you created yourself and a collage of snippets plus some sauce based on stuff you did not write yourself. If all you did to use your AI was to train it exclusively on your own work product created during your lifetime, I would have absolutely no problem with it; in fact, in that case I would love to see copyright extended to the author.
But in the present case the authorship is just removed by shredding the library and then piecing the sentences back together. The fact that under some circumstances AIs will happily reproduce code that was in the training data is proof positive that they are to some degree lossy compressors. The more generic something is ("for (i=0;i<MAXVAL;i++) {"), the weaker the claim for copyright protection. But higher-level constructs, more than a couple of lines long and unique in the training set, that are reproduced in the output modulo some name and/or language changes should count as automated transformation (and hence as infringing or creating a derivative work).
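To make that distinction concrete, here is a minimal sketch in C (the FNV-1a hash is chosen purely as an illustration of a "distinctive construct"; it happens to be public domain, so it carries no actual copyright claim, but its shape makes the point):

    #include <stdint.h>
    #include <stddef.h>

    /* Generic idiom (the loop from the comment above): far too
       commonplace to support any copyright claim. */
    long sum(const long *items, size_t maxval) {
        long total = 0;
        for (size_t i = 0; i < maxval; i++)
            total += items[i];
        return total;
    }

    /* A distinctive multi-line construct: specific constants, specific
       structure. An LLM reproducing something like this verbatim from
       training data, modulo renamed identifiers, is the kind of output
       argued above to be a derivative work. (FNV-1a, illustration only.) */
    uint64_t fnv1a(const unsigned char *buf, size_t len) {
        uint64_t h = 14695981039346656037ULL;   /* FNV offset basis */
        for (size_t i = 0; i < len; i++) {
            h ^= buf[i];
            h *= 1099511628211ULL;              /* FNV prime */
        }
        return h;
    }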
Is this even a controversial statement? Seems very clearly correct to me.
My original point wasn't about copyright, though. I'm setting it aside for now, because I do agree it's a problem until Congress says something (lol) or the courts do.
> can't get past the argument that people said the same thing when we switched from everyone using ASM to C/Fortran
That's a bad comparison for two reasons. One is that C is a transparent language that requires understanding of its underlying mechanics. Using C doesn't absolve you from understanding lower-level concepts and was never treated as such. The power of C comes squarely with a warning label that it is a double-edged sword.
Secondly, insofar as people have used higher-level languages as a replacement for understanding and introduced an "everyone can code now" mentality, the criticism has been validated. What we got, long before AI tooling, were shoddy, slow, insecure, Tower-of-Babel-like codebases that were awful for the exact same reasons these newest practices are awful.
Introducing new technology must never be an excuse for ignorance; the more powerful the tool, the greater the knowledge required of the user. You don't hand the most potent, dangerous weapon to the least competent soldier.
Sure, a lot of people are incompetent. But the world generally works. Which is, of course, the problem. The only time anything really gets questioned is when you get a GitHub-style "zero nines" outage.
The study compares ChatGPT use, search engine use, and no tool use.
The issues with moving from ASM to C/Fortran are different from using LLMs.
LLMs are automation, and general-purpose automation at that. "The Ironies of Automation" came out in the 1980s, and we've known about the issues ever since, like the vigilance decrement that sets in when you switch from operating a system to monitoring it for rare errors.
On top of that, previous systems were largely deterministic: you didn't have to worry that the instrumentation was going to invent new numbers on the dial.
So now automation will go from flight decks and assembly lines to mom-and-pop stores, and from deterministic to non-deterministic.
The HLL-to-LLM switch is fundamentally different from the assembler-to-HLL switch. With HLLs, there is a transparent homomorphism between the input program and the instructions executed by the CPU. We exploit this property to write programs in HLLs with precision and awareness of exactly what is going on, even if we occasionally have to drop to ASM because all abstractions are leaky.

The relation between an LLM prompt and the instructions actually executed is neither transparent nor a homomorphism. It's not an abstraction in the same sense that an HLL implementation is, and it requires a fundamental shift in thinking. This is why I say "stop thinking like a programmer and start thinking like a business person" when people have trouble coding with LLMs. You have to be a whole lot more people-oriented and worry less about the technical details, because trying to prompt an LLM with anywhere near the precision of an HLL is just an exercise in frustration. But if you focus on the big picture, the need you want your program to fill, LLMs can be a tremendous force multiplier in getting you there.
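To make the transparency point concrete, a minimal sketch (the assembly shown is typical x86-64 output from an optimizing compiler; the exact instructions depend on compiler, target, and flags):

    /* A trivial C function: its mapping to machine code is direct and
       predictable, which is what makes an HLL an abstraction you can
       reason about precisely. */
    int add(int a, int b) {
        return a + b;
    }

    /* Typical x86-64 output at -O2 (illustrative only; varies by
     * compiler and flags):
     *
     *   add:
     *       lea eax, [rdi + rsi]
     *       ret
     *
     * No LLM prompt has a comparably transparent, deterministic
     * mapping to the code it produces. */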
> The people using GenAI reap a major time and cognitive effort savings, but the task of verification is shifted to the maintainer.
The people using GenAI should be the ones doing the verification. The maintainer's job should not meaningfully change (other than the maintainer using AI to review incoming code, of course).
Why does everyone who hears "AI code" automatically think "vibe-coded"?
Are they against change in general, or only certain kinds of change? Remember when social media was seen as a near-universally good kind of progress? Not so much now.
Social media has never been seen as a universally positive force? It's the same with AI: it has good and bad aspects, as does any technology with an impact on this scale, and AI will arguably have a much bigger impact, imo.
People are generally against change that forces them to change the way they used to do things.
I'm sure most will have their reasons why they are against this particular change, but I don't think it will affect anything. The genie is out of the bottle, AI is here to stay. You either adapt or you will slowly wither away.
It reminds me of something I read on Mastodon: "genie doesn't go back in the bottle say AI promoters while the industry spends a trillion dollars a year to try to keep the genie out of the bottle"
It's certainly possible. All that is required is for AIs to become more expensive than humans. Developing projects on a $100 Claude Code subscription is a lot of fun. I bet people would simply go back to hiring human developers if that subscription cost $10,000 instead.
That is the bait and switch. The end goal is that you are out of the equation: your perceived effectiveness at using AI in place of your own labor diminishes over time until you become irrelevant.
Who has that end goal?? Who is going to direct the AI if only the CEO is left in the organization? The CEO will never actually do it, and will always need someone who can and will. I just can't see a grand plan to take humans out of the equation entirely.
If you selectively read one sentence of my comment, you risk missing the forest for the trees. I don't have any particular knowledge of the Arab Spring so I won't comment on that, but I quite clearly said that technology has good and bad aspects to it.
This is like blaming a knife for being a murder weapon. Social media is inherently good if the owners of the platforms allow good interactions to take place. But given the misalignment of incentives, we don't have nice things.
"Hi! Thank you for your feedback. You are absolutely right. I am new to GitHub and I uploaded the project as a ZIP file initially to include the assets and binaries.I am currently extracting the source files (C++ .cpp and .h) so you can browse the 'Bit-Plane sequencing' logic directly in the repository without downloading anything. It will be ready in a few minutes!In the meantime, you can find the core logic in the src/ folder inside the ZIP."
“Here’s a friendly message that will perfectly convey what you want to say”.
A friend with two PhDs says she has to talk to ChatGPT for all sorts of advice and can't feel safe not doing it, "because you know I'm single and don't have a companion to spitball my ideas". She let ChatGPT decide which way to take to get to a certain island, and she got stranded because the suggested service didn't exist.
I've lost count of the interactions I've had or witnessed in which a living, breathing authority on a subject is ignored in favour of using voice recognition to ask an LLM the question first, or worse, second.
How is the getting-stranded example different from asking on a travel forum how to get somewhere, where an active and well-intentioned user who isn't familiar with your area of travel gives you wrong instructions and you get lost?
It's because we spent the last 50 years training people that computers are algorithmic, cold, and don't make human mistakes. Your calculator can't tell you the meaning of life, but it will never get 2 + 2 wrong.
Well, now the calculator can tell you a meaning of life, but it'll get 2 + 2 wrong 10% of the time.
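For fun, a toy sketch of that contrast in C (purely illustrative; llm_add and the 10% figure come straight from the joke above, not from any real error model):

    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    /* A deterministic calculator: always right, never insightful. */
    int calc_add(int a, int b) { return a + b; }

    /* A "stochastic calculator" in the spirit of the comment above:
       right most of the time, confidently wrong ~10% of the time. */
    int llm_add(int a, int b) {
        if (rand() % 10 == 0)       /* ~10% of calls */
            return a + b + 1;       /* a confident, plausible miss */
        return a + b;
    }

    int main(void) {
        srand((unsigned)time(NULL));
        printf("calc: 2 + 2 = %d\n", calc_add(2, 2));   /* always 4 */
        printf("llm:  2 + 2 = %d\n", llm_add(2, 2));    /* usually 4 */
        return 0;
    }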
Cunningham's Law increases the likelihood that at least one other person will point out the error and correct it. Chances are you'll get more than one person posting.
LLMs don't do this. They give confident language output, not correct answers.
Because the vast and overwhelming majority of the time, if you ask a question into the ether that nobody has a good answer to, most people will gloss over it and not bother answering, as attested by decades of relatable memes (https://xkcd.com/979/). In contrast, the chatbot is trained to always attempt an answer, and is seemingly disincentivized by its training to just shrug and say "I don't know, good luck fam".