> There just happened to be a whacko that got into the White House
My counter to this is that such an occurrence was increasingly likely starting around the time the massive US Evangelical base was essentially fully captured by (and became a wing of) the Republican party. It was more and more obvious over a period of at least 40 of those 60 years you mention.
There's a kid outside the window of the place I'm staying who's been in the yard playing and talking with people online through his VR headset for like 2+ hours. He's living in the future. Whatever happens, he and his friends are going to continue to be interested in more of this.
Whether what they're using in 20 years is produced by the company formerly known as Facebook or not is a whole different question.
When people talk about the 'plateau of ability' agents are widely expected to reach at some point, I suspect a lot of it will boil down to skyrocketing costs and plummeting accuracy past a certain number of agents involved. This seems to me like a much harder limit than context windows or model sizes.
Things like Gas Town are exploring this in what you might call a reckless way; I'm sure there are plenty of more careful experiments being conducted.
What I think the ultimate measure of this new tech will be is: how simple a question can a human put to a group of LLMs, how complex a result can they get back, and how much will they have to pay for it? It seems obvious to me there is a significant plateau somewhere; it's just a question of exactly where. Things will probably be in flux for a few years before we have anything close to a good answer, and it will probably vary widely between different use cases.
Human journalists and marketing copywriters have been writing like this for at least 50 years, if not considerably longer.
I am exhausted by so many people calling writing out as AI with no evidence beyond writing style. Some things are more obvious, sure... maybe I'm just too stupid to see a lot of the rest of it? But so much of what gets called out seems incredibly familiar to me compared with the traditional print media I've been reading my entire life.
I'm starting to wonder if a lot of people just have poor literacy skills and are knee-jerk labeling anything that looks well written as AI.
I think one factor is the lack of variation. Sure, a copywriter might use those techniques as a hook, but there’s far more content using them paragraph after paragraph after paragraph than I’ve ever seen before.
You might also reframe how you read those comments. Perhaps when people are labeling a piece as “written by AI,” they’re just conveying that they perceive it to use the same “voice” that LLMs use, and judge that voice negatively. Sometimes people say things non-literally and don’t need proof.
You're right that (some) marketing copywriters have been writing in this style for decades, but suddenly every second tech blogger has assumed the same voice in the past 2 years. Not everyone is as sensitive to it. I read this crap daily, so I've developed an awareness and I'm confident in calling it out.
I don't think I've personally seen a single false positive on HN. If anything, too much slop goes through uncontested.
> If anything, too much slop goes through uncontested.
It's actually insane opening up /r/webdev and similar subreddits and seeing dozens of AI authored posts with 50+ comments and maybe a single person calling it out. Makes me feel crazy. It's not as much of a problem here, but there is absolutely a writing style that suddenly 50% of submissions are using. It's always to promote something and watching people fall for it over and over again is upsetting.
> The implications of a machine that can approximate or mimic human thinking are far beyond the implications of a machine that can approximate or mimic swimming
It seems to me like too many people are missing this point.
Modern philosophy tells us we can't even be certain whether other humans are conscious or not. The 'hard problem', p-zombies, et cetera.
The fact that current LLMs can convince many actual humans that they are conscious (whether they are or not is irrelevant, I lean toward not but whatever) has implications which aren't being discussed enough. If you teach a kid that they can treat this intelligent-seeming 'bot' like an object with no mind, is it not plausible that they might then go on to feel they can treat other kids who are obviously far less intelligent like objects as well? Seriously, we need to be talking more about this.
One of the most important questions about AI agents, in my opinion, should be: "can they suffer?", and if you can't answer that with a definitive "absolutely not" then we are suddenly in uncharted waters, ethically speaking. They can certainly act like they're suffering (edit: which, when witnessed by a credulous human audience, could cause that audience to suffer!). I think we should be treading much more carefully than many of us are.
The question of whether the current generation of "AI" can think, whether it is conscious, let alone whether it can suffer(!), is not even worth discussing. It should be obvious to anyone who understands how these tools work that they don't in fact "think", for even the most liberal definition of that term. They're statistical models that can generate useful patterns when fed with vast amounts of high quality data. That's it. The fact we interpret their output as though it is coming from a sentient being is simply due to our inability to comprehend patterns in the data at such scales. It's the best mimicry of intelligence we've ever invented, for better or worse, but it's far from how intelligence actually works, even if we struggle to define it accurately. Which doesn't mean that this technology can't be useful—far from it—but it's ludicrous to ascribe any human-like qualities to it.
So I 100% side with Dijkstra on that point.
What I'm criticizing is his apparent dismissal and refusal to even consider it a worthy philosophical exercise. This is why I think that the comparison to submarines and swimming is reductionist, and ultimately not productive. I would argue that we do need to keep thinking about whether machines can think, as that drives progress, and is a fundamentally interesting topic. It would be great if the progress wouldn't be fueled by greed, self-interest, and manipulation, or at the very least balanced by rationality, healthy skepticism, and safety measures, but I suppose this is just inescapable human nature.
> The question of whether the current generation of "AI" can think, whether it is conscious, let alone whether it can suffer(!), is not even worth discussing. It should be obvious to anyone who understands how these tools work that they don't in fact "think", for even the most liberal definition of that term.
While I agree with your second sentence here, the first one gives me pause. Why isn't it "worth discussing"? Do you refuse to engage in conversation with all mentally challenged people? Do you avoid all interactions with human children? There are many, many folks living their lives as fully as they can right now who are convinced these things are alive. There are ethical implications to that assumption regardless of whether the things are actually alive, especially when people respond to them as if they are.
We need to have better arguments and refine them for different audiences.
Are you aware of the concept of philosophical zombies? Some of the top minds on the planet are telling us they can't even determine if you or I are conscious and sentient, let alone if a machine is. On the other hand, some of those people's peers are arguing that weather patterns might be conscious (among even more extreme claims). From the standpoint of logic and reason being paramount, we cannot claim to know the answers to these questions. What we can do is discuss the ethical implications of various people coming to different conclusions about them.
Because it's obviously not true. The second sentence follows from the first.
> There are many, many folks living their lives as fully as they can right now who are convinced these things are alive.
And those people are living in a delusion, whether it's self-imposed or the result of false advertising. The way you get them out of it is by explaining the technology rationally, in terms they can understand, not by mystifying it and bringing up existential topics.
> Are you aware of the concept of philosophical zombies?
I wasn't, no.
> Some of the top minds on the planet are telling us they can't even determine if you or me are conscious and sentient, let alone if a machine is.
Look, we can philosophize about the nature of existence until we're blue in the face. People have been pondering similar questions since the dawn of humanity. FWIW I don't believe in "top minds" as having the authority to tell us anything. What we know for certain is how the technology works, since we built it. And we damn well know that this technology has absolutely zero understanding of anything. Go ahead, ask it how it works. It will tell you that it doesn't understand a single word it's generating, but it sure can string together patterns that make it look like it does. And you think there's some deeper meaning here we should discuss seriously? Please.
Like I said, I think these are interesting thought experiments, and something we should keep thinking about. But it should be clear to anyone, especially technically minded people, that we're nowhere near being able to create artificial intelligence. What we have now are a bunch of grifters and snake oil salesmen selling us a neat statistical trick and telling us it's "AI". This should be criminally prosecuted, if you ask me.
Not saying there's anything wrong with your perspective (lots of terms end up in muddy waters; it's common, and not a problem as long as everyone is on the same page), but this is what I just found on Wikipedia:
"Early on, the notebook computer and LCD vendors commonly used the term LVDS instead of FPD-Link when referring to their protocol, and the term LVDS has mistakenly become synonymous with Flat Panel Display Link in the video-display engineering vocabulary."
The cable in the article is pretty much doing the same conflation of terms the Wiki quote describes: the automotive one is a proprietary cable carrying some protocol that uses LVDS as its signalling, so at the most basic level both it and the display cable in the laptop are 'LVDS cables'. But that's also the most generic possible term, and it tells you nothing about the protocol actually being carried by the cables.
Yeah, I saw that too, which is why I posted my comment; it's surprising to me :) LVDS was an incredibly common term for display cables in that context. It still is sometimes, despite most of them being eDP (Embedded DisplayPort) now, which is quite incorrect hah
And eDP uses differential signalling at 200 or 400 millivolts, so I don't see how that's "quite incorrect". It's not "the" LVDS, but it's still in the category.
> having conceived of no further items to which AI could provide assistance
For me, the issue isn't that I can't conceive of work AI could help with. It's that most of the work I currently need to be doing involves things AI is useless for.
I look forward to using it when I have an appropriate task. However, I don't actually have a lot of those, especially in my personal life. I suspect this is a fairly common experience.
A point worthy of much more discussion! However, this article is oversimplifying things.
We are going down tumultuous, uncharted river rapids. There are many models of varying sizes and types being trained on hardware large and small, with finely tuned, bespoke weighting competing alongside industrial, committee-driven inference backed by massive budgets.
There is a wide spectrum of sizes of models and the hardware/environments they run in. What works today in one place may not work tomorrow in another. What stopped working yesterday may start working better next month.
The only way to have enough control to be scientific about it is to run your own hardware and provision a large amount of it to R&D. For anything big, this is very expensive.
In summary: you cannot predict how role-playing-style prompts will influence output until you thoroughly test them against a proper 'control' prompt on whatever stack you're currently running.
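A minimal sketch of what that kind of controlled comparison might look like, in Python. Everything here (the call_model() adapter, the prompts, the scorer) is an illustrative placeholder, not any particular vendor's API; wire it to your own stack:

    import random
    import statistics

    # Hypothetical adapter: swap in your actual client (local model,
    # hosted endpoint, whatever you run). Returns "" so the harness
    # executes end to end as-is.
    def call_model(system_prompt: str, user_prompt: str) -> str:
        return ""

    ROLE_PLAY = "You are a grizzled staff engineer who answers bluntly."
    CONTROL = "You are a helpful assistant."

    def score(answer: str, expected: str) -> float:
        # Stand-in scorer; use whatever you trust (exact match, rubric,
        # human grading).
        return 1.0 if expected.lower() in answer.lower() else 0.0

    def run_arm(tasks, system_prompt, n_samples=5):
        # Sample several completions per task; single runs are too noisy
        # to separate a prompt effect from sampling randomness.
        return [
            score(call_model(system_prompt, question), expected)
            for question, expected in tasks
            for _ in range(n_samples)
        ]

    def compare(tasks, n_permutations=10_000):
        treatment = run_arm(tasks, ROLE_PLAY)
        control = run_arm(tasks, CONTROL)
        observed = statistics.mean(treatment) - statistics.mean(control)
        print(f"role-play mean: {statistics.mean(treatment):.3f}")
        print(f"control mean:   {statistics.mean(control):.3f}")
        # Permutation test: how often does randomly relabeling the pooled
        # scores produce a gap at least as large as the observed one?
        pooled = treatment + control
        extreme = 0
        for _ in range(n_permutations):
            random.shuffle(pooled)
            a, b = pooled[:len(treatment)], pooled[len(treatment):]
            if abs(statistics.mean(a) - statistics.mean(b)) >= abs(observed):
                extreme += 1
        print(f"permutation p-value: {extreme / n_permutations:.4f}")

    if __name__ == "__main__":
        compare([("What is 2 + 2?", "4"),
                 ("Name the capital of France.", "paris")])

The permutation test at the end is just a cheap sanity check that any gap between the two arms isn't sampling noise; swap in whatever statistics you actually trust.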