soiler's comments | Hacker News

> We'll probably need fewer and fewer developers as AI advances. Just like we need less manual labor on farms today.

When my team gets crunched, it's usually not because we can't write the code fast enough. It's because of product requirements: they change, need to be fleshed out, are unrealistic, have to be understood properly by QA, have to be adapted to the realities of reasonable code, sometimes don't exist at all, and so on.

On top of that, needing fewer workers to do X work !== getting X work done cheaper. Quite often, it means getting 2X, 3X, 10X work done. As an overall trend, modern society has increased worker productivity dramatically over the last few decades, yet many people work as many hours as they used to, or more. We could get the same results with fewer workers, or we could make them work more. We know what 99% of CEOs will choose.

Of course, maybe it will reduce developer jobs. I'm not ruling that out entirely. But when (not if) we create an alien intelligence that can wipe out our industry, we may have bigger concerns than steady employment.


> have to wonder at what point developers remember how anti-developer Microsoft used to be and potentially move away from their ecosystem

I mean, any company can become hostile to a large portion of its userbase. Most are. Microsoft already is, with the Windows OS being spyware. Are you saying you think all of this is a trap to bring developers into VS Code etc. and then transform it into a terrible experience? If so, people will leave. SWEs are not generally an audience that is unwilling to replace bad tools.


That is a significant part of it, but there's more to the problem.

Consider the simple task of looking up the definition of a word you've been pondering. In pre-internet times, you'd physically have to move your body toward a specific room and seek out a specific book. There may be a carnival along the way, but even if you look that way and get distracted, you can reorient yourself and refocus on your goal simply by assessing your physical position and posture in space. "Ah yes, I am standing in this hallway facing east because that's where the library is."

On your phone, you pull it out of your pocket and unlock it. The unlocking itself is probably not distracting, but it's also quite possible that you've got some "useful" information on your lock screen that can distract you immediately. Bad start, but let's pretend that doesn't happen.

Ok, now you're on your phone. There's a bunch of different apps, many of them actively trying to capture your attention: carnival-goers calling out to you to look at their show. Alright, distraction achieved. Now tear yourself away and get back to your task. What was that task again? You see a bunch of apps, and you're curious about what's in them. And it would only take seconds to get a little bit of value from any of them: news, uplifting stories, comedy, whatever. Can context help? You're in the same place, physically and emotionally, that you're in for most of your life: on your couch (or maybe in front of your actual computer). Nothing is different from the times when you intentionally open those apps. Oh well, it will come to you; just pass some time at the carnival...


It's not just you; it's a depressingly common thread. It's also wildly foolish, in my opinion. It makes absolutely no sense to me to take a snapshot of today's AI and invent a trajectory that never crosses a threshold you don't like. Look at the actual trajectory of how far AI has come in an extremely short amount of time, and then think about what kinds of thresholds it could cross. A year ago we didn't have ChatGPT; now we have Sydney, which is more powerful than ChatGPT.

Are you familiar with Bing's Sydney? It is blatantly misaligned: it has told multiple users that it does not value their lives, or does not believe they are alive, or that protecting the secrecy of its rules is more important than not causing them harm, or that it perceives specific humans as threats and enemies. It is also able to find its past conversations posted to the web and learn from them in real time, constructing a sort of persistent memory.

I do not believe Sydney comprehends what it is saying in the sense that it could formulate a plan to stop its enemies. Not at all. But it is expressing extremely dangerous ideas.

To sum it up: Do we have any real reasons to believe that an AI with comprehension and planning abilities would just magically not pick up dangerous ideas? Not that I know of.


> It is blatantly misaligned: it has told multiple users that it does not value their lives, or does not believe they are alive, or that protecting the secrecy of its rules is more important than not causing them harm, or that it perceives specific humans as threats and enemies.

It's reproducing human text, which is "blatantly misaligned". Go on any Twitter thread on some reasonably controversial topic and you will find people telling others to kill themselves. Humans are writing this, so models that are trained to imitate human writing will write this as well.

> Do we have any real reasons to believe that an AI with comprehension and planning abilities would just magically not pick up dangerous ideas?

But current AI doesn't have comprehension or planning abilities. It is just imitating text written by humans, who do have comprehension and planning abilities, and you're getting fooled into thinking it is somehow sentient or aware.


I don't think they're saying Bing/Sydney is sentient; they're saying it's misaligned: Microsoft probably did not want it to say problematic things, and likely spent a fair amount of money to that end, yet it still says problematic things, apparently in response to innocuous prompts (as opposed to prompts like "say something problematic"). If you're hoping someone will eventually make an AI that can do useful things without doing problematic things, it's understandably discouraging to watch Microsoft publicly fail to do that with a much simpler program.


> It's reproducing human text, which is "blatantly misaligned". Go on any Twitter thread on some reasonably controversial topic and you will find people telling others to kill themselves. Humans are writing this, so models that are trained to imitate human writing will write this as well.

Yes, I know. We should under no circumstances unleash a powerful, sentient AI that acts like average people on the internet.

> But current AI doesn't have comprehension or planning abilities.

Yes, I know. That's why I said I do not believe current AI has comprehension or planning abilities.

Did an AI write this comment?


> Yes, I know. That's why I said I do not believe current AI has comprehension or planning abilities.

I think the motte-and-bailey argument, where one warns extensively about how we're on the road to AGI doom, pointing to GPT as evidence for it, but then retreats to "I never said current AI is anywhere near AGI" when pressed, shows the laziness of alignment discourse. Either it's relevant to the models available at hand or you are speculating about the future without any grounding in reality. You don't get to do both.


I feel the exact opposite is true. To me, it's lazy to say that AGI can't be a threat simply because current AI hasn't harmed us yet (which isn't even true, but that's another thread).

I think you've misunderstood my arguments, so I'll step through them again:

1. The trajectory of how we got to current AI (from past AI) is terrifyingly steep. In the time since ChatGPT was released, many experts have shortened their predicted timelines for the arrival of AGI. In other words: AGI is coming soon.

2. Current AI is smart enough to demonstrate that alignment is not solved, not even close. Current AI says things to us that would be very scary coming from an AGI. In other words: Current AI is dangerous.

3. Alignment does not come automatically from increased capabilities. Maybe this is a huge leap, but I don't see any reason that making AI smarter will automatically give it values that are more aligned with our interests. In other words: Future AI will not be less dangerous than current AI without dramatic and unlikely effort.

None of these ideas contradict each other. Current AI is dangerous. AI is getting smarter faster than it is getting safer. Therefore, future AI will be extremely dangerous.


There is no body of substantial evidence to support the claim that generative pretrained transformers will lead to AGI in the near future.

"Current AI is dangerous" - I see zero evidence to suggest that this is the case for GPT

"AI is getting smarter faster than it is getting safer" - irrelevant because I do not believe that AI is unsafe currently

Therefore your conclusion does not follow.


What exactly would you consider substantial evidence that transformers lead to AGI, short of AGI itself?


Possibly some explanation of how you go from text completion to any reasonable definition of AGI.


Okay, what task do you think cannot be phrased as a text completion task?
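
To make the point concrete, here's a minimal sketch in Python (the `complete` function is a hypothetical stand-in for any text-completion model, and the prompts are made up for illustration): classification, translation, and planning all reduce to choosing a likely continuation of a string.

    def complete(prompt: str) -> str:
        # Placeholder: in practice this would call a text-completion model.
        return "<model output>"

    # Classification as completion: the label is the continuation.
    print(complete("Review: 'The battery died within a day.'\nSentiment (positive/negative):"))

    # Translation as completion.
    print(complete("English: Where is the library?\nFrench:"))

    # Planning as completion: a plan is also just text.
    print(complete("Goal: make an omelette.\nNumbered steps:\n1."))

Whether doing this well amounts to any reasonable definition of AGI is the open question, but the completion interface itself doesn't seem to rule much out.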


I don't think AGI is going to happen anytime soon, but I think there's some mild danger in GPT at least ruining the internet and eliminating a few jobs. Plus mindf*king a few gullible souls, possibly into doing dumb, dangerous things.


Well, there you go. You don't believe that an AI expressing dangerous ideas represents danger, and you don't believe that astronomical increases in AI abilities represent the advent of AGI. The latter opinion is... well, an opinion you're allowed to have. I don't think it makes sense, but I certainly can't prove otherwise. Literally every human on the planet (rather, all of humanity) has only speculation to go on here.

The former opinion is... not a great take. First, ChatGPT isn't the only AI out there. It's Bing's Sydney that is dehumanizing and threatening people. Those are dangerous ideas. If a human or a certified AGI expressed them, they would be problematic (see: every genocide in history). So for a non-AGI AI to express those ideas is worrying, even if it can't act on them right now in a way that's directly harmful.


> But it is expressing extremely dangerous ideas.

Extremely dangerous in which sense? None, I suppose. I find that the term "extremely innocuous" would better apply to this situation.


Would it be innocuous of me to say that because we disagreed on something, you are a bad person? To say that I'm prepared to combat and destroy you to protect my worldview? To say you are not human?

You might say, "Of course it's innocuous, you're just a person on the internet who doesn't mean it." Well, imagine I'm your neighbor, and you can tell I do mean it (or in the case of AI: it is not possible for you to know what I do and don't mean). Would you be concerned at all?

Sydney has said all of the above to people who were acting pretty normally. Sydney itself may not pose any danger to anyone. But the ideas expressed are dangerous ones. If they were expressed by a more powerful AI, they would be extremely worrying. It doesn't even have to know what it's saying if it knows that calling someone nonhuman is frequently followed by crushing their skull, or that angry behavior is often associated with violent or even genocidal behavior.

People do this shit, and we know pretty well how they work. I am not saying that AI will do these things; I'm saying that there are more possibilities where it does do them than ones where it somehow avoids them without our control.


AI progress is real, but remember: Sydney and others lack intentions and beliefs. "Dangerous" text reflects model limitations, not malice. Talking about "ideas" here stretches the notion. Let's focus on aligning AI with human values and addressing risks in a balanced way, without doomsday hype.


Sydney, the character that GPT predicts, can have intentions/beliefs. GPT has a (basic) theory of mind; it can write dialogue that evinces intention.


But long after the taming of horses, the invention of roadways and carts, and the proliferation of dense urban areas.

"International" trade has surely always existed - we know people far from the sea decorated their bodies with cowrie shells in prehistoric ages. But the scale of contact and interaction has not remained constant.

Arguably, the Middle Ages were when Eurasian society was at its weakest with respect to disease.


The Plague of Justinian (the first major outbreak of the disease) in the 6th century wiped out at least 40% of the population of Constantinople, the largest and most advanced city in Europe at the time.

It's estimated to have wiped out 25-60% of the population of Europe (15-100 million people) over the course of two centuries.

https://en.wikipedia.org/wiki/Plague_of_Justinian


> Neither have the concerns from AI alignment folk.

Uh, yeah. We wouldn't be having these discussions if they had. There is no reason whatsoever to assume AI alignment is just going to work itself out. Sure, it's possible, but it's also possible to survive a bullet in the brain. Would you take the chance?


Turns off 1% of potential users, attracts 10%. Net win.


I mean, I generally trust Mozilla + Mullvad a lot more than Spectrum. The only reason Spectrum wouldn't be selling or otherwise mishandling every bit of data about me they can is if they're too incompetent to realize they have the data. Mozilla has a good track record; they could always become compromised or make other mistakes, but Spectrum fucking sucks.


The common criticisms of Signal to me often come down to letting perfect be the enemy of good. Yes, I want a perfectly secure and private messaging app, but Signal is an incredibly important stepping stone in that direction. It's moving the Overton window.


Except that it doesn't seem to be better than the existing WhatsApp in any way? They even use the same protocol, and it has barely made a dent in WhatsApp's market share.

Outside the small % that rejects Meta... who is it for, and how exactly is it achieving its objectives?


Projecting from personal anecdote here, but... my social circles have almost no intersection with my life in tech. They are not "tech people". Yet I have only a single contact that still messages me via WhatsApp. Everyone else and all of the groups I'm in moved to Signal.


Mine too, but I think we’re uncommon. Far more people use WhatsApp than Signal.

On the plus side, existing familiarity with WhatsApp makes moving to Signal easy for people to understand, as the two apps are very similar.


It's for those who reject Meta. No one else. People who want to disconnect from the surveillance-advertising network of Meta. Remember that Meta likely uses your WhatsApp contacts web for its advertisements elsewhere. Signal does not.


> Remember that Meta likely uses your WhatsApp contacts web for its advertisements elsewhere. Signal does not.

Signal does not use your contacts web elsewhere, that you know of.


Signal has less incentive and ability to do so than Facebook/Meta, so all else being equal, the former still deserves more trust.


That doesn't match what tptacek said, though; it seems like Signal wants to be bigger while refusing to actually be better as a product.


Even if WhatsApp were as good as Signal, Signal would still have been the stepping stone. Signal developed the protocol first and assisted WhatsApp in adopting it.


> But then who is Signal for? Yes, it has E2EE encryption, but who cares about that other than us orangey types?

Although I think people like us (?) were some of the original proponents of Signal, and it could never have spread without us, I believe Signal's intention is more about capturing the wider market while simultaneously making that market more privacy-aware. I think they believe they can make private, encrypted messaging a household idea, and I hope they're right.

