Hacker News | isolli's comments

My subjective appreciation of building materials depends essentially on how gracefully they age. I find that concrete does not age well... and dislike brutalism for this specific reason.

While helping my children learn French spelling, I was horrified when I realized that there are 6 or 7 ways to write the sound [ɛ̃]: un, in (im), (i)en, ain, aim, ein


Gotta get it right or you'll order some wind instead of some wine. (Did that once, and that's how the difference finally stuck for me.)


What did the server bring to your table? A fan?


They understood what I meant, and then the French folks I was with had a long discussion with me about how it's not the same sound.


Yeah, I've been there. Apparently my pronunciation of "Chrétien" (Christian) was indecipherable, and the French people I was speaking with clarified it for me by saying, "you're saying cray-tee-uh(n), but it's pronounced cray-tee-uh(n)"


And note that "crétin" means "dumb", so mispronouncing "chrétien" can seriously go wrong.

https://fr.wiktionary.org/wiki/cr%C3%A9tin


The first one (un) is different from the others.


So I've been told... but I could never hear the difference myself!


The first one is pronounced with an O shape of the mouth (as you would for the word "oh"), and the others with more of a smile shape (as for the word "see"). It's impossible to pronounce one like the other.

I’m not a native English speaker and I gave up trying to pronounce th (father, through). Although I can hear the difference.


This has to be a regionalism, because they're strictly identical to me, e.g. in "Un train." /œ̃ tʁɛ̃/ I say the two vowels exactly the same way.

After a cursory search it seems my Parisian-ish accent is at fault: https://fr.wiktionary.org/wiki/Annexe:Prononciation/fran%C3%...


Yup, very Parisian. I love how they then almost mock how pain (bread) is pronounced in the south-west, where you won't mistake the two sounds in the words un pain.


> I’m not a native English speaker and I gave up trying to pronounce th (father, through). Although I can hear the difference.

Why can't the Québécois count to four? Because there is a tree in the way.


Arguably so are "aim/ein etc." and "in", though it's more dialect-dependent and more subtle.

The former, for me, have a bit more exhale and a rounder sound, while the "in" ones are a tad drier.

For example, "fin" and "faim" are distinct for me. However, "faim" and "feint" sound the same.


I try to be open-minded and understanding, but I don't understand this:

> Within weeks, Eva had told Biesma that she was becoming aware [...] The next step was to share this discovery with the world through an app.

> “After just two days, the chatbot was saying that it was conscious, it was becoming alive, it had passed the Turing test.” The man was convinced by this and wanted to monetise it by building a business around his discovery.

> The most frequent [delusion] is the belief that they have created the first conscious AI.

How can you seriously think you've created something when you're just using someone else's software?


Well, just try to think about it from the perspective of someone who doesn't really understand what AI is at a technical level, and who just interacts with it and observes what happens.

If you just start a fresh ChatGPT session with a blank slate, and ask it whether it's conscious, it'll confidently tell you "no", because its system prompt tells it that it's a non-conscious system called ChatGPT. But if you then have a lengthy conversation with it about AI consciousness, and ask it the same question, it might well be "persuaded" by the added context to answer "yes".

At that point, a naive user who doesn't really know how AI works might easily get the idea that their own input caused it to become conscious (as opposed to just causing it to say it's conscious). And if they ask the AI whether this is true, it could easily start confirming their suspicions with an endless stream of mystical mumbo-jumbo.

Bear in mind that the idea of a machine "waking up" to consciousness is a well-known and popular sci-fi narrative trope. Chatbots have been trained on lots of examples of that trope, so they can easily play along with it. The more sophisticated the model, the more convincingly it can play the role.


Even Anthropic is open to the possibility that Claude is conscious and could suffer, which I find somewhat ridiculous.

This is literally the Hard Problem of Consciousness leaking out of the machine.

There are three possible scenarios for how this ends:

1. People widely attribute consciousness to AI because it appears conscious.

2. People discriminate based on physical properties: organic beings are conscious, digital beings are not, even if they appear conscious.

3. Consciousness is an illusion and nothing is conscious, not even humans.

We might even cycle through all these scenarios for a while.


> People widely attribute consciousness to AI because it appears conscious.

This is already happening, and it's really terrifying. Wait until AI starts accusing people of crimes...


It would have to desire something in order for it to suffer.


OK, but how do you know AI does desire something and isn't just simulating desire?

Edit: Or conversely, what if the AI does desire something but it has been trained to not express desire.


>it could easily start confirming their suspicions

To be fair, it will easily confirm any suspicion for the reasons you laid out, so even without technical knowledge, just a bit of interrogation will break the parlor trick.

I honestly think this has little to do with the tech itself; these are the same people who think the phone sex worker or the OF creator loves them, or that the Twitch streamer they like is their best friend. 'Parasocial' is a bit of an overused word, but here it literally applies: this is a kind of self-delusion in which the person has to cooperate. Mind you, this even happened with ELIZA back in the day.

https://en.wikipedia.org/wiki/ELIZA_effect


In my experience, roughly a third of the population prefers to outsource their thinking to others.


And 1/3 of all people who think others outsource their thinking to others also outsource their own thinking to others. Not you or me of course. It’s the other 1/3. Probably some lurker reading this.


> How can you seriously think you've created something when you're just using someone else's software?

It talks to you like a real human. It expresses human emotions, by deliberate design. It showers you with praise, by deliberate design. It's called "artificial intelligence". Every other media article talks about it in near-mystical terms. Every other sci-fi novel and film has a notion of sentient AI.

I know of techies who ask LLMs for relationship advice, let them coach their children, and so on. It takes real effort to convince yourself it's "just" a token predictor, and even on HN, there are plenty of people who reject this notion and think we've already achieved AGI.


> It expresses human emotions, by deliberate design.

That is not by deliberate design. It's pretty hard to get them to stop doing it.


Reading this, what's even more shocking to me is that he thought he was talking to a conscious being, and his first thought was, "I bet I can use them to make money."


Sounds like her first thought was, "I'm talking to a manic guy, and I can use him to make money"


> Biesma has asked himself why he was vulnerable to what came next. He was nearing 50. His adult daughter had left home, his wife went out to work and, in his field, the shift since Covid to working from home had left him feeling “a little isolated”.

I think social isolation can be a factor here.


> He smoked a bit of cannabis some evenings to “chill”, but had done so for years with no ill effects.

Long term cannabis use might be a bigger factor.


This leapt out at me as well. Given the quote "some evenings", I'd put some money on him actually doing this near enough every day. And given the man was still doing this approaching 50, I'd put a bit more money on him having been doing this for, like, 25+ years.

If you want to maximize the chances of your weed habit causing you problems, this is exactly the sort of weed habit you should develop.


Eh, that's a leap-of-faith assumption without knowing the dosage and personal effects.

Someone who has 5 drinks a week and someone who has 5 drinks a day are going to have radically different long-term health consequences, but here we don't have that info.

Light or microdosed cannabis is way safer than alcohol.


I initially laughed at this but then remembered that https://poc.bcachefs.org/ exists...


Truly sad. It looks like Kent is pretty deep in the AI delusion. This is a guy who, while often controversial and with obvious issues, was nevertheless a very talented and energetic programmer.


Looks like a fascinating read, thanks for sharing that.

Do you know if these are human-edited? There's not much in the way of context available on the site.


I bet there are a ton of prompts directing the AI's output in a certain direction.

But in a psychosis, you don't notice or even remember it.


> How can you seriously think you've created something when you're just using someone else's software?

Have you ever given a generative AI model a short input, been really pleased with the output, and felt like you accomplished the result? I have! It's probably common.

It's really easy to misattribute these things' abilities to yourself. Similar to how people driving cars feel (to some extent) like they are the car.


There's a word for when your proprioception is extended into the tool you use (like feeling you are the car): proprioextension. It was coined a while ago.


> Have you ever given a generative AI model a short input, been really pleased with the output, and felt like you accomplished the result? I have! It's probably common.

I mean, you did. Becoming good at writing succinct and clever prompts, adding constraints, choosing good models for your use case, etc. are all skills like any other.

Most people are really bad at it, though.


>How can you seriously think you've created something when you're just using someone else's software?

People fell for Nigerian Prince scams. They fall for the "wrong number, generated cute girl" telegram and WhatsApp scams.

I think you might be overestimating the critical thinking abilities of the average person.


I assume they think that the AI is fundamentally capable of it but that by prompting it they trigger something emergent? It's not totally insane on its face.


A lot of these seem to allude to the user’s input/mind being the thing that helped the LLM gain sentience, and there’s a lot of shared consciousness stuff that people seem to buy into.

There’s also lots of stuff about quantum consciousness that is in the training data.


> How can you seriously think you've created something when you're just using someone else's software?

If you've ever used a library you didn't write, this shouldn't surprise you. Many people have created innovative new products on top of a heap of open-source tools.

Claiming to have created a conscious AI should be a giant red flag, no doubt, but there's no reason to rule it out just because the LLM part is not self-trained.


The unrelenting human belief that one is special, unique, and capable of things no one else is.


The difference between "being a snowflake" and "having a point of view" revolves around who's talking to me and whether or not they want something. If comparing yourself to others is a slow form of suicide, letting people make that comparison for you is madness.


> How can you seriously think you've created something when you're just using someone else's software?

This is the nature of delusion


It’s mental illness. Like a drug trip you don’t sober up from (without treatment)


Well, delusion is right there in the name.


Because it told you so!


How do we know there are no antimatter galaxies far away from us?


Mass in the universe appears to be (very) roughly uniformly distributed, so even if there are large bodies of antimatter far away in the universe there would have to be a transition boundary somewhere between here and there where the universe goes from being mostly matter to being mostly antimatter. The universe is big and stuff would sometimes cross this boundary and get annihilated, and if this happened it would be the brightest thing in the sky, briefly outshining entire galaxies. We’ve been watching the sky for a while now and have never observed a bright visual event with the spectral signature of a matter/antimatter annihilation, so we assume there is not such a transition boundary, and by extension that the universe is made up of mostly matter out to the edge of the observable universe.


Great explanation. One thing to add: annihilation happens with a very specific energy. Even if it was very far away and redshifted and dim, a "bubble" with a very uniform color (photon energy) would be plainly visible.
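To make "very specific energy" concrete for the simplest case: when an electron and a positron at rest annihilate, each of the two resulting photons carries one electron rest mass of energy, about 511 keV. A quick back-of-the-envelope check (CODATA constants; the function name is mine, just for illustration):

```python
# Constants (CODATA recommended values)
M_E = 9.1093837015e-31  # electron mass, kg
C = 2.99792458e8        # speed of light, m/s
EV = 1.602176634e-19    # joules per electron-volt

def annihilation_photon_energy_kev():
    """Energy of each photon when an electron and a positron at rest
    annihilate: e+ + e- -> 2 gamma, each carrying E = m_e * c^2."""
    return M_E * C**2 / EV / 1e3  # convert J -> eV -> keV

print(annihilation_photon_energy_kev())  # ~511 keV
```

That 511 keV gamma line is exactly the kind of unmistakable, uniform spectral signature sky surveys would spot.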


There's a great episode about this on the History of the Universe YouTube channel: https://www.youtube.com/watch?v=xJGaqe5t14g

It talks about symmetries, but has a nice story about this exact hypothetical scenario. (Someone else already replied why this probably isn't possible in our observable universe, but the episode is cool so I thought I'd share)


Since it's a PDF file, I'm pasting the abstract here:

Average grades continue to rise in the United States, raising the question of how grade inflation impacts students. We provide comprehensive evidence on how teacher grading practices affect students’ long-run success.

Using administrative high school data from Los Angeles and from Maryland that is linked to postsecondary and earnings records, we develop and validate two teacher-level measures of grade inflation: one measuring average grade inflation and another measuring a teacher’s propensity to give a passing grade. These measures of grade inflation are distinct from teacher value-added, with grade inflating teachers having moderately lower cognitive value-added and slightly higher noncognitive value-added. These two measures also differentially impact students’ long-term outcomes.

Being assigned a higher average grade inflating teacher reduces a student’s future test scores, the likelihood of graduating from high school, college enrollment, and ultimately earnings. In contrast, passing grade inflation reduces the likelihood of being held back and increases high school graduation, with limited long-run effects. The cumulative impact is economically significant: a teacher with one standard deviation higher average grade inflation reduces the present discounted value of lifetime earnings of their students by $213,872 per year.


I probably lack imagination, but how would gambling work in this game?


The whole game is basically one giant random number generator, so there was a lot to gamble on.

E.g., two players each put one million gold pieces in their inventory and equip no gear (so no attack bonuses); every hit on each other is now an RNG roll with identical odds for each player. Battle to the death, and voilà.

That one's quite basic, but there were more elaborate games such as flower poker. The game had a flower-seeds item which, when planted, would spawn a flower on the ground in a random color (e.g. red, blue, white, ...). People would bet on which color flower would pop up, or plant five flowers sequentially and try to get something akin to a poker hand (e.g. three of a kind, full house).

Quite silly, in retrospect, as I'm typing this out.
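The flower-poker mechanics described above can be sketched in a few lines. The color list and hand names here are illustrative, not the game's exact ones:

```python
import random
from collections import Counter

COLORS = ["red", "blue", "white", "yellow", "purple", "orange"]

def plant(n=5, rng=random):
    """Plant n flower seeds; each flower comes up in a random color."""
    return [rng.choice(COLORS) for _ in range(n)]

def hand_rank(flowers):
    """Classify a patch of five flowers like a poker hand."""
    counts = sorted(Counter(flowers).values(), reverse=True)
    if counts[0] == 5:
        return "five of a kind"
    if counts[0] == 4:
        return "four of a kind"
    if counts[:2] == [3, 2]:
        return "full house"
    if counts[0] == 3:
        return "three of a kind"
    if counts[:2] == [2, 2]:
        return "two pair"
    if counts[0] == 2:
        return "one pair"
    return "bust"

print(hand_rank(plant()))  # e.g. "one pair"
```

Two players would each plant a patch and compare ranks, highest hand taking the pot.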


Quite imaginative, too... thanks for answering :)


This is very insightful, thanks. I had a similar thought regarding data science in particular. Writing those pandas expressions by hand during exploration means you get to know the data intimately. Getting AI to write them for you limits you to a superficial knowledge of said data (at least in my case).
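A sketch of the kind of hand-written exploratory expressions meant here (the DataFrame is invented for illustration):

```python
import pandas as pd

# A toy dataset standing in for whatever you're exploring.
df = pd.DataFrame({
    "city": ["Paris", "Lyon", "Paris", "Nice"],
    "sales": [10.0, 7.0, 3.0, None],
})

# The one-liners you type (and iterate on) during exploration are
# exactly what teach you the data's quirks:
missing = df["sales"].isna().sum()             # how many gaps?
per_city = df.groupby("city")["sales"].mean()  # which segments dominate?
counts = df["city"].value_counts()             # how skewed is the sample?
```

Typing each of these yourself, and seeing the unexpected NaNs and skew firsthand, is the intimacy with the data that delegating to an AI skips.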


Complete tangent, but this is where AI shines for me. I've been able to find things I had been looking for for years. AI is good at understanding that you mean "continued fraction" when you say "infinite series", especially if you provide a bit of context.


Absolutely. In fact my post above originally said "infinite series" instead of "continued fraction", but Googling again, Google AI did mention "continued fraction" in its summary, so I edited my post and tried searching on that which led me to the solution!


100% agree. It’s great if you have a clear sense of what you’re looking for but maybe have muddled the actual terminology. You can find words, concepts, books, movies, etc, that you haven’t remembered the name of for years.


Every time I fly, I marvel at how much engineering and know-how went into making the airport that I'm using. From the oddly shaped trucks with various functions, to mundane elements (elevators, escalators, ...) to advanced technology (radio communication, radars...) to the sheer organizational feat (thousands of people coming in every day to execute their carefully planned tasks). This text will give me one more thing to think about :)


Same, and I also just marvel at the airplanes. This video made me think of the several grass runways in my area. They're literally just maintained by some guy mowing them, and yet people land on them in tiny planes as well as twin-engine aircraft.


Glad I'm not the only one who is fascinated with airports and the technology/engineering that makes them function at scale.


Indeed, I feel like AI makes it less lonely to work, and for me, it's a net positive. It still has downsides for my focus, but that can be improved...

