Also when you factor in the age of the wedding participants it almost seems like a regression. A couple in their 30s should be able to afford more than a couple in their 20s a generation ago.
I don’t think my concern is that AI is going to make everything too awesome for people to cope. The fact that I can now DoorDash lunch doesn’t really matter when I can’t afford a place to live or healthcare.
Also, as the article points out, this is all mostly theoretical as we're pretty far from utopia. Just ask the private chauffeur for your burrito next time he comes by.
You’re absolutely right. I hate AI writing. It’s not that I hate AI; it’s that it makes everything it says sound like a specific combination of smug and authoritative, no matter the content. Once you realize it’s not actually saying anything, that’s the real aha moment.
This seems absurdly naive to me with the path big tech has taken in the last 5 years. There’s literally infinite upside and almost no downside to constraining the ecosystem for the big players.
You don’t think that eventually Google/OpenAI are going to go to the government and say, “it’s really dangerous to have all these foreign/unregulated models being used everywhere; could you please get rid of them?” Suddenly they have an oligopoly on the market.
Personally, in LA I had a Waymo try to take a right as I was driving straight down the street. It almost T-boned me and then honked at me. I don’t know if there has been a change to the algorithm lately to make them more aggressive, but it was pretty jarring to see it mess up that badly.
In recent weeks I've found myself driving in downtown SF congestion more than usual, and observed Waymos doing totally absurd things on multiple occasions.
The main saving grace is that they all occurred at low enough speeds that the consequences were little more than frustrating and delaying for everyone present, pedestrians and drivers alike, as nobody knew what to expect next.
They are very far from perfect drivers. And what's especially problematic is that the nature of their mistakes seems totally bizarre compared to the kinds of mistakes human drivers make.
This place is great, but my work had a function here and I walked around with one of our juniors and never have I felt so old. The pure astonishment and confusion when looking at a “floppy disk” aged me instantly.
I suppose that means the museum is doing its job then: educating people totally ignorant of the history of computing. Next time that younger person sees a floppy disk, they will know what it is.
It’s weird when the failure modes of AI are similar.
I once solved a Leetcode problem in a kind of unorthodox way, and ChatGPT and Gemini both said it was wrong in the same way. Then I asked both of them to give me a counterexample, and only Gemini was able to realize that the counterexample it gave would have actually worked.
When thinking about automation people overindex on their current class biases. For 20 years we heard that robots were going to take over the “burger flipper” jobs. Why was it so easy to think that robots could replace fast food workers? Because they were the lowest rung on the career ladder, so it felt natural that they would be the first ones to get replaced.
Similarly, it’s easy to think that the lowly peons in the engineering world are going to get replaced and we’ll all be doing the job of directors and CEOs in the future, but that doesn’t really make sense to me.
Being able to whip your army of AI employees 3% better than your competitor doesn’t (usually) give any lasting advantage.
What does give an advantage is: specialized deep knowledge, building relationships and trust with users and customers, and having a good sense of design/ux/etc.
Like maybe that’s some of the job of a manager/director/CEO, but not of anyone that I’ve worked with.
I think the logic still holds due to the red queen effect. If everyone else is getting 3% better and you’re not, it could spell trouble.
Medium term, I expect AI adoption to compound. So if you can be 3.5% better, it could become a massive advantage over a few years compared to the competition.