Hacker News | levkk's comments

On top of that, Intel chips are not competitive with Apple silicon. Why buy a laptop that's 30% slower and uses more energy for the same price?

30% slower than an M5 is an M3/M4. I will take that, thanks, and not concern myself with macOS or the thousand cuts of leaving x86.

To be able to run any OS you want.

Can it run macOS?

For now, yes. But probably not for the next macOS release.

To avoid having to use macOS, and suffering the whims of Apple.

I'm paying for Netflix to do that as a feature. Instagram uses that to drive engagement to sell ads. Disabling personalized content on Netflix is a revenue-neutral choice. On Instagram, that would mean their ad revenue takes a huge dive. Apples aren't oranges.


Netflix does it to drive engagement as well.


Lobsters is like that: basically a ghost town compared to Reddit. Block engagement and that's what you get.


> Lobsters

Invite only, very exclusionary. Private club with public posting? Worst of both worlds.


It's not clear to me how this is verifiable without constant hardware supervision. Even that'll get cracked, just like DVD encryption back in the day.

You'd almost need dedicated hardware that can't run any software at all, just a mechanical keyboard communicating over an analog medium: something terribly expensive and inconvenient for AI farms to duplicate.


I started promoting the idea of hardware verification about 6 years ago. Didn't get any traction and I doubt I ever will.

I think Apple is the only company that would even be able to do that. You have to control the full stack, down to the pixels or the speaker.


One physical robot with four wheels, a camera, and 101 up/down "fingers" to match the keyboard can roll between physical machines and type on mechanical hardware keyboards. This brings the ceiling on how many accounts you can control down to the number of computers you have, but that's not a high price to pay.


As the tool gets better, people trust it more. It's like Tesla's self-driving: it "almost" works, and that's good enough for people to take their hands off the wheel, for better or for worse.

The "almost" part of automation is the issue + the marketing attached to it of course, to make it a product people want to buy. This is the expected outcome and is already priced in.


Exactly. Waymo talked about this a few years back: they found that building it up gradually wouldn't work, because people stop paying attention when it's "almost" there, right up until it isn't and it crashes. So they set out to make their automation good enough to operate on its own, without a human driver, before starting to deploy it.


I would say the opposite here. The perpetrator rejected Claude's multiple warnings about bad consequences, and its multiple suggestions to act in safer ways. It reminds me of an impatient boss who demands that an engineer stop all this nonsense talk about safety and just do the damn thing, quick and dirty.

The guys who blew up the Chernobyl NPP also had to deliberately disable multiple safety systems that would have prevented the catastrophe. Well, you get what you ask for.


I view it more as "I crashed my car, I should have been wearing my seat belt, wear yours!"

Source: had Codex delete my entire project folder, including .git. Thankfully I had a backup.


All queries run inside transactions, and a slow lane like S3 will cause delays, which in turn block vacuum and cause more problems than they solve. Most Postgres deployments (e.g., RDS) won't let you install custom extensions either, although RDS does have its own S3 extension (which I wouldn't recommend you use).

The right place to manage this is either in the app or in a proxy, before the data touches Postgres.
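A minimal sketch of the app-side pattern, assuming Python; `s3_upload` and `FakeTxn` are hypothetical stand-ins (a real client such as boto3 or psycopg would slot in the same way). The point is ordering: do the slow object-store I/O first, outside any transaction, then run a short transaction that only records the result.

```python
def s3_upload(key: str, data: bytes) -> str:
    """Stand-in for a slow object-store upload; returns the object's URL."""
    return f"s3://my-bucket/{key}"

class FakeTxn:
    """Stand-in for a database transaction; records executed statements."""
    def __init__(self):
        self.statements = []
    def execute(self, sql, params):
        self.statements.append((sql, params))

def store_document(txn, key: str, data: bytes) -> str:
    # Slow step first, outside the transaction: an upload can take
    # seconds, and holding a transaction open that long blocks vacuum.
    url = s3_upload(key, data)
    # Fast step second: the transaction only writes one small row.
    txn.execute(
        "INSERT INTO documents (key, url) VALUES (%s, %s)",
        (key, url),
    )
    return url

txn = FakeTxn()
url = store_document(txn, "report.pdf", b"...")
```

The transaction's lifetime now covers a single INSERT rather than the whole upload, so it can't hold back vacuum while S3 is slow.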


See prepared statements.
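For illustration, a parameterized-query sketch using Python's stdlib sqlite3 as a stand-in; with Postgres you'd use PREPARE/EXECUTE or a driver like psycopg, which works the same way. The statement text is fixed and only the parameters vary per call.

```python
import sqlite3

# In-memory database so the example is self-contained.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")

# The statement is parsed once; only the bound parameters change.
stmt = "INSERT INTO users (id, name) VALUES (?, ?)"
conn.executemany(stmt, [(1, "alice"), (2, "bob")])

# Reads are parameterized the same way.
rows = conn.execute("SELECT name FROM users WHERE id = ?", (1,)).fetchall()
```

Because parameters travel separately from the statement, a proxy (or the server) can recognize, cache, and plan the statement once, independent of the values.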


Models don't learn. They're retrained periodically, but junior engineers learn much faster and improve constantly. If you stop learning, you will only ever be as good as the model.

I've been coding (software engineering, I guess) for close to 15 years. The models' skill set is a comfortable L1 (intern), pushing L2 (junior). They are getting better, but at a snail's pace compared to a human learning the same thing.


This was my biggest frustration with LLM-based coding, but Agent Skills have largely solved it.

While there's a lot of room to improve them, it's a huge game changer for effective coding harnesses.


You can, I believe. We only support BIGINT, VARCHAR, and UUID for sharding keys, but all other data types are completely fine for passthrough, i.e., they can be included and used in your queries.


General statement about adoption: last time we did a Show HN (9 months ago), it was a POC running on my local machine. Now we're used in production by some pretty big companies, which is exciting!

