
If you compare the Geekbench 5 results, there is an approximately 8% IPC improvement. Geekbench 5 does not use SME.


The M3 Pro had some downgrades compared to the M2 Pro: fewer performance cores and lower memory bandwidth. This did not apply to the M3 and M3 Max.


For example, when you set a timer each day at 8 pm, this data is used to suggest a timer shortly before 8 pm. It's a convenience feature.


This article is highly misleading, making it sound like Siri is collecting data from apps and sending it to Apple. This is not the case. Siri Suggestions are fully on-device, though they can sync across devices with mandatory E2EE. Apple never gets access to any of this data.


Apple can remotely execute code on any internet-connected device running a proprietary Apple operating system.

It is only a matter of time before courts realize this.

The CCP controls the Apple software signing HSMs in China for a reason.


But if this is your threat model - that you have no trust of the operating system or the vendor - then all of this is pointless because at any time they can just backdoor themselves. Apple could just never ask or collect this, but still they're one update away from starting to collect it.

Of course that's always a threat with any computer, but you must place some amount of trust somewhere.


If Apple does not collect the data today, then a future court order cannot compel them to hand over data that was never stored.

Personally I only use reproducibly built FOSS software and I isolate most of my hardware and workloads from each other with virtual machines via QubesOS.

Proprietary software is not at all required to be well integrated into modern society.


> from starting to collect it.

So even then they would have no data before that point!


  you must place some amount of trust somewhere.
Using something and trusting it are different things.


[flagged]


Apple operating systems automatically apply patches to devices for critical security updates so long as those patches are signed by a cryptographic private key held by Apple. That is in fact an RCE system that already exists.
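A minimal sketch of the claim: an auto-update client installs any payload carrying a valid signature from the vendor key, so possession of that one key authorizes code execution on every device. This is purely illustrative; HMAC stands in for the real asymmetric signature scheme, and all names here are made up.

```python
import hashlib
import hmac

# Hypothetical stand-in for the vendor's signing key: whoever holds
# this key controls what the fleet of devices will install.
VENDOR_KEY = b"held-by-the-vendor"

def sign_update(payload: bytes, key: bytes) -> bytes:
    """Produce a signature over an update payload (HMAC as a stand-in)."""
    return hmac.new(key, payload, hashlib.sha256).digest()

def apply_update(payload: bytes, signature: bytes) -> str:
    """Install any payload whose signature verifies against the vendor key."""
    if not hmac.compare_digest(sign_update(payload, VENDOR_KEY), signature):
        return "rejected"
    # A real updater would now install and run the payload -- no further
    # consent step is required once the signature checks out.
    return "installed"

update = b"any code at all"
sig = sign_update(update, VENDOR_KEY)
print(apply_update(update, sig))           # installed
print(apply_update(update, b"\x00" * 32))  # rejected
```

The point is that the check gates only on the key, not on what the payload does, which is why control over the key holders matters.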

There also exist humans that have access to those private keys, and those humans can be controlled by money, court orders, or violence.

In China the CCP has control over the software signing keys, so they can push any software to any Apple device they like.

How long before US politicians start demanding the same?

Or maybe they just make a security mistake. Maybe a state actor performs a side channel attack on the known vulnerable Apple Silicon that powers their HSMs.

SPOFs always tend to fail.


> In China the CCP has control over the software signing keys, so they can push any software to any Apple device they like.

I've never heard about China having special iOS releases signed by different keys. Fairly sure all devices across the world get the same exact OS builds, but would be curious to read more about this if you have any sources?


You cannot really use that as an argument. Everyone does that, so it does not make Apple “worse”.

The same applies to almost every Linux distribution, since their builds are not reproducible.

It is just a matter of who you want to trust. Eventually you need to trust someone.


Not everyone does this.

My core area of research is supply chain attacks, and I run a company where we regularly train high risk organizations how to remove trust from any single human or system in critical areas of their stack like key management, CI/CD, etc. Many of our clients are fintech companies where trusting a single person, even a system administrator, would seriously endanger them.

Meanwhile Apple sysadmins still manage most of their infra with centrally controlled Puppet nodes last I heard.

Speaking of Linux distros, I created a 100% reproducible and full-source-bootstrapped Linux distro where every package is signed and reproduced by multiple people to avoid having to trust any single human, including me.

https://codeberg.org/stagex/stagex

Guix comes close to this mark too, so we are hardly the only viable option in town.

There are always alternatives to centralizing trust and you do not need to have an Apple-sized budget to afford them.
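The multi-party reproducibility check described above can be sketched roughly like this: several independent builders compile the same source, publish artifact hashes, and a package is trusted only when a quorum of them agree bit-for-bit. Builder names and digests are made up for illustration; this is not StageX's actual tooling.

```python
import hashlib

def digest(artifact: bytes) -> str:
    """Hash of a built package artifact."""
    return hashlib.sha256(artifact).hexdigest()

# Each independent builder attests to the hash of the artifact it produced.
attestations = {
    "builder-a": digest(b"bit-for-bit identical package"),
    "builder-b": digest(b"bit-for-bit identical package"),
    "builder-c": digest(b"bit-for-bit identical package"),
}

def verified(attestations: dict, quorum: int = 2) -> bool:
    # Trust requires at least `quorum` independent builders producing
    # exactly the same artifact -- there is no single signer to compromise.
    digests = set(attestations.values())
    return len(digests) == 1 and len(attestations) >= quorum

print(verified(attestations))  # True
print(verified({"builder-a": "aaa", "builder-b": "bbb"}))  # False: builds diverged
```

The design choice is that compromise now requires subverting multiple independent parties simultaneously, rather than one key custodian.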


  Eventually you need to trust someone.
There are plenty of things I use but don't trust.


So you have zero evidence that (a) Apple has deliberately put in backdoors or that (b) the CCP has access to iOS source code.


Siri suggestions might more accurately be termed "Springboard suggestions". From what I recall, it essentially works as a fuzzy matcher for suggesting applications to launch in similar contexts (time window, previous app used, etc.). It's like a smart history feature, and no, I don't think it ever leaves the device at all or even syncs via iCloud, since I have completely different suggestions across my iPhone and two iPads.
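The "fuzzy matcher over launch context" idea described above can be sketched as a tiny on-device scorer: past launches are scored by how closely their context (time window, previous app) matches the current one, and the best match is suggested. This is a guess at the shape of the feature, not Apple's implementation; all data and weights are invented.

```python
from collections import Counter

# Hypothetical local launch history: which app was opened, at what hour,
# and which app was in use just before. None of this leaves the device.
history = [
    {"app": "Timer",    "hour": 20, "prev": "Messages"},
    {"app": "Timer",    "hour": 20, "prev": "Mail"},
    {"app": "Podcasts", "hour": 8,  "prev": "Alarm"},
]

def suggest(hour: int, prev: str) -> str:
    """Score each past launch against the current context; suggest the winner."""
    scores = Counter()
    for launch in history:
        score = 0
        if abs(launch["hour"] - hour) <= 1:  # within the same time window
            score += 2
        if launch["prev"] == prev:           # same preceding app
            score += 1
        scores[launch["app"]] += score
    return scores.most_common(1)[0][0]

print(suggest(hour=20, prev="Mail"))   # Timer
print(suggest(hour=8, prev="Alarm"))   # Podcasts
```

Purely local state like this would also explain why suggestions differ across devices when nothing syncs.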


Safari is not hardcoded to the first position, it is fully randomized.


Passmark is known to be a poor benchmark; something like Geekbench, whose results closely track the industry-standard SPEC suite, would be a better comparison for real-world performance.

M3: https://browser.geekbench.com/v6/cpu/5411370

Intel 165H: https://browser.geekbench.com/v6/cpu/5387822

Around the same multi-core results, significantly better single core for the M3.


Do you have something more recent than a leak from over 2 years ago that has long been fixed? I'm curious why iCloud Private Relay is theatre at the moment.


[flagged]


> It was advertised as being private, but it wasn't

Signal had a bug once. Ergo, it’s a scam?


> Signal had a bug once.

Are you referring to this?

https://www.forbes.com/sites/daveywinder/2019/10/05/signal-m...

It was a bad bug in the Android client, to be sure, but it didn't bypass Signal encryption.


And a VPN leaks a lot of information about your network activity to the operator, so by your standard it is privacy theater. Do you see why you’re coming across as having inconsistent standards and thereby perhaps an axe to grind?

iCloud Private Relay is used for all network activity from Safari which does not seem like a “limited amount of activity.”


> And a VPN leaks a lot of information about your network activity to the operator, so by your standard it is privacy theater.

A VPN isn't designed to keep your IP address hidden from the operator. iCloud Private Relay doesn't hide your IP address from Apple either. That's not the point, and everyone knows this in advance. The point is to keep your IP address hidden from the request destination servers.
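The split-knowledge design this refers to can be sketched conceptually: in a two-hop relay, the ingress hop sees your IP but not the destination, while the egress hop sees the destination but not your IP, so the destination server only ever sees an egress pool address. This is a toy model of the idea, not Apple's implementation; all names are illustrative.

```python
def ingress_hop(client_ip: str, encrypted_request: dict) -> dict:
    # The ingress relay knows who you are (client_ip) but forwards an
    # opaque inner request it cannot read, stamped with a pool address.
    return {"from": "ingress-pool-ip", "inner": encrypted_request}

def egress_hop(relayed: dict) -> dict:
    # The egress relay unwraps the destination but never saw client_ip;
    # the destination server observes only the egress pool address.
    inner = relayed["inner"]
    return {"dest": inner["dest"], "source_seen_by_dest": "egress-pool-ip"}

request = {"dest": "example.com"}
result = egress_hop(ingress_hop("203.0.113.7", request))
print(result["source_seen_by_dest"])  # egress-pool-ip, not the client's IP
```

The design goal is that neither hop alone can link client IP to destination, which is a different guarantee from what a single-operator VPN offers.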


Your logic is that any flaw in an implementation renders it useless. In the case of VPNs, operators can and do share information about clients to destination servers, law enforcement, and more out of band. Just because it involves a spreadsheet and not a WebRTC request does not mean it can be forgiven if you're going around making absolutist claims regarding efficacy.


> Your logic is that any flaw in an implementation renders it useless.

I didn't say that. It's a straw man.


You said that iCPR is privacy theatre because of a resolved security bug from 2 years ago. Please spell out the implication of that claim for me then.


> Please spell out the implication of that claim for me then.

I'll spell out my views below, but I want to start by noting that I don't agree with the way you've characterized them. Going all the way back to your initial reply, I don't like the way this leading question was phrased:

> What specific features do you allege exist just to mislead the general public?

I think Hanlon's razor is a false dilemma. With a big company like Apple, there's typically a combination of bureaucratic incompetence and marketing exaggeration. Clearly, Apple leadership has decided to make privacy a consumer differentiator for their products, so they have a financial incentive to hype privacy features as much as possible. As a consequence, Apple management would be eager to be pitched any and all privacy features from engineering; these may even lead to bonuses and promotions, though that's purely speculation on my part.

Regardless of the personal motivations of employees, the company is pursuing privacy features in earnest and isn't intending for them to be fake. Nonetheless, the company also has the unfortunate habit of shipping half-baked features and implementations. This is driven largely by the artificial, forced march of the annual release schedule, which demands that great new features be continually announced at a certain time, whether they're ready or not. The situation is not unique to privacy features either; Apple's entire software product line is suffering in quality. Engineering simply doesn't have enough time to do things right, which results in new features that are superficial and/or flawed. You could say it's marketing-driven incompetence.

Several commenters have mentioned that all software has bugs, as if that were somehow profound, or as if I were somehow ignorant of software development as a software developer. (I actually had to spend some time fixing a bug before I wrote this reply.)

But not all bugs are created equal. From my perspective, a bug that's discovered relatively quickly by someone else is worse than a bug that's discovered only years later, in the sense that it suggests insufficient QA on the part of the developers, who themselves should have noticed the bug before it shipped. And a bug in the primary functionality of a product or feature is worse than a bug in a more obscure part of the software. This is why I'm not impressed by the length of time since a bug was fixed; if a feature or product was shipped with an obvious, fundamental flaw in its main functionality, that's a stain on the reputation of the developers. And if they keep making such mistakes, why should you ever trust them to be competent? No bug fix can fix the bug writers.

I don't want to focus too much on iCloud Private Relay, though. It wasn't what I had in mind when I was writing my original comment, and I don't even use iCloud Private Relay myself. I mostly don't use a VPN, except on rare occasions. I've discussed iCloud Private Relay here only because you asked me about it.

It's been a busy afternoon/evening for me, so I've kind of run out of steam now on this comment, but I promised I would reply.


The difference is really obvious during code compilation or other tasks that can take advantage of all cores.


Speaking only of the Air here: the newer laptop feels thicker, since it doesn't have the nice tapered front. In general use, aside from supporting only one external display in clamshell mode, the M1 feels similar. If you push things, like video editing or rendering, the M2 or M3 are markedly better.

However, you quickly hit throttling if you push for more than a couple of minutes at a time (when you export in Handbrake, for example, it slows down and only runs marginally faster than the M1, in my experience).


Someone should sell a little app for Mac that glows red when an upgrade would have been useful (e.g., when you maxed out your max Mac).

I suspect mine would be green almost all the time, even on this almost three year old M1 Max.


This is a cool idea, although presumably the M3 would be faster even for single-threaded apps? Also, are there cases where memory bandwidth could leave the CPU at less than 100% on the M1 but still be faster on the M3?


There is no evidence that Apple ever planned to introduce an MFi system with USB-C. What likely happened is that leakers misinterpreted the USB-C E-Marker as an MFi chip.


Or that Apple was considering a "made for iPhone" certification program to allow manufacturers of USB-C devices to certify that they'll work with iOS devices -- which would be a perfectly reasonable thing for them to have! -- and leakers misinterpreted that as meaning that Apple intended to implement a restrictive device authorization scheme like the one they had for Lightning devices.

(Just because newer iOS devices have a USB-C port doesn't mean that all USB-C devices will work with them! Devices still require drivers; if iOS doesn't know how to handle a device, it won't work.)


You either have to enable E2EE or disable both Messages in iCloud and device backups. Otherwise the device backups contain a copy of your messages.


The "Messages in iCloud" sync is end to end, so you can enable it and disable iCloud backup, or manually backup on your computer: https://support.apple.com/en-us/102651


Yeah, it is end-to-end encrypted, but the keys are part of your device's iCloud backup. So unless you turn on end-to-end encryption for that backup or disable it, Apple can access the keys required to decrypt the Messages in iCloud data.
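The access rule being described reduces to a small truth table: Messages in iCloud is end-to-end encrypted, but its key rides along in a standard iCloud device backup, so Apple can read the messages unless the backup itself is end-to-end encrypted or disabled. A sketch of that logic (my reading of the thread, not an official specification):

```python
def apple_can_read_messages(messages_in_icloud: bool,
                            icloud_backup: bool,
                            backup_e2ee: bool) -> bool:
    """Can Apple decrypt your synced messages, under the rule described above?"""
    if not messages_in_icloud:
        return False  # nothing synced to iCloud at all
    # The message key is escrowed inside the device backup, so Apple can
    # recover it whenever the backup exists and is not itself E2E encrypted.
    return icloud_backup and not backup_e2ee

print(apple_can_read_messages(True, True,  False))  # True: key is in a readable backup
print(apple_can_read_messages(True, True,  True))   # False: backup is E2EE
print(apple_can_read_messages(True, False, False))  # False: no backup holds the key
```

This is why the earlier comment says you must either enable E2EE for backups or disable them (or back up locally) to keep the messages out of Apple's reach.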


I believe the reason iMessages aren't protected within iCloud Backup is that they're stored decrypted in the SQLite database iMessage uses, chat.db.

