So Signal is sending notifications through Apple's ecosystem somehow, presumably to save battery by not keeping a persistent connection to Signal's servers? I think that's what happens on Android, too. When I ran LineageOS years ago, I had a persistent connection to Signal, as the notifications didn't come through Google. Unfortunately there was a persistent notification for the persistent connection, with no way to remove it.
After this news, Signal should ask existing users ASAP, and new users on install, something like:
> Do you want notifications to pass through Apple (no privacy, better battery) or through Signal itself (better privacy, but worse battery life due to the persistent connection to Signal's servers)?
It should be as part of the setup wizard, not inside the settings.
1. On Android, if Google Play isn't available (or you install the no-Google APK version) it'll use a WebSocket for notifications. Apple doesn't allow a persistent connection except through their own notification framework.
2. In either case, Signal doesn't send message contents through the notification framework (not even encrypted). Once Signal receives a notification, the app wakes up and reaches out to the Signal service directly for the actual encrypted message.
3. Regardless, when Signal shows the contents of your message in the notification menu, your device keeps its own record of that message content.
The FBI here didn't get anything from Apple; once they had the device unlocked, they looked at the notification database on the device to get the message contents. This isn't really any different from the fact that if the FBI has your unlocked phone, they can read your Signal messages. The notable bit is that the notification database retains messages even after the app is deleted.
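A minimal sketch of the flow described in point 2, with invented names (the real Signal client is Java/Kotlin and uses its own service API, and real messages use the Double Ratchet, not the toy XOR keystream below): the push carries no message content, and the device fetches and decrypts the ciphertext itself.

```python
import hashlib

def keystream_xor(key: bytes, data: bytes) -> bytes:
    # Toy stand-in for real authenticated encryption -- illustration only.
    stream = b""
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return bytes(b ^ k for b, k in zip(data, stream))

class MessageService:
    """Hypothetical server: stores only ciphertext, pushes a content-free wake-up."""
    def __init__(self):
        self.mailbox = {}
    def deliver(self, user, ciphertext):
        self.mailbox[user] = ciphertext
        return {"type": "wake"}          # push payload: no message content at all
    def fetch(self, user):
        return self.mailbox.pop(user)

service = MessageService()
key = b"shared-session-key"              # established out of band
push = service.deliver("alice", keystream_xor(key, b"meet at noon"))

# APNs/FCM only ever sees the wake-up; the app fetches and decrypts itself.
plaintext = keystream_xor(key, service.fetch("alice"))
print(push, plaintext)  # {'type': 'wake'} b'meet at noon'
```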
> Apple doesn't allow a persistent connection except through their own notification framework.
How can iOS not allow persistent connections at all? How would a long download work or a call in the background work at all?
> Regardless, when Signal shows the contents of your message in the notification menu, your device keeps its own record of that message content.
How is that not treated as a backdoor unless it's explicitly mentioned when someone installs iOS?
1. I'm not an iOS dev, but I know they have a specific framework for doing calls (CallKit, I think), so you'd be using specific APIs that allow a persistent connection for that purpose, and probably something similar for downloads. But generally iOS kills apps running in the background after a while, so there's no way to keep a connection open indefinitely.
2. To be clear, it's my understanding that the notifications weren't exfiltrated off the device. They are stored in a database on the iPhone and can only be accessed by unlocking the phone, so I wouldn't call it a backdoor. Both Android and iOS retain notifications in a database; on Android you can disable it, I don't know about iOS.
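Neither OS exposes this database directly, and the table and column names below are made up; this is just a sketch of why deleting an app doesn't delete its rows from an OS-owned notification log:

```python
import sqlite3

# Hypothetical OS-owned notification log, separate from any app's own storage.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE notif_log (app TEXT, title TEXT, body TEXT)")

def post_notification(app, title, body):
    # The OS records whatever the app puts into the visible banner.
    db.execute("INSERT INTO notif_log VALUES (?, ?, ?)", (app, title, body))

post_notification("org.signal", "Alice", "meet at noon")

# "Uninstalling" the app removes the app's own data directory, but this
# table belongs to the OS, so nothing here is touched by the uninstall.
rows = db.execute("SELECT body FROM notif_log WHERE app = 'org.signal'").fetchall()
print(rows)  # [('meet at noon',)]
```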
" if the ___ has your unlocked phone they can read your signal messages. "
It's worth noting you can add an additional security check to Signal (a PIN, biometric, or passphrase) that is different from your phone unlock.
The protester had also uninstalled Signal from the phone, so even with access to the phone, they would not have access to Signal. Even if they reinstalled Signal and somehow got the security PIN or passphrase, they wouldn't be able to load the prior messages; without either, no messages at all.
There is no other way to send push notifications on iOS; you have to use APNs. When the app is active you can switch to your own local socket connection, but as soon as it goes into the background those connections are lost. Pushes can also start the app in the background if it hasn't been used in a while and has been evicted by the OS.
You can send push notifications with your own encryption on top, which I believe Signal does, so Apple can't see the content on the APNs side, but your local extension that decrypts the content is still subject to the user's settings, and the content becomes part of the notification history if you put it in the notification.
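A sketch of that "own encryption on top" idea: the APNs payload carries only ciphertext, and a notification extension on the device decrypts it before display. The XOR keystream is a toy stand-in for a real AEAD such as AES-GCM, and the payload keys besides `mutable-content` (a real APNs key that enables the extension) are invented:

```python
import hashlib

def toy_cipher(key: bytes, data: bytes) -> bytes:
    # Toy XOR keystream; a real app would use an AEAD like AES-GCM.
    stream = b""
    i = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + bytes([i])).digest()
        i += 1
    return bytes(a ^ b for a, b in zip(data, stream))

device_key = b"key-known-only-to-the-device"

# What the sender hands to APNs: ciphertext only. Apple can't read it.
apns_payload = {"mutable-content": 1,
                "ct": toy_cipher(device_key, b"Bob: hi").hex()}

# What a notification service extension does on the device before display.
shown = toy_cipher(device_key, bytes.fromhex(apns_payload["ct"]))
print(shown)  # b'Bob: hi' -- and this decrypted text enters the notification history
```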
iOS is a walled garden, and that will be as strong an argument as ever, regardless of how long iOS has been a walled garden for. Also, don't you see how having to buy your privacy for £15, or even for £0.01, is ridiculous? And to your last point: a parent can easily bypass all that bullshit if they wanted. They could let their kids use a normal computer without any walled gardens. What's to stop them from seeing 4chan or motherless or anything like that? Nothing. And nothing will, unless you force all of society into your dystopic vision of a safe world for kids.
What do people expect when handing over their computing to a for-profit company? You can use various services where you knowingly hand over some of your data or offload a computational load, but with Apple it's like you're handing off the keys to your house, the plumbing, the electric wiring, the bricks, the alarm system and everything else to one entity. And you get upset when you realize you're just renting a property, with less assurance than you'd get from a slumlord in the ghetto. And for a lot of people that Apple property is their main computing property, not a vacation home away from their desktop. Once they're evicted, once the slumlord disables the heating, increases the price of water or forbids you from inviting people over, you have no other recourse.
This has more to do with the UK government than a "for-profit company". Apple has been one of the biggest forces pushing back against this kind of thing forever, at least in the US where companies still have rights.
No it doesn't. The UK government instituted age checks for social media, Apple didn't like the UK government and enabled age checks for the OS, wanting to blame the government for it. It's done this sort of thing before.
> How many false positives did you go through to do that? You guys never say. You also never do live demos of your AI because you know it's going to hallucinate and make your company a laughing stock.
The false positive rate might be too high for a live demo to work. A 50-hour (for example) live demo of someone working with the AI to find a bug might look bad, even though finding a 23-year-old security bug in 50 hours with a human in the loop would still be impressive.
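With entirely made-up numbers, it's easy to see how a high false-positive rate rules out a live demo even when the tool is genuinely useful:

```python
# Hypothetical numbers, purely illustrative.
reports = 200          # findings the AI flags
true_positive_rate = 0.01
triage_minutes = 15    # human time to dismiss or confirm each report

expected_real_bugs = reports * true_positive_rate
total_hours = reports * triage_minutes / 60
print(expected_real_bugs, total_hours)  # 2.0 real bugs, 50.0 hours of triage
```

Two real, ancient bugs for 50 hours of work could be a great deal for a security team, yet watching that process live would mostly be watching false positives get dismissed.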
> Is there anything plug-and-play that can do a reasonable job of flagging/disconnecting massive outbound data transfers?
I don't know of such a tool, but you'd have to run it everywhere you have data. If the LAPD's data was not all on-prem, which is likely (though not necessarily good practice for sensitive data), it would be harder to run an exfiltration monitor covering both the data they have on-prem and the data they keep with whatever hosting provider or "cloud" they use. Maybe the egress-transfer bill arriving in the morning plays that role to a certain extent.
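The core logic of such a tool is simple even if deployment isn't: count outbound bytes over a sliding window and flag when a threshold is crossed. A sketch (real tools would hook the network stack or cloud flow logs; the class and thresholds here are invented):

```python
from collections import deque

class EgressMonitor:
    """Flag when outbound bytes in a sliding window exceed a threshold."""
    def __init__(self, threshold_bytes, window_seconds):
        self.threshold = threshold_bytes
        self.window = window_seconds
        self.events = deque()   # (timestamp, nbytes) pairs inside the window
        self.total = 0
    def record(self, nbytes, now):
        self.events.append((now, nbytes))
        self.total += nbytes
        # Drop events that have aged out of the window.
        while self.events and self.events[0][0] <= now - self.window:
            _, old = self.events.popleft()
            self.total -= old
        return self.total > self.threshold  # True = alert / cut the link

mon = EgressMonitor(threshold_bytes=10 * 2**30, window_seconds=3600)  # 10 GiB/hour
normal = mon.record(1 * 2**30, now=0)    # ordinary traffic: no alert
alert = mon.record(12 * 2**30, now=60)   # bulk transfer burst: alert
print(normal, alert)  # False True
```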
Non-US citizens: what's the situation with cameras in public spaces where you live? In my town every second house or building entrance has a private camera pointed at the street. It's very depressing, because the cops don't care: I asked two officers in a patrol car when there was a mild case of vandalism I witnessed. Technically it's illegal, but nothing happens. The public cameras are at intersections and some bus stops. Too much, if you ask me, but the private cameras are everywhere.
Japan is exporting its AI-enhanced crime prediction platform across LatAm after successfully deploying it in Tokyo [0]. Japan is doing similar work to analyze financial transactions [1]. South Korea has also deployed a similar surveillance platform called Dejaview [2]. Even Finland has been deploying surveillance-camera fusion centers [3].
The brutal reality is everyone is doing this and there's nothing you can do about it. National Security trumps all other concerns (even the GDPR exempts governments who argue their data collection is done for National Security reasons), especially in a world as unstable as today.
Societies that are strongly collectivist in nature tend to align closer with expanded state powers and don't view it as an affront.
The techno-individualist subculture that is common on HN and Reddit is that - a subculture.
Techno-individualism cannot coexist with collectivist culture where the primacy of the state is held as sacrosanct and supreme.
And now that countries like Russia [0], Iran [1], and China [2] have been expanding hybrid warfare capabilities across the West - especially now that Europe is experiencing the largest conventional war since WW2 - we need to recognize that we are no longer in a state of peace.
In London cameras are everywhere, mostly private, and they have been for years. I don't think I've seen anything like it in any other European city I've visited.
Private cameras pointing at the street can be lawful under GDPR, but in that case the operator is a GDPR controller. That then requires them to fulfil a bunch of obligations which they probably aren't fulfilling, e.g. giving proper Article 13 notice.
I don't know if it's criminal in any EU country, but it is something you could complain to the DPA about, or initiate a civil lawsuit against the controller.
Worth noting: in some cases the camera vendor might also be a (joint) controller, as they can determine the means and purposes of the processing. If they are simply storing the video it's unlikely, but if they, for example, use it for AI training, that would likely bring them into controller territory.
> William Shockley another Nobel Prize...for inventing the transistor, probably the most consequential invention of the 20th century, could not recognize that touring college campuses promoting eugenics and forced sterilization was half-baked.
This seems different from the astrology or AIDS or cancer ideas mentioned above it, as it's scientifically sound, just widely considered unethical.
It's obvious what GP meant - we can verify that the apps we download are the apps everyone else downloads.
We can't do this with Proton, where our mail is supposedly end-to-end encrypted. They could easily view our mail if they sent us different code when we load their site.
> That isn't what "sandboxed" means, it has nothing to do with checking hashes. And no, mobile apps are not really sandboxed
Apps ARE somewhat sandboxed, and GP didn't mean that sandboxing == checking hashes. Those were two sentences appearing one after the other.
>We can't do this with Proton where our mail is supposedly end-to-end encrypted. They can easily view our mail if they can send us a different code when we load their site.
That isn't a problem with how the web works vs how apps work, that's a problem with you trusting Protonmail.
If you really wanted to be secure sending an email or any communication, you wouldn't trust any third party, be it an app or a website. You would encrypt your message on an air-gapped system, preferably a known-safe minimal Linux installation, move the encrypted file to a USB stick, insert the USB stick into a system with network access, and then send the encrypted file to your destination through any service out there; even plain old unencrypted HTTP would work at that point, because your message is already encrypted.
The second you give your unencrypted message to any third party, on any device with an input box and a network connection, is the moment you've made it public. If I had to be extremely sure that my message isn't read by anyone else, typing it into a mobile app or a web browser isn't the place I'd start; it would only be done as a last resort.
That is a problem with you not understanding how security works.
> If you really wanted to be secure
There is no such thing as "being really secure". There are threat models, and implementations that defend you against them. Because you can't prevent a bulldozer from destroying your front door does not mean that it is useless to ever lock it.
Even your air-gapped example is wrong, because it means that you have to trust that system (unless you are capable of building a computer from scratch in your garage, which I doubt).
Sending an encrypted message over the Signal app is a lot more secure than sending an email over the ProtonMail website, which itself is more secure than sending it in a non-secret Telegram channel. It's a gradient; it can be "more" or "less" secure, it doesn't have to be "all or nothing" as you seem to believe.
>That is a problem with you not understanding how security works.
That's hilariously wrong.
>There is no such thing as "being really secure".
Sure there is. "Being really secure" isn't what I said at all, and it's a vague statement to make. You're reaching to create an internet argument, and I'm frankly bored of this, you're out of your depth.
>Even your air-gapped example is wrong, because it means that you have to trust that system
I'd trust a system that I set up. I'm not going to do it on a system that you set up, that much is for certain.
> (unless you are capable of building a computer from scratch in your garage, which I doubt).
I still have an EPROM burner, so yes, I could, and I have.
>Sending an encrypted over the Signal app is a lot more secure than sending an email over the ProtonMail website
If you really think that, then nobody should be taking security advice from you.
I'm really tired of this pointless internet interaction. Goodbye.
Well, you can verify that the code that you downloaded is the same that everyone else downloaded. Even if it contains webviews.
Now if it contains webviews, it brings the security issue of... the webapps, of course.
Personally, I want an open source app. You can audit an open source app and even compile it yourself. You can't really do that with a website. And I don't mean just mobile apps, that applies to desktop apps, too. I wouldn't run a web-based terminal, for instance (do people actually do that?).
>Well, you can verify that the code that you downloaded is the same that everyone else downloaded. Even if it contains webviews.
Not impossible to do with websites, if the need to do it was there. It would take about 15 minutes to create a browser extension that could hash all the files loaded, to compare with other users who have the extension installed - but honestly that's just not needed, because if you're connecting via HTTPS, you're getting the files that are intended to be served, presumably not malicious if you trust the source. And if you don't trust the source, then why are you loading it to begin with?
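The hashing half of that idea really is simple; reducing a set of loaded resources to one comparable digest looks something like the sketch below (the genuinely hard, unsolved parts are distribution, telling updates from tampering, and trusting whoever runs the comparison):

```python
import hashlib

def page_fingerprint(resources):
    """resources: dict of url -> bytes content, as an extension would capture them."""
    h = hashlib.sha256()
    for url in sorted(resources):  # sort so load order doesn't change the digest
        h.update(hashlib.sha256(url.encode()).digest())
        h.update(hashlib.sha256(resources[url]).digest())
    return h.hexdigest()

mine = page_fingerprint({"/app.js": b"code", "/style.css": b"css"})
yours = page_fingerprint({"/style.css": b"css", "/app.js": b"code"})
tampered = page_fingerprint({"/app.js": b"code; exfiltrate()", "/style.css": b"css"})

print(mine == yours, mine == tampered)  # True False
```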
>Now if it contains webviews, it brings the security issue of... the webapps, of course.
Web applications are sandboxed in the web browser. Very little issue with that, outside of browser bugs/exploits, but bugs and exploits are found in every system ever.
>I wouldn't run a web-based terminal, for instance (do people actually do that?).
AWS has a web-based terminal for EC2 instances. It's not a problem, a lot of people use it.
> And if you don't trust the source, then why are you loading it to begin with??
I trust that Proton (for example) has implemented E2EE in their services. I wouldn't trust them to handle my unencrypted data - I wouldn't trust anyone for that. I don't trust that their security is perfect - no one's security is. So if they're breached, they could serve me malicious JS. I don't trust they're impervious to government pressure or blackmail. By making sure the files served to me are the same as the files served to anyone else, I can be relatively sure I'm not targeted personally. People could also review those files to make sure they're not malicious.
> It would take about 15 minutes to create a browser extension that could make a hash of all the files loaded, to compare with other users with the extension installed
You completely underestimate it. I am absolutely certain that you cannot create a browser extension that meaningfully solves this problem in 15 minutes.
> Web applications are sandboxed in the web browser. Very little issue with that
Except that when we are talking about end-to-end encryption, the sandbox has nothing to do with it. The sandbox defends against something else, not the server serving you an end-to-end encryption program abusing it.
> AWS has a web-based terminal for EC2 instances. It's not a problem, a lot of people use it.
I genuinely can't see if you just don't understand the point being discussed at all, or if you keep saying off-topic things as a way to divert the discussion.
>You completely underestimate it. I am absolutely certain that you cannot create a browser extension that meaningfully solves this problem in 15 minutes.
You are absolutely wrong. I write browser extensions; I can spin up a new one in a minute, and the code to monitor and hash all resources loaded by a webpage is trivially easy to write. It would be simple to set up a server to allow comparing the hashes in a POC. I'm not talking about making this a robust service that everyone can use; I'm only talking about how easy it is to do in a general way. It's far easier than you think it is.
>>>I wouldn't run a web-based terminal, for instance (do people actually do that?).
>> AWS has a web-based terminal for EC2 instances. It's not a problem, a lot of people use it.
>I genuinely can't see if you just don't understand the point being discussed at all, or if you keep saying off-topic things as a way to divert the discussion.
You're right, I certainly don't understand the nonsense you're trying to convey.
I'm also tired of this pointless internet interaction. Goodbye.
> I'm not talking about making this a robust service that everyone can use
Right. So you cannot do it. Thank you.
> I'm also tired of this pointless internet interaction. Goodbye.
Seems to me that you don't enjoy discussing with people who behave like jerks, which I admittedly did, just for you. You may not have realised it, but you started it. I am happy to disagree in a respectful tone, but you broke it first. Maybe that's something to think about in your next totally meaningful internet interaction, though it sounds like you like telling others that you know better because you are older.
Now, it only ensures that Cloudflare doesn't tamper with the WhatsApp Web code they serve; you still have to trust Meta.
I feel like reaching the same level of assurance as "checking the hash of the app" would be very hard in practice; the web is not built around doing that. Your extension would have to scan all the files you download when you load a page, hash them somehow, compare the result to... something, and then distinguish "tampered with" from "just a normal update".
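The per-file hashing step does at least have an existing web convention: subresource integrity, where a page pins `sha384-<base64 digest>` for each script it loads. Computing such a value is straightforward; the update-vs-tamper question from the paragraph above remains the real problem, since the page that declares the hashes comes from the same server you're trying to verify:

```python
import base64
import hashlib

def sri_hash(content: bytes) -> str:
    # The value browsers check against <script integrity="..."> attributes.
    return "sha384-" + base64.b64encode(hashlib.sha384(content).digest()).decode()

script = b"console.log('hello');"
print(sri_hash(script))
```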
Also you just can't "download the sources, audit them and compile them yourself" with a webapp. If you do that, it's just "an app built with web tech", like Electron, I guess?