The entire point of signatures is that you don't have to trust the channel that delivers them.
The root of trust is the public signing key that comes with the operating system install ISO, which you can download over HTTPS, have delivered on physical media, etc.
But the public signing key that was installed via the (presumably verified) initial media is already there, so unless their private key is compromised, which would be a much bigger deal, the signatures can be trusted.
How does this chain of trust bootstrap for a first-time (or standalone) user who downloads Ubuntu from a non-Ubuntu system? If I can MITM your connection, I can change both the ISO (with its totally-not-verified public key) and the gpg sig to match, using a private key I know (and whose public key I gave you).
Sure. And how does a TLS connection for the ISO download solve this problem? You would need to make sure that the TLS connection is to the correct server. How do you check that it is the right server? Probably by checking the hostname against some webpage and the TLS certificate against some CA roots in your browser. How do you get the right hostname? How do you know the CA roots in your browser are correct? How do you know your browser executable and config are correct?
There's an extremely wide range of adversaries between "potentially capable of MITMing a network I use" and "potentially capable of screwing with my OS/browser's CA roots or actively acquiring and misusing an illegally obtained valid TLS cert for ubuntu.com".
Sure, most nation states can craft whatever TLS cert they want, with only some risk of bad press if they get caught signing a ubuntu.com TLS cert fraudulently via a CA they control/coerce. If those people are my adversary I'm screwed. "YOU'RE STILL GONNA GET MOSSAD'D UPON!"
A TLS connection for the download (and the gpg signature) protects against people like the disgruntled hotel IT guy, the kid futzing with the cafe wifi, an evil housemate, some crappy rooted IoT shit somebody hooked up to the wifi, an overly curious coworker or corporate IT drone, the red team in a company pen test.
I've heard the arguments here: that it's a difficult problem for all the mirror operators to add SSL certs, that it'll stop downloads being cacheable, etc. But I didn't really buy those arguments 5 years ago, and these days, with Let's Encrypt and HSTS, I think those arguments are even more bogus than they were in 2015...
Side-channel verification with the distro developers. Contact them over a channel that's unlikely to be compromised, get them to confirm that the keys in the system signature keyrings belong to them. Repeat for many channels until you get enough confidence.
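For what it's worth, Ubuntu's published flow is roughly: download the ISO, SHA256SUMS, and SHA256SUMS.gpg; run `gpg --verify SHA256SUMS.gpg SHA256SUMS` against the key you already trust; then check the ISO's hash against the verified sums file. The final hash-check step can be sketched in Python (function names and paths are illustrative; the gpg step still happens out-of-band):

```python
import hashlib
import hmac

def sha256_of(path, chunk_size=1 << 20):
    """Stream a file through SHA-256 so multi-GB ISOs don't need to fit in RAM."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def iso_matches(iso_path, expected_hex):
    """Compare the ISO's digest against the entry from a gpg-verified SHA256SUMS.
    compare_digest avoids timing side channels; overkill here, but cheap."""
    return hmac.compare_digest(sha256_of(iso_path), expected_hex.lower())
```

None of this helps, of course, unless the key that verified SHA256SUMS.gpg was itself obtained through a channel you trust, which is exactly the bootstrapping problem being discussed.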
To anyone who hasn't seen this yet, it's worth a watch. Half of it is a lawyer talking about why you invoke the 5th even if you don't think you're a suspect. The other half is a response by a police officer whose opening statement was "Everything [the lawyer said] is true. Ok? And it was right, and it was correct."
"Short-term, transient use, including, but not limited to, the contextual customization of ads shown as part of the same interaction"
So, they don't go to Data Brokers Inc and say "I'll sell you an estimate of gnicholas's IQ for $xxx", but they will go to Scams Inc and say "Hey, wanna target ads to people we think are stupid?"
Stupid people are easier to manipulate, especially if you continue to commercially bombard them with information coming from all different sources. And if you control all the channels that they see, then they have no alternate reality. Heck, this applies to seemingly smart people also, if they don’t reach out to different sources for a balanced view of information.
This is now becoming very true with ad networks like Google’s ad monopoly. Regardless of what website you visit, the ads all come from the same source.
So now, one use case for this is political manipulation. You can start to target people of a certain IQ range, to get them to think the same way, which can then ultimately coerce them to vote a specific way.
So now, you have a direct pathway, from planting a seed of an idea, to political manipulation, to actual physical reality.
This is now starting to sound like the premise of The Outer Limits.
But hey, morality never stopped aspiring entrepreneurs from making a billion dollars.
All companies with a brand name to protect do the latter but not the former. It's why "selling your data" is such a deceptive phrase, since most people assume this means your private data is put on a USB stick and given to bad guys...
You’re right about the literal “your data isn’t for sale” claim.
Still, I have a fun fact: at one point in time, Facebook would let you target an ad to "all men who have any of this list of N emails", and would accept a list in which N-1 of the emails belonged to women and one belonged to a man.
Result: direct targeting of a single individual.
Or, in this case: direct querying of a single individual’s attributes.
I imagine Tinder might have closed this hole as well, or maybe it never gave advertisers those exact tools. But these advertising tools can leak a lot of data, and the vendor's incentive is to leak that data while swearing up and down that it's safe.
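The trick above amounts to a set-difference query. A toy simulation (the platform API and the profile data are entirely made up) shows why an "aggregate" reach report can still answer a question about one person:

```python
# Hypothetical ad platform: advertisers upload email lists, the platform
# matches them against private user profiles and targets only the men.
PROFILES = {  # the platform's private data (illustrative)
    "alice@example.com": "woman",
    "bea@example.com": "woman",
    "carol@example.com": "woman",
    "dave@example.com": "man",
}

def men_reached(audience_emails):
    """What the platform reveals: how many uploaded emails matched a man."""
    return sum(1 for e in audience_emails if PROFILES.get(e) == "man")

# Attack: pad N-1 emails known to belong to women with one target email.
known_women = ["alice@example.com", "bea@example.com", "carol@example.com"]
target = "dave@example.com"
audience = known_women + [target]

# Any nonzero "men reached" count can only have come from the target.
target_is_man = men_reached(audience) > 0
print(target_is_man)  # -> True
```

The same construction works for any attribute the targeting tool exposes, which is why "we only report aggregates" is not by itself a privacy guarantee.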
Yes, but they also incorporate by reference all of the business purposes described in their general Privacy Policy (this page is the California addendum).
In the general Privacy Policy, [1] business purposes includes:
> Develop, display and track content and advertising tailored to your interests on our services and other sites
The "and other sites" seems like it would include selling user data to third parties, at least by my reading (IAAL).
This is in addition to all of the business purposes listed in their general Privacy Policy, so "Short term, transient" is not a limitation on all business purposes — just the additional ones listed here (in the CA addendum).
As I mention in another comment, the general Privacy Policy includes some very broad language that appears to include selling data to third parties.
> Ways to [support a market for longer-lasting clothing] include offering warranties on clothing and making tags that inform consumers of a product's expected lifespan.
So I can cryptographically prove that at a particular date at a particular time with a particular camera that I authentically pointed my camera at a screen projecting faked content?
I think this only works if you're signing something currently much harder to fake, like a full light-field or something, and even then it won't stay hard to fake forever.
You could have the camera record the lidar data as well and cryptographically sign that too. Assuming it is all done in-camera and the lidar is high-resolution enough, it should be possible to completely tamper-proof a video recording, at least at the "talking head" level.
I think Apple could start providing this for iPhone videos if they decided to, as the hardware is already there: both the lidar sensor and the Secure Enclave.
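The in-camera signing idea can be sketched as: hash the color frame and the depth map together with the capture timestamp, then authenticate the bundle under a device key. Here HMAC stands in for what real hardware would do with an asymmetric key in the Secure Enclave; all names and the key itself are illustrative:

```python
import hashlib
import hmac

# Illustrative stand-in for a per-device secret; a real camera would keep an
# asymmetric signing key in secure hardware and publish the public half.
DEVICE_KEY = b"per-device-secret-provisioned-at-manufacture"

def _payload(rgb_bytes, depth_bytes, timestamp):
    # Bind color, depth, and capture time into one message so that none of
    # the three can be swapped out after the fact.
    return (hashlib.sha256(rgb_bytes).digest()
            + hashlib.sha256(depth_bytes).digest()
            + str(timestamp).encode())

def sign_frame(rgb_bytes, depth_bytes, timestamp):
    tag = hmac.new(DEVICE_KEY, _payload(rgb_bytes, depth_bytes, timestamp),
                   hashlib.sha256).hexdigest()
    return {"ts": timestamp, "tag": tag}

def verify_frame(rgb_bytes, depth_bytes, record):
    expected = hmac.new(DEVICE_KEY, _payload(rgb_bytes, depth_bytes, record["ts"]),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, record["tag"])
```

Tampering with either channel, or with the timestamp, invalidates the tag; the hard part in practice is keeping the key extraction-resistant, which is exactly what hardware like the Secure Enclave is for.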
I think this idea of adding additional layers is on the right track because it confounds the deepfake generation problem by adding a curse of dimensionality. You can't just fake 2D. You need to fake 3D which is much harder.
I'm curious what the equivalent of "3D" vs. "2D" would be for deepfaked voice.
There is already the color channel dimension in addition to the spatial dimensions and time dimension. Adding depth doesn't make things much harder, especially because there is so much correlation between depth and color and time.
I know they don't currently do this, but if GPS satellites cryptographically signed their timing signals, could that potentially be used to establish that, in addition to a particular camera capturing the images at a particular time, it also did so in a particular-ish place?
It's possible to record and replay signals from GPS satellites with different delays, so maybe that could be used to spoof the location. But if the camera also has an internal clock, perhaps it could detect when there is too much discrepancy? I don't know how much that would constrain the location, though. Also, GPS only has limited precision, especially indoors.
Uh, maybe using cell towers instead of, or in addition to, GPS would be better? Hm.
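The internal-clock idea is simple to sketch: a replayed GPS signal carries a signed timestamp from the past, so a camera that trusts its own clock can bound how stale the signal is allowed to be (the threshold here is an arbitrary illustration):

```python
def gps_time_plausible(signed_gps_time_s, internal_clock_s, max_skew_s=0.5):
    """Reject a GPS timing signal whose signed timestamp disagrees with the
    camera's internal clock by more than max_skew_s seconds. A replay that
    delays the signal shows up as the GPS time lagging the local clock."""
    return abs(signed_gps_time_s - internal_clock_s) <= max_skew_s
```

How much this constrains location depends entirely on the internal clock's accuracy: at light speed, each millisecond of undetected delay corresponds to roughly 300 km of positioning slack.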
I mean presumably you could just have the subject or subjects of the video sign the content. A full unedited interview for example could be signed by the interviewer and interviewee.
Edit: This would of course be predicated on PKI being safe and easy for most people to use, which decades of experience have shown it is not.
As I understand it, there are two definitions of technological measure in the DMCA. One is in the anti-circumvention portion, the other in the anti-circumvention-tool portion. In the latter case:
17 U.S.C. Sec. 1201 (b)(2) (B):
a technological measure "effectively protects a right of a copyright owner under this title" if the measure, in the ordinary course of its operation, prevents, restricts, or otherwise limits the exercise of a right of a copyright owner under this title.
By reading historical accounts of what colors different things were.
If you give people black-and-white film to watch, they will guess the colors based on what they are familiar with anyway, so this approach has a chance of being more historically accurate than that.
Consider the film "They Shall Not Grow Old" in which WW1 footage was painstakingly restored and colorized. They went and found old uniforms and weapons to check the colors. They went to the present-day locations to check the colors of the grass and trees. Etc.
> Good open rate data gathering results in you not knowing that it's being tracked
At some level you know that people react badly when they know they're being tracked, so it's important to help ensure they are not aware of the tracking.
> Having said all of this: of course I don't allow images to display by default on my email provider. But I'm privacy minded and most people aren't. Which is fine, the world is diverse and that's a good thing. I love choice.
You don't like being tracked. You make a living tracking others, or helping others to do so. You think it's important to not let people discover that they're being tracked. Then you rationalize it as people making the "choice" to be tracked.
There may be an internally consistent case to be made that this is above board and ethical, but you haven't made it.
> The reason I don't mention "I see you are opening my emails" is because it makes people uncomfortable.
Oh, you're so close.
Why does it make them uncomfortable when you mention it?
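For readers unfamiliar with the mechanism under discussion: open-rate tracking usually works via a per-recipient pixel URL embedded in the email, so that the mail client fetching the image logs an "open". A minimal sketch, with the domain, secret, and names all hypothetical:

```python
import hashlib
import hmac

CAMPAIGN_SECRET = b"hypothetical-per-campaign-key"

def pixel_url(recipient_email, campaign_id):
    """Build a per-recipient tracking-pixel URL. The HMAC token identifies the
    recipient to the sender's server without exposing the email in the URL."""
    token = hmac.new(CAMPAIGN_SECRET,
                     f"{campaign_id}:{recipient_email}".encode(),
                     hashlib.sha256).hexdigest()[:16]
    return f"https://mail.example.com/o/{campaign_id}/{token}.gif"

# Embedded in the HTML body as <img src="..." width="1" height="1">; when the
# client loads remote images, the request is logged as an open. Blocking
# remote images, or proxying them server-side as some providers do, defeats it.
```

Which is why "don't allow images to display by default" is the standard countermeasure mentioned above.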
----
> How dare you sir levy false accusations against me. I NEVER said that. Please don't spread lies about me.
Two comments ago:
> So that's why you see silly emails like 'I see you are opening my email, why don't you answer' - which is insane from a marketing point of view, why creep people out? So silly. Good open rate data gathering results in you not knowing that it's being tracked.
not let people discover that they're being tracked. != not actively disclosing.
One is actively hiding. The other is not actively revealing. The fact that you are pretending they are the same makes me feel you aren't arguing in good faith. You are intelligent enough to understand this without me having to highlight it, twice.
You are similarly intelligent enough to understand that the actual person reading your email doesn't care about the distinction you're making.
If you go to someone and say, "don't worry, I didn't deceive you, I just profited off of your already existing ignorance", they're not going to be happy with that answer. Either you have their informed consent, or you don't.
To repeat, why does it make your customers uncomfortable to discover that they're being tracked?
> The disclosure is there and Snowden made sure no one can declare themselves uninformed.
I don't think I need to offer additional commentary on that claim, I think it kind of speaks for itself.
> I'm not a big player, I don't make the rules, I play within them. Don't like the rules? Work on getting them changed. Don't like my funnels? Don't sign up.
But at least on this one point, both of us seem to be completely agreed.
The advertising industry is incapable of self-regulation, and there's no point in companies like Apple, Mozilla, DuckDuckGo, or Fastmail having a 'dialog' over blocking 3rd-party cookies, auto-denying permission prompts, blocking device IDs, and caching assets serverside in emails.
They just need to push their privacy changes and stop pretending that the advertising industry is interested in holding itself to a responsible standard. There is no realistic scenario where tracking mechanisms are left open and marketers commit to only using them responsibly.
This was Apple's mistake a few weeks ago with device IDs, where they backpedaled just because Facebook was angry. Platforms can't negotiate with advertisers, they just have to change the rules and let them complain.
Ultimately, the conversation we've had here hasn't boiled down to some kind of philosophical disagreement about the nature of privacy or how different concerns should be balanced. Your position is just that you're going to do anything you're legally allowed to do, and if anyone feels violated by that, it's their fault for not stopping you.
That's not a philosophy that's worth negotiating or debating with.
Thank you, it's not what I was thinking of but looks interesting. Seems like there is some history of multi-layer displays that I wasn't aware of, not that I took the idea any further than a "wouldn't it be cool if..." scenario.