>As a side note, this is why we built Booth.video -- to demo that this isn't a fundamental tradeoff and it's possible to have E2EE, metadata-secure video conferencing in the browser.
Now I wonder how you did that. Does the key exchange between participants happen out of band?
I think it cleared up a thing or two. However, would you mind sharing why insertable streams are apparently required for this to work? Since WebRTC traffic is already encrypted end to end, it seems to me that constructing the SDP with the key, which is currently done here with insertable streams, would be good enough.
Sure. So WebRTC is encrypted between peers when 100% of the communication is going peer to peer. But in most WebRTC services, your peer is actually the SFU (Selective Forwarding Unit), which is the server. So you're encrypting to the server, not to the other participants. (Most "pure" WebRTC platforms switch over to SFU-based communications at 4 or more participants, but many of the bigger platforms always send video/audio through the SFU regardless of how many participants there are.)
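To make the insertable-streams idea concrete: the transform encrypts each media frame's payload with a key the SFU never sees, while leaving a small header in the clear so the SFU can still parse and forward frames. A minimal Python sketch of that concept follows; the real browser API is JavaScript (`createEncodedStreams` on an `RTCRtpSender`), and the toy SHA-256 keystream here is only a stand-in for a real AEAD cipher such as AES-GCM, so don't use this construction for actual crypto.

```python
import hashlib
import os


def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    # Toy keystream: SHA-256 in counter mode. A stand-in for a real
    # AEAD cipher -- illustration only, NOT secure.
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]


def encrypt_frame(key: bytes, frame: bytes, header_len: int = 10) -> bytes:
    # Leave the first header_len bytes in the clear so the SFU can still
    # route frames and switch layers; encrypt only the payload.
    header, payload = frame[:header_len], frame[header_len:]
    nonce = os.urandom(12)
    ks = keystream(key, nonce, len(payload))
    return header + nonce + bytes(a ^ b for a, b in zip(payload, ks))


def decrypt_frame(key: bytes, blob: bytes, header_len: int = 10) -> bytes:
    header = blob[:header_len]
    nonce = blob[header_len:header_len + 12]
    payload = blob[header_len + 12:]
    ks = keystream(key, nonce, len(payload))
    return header + bytes(a ^ b for a, b in zip(payload, ks))
```

The server still transports the frames (so hop-by-hop DTLS-SRTP happens as usual), but the payload it forwards is opaque to it.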
E2EE implies both ends have an encrypted channel to transport data to each other directly, without an intermediary step. That is the very definition of the term, at least in my mind. Having the data encrypted only to and from their servers would merely be transport-layer encryption. Although I have no idea whether they implement one, the other, or both.
In the context of video conferencing software (WebRTC specifically) this is actually somewhat interesting, because typically the signaling server is the one that hands out the other peer's public key and therefore needs to be trusted. It could just as well deliver public keys for which it possesses the decryption keys, which would allow it to play man in the middle in a typically relayed call. So even if E2EE is implemented, it might be done poorly if you don't figure out how to establish trust independently.
Yeah, key delivery is the hardest part if you are privacy focused. Signal and WhatsApp have a screen where you can generate a QR code and use it to verify that you and your contact have exchanged keys without a man-in-the-middle attack.
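The verification trick is that both parties independently compute a short digest over both public keys and compare it out of band (read it aloud, or scan it as a QR code). A minimal sketch, assuming raw public-key bytes as input; Signal's real "safety numbers" use a more elaborate, iterated construction:

```python
import hashlib


def safety_number(pub_a: bytes, pub_b: bytes) -> str:
    # Sort the keys first so both sides compute the same digest
    # regardless of who is "a" and who is "b".
    digest = hashlib.sha256(b"".join(sorted([pub_a, pub_b]))).hexdigest()
    # Show a short, grouped prefix for easy human comparison.
    return " ".join(digest[i:i + 4] for i in range(0, 24, 4))
```

If a MITM substituted either key, the two sides would compute different strings, and the out-of-band comparison would fail.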
I wish browsers would do something similar with their WebRTC stack: something that shows, independently of the site (outside its execution context), which keys are used and allows for easy comparison of them. But I don't know of such functionality existing yet.
More precisely, a CNAME must be the only record type for a given label, as it would otherwise be ambiguous to resolvers. That doesn't hold true for DNSSEC-secured zones, where the signature records are allowed alongside it but don't create any ambiguity about how to resolve queries. The apex must have NS and SOA records at a minimum to work, which rules out any CNAME in addition to those. An apex with no records other than SOA and NS would work fine, though. So saying it needs to be an A record implies the wrong thing.
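As a sketch, a minimal zone illustrating the rule (all names and values are placeholders):

```
example.com.     3600 IN SOA   ns1.example.com. hostmaster.example.com. (
                               2021110801 7200 3600 1209600 3600 )
example.com.     3600 IN NS    ns1.example.com.

; A CNAME here would clash with the mandatory SOA/NS at the apex:
; example.com.   3600 IN CNAME other.example.net.    <- not allowed

www.example.com. 3600 IN CNAME example.com.          ; fine: only record at "www"
```

In a signed zone, RRSIG/NSEC records would additionally sit next to the `www` CNAME, which is the carve-out mentioned above.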
I just wish I could control the routing via routing tables instead, making dynamic routing decisions possible without specialized software that manipulates the WireGuard configuration.
Just so you know, your assumption that I am not using the right tools feels almost insulting, considering I made no claim about any tooling used. I use systemd-networkd to set up networking everywhere; I never touch wg-quick because it is no fit for my use cases. I have multiple routing tables and do policy routing, and I would really like the "via" in the routing tables to have a meaning to WireGuard's cryptokey routing, i.e. I want to be able to set "AllowedIPs" based on the routing table, very similar to reverse path filtering. I know I can set up multiple interfaces, with multiple keys to exchange and multiple ports to set, and make sure every client that needs to is kept in sync... but it would be much nicer if I could handle it like an IP-in-IP tunnel and make routing decisions with software built for this purpose.
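For context, this is what the static per-peer mapping looks like in systemd-networkd today (illustrative names and addresses); AllowedIPs is a fixed list baked into the peer definition, which is exactly the part one might want driven by the routing tables instead:

```
# /etc/systemd/network/wg0.netdev
[NetDev]
Name=wg0
Kind=wireguard

[WireGuard]
PrivateKeyFile=/etc/systemd/network/wg0.key
ListenPort=51820

[WireGuardPeer]
PublicKey=PEER_PUBLIC_KEY_BASE64=
Endpoint=peer.example.net:51820
# Cryptokey routing: only these sources/destinations map to this peer.
AllowedIPs=10.0.0.2/32, 192.168.10.0/24
```

Any destination not listed in some peer's AllowedIPs is simply dropped by WireGuard, regardless of what the kernel routing tables say.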
I use policy routing and let WireGuard mark the packets it wants to send out. The main table is empty (there is no route in it at all); external connections insert routes into their own tables. The WireGuard interface does this too, and any packet not marked by WireGuard will use its routing table, so if WireGuard is down, nothing marks packets to leave via the external interfaces. At the end of the policy rules I have an additional rule that prohibits any traffic that did not match a route in any applicable table.
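A sketch of that setup with iproute2 (the fwmark value, table number, and priority are arbitrary choices, and the commands need root); it mirrors the fwmark technique wg-quick itself uses:

```
# Let WireGuard mark its own (already encrypted) outgoing packets:
wg set wg0 fwmark 0x8888

# Everything WITHOUT the mark goes through the tunnel's table:
ip route add default dev wg0 table 100
ip rule add not fwmark 0x8888 lookup 100

# Marked packets fall through to the tables maintained per external
# interface; the main table stays empty on purpose.

# Catch-all: reject anything that matched no route in any table.
ip rule add priority 32767 prohibit
```

If wg0 disappears, table 100's route vanishes with it and the `prohibit` rule kills the traffic, so nothing leaks out of the external interfaces in plain text.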
I mean, sure, there are several dozen ways to compromise your machine once your user account is wide open... but allowing any script or software to run any command privileged, no questions asked? There is certainly a risk attached to that, and not even necessarily one related to an active attack: you are one badly written script away from doing something dumb without even noticing it.
And not having to confirm anything to use your SSH keys means not only is your machine compromised, but all the machines those keys grant access to are potentially compromised too. I use an ssh-agent (the gpg-agent implementation) that asks for the password once at the start of my session, asks for confirmation on every use, and asks for the password again after some time without usage. It's not annoying at all...
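A sketch of that setup with gpg-agent acting as the SSH agent (option names are from gpg-agent's documentation; the TTL values are just examples):

```
# ~/.gnupg/gpg-agent.conf
enable-ssh-support
default-cache-ttl-ssh 1800   # forget the passphrase after 30 min idle
max-cache-ttl-ssh 7200       # ...and unconditionally after 2 h

# ~/.gnupg/sshcontrol -- one keygrip per line; the "confirm" flag makes
# the agent prompt before every single use of that key:
# <KEYGRIP> 0 confirm
```

With a plain OpenSSH agent, `ssh-add -c <keyfile>` achieves the same per-use confirmation prompt.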
What I do for work is very close to what you want, and it's pretty easy to achieve with LightDM and dm-tool's add-nested-seat command. It starts a new Xephyr X server local to your current user and attaches the session manager to it. From there you just log in and have the second user session in a window, just like you wanted. However, I did not get clipboard sharing to work, but I actually like that extra bit of isolation. It's not even hackish, and performance is exactly native, because it is native. It's a bit harder to get sound working concurrently, but not impossible, although I never really tried. However, I use PipeWire's PulseAudio interface to stream audio to a remote AV receiver in the room, and this should work fine in the second user session too, although, as said, I never bothered to try...
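The whole thing is a one-liner, assuming LightDM is the running display manager (the window dimensions are optional):

```
# Start a nested seat: a Xephyr window on the current display with a
# fresh LightDM greeter inside, where the second user can log in.
dm-tool add-nested-seat --width 1920 --height 1080
```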
Just to make it clear: usually you are assigned a prefix which you can announce to your network, and then nodes will pick random addresses from that prefix. It is still easy to determine that the packets came from your network, just not from which device exactly, much like current NAT setups.
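A small sketch of that address selection, roughly what RFC 4941 privacy extensions do when picking a temporary interface identifier inside the delegated prefix (prefix value is a documentation example):

```python
import ipaddress
import secrets


def random_address(prefix: str) -> ipaddress.IPv6Address:
    # Pick a uniformly random host inside the prefix; for a /64 that is
    # a random 64-bit interface identifier.
    net = ipaddress.IPv6Network(prefix)
    host_bits = 128 - net.prefixlen
    return net[secrets.randbelow(2 ** host_bits)]


addr = random_address("2001:db8:1234::/64")
```

The first 64 bits (the prefix) still identify the network to any observer; only the device-identifying lower 64 bits are randomized.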
... unless your ISP changes your prefix regularly (say, every day), as is now the case for those who have a public dynamic IPv4 address. If they give you a static IPv6 prefix, it's like giving you a static IPv4 address now, i.e. something ISPs would probably want to monetize as a premium service. So I'm pretty sure we'll end up with regularly changing prefixes... which is probably good for privacy.
In most countries, the ISP is legally obliged to keep records of which customer had an address at which point in time. Usually a court can order an ISP to hand over those records. In other cases, the police, secret services, copyright lawyers, etc. can talk directly to the ISP.
Sure, but keeping logs for legal reasons doesn't mean transmitting those logs to all the web sites you visit. I think the point here was untrackability by websites rather than pure anonymity.
In general, the level of privacy/anonymity should be pretty similar with IPv4 or IPv6 in the real world. You may be better off at some hacky, local IPv4-only ISP behind a huge "CG"NAT, because they may not keep all the records, or not for so long, or whatever. You might be better off at a large mobile provider with IPv4 as a service over an IPv6 network, because the prefixes can change often and IPv4 is again basically a huge gateway hiding you to some degree. If you really need to be as anonymous as possible to most services online, use Tor or a series of VPNs/SSH tunnels in different countries, plus a number of other anonymizing tools, browser plugins, etc. Having IPv4 or IPv6 will not play a role, as people usually leave huge amounts of traces all over the place.
For websites, for all these reasons, it is way more interesting to fingerprint you with multiple different aspects of your online presence. E.g. look here: https://amiunique.org/ or here: https://robinlinus.github.io/socialmedia-leak/
For most people and websites, IPv4 vs IPv6 isn't very interesting for fingerprinting, because websites have better means, people usually don't really know or care, and most ISPs think they have a great business model handing out dynamic prefixes/IPs and combining all that with CGNAT or NAT64/DS-Lite or other gateways. Privacy extensions and local NATs of various kinds also don't help in getting more visibility into that chaos. Actually, in my experience, most tracking is currently really, really dumb, to such a degree that I cannot be reliably tracked even when I want to be (e.g. when using navigation like Waze), because for some reason the app cannot get my position via GPS even after tens of minutes. The app still hasn't figured out that I live somewhere else now and hasn't offered me a new home address. It also quite obviously doesn't recognize any pattern in my routine. That would actually kind of be the point of the app, wouldn't it? It would make things more comfortable for me, and it could serve me much better targeted ads, e.g. for good coffee in the morning. I also don't get very meaningful suggestions or ads for anything, and I only use an ad-block plugin. For those reasons, I don't think most websites are able to track me effectively, and I don't worry about law enforcement much: I am not a journalist or a dissident, nor have I done anything of interest to law enforcement.
I think we all should worry a bit less and focus on fixing stuff all over the place, so that tracking is more easily detectable and can be mostly disabled without hindering useful functionality. IPv4 and IPv6 as currently deployed are mostly "boring" in a good way. IPv6 tends to work just fine in places that want to make it work. So for most things, arguing IPv4 vs IPv6 is like vim vs emacs or some other endless and rather pointless discussion: not very informative or useful. We should use what makes sense. IPv6 starts to make sense in most places, because it becomes easier to set up every day while IPv4 becomes more expensive to maintain. The real problems of IT are elsewhere now, and they are plentiful.
You would laugh, but just doubling the amount of RAM in a computer today is an undertaking. You cannot get RAM with the same frequency and timings as what's built into a computer just a few years old, and higher-speed RAM doesn't "just work". No, you have to update the BIOS/UEFI, because it turns out they add support for newer RAM standards, and then set the frequency by hand to enforce a downclock of the newer RAM - not really something a regular user would know how to do at all. All of this adds up, and the IT field is so complex that nobody can reliably navigate it or give dependable estimates about anything. You have similar things way up the stack. Just try updating dependencies in a project (discussed at length on the front page just a few days ago: https://news.ycombinator.com/item?id=29106159). If engineers take 2 hours to solve stuff that should have taken 10 minutes, we have a major problem - we are just not getting stuff done, because we have been let down by otherwise solid assumptions.