Google (specifically the Google Chrome team) has stated before that "[JPEG XL] doesn't provide significant benefits over existing image formats" [1] and has been vocal in its disinterest in shipping support for it (it deprecated the experimental support) [1].
My guess is that the latest JPEG encoder developed by Google researchers, Jpegli [2], has a lot to do with this. Jpegli has been described in a Reddit comment as "a JPEG encoder that was developed by the JXL [JPEG XL] folks and the libjxl psychovisual model" and is said to outperform lossy WebP [3]. That whole Reddit thread has comments relevant to this discussion, specifically about the tradeoffs of supporting extra formats in browsers.
I just realized that the PNG file for the logo hosted on Wikimedia is 232KB!
That's unnecessarily large for such a simple logo, so I used vtracer (a raster-image-to-SVG vectorizer written in Rust) and SVGO (an SVG optimizer) to create an SVG version of the logo. It's 16KB, a 93.1% reduction in size! (And they look the same.)
The commands used:
> wget 'https://upload.wikimedia.org/wikipedia/commons/d/d5/SUN_microsystems_logo_ambigram.png'
--2024-05-14 17:33:31-- https://upload.wikimedia.org/wikipedia/commons/d/d5/SUN_microsystems_logo_ambigram.png
Loaded CA certificate '/etc/ssl/certs/ca-certificates.crt'
Resolving upload.wikimedia.org (upload.wikimedia.org)... 185.15.59.240, 2a02:ec80:300:ed1a::2:b
Connecting to upload.wikimedia.org (upload.wikimedia.org)|185.15.59.240|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 237484 (232K) [image/png]
Saving to: ‘SUN_microsystems_logo_ambigram.png’
SUN_microsystems_logo_ambigr 100%[=============================================>] 231.92K 821KB/s in 0.3s
2024-05-14 17:33:32 (821 KB/s) - ‘SUN_microsystems_logo_ambigram.png’ saved [237484/237484]
> vtracer --input SUN_microsystems_logo_ambigram.png --output SUN_microsystems_logo_ambigram.png.svg
Conversion successful.
> svgo --precision 1 -o SUN_microsystems_logo_ambigram.png-opt2.svg -i SUN_microsystems_logo_ambigram.png.svg
Done in 73 ms!
63.162 KiB - 76% = 15.148 KiB
> ls -lah
total 114M
drwxr-xr-x 2 wis wis 4.0K May 14 17:34 .
drwxr-xr-x 4 wis wis 4.0K May 14 17:29 ..
-rw-r--r-- 1 wis wis 232K Jan 12 2019 SUN_microsystems_logo_ambigram.png
-rw-r--r-- 1 wis wis 16K May 14 17:34 SUN_microsystems_logo_ambigram.png-opt2.svg
-rw-r--r-- 1 wis wis 64K May 14 17:33 SUN_microsystems_logo_ambigram.png.svg
Makes you think how much the Wikimedia Foundation could improve the loading experience for users and save in bandwidth costs if they optimized all the PNG raster images that can/should be optimized; this file is a prime example.
I think it's up to the users themselves, who sometimes also write bots like this. I have encountered a lot of such effort when clicking on an image, especially on the more popular ones. As an example, check this out: https://commons.wikimedia.org/wiki/File:Tux.svg
Oh wow, this book cover art piece uses the same concept but is even more impressive; the Sun logo has 3 "sun" words in it, not 4 (I know, I also thought there were 4 initially), while this art piece has the word "Al-Khwarizmi" 4 times, one for each edge of the square.
Edit: I can't count, both have 4 words, one word for each edge.
The logo SUN Microsystems had is a 4-way ambigram with rotational symmetry. Designed by Professor Vaughan Pratt of Stanford, it features 4 interleaved copies of the word "sun" forming a rotationally symmetric ambigram, with the letters U and N of each word forming the S of the next word.
You can read the word "sun" if you rotate your head by 45°, 135°, 225°, or 315°.
It reminds me of Columbia Sportswear's logo [0] but it's cleverer. There's a third logo that reminds me of both of those but I've forgotten whose it was...
> For the threat model of most users, where hardware-based targeted attacks aren't a big concern, this is a bad tradeoff.
> hardware-based targeted attacks
You mean physical-access attacks, correct?
Is it really just these kinds of attacks that a T2 chip protects against?
AFAIK if malware has super user privilege, it can access the RAM of other processes, and therefore it can access the encryption keys stored in RAM by other processes.
If those processes had instead used an encryption API that performs the encryption on the chip, and therefore never stored encryption keys in RAM, they'd be protected against this kind of attack, which is not a hardware-based attack.
Considering those keys are loaded into RAM while decrypting, I don't see how it matters, because the malware would have access to the (now) decrypted data regardless.
It's fun to think about a technical solution that, if implemented by the state and by porn websites, would be privacy preserving. I have a hunch some form of cryptography could be part of that solution: something that would let a website check that something like a hash of a state identification number corresponds to a real ID number, but that would allow neither the website nor the state to learn which ID number it belongs to (e.g. via a database lookup of the hash), only that it belongs to some real/actual ID number.
Does anybody know of any form of cryptography that would allow anything like this?
All sorts of sticky bits here. If the main thing the age verification service is used for is watching porn then how much privacy can you really have? The verification side knows you're watching porn and can look at your ISP records or ask the registered providers if you accessed using token / session x if they really need to e.g. unmask your specific fetish or find out about your activity.
There are some difficult tensions between building for privacy vs being auditable.
Another specific part that seems difficult is the need for a biometric bind. There's no clear way to do this without invasive UX that's bad for the use-case.
If you want to make assertions about a natural person then you need to bind them to the credential with a biometric match, to prevent IDs from being copied or shared.
If you perform that on the client it's amenable to all sorts of hacking, "the drm problem" where you are asking a computer or mobile device to act as a little policeman. The device is no longer "yours".
If you perform it on the server you need to be passing images or better video back to a service. You can have the best protocol and procedures in the world but you will never convince customers that is private & anonymous.
It all depends on requirements tho. If the goal is mainly to prevent say, 8 year olds stumbling across porn websites, and not to stop a motivated 8 year old from accessing them by stealing parent credentials or using workarounds they found on a forum, then the problem is fairly tractable and could probably be solved within the credit card ecosystem alone.
There is some discussion of that sort of system here [1]. Search for "zero knowledge proof of age" or "zero knowledge age verification" or similar and you should find more.
Another approach uses digital signatures.
The naive approach that isn't very good from a privacy point of view would work something like this. We have three parties: (1) U, a user that wants to use a site V, (2) V, a site that wants verification that its users are at least 18, and (3) T, a site that U is willing to reveal personal information to that proves their age to T.
Good candidates for T would be sites that already have U's information, such as a site run by their government or their bank.
In this naive approach what would happen is V would give U some sort of login token, U would pass that token to T along with sufficient proof for T to verify U is at least 18, and then T would sign the token and give the signed token back to U.
T would use a signing key that they use only for verifications that age is at least 18. If they offered other verification services, such as verifying that a person is a resident of a specific state, they would use a different key for those.
U would then present the signed token to V; V verifies that it was signed with T's "at least 18" key, and U has passed age verification.
That's not good as far as privacy goes because T sees the contents of the token. They could log it, and someone who obtained those logs and the logs of V could match them up. Also T could recognize from the format of the token that it is a V token so T would know what site you are trying to sign up for.
That can be addressed by replacing the signature with a blind signature. A blind signature is a kind of digital signature where before sending the token to T to sign U can apply a special transformation that essentially randomizes the token. T only sees that transformed token and signs it.
What's special about the transformation is that if the inverse transformation is applied to the signature of the transformed token it produces a signature for the original token. You then end up with the original token and a T signature for that token, which you can give to V just as in the naive case.
What T sees no longer matches anything V issues, and no longer looks like a V token.
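The blind/unblind math described above can be sketched with textbook RSA blind signatures. This is a toy illustration with tiny made-up parameters, not a real implementation; a production system would use a padded, standardized scheme (e.g. RSA blind signatures per RFC 9474) from a vetted library:

```python
# Toy RSA blind-signature sketch. Parameters are illustrative only;
# textbook RSA without padding is NOT secure in practice.
import hashlib
import secrets
from math import gcd

# T's RSA key pair (tiny primes for illustration)
p, q = 10007, 10009
n = p * q
e = 65537
phi = (p - 1) * (q - 1)
d = pow(e, -1, phi)          # T's private "at least 18" signing exponent

def h(token: bytes) -> int:
    """Hash the token into the RSA group."""
    return int.from_bytes(hashlib.sha256(token).digest(), "big") % n

# U: blind the token with a random factor r before sending it to T
token = b"V-login-token-123"   # hypothetical token issued by V
m = h(token)
while True:
    r = secrets.randbelow(n - 2) + 2
    if gcd(r, n) == 1:
        break
blinded = (m * pow(r, e, n)) % n   # T learns nothing about m from this

# T: signs the blinded value: (m * r^e)^d = m^d * r (mod n)
blind_sig = pow(blinded, d, n)

# U: unblind by multiplying with r^-1, leaving m^d, a valid signature on m
sig = (blind_sig * pow(r, -1, n)) % n

# V: verify with T's public key (e, n)
assert pow(sig, e, n) == m
print("signature verifies")
```

The key property is visible in the algebra: T only ever sees `blinded` and `blind_sig`, neither of which matches the token V issued, yet the unblinded `sig` verifies against T's public key just like an ordinary signature.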
If the volume of verifications at T is too low and the volume of people verifying at V is too low someone who obtains both T and V logs might make some deductions from timing.
If age verification requirements become widespread so that it isn't just porn sites but nearly all social media sites and e-commerce sites, the T sites should have enough volume that timing attacks aren't effective. You could further reduce their effectiveness by adding some delays. Wait a few hours after getting your transformed token signed by T before completing the verification at V.
You could also toss in fake requests to T. Send them random tokens every now and then to sign and then throw the tokens and signatures away. Then T, or someone who is spying on T, won't have any idea which of those requests are for real verifications and which are just noise.
There is no such thing, for logistics reasons alone. All it would take is one "adult ID" being shared en masse. If they can analyze usage patterns and see that one adult produces thousands of uses per minute across many IPs, then it isn't privacy preserving.
Besides, they are clearly looking to build a panopticon. Privacy preservation is the opposite of their goal.
[1] https://www.techspot.com/news/101764-google-once-again-accus...
[2] https://opensource.googleblog.com/2024/04/introducing-jpegli...
[3] https://www.reddit.com/r/programming/comments/1ajq7bj/commen...