txrx0000's comments | Hacker News

This is about mass surveillance and control.

https://en.wikipedia.org/wiki/Edward_Snowden#Revelations

The existence of eIDAS itself is already a big problem. They're going to try to gradually push laws to make it so that you'll need a government issued signature to do anything. That's when they'll have total power over you because they can simply refuse to issue.

Modern computing and communications technologies can be leveraged to build infinitely stable authoritarian regimes. It's even possible for democracies to stumble into it on their own as they attempt to regulate these new technologies. In hindsight, the Internet was built wrong. It has a top-down structure which all of human civilization is beginning to mirror.


> They're going to try to gradually push laws to make it so that you'll need a government issued signature to do anything. That's when they'll have total power over you because they can simply refuse to issue.

The more necessary this signature becomes, the harder it gets to deny issuing it to somebody.

I don't see how this changes much compared to today. You can already require an ID for all kinds of things, and the government already has total control over those. So what changes? China has managed to ruin the lives of people born illegally under the one-child policy for decades, all without systems like eIDAS.

You can't protect yourself from authoritarian regimes with tech or good policy, since those will just get ignored. Look at Trump's war with Iran: where did Congress agree to it?

I'm not a fan of these systems either; I also think software should be open and there should be no vendor lock-in. But I don't think this will change much, to be honest.


It will matter a lot in the long run. I will outline one concrete way it will matter, which I think is the most critical, but there are other ways it will do damage besides this:

Right now, physical ID is only required for government services, for the most part. But digital signatures can be extended later to gate all services and purchases, both online and physical, including non-government ones. For example, you couldn't host a website without a government-approved signature for each website.

Under a system like that, you would rarely find out when the gov refuses to issue a signature, or when any kind of injustice happens, really. Websites where people can talk about bad things happening to them will simply be denied a signature to legally operate, so they're given the ultimatum to "voluntarily" censor posts, or be shut down. It becomes impossible to have this very conversation on a public platform with any kind of meaningful reach. And they already have this kind of system in China, since you brought it up. In fact, they have domestic surveillance systems that make the Snowden disclosures look cute.


> They're going to try to gradually push laws to make it so that you'll need a government issued signature to do anything.

And in the EU it's already nearly the case. The dystopian horror that KYC/AML has become for honest citizens is beyond belief. And they're of course hiding behind the excuse that "bad guys are laundering money": but going after actual drug dealers? Of course they're not doing that. We now have articles wondering whether Belgium (where most of the EU institutions are located and where all these totalitarian laws are passed) has become a "narco-state" (where criminals make the rules).

People's lives can be ruined when some employee, somewhere, decides he wants to bump his SAR (Suspicious Activity Report) quota: you can have a real-estate transaction fail (and hence have to pay a 10% penalty to the other party) if a notary, bank employee, or real-estate agency employee gets nostalgic for the Gestapo era and decides to act like a good little nazi (yes, Godwin's law: we're literally talking about totalitarianism).

I recently had a notary's employee bother my brother for the source of funds for an apartment he bought... a quarter of a century ago. A quarter of a century ago, and the employee was talking to my brother as if he were a criminal because he no longer had access to the bank wire transfer from 25+ years ago. It's crazy, because the exact same controls had already been done 25+ years ago when he bought the apartment, and the notary's employee fully knows that. (Regarding that case, my brother is currently looking into the national federation of notaries and is going to file a complaint: he's got emails from that notary's employee that are totally out of line.)

The problem is that way too much power over the lives of others is put into the hands of petty people: petty bank employees, petty notary employees, petty public servants. The same kind of people who were all too happy to out Jews during WWII and who were making sure the trains would leave on time.

I previously kept a folder where every single money transfer of more than 10 K EUR was saved: I now do it for every transfer below 5 K EUR too. And these are to be kept forever, for I know that I, my wife, or my daughter will invariably meet motherfuckers asking for "proof of the source of funds from 30 years ago, when your father bought that collectible car" (worth less than 20 K back then, btw, but worth six digits now).

Just fuck these systems and fuck anyone working on it and fuck all the nazis participating in it.


The threat is that if you replace your cognitive capabilities with AI, but you don't control the entire system your AI runs on (hardware, firmware, drivers, OS, weights, frontend), then that's equivalent to someone else owning a part of your brain.

Cool project. But why tunnel Telegram specifically? This could be yet another VPN protocol.

There are some useful ideas from SoftEtherVPN, BitTorrent, Yggdrasil Network, and Tor you could borrow, if you're looking to improve this. The ideal tunneling solution, which doesn't exist yet, is one that not only evades DPI, but also onion bounces you through nodes in a decentralized ad hoc network, and does automatic node discovery.


Nowadays I prefer dedicated per-application solutions. Everybody (with security in mind) has some sort of base WG/Tailscale setup, and it's annoying to incorporate extra tweaks on top of it, so a per-program solution imo makes sense. Especially in the AI era, where you don't really want to allow agents to tamper with your main network card config, it's safer and cleaner.

Thanks OP for this!


What I meant was that you could combine ideas from those 4 projects to build a new VPN protocol, not that you need to tweak your existing tunneling setup to allow those applications through.

There are two things very very wrong with the California law, which you call "age indication".

1) The parental responsibility is given to the wrong people. You're basically being forced by law to give all apps and websites your child's age on request, and then trusting those online platforms to serve the right content (lol). It should be the other way around. The apps and websites should broadcast the age rating of their content, and the OS fetches that age rating, and decides whether the content is appropriate by comparing the age rating to the user's age. The user's age, or age bracket, or any information about the user at all, should not leave the user's computer.

2) The age API is not "completely private". It's a legally-mandated data point that can be used to track a user across apps and websites. We must reject all legally-mandated tracking data points because it sets the precedent for even more mandatory tracking to be added in the future. We should not be providing an API that makes it easier for web platforms to get their hands on user data!
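The reversed flow described in point 1 can be sketched in a few lines. This is purely illustrative: the function name and rating scheme are hypothetical, not any real OS API. The key property is that the comparison happens locally, so the user's age never leaves the device.

```python
# Hypothetical sketch of the proposed client-side check: the platform
# publishes a minimum-age rating for its content, and the OS compares it
# against the locally stored user age. Nothing about the user goes over
# the wire; the site only ever sees "allowed" or "blocked".
def os_allows(content_rating: int, user_age: int) -> bool:
    """Return True if the locally stored age meets the
    platform-advertised minimum age rating."""
    return user_age >= content_rating

# The OS fetches the site's rating (say, 13) and decides on-device.
assert os_allows(13, 15) is True    # 15-year-old may view 13+ content
assert os_allows(18, 15) is False   # but not 18+ content
```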

For many years, certain tech companies, SIGs, and governments have fought against technologies that could enable real digital parenting, all while claiming to do the opposite and "protecting children". They craft a narrative to convince you that top-down digital surveillance and access-control is for your own good, but it's time we reject that and flip their narrative upside down: https://news.ycombinator.com/item?id=47472805


> For many years, certain tech companies, SIGs, and governments have fought against technologies that could enable real digital parenting, all while claiming to do the opposite and "protecting children". They craft a narrative to convince you that top-down digital surveillance and access-control is for your own good, but it's time we reject that and flip their narrative upside down

The EFF has a good series related to this[1].

[1] https://www.eff.org/deeplinks/2026/03/rep-finke-was-right-ag...


> 1) The parental responsibility is given to the wrong people. You're basically being forced by law to give all apps and websites your child's age on request, and then trusting those online platforms to serve the right content (lol). It should be the other way around. The apps and websites should broadcast the age rating of their content, and the OS fetches that age rating, and decides whether the content is appropriate by comparing the age rating to the user's age. The user's age, or age bracket, or any information about the user at all, should not leave the user's computer.

FWIW, this is not quite an accurate description of AB1043, in at least three respects:

1. Apps don't get your exact age, just an age range.

2. Websites don't get your age at all.

3. AB1043 itself doesn't mandate any content restrictions; it just says that the app now has "actual knowledge" of the user's age. That's not to say that there aren't other laws which require age-specific behaviors, but this particular one is pretty thin on this.

In addition, I certainly understand the position that the age range shouldn't leave the computer, but I'm not sure how well that works technically, assuming you want age-based content restrictions. First, a number of the behaviors that age assurance laws want to restrict are hard to implement client side. For example, the NY SAFE For Kids act forbids algorithmic feeds, and for obvious reasons that's a lot easier to do on the server. Second, even if you do have device-side filtering, it's hard to prevent the site/app from learning what age brackets are in place, because they can experimentally provide content with different age markings and see what's accepted and what's blocked. Cooper, Arnao, and I discuss this in some more detail on pp 39--42 of our report on Age Assurance: https://kgi.georgetown.edu/research-and-commentary/age-assur...

I'm not saying that this makes a material difference in how you should feel about AB 1043, just trying to clarify the technical situation.


Thanks for the clarification.

Regarding what to do with algorithmic feeds, instead of forcing platforms like Facebook to be less evil, we should give parents the ability to simply uninstall Facebook, and prevent it from being installed by the child. We could implement a password lock for app installation/updates at the OS-level that can be enabled in the phone's settings, that works like Linux's sudo. Every time you install/uninstall/update an app, it asks for a password. Then parents would be able to choose which apps can run on their child's device.

Notice their strategy: these companies make it hard or impossible for you to uninstall preloaded apps, they make it hard to develop competing apps and OSes, and they degrade the non-preloaded software UX on purpose, which creates the artificial need to filter the feeds on the existing platforms these companies control. They also monopolize the app store and gatekeep which apps can be listed on it, and which OS APIs non-affiliated apps can use. Instead of accepting that and settling for filtering those existing platforms' feeds, we should have the option to abandon them entirely.

We need the phone hardware companies to open-source their device firmware and drivers, and to let the device owner lock/unlock the bootloader with a password, so that we never have a situation like the current one, where OSes come preinstalled with bloat like TikTok or Facebook, the bootloader is locked so you can't even install a different OS, and your phone becomes a brick when they stop providing updates. If we allowed software competition, then child protection would never have been a problem in the first place, because people would be able to make child-friendly toy apps and toy OSes, and control which apps and OS can run on the hardware they purchased. Parents would have lots of child-friendly choices. This digital parenting problem was manufactured by the same companies now trying to sell us a "solution" like this Cali bill, or in other cases ID verification, which coincidentally makes it easier for them to track people online.


> instead of forcing platforms like Facebook to be less evil, we should give parents the ability to simply uninstall Facebook, and prevent it from being installed by the child.

Isn't that how parental controls already work?

There are problems, though:

1. The kids want to use Facebook. If parent A refuses to let their kid use Facebook, then kids B, C, D, E, F... all use Facebook and kid A becomes a social outcast. This actually happens. (Well, now it's other apps; kids don't use Facebook anymore.) This is similar to the mobile-phones-in-schools problem: if a parent doesn't let their kid bring a phone to school, and all the other parents do, that creates social isolation. When the school district bans the phones, it solves the problem for everyone. (So it's a collective action problem, really.)

2. Web browsers. Unless the parent is going to uninstall and disallow web browser use, the kid can still sign into whatever service they want using the web browser. I don't think parental controls block specific sites, and even if they do, there are ways around that, certainly.

I am very often the person who says that parents should actually parent their kids and not rely on the government to nanny them. But in this case I think there actually is value to the government making laws that make Facebook (etc.) less evil. And as a bonus, maybe they'll be forced to be less evil to adults too!


1. The current norm of social siloing apps was created by these tech companies in the first place. What regulators can do is discourage anti-competitive practices that lock users into specific software and hardware platforms. If there's plenty of competition for every kind of social app, and competition for OSes, and users could freely choose and move between them, then not having a particular app would not result in social isolation. This affects adults as well.

2. The OS has a firewall. But it's currently not user-controllable on your phone. Phone companies have decided you don't need that feature. But actually, they can easily implement a nice UI in the settings for the firewall and lock it behind a password, then parents would be able to use it to block individual websites. We can even make it possible to import/export site lists as a txt file so that you can download/share a curated block list that you or other parents made, to block many things at once. You could also do this for your entire home WiFi network in your WiFi router's settings, if your router's firmware has that feature.
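The import/export idea above is simple enough to sketch. This is a hypothetical format and function (one hostname per line, `#` for comments), not any existing phone firewall API:

```python
# Hypothetical loader for a shared parental blocklist in plain text:
# one hostname per line, blank lines and '#' comments ignored.
# A firewall would consult the resulting set per connection attempt.
def load_blocklist(text: str) -> set[str]:
    return {
        line.strip().lower()
        for line in text.splitlines()
        if line.strip() and not line.lstrip().startswith("#")
    }

rules = load_blocklist("# shared parental list\nexample.com\nBadSite.net\n")
assert "example.com" in rules
assert "badsite.net" in rules   # normalized to lowercase
assert len(rules) == 2          # comment line was ignored
```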

And yeah, I agree that we should make the platforms less evil in general. But I think the way to do that is to give people the ability to easily ditch bad platforms and build new ones. Let the platforms actually compete, then the best will prevail. Right now, they don't prevail because of layers and layers of anti-competitive barriers. It would take great technical effort to regulate all the tricks these tech companies use, that's why I propose dealing with it at the root: make it so that all computer/phone hardware manufacturers must open-source their device drivers and firmware, and let the user lock/unlock the bootloader and install alternative OSes. If we do this, then the entire software ecosystem will fix itself over time along with all the downstream problems.


> Phone companies have decided you don't need that feature. But actually, they can easily implement a nice UI in the settings for the firewall and lock it behind a password, then parents would be able to use it to block individual websites.

iOS: Settings > Screen Time > Content & Privacy Restrictions > Toggle on

Then same area:

- App Installations & Purchases: disallow all

- App Store, Media, Web & Games > Web Content > Limit Adult Websites > Fill in allowlist and/or denylist, or Only Approved Websites and fill in allowlist


Apple is indeed better than most other companies on #2. But that's because it's the worst offender on #1. Its strategy is to appear to be the model company that cares about user rights and privacy, in hopes of capturing everyone in a closed-source walled garden that's already surveilling you at the OS level.

They're a part of the corp-gov surveillance complex [0]. This is the real threat behind the age verification push. The feds already have mass surveillance capabilities in iOS and macOS, and even Windows and most Android distros, but not on most open-source Linux distros, so they're starting to force it legally in the open. They're desperate because Linux is about to outcompete the enshittified Windows on desktops.

[0] https://en.wikipedia.org/wiki/Edward_Snowden#Revelations


> The kids want to use Facebook. If parent A refuses to let their kid use Facebook, then kids B, C, D, E, F... all use Facebook and kid A becomes a social outcast. This actually happens. (Well, now it's other apps; kids don't use Facebook anymore.) This is similar to the mobile-phones-in-schools problem: if a parent doesn't let their kid bring a phone to school, and all the other parents do, that creates social isolation. When the school district bans the phones, it solves the problem for everyone. (So it's a collective action problem, really.)

If so many people give their kids phones and so few don't, why ban them in the first place? Clearly the vast majority of parents are fine with their kids having one.

You're just inventing a problem, then. Or worse, pushing a conservative talking point.


It's possible to mandate effective parental controls and then say "it's illegal to give your child access to facebook" and then just see what happens. You don't have to jump straight to making it technologically guaranteed by construction, maybe it's enough to just give parents the tools and an excuse to say no.

We don't need DNA testing locks on cans of beer that won't let you drink from them unless you're an adult, do we? It's perfectly possible for a parent to buy their child all the beer they want, and there's nothing stopping the children from trying to peer pressure them into it, and in many countries it's not even generally illegal to let your child drink beer! And yet almost all parents are able to almost completely enforce a reasonable level of restricted access, simply because society frowns upon it.


Had this problem with my kid - social media caused serious mental health issues. Toxic content in kids areas.

But taking it away was worse.

Once “not using it” isn’t an option, government intervention becomes reasonable.


If we accept the premise that age restrictions of any kind are good (which, just to be clear, I don't think we should), there are good reasons for tailoring your content based on the user's age.

Imagine you're a streaming service, trying to show a list of movies that a user can watch. If you can only communicate age restrictions to the OS, but can't actually check the users age, you have a choice of showing a list of movies that some users won't actually be able to watch, or a list of movies limited to those appropriate for all ages. Neither are great options.

If you can check the user's age bracket, you can actually tailor the list to what the user can realistically watch.


There are only about 120 versions to target if you pick each individual age, or a handful if you bracket it. You can simply create a lookup table for each age group and let the user's device decide which one to show.

The user can voluntarily give the platform their age by typing it into their account profile in that streaming app. You can already do this right now. No laws required.

The problem at hand is we have a new law that forces everyone to give their age to every app. It's mandatory personal info collection.


1. I don’t see how that’s better in any real way. You can infer the exact same information as querying the range, and it makes dynamic behavior based on age range (e.g. access to age-restricted chat rooms, as an obvious example) completely impossible.

2. Is it meaningfully more identifying than User-Agent? There are dozens of other data points for uniquely identifying a user. If we get a few high-profile lawsuits because advertising companies knowingly showed harmful ads to children, I’d consider it a win. Age is not that interesting a data point.


I wouldn’t focus on whether it’s “identifying” but whether it’s revealing. Young teenagers are a very high-value target for advertisers. They are very impressionable, and they provide a proxy for advertisers for their parents’ money. So this law essentially makes it mandatory to share that information with advertisers. And also by proxy, predators.

It also makes it explicitly illegal to use it for such purposes. While I agree on the point, I think in practice it changes little. I also think it could be a net positive, because now there’s no plausible deniability about the target’s age, opening up a decent amount of liability for exploitative practices targeting children specifically.

> I don’t see how that’s better in any real way.

It's so much better. In the one case, the OS is leaking age information (even if just an age range) to every service it talks to. In the other case, the OS isn't telling anyone anything, and is just responding to the age rating that the app/service advertises.


How would you implement a feed of mixed content? Say you're YouTube and some videos are about puppies and some videos are about guns? How would you hide only the gun videos from the homepage when the user is under 16?

Why does YouTube allow videos about guns but not boobs?

why not?

These are quite modest and decent examples

Music video by Mylène Farmer performing Libertine. (C) 1997 Polydor (France) ^[https://youtu.be/oGFr_NcKyfo?t=325]

TWIN BUSCH® Germany - Making-of Kalender 2017 ^[https://www.youtube.com/watch?v=WP7HYlBsVB4]

TWIN BUSCH® Germany - Making-of Kalender 2018 ^[https://www.youtube.com/watch?v=sdCga9jqD_8]

Making-of TWIN BUSCH® Kalender 2024 ^[https://www.youtube.com/watch?v=A9JNBdYUYiA]

MAKING OF | Twin Busch Kalender 2026 ^[https://www.youtube.com/watch?v=cWPastHi8Vs]

and more: https://youtu.be/YzDHQXKBRek

https://youtu.be/draP5nH_WXk

https://youtu.be/LkpTshwskgg

I'm not even talking about entire sections that feature blatantly pornographic or perverted content, some of which are clearly aimed at a younger audience who might accidentally stumble upon it through keywords you wouldn't expect.


That response reveals exactly the same information.

1. Depends on how it's implemented. It won't identify you to individual platforms if the OS filters on a per-app or per-website basis. And yeah, there would be no dynamic behavior based on age, as that would enable tracking based on age. I don't think any kind of API is the ideal solution though, it's just better than the malicious one being mandated in the Cali bill. Instead of an API, it's simpler and more effective to just have an app installation lock (like sudo on Linux) and a firewall for website blocking with a nice UI in the phone's settings, locked behind a password/pin.

2. Other data points like User-Agent are not required by law, and browsers already spoof the user agent by default. I agree that there are other data points we need to address, but the problem in this specific case is the slippery slope of legally-mandated data points. And I don't think winning high-profile lawsuits is a real "win"; it just exposes a problem we already know about in this case. Keep in mind those people can get away with the Epstein files.


> The apps and websites should broadcast the age rating of their content, and the OS fetches that age rating, and decides whether the content is appropriate by comparing the age rating to the user's age.

How would you make that happen? Many websites would not be subject to your jurisdiction.


Assume they're 18+ then.

But even that's still not a great solution. I outline a better solution that doesn't require any legal enforcement at all, in the link at the bottom of my original comment.


We're actually seeing this play out right now with the server-based age assurance systems which are already widely deployed and mandated under the UK Online Safety Act and laws in about 25 US States. In many cases, the sites just comply, presumably because they are worried that the regulators have a way to reach them even if they aren't hosted in the relevant jurisdiction. In some cases, however, the sites just ignore the regulations or tell the regulators to pound sand, as 4Chan is doing with UK OfCom: https://www.bbc.com/news/articles/c624330lg1ko

So? The same problem exists for having the OS broadcast the user's age range to all apps/services/websites: the service outside your jurisdiction doesn't have to actually restrict content based on age.

At least with the reverse system (services broadcast an age rating), you have some nice properties:

1. You can set it up so that if the service doesn't broadcast an age rating, access is denied.

2. You aren't leaking age information (even if it's just a range) to random websites outside your jurisdiction.


Apps need to know the age of the user in order to follow the law. There will always need to be a way for apps to get the age of the user. If the OS does not give anything the apps will have to implement it themselves.

Counter-surveillance is not a binary switch. We can win by forcing the government to use increasingly expensive backdoors and exploits (>$10k per capita per year, beyond which mass surveillance is impractical even with a $1T budget). Hardware backdoor capabilities are costlier to maintain and use than something at the app level. Encrypting content and leaving metadata exposed is still better than encrypting nothing because they'll have less info to work with which means more effort. The point of all this is not to make it impossible for the gov and corps to surveil a targeted individual (of course they'd be able to if they expend enough resources). The point is to ensure that they only have enough resources to do targeted operations rather than blanket mass surveillance. The former is fine for a democracy, but the latter destroys it.

I always remind myself and everyone else that human DNA is "only" 1.6 GB of data, and yet it encodes all of the complex systems of the human body including the brain, and can replicate itself. Our intuitive feel for how much stuff can be packed into how many bits is probably way off from the true limits of physics.

That's not strictly true: DNA doesn't replicate itself; a cell with DNA replicates itself.

You need to count the information contained in the non-DNA part of the cell too.

Just in case it's not obvious, you can't take human DNA and put it in a cat cell, it won't work, that cell won't replicate.


True.

For now, the DNA replication and the synthesis of RNA and proteins using the information stored in DNA are the best understood parts about how a cell grows and divides, but how other complex cellular structures, e.g. membranes or non-ribosomal peptides, are assembled and replicated is much less understood.

We need more years of research, perhaps up to a decade or two, until we will be able to know the entire amount of information describing a simple bacterial cell, and perhaps more than that for a much more complex eukaryotic cell.


Human DNA has 3.2 billion base pairs, and with 2x the information density of a binary system (4 letters as opposed to 2), that's roughly 800 MB of informational data.

What's even more crazy is that roughly 98% of that DNA is non-coding... just junk.

So we are talking about encoding the entirety of the logic to construct a human body in just around 16 MB of data!!!

That's some crazy levels of recursive compression.. maybe it's embedding "varying" parsing logic, mixed with data, along the chain.
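The arithmetic above checks out as a back-of-the-envelope calculation:

```python
# 3.2e9 base pairs, 2 bits per base (4 symbols: A/C/G/T), ~2% coding.
base_pairs = 3.2e9
bits = base_pairs * 2          # 6.4e9 bits
megabytes = bits / 8 / 1e6     # -> 800 MB total
coding_mb = megabytes * 0.02   # ~2% protein-coding -> ~16 MB
print(round(megabytes), round(coding_mb))  # 800 16
```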


>Second, what's even more crazy is that roughly 98% of that DNA is actually non-coding.. just junk.

I think it's a myth that non-coding DNA is junk. Say:

https://www.nature.com/articles/444130a

>'Non-coding' DNA may organize brain cell connections.


As another poster has said, much of the "junk" is not junk.

The parts of the DNA with known functions encode either proteins or RNA molecules, being templates for their synthesis.

The parts with unknown functions include some amount of true junk caused by various historical accidents that have been replicated continuously until now, but they also include a lot of DNA that seems to have a role in controlling how the protein or RNA genes are expressed (i.e. turning off or on the synthesis of specific proteins or RNAs), by mechanisms not well understood yet.


It encodes the data on top of locally optimal trajectories in the physical world that were learned in millions of years of evolution. Treat this as context, not weights.

And as anybody who’s ever met a baby can tell you, they score very poorly on most LLM benchmarks.

The solution is to develop relative skill rating systems like Elo.

No, the solution is to exclude male advantage from the female competition via evidence-based analysis, as the IOC's new policy does.

Grouping based on skill would achieve what you describe and then some. It would eliminate every kind of advantage, not just sex-based advantage.

Sport does that already. The Olympics is the very top skill tier.

So you're just suggesting making everything mixed-sex, and having very few women at the Olympics?


> So you're just suggesting making everything mixed-sex, and having very few women at the Olympics?

Yeah. It would work like video game rankings. Top-ranked players are top-ranked because of skill, and if they happen to be mostly men for most games, so be it.

But I get your point. The crux of the problem is most people don't want to see skill-based matchmaking. They want to see the best man, the best woman, or the best disabled person, etc. The categories are already defined in people's minds as cultural constants. The trans people don't like this because they feel excluded by both male and female categories, so they argue in bad faith that there's no physical difference between females and trans-females or males and trans-males. Our long-term options as a society are to either 1) change culture so that people get used to skill-based matchmaking like in video games, or 2) ignore trans people and wait for this issue to disappear when future tech allows a man to transfer his consciousness into a female body and vice versa.

Since 2) is quite far out technologically, I propose 1).


If we can admit of best male, best female, best disabled, best under the age of 18, etc, we can certainly admit of best trans-male; best trans-female.

That's a possible compromise, but a high maintenance one. It would set a precedent for other groups, and then we'd have to add a new category every time people complain.

I think we should just make the Olympics universal and let anyone compete for the title of absolute best in the world, no qualifiers. Detach the existing categories too, like men-only or women-only. Make all category-gated games a separate deal, like Paralympics. Each group can organize their own variant if they want.


However, the point is not to group by advantage. It is to create a separate category for women to compete in where women can win. Any grouping that fails at this purpose misses the mark.

It’s interesting how the evidence-based analysis switched as soon as the Republicans came into power. Maybe this is less about evidence and more about opinion, actually?

Not sure how this helps. Olympic events already have relative rating systems that rank all the participants: pretty complicated, sport-dependent systems that determine qualification for the games and competition among all the competitors at the games. The problem is how to have separate competitions for different groups of participants when there isn't a universally shared agreement on who should be in which group.

If you have a relative skill rating system, then there's no need to split competitors into groups. But if you insist, then you can split them based on skill ratings (define a rating range for beginner, intermediate, advanced, etc). And for games with one-on-one matchups, sampling from a gaussian centered on each player's skill rating is good enough.
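As a rough illustration of the one-on-one matchup idea (the pool, ratings, and sigma here are made up for the example, not any real system):

```python
import random

def pick_opponent(player_rating, pool, sigma=100):
    """Sample a target rating from a gaussian centered on the player's
    rating, then match the pool member whose rating is closest to it.
    pool: list of (name, rating) tuples, excluding the player."""
    target = random.gauss(player_rating, sigma)
    return min(pool, key=lambda p: abs(p[1] - target))

# Hypothetical pool of rated players.
pool = [("a", 1400), ("b", 1520), ("c", 1800), ("d", 2100)]
opponent = pick_opponent(1500, pool)
```

A wider sigma trades match fairness for variety; sigma=0 degenerates to always picking the nearest-rated opponent.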

It will end up being all men at all the skill rating levels.

It doesn't. In tennis, a 14 UTR whatever wins against a 13 UTR whatever. UTR is your effectiveness rating against every other player. Same in chess with Elo.

The issue is women would disappear from professional sports. Sinner's 16.27 rating means that he double-bagels Sabalenka's 13.29 essentially 100% of the time. The 500th ATP player has a UTR of 13.81; half a point is quite a bit stronger, so he's still very much stronger than Sabalenka. You probably have to start looking well into the thousands for someone who is consistently beaten by her.

Only the top 200 players make money, the top 100 good money, and the top 50 ridiculous money.


So women would not be in something like top 2000 of tennis players or worse. Which would basically remove any incentive for women to participate in pro tennis at all.

I don't get how you can compare Sinner's UTR against Sabalenka's when they're based on two disparate groups' scores. Doesn't there need to be at least a modicum of cross-pollination to make a meaningful comparison?

There is some cross pollination. Women can play vs men, just usually don't. I'm fairly certain singles UTR is universal across players, it only distinguishes between doubles and singles UTR.

UTR can also include unranked games if one of the players submits a score and the other approves it.


No it would not. Look at chess ratings.

Basically proving my point. Very few women in top chess. Currently there are 0 women in the top 100 chess players, and only 3 women have ever been in the top 100. And chess is not even a game where men have a natural advantage, unlike almost all physical sports.

I don't deny that there are very few women in top chess, but that wasn't your point. You said it would end up being all men at all the skill rating levels, which is not true. Take chess as an example: there are a lot more women at around 1500 elo than at 2500 elo. So if you host an intermediate-level tournament just for players around 1500 elo, plenty of women will participate.

The ratio of men to women who are at 1500 Elo in chess is like worse than 90:1, so no, you host an intermediate level tournament and it will be almost all men. Well, mostly boys but that’s current chess for you.

But it’s not just that. If there are no top women in any kind of leagues in chess, that will only further discourage women from participating competitively in chess in the first place.

Note that most competitive women chess players play in women’s only tournaments even though they can easily join open men’s tournaments as well. For various reasons, one being that these women’s only tournaments are where they have the best chance of winning or being in the top k for prizes.


The male-to-female ratio at 1500 elo is not 90:1, but more like 9:1. 10% is a visible minority.

But I see where our disagreement is. You think there ought to be more women in chess. I think different people can do different things, so women don't need to match men in every statistic and vice versa. If we open it up to universal participation and it turns out to be a male-dominated game, then let it be. I don't think there's anything wrong with that.


> I think different people can do different things, so women don't need to match men in every statistic and vice versa. If we open it up to universal participation and it turns out to be a male-dominated game, then let it be. I don't think there's anything wrong with that.

You don't have a say though; others want to see women play chess against each other and will happily pay for and organize that event. Or do you want to make female-only events illegal? As long as they are legal, they will continue to be held.


…The whole point of women’s only competition is to see women compete in top level games and tournaments in some league.

We have to separate child protection from Internet control so that the "protect the kids" narrative loses its potency. So here's a counter-narrative: we can implement digital child protection without Internet-wide access control, and it requires just 3 simple features that can be implemented in less than a week. There's no need to introduce new laws at all. This could just be done tomorrow if there is genuine will to protect the kids.

1) If you're a platform like Discord or Gmail, give users the option to create an extra password lock for modifying their profile information (which includes age). This could also be implemented at the app level rather than at the account level. Parents can take their child's phone, set the age, and set these passwords for each of their child's apps/accounts.

2) If you're an OS developer, add a password-protected toggle in the OS settings that gates app installation/updates, like sudo on Linux. Parents can take their child's phone and set this password, so they can control what software runs on their child's phone. If we have this, then 1) isn't even strictly needed because parents can simply choose to only install apps that are suitable for their child.

3) If you're a device manufacturer, you should open-source your drivers and firmware and give device owners the ability to lock/unlock the bootloader at will with a custom password. Parents should be able to develop and install an open-source child-friendly OS. Companies like Apple and Samsung have worked against this for years by introducing all kinds of artificial roadblocks to developing an alternative OS for their hardware.


(This is a reply to the dead comment, which was not dead when I started writing this)

I don't know how long their specific proposal would take, but on a Unix or Unix-like system the California bill could be done in a week.

0. Make a directory somewhere, say /etc/age_check, and in that directory create four files: 0-13, 13-16, 16-18, 18+, owned by some system account with permissions 000.

1. This would be the hardest part. Modify whatever is used to interactively create new user accounts to ask for the user's age if the account is a child's account, and then add an ACL entry for the appropriate /etc/age_check file that allows the child's account to read that file.

The California bill says you have to ask for an age or birthdate, but the API you provide for apps to ask for age information just requires giving an age bracket, so I'm taking that to mean I am not required to actually store the age. I only have to make the API work.

2. The API for checking age is to try to open the files in /etc/age_check. Whichever open succeeds gives you the user's age bracket.
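A minimal client-side sketch of that check, assuming the /etc/age_check layout proposed above (this is the commenter's proposal, not an existing API):

```python
import os

# Bracket file names, checked youngest-first; only the file whose ACL
# grants the current user read access will open successfully.
BRACKETS = ["0-13", "13-16", "16-18", "18+"]

def age_bracket(base="/etc/age_check"):
    """Return the current user's age bracket, or None if no bracket
    file is readable (e.g. the scheme isn't set up on this system)."""
    for bracket in BRACKETS:
        try:
            with open(os.path.join(base, bracket)):
                return bracket
        except (PermissionError, FileNotFoundError):
            continue
    return None
```

Since the check is just an open(2) attempt, the kernel's ordinary permission machinery does all the enforcement; no daemon or IPC is needed.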


So basically parents set the child's age and apps rely on that if they need to know if the user is old enough?

That's pretty similar to the California bill. Parents set an age when creating a child's account. The OS provides an API to get the user's age bracket from that, which apps that need to know the age bracket of the user can call.


The California bill gets it backwards. Rather than Internet services taking the user's age and deciding what content to serve, the Internet service or app should broadcast the age rating of its content to the OS (if convenience is desired), like how movie ratings work. The responsibility to decide what content is suitable for a child should rest in the hands of that child's parent, not the state or the corporation.

edit: on second thought, realistically, the API solution is too brittle regardless of which way it goes. The API requires every service to implement it, and that's not happening, whereas an app installation lock only requires one child-friendly OS to implement it; parents can then choose that OS.


That's not my understanding. This is what the bill says: Provide a developer who has requested a signal with respect to a particular user with a digital signal via a reasonably consistent real-time application programming interface that identifies [the age group].

So the app requests a signal (like, calling an API), and the OS returns the signal (returning the age group).

Regarding API vs installation lock, TBH I don't think the law concerns that level of detail. An OS or app-store installation lock that checks app ratings can be considered a valid implementation.


The California law is horrible because it forces everyone to let tech companies and governments decide what's suitable for children, rather than let parents decide. It's telling parents to give every app their child's age and trust that the apps will do the right thing. It also legitimizes personal data collection (in this case, the user's age) for every app and service on the Internet that wants to know your age.

The password-based app installation lock I proposed in my original comment doesn't require any kind of age checking at all, so it naturally doesn't fit the California law. The device owner (in this case, the parent who buys the device for their child) gets to decide what apps can be installed on their child's phone on an app-by-app basis using a password set by the parent. The app store doesn't need to know, and the apps don't need to know.


You have a point. Though I suspect that average parents are either too lazy or not tech literate enough.

I do want to note that this California law alone doesn't say anything about content restriction. I won't be surprised if there was/will be another bill to assign the responsibility (which may be more controversial). But the current law is only about the age gating mechanism. And on the positive side it removes the need for actual age verification (like using ID) which other regions still insist on.


The California law is the closest thing to what we do in the physical world but better. We already decided as a society to limit the purchase of pornography, gambling, alcohol, tobacco, prostitution, drugs, via age gates and require the merchant to be liable for that. We already find this reasonable as a society. The California law recognizes the tracking problems of requiring a verifiable id online and instead recognizes that parental self-assertion at the point of account creation is enough.

Since tracking children is generally illegal, you can also voluntarily lie and label yourself as a child when you don't want to access such content.


We have decided as a society to age-gate the purchase of a very small selection of goods and services, but this did not require a law that says all merchants have the right to know your age. And in this case, it's not even just all merchants, but anyone that serves you any kind of information. The real world equivalent of this California bill would be more like: anyone you've ever talked to has the right to know your age.

A more reasonable approach would be for parents to keep tabs on (or for stricter parents, control) who their child is associating with and where they're going, and advise their child on who/what to stay away from if they're out alone. And of course that takes parenting effort. The digital equivalent of this are things like password-gating app installation in the OS and website-blocking in the WiFi router. But I will say, I don't think these kinds of analogies are good because the Internet is too different from the physical world.

And let's not underestimate the tracking power of a legally mandated data point: the age contains about 6 bits of information that can be used to identify your user account on the Internet across apps and websites, even if your inputted age is fake.
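A quick back-of-the-envelope check of that figure, assuming roughly 80 plausible distinct age values reported with equal likelihood:

```python
import math

# Entropy of a uniformly distributed field with N distinct values is
# log2(N) bits; ~80 plausible ages gives a bit over 6 bits, which is
# where the "about 6 bits" estimate comes from.
distinct_ages = 80
bits = math.log2(distinct_ages)
```

Combined with a handful of other low-entropy data points (timezone, language, coarse location), that is more than enough to narrow a user down across services.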


Would the content rating be per HTML element and the browser would delete the elements with bad ratings from the DOM, or how would it work?


I'd imagine it works like movie ratings. You don't filter movies from scene to scene. There's just one rating for an entire site or app.

But yeah I get the point, API based solutions are complicated and brittle because they require all services to implement it properly. In contrast a user-set app installation password in the OS settings is more effective and easier to implement.


If a chronological social media feed contains both R and G rated elements how would you implement that?


> the API requires every service to implement it and that's not happening

No it doesn't. A browser/appinstaller with parental/age controls enabled would fail as unavailable if there was no age rating on the website/app. This is exactly the solution we should be aiming for, as it keeps the incentives lined up instead of turning them upside down.

One big problem with the laws currently being pushed is that they leave the decision of what sites are "appropriate" for kids completely in the hands of corporate attorneys. For example, Facebook will happily make an "under 18" site that uses LLMs to censor posts but still contains all of the same dopamine-drip mechanics. Whereas keeping the decision of what's appropriate under the control of the end device means parents could straightforwardly go beyond what corporate attorneys decide, and block Facebook regardless of its age rating.

I'm responding to another comment of yours here since HN loves the rate limit. In that comment you were talking about locked down bootloaders. But bootloaders are already thoroughly locked down, and most devices are still essentially usable. The current looming threat is remote attestation, which makes it so that websites (and other services) are able to prevent you from running software of your choice when interacting with them! The backwards legislation being currently pushed is all but guaranteed to end up in more demands for remote attestation, whereas the correct direction of information flow (sites/apps publish headers saying they're suitable for <18 etc) would not necessitate remote attestation.


I shouldn't have defended the API or age rating solution. It's just a trap in hindsight. That kind of solution must be rejected altogether even if it's the OS checking the app/website's age rating header, because we'd be giving the OS oligopoly (Apple, Google, Microsoft) way too much leverage, and in the long term they're going to make it so that you can only run their approved apps because unapproved apps didn't implement their age rating API. And there is no competing OS to fix that situation if those same companies keep the bootloader on their hardware locked. That still puts authority over children in the hands of governments and corporations rather than parents.

I stand by my original comment. No new laws are needed. All of the features outlined in 1), 2), and 3) should be user-controlled, and there's no need to send info over the air.


You can still get hardware that you can install your own OS on. But you have to be deliberate about picking it out before a purchase, rather than hoping to unlock a random carrier phone down the line. For example my phone is a Pixel running Graphene. It has a locked down bootloader that could only be unlocked with the online consent of Google. While this most certainly chafes me (and if I could snap my fingers and make such schemes blink out of existence I would), I do have to admit that it really isn't that debilitating.

The unlocking process zaps the userdata partition. This security model would totally suffice for locking down a child's phone. If the child zaps their phone and erases everything on it, then the parent can handle that out of band.

For the general problem, I would say that there has been a longstanding market failure here, in that parental control software isn't widespread or straightforwardly usable across different websites. Your 3 points don't really address that. (2) has been doable on standard desktops forever, and (3) just pushes mobile devices back towards the capability of desktops (which on its own is laudable!). But standard desktops have had these capabilities for decades and still haven't evolved the kind of straightforward parental controls that most parents are demanding.


I don't think it's a market failure. The fact that password-gating software installation at the OS level can be done on most desktops but not most phones is a sign of the opposite of a market failure: mobile OSes have had their capabilities increasingly stripped down in recent years precisely because of anti-competitive practices. The reason standard desktops have not evolved even better parental-control features is not that they're doing worse than phones under a free market; they're already doing better, despite the fact that most kids use desktops a lot less than they use phones. It's just that the absolute level of demand for parental-control features was low until recent years, and even this recent wave of demand is somewhat manufactured.


You're focused on "password gating" for installing apps, but the largest subject here is websites. (also a nit: very few Linux systems are set up with noexec on home directories, I know "portable apps" exist for Windows, and I assume MacOS has similar dynamics)

> the absolute level of demand for parental control features has been low until recent years, and even this recent wave of demand is somewhat manufactured.

I don't agree with this. I think the demand has always been there and has been sort of discarded. I've personally done some of that discarding, in my younger days when the worry was of violent content but still on desktop/laptop computers where use was generally socially legible to parents. But these days we're dealing with pocket-sized devices that are no longer socially legible, plus malevolent commercial interests drawing kids in to get them hooked on dopamine drip loops.

But you seem to think the problem is solved, so tell me: what exactly are parents supposed to do here? I'm a new parent, we're still at the stage of watch videos with mama/dada, and play with the calculator app. The next step is probably curated sources of content/apps with general web browsing locked down (including self-curated things like perhaps a local copy of wikipedia). But then after that? What's the next level of expanding their scope look like, without them being subject to attack by corpos showing them ads/social media/weird slop shit/etc? Especially if they are going to have a SIM card such that I can't just filter most of this at the network level.

I haven't researched it all, and I'm sure there are some solutions. But I'm also more capable of seeking out bespoke solutions and actively choosing to use one, as opposed to the average person who wants things to "just work" and isn't going to delve too hard. Can't we agree that the pressure for this shitty legislation is coming from somewhere beyond merely Faceboot's money?


Ok, I'll give you my two cents, but you'll have to fill in the details on your own.

After curated local content, you could get an old desktop (and later a laptop too) and install a Linux distro of your choice on it, something reasonably modern. Put Minecraft on it, and show your child how to start a singleplayer world. Show them how to use the web browser, and add a curated list of sites in the bookmarks. Leave them to figure out the rest on their own. Withhold the sudo and BIOS passwords at the beginning, but give them the passwords when they're ready. I think for the sudo password, it's when they try to host their own Minecraft server for the first time, and BIOS password when they explicitly ask you for it (though these may never happen, depends on the kid, so set your own milestones). Configure the OS and programs as you see fit early on, but don't make changes secretly after they've had the computer for a few months. Block unwanted sites and limit access times with your WiFi router or OS firewall as you see fit. Eventually, they'll figure out how to get around or tear down the barriers you put up, and that's fine, just pretend you don't know or give them a vague hint if they do something too egregious like stealing the neighbor's WiFi. Gradually loosen your control as they get older. And if something breaks, let them watch how you fix it.

Don't give them a phone. Or even if you do, strip it down so that it can only be used for calls, but you can add apps over time. Don't buy them mobile data. Let them buy their own phone and mobile data when they're old enough to earn the money, and that's when your digital supervision ends.

Regarding a solution that "just works": when your child goes out to play, you're the guide that protects them and shows them around town. You know the roads, buildings, people, and rules better than they do. There aren't any solutions that "just work" which exempt you from your job as a guide. Well, there are, but that just means someone else is watching your child for you. I think digital parenting is similar in this regard. Parents need to understand the digital landscape well enough to guide and advise their kids. Solutions which don't strip away parental rights and responsibilities will require some effort to use.


1) Could be simpler for a start if 2) ensured that no website that sends a special "over 18" server header is displayed. The header could be more detailed, and the parent could select which things are allowed, but for a start keep it simple.
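That filtering rule could be sketched as follows; note that the "Age-Rating" header name and its values are hypothetical here, since no such standard header exists:

```python
# Browser-side check for a hypothetical "Age-Rating" response header.
# A parental-controls browser would refuse to render the page if the
# header is missing (fail closed) or exceeds the configured limit.
RATINGS = {"all": 0, "13+": 13, "16+": 16, "18+": 18}

def allowed(headers, max_rating=13):
    """headers: dict of response headers. Returns True if the page's
    declared rating is at or below the parent-configured limit."""
    rating = headers.get("Age-Rating")
    if rating not in RATINGS:
        return False  # undeclared or unknown rating: block
    return RATINGS[rating] <= max_rating
```

Failing closed on a missing header is what keeps the incentives right: a site that wants under-18 visitors has to declare itself, rather than the browser having to prove anything about the user.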


Yes, that's even better. Make apps and websites provide an API that broadcasts the age rating of its content, then let the OS attest the apps and websites, not the other way around.

edit: on second thought, there is a trap here. If hardware manufacturers lock down the bootloader, then we're basically still handing over parental authority to governments and companies in the long run. So I think for a start, we just implement an app-install password lock like sudo. It will be easier to implement than the API. The convenience API can come later, when hardware manufacturers are banned from locking bootloaders.


How would you make a website that can be over 18 or not, such as a social media feed? Would it become over 18 as soon as your following list contains a porn star (who may not have been one at the time you followed them), and then if you're under 18 you can't unfollow them because you can't load the page?


More and more countries want to make social media >18, or at least >16 or something. :/


Code-writing speed is not a bottleneck when the stakes are high. Sometimes, it's better to slow down, plan ahead, and consider the consequences because the cost of a failed iteration is too great.

Take the way AI is being developed as an example. People rush to build giant agents in giant datacenters that are aligned to giant corporations and governments. They're building the agentic-organism equivalent of Machiavellian organizations, even though they'd be better off building digital humans that are aligned to individual humans and run on people's gaming PCs at home. They will find out that the former is the wrong architecture, but the cost of that failed iteration is the future of human civilization, and nobody gets a second try.

Of course, this is an extreme example on one end of the scale. On the other end, it wouldn't matter at all if you're building a small game for yourself as a weekend project with no users to please or societal impacts to consider.


This seems fine as a short-term solution, but human-only is no good as a long-term rule. The AIs will soon surpass human capability. Even in the present, I think some AI comments are already decent quality. It's just most of them aren't high quality yet.

And I'm worried banning AIs altogether will eventually lead to some form of prove-you-are-human verification to use the site, which will reduce anonymity. Even something seemingly benign like verifying email would mean many unverified accounts like my own will disappear.

And there is a legitimate use for LLM rewrite to counter identification by stylometry, so rewrite shouldn't be banned. I think we'll have to allow the AI stuff at some point, and make a system that incentivizes quality posts regardless of where they come from or how they're written.


I don’t care to read a comment that nobody put their time in.


> The AIs will soon surpass human capability.

The rule can be revised later.

> I'm worried banning AIs altogether will eventually lead to some form of prove-you-are-human verification to use the site, which will reduce anonymity.

Of all the sites on the Web to worry about this happening, HN is low risk. Oppose that change if it comes, not this one.

> And there is a legitimate use for LLM rewrite to counter identification by stylometry

Source for comment-level stylometry ever actually being someone's downfall, despite their availing themselves of every other much more standard defense measure? Regardless, if your experimental means of deanonymizing yourself comes at the expense of the site's quality, it is probably not welcome.


"prove you are human verification" as in something like Sam Altman-backed World and The Orb [1]? Or maybe even the bead [2] (backed by me)

1: https://world.org/orb

2: https://thebead.pixlw.com/

