Additionally, just because it’s possible that this could happen doesn’t really give us an idea of how likely it is. Is it one of those theoretically-possible-but-never-actually-happens events? There’s a huge difference between it impacting half of the stories that fall off that quickly and it impacting 1 in 10,000 of them.
Communities would get a good sense for the frequency if forums would simply disclose content moderation to the submitting users. Offending users would learn what's not allowed and share that with the community.
But today's forums frequently do not disclose moderation to submitting users, and that is why we are now seeing major court cases over Section 230, government-led censorship, etc.
I don't know anything about other forums, but for the reasons why on HN we don't publish a full moderation log, see https://news.ycombinator.com/item?id=39234189 as well as the past explanations linked from there.
You can, however, always get a question answered. That's basically our implicit contract with the community.
Full moderation logs are different than showing submitters how their posts have been moderated.
On HN, my understanding is that you (moderators) can penalize stories without the submitter's knowledge. But if HN instead disclosed that penalty to the story's submitter, that would help this community communicate better.
As for how it works elsewhere, if a YouTube channel removes your comment, you won't know [1]. Same thing on Reddit, Facebook, and X. So while HN is relatively small, the practice of withholding content moderation decisions from submitters/commenters is widespread.
I'm sorry, but I think that would have the effect of making what is already a difficult job impossible. Even if most submitters saw that information and went "oh! well I guess that's that then," the number who would instantly fire off emails of protest would overwhelm our capacity to answer them.
Every submitter thinks their story deserves to make HN's front page, if not #1. Actually, that's not entirely true—the cleverest and most tasteful submitters are often the most humble. We have to go out of our way to try to find what they post because they're the last people who would ever send an email demanding attention.
But I can tell you from experience (81,556 emails and counting) that there are far more people who think their blog post ought to be #1 on HN than I could ever answer, and I can tell you what happens if one tries: many come back with a list of objections that is 3x longer than the entire conversation so far. The problem grows the more you feed it.
I want people to be able to get answers to their questions. No one would love it more than me if we could find some automated way of reducing that load while still answering people's questions. But so far every suggestion of how to do this sets off so many alarms in my body that I wonder if I'll sleep that night.
I'm afraid that might come across as dismissive, and I apologize if it does. It's just that the status quo already involves so much pressure that if I try to explain, I come across as a deranged beach ball that's been pinned deep underwater for 10 years.
Thank you for the question. I can think of two reasons:
(1) You wouldn't want someone to secretly remove or demote your own commentary. But secretive content moderation is extremely common on today's major platforms. In order to be heard there, you would need to fight back against the practice, and you cannot effectively do that while keeping secrets yourself.
(2) Undisclosed content moderation does not express any kind of message, and therefore the platforms' use of it may not even be protected by the first amendment.
#2 is currently under discussion in a few cases before the Supreme Court.
How is it a free speech issue when someone kicks you off their property? It has nothing to do with speech so why would the first amendment be involved?
And to add to that, the USA's First Amendment applies only to actions by the government. But this does not mean that in other situations redress is never available. It just may require more nuance or collective action, or conversely, even the willingness to let something go.
(I am not commenting in this message on whether an HN issue may exist or should be let go. On those matters, I am still reading.)
In the case of shadow banning, you haven't kicked them off your property. You're asking them to stay while you earn ad money from their attention.
See the linked tweet for a more lawyerly argument in defense of shadow banning. The question before the court may hinge upon whether or not shadow banning expresses a message.
Wtf are you talking about? He’s literally telling us and has mentioned in the community many times that flagging quickly crushes a story.
I’ve seen it happen when I’ve flagged stories, so either there is a vast conspiracy of moderators who receive pages when I flag things so they can downrank… or maybe dang isn't lying about something that should be super obvious as a community self-policing mechanism.
I appreciate the accuracy in your comment but do please edit out swipes like "Wtf are you talking about"—those spread bad feeling, and when we're talking about the community itself it's even more important not to do that.
> Wtf are you talking about? He’s literally telling us and has mentioned in the community many times that flagging quickly crushes a story.
It's discussed in the link, and elsewhere [1]. Some mod actions on HN are transparent, some are not. You should not assume that, just because you see marks of some form of moderation, you can see them all.
Undisclosed content moderation is like directly modifying your production database. It's faster, but always more troublesome. Nobody else knows what changed or why, etc.
If you want a site with a public mod log, there's Lobsters. If you want a site with a mod log that's cryptographically auditable by users, I'm sure blockchainia has something on offer. You're not going to get either of those things here, for reasons the community has dug into in the past and you can surface with the search bar.
Sure, that could happen. And if it does, it will happen by way of people leaving sites like this one for sites moderated differently. I think we're all OK with letting the market decide.
> Often, it may be advantageous to shadowban a troll or spammer rather than ban them - an actual ban simply tells them that it's time to create a new account. With a shadowban, they don't know they've been banned. [1]
They've since renamed that to a "bot ban", but the effect is the same. Anyway, all comment removals on Reddit work like shadow removals by default. You can try it yourself by commenting in r/CantSayAnything [2]. Your comment will be removed, you won't receive any notification, and it will still appear to you as if it's not removed.
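One way to see this for yourself, since no notification ever arrives, is to compare your logged-in view with what a logged-out reader sees. A rough Python sketch, assuming you already know the thread's link ID and your comment's ID (the IDs and user agent below are placeholders, not real values):

    # Sketch: check whether a Reddit comment is still visible to the public.
    # link_id and comment_id are hypothetical placeholders; Reddit's public
    # JSON endpoint rejects requests without a User-Agent header.
    import requests

    USER_AGENT = "shadow-removal-check/0.1 (example)"

    def find_comment(children, comment_id):
        # Walk the comment tree returned by the JSON endpoint.
        for child in children:
            data = child.get("data", {})
            if data.get("id") == comment_id:
                return data
            replies = data.get("replies")
            if isinstance(replies, dict):
                found = find_comment(replies["data"]["children"], comment_id)
                if found:
                    return found
        return None

    def comment_visible_to_public(link_id, comment_id):
        url = f"https://www.reddit.com/comments/{link_id}.json?limit=500"
        resp = requests.get(url, headers={"User-Agent": USER_AGENT}, timeout=10)
        resp.raise_for_status()
        _post, comments = resp.json()
        data = find_comment(comments["data"]["children"], comment_id)
        # Removed comments either disappear entirely or show "[removed]" to
        # logged-out viewers, even though the author still sees the full text.
        return data is not None and data.get("body") not in ("[removed]", "[deleted]")

    # print(comment_visible_to_public("abc123", "def456"))  # hypothetical IDs

If the anonymous fetch shows the comment missing or "[removed]" while your logged-in view still shows the full text, it has been silently removed.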
> At least according to the words attributed to the former head of safety there, it IS complicated.
Yoel is talking about telling people WHY their content was actioned. Such removal reasons are not categorized according to Twitter's rules; they are stored in free-form text notes. So sure, assigning a rule to each of those would take time.
But it should take zero time to place a "demoted or removed" status on content so that the account owner can see its true status.
Bare notice [1] should take zero time to implement. Just show users the true status [2]. Reasons for removal can come later. Right now everyone is still in the dark, and it's a year after acquisition.
It is way too easy to shadowban on social media. Some mods are not only brazen about shadowbanning, they also suggest digital IDs would be a substitute [3].
However, even if we move to digital IDs, I doubt shadowbanning will go anywhere. Just as we still get ads on the cable TV we pay for, shadowbanning will always be a thing. Don't fall for the line, "we'll stop when X happens."
> you guys need to first look at the site as it actually is, and not just look at your own pre-existing perceptions.
How can one see the site "as it actually is" when the decisions are kept secret from submitters?
> People think that when their team gets moderated, the mods are OMG obviously on the other side. The Other Side feels exactly the same way.
This will always be a thing. But it's also true that society is more divided now than it was 20 years ago. We find ourselves unable to communicate across ideological divides and we resort to shouting or in some cases violence. Some effort must be made to improve communication, and transparency for content authors is a minimal step towards that.
Increasing the cost of attacks is effective against good-faith people, not spammers.
Even Cory Doctorow made this case in "Como is Infosec" [1].
The only problem with Cory's argument is that he points people to the SC Principles [2]. The SCP contain exceptions for not notifying about "spam, phishing or malware." But anything can be considered spam, and transparency-with-exceptions has always been the platforms' position. They've always argued they can secretly remove content when it amounts to "spam." Nobody has challenged them on that point. The reality is that platforms that use secretive moderation play into spammers' hands.
No research has been done about whether shadow moderation is good or bad for discourse. It was simply adopted by the entire internet because it's perceived as "easier." Indeed, for platforms and advertisers, it certainly is an easier way to control messaging. It fools good-faith users all the time. I've shared examples of that elsewhere in this thread.
I think that you are reading this too narrowly. SPAMmers etc. are often in a hurry. For example, simply not responding for a second or two to an inbound SMTP connection drops a whole group of bad email attempts on the floor while no one else even notices.[0] Another example: manually delaying admitting new users to a forum (and in the process checking for bad activity from their IP/email etc.) seems to shed another bunch of unwanteds, as does raising the cost a little with some simple questions on the way in. This point about small extra delay and effort deterring disproportionately bad behaviour is quite broad.
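As a concrete illustration of the delay idea (not sendmail itself, just a toy sketch in Python): wait a couple of seconds before sending the SMTP 220 banner, and drop any client that starts talking first, since pre-greeting traffic is a protocol violation typical of bulk senders. The port, hostname, and delay below are arbitrary, for illustration only:

    # Greet-pause sketch: delay the SMTP banner and hang up on clients that
    # send data before being greeted (a protocol violation common to spam bots).
    import select
    import socket

    GREET_PAUSE_SECONDS = 2.0

    def handle(conn):
        # If the client talks during the pause, drop it without a greeting.
        readable, _, _ = select.select([conn], [], [], GREET_PAUSE_SECONDS)
        if readable:
            conn.close()
            return
        conn.sendall(b"220 mail.example.com ESMTP\r\n")
        # ...the normal SMTP dialogue would continue here...
        conn.close()

    def serve(port=2525):
        srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind(("0.0.0.0", port))
        srv.listen()
        while True:
            conn, _addr = srv.accept()
            handle(conn)

    if __name__ == "__main__":
        serve()

Legitimate MTAs wait for the banner, so they never notice the pause; impatient bulk senders disqualify themselves.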
In your cost/benefit analysis, you overlook the harms created by secretive actions. That's why I asked for details about your experience.
The internet has run on secrets for 40 years. That doesn't make it right. Now that everyone and their mother is online, it's time to consider the harms that secrets create.
There are bad actors, and many of them are lazy/stupid. Their activity imposes a tax/harm on the rest of us. One way to minimise that harm to the good actors is to use some mildly covert measures. The sendmail GreetPause, for example, is hardly a secret: it catches a common deliberate malicious protocol violation and is publicly documented. This is not unique to the Internet nor new; see also banking and personal security and so on.
This subthread started with a discussion about how "HN itself also shadow flags submissions" [1]. That's a slightly different form of moderation than the t.co delays.
Another commenter argued "Increasing cost of attacks is an effective defense strategy."
I argued it is not, and you said adding a delay can cut out bad stuff. Delays are certainly relevant to the main post, but that's not what I was referring to. And I certainly don't argue against using secrets for personal security! Securitizing public discourse, however, is another matter.
Can you elaborate on GreetPause? Was it to prevent a DDOS? I don't understand why bad requests couldn't just be rejected.
Okay, so the requests do get rejected, it just uses a delay to make that decision.
I don't consider GreetPause to be a form of shadow moderation because the sender knows the commands were rejected. The issue with shadow moderation on platforms is that the system shows you one thing while showing others something else.
Legally speaking, I have no problem with shadow moderation. I only argue it's morally wrong and bad for discourse. It discourages trust and encourages the growth of echo chambers and black-and-white thinking.
How do you view the rest of typical SPAM filtering, where the mail is apparently accepted for delivery but then silently thrown away? For simplicity assume a system such as mine where I control the MTA and client, so no one is making decisions hidden from me as the end user who wants to get the ham and see no SPAM. (I get tens of ham per day and many many thousands of SPAM attempts.)
Note that in the GreetPause case the SPAMmer will not see the rejection errors, since they don't look at the response to their hit-and-run (i.e. no one gets to see any error, neither sender nor target), and a legitimate sender should never get the error, so even this may be messy by your criteria, I think!
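To make the "accepted but silently thrown away" pattern concrete, here is a rough sketch of a delivery-time filter. It assumes an upstream scanner such as SpamAssassin has already added an X-Spam-Flag header and that mail is delivered into a Maildir; both are assumptions about the setup, not a description of any particular system:

    # Silent-discard sketch: the message was already accepted at SMTP time;
    # this delivery-time filter reads it from stdin and either files it into
    # the inbox or drops it with no bounce and no notice to the sender.
    import email
    import os
    import sys
    from mailbox import Maildir

    INBOX = Maildir(os.path.expanduser("~/Maildir"), create=True)  # hypothetical path

    def deliver(raw):
        msg = email.message_from_bytes(raw)
        if msg.get("X-Spam-Flag", "").strip().upper() == "YES":
            return  # silent discard: the sender saw a normal SMTP acceptance
        INBOX.add(msg)

    if __name__ == "__main__":
        deliver(sys.stdin.buffer.read())

The sender's MTA saw a normal acceptance, so nothing bounces; the message just never reaches the inbox.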
Spammers game the system while good-faith users get edged out. Spammers are determined actors who perceive threats everywhere, whereas good-faith users never imagine that a platform would secretly remove their content. Today, you see low quality content on social media, not because the world is dumb, but because the people who get their message out know the secret tricks.
Secret suppression is extremely common [1].
Many of today's content moderators say exceptions for shadowbans are needed [2]. They think lying to users promotes reality. That's bologna.
so to spammers shadowbanning makes no difference, but good-faith users somehow get discouraged even if they don't know they are shadowbanned just because they get no reaction to their posts? how is an explicit ban any less discouraging?
i can't see how shadowbanning makes things worse for good-faith users. and evidently it does work against spammers here on HN (though we don't know if it is the shadow or the banning that makes it effective, but i'll believe dang when he says that it does help)
It's about whose messages are sidelined, not who gets discouraged.
With shadow removals, good-faith users' content is elbowed out without their knowledge. Since they don't know about it, they don't adjust their behavior or take their comments elsewhere.
Over 50% of Reddit users have had content removed that they don't know about. Just look at what people say when they find out [1].
> and evidently it does work against spammers here on HN
It doesn't. It benefits people who know how to work the system. The more secret it is, the more special knowledge you need.
It's cool that you set up your own instance, but do you see no problem with covertly altering the score of a story?
Such secrecy leads to oversized, over-trusted forums, and is what this post seeks to address.