Who makes the rules of the internet? Who judges what’s offensive and what’s OK? What are the implications for those of us who create content?
In 1964, the U.S. Supreme Court had to decide whether the State of Ohio could ban a film it deemed to be obscene. Famously, Associate Justice Potter Stewart wrote that while he was hard-pressed to define what qualifies something as obscene, “I know it when I see it.”
Where are the boundaries?

Image source: The Verge (Eric Peterson)
The boundaries of offensiveness have always been fuzzy and subject to change. Movie scenes that horrify one audience might not elicit even a blush from another. Books that would’ve gotten me in trouble had they been found in my high-school locker are part of the curriculum today.
Despite the lack of rules, the boundaries are very, very real. Most of us would say with all sincerity that, like Justice Stewart, we know when something transgresses a boundary. There are standards, even if they exist only in our minds and are sustained by our (illusory?) sense of belonging to a community.
The secret rules of the internet
This week I came upon The Secret Rules of the Internet, a long piece that describes the ways in which content is moderated on the major social-media platforms.
To the extent that I’d thought about how moderation works, which admittedly wasn’t much, I never would’ve supposed that:
- Moderators often work with guidelines that are slapdash and incomplete.
- Moderators are poorly trained, if they’re trained at all.
- Moderators are prone to depression and other psychological disorders, largely because their jobs force them to see things they can’t bring themselves to describe to anyone.
- There are no standards or best practices for moderation; rather, most media companies treat their moderation practices as trade secrets.
- Moderation is often shoved into a “silo,” segregated from the rest of the company, even — especially — from areas that set the company’s course in terms of legal and ethical principles.
- Some platforms are better at moderation than others. (The article contrasts Facebook, which has a relatively well-defined Safety Advisory Board, with Reddit, which has weak guidelines, a small team of moderators, and a reputation for harboring lots of offensive content.)
According to the article’s authors — Catherine Buni and Soraya Chemaly — all of these things are true.