Who makes the rules of the internet? Who judges what’s offensive and what’s OK? What are the implications for those of us who create content?
In 1964, the U.S. Supreme Court had to decide whether the State of Ohio could ban a film it deemed to be obscene. Famously, Associate Justice Potter Stewart wrote that while he was hard pressed to define what qualifies something as obscene, “I know it when I see it.”
Where are the boundaries?
The boundaries of offensiveness have always been fuzzy and subject to change. Movie scenes that horrify one audience might not elicit even a blush from another. Books that would’ve gotten me in trouble had they been found in my high-school locker are part of the curriculum today.
Despite the lack of rules, the boundaries are very, very real. Most of us would say with all sincerity that, like Justice Stewart, we know when something transgresses a boundary. There are standards, even if they exist only in our minds and are sustained by our (illusory?) sense of belonging to a community.
The secret rules of the internet
This week I came upon The Secret Rules of the Internet, a long piece that describes the ways in which content is moderated on the major social-media platforms.
To the extent that I’d thought about how moderation works, which admittedly wasn’t much, I never would’ve supposed that:
- Moderators often work with guidelines that are slapdash and incomplete.
- Moderators are poorly trained, if they’re trained at all.
- Moderators are prone to depression and other psychological disorders, largely because their jobs force them to see things they can’t bring themselves to describe to anyone.
- There are no standards or best practices for moderation; rather, most media companies treat their moderation practices as trade secrets.
- Moderation is often shoved into a “silo,” segregated from the rest of the company, even — especially — from areas that set the company’s course in terms of legal and ethical principles.
- Some platforms are better at moderation than others. (The article contrasts Facebook, with its relatively well-defined Safety Advisory Board, and Reddit, which has weak guidelines, a small team of moderators, and a reputation for harboring lots of offensive content.)
According to the article’s authors — Catherine Buni and Soraya Chemaly — all of these things are true.
A winding path
While I found the article eye-opening in some respects, it simply reinforces what we’ve always known: issues of decency and obscenity aren’t decided by decree, by legislation, or by popular vote. Instead, they reflect the tacit consensus of the community.
Don’t get me wrong. I’m not saying that there are no absolutes, that everything is relative. I think that some core values, like respect and compassion, are universal. What’s not universal is our understanding of how those values should play out in everyday life.
We can try to legislate that understanding — and people have tried, from the Code of Hammurabi to the Law of Moses right up to modern times. But written laws can’t anticipate every situation, and they can’t perfectly uphold the core values. Something more is needed: the consensus of the community.
That consensus comes about through a process that winds, twists, and often doubles back on itself.
It’s the same process by which languages evolve, a process that might seem messy and haphazard but is quintessentially human.
I want to think that there are, and I think there should be, standards for judging online content. But no matter how much is codified, there’ll always be unforeseen cases that require new insights. When that happens, we have to rely on people — not people who are marginalized, not people on the low end of the pay scale, but people who have the experience and the good judgment to make the right call.
What do you think of the criteria for defining what’s acceptable and what’s not acceptable online? Does the process need to be improved? Can it be improved?
Finally, how can we, as creators of content, play a part in making things better?