This is about information: who controls its flow, who uses it, and who watches you when you use it.
This is about you. Because you access information — or content — on the internet, and because you probably create it as well.
Will someone have the power to tell you what content is and is not appropriate? Who controls what happens to the content you publish? Will someone use your content to deceive or mislead?
Just this month, three news stories have brought these questions into sharper focus. Will we, as writing professionals, have good answers? We’d better, because I don’t know if anyone else will.
When is content inappropriate? Who decides?
On March 1, Twitter CEO Jack Dorsey promised to start measuring the platform’s “health” as a first step to freeing users from trolls and propaganda. (Josh Bernoff does a great job of breaking down the announcement.) Admitting that “we didn’t fully predict or understand the real-world negative consequences” of Twitter’s free-for-all format, Dorsey promises to get busy and fix the problem.
Can he fix it? Can he put the lid back on Pandora’s box? It strikes me as too little, too late.
Of course, Dorsey isn’t the only social-media mogul wringing his hands. Mark Zuckerberg’s Facebook, besieged by covert Russian advertisers during the 2016 presidential campaign, is looking for ways to take back the platform, as it were, and ensure a more balanced flow of content.
This, too, sounds like too little, too late. More to the point, the people in charge of Twitter and Facebook aren’t journalists or publishers. They’re technologists, ill-equipped to consider the complex ethical issues associated with controlling the creation, dissemination, and use of content.
Who controls the content?
The day after Jack Dorsey’s announcement, Foreign Policy reported that Chinese authorities are demanding personal information — home, school, and work addresses; photos; scans of ID cards; even marriage certificates — from Uighurs who have emigrated to France. (The Uighurs are a mostly Muslim ethnic group concentrated in northwest China. For decades they’ve clashed with Beijing over issues of autonomy.)
Why would Uighurs living in France — some of them French citizens — give their personal information to the Chinese government? Because if they don’t, the government threatens to make life difficult for their family members back in China.
In possession of their personal information, of course, the Chinese authorities can more easily keep tabs on a group they see as seditious — even from thousands of miles away.
George Orwell foresaw this kind of thing when he wrote 1984. For most of us, though, the “information age” was touted as a time of greater community and greater freedom, as the light of knowledge and truth flooded the world.
What if content is used to deceive?
Knowledge and truth? Or something else?
Robert D. Kaplan, of the Center for a New American Security, writing in the Washington Post, described the potential for using digital and video technology to create an illusory world — in the same way movie directors used 20th-century technology to stage lavish scenes, like the parting of the Red Sea in The Ten Commandments.
A movie director can keep an audience in thrall for a couple of hours, Kaplan explains. But a 21st-century authoritarian regime will be able to create a propaganda bubble in which it entraps all of its citizens, all the time.
Already, Kaplan reports, “the Chinese, eventually with the help of big data, are working on following the Internet searches of their citizens, and then determining who needs to be singled out for further observation. If a government or a company knows the destination and sequence of all of your searches, it is virtually inside your mind.” (He could just as easily have said Google here — not just the Chinese.)
For Kaplan, today’s “dark age of technology” is forcing us to fight for objectivity — for the assurance that the information we access is real and not an artifice made to spread propaganda or steal our privacy.
Is Kaplan being alarmist? Has he himself, by cherry-picking facts and exaggerating the magnitude of the problem, created the very artificial reality that he decries? Not if you ask me. From watching the news media and social media, I’m convinced that the situation is already more precarious than most people want to admit.
All of which, again, prompts questions about how to ensure that content is accurate and unbiased, that it’s made freely available, and that it won’t be used to manipulate or deceive. Questions that might soon dominate the professions in which we, as content producers, work.
How will we answer?
When the questions come, will we have the right answers? We can’t expect Jack Dorsey or Mark Zuckerberg — well-intentioned though they may be — to even understand the complexity of the questions, let alone find the answers.
So it’s up to us, the content experts. What are we doing to prepare?