At a time when the news media is under intense scrutiny, when people struggle to distinguish reliable information from “fake news,” and both from merely biased news, how will we decide — and who will decide:
- When is content inappropriate?
- Who controls the content?
- What if content is used to deceive?
I posed these questions last week, with emphasis on the information, or the content, that we create. And I asked how we — the content creators — will shape the answers.
Answering the content conundrum
NewsGuard, whose launch date has not been announced, will try to “help consumers distinguish between sites that are trying to get it right and sites that are trying to trick people.” Those are the words of Brian Stelter, who interviewed NewsGuard co-founder Steven Brill for CNN’s “Reliable Sources” earlier this month.
NewsGuard won’t evaluate individual stories. But it will evaluate the sources of those stories. In Brill’s words, “We’re…telling people the difference between The Denver Post, which is a real newspaper, and The Denver Guardian, which broke a bunch of, you know, completely fake stories right before the election.”
Trying the human element
And then Brill reveals his secret sauce. Recognizing that algorithms aren’t up to the task of evaluating news sources, Brill proposes to use “guess what, human beings” for the job.
That’s right: real, live people with journalistic expertise will evaluate the news sources and tell us which ones we can trust.
I have to admit, it sounds good. Step aside, Facebook: put away your algorithms and let real humans do the job.
Craftspeople, not software, advising us whom we can trust when we venture online. In an age that loves all things custom and bespoke — an age that disdains cold, impersonal, mass production — it sounds like a perfect fit.
But will it pay?
Seventy or eighty years ago, every small town had a newspaper. And every one of those papers had a real, human editor who pulled stories from the wire services and decided which ones to print. And every day everyone in town bought the paper. Until they didn’t anymore.
As appealing as Brill’s NewsGuard sounds, I wonder how his business model will avoid the same fate that befell those small-town newspapers. He says NewsGuard will make money by licensing its services to the big search and social-media platforms — like Google and, yes, like Facebook, which I presume will gladly be rid of its algorithms — and of its responsibility to uphold the integrity of public discourse.
And isn’t it already being tried?
Writing in The Verge last week, Casey Newton reports that the SXSW tech festival is coming to grips with the problem of the internet as a platform for misinformation. (He calls it a “reckoning.”)
Newton describes how a couple of human-moderated sites already grapple with the problem, citing the narrowly curated Apple News and the venerable fact-checking site Snopes, whose proprietor, David Mikkelson, laments how quickly and easily falsehoods spread across today’s internet.
In the past, Mikkelson says, “It kind of took weeks for things to go viral — gave us plenty of time to look into them, write them up.” Now it can take minutes. Newton adds, “Fact-checking requires research, and lying does not, which keeps fact-checkers and the social networks that rely on them at a constant disadvantage.”
In other words, the good guys are being outpaced by the bad guys.
What do you think? Is NewsGuard the beginning of a movement toward an online world where we enjoy high-quality, trustworthy content? Or will it fall short? If so, why? And what might a better solution look like?