At a time when the news media is under intense scrutiny, when people struggle to distinguish reliable information from “fake news” from merely biased news, how will we decide — and who will decide:
- When is content inappropriate?
- Who controls the content?
- What if content is used to deceive?
I posed these questions last week, with emphasis on the information, or the content, that we create. And I asked how we — the content creators — will shape the answers.
Answering the content conundrum
Here’s one answer, from Steven Brill, whose Wikipedia page calls him a “journalist-entrepreneur.” Brill’s new project is called NewsGuard.
NewsGuard, whose launch date has not been announced, will try to “help consumers distinguish between sites that are trying to get it right and sites that are trying to trick people.” Those are the words of Brian Stelter, who interviewed Brill for CNN’s “Reliable Sources” earlier this month.
NewsGuard won’t evaluate individual stories. But it will evaluate the sources of those stories. In Brill’s words, “We’re…telling people the difference between The Denver Post, which is a real newspaper, and The Denver Guardian, which broke a bunch of, you know, completely fake stories right before the election.”
Trying the human element
And then Brill reveals his secret sauce. Recognizing that algorithms aren’t up to the task of evaluating news sources, Brill proposes to use “guess what, human beings” for the job.
That’s right: real, live people with journalistic expertise will evaluate the news sources and tell us which ones we can trust.
I have to admit, it sounds good. Step aside, Facebook: put away your algorithms and let real humans do the job.
Craftspeople, not software, advising us who we can trust when we venture online. In an age that loves all things custom and bespoke — an age that disdains cold, impersonal, mass production — it sounds like a perfect fit.
But will it pay?
Seventy or eighty years ago, every small town had a newspaper. And every one of those papers had a real, human editor who pulled stories from the wire services and decided which ones to print. And every day everyone in town bought the paper. Until they didn't anymore.
As appealing as Brill’s NewsGuard sounds, I wonder how his business model will avoid the same fate that befell those small-town newspapers. He says NewsGuard will make money by licensing its services to the big search and social-media platforms — like Google and, yes, like Facebook, which I presume will gladly be rid of its algorithms — and of its responsibility to uphold the integrity of public discourse.
And isn’t it already being tried?
Writing in The Verge last week, Casey Newton reports that the SXSW tech festival is coming to grips with the problem of the internet as a platform for misinformation. (He calls it a “reckoning.”)
Newton describes how a couple of human-moderated sites already grapple with the problem, citing the narrowly curated Apple News and the venerable fact-checking site, Snopes, whose proprietor, David Mikkelson, laments how quickly and easily falsehoods spread across today’s internet.
In the past, Mikkelson says, “It kind of took weeks for things to go viral — gave us plenty of time to look into them, write them up.” Now it can take minutes. Newton adds, “Fact-checking requires research, and lying does not, which keeps fact-checkers and the social networks that rely on them at a constant disadvantage.”
In other words, the good guys are being outpaced by the bad guys.
What do you think? Is NewsGuard the beginning of a movement toward an online world where we enjoy high-quality, trustworthy content? Or will it fall short? If so, why? And what might a better solution look like?
I nearly always root for humans and certainly will root for Steven Brill. The democratization of publishing is primarily a good thing (anyone can publish and find an audience), but is also a serious problem (anyone can publish anything — including stuff that isn’t true but plays on people’s emotions — and find an audience). NewsGuard’s humans will do a better job of separating the honest from the dishonest. I think honest always wins in the long run. Fingers crossed…
Thanks for your comment. I’m rooting for the humans too, and I hope that Steven Brill can find a way to make his venture financially viable.
I think this is a much better model than any form of censorship. It is always better to bless than to damn.
That said, I think the question here is whether the problem we are trying to solve is that people are having a hard time finding news they can trust, or whether it is that some people are alarmed that other people are reading news that the first set of people don’t trust.
If it is the former — people looking for a more reliable news feed — then this model can work if people are sufficiently concerned to pay for it.
If it is the latter — people wanting to control what other people read — it is a non-starter.
From what I hear, though, I very much fear that most people are much more concerned about what other people are reading than about what they themselves are reading. Most of us seem to think that we have found news sources we can rely on. Of course, confirmation bias plays a huge role here. We trust news that confirms our prejudices and distrust news that contradicts them. This is true across the political spectrum.
And by the same token, we should recognize that if curation of reliable news becomes a viable business model, then we will inevitably see left-wing curators and right-wing curators, with people subscribing to those that most closely match their own positions.
Another measure that I think would go a long way toward alleviating the problem, and which I am really surprised has not been implemented yet, is a universal subscription model. Few people are willing to pay to get past the paywall of a single publication, and you can be sure that those who do are choosing to subscribe to the sources that match their current ideological alignment. But if you are not willing to pay for a single paywall, then you will invariably rely more on free media, and since something is obviously motivating the purveyors of free media, chances are you are reading more biased and less reliable sources.
I am not willing to pay for anyone's paywall. But I do find myself coming up against the paywalls of over a dozen different papers and magazines from around the world. If I could pay one subscription fee that gave me access, even limited access, to all of those papers, I would probably buy it. I think such a service would do something to increase both the variety and reliability of the content people are reading.