I’ve seen a couple of efforts lately to help determine who’s credible online, and though I understand the need and the motive, these attempts are fundamentally flawed and perhaps even more damaging than they are helpful.
It’s very simple — but that’s the problem: credibility isn’t so simple. They list articles and you get to “credit” or “discredit” them. These scores are, in turn, compiled for writers and publications.
The first and most obvious problem, which TechCrunch points out, is that this is bait for grudges. Fox on one side and the Times on the other will be discredited by their detractors all day long. One man’s bias is often the other man’s truth.
The second and more fundamental problem is that there’s no basis to decide credibility. Does one error ruin an article’s credibility? How many discredits does it take to ruin a reporter’s or a publication’s? And then what does that mean? That they lied? That you don’t believe them? That you don’t like them? That they make mistakes? That they don’t report enough? That they use anonymous sources? That they relied on bad sources? That they wrote it badly? That they weren’t transparent?
And who’s doing the judging? Are they credible? Who’s judging the judges, then?
Over the years, I’ve heard of various attempts to determine credibility or bias algorithmically, in an effort to take out this human bias in the process of finding bias, but that’s just an engineer’s wet dream. Again, the problem is definition (not to mention technical limitations of analyzing text and ideas).
Newstrust has tried to do this in a subtler way, with star ratings and comments, but it faces the same issues: Who’s doing the rating? On what basis?
I think these folks are attacking the problem from the wrong perspective. They’re trying to play whack-a-mole with credibility and identify all the bad stuff — just as news people, long accustomed to packaging the world in a pretty box with a bow on top, keep wanting to kill every bad comment on their sites. They’ll fail. Life insists on being messy. The task of identifying the bad stuff is so large — there is, indeed, too much junk — that these folks try to scale their effort with simplicity or technology. Won’t work. They’ll never find all the bad stuff. Ultimately, this can be dangerous because good people who do good work can easily be besmirched by bad judges with grudges.
Instead, I think it would be far more useful to concentrate on finding the good stuff. That is the real challenge in the new architecture of news and media, in the ecosystem of distribution and aggregation. When all the articles on a given topic are brought together by Daylife (where, disclosure, I am a partner) or Google News, the real need and the true service is to find the best articles, because that’s what we want to spend our time on. (A restaurant guide with only bad reviews doesn’t help me eat.)
We also need to find ways to surface original reporting so we can support that reporting with our attention (and with traffic and ads). This is why I believe that there should be an ethic in professional journalism, as there is in blogs, to link to prior work and sources. All roads should link back to the original reporting.
There is still clearly bias in this approach of finding the best. Many will recommend Paul Krugman; many won’t trust that recommendation. Who’s doing the recommending still matters (and so it would be very helpful to have transparency among them). But by highlighting the good rather than trying to expunge the bad, we would try to support good journalism wherever it is done — MSM or blog. And that’s really the point, isn’t it?
On top of that, every news site should have a means for people to help correct errors — that’s as simple as adding comments (though doing so adds the cost of policing them). Correcting errors makes one more credible; that, too, is an ethic of blogs. And that, too, will improve the journalism, just as you improve mine in comments here. At the end of the day, there’ll always be disagreements, though. Look at the post below about airlines; there’s plenty of argument there. Is that really about credibility? No. It’s about conversation.