Our representatives no longer represent the public; they do not care to. American democracy has died.
Democracy didn’t die in darkness. It died in the light.
The full glare of journalism was turned on this legislation, its impact and its motives, but that didn’t matter to those who had the power to go ahead anyway. Journalism, then, proved to be an ineffective protector of democracy, just as it is proving ineffective against every other attack on democracy’s institutions by this gang. Fox News was right to declare a coup, wrong about the source. The coup already happened. The junta is in power. In fact, Fox News led it. We have an administration and Congress that are tearing down government institutions — law enforcement, the courts, the State Department and foreign relations, safety nets, consumer protection, environmental protection — and society’s institutions, starting with the press and science, not to mention truth itself. The junta is, collaboratively or independently, doing the bidding of a foreign power whose aim is to get democracy to destroy itself. Journalism, apparently, was powerless to stop any of this.
I don’t mean to say that journalism is solely to blame. But journalism is far from blameless. I have sat in conferences listening to panels of journalists who blame the public they serve. These days, I watch journalists blame Facebook, as if the sickness in American democracy is only a decade old and could be turned on by a machine. I believe that journalism must engage in its own truth and reconciliation process to learn what we have done wrong: how our business models encourage discord and the reduction of any complexity to the absurd; how we concentrate on prediction over information and education; how we waste journalistic resources on repetition; how we lately have used our false god of balance to elevate the venal to the normal; how we are redlining quality journalism as a product for the elite. I don’t believe we can fix journalism until we recognize its faults. But I’ll save that for another day. Now I want to ask a more urgent question, given the state of democracy.
What should journalism do? Better: What should journalism be? It is obviously insufficient to merely say “this happened today” or “this could happen tomorrow.”
What should journalism’s measures of value be? A stronger democracy? An informed public conversation? Civil discourse? We fail at all those measures now. How can journalism change to succeed at them? What can it do to strengthen — to rescue — democracy? That is the question that consumes me now.
I will start here. We must learn to listen and help the public listen to itself. We in media were never good at listening — not really — but in our defense, our media, print and broadcast, were designed for speaking. The internet intervened and enabled everyone to speak but helped no one listen. So we now live amid ceaseless dissonance: all mouths, no ears. I am coming to see that civility — through listening, understanding, and empathy — is a necessary precondition to learning, to accepting facts and understanding other positions, to changing one’s mind and finding common ground. Thus I have changed my own definition of journalism.
My definition used to be: helping communities organize their knowledge to better organize themselves. That was conveniently broad enough to fit most any entrepreneurial journalism student’s ideas. It was information-based, for that was my presumption — the accepted wisdom — about journalism: it lives to inform. Now I have a new definition of journalism:
To convene communities into civil, informed, and productive conversation.
Journalism — and, I’d argue, Facebook and the internet platforms — share an imperative: to reduce the polarization we all helped cause by helping citizens find common ground.
This is the philosophy behind our Social Journalism program at CUNY. It is why we at the News Integrity Initiative invested in Spaceship Media’s work, to convene communities in conflict into meaningful conversation. But that is just the beginning.
Clearly, journalism must devote more of its resources to investigation, to making sure that the powerful know they are watched, whether they give a damn or not. Journalism must understand its role as an educator and measure its success or failure based on whether the public is more informed. Journalism and the platforms need to provide the tools for communities to organize and act in collaborative, constructive ways. I will leave this exploration, too, for another day.
We are in a crisis of democracy and its institutions, including journalism. The solutions will not be easy or quick. We cannot get there if we assume what we are living through is a new normal or worse if we make it seem normal. We cannot succeed if we assume our old ways are sufficient for a new reality. We must explore new goals, new paths, new tools, new measures of our work.
Storyful and Moat — together with CUNY and our new News Integrity Initiative* — have announced a collaboration to help advertisers and platforms avoid associating with and supporting so-called fake news. This, I hope, is a first, small step toward fueling a flight to quality in news and media. Add to this:
A momentous announcement by Ben Gomes, Google’s VP of engineering for Search, that its algorithms will now favor “quality,” “authority,” and “the most reliable sources” — more on that below.
The advertiser revolt led by The Guardian, the BBC, and ad agency Havas against offensive content on YouTube, getting Google to quickly respond.
These things — small steps, each — give me a glimmer of hope for supporting news integrity. I will even go so far as to say — below — that I hope this can mark the start of renewing support to challenged institutions — like science and journalism — and rediscovering the market value of facts.
The Storyful-Moat partnership, called the Open Brand Safety framework, first attacks the low-hanging and rotten fruit: the sites that are known to produce the worst fraud, hate, and propaganda. I’ve been talking with both companies for some time because supporting quality is an extension of what they already do. Storyful verifies social content that makes news; its exhaust is knowing which sites can’t be verified because they lie. Moat tells advertisers when they should not waste money on ads that are not seen or clicked on by humans. Its CTO, Dan Fichter, came to me weeks ago saying they could add a warning about content that is crap (my word) — if someone could help them define crap. That is where this partnership comes in.
My hope is that we build a system around many signals of both vice and virtue so that ad agencies, ad networks, advertisers, and platforms can weigh them according to their own standards and goals. In other words, I don’t want blacklists or whitelists; I don’t want one company deciding truth for all. I want more data so that the companies that promote and support content — and by extension users — can make better decisions.
The hard work will be devising, generating, and using signals of quality and crapness, allowing for many different definitions of each. The best starting point for discussion of a definition comes from the First Draft Coalition’s Claire Wardle.
One set of signals is obvious: sites whose content is consistently debunked as fraudulent. Storyful knows; so do Politifact, Buzzfeed’s Craig Silverman, and Snopes. There are other signals of caution, for example a site’s age: an advertiser might want to think twice before placing its brand on a two-week-old Denver Guardian vs. the almost-200-year-old Guardian. Facebook and Google have their own signals around suspicious virality.
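To make the idea concrete, here is a minimal sketch of how one consumer of such signals — an advertiser, network, or platform — might weigh them according to its own standards. Every signal name, value, and weight below is invented for illustration; a real system would draw on far richer data and far more signals.

```python
# Hypothetical sketch: each content consumer applies its OWN weights to
# shared signals of vice and virtue, rather than relying on one
# universal blacklist or whitelist. All names and numbers are invented.

def score_source(signals, weights):
    """Weighted sum of a source's signals; unknown signals are ignored."""
    return sum(weights.get(name, 0.0) * value for name, value in signals.items())

# Signals for a hypothetical two-week-old site vs. an established one.
new_site = {"debunked_rate": 0.6, "site_age_years": 0.04, "shows_sources": 0.0}
old_site = {"debunked_rate": 0.01, "site_age_years": 196.0, "shows_sources": 1.0}

# One advertiser's standards: punish debunkings, reward age and transparency.
advertiser_weights = {"debunked_rate": -10.0, "site_age_years": 0.01, "shows_sources": 2.0}

print(score_source(new_site, advertiser_weights) < score_source(old_site, advertiser_weights))  # True
```

The point of the design is in the second argument: two advertisers with different standards can feed the same signal data through different weights and reach different, equally legitimate decisions — no single arbiter of truth required.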
But even more important, we need to generate positive signals of credibility and quality. The Trust Project endeavors to do that by getting news organizations to display and uphold standards of ethics, fact-checking, diversity, and so on. Media organizations also need to add metadata around original reporting, showing their work.
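As a purely hypothetical illustration of what “showing their work” as machine-readable metadata could look like — every field name below is invented for this sketch, not drawn from any real standard, though efforts like the Trust Project define actual markup for this purpose:

```python
import json

# Invented shape for "show your work" metadata attached to a story.
# A positive signal of quality: original reporting, cited sources,
# transparent corrections, and a published ethics policy.
story_metadata = {
    "original_reporting": True,
    "sources_cited": [
        {"type": "document", "url": "https://example.org/court-filing.pdf"},
        {"type": "interview", "transcript_available": True},
    ],
    "corrections": [],
    "ethics_policy_url": "https://example.org/standards",
}

# Serialized, this could travel alongside the story for advertisers,
# platforms, and readers to inspect.
print(json.dumps(story_metadata, indent=2))
```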
In talking about all this at an event we held at CUNY to kick off the News Integrity Initiative, I came to see that human effort will be required. Trust cannot be automated. I think there will be a need for auditing of media organizations’ compliance with pledges — an Audit Bureau of Circulations of good behavior — and for appeal (“I know we screwed up once but we’re good now”) and review (“Yes, we’re only two weeks old but so was the Washington Post once”).
Who will pay for that work? In the end, it will be the advertisers. But it is very much an open question whether they will pay more for the safety of associating with credible sources and for the societal benefit of putting their money behind quality. With the abundance the net creates, advertisers have relished paying ever-lower prices. With the targeting opportunities technology and programmatic ad marketplaces afford, they have put more emphasis on data points about users than the environment in which their ads and brands appear. Will public pressure from the likes of Sleeping Giants and #grabyourwallet change that and make advertisers and their agencies and networks go to the trouble and expense of seeking quality? We don’t know yet.
I want to emphasize again that I do not want to see single arbiters of trust, quality, authority, or credibility — not the platforms, not journalistic organizations, not any self-appointed judge — nor single lists of the good and bad. I do want to see more metadata about sources of information so that everyone in the media ecosystem — from creator to advertiser to platform to citizen — can make better, more informed decisions about credibility.
With that added metadata in hand, these companies must weigh it according to their own standards and needs in their own judgments and algorithms. That is what Google does every second. That is why Google News creator Krishna Bharat’s post about how to detect fake news in real-time is so useful. The platforms, he writes, “are best positioned to see a disinformation outbreak forming. Their engineering teams have the technical chops to detect it and the knobs needed to respond to it.”
And that is also why I see Ben Gomes’ blog post as so important. Google’s head of Search engineering writes:
Last month, we updated our Search Quality Rater Guidelines to provide more detailed examples of low-quality webpages for raters to appropriately flag, which can include misleading information, unexpected offensive results, hoaxes and unsupported conspiracy theories….
We combine hundreds of signals to determine which results we show for a given query — from the freshness of the content, to the number of times your search queries appear on the page. We’ve adjusted our signals to help surface more authoritative pages and demote low-quality content…
I count this as a very big deal. Google and Facebook — like news media before them — contend that they are mirrors to the world. Their mirrors might well be straight and true but they must acknowledge that the world is cracked and warped to try to manipulate them. For months now, I have argued to the platforms — and will argue the same to news media — that they must be more transparent about efforts to manipulate them … and thus the public.
Example: A few months ago, if you searched on Google for “climate change,” you’d get what I would call good results. But if your query was “is climate change real?” you’d get some dodgy results, in my view. In the latter, Google was at least in part anticipating, as it is wont to do, the desires or expectations of the user under the rubric of relevance (as in, “people who asked whether climate change is real clicked on this”). But what if a third-grader also asks that question? Search ranking was also influenced by the volume of chatter around that question, without necessarily full regard to whether and how that chatter was manufactured to manipulate — that is, the huge traffic and engagement around climate-change deniers and the skimpy discussion around peer-reviewed scientific papers on the topic. But today, if you try both searches, you’ll find similar good results. That tells me that Google has made a decision to compensate for manufactured controversy and in the end favor the institution of science. That’s big.
On This Week in Google, Leo Laporte and I had a long discussion about whether Google should play that role. I said that Google, Facebook, et al are left with no choice but to compensate for manipulation and thus decide quality; Leo played the devil’s advocate, saying no company can make that decision; our cohost Stacey Higginbotham called time at 40 minutes.
Facebook’s Mark Zuckerberg has made a similar decision to Google’s. He wrote in February: “It is our responsibility to amplify the good effects and mitigate the bad — to continue increasing diversity while strengthening our common understanding so our community can create the greatest positive impact on the world.” What’s good or bad, positive or not? As explained in an important white paper on mitigating manipulation, that is a decision Facebook will start to make as it expands its security focus “from traditional abusive behavior, such as account hacking, malware, spam and financial scams, to include more subtle and insidious forms of misuse, including attempts to manipulate civic discourse and deceive people.” That includes not just fake news but the fake accounts that amplify it: fake people.
I know there are some who would argue that I’m giving the lie to my frequent contention that Google and Facebook are not media companies and that by defending their need to rule on quality, I am having them make editorial decisions. No, what they’re really defining is fakery: (1) that which is devised to deceive or manipulate; (2) that which intentionally runs counter to fact and accepted knowledge. Accepted by whom? By science, by academics, by journalism, even by government — that is, by institutions. Thus this requires a bias in favor of institutions at a time when every institution in society is being challenged because — thanks to the net — it can be. Though I often challenge institutions myself, I don’t do so in the way Trumpists and Brexiters do, trying to dismantle them for the sake of destruction.
In the process of identifying and disadvantaging fake news, Krishna Bharat urges the platforms to be transparent about “all news that has been identified as false and slowed down or blocked” so there is a check on their authority. He further argues: “I would expect them to target fake news narrowly to only encompass factual claims that are demonstrably wrong. They should avoid policing opinion or claims that cannot be checked. Platforms like to avoid controversy and a narrow, crisp definition will keep them out of the woods.”
Maybe. In these circumstances, defending credibility, authority, quality, science, journalism, academics, and even expertise — that is, facts — becomes a political act. Politics is precisely where Google and Facebook, advertisers and agencies do not want to be. But they are given little choice. For if they do not reject lies, fraud, propaganda, hate, and terrorism, they will end up supporting them with their presence, promotion, and dollars. On the other hand, if they do reject crap, they will end up supporting quality. They each have learned they face an economic necessity to do this: advertisers so they are not shamed by association, platforms so they do not create user experiences that descend into cesspools. Things got so bad, they have to do good. That is the glimmer of hope I see.
None of this will be easy. Much of it will be contentious. We who can must help. That means that media should add metadata to content, linking to original sources; showing work so it can be checked; upholding standards of verification; openly collaborating on fact-checking and debunking (as First Draft is doing across newsrooms in France); and enabling independent verification of their work. That means that the advertising industry must recognize its responsibility not only to the reputation of its own brands but to the health of our information and media ecosystem it depends on. That means Facebook, Google — and, yes, Twitter — should stand on the side of sense and civility against manufactured nonsense and manipulated incivility. That means media and platforms should work together to reinvent the advertising industry, moving past the poison of reach and clickbait to a market built on value and quality. And that means that we as citizens and consumers should support those who support quality and must take responsibility for not spreading lies and propaganda, no matter how fun it seems at the time.
What we are really seeing is the need to gather around some consensus of fact, authority, and credibility if not also quality. We used to do that through the processes of education, journalism, and democratic deliberation. If we cannot do that as a society, if we cannot demand that our fellow citizens — starting with the President of the United States — respect fact, then we might as well pack it in on democracy, education, and journalism. I don’t think we’re ready for that. Please tell me we’re not. What ideas do you have?
* Disclosure: The News Integrity Initiative, operated independently at CUNY’s Tow-Knight Center, which I direct, received funding and support from the Craig Newmark Philanthropic Fund; Facebook; the Ford, Knight, and Tow foundations; Mozilla; Betaworks; AppNexus; and the Democracy Fund.
Jimmy Wales changed encyclopedias and news while he was at it. And now he’s at it again, announcing a crowdfunding campaign to start Wikitribune, a collaborative news platform with “professional journalists and community contributors working side-by-side to produce fact-checked, global news stories. The community of contributors will vet the facts, help make sure the language is factual and neutral, and will to the maximum extent possible be transparent about the source of news, posting full transcripts, video, and audio of interviews.” The content will be free, with monthly patrons providing as much support as possible and advertising as little as possible.
I’m excited about this for a few reasons:
First, I see the need for innovation around new forms of news.
Next, I want some news sites to break the overwhelming and constant flow of news and allow us in the public to pull back and find answers to the question, “What do we know about…?” We already have plenty of streams of news; we also need repositories of knowledge around news topics. As Jimmy explained it to me, it will have the value of a wiki (and Wikipedia) in a new platform built to purpose.
Finally, of course, I am delighted to see news services that respect and collaborate with the public.
I am listed as an adviser, personally. (I am not compensated and have no equity; just helping a good cause.) You can sign up here.
I’m proud that we at CUNY’s Graduate School of Journalism and the Tow-Knight Center just announced the creation of the News Integrity Initiative, charged with finding ways to better inform the public conversation and funded thus far with $14 million by nine foundations and companies, all listed on the press release. Here I want to tell its story.
This began after the election when my good friend Craig Newmark — who has been generously supporting work on trust in news — challenged us to address the problem of mis- and disinformation. There is much good work being done in this arena — from the First Draft Coalition, the Trust Project, Dan Gillmor’s work at ASU bringing together news literacy efforts, and the list goes on. Is there room for more?
I saw these needs and opportunities:
First, much of the work to date is being done from a media perspective. I want to explore this issue from a public perspective — not just about getting the public to read our news but more about getting media to listen to the public. This is the philosophy behind the Social Journalism program Carrie Brown runs at CUNY, which is guided by Jay Rosen’s summary of James Carey: “The press does not ‘inform’ the public. It is ‘the public’ that ought to inform the press. The true subject matter of journalism is the conversation the public is having with itself.” We must begin with the public conversation and must better understand it.
Second, I saw that the fake news brouhaha was focusing mainly on media and especially on Facebook — as if they caused it and could fix it. I wanted to expand the conversation to include other affected and responsible parties: ad agencies, brands, ad networks, ad technology, PR, politics, civil society.
Third, I wanted to shift the focus of our deliberations from the negative to the positive. In this tempest, I see the potential for a flight to quality — by news users, advertisers, platforms, and news organizations. I want to see how we can exploit this moment.
Fourth, because there is so much good work — and there are so many good events (I spent about eight weeks of weekends attending emergency fake news conferences) — we at the Tow-Knight Center wanted to offer to convene the many groups attacking this problem so we could help everyone share information, avoid duplication, and collaborate. We don’t want to compete with any of them, only to help them. At Tow-Knight, under the leadership of GM Hal Straus, we have made the support of professional communities of practice — so far around product development, audience development and membership, commerce, and internationalization — key to our work; we want to bring those resources to the fake news fight.
My dean and partner in crime, Sarah Bartlett, and I formulated a proposal for Craig. He quickly and generously approved it with a four-year grant.
And then my phone rang. Or rather, I got a Facebook message from the ever-impressive Áine Kerr, who manages journalism partnerships there. Facebook had recently begun working with fact-checking agencies to flag suspect content; it started its Journalism Project; and it held a series of meetings with news organizations to share what it is doing to improve the lot of news on the platform.
Áine said Facebook was looking to do much more in collaboration with others and that led to a grant to fund research, projects, and convenings under the auspices of what Craig had begun.
Soon, more funders joined: John Borthwick of Betaworks has been a supporter of our work since we collaborated on a call to cooperate against fake news. Mozilla agreed to collaborate on projects. Darren Walker at the Ford Foundation generously offered his support, as did the two funders of the center I direct, the Knight and Tow foundations. Brian O’Kelley, founder of AppNexus, and the Democracy Fund joined as well. More than a dozen additional organizations — all listed in the release — said they would participate as well. We plan to work with many more organizations as advisers, funders, and grantees.
Now let me get right to the questions I know you’re ready to tweet my way, particularly about one funder: Have I sold out to Facebook? Well, in the end, you will be the judge of that. For a few years now, I have been working hard to try to build bridges between the publishers and the platforms and I’ve had the audacity to tell both Facebook and Google what I think they should do for journalism. So when Facebook knocks on the door and says they want to help journalism, who am I to say I won’t help them help us? When Google started its Digital News Initiative in Europe, I similarly embraced the effort and I have been impressed at the impact it has had on building a productive relationship between Google and publishers.
Sarah and I worked hard in negotiations to assure CUNY’s and our independence. Facebook — and the other funders and participants present and future — are collaborators in this effort. But we designed the governance to assure that neither Facebook nor any other funder would have direct control over grants and to make sure that we would not be put in a position of doing anything we did not want to do. Note also that I am personally receiving no funds from Facebook, just as I’ve never been paid by Google (though I have had travel expenses reimbursed). We hope to also work with multiple platforms in the future; discussions are ongoing. I will continue to criticize and defend them as deserved.
My greatest hope is that this Initiative will provide the opportunity to work with Facebook and other platforms on reimagining news, on supporting innovation, on sharing data to study the public conversation, and on supporting news literacy broadly defined.
The work has already begun. A week and a half ago, we convened a meeting of high-level journalists and representatives from platforms (both Facebook and Google), ad agencies, brands, ad networks, ad tech, PR, politics, researchers, and foundations for a Chatham-House-rule discussion about propaganda and fraud (née “fake news”). We looked at research that needs to be done and at public education that could help.
The meeting ended with a tangible plan. We will investigate gathering and sharing many sets of signals about both quality and suspicion that publishers, platforms, ad networks, ad agencies, and brands can use — according to their own formulae — to decide not just what sites to avoid but better yet what journalism to support. That’s the flight to quality I have been hoping to see. I would like us to support this work as a first task of our new Initiative.
We will fund research. I want to start by learning what we already know about the public conversation: what people share, what motivates them to share it, what can have an impact on informing the conversation, and so on. We will reach out to the many researchers working in this field — danah boyd (read her latest!) of Data & Society, Zeynep Tufekci of UNC, Claire Wardle of First Draft, Duncan Watts and David Rothschild of Microsoft Research, Kate Starbird (who just published an eye-opening paper on alternative narratives of news) of the University of Washington, Rasmus Kleis Nielsen of the Reuters Institute, Charlie Beckett of POLIS-LSE, and others. I would like us to examine what it means to be informed so we can judge the effectiveness of our — indeed, of journalism’s — work.
We will fund projects that bring journalism to the public and the conversation in new ways.
We will examine new ways to achieve news literacy, broadly defined, and investigate the roots of trust and mistrust in news.
And we will help convene meetings to look at solutions — no more whining about “fake news,” please.
We will work with organizations around the world; you can see a sampling of them in the release and we hope to work with many more: projects, universities, companies, and, of course, newsrooms everywhere.
We plan to be very focused on a few areas where we can have a measurable impact. That said, I hope we also pursue the high ambition to reinvent journalism for this new age.
But we’re not quite ready. This has all happened very quickly. We are about to start a search for a manager to run this effort with a small staff to help with information sharing and events. As soon as we begin to identify key areas, we will invite proposals. Watch this space.
We keep looking at the problems of fake news and crap content — and the advertising that feeds them — through the wrong end of the periscope, staring down into the depths in search of sludge when we could be looking up, gathering quality.
There is a big business opportunity to be had right now in setting definitions and standards for and creating premium networks of quality.
In the last week, the Guardian, ad agency Havas, the UK government, the BBC, and now AT&T pulled their advertising from Google and YouTube, complaining about placement next to garbage: racist, terrorist, fake, and otherwise “inappropriate” and “offensive” content. Google was summoned to meet UK ministers under the threat they’ll exercise their European regulatory reflex.
Google responded quickly, promising to raise its standards regarding “hateful, offensive and derogatory content” and giving advertisers greater control over excluding specific sites.
Well, good. But this seems like a classic case of boiling the (polluted) ocean: taking the entire inventory of ad availabilities and trying to eliminate the bad ones. We’re doing the same thing with fake news: taking the entire corpus of content online and trying to warn people away from the crap.
So now turn this around.
The better, easier opportunity is to create premium networks built on quality: Not “we’ll put your ad anywhere except in that sewer we stumbled over” but instead “we found good sites we guarantee you’ll be proud to advertise on.”
Of course, this is how advertising used to work. Media brands produced quality products and sold ads there. Media departments at ad agencies chose where to put clients’ ads based on a number of factors — reach, demographic target, cost, and quality environment.
The net ruined this lovely, closed system by replacing media scarcity with online abundance. Google made it better — or worse, depending on your end of the periscope — by charging on performance and thus sharing risk with the advertisers and establishing the new metric for value: the click. AppNexus and other programmatic networks made it yet better/worse by creating huge and highly competitive marketplaces for advertising inventory, married with data about individual users, which commoditized media adjacency. Thus the advertiser wants to sell boots to you because you once looked at boots on Amazon and it doesn’t much matter where those boots follow you — even to shite like Breitbart…until Sleeping Giants comes along and shames the brand for being there.
So why not sell quality? Could happen. There are just a few matters standing in the way:
First, advertisers need to value quality. There has been much attention paid to assuring marketers that their ads are visible to the user and that they are clicked on by a human, not a bot. But what about the quality of the environment and its impact on the brand? In our recent research at CUNY’s Tow-Knight Center, we found that brands rub off both ways: users judge both media and brands by the company they keep. This is why it is to the Guardian’s benefit to take a stand against crappy ad adjacencies with Google — because the Guardian sells quality. But will advertisers buy quality?
Second, there’s the question of who defines and determines quality. Over the years, I have seen no end of attempts to automate the answer to this question, whether by determining trust in news or quality in media. Impossible. There is no God signal of trust or virtue. The decision in the end is a human one and human decisions cost money. Besides, there is no one-size-fits-all definition and measurement of quality; that should vary by media brand and advertiser and audience. Still, the responsibility for determining quality has to fall somewhere and this is a hot potato nobody — brands, agencies, networks, platforms — wants because it is an expensive task.
Third, there’s the matter of price. Media companies, ad agencies, and ad networks will need to convince advertisers of the value of quality and the wisdom of paying for it, returning to an ad market built on a new scarcity. With fewer avails in a quality market — plus the cost of monitoring and assuring quality — the price will rise. Will advertisers give a damn if they can still sell stuff on shitty but cheap sites? Will the cost of being humiliated for appearing on Breitbart be worth the premium of avoiding that? On the other hand, will the cost of being boycotted by Breitbart when the advertiser pulls ads there be worth the price? This is a business decision.
I always tell my entrepreneurial students that when they see a problem, they should look for the solution, as an engineer would, or the opportunity, as an entrepreneur would. There are many opportunities here: to create premium networks of quality and trustworthy news and content; to create mechanisms to judge and stand by quality; to audit quality … and, yes, to create quality.
Our opportunity is not so much to kill bad content and bad advertising placements and to teach people to avoid all that bad stuff but to return to the reason we all got into these businesses: to make good stuff.