A Call for Cooperation Against Fake News

We — John Borthwick and Jeff Jarvis — want to offer constructive suggestions for what the platforms — Facebook, Twitter, Google, Instagram, Snapchat, WeChat, Apple News, and others — as well as publishers and users can do, now and in the future, to grapple with fake news, build better experiences online, and foster more civil and informed discussion in society.

Key to our suggestions is sharing more information to help users make better-informed decisions in their conversations: signals of credibility and authority from Facebook to users, from media to Facebook, and from users to Facebook. Collaboration between the platforms and publishers is critical. In this post we focus on Facebook, Twitter, and Google search, for two reasons: first, simplicity; second, these are the platforms that matter most today.

We do not believe that the platforms should be put in the position of judging what is fake or real, true or false, as censors for all. We worry about creating blacklists. And we worry that circular discussions about what is fake and what is true and whose truth is more truthy mask the fact that there are things that can be done today. We start from the view that almost all of what we do online is valuable and enjoyable, but there are always things we can do to improve the experience and act more responsibly.

In that spirit, we offer these tangible suggestions for action and seek your ideas.

  1. Make it easier for users to report fake news, hate speech, harassment, and bots. Facebook does allow users to flag fake news, but the function is buried so deep in a menu maze that it is nearly impossible to find; bring it to the surface. Twitter just added new means to mute harassment, but we think it would also be beneficial if users could report false and suspicious accounts and the service could feed that data back in some form to other users (e.g., “20 of your friends have muted this account” or “this account tweets 500 times a day”). The same would be helpful for Twitter search, Google News, Google search, Bing search, and other platforms.
  2. Create a system for media to send metadata about their fact-checking, debunking, confirmation, and reporting on stories and memes to the platforms. It happens now: Mouse over fake news on Facebook and there’s a chance the related content that pops up below will include a news site or Snopes reporting that the item is false. Please systematize this: Give trusted media sources and fact-checking agencies a path to report their findings so that Facebook and other social platforms can surface this information to users when they read these items and — more importantly — as they consider sharing them. The Trust Project is working on getting media to generate such signals. (We sketch what such metadata might look like after this list.) Thus we can cut off at least some viral lies at the pass. The platforms need to give users better information, and media need to help them. Obviously, the platforms can use such data from both users and media to inform their standards, ranking, and other algorithmic decisions in displaying results to users.
  3. Expand systems of verified sources. As we said, we don’t endorse blacklists or whitelists of sites and sources (though when lists of sites are compiled to support a service — as with Google News — we urge responsible, informed selection). But it would be good if users could know that the creator of a post has been online for only three hours with 35 followers, or that this is a site with a known brand and proven track record. Twitter verifies users. We ask whether Twitter, Facebook, Google, et al. could consider means to verify sources as well, so users know the Denver Post is well-established while the Denver Guardian was just established. (A toy sketch of such account signals follows this list.)
  4. Make the brands of those sources more visible to users. Media have long worried that the net commoditizes their news such that users learn about events “on Facebook” or “on Twitter” instead of “from the Washington Post.” We urge the platforms, all of them, to more prominently display media brands so users can know and judge the source — for good or bad — when they read and share. Obviously, this also helps the publishers as they struggle to be recognized online.
  5. Track back to original sources of news items and memes. We would like to see these technology platforms use their considerable computing power to help track back and find the source of news items, photos and video, and memes. For example, one of us saw an almost-all-blue map with 225K likes that was being passed around as evidence that millennials voted for Clinton when, in fact, at its origin the map was labeled as the results of a single, liberal site’s small online poll. It would not be difficult for any platform to find all instances of that graphic and pinpoint where it began. The source matters! Similarly, when memes are born and bred, it would be useful to know whether one or another started at a site with a certain frog as an avatar. While this is technically complicated, it’s far less complicated than the facial recognition that social platforms have today. (We sketch one standard technique, perceptual hashing, after this list.)
  6. Address the echo-chamber problem with recommendations from outside users’ conversational spheres. We understand why Facebook, Twitter, and others surface so-called trending news: not only to display a heat map but also to bring serendipity to users, to show them what their feeds might not. We think there are other, perhaps better, ways to do this. Why not be explicit about the filter-bubble problem and present users with recommended items, accounts, and sources that do *not* usually appear in their feeds, so a Nation reader sees a much-talked-about column from the National Review, so a Clinton voter can begin — just begin — to connect with and perhaps better understand the worldview of a Trump voter? (A toy version of such a recommender appears after this list.) Users will opt in or out, but let’s give them the chance to choose.
  7. Recognize the role of autocomplete in search requests in spreading impressions without substance. Type “George Soros is…” into a Google search box and you’re made to wonder whether he’s dead. He’s not. We well understand the bind the platforms are in: They are merely reflecting what people are asking and searching for. Google has been threatened with suits over what that data reveals. We know that hand-editing autocomplete results is both impractical and undesirable. However, it would be useful to investigate whether, even in autocomplete, more information could be surfaced to the user (e.g., “George Soros is dead” is followed by an asterisk and a link to its debunking). These are the kinds of constructive discussions we would like to see, rather than just volleys of complaint.
  8. Recognize how design choices can surface information that might be better left under the rock. We hesitate to suggest doing this, but if you dare to search Google for the Daily Stormer, the extended listing for the site at the moment we write this includes a prominent link to “Jewish Problem: Jew Jake Tapper Triggered by Mention of Black …” Is that beneficial, revealing the true nature of the site? Or is that deeper information better revealed by getting the user more quickly to the next listing in the search results: Wikipedia explaining that “The Daily Stormer is an American neo-Nazi and white supremacist news and commentary website. It is part of the alt-right movement …”? These design decisions have consequences.
  9. Create reference sites to enable users to investigate memes and dog whistles. G’bless Snopes; it is the cure for that email your uncle sends that has been forwarded a hundred times. Bless also Google for making it easy to search to learn the meanings of Pepe the Frog and Wikipedia for building entries to explain the origins. We wonder whether it would be useful for one of these services or a media organization to also build a constantly updated directory of ugly memes and dog whistles to help those users — even if few — who will look into what is happening so they can pass it on. Such a resource would also help media and platforms recognize and understand the hidden meanings and secret codes their platforms are being used to spread.
  10. Establish the means to subscribe to and distribute corrections and updates. We would love it if we could edit a mistaken tweet. We understand the difficulty of that once tweets have flown the nest to apps and firehoses elsewhere. But imagine you share a post you later find out to be false, and then imagine if you could at least append a correction link to the tweet in the archive. Better yet, imagine if you could send a followup message that alerts people who shared your tweet, Facebook post, or Instagram image to the fact that you were mistaken. Ever since the dawn of blogging, we’ve wished for such a means to subscribe to and send updates, corrections, and alerts around what we’ve posted. It is critical that Twitter and the other platforms do everything they can to enable responsible users who want to correct their mistakes to do so. (We sketch the skeleton of such a correction fan-out after this list.)
  11. Media must learn and use the lesson of memes to spread facts over lies. Love ’em or hate ’em, meme-maker Occupy Democrats racked up 100 to 300 million impressions a week on Facebook, according to its cofounder, by providing users with the social tokens to use in their own conversations, the thing they share because it speaks for them. Traditional media should learn a lesson from this: that they must adapt to their new reality and bring their journalism — their facts, fact-checking, reporting, explanation, and context — to the public where the public is, in a form and voice that is appropriate to the context and use of each platform. Media cannot continue to focus only on their old business model, driving traffic back to their websites (that notion sounds more obsolete by the day). So, yes, we will argue that, say, Nick Kristof should take some of his important reporting, facts, arguments, and criticisms and try to communicate them not only in columns (which, yes, he should continue!) but also with memes, videos, photos, and the wealth of new tools we now have to communicate with and inform the public.
  12. Stop funding fake news. Google and Facebook have taken steps in the right direction to pull advertising and thus financial support (and motivation) for fake-news sites. Bing, Apple, and programmatic advertising platforms must follow suit. Publishers, meanwhile, should consider more carefully the consequences of promoting content — and sharing in revenue — from dubious sources distributed by the likes of Taboola and Outbrain.
  13. Support white-hat media hacking. The platforms should open themselves up to help from developers to address the problems we outline here. Look at what a group of students did in the midst of the fake-news brouhaha to meet the key goals we endorse: bringing more information to users about the sources of what they read and share. (GitHub here.) We urge the platforms to open up APIs and provide other help to developers, and we urge funders to support work to improve not only the quality of discourse online but the quality of civic discourse and debate in society.
  14. Hire editors. We strongly urge the platforms to hire high-level journalists inside their organizations not to create content, not to edit, not to compete with the editors outside but instead to bring a sense of public responsibility to their companies and products; to inform and improve those products; to explain journalism to the technologists and technology to the journalists; to enable collaboration with news organizations such as we describe here; and foremost to help improve the experience for users. This is not a business-development function: deal-making. Nor is this a PR function: messaging. This sensibility and experience needs to be embedded in the core function in every one of these platform companies: product.
  15. Collaborate in an organization to support the cause of truth; research and develop solutions; and educate platforms, media companies, and the public. This is ongoing work that won’t be done with a new feature or option or tweak in an algo. This is important work. We urge that the platforms, media companies, and universities band together to continue it in an organization similar to but distinct from and collaborating with the First Draft Coalition, which concentrates on improving news, and the Trust Project, which seeks to gather more signals of authority around news. Similarly, the Coral Project works on improving comments on news sites. We also see the need to work on improving the quality of conversation where it occurs, on platforms and on the web. This would be an independent center for discussion and work around all that we suggest here. Think of it as the Informed Conversation Project.
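
To make a few of these suggestions concrete, here are some rough sketches in Python. Every identifier, threshold, URL, and data structure in them is ours, invented for illustration; none of this is any platform’s actual API or policy.

First, the fact-checking metadata of suggestion 2. Schema.org’s ClaimReview markup, which Google began reading from fact-checking sites in 2016, is one existing vehicle for exactly this kind of signal. A minimal sketch of the record a fact-checker might publish (the claim and URLs are invented):

```python
import json

# A minimal ClaimReview record of the kind fact-checking sites embed in
# their pages. All specifics below are invented for illustration.
claim_review = {
    "@context": "https://schema.org",
    "@type": "ClaimReview",
    "url": "https://factchecker.example/millennial-map",  # hypothetical
    "datePublished": "2016-11-20",
    "author": {"@type": "Organization", "name": "Example Fact-Checker"},
    "claimReviewed": "Map shows how millennials voted in the 2016 election",
    "itemReviewed": {
        "@type": "Claim",
        "appearance": [{"@type": "CreativeWork",
                        "url": "https://social.example/viral-map-post"}],  # hypothetical
    },
    "reviewRating": {
        "@type": "Rating",
        "ratingValue": 1,   # on a 1 (false) to 5 (true) scale
        "bestRating": 5,
        "worstRating": 1,
        "alternateName": "False",
    },
}

# Embedded in a page as <script type="application/ld+json">, this is the
# kind of signal a platform's crawler could pick up and attach to the
# claim wherever it spreads.
print(json.dumps(claim_review, indent=2))
```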
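Next, the account signals of suggestions 1 and 3. A toy scoring function over account metadata; the field names and thresholds are ours, not any platform’s:

```python
from datetime import datetime, timedelta, timezone

def account_warnings(created_at, followers, posts_last_day):
    """Return human-readable signals a platform could display beside a post.

    The thresholds here are illustrative only, not any platform's policy.
    """
    warnings = []
    age = datetime.now(timezone.utc) - created_at
    if age < timedelta(days=1):
        warnings.append(f"Account created {age.total_seconds() / 3600:.0f} hours ago")
    if followers < 50:
        warnings.append(f"Only {followers} followers")
    if posts_last_day > 400:
        warnings.append(f"Posted {posts_last_day} times in the last day")
    return warnings

# The hypothetical brand-new account from suggestion 3: online three
# hours, 35 followers, posting constantly.
print(account_warnings(
    created_at=datetime.now(timezone.utc) - timedelta(hours=3),
    followers=35,
    posts_last_day=500))
```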
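For the track-back of suggestion 5, perceptual hashing is a standard technique for finding every copy of an image despite resizing and recompression; the open-source imagehash library implements several variants. A sketch (the file names are placeholders):

```python
from PIL import Image  # pip install Pillow imagehash
import imagehash

def near_duplicates(path_a: str, path_b: str, max_distance: int = 6) -> bool:
    """True if two images are perceptual near-duplicates.

    pHash survives resizing, recompression, and small edits, so a viral
    copy of a map and its original upload hash to nearly the same value;
    the Hamming distance between the hashes measures how close they are.
    """
    hash_a = imagehash.phash(Image.open(path_a))
    hash_b = imagehash.phash(Image.open(path_b))
    return hash_a - hash_b <= max_distance

# A platform could hash every uploaded image once, index the hashes, and
# look up any viral item to find its earliest appearance -- the origin.
print(near_duplicates("viral_copy.png", "original_poll_map.png"))  # placeholder files
```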
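The out-of-bubble recommendations of suggestion 6 could start as simply as ranking well-shared items from sources a user never reads (all the data structures here are invented):

```python
def out_of_bubble(sources_read, trending, k=3):
    """Recommend popular items from sources absent from a user's feed.

    sources_read: set of source names the user already follows.
    trending:     list of (source, headline, shares) tuples.
    Both are invented stand-ins for a platform's internal data.
    """
    unfamiliar = [item for item in trending if item[0] not in sources_read]
    # Rank by overall popularity so the suggestions are genuinely
    # "much talked about," not fringe.
    return sorted(unfamiliar, key=lambda item: item[2], reverse=True)[:k]

trending = [
    ("The Nation", "A column this reader would see anyway", 51000),
    ("National Review", "A much-talked-about column", 48000),
    ("Wall Street Journal", "An editorial", 32000),
]
for source, headline, shares in out_of_bubble({"The Nation", "Mother Jones"}, trending):
    print(f"{source}: {headline} ({shares:,} shares)")
```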
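And the corrections of suggestion 10 are, at bottom, a publish-and-subscribe problem: every share of a post is an implicit subscription to its later corrections. The skeleton (again, every structure here is invented):

```python
def fan_out_correction(post_id, correction, shares, deliver):
    """Send a correction to everyone who shared the original post.

    shares:  mapping of post_id -> users who reshared it.
    deliver: callable that delivers one message to one user.
    Both are stand-ins for whatever a real platform keeps internally.
    """
    for user in shares.get(post_id, []):
        deliver(user, f"Correction to a post you shared: {correction}")

shares = {"post-123": ["alice", "bob"]}
fan_out_correction(
    "post-123",
    "That blue map was one site's online poll, not election results.",
    shares,
    deliver=lambda user, msg: print(f"to {user}: {msg}"))
```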

We will bring our resources to the task. John Borthwick at Betaworks will help invest in and nurture startups that tackle these problems and opportunities. Jeff Jarvis at CUNY’s Graduate School of Journalism will play host to meetings where that is helpful and seek support to build the organization we propose above.

We do this mostly to solicit your suggestions for a vital task: better informing our conversations, our elections, and our society. (See another growing list of ideas here.) Pile on. Help out.

  • Robert Mehner

    Jeff – This is a great start! I was a bit worried about you there after the last TWiG episode, but it’s heartening to see how fast you engaged in a pragmatic approach that benefits everyone. It was a frustrating time for everyone involved, rest assured, but that does not mean that much good cannot come of it.

    I know that you will be back in good form on Tuesday next. I sense excitement in the words here and I look forward to hearing the TWIT gang’s response as well.

    Ideology aside, maybe this is the point in history where people start to take control of their government and media and become more insistent on a more civil approach to the process.

    Although we differ in opinion on some things political, we both share an interest in technology, and we both have the notion that, when all is said and done, technology has the ability to be utilized as a tool for the betterment and benefit of all mankind. I foresee a time when people look past governments and realize that people everywhere are largely in the same boat, albeit a big freaking boat located some 25,000 light-years from the center of the Milky Way, so, uh, let’s cut the crap and enjoy the ride.

    If there is any support that I can provide to the effort and my fellow, um, er, countrypeople (I guess that would be the proper term), then I will be happy to pile on.

    Sincerely,
    Rob M

  • TrylonSMR

    There may be an additional remedy for Google & Facebook’s fake-news challenge. Several years ago, Editor & Publisher proposed that legitimate news sources display their adherence to the Society of Professional Journalists’ Code of Ethics by creating an official “seal of news integrity” and displaying it on their websites. See
    http://www.editorandpublisher.com/columns/editorial-in-news-we-trust/. The seal would signify a news outlet’s “moral and professional obligation to report objectively”. Google & Facebook should give the idea serious consideration. Once this kind of seal is adopted by major news sites, Facebook and Google could code their news results to automatically highlight such legitimate sites, alerting users that they are reading a credible article – and alerting them when they’re not. Though the development and implementation of such a system would take time, coordination, and effort, the dynamics of the 2016 election require that the news industry, Facebook, and Google work diligently to deliver a solution that protects the integrity of journalism and an informed citizenry.

  • Robert Mehner

    Thanks for those links, guys. I had a chance to peruse the sites and started to get a bit down over what I now know is what you mean by fake news. I do not use Facebook, and I probably have a Twitter account that is still in its original packaging somewhere around here, but The Coral Project has provided me with the lift I needed so as to not spoil my Friday night. They have struck a nice balance there that garnered both my full attention and my admiration.

    With the tagline “Because journalism needs everyone,” I immediately felt welcome and will defer to their good sense in the matter (that I, too, am needed), but I will remain guarded until I learn more… so I go on.

    It’s the about page where they really get me, and here’s why:

    They use words and phrases that express concepts and goals that are recurrent themes of importance to me and, I would venture to say, to others as well.
    “Journalists and communities” – I am of the latter, jsyk.

    Other keywords that grab me are “penchant” – I have a few of those!
    “passionate” – Now you’re talking!

    And my favorite from Sydette Harry, “I am an avid internet commenter on anywhere that will give me a password” and yes I did lol with that one. Had to.

    I share some commonality with Emma Grdina as well, in that she is interested in “using design to create engaging online spaces that people look forward to visiting,” and I use my architectural software and/or SketchUp to collaborate with clients to design and build living spaces in the real world. And I swear that everybody who ever worked for me got paid every dollar that we negotiated – sometimes more.

    Now that I feel better I kind of feel bad for the people that are compelled to send out the fake news and such. What a shame.

    I became very much a believer in the open-source software model and the community after I went through the agony of installing TurboLinux 6 on an old Pentium 200 with a 1.2-gig hard drive. It took the better part of a week to configure the dang video card, but I did succeed, and I still love my Linux boxes to this day.

  • Pingback: There’s always been fake news — get over it (and fix it) already | ARTS & FARCES internet

  • Les2011

    This article is an excellent start. Idea: viral retractions.
    Facebook, etc., should have a tool you can use to send a retraction to everyone who saw the fake news story, including forwarded messages.

  • bruce lancaster

    Now that we are going to crack down on fake news, what are you going to do, Jeff? No one buys books anymore, so peddling your fiction and propaganda is going to become difficult. Are you even qualified to flip burgers? Probably not… guess you’ll have to go on welfare and continue your bloodsucking leech ways… You are a waste of human skin. You contribute exactly zero to society.

  • PeterMcPumpkinPhD

    Fake news like Sandy Hoax? The Boston Smoke Show? 9/11? Orlando?
    Government propaganda is the fake news, son.

  • David Green

    Glad you got there in the end. Totally agree with all these recommendations. Shame the author didn’t see them coming when doing his best to undermine established media in favor of “citizen journalism,” aka anything somebody fancies pumping onto the internet without verification. The problem with integrous news is that somebody has to give it the seal of integrity. Now that, in a post-fact world, we don’t believe anyone can be trusted to do that other than our echo-chamber selves, we are royally fucked. So get thinking.

  • Lucas_Cioffi

    Hello Jeff, we met at the Politics Online conference about nine years ago.

    A friend nudged me to build an app that helps people get out of their own filter bubbles (if they want to). I’m wondering if there is an organization you know of that could use something like this.

    In this first release, I’m taking the editorials and opinion articles from 10 of the top newspapers, putting them side-by-side, and running them through IBM’s sentiment analysis tool: http://opposingviews.herokuapp.com/ (herokuapp is a platform for software developers to stage their websites).

    Could something like this be helpful in the efforts you mention in the blog post above? I guess I could create a second page which includes the more partisan websites. I could also add a layer of commenting, but that would have to be done carefully to create an enjoyable user experience.

    I welcome suggestions from Jeff and anyone else. Thank you!