Posts about Facebook

Moral Authority as a Platform

[See my disclosures below.*]

Since the election, I have been begging the platforms to be transparent about efforts to manipulate them — and thus the public. I wish they had not waited so long, until they were under pressure from journalists, politicians, and prosecutors. I wish they would realize the imperative to make these decisions based on higher responsibility. I wish they would see the need and opportunity to thus build moral authority.

Too often, technology companies hide behind the law as a minimal standard. At a conference in Vienna called Darwin’s Circle, Palantir CEO Alexander Karp (an American speaking impressive German) told Austrian Chancellor Christian Kern that he supports the primacy of the state and that government must set moral standards. Representatives of European institutions were pleasantly surprised not to be challenged with Silicon Valley libertarian dogma. But as I thought about it, I came to see that Karp was copping out, delegating his and his company’s ethical responsibility to the state.

At other events recently, I’ve watched journalists quiz representatives of platforms about what they reveal about manipulation and also what they do and do not distribute and promote on behalf of the manipulators. Again I heard the platforms duck under the law — “We follow the laws of the nations we are in,” they chant — while the journalists pushed them for a higher moral standard. So what is that standard?

Transparency should be easy. If Facebook, Twitter, and Google had revealed that they were the objects of Russian manipulation as soon as they knew it, then the story would have been Russia. Instead the story is the platforms.

I’m glad that Mark Zuckerberg has said that in the future, if you see a political ad in your feed, you will be able to link to the page or user that bought it. I’d like all the platforms to go further:

  • First, internet platforms should make every political ad available for public inspection, setting a standard that goes far beyond the transparency required of political advertising on broadcast and certainly beyond what we can find out about dark political advertising in direct mail and robocalls. Why shouldn’t the platforms lead the way?
  • Second, I think it is critical that the platforms reveal the targeting criteria used for these political ads so we can see what messages (and lies and hate) are aimed at whom.
  • Third, I’d like to see all this data made available to researchers and journalists so the public — the real target of manipulation — can learn more about what is aimed at them. (A sketch of what one such disclosure record might contain follows this list.)
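To make that concrete, here is a minimal sketch, in Python, of what a single record in such a public ad archive might look like. Every field name below is my own assumption, not any platform’s actual schema; it simply captures the three asks above: the ad itself, who paid for it, and how it was targeted.

```python
# A minimal sketch of a single record in a hypothetical public ad archive.
# Every field name is an assumption, not any platform's actual schema.
import json
from dataclasses import asdict, dataclass, field

@dataclass
class PoliticalAdRecord:
    ad_id: str
    sponsor_page: str                 # the page or user that bought the ad
    paid_by: str                      # the disclosed funding entity
    creative_text: str                # the copy shown to users
    spend_usd_range: tuple            # reported as a range, e.g. (1000, 5000)
    impressions_range: tuple
    targeting: dict = field(default_factory=dict)  # what was aimed at whom

record = PoliticalAdRecord(
    ad_id="example-001",
    sponsor_page="Example Advocacy Group",
    paid_by="Example PAC",
    creative_text="A claim voters deserve to see and to check.",
    spend_usd_range=(1_000, 5_000),
    impressions_range=(50_000, 100_000),
    targeting={"geo": ["Wisconsin", "Michigan"], "age": "45-65",
               "interests": ["immigration", "gun rights"]},
)

# Published as open data, records like this could be queried directly by
# researchers and journalists.
print(json.dumps(asdict(record), indent=2))
```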

The reason to do this is not just to avoid bad PR or merely to follow the law, to meet minimal expectations. The reason to do all this is to establish public responsibility commensurate with the platforms’ roles as the proprietors of so much of the internet and thus the future.

In What Would Google Do?, I praised the Google founders’ admonition to their staff — “Don’t be evil” — as a means to keep the company honest. The cost of doing evil in business has risen as customers have gained the ability to talk about a company and as anyone could move to a competitor with a click. But that, too, was a minimal standard. I now see that Google — and its peers — should have evolved to a higher standard:

“Do good. Be good.”

I don’t buy the arguments of cynics who say it is impossible for a corporation to be anything other than greedy and evil and that we should give up on them. I believe in the possibility and wisdom of enlightened self-interest and I believe we can hold these companies to an expectation of public spirit if not benevolence. I also take Zuck at his word when he asks forgiveness “for the ways my work was used to divide people rather than bring us together,” and vows to do better. So let us help him define better.

The caveats are obvious: I agree with the platforms that we do not want them to become censors and arbiters of right v. wrong; to enforce prohibitions determined by the lowest-common-denominators of offensiveness; to set precedents that will be exploited by authoritarian governments; to make editorial judgments.

But doing good and being good as a standard led Google to its unsung announcement last April that it would counteract manipulation of search ranking by taking account of the reliability, authority, and quality of sources. Thus Google took the side of science over crackpot conspirators, because it was the right thing to do. (But then again, I just saw that AlterNet complains that it and other advocacy and radical sites are being hit hard by this change. We need to make clear that fighting racism and hate is not to be treated like spreading racism and hate. We must be able to have an open discussion about how these standards are being executed.)

Doing good and being good would have led Facebook to transparency about Russian manipulation sooner.

Doing good and being good would have led Twitter to devote resources to understanding and revealing how it is being used as a tool of manipulation — instead of merely following Facebook’s lead and disappointing Congressional investigators. More importantly, I believe a standard of doing good and being good would lead Twitter to set a higher bar of civility and take steps to stop the harassment, stalking, impersonation, fraud, racism, misogyny, and hate directed at its own innocent users.

Doing good and being good would also lead journalistic institutions to examine how they are being manipulated, how they are allowing Russians, trolls, and racists to set the agenda of the public conversation. It would lead us to decide what our real job is and what our outcomes should be in informing productive and civil civic conversation. It would lead us to recognize new roles and responsibilities in convening communities in conflict into uncomfortable but necessary conversation, starting with listening to those communities. It should lead us to collaborate with and set an example for the platforms, rather than reveling in schadenfreude when they get in trouble. It should also lead us all — media companies and platforms alike — to recognize the moral hazards embedded in our business models.

I don’t mean to oversimplify even as I know I am. I mean only to suggest that we must raise up not only the quality of public conversation but also our own expectations of ourselves in technology and media, of our roles in supporting democratic deliberation and civil (all senses of the word) society. I mean to say that this is the conversation we should be having among ourselves: What does it mean to do and be good? What are our standards and responsibilities? How do we set them? How do we live by them?

Building and then operating from that position of moral authority becomes the platform more than the technology. See how long it is taking news organizations to learn that they should be defined not by their technology — “We print content” — but instead by their trust and authority. That must be the case for technology companies as well. They aren’t just code; they must become their missions.


* Disclosure: The News Integrity Initiative, operated independently at CUNY’s Tow-Knight Center, which I direct, received funding from Facebook, the Craig Newmark Philanthropic Fund, and the Ford Foundation and support from the Knight and Tow foundations, Mozilla, Betaworks, AppNexus, and the Democracy Fund.

Real News

I’m proud that we at CUNY’s Graduate School of Journalism and the Tow-Knight Center just announced the creation of the News Integrity Initiative, charged with finding ways to better inform the public conversation and funded thus far with $14 million by nine foundations and companies, all listed on the press release. Here I want to tell its story.

This began after the election when my good friend Craig Newmark — who has been generously supporting work on trust in news — challenged us to address the problem of mis- and disinformation. There is much good work being done in this arena — from the First Draft Coalition, the Trust Project, Dan Gillmor’s work at ASU bringing together news literacy efforts, and the list goes on. Is there room for more?

I saw these needs and opportunities:

  • First, much of the work to date is being done from a media perspective. I want to explore this issue from a public perspective — not just about getting the public to read our news but more about getting media to listen to the public. This is the philosophy behind the Social Journalism program Carrie Brown runs at CUNY, which is guided by Jay Rosen’s summary of James Carey: “The press does not ‘inform’ the public. It is ‘the public’ that ought to inform the press. The true subject matter of journalism is the conversation the public is having with itself.” We must begin with the public conversation and must better understand it.
  • Second, I saw that the fake news brouhaha was focusing mainly on media and especially on Facebook — as if they caused it and could fix it. I wanted to expand the conversation to include other affected and responsible parties: ad agencies, brands, ad networks, ad technology, PR, politics, civil society.
  • Third, I wanted to shift the focus of our deliberations from the negative to the positive. In this tempest, I see the potential for a flight to quality — by news users, advertisers, platforms, and news organizations. I want to see how we can exploit this moment.
  • Fourth, because there is so much good work — and there are so many good events (I spent about eight weeks of weekends attending emergency fake news conferences) — we at the Tow-Knight Center wanted to offer to convene the many groups attacking this problem so we could help everyone share information, avoid duplication, and collaborate. We don’t want to compete with any of them, only to help them. At Tow-Knight, under the leadership of GM Hal Straus, we have made the support of professional communities of practice — so far around product development, audience development and membership, commerce, and internationalization — key to our work; we want to bring those resources to the fake news fight.

My dean and partner in crime, Sarah Bartlett, and I formulated a proposal for Craig. He quickly and generously approved it with a four-year grant.

And then my phone rang. Or rather, I got a Facebook message from the ever-impressive Áine Kerr, who manages journalism partnerships there. Facebook had recently begun working with fact-checking agencies to flag suspect content; it started its Journalism Project; and it held a series of meetings with news organizations to share what it is doing to improve the lot of news on the platform.

Áine said Facebook was looking to do much more in collaboration with others and that led to a grant to fund research, projects, and convenings under the auspices of what Craig had begun.

Soon, more funders joined: John Borthwick of Betaworks has been a supporter of our work since we collaborated on a call to cooperate against fake news. Mozilla agreed to collaborate on projects. Darren Walker at the Ford Foundation generously offered his support, as did the two funders of the center I direct, the Knight and Tow foundations. Brian O’Kelley, founder of AppNexus, and the Democracy Fund joined as well. More than a dozen additional organizations — all listed in the release — said they would participate as well. We plan to work with many more organizations as advisers, funders, and grantees.


Now let me get right to the questions I know you’re ready to tweet my way, particularly about one funder: Have I sold out to Facebook? Well, in the end, you will be the judge of that. For a few years now, I have been working hard to try to build bridges between the publishers and the platforms and I’ve had the audacity to tell both Facebook and Google what I think they should do for journalism. So when Facebook knocks on the door and says they want to help journalism, who am I to say I won’t help them help us? When Google started its Digital News Initiative in Europe, I similarly embraced the effort and I have been impressed at the impact it has had on building a productive relationship between Google and publishers.

Sarah and I worked hard in negotiations to assure CUNY’s and our independence. Facebook — and the other funders and participants present and future — are collaborators in this effort. But we designed the governance to assure that neither Facebook nor any other funder would have direct control over grants and to make sure that we would not be put in a position of doing anything we did not want to do. Note also that I am personally receiving no funds from Facebook, just as I’ve never been paid by Google (though I have had travel expenses reimbursed). We hope to also work with multiple platforms in the future; discussions are ongoing. I will continue to criticize and defend them as deserved.

My greatest hope is that this Initiative will provide the opportunity to work with Facebook and other platforms on reimagining news, on supporting innovation, on sharing data to study the public conversation, and on supporting news literacy broadly defined.


The work has already begun. A week and a half ago, we convened a meeting of high-level journalists and representatives from platforms (both Facebook and Google), ad agencies, brands, ad networks, ad tech, PR, politics, researchers, and foundations for a Chatham-House-rule discussion about propaganda and fraud (née “fake news”). We looked at research that needs to be done and at public education that could help.

The meeting ended with a tangible plan. We will investigate gathering and sharing many sets of signals about both quality and suspicion that publishers, platforms, ad networks, ad agencies, and brands can use — according to their own formulae — to decide not just what sites to avoid but better yet what journalism to support. That’s the flight to quality I have been hoping to see. I would like us to support this work as a first task of our new Initiative.

We will fund research. I want to start by learning what we already know about the public conversation: what people share, what motivates them to share it, what can have an impact on informing the conversation, and so on. We will reach out to the many researchers working in this field — danah boyd (read her latest!) of Data & Society, Zeynep Tufekci of UNC, Claire Wardle of First Draft, Duncan Watts and David Rothschild of Microsoft Research, Kate Starbird (who just published an eye-opening paper on alternative narratives of news) of the University of Washington, Rasmus Kleis Nielsen of the Reuters Institute, Charlie Beckett of POLIS-LSE, and others. I would like us to examine what it means to be informed so we can judge the effectiveness of our — indeed, of journalism’s — work.

We will fund projects that bring journalism to the public and the conversation in new ways.

We will examine new ways to achieve news literacy, broadly defined, and investigate the roots of trust and mistrust in news.

And we will help convene meetings to look at solutions — no more whining about “fake news,” please.

We will work with organizations around the world; you can see a sampling of them in the release and we hope to work with many more: projects, universities, companies, and, of course, newsrooms everywhere.

We plan to be very focused on a few areas where we can have a measurable impact. That said, I hope we also pursue the high ambition to reinvent journalism for this new age.

But we’re not quite ready. This has all happened very quickly. We are about to start a search for a manager to run this effort with a small staff to help with information sharing and events. As soon as we begin to identify key areas, we will invite proposals. Watch this space.

A Call for Cooperation Against Fake News

We — John Borthwick and Jeff Jarvis — want to offer constructive suggestions for what the platforms — Facebook, Twitter, Google, Instagram, Snapchat, WeChat, Apple News, and others — as well as publishers and users can do now and in the future to grapple with fake news and build better experiences online and more civil and informed discussion in society.

Key to our suggestions is sharing more information to help users make better-informed decisions in their conversations: signals of credibility and authority from Facebook to users, from media to Facebook, and from users to Facebook. Collaboration between the platforms and publishers is critical. In this post we focus on Facebook, Twitter, and Google search, for two reasons: first, simplicity; second, these are the platforms that matter most today.

We do not believe that the platforms should be put in the position of judging what is fake or real, true or false, as censors for all. We worry about creating blacklists. And we worry that circular discussions about what is fake and what is truth and whose truth is more truthy mask the fact that there are things that can be done today. We start from the view that almost all of what we do online is valuable and enjoyable but there are always things we can do to improve the experience and act more responsibly.

In that spirit, we offer these tangible suggestions for action and seek your ideas.

  1. Make it easier for users to report fake news, hate speech, harassment, and bots. Facebook does allow users to flag fake news but the function is buried so deep in a menu maze that it’s impossible to find; bring it to the surface. Twitter just added new means to mute harassment but we think it would also be beneficial if users could report false and suspicious accounts and the service could feed back that data in some form to other users (e.g., “20 of your friends have muted this account” or “this account tweets 500 times a day”). The same would be helpful for Twitter search, Google News, Google search, Bing search, and other platforms.
  2. Create a system for media to send metadata about their fact-checking, debunking, confirmation, and reporting on stories and memes to the platforms. It happens now: Mouse over fake news on Facebook and there’s a chance the related content that pops up below can include a news site or Snopes reporting that the item is false. Please systematize this: Give trusted media sources and fact-checking agencies a path to report their findings so that Facebook and other social platforms can surface this information to users when they read these items and — more importantly — as they consider sharing them. The Trust Project is working on getting media to generate such signals. Thus we can cut off at least some viral lies at the pass. The platforms need to give users better information and media need to help them. Obviously, the platforms can use such data from both users and media to inform their standards, ranking, and other algorithmic decisions in displaying results to users. (A rough sketch of what such a signal might look like follows this list.)
  3. Expand systems of verified sources. As we said, we don’t endorse blacklists or whitelists of sites and sources (though when lists of sites are compiled to support a service — as with Google News — we urge responsible, informed selection). But it would be good if users could know the creator of a post has been online for only three hours with 35 followers or if this is a site with a known brand and proven track record. Twitter verifies users. We ask whether Twitter, Facebook, Google, et al could consider means to verify sources as well so users know the Denver Post is well-established while the Denver Guardian was just established.
  4. Make the brands of those sources more visible to users. Media have long worried that the net commoditizes their news such that users learn about events “on Facebook” or “on Twitter” instead of “from the Washington Post.” We urge the platforms, all of them, to more prominently display media brands so users can know and judge the source — for good or bad — when they read and share. Obviously, this also helps the publishers as they struggle to be recognized online.
  5. Track back to original sources of news items and memes. We would like to see these technology platforms use their considerable computing power to help track back and find the source of news items, photos and video, and memes. For example, one of us saw an almost-all-blue map with 225K likes that was being passed around as evidence that millennials voted for Clinton when, in fact, at its origin the map was labeled as the results of a single, liberal site’s small online poll. It would not be difficult for any platform to find all instances of that graphic and pinpoint where it began. The source matters! Similarly, when memes are born and bred, it would be useful to know whether one or another started at a site with a certain frog as an avatar. While this is technically complicated, it’s far less complicated than the facial recognition that social platforms have today. (A rough sketch of one simple matching approach follows this list.)
  6. Address the echo-chamber problem with recommendations from outside users’ conversational spheres. We understand why Facebook, Twitter, and others surface so-called trending news: not only to display a heat map but also to bring serendipity to users, to show them what their feeds might not. We think there are other, perhaps better, ways to do this. Why not be explicit about the filter-bubble problem and present users with recommended items, accounts, and sources that do *not* usually appear in their feeds, so a reader of The Nation sees a much-talked-about column from the National Review, and so a Clinton voter can begin — just begin — to connect with and perhaps better understand the worldview of a Trump voter? Users will opt in or out but let’s give them the chance to choose.
  7. Recognize the role of autocomplete in search requests to spread impressions without substance. Type “George Soros is…” into a Google search box and you’re made to wonder whether he’s dead. He’s not. We well understand the bind the platforms are in: They are merely reflecting what people are asking and searching for. Google has been threatened with suits over what that data reveals. We know it is impossible and undesirable to consider editing autocomplete results. However, it would be useful to investigate whether even in autocomplete, more information could be surfaced to the user (e.g., “George Soros is dead” is followed by an asterisk and a link to its debunking). These are the kinds of constructive discussions we would like to see, rather than just volleys of complaint.
  8. Recognize how the design choices can surface information that might be better left under the rock. We hesitate to suggest doing this, but if you dare to search Google for the Daily Stormer, the extended listing for the site at the moment we write this includes a prominent link to “Jewish Problem: Jew Jake Tapper Triggered by Mention of Black …” Is that beneficial, revealing the true nature of the site? Or is that deeper information better revealed by getting quicker to the next listing in the search results: Wikipedia explaining that “The Daily Stormer is an American neo-Nazi and white supremacist news and commentary website. It is part of the alt-right movement …”? These design decisions have consequences.
  9. Create reference sites to enable users to investigate memes and dog whistles. G’bless Snopes; it is the cure for that email your uncle sends that has been forwarded a hundred times. Bless also Google for making it easy to search to learn the meanings of Pepe the frog and Wikipedia for building entries to explain the origins. We wonder whether it would be useful for one of these services or a media organization to also build a constantly updated directory of ugly memes and dog whistles to help those users — even if few — who will look into what is happening so they can pass it on. Such a resource would also help media and platforms recognize and understand the hidden meanings and secret codes their platforms are being used to spread.
  10. Establish the means to subscribe to and distribute corrections and updates. We would love it if we could edit a mistaken tweet. We understand the difficulty of that, once tweets have flown the nest to apps and firehoses elsewhere. But imagine you share a post you later find out to be false and then imagine if you could at least append a link to the tweet in the archive. Better yet, imagine if you could send a followup message that alerts people who shared your tweet, Facebook post, or Instagram image to the fact that you were mistaken. Ever since the dawn of blogging, we’ve wished for such a means to subscribe to and send updates, corrections, and alerts around what we’ve posted. It is critical that Twitter as well as the other platforms do everything they can to enable responsible users who want to correct their mistakes to do so.
  11. Media must learn and use the lesson of memes to spread facts over lies. Love ’em or hate ’em, meme-maker Occupy Democrats racked up 100 to 300 million impressions a week on Facebook, according to its cofounder, by providing users with the social tokens to use in their own conversations, the thing they share because it speaks for them. Traditional media should learn a lesson from this: that they must adapt to their new reality and bring their journalism — their facts, fact-checking, reporting, explanation, and context — to the public where the public is, in a form and voice that is appropriate to the context and use of each platform. Media cannot continue to focus only on their old business model, driving traffic back to their websites (that notion sounds more obsolete by the day). So, yes, we will argue that, say, Nick Kristof should take some of his important reporting, facts, arguments, and criticisms and try to communicate them not only in columns (which, yes, he should continue!) but also with memes, videos, photos, and the wealth of new tools we now have to communicate with and inform the public.
  12. Stop funding fake news. Google and Facebook have taken steps in the right direction to pull advertising and thus financial support (and motivation) for fake-news sites. Bing, Apple, and programmatic advertising platforms must follow suit. Publishers, meanwhile, should consider more carefully the consequences of promoting content — and sharing in revenue — from dubious sources distributed by the likes of Taboola and Outbrain.
  13. Support white-hat media hacking. The platforms should open themselves up to help from developers to address the problems we outline here. Look at what a group of students did in the midst of the fake-news brouhaha to meet the key goals we endorse: bringing more information to users about the sources of what they read and share. (Github here.) We urge the platforms to open up APIs and provide other help to developers and we urge funders to support work to improve not only the quality of discourse online but the quality of civic discourse and debate in society.
  14. Hire editors. We strongly urge the platforms to hire high-level journalists inside their organizations not to create content, not to edit, not to compete with the editors outside but instead to bring a sense of public responsibility to their companies and products; to inform and improve those products; to explain journalism to the technologists and technology to the journalists; to enable collaboration with news organizations such as we describe here; and foremost to help improve the experience for users. This is not a business-development function: deal-making. Nor is this a PR function: messaging. This sensibility and experience needs to be embedded in the core function in every one of these platform companies: product.
  15. Collaborate in an organization to support the cause of truth; research and develop solutions; and educate platforms, media companies, and the public. This is ongoing work that won’t be done with a new feature or option or tweak in an algo. This is important work. We urge that the platforms, media companies, and universities band together to continue it in an organization similar to but distinct from and collaborating with the First Draft Coalition, which concentrates on improving news, and the Trust Project, which seeks to gather more signals of authority around news. Similarly, the Coral Project works on improving comments on news sites. We also see the need to work on improving the quality of conversation where it occurs, on platforms and on the web. This would be an independent center for discussion and work around all that we suggest here. Think of it as the Informed Conversation Project.
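To make suggestion 2 a bit more concrete, here is a rough sketch of the kind of fact-check signal a newsroom could send and how a platform might surface it when a user is about to share a flagged link. The record loosely echoes the ClaimReview fields fact-checkers already publish via schema.org; everything else (the registry, the URLs, the function) is hypothetical, not any platform’s real API.

```python
# A rough sketch, with hypothetical names, of a fact-check signal and a
# share-time check. Not any platform's real API.
from typing import Optional

FACT_CHECK_SIGNALS = {
    # keyed by the URL of the item that was checked
    "http://hoax.example/story": {
        "claim_reviewed": "The headline claim made by the item",
        "rating": "False",
        "reviewed_by": "Example Fact-Checking Desk",
        "review_url": "http://factcheck.example/debunk",
    },
}

def warning_before_share(url: str) -> Optional[str]:
    """If a trusted fact-checker has flagged this URL, return a notice the
    platform could show the user before the share goes out."""
    signal = FACT_CHECK_SIGNALS.get(url)
    if signal is None:
        return None
    return (f"{signal['reviewed_by']} rated this item \"{signal['rating']}\": "
            f"{signal['review_url']}")

print(warning_before_share("http://hoax.example/story"))
```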

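And for suggestion 5, here is a sketch of the sort of matching that tracking an image back to its source relies on: a simple “average hash” lets a platform spot other copies of the same graphic (that blue map, say) and compare timestamps to find the earliest post. Production systems use far more robust perceptual hashing and indexing; this is just the idea in miniature, with made-up file names.

```python
# A sketch of duplicate-image matching via a simple average hash.
# File names are hypothetical. Requires Pillow (PIL).
from PIL import Image

def average_hash(path: str, size: int = 8) -> int:
    """Shrink to a size x size grayscale image, then set one bit per pixel
    according to whether it is brighter than the mean."""
    img = Image.open(path).convert("L").resize((size, size), Image.LANCZOS)
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for pixel in pixels:
        bits = (bits << 1) | (1 if pixel > mean else 0)
    return bits

def hamming_distance(a: int, b: int) -> int:
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")

# Two posts likely share the same image if their hashes are a few bits apart.
if hamming_distance(average_hash("post_a.png"), average_hash("post_b.png")) <= 5:
    print("Likely the same graphic; compare post timestamps to find the origin.")
```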
We will bring our resources to the task. John Borthwick at Betaworks will help invest in and nurture startups that tackle these problems and opportunities. Jeff Jarvis at CUNY’s Graduate School of Journalism will play host to meetings where that is helpful and seek support to build the organization we propose above.

We do this mostly to solicit your suggestions for a vital task: better informing our conversations, our elections, and our society. (See another growing list of ideas here.) Pile on. Help out.

My Facebook op-ed

Aftenposten asked me to adapt my Medium post about the Facebook napalm photo incident as an op-ed. Here it is in Norwegian. Here is the English text:


Facebook needs an editor — to stop Facebook from editing.

An editor might save Facebook from making embarrassing and offensive judgments about what will offend, such as its decision last week requiring writer Tom Egeland, Aftenposten editor Espen Egil Hansen, and then Norwegian Prime Minister Erna Solberg to take down a photo of great journalistic meaning and historic importance: Nick Ut’s image of Vietnamese girl Kim Phúc running from a 1972 napalm attack after tearing off her burning clothes. Only after Hansen wrote an eloquent, forceful, and front-page letter to Facebook founder Mark Zuckerberg did the service relent.

Facebook’s reflexive decision to take down the photo is a perfect example of what I would call algorithmic thinking, the mindset that dominates the kingdom that software built, Silicon Valley. Facebook’s technologists, from top down, want to formulate rules and then enable algorithms to enforce those rules. That’s not only efficient (who can afford the staff to make these decisions with more than a billion people posting every day?) but they also believe it’s fair, equally enforced for all. As they like to say in Silicon Valley, it scales.

The rule that informed the algorithm in this case was clear: If a photo portrays a child (check) who is naked (check) then the photo is rejected. The motive behind that rule could not be more virtuous: eliminating the distribution of child pornography. But in this case, of course, the photo of the naked girl did not constitute child pornography. No, the pornography here is a tool of war, which is what Ut’s photo so profoundly portrays.

Technology scales but life does not and that is a problem Facebook of all companies should recognize, for Facebook is the post-mass company. Mass media treat everyone the same because that’s what Gutenberg’s invention demands; the technology of printing scales by forcing media to publish the exact same product for thousands unto millions of readers. Facebook, on the other hand, does not treat us all alike. Like Google, it is a personal services company that gives every user a unique service, no two pages ever the same. The problem with algorithmic thinking, paradoxically, is that it continues the mass mindset, treating everyone who posts and what they post exactly the same, under a rule meant to govern every circumstance.

The solution to Facebook’s dilemma is to insert human judgment into its processes. Hansen is right that editors cannot live with Zuckerberg and company as master editor. Facebook would be wise to recognize this. It should treat editors of respected, quality news organizations differently and give them the license to make decisions. Facebook might want to consider giving editors an allocation of attention they can use to better inform their users. It should allow an editor of Hansen’s stature to violate a rule for a reason. I am not arguing for a class system, treating editors better than the masses. I am arguing only that recognizing signals of trust, authority, credibility, and quality will improve Facebook’s recommendations and service.
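Here is a minimal sketch of that argument in code, with hypothetical names throughout: the blanket rule stays, but a post from a trusted, verified news organization is routed to a human reviewer instead of being removed automatically. This is not Facebook’s actual system, only the shape of the change I am arguing for.

```python
# A minimal sketch, with hypothetical names: the blanket rule remains, but a
# post from a trusted publisher goes to human review rather than auto-removal.
TRUSTED_PUBLISHERS = {"aftenposten.no"}  # editors with proven track records

def moderate(post: dict) -> str:
    violates_rule = post.get("depicts_minor") and post.get("contains_nudity")
    if not violates_rule:
        return "allow"
    if post.get("publisher_domain") in TRUSTED_PUBLISHERS:
        # An editor is asserting journalistic or historic significance,
        # so escalate to a person rather than letting the rule decide alone.
        return "queue_for_human_review"
    return "remove"

print(moderate({
    "publisher_domain": "aftenposten.no",
    "depicts_minor": True,
    "contains_nudity": True,
    "claimed_significance": "historic war photograph",
}))  # -> queue_for_human_review
```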

When there is disagreement, and there will be, Facebook needs a process in place — a person: an editor — who can negotiate on the company’s behalf. The outsider needn’t always win; this is still Facebook’s service, brand, and company and in the end it has the right to decide what it distributes just as much as Hansen has the right to decide what appears in these pages. That is not censorship; it is editing. But the outsider should at least be heard: in short, respected.

If Facebook would hire an editor, would that not be the definitive proof that Facebook is what my colleagues in media insist it is: media? We in media tend to look at the world, Godlike, in our own image. We see something that has text and images (we insist on calling that content) with advertising (we call that our revenue) and we say it is media, under the egocentric belief that everyone wants to be like us.

Mark Zuckerberg dissents. He says Facebook is not media. I agree with him. Facebook is something else, something new: a platform to connect people, anyone to anyone, so they may do what they want. The text and images we see on Facebook’s pages (though, of course, it’s really just one endless page) is not content. It is conversation. It is sharing. Content as media people think of it is allowed in but only as a tool, a token people use in their conversations. Media are guests there.

Every time we in media insist on squeezing Facebook into our institutional pigeonhole, we miss the trees for the forest: We don’t see that Facebook is a place for people — people we need to develop relationships with and learn to serve in new ways. That, I argue, is what will save journalism and media from extinction: getting to know the needs of people as individuals and members of communities and serving them with greater relevance and value as a result. Facebook could help us learn that.

An editor inside Facebook could explain Facebook’s worldview to journalists and explain journalism’s ethics, standards, and principles to Facebook’s engineers. For its part, Facebook still refuses to fully recognize the role it plays in helping to inform society and the responsibility — like it or not — that now rests on its shoulders. What are the principles under which Facebook operates? It is up to Mark Zuckerberg to decide those principles but an editor — and an advisory board of editors — could help inform his thinking. Does Facebook want to play its role in helping to better inform the public or just let the chips fall where they may (a question journalists also need to grapple with as we decide whether we measure our worth by our audience or by our impact)? Does Facebook want to enable smart people — not just editors but authors and prime ministers and citizens — to use its platform to make brave statements about justice? Does Facebook want to have a culture in which intelligence — human intelligence — wins over algorithms? I think it does.

So Facebook should build procedures and hire people who can help make that possible. An editor inside Facebook could sit at the table with the technologists, product, and PR people to set policies that will benefit the users and the company. An editor could help inform its products so that Facebook does a better job of enlightening its users, even fact-checking users when they are about to share the latest rumor or meme that has already been proven false through journalists’ fact-checking. An editor inside Facebook could help Facebook help journalism survive by informing the news industry’s strategy, teaching us how we must go to our readers rather than continuing to make our readers come to us.

But an editor inside Facebook should not hire journalists, create content, or build a newsroom. That would be a conflict of interest, not to mention a bad business decision. No, an editor inside Facebook would merely help make a better, smarter Facebook for us all.

Who should do that job? Based on his wise letter to Mark Zuckerberg, I nominate Mr. Hansen.

Dear Mark Zuckerberg

Dear Mark Zuckerberg

I’ve said it before and I’ll say it again: Facebook needs an editor — to stop Facebook from editing. It needs someone to save Facebook from itself by bringing principles to the discussion of rules.

There is actually nothing new in this latest episode: Facebook sends another takedown notice over a picture with nudity. What is new is that Facebook wants to take down an iconic photo of great journalistic meaning and historic importance and that Facebook did this to a leading editor, Espen Egil Hansen, editor-in-chief of Aftenposten, who answered forcefully:

The media have a responsibility to consider publication in every single case. This may be a heavy responsibility. Each editor must weigh the pros and cons. This right and duty, which all editors in the world have, should not be undermined by algorithms encoded in your office in California…. Editors cannot live with you, Mark, as a master editor.

Facebook has found itself — or put itself — in other tight spots lately, most recently the trending topics mess, in which it hired and then fired human editors to fix a screwy product.

In each case, my friends in media point their fingers, saying that Facebook is media and thus needs to operate under media’s rules, which my media friends help set. Mark Zuckerberg says Facebook is not media.

On this point, I will agree with Zuckerberg (though this isn’t going to get him off the hook). As I’ve said before, we in media tend to look at the world, Godlike, in our own image. We see something that has text and images (we insist on calling that content) with advertising (we call that our revenue) and we say it is media, under the egocentric belief that everyone wants to be like us.

No, Facebook is something else, something new: a platform to connect people, anyone to anyone, so they may do whatever they want. The text and images we see on Facebook’s pages (though, of course, it’s really just one endless page, a different page for every single user) is not content. It is conversation. It is sharing. Content as we media people think of it is allowed in but only as a tool, a token people use in their conversations. We are guests there.

Every time we in media insist on squeezing Facebook into our institutional pigeonhole, we miss the trees for the forest: We miss understanding that Facebook is a place for people, people we need to develop relationships with and learn to serve in new ways. It’s not a place for content.

For its part, Facebook still refuses to acknowledge the role it has in helping to inform society and the responsibility — like it or not — that now rests on its shoulders. I’ve written about that here and so I’ll spare you the big picture again. Instead, in these two cases, I’ll try to illustrate how an editor — an executive with an editorial worldview — could help advise the company: its principles, its processes, its relationships, and its technology.

The problem at work here is algorithmic thinking. Facebook’s technologists, top down, want to formulate a rule and then enable an algorithm to enforce that rule. That’s not only efficient (who needs editors and customer-service people?) but they also believe it’s fair, equally enforced for all. It scales.

Except life doesn’t scale and that’s a problem Facebook of all companies should recognize as it is the post-mass-media company, the company that does not treat us all alike; like Google, it is a personal-services company that gives every user a unique service and experience. The problem with algorithmic thinking, paradoxically, is that it continues a mass mindset.

In the case of Aftenposten and the Vietnam napalm photo, Hansen is quite right that editors cannot live with Mark et al as master editor. Facebook would be wise to recognize this. It should treat editors of respected, quality news organizations differently and give them the license to make decisions. Here I argued that Facebook might want to consider giving editors an allocation of attention they can use to better inform their users. In this current case, the editor can decide to post something that might violate a rule for a reason; that’s what editors do. I’m not arguing for a class system, treating editors better. I’m arguing that recognizing signals of trust, authority, credibility will improve Facebook’s recommendations and service. (As a search company, Google understands those signals better and this is the basis of the Trust Project Google is helping support.)

When there is disagreement, and there will be, Facebook needs a process in place — a person: an editor — who can negotiate on the company’s behalf. The outside editor needn’t always win; this is still Facebook’s service, brand, and company. But the outside editor should be heard: in short, respected.

These decisions are being made now on two levels: The rule in the algorithm spots a picture of a naked person (check) who is a child (check!) and kills it (because naked child equals child porn). The rule can’t know better. The algorithm should be aiding a human court of appeal that understands when the rule is wrong. On the second level, the rule is informed by the company’s brand protection: “We can’t ever allow a naked child to appear here.” We all get that. But there is a third level Facebook must have in house, another voice at the table when technology, PR, and product come together: a voice of principle.

What are the principles under which Facebook operates? Facebook should decide but an editor — and an advisory board of editors — could help inform those principles. Does Facebook want to play its role in helping to better inform the public or just let the chips fall where they may (something journalists also need to grapple with)? Does it want to enable smart people — not just editors — to make brave statements about justice? Does it want to have a culture in which intelligence — human intelligence — rules? I think it does. So build procedures and hire people who can help make that possible.

Now to the other case, trending topics. You and Facebook might remind me that here Facebook did hire people and that didn’t help; it got them in hot water when those human beings were accused of having human biases and the world was shocked!

Here the problem is not the algorithm, it is the fundamental conception of the Trending product. It sucks. It spits out crap. An algorithmist might argue that’s the public’s fault: we read crap so it gives us crap — garbage people in, garbage links out. First, just because we read it doesn’t mean we agree with it; we could be discussing what crap it is. Second, the world is filled with a constant share of idiots, bozos, and trolls; a bad algorithm listens to them, and these dogs of hell know how to game the algorithm to gain more influence over it. But third — the important part — if Facebook is going to recommend links, which Trending does, it should take care to recommend good links. If its algorithm can’t figure out how to do that, then kill it. This is a simple matter of quality control. Editors can sometimes help with that, too.
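To put that quality control in concrete terms, here is a sketch of a Trending ranker that weights raw engagement by a source-quality score built from the kinds of signals discussed earlier (track record, verification, fact-check flags). The scores, domains, and thresholds are invented for illustration; this is not how any platform actually ranks Trending today.

```python
# A sketch of quality-weighted trending: rank by shares times a source-quality
# score, after filtering out the lowest-quality sources. All values invented.
SOURCE_QUALITY = {
    "established-newspaper.example": 0.9,
    "known-hoax-site.example": 0.1,
}
DEFAULT_QUALITY = 0.3  # unknown sources start low but are not banned outright

def trending(candidates, minimum_quality=0.5, top_n=5):
    """candidates: list of dicts with 'url', 'domain', and 'shares'."""
    quality = lambda c: SOURCE_QUALITY.get(c["domain"], DEFAULT_QUALITY)
    vetted = [c for c in candidates if quality(c) >= minimum_quality]
    ranked = sorted(vetted, key=lambda c: c["shares"] * quality(c), reverse=True)
    return ranked[:top_n]

print(trending([
    {"url": "http://known-hoax-site.example/shock",
     "domain": "known-hoax-site.example", "shares": 90_000},
    {"url": "http://established-newspaper.example/report",
     "domain": "established-newspaper.example", "shares": 40_000},
]))  # raw share counts alone can no longer carry a junk link to the top
```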