Facebook is devoting impressive resources — months of time and untold millions of dollars — to developing systems of governance, of its users and of itself, raising fascinating questions about who governs whom according to what rules and principles, with what accountability. I’d like to ask similar questions about journalism.
I just spent a day at Facebook’s fallen skyscraper of a headquarters attending one of the last of more than two dozen workshops it has held to solicit input on its plans to start an Oversight Board. [Disclosures: Facebook paid for participants’ hotel rooms and I’ve raised money from Facebook for my school.] Weeks ago, I attended another such meeting in New York. In that time, the concept has advanced considerably. Most importantly, in New York, the participants were worried that the board would be merely an appeals court for disputes over content take-downs. Now it is clear that Facebook knows such a board must advise and openly press Facebook on bigger policy issues.
Facebook’s team showed the latest group of academics and others a near-final draft of a board charter (which will be released in a few weeks, in 20-plus languages). They are working on by-laws and finalizing legal structures for independence. They’ve thought through myriad details about how cases will rise (from users and Facebook) and be taken up by the board (at the board’s discretion); about conflict resolution and consensus; about transparency in board membership but anonymity in board decisions; about how members will be selected (after the first members join, the board will select its own members); about what the board will start with (content takedowns) and what it can tackle later (content demotion and taking down users, pages, groups — and ads); about how to deal with GDPR and other privacy regulation in sharing information about cases with the board; about how the board’s precedents will be considered but will not prevent the board from changing its mind; even about how other platforms could join the effort. They have grappled with most every structural, procedural, and legal question the 2,000 people they’ve consulted could imagine.
But as I sat there I saw something missing: the larger goal and soul of the effort and thus of the company and the communities it wants to foster. They have structured this effort around a belief, which I share, in the value of freedom of expression, and the need — recognized too late — to find ways to monitor and constrain that freedom when it is abused and used to abuse. But that is largely a negative: how and why speech (or as Facebook, media, and regulators all unfortunately refer to it: content) will be limited.
Facebook’s Community Standards — in essence, the statutes the Oversight Board will interpret and enforce and suggest to revise — are similarly expressed in the negative: what speech is not allowed and how the platform can maintain safety and promote voice and equality among its users by dealing with violations. In its Community Standards (set by Facebook and not by the community, by the way), there are nods to higher ends — sharing stories, seeing the world through others’ eyes, diversity, equity, empowerment. But then the Community Standards become a document about what users should not do. And none of the documents says much if anything about Facebook’s own obligations.
So in California, I wondered aloud what principles the Oversight Board would call upon in its decisions. More crucially, I wondered whom the board is meant to serve and represent: does it operate in loco civitas (in place of the community), publico (public), imperium (government and regulators), or Deus (God — that is, higher ethics and standards)? [Anybody with better schooling than I had, please correct my effort at Latin.]
I think these documents, this effort, and this company — along with other tech companies — need a set of principles that should set forth:
Higher goals. Why are people coming to Facebook? What do they want to create? What does the company want to build? What good will it bring to the world? Why does it exist? For whose benefit? Zuckerberg issued a new mission statement in 2017: “To give people the power to build community and bring the world closer together.” And that is fine as far as it goes, but that’s not very far. What does this mean? What should we expect Facebook to be? This statement of goals should be the North Star that guides not just the Oversight Board but every employee and every user at Facebook.
A covenant with users and the public in which Facebook holds itself accountable for its own responsibilities and goals. As an executive from another tech company told me, terms of service and community standards are written to regulate the behavior of users, not companies. Well, companies should put forth their own promises and principles and draw them up in collaboration with users (civitas), the public (publico), and regulators (imperium). And that gives government — as in the case of proposed French legislation — the basis for holding the company accountable.
I’ll explore these ideas further in a moment, but first let me address the elephant on my keyboard: whether Facebook and its founder and executives and employees have a soul. I’ve been getting a good dose of crap on Twitter the last few days from people who blithely declare — and others who retweet the declaration — that Zuckerberg is the most dangerous man on earth. I respond: Oh, come on. My dangerous-person list nowadays starts with Trump, Murdoch, Putin, Xi, Kim, Duterte, Orbán, Erdoğan, MBS…you get the idea. To which these people respond: But you’re defending Facebook. I will defend it and its founder from ridiculous, click-bait trolling that devalues the real danger our world is in today. I also criticize Facebook publicly and did at the meetings I attended there. Facebook has fucked up plenty lately and that’s why it needs oversight. At least they realize it.
When I defend internet platforms against what I see as media’s growing moral panic, irresponsible reporting, and conflict of interest, I’m defending the internet itself and the freedoms it affords from what I fear will be continuing regulation of our own speech and freedom. I don’t oppose regulation; I have been proposing what I see as reasonable regimes. But I worry about where a growing unholy alliance against the internet between the far right and technophobes in media will end.
That is why I attend meetings such as the ones that Facebook convenes and why I just spent two weeks in California meeting with both platform and newspaper executives, to try to build bridges and constructive relationships. That’s why I take Facebook’s effort to build its Oversight Board seriously, to hold them to account.
Indeed, as I sat in a conference room at Facebook hearing its plans, it occurred to me that journalism as a profession and news organizations individually would do well to follow this example. We in journalism have no oversight, having ousted most ombudsmen who tried to offer at least some self-reflection and -criticism (and having failed in the UK to come up with a press council that isn’t a sham). We journalists make no covenants with the public we serve. We refuse to acknowledge — as Facebook executives did acknowledge about their own company — our “trust deficit.”
We in journalism do love to give awards to each other. But we do not have a means to systematically identify and criticize bad journalism. That job has now fallen to, of all unlikely people, politicians, as Beto O’Rourke, Alexandria Ocasio-Cortez, and Julian Castro offer quite legitimate criticism of our field. It also falls to technologists, lawyers, and academics who have been appalled at, for example, The New York Times’ horrendously erroneous and dangerous coverage of Section 230, our best protection of freedom of expression on the internet in America. I’m delighted that CJR has hired independent ombudsmen for The Times, The Post, CNN, and MSNBC. But what about Fox and the rest of the field?
I’ve been wondering how one might structure an oversight board for journalism to take the place of all those lost ombudsmen, to take complaints about bad journalism, to deliberate thoughtful and constructive responses, and to build data about the journalistic performance and responsibility of specific outlets. That will be a discussion for another day, soon. But even with such a structure, journalism, too — and each news outlet — should offer covenants with the public containing their own promises and statements of higher goals. I don’t just mean following standards for behavior; I mean sharing our highest ambitions.
I think such covenants for Facebook (and social networks and internet platforms) and journalism would do well to start with the mission of journalism that I teach: to convene communities into respectful, informed, and productive conversation. Democracy is conversation. Journalism is — or should be — conversation. The internet is built for conversation. The institutions and companies that serve the public conversation should promise they will do everything in their power to serve and improve that conversation. So here is the beginning of the kind of covenant I would like to see from Facebook:
Facebook should promise to create a safe environment where people can share their stories with each other to build bridges to understanding and to make strangers less strange. (So should journalism.)
Facebook should promise to enable and empower new and diverse voices that have been deprived of privilege and power by existing, entrenched institutions. (Including journalism.)
Facebook should promise to build systems that reward positive, productive, useful, respectful behavior among communities. (So should journalism.)
Facebook should promise not to build mechanisms to polarize people and inflame conflict. (So should journalism.)
Facebook should promise to help inform conversations by providing the means to find reliable information. (Journalism should provide that information.)
Facebook should promise not to build its business upon and enable others to benefit from crass attempts to exploit attention. (So should the news and media industries.)
Facebook should warrant to protect and respect users’ privacy, agency, and dignity.
Facebook should recognize that malign actors will exploit weak systems of protection to drive people apart and so it should promise to guard against being used to manipulate and deceive. (So should journalism.)
Facebook should share data about its performance against these goals, about its impact on the public conversation, and about the health of that conversation with researchers. (If only journalism had such data to share.)
Facebook should build its business, its tools, its rewards, and its judgment of itself around new metrics that measure its contributions to the health and constructive vitality of the public conversation and the value it brings to communities and people’s lives. (So should journalism.)
Clearly, journalism’s covenants with the public should contain more: about investigating and holding power to account, about educating citizens and informing the public conversation, and more. That’s for another day. But here’s a start for both institutions. They have more in common than they know.
This post began with Beto O’Rourke’s lesson. Then I added Alexandria Ocasio-Cortez. And then Eddie Glaude Jr.’s.
Reporter: Is there anything in your mind the President can do to make this better? Beto O’Rourke: What do you think? You know the shit he’s been saying. He’s been calling Mexican immigrants rapists. I don’t know, members of the press, what the fuck? [Reporter tries to interrupt.] Hold on a second. You know, it’s these questions that you know the answers to. I mean, connect the dots about what he’s been doing in this country. He’s not tolerating racism; he’s promoting racism. He’s not tolerating violence; he’s inciting racism and violence in this country…. I don’t know what kind of question that is.
O’Rourke’s scolding of the press is well-deserved. Allow me to translate it into a few rules to report by.
Tell the truth. Speak the word. If you prevaricate, refusing to call what you see racism or what you hear lies, you give license to the public to do the same and give license to the racists and liars to get away with it.
Stop getting other people to say what you should. It’s a journalistic trick as old as pencils: Asking someone else about racism so you don’t have to say it yourself.
It is not your job to ask stupid questions. Like Beto, I’ve had it with the milquetoast journalistic strategy of asking obvious questions to which we know the answer because “that’s our job, we just ask questions.” Arguing that you are asking these questions in loco publico only insults the public we serve.
You are not a tape recorder. Repeating lies and hate without context, correction, or condemnation makes you an accessory to the crimes. That goes for racists’ manifestos as well as racists at press conferences.
Do not accept bad answers. Follow up your questions. Follow up other reporters’ questions. Just because you’ve checked off your question doesn’t mean your work here is done.
Listen. Do not come to the story with blanks ready to fill in the narrative you’ve already imagined and pitched. Listen first. Learn.
Be human. You are not separate from the community you serve; you are part of it. You are not objective; you have a worldview. You cannot hide that worldview; be transparent.
Be honest. The standard you work under as a journalist — the thing that separates your words from others’ — should be intellectual honesty. That is, report inconvenient truths.
Improve the world. You exist to serve the public conversation, not to incite conflict, not to pit sides against each other, not to make the world worse.
Finally, I’ll add: You’re not the public’s protector. If Beto says “what the fuck?” then I say report his words; spare us your asterisks.
We live in unusual times so usual methods will not suffice. We need new strategies to report on new dangers or we will be complicit in the result.
Moments after I posted this, I saw that Alexandria Ocasio-Cortez also offered excellent advice for journalists. Unusual times, indeed, when politicians know better how to do journalism than too many journalists. She tweeted:
Racism is the most important story of the day. It has been the most important story of the age in America but it was not the biggest story in news until now. That has happened only because we have an obvious racist in the White House and racists supporting him and now they cannot hide from the recognition and media cannot hide from covering the story. So take this good advice.
And then I saw Professor Eddie Glaude, Jr. on Nicolle Wallace’s MSNBC show deliver a vital, forceful, profound, brilliant lesson in racism in America. Please watch again and again.
[Disclosure: I raised money for my school from Facebook to aggregate signals of quality in news. I also have attended events convened by Google. I am independent of and receive no compensation personally from any technology company.]
Too many momentous decisions about the future of the internet and its regulation — as well as coverage in media — are being made on the basis of assumptions, fears, theories, myths, mere metaphors, isolated incidents, and hidden self-interest, not evidence. The discussion about the internet and the future should begin with questions and research and end with demonstrable facts, not with presumption or with what I fear most in media: moral panic. I will beg journalists to take on academics’ discipline of evidence over anecdote.
But first, let me praise an example of the kind of analysis we need. Axel Bruns, a professor at Queensland University of Technology, just presented an excellent paper at the International Association for Media and Communication Research conference in Madrid, sticking a pin in the idea of the filter bubble. He argues
that echo chambers and filter bubbles principally constitute an unfounded moral panic that presents a convenient technological scapegoat (search and social platforms and their affordances and algorithms) for a much more critical problem: growing social and political polarisation. But this is a problem that has fundamentally social and societal causes, and therefore cannot be solved by technological means alone. [My emphasis]
Based on his reading of available research, Bruns notes that these two metaphors — echo chamber and filter bubble — are not consistently defined, “making them moving targets in both public discourse and scholarly inquiry,” which also makes it impossible to “assess more systematically exactly how disconnected the denizens of such suspected echo chambers and filter bubbles really are.” In his upcoming book, Are Filter Bubbles Real?, Bruns will examine definitions of both metaphors and methodologies for measurement of their alleged impact.
In his paper, Bruns provides perspective and context, pointing out that well before the net, “different groups in society have always already informed themselves from different sources that suited their specific informational interests, needs, or literacies.” He asks: “Given that society and democracy have persisted nonetheless, should we even worry about them?” In short, the burden is on those who propagate these notions to answer the question: “What is new here, and how different is it from before?”
Further, Bruns points out that we live in a “complex and interwoven media ecology” and so it is foolhardy to argue that one factor in it — just Facebook, for example — is the direct cause of behavioral change. Too many rants about the impact of the internet in media ignore the impact of media. Wonder why.
As an academic, Bruns reads existing literature in search of evidence of filter bubbles and echo chambers in prior research. He doesn’t find much at all. Instead, he cites (with links here and full citations in Bruns’ paper):
Earlier studies of the bifurcated blog world 15 years ago uncovered “only mild echo chambers.”
The Pew Research Center found that Facebook users do not select friends based on political leaning and thus are exposed to other worldviews in social media.
Two studies looked at already divisive topics — abortion, vaccination, Obamacare, gun control — and found, of course, they were also divisive online, though non-political but debatable topics — Game of Thrones and food porn — did not lead to polarization online. Is divisiveness online the cause or the effect?
In sum, a half-dozen academics argue, “at present there is little empirical evidence that warrants any worries about filter bubbles.”
Yet in media, no end of stories still warn of filter bubbles. Though not all. Some journalists are reporting on studies that question the filter bubble. Good. A new study comes out and sometimes, it will get coverage. But that leads to another journalistic weakness in reporting academic studies: stories that take the latest word as the last word. Look at all the perennial, flip-flopping reports that wine will kill or save us. Journalists should do what academics do in their literature reviews: put the latest word in context. They should also do what, for example, Oxford’s Rasmus Kleis Nielsen does on Twitter, responding to assumptions with findings in research.
Now that we have tools like Google Scholar — and many scholarly (if, unfortunately, costly) databases — I urge reporters and editors to do their own academic literature reviews when a story is pitched or assigned, to make sure its premise is upheld by research thus far, to provide context and nuance, and to grapple with what will surely appear: contradictory information.
But I urge them to begin — as Bruns ends his paper — with questions before answers.
The central question now is what [people] do with such information when they encounter it: do they dismiss it immediately as running counter to their own views? Do they engage in a critical reading, turning it into material to support their own worldview, perhaps as evidence for their own conspiracy theories? Do they respond by offering counter-arguments, by vocally and even violently disagreeing, by making ad hominem attacks, or by knowingly disseminating all-out lies as ‘alternative facts’? More important yet, why do they do so? What is it that has so entrenched and cemented their beliefs that they are no longer open to contestation? This is the debate we need to have: not a proxy argument about the impact of platforms and algorithms, but a meaningful discussion about the complex and compound causes of political and societal polarisation. The ‘echo chamber’ and ‘filter bubble’ metaphors have kept us from pursuing that debate, and must now be put to rest.
These easy metaphors carry ill-defined presumptions that do not inform debate. Neither do terms that media love to appropriate and escalate. “Surveillance capitalism” is an extreme name for advertising cookies and the use of the word devalues the seriousness of actual surveillance by governments including my own. See also this very good commentary from Andrew Przybylski and Amy Orben of the Oxford Internet Institute, arguing that internet use is by no means “addiction.”
The state of media coverage of technology and society sucks. It sucked before by being utopian. It sucks now by being dystopian. I tire of the Damascene conversions of both former technologists (having safely cashed out) and of tech reporters who signal their virtue by distancing themselves from what they helped build or build up. I am disappointed that I never see media folk acknowledge their own conflict of interest about competing with the technology companies they cover and about their employers’ attempts to cash in political capital for the sake of protectionism against the platforms. I worry about the impact of this technology coverage on the future and freedoms of the net. (What interventions are being legislated based on emotional and vague concepts like filter bubble, echo chamber, surveillance, and addiction?) I worry, too, as Bruns does, that we are missing the real problem and real story: the roots of anger and polarization in society today. (It ain’t Twitter and you know it; start by examining racism.) I am angry to see journalists condescend to the public they serve, treating people as gullible fools who can be corrupted by a mere meme. I am even angrier to see journalists abandon social media and with it all the new voices who were never heard in mass media but now can speak. And I’m sad to see such simplistic, lazy, and poor quality coverage from my field.
Yes, of course, the technology companies have garnered power and wealth that merits close scrutiny. Yes, those companies fuck up and so I, too, am looking for useful regulatory regimes. But our coverage of society’s problems today should not begin and end on El Camino Real. We are too often covering the effect over the cause.
I wish both media and policymakers would follow the example of academics like Bruns (I use him just as an example; there are so many more). Begin with questions. Study the research that exists. Use data. Call for more research. Before making technology companies responsible for every modern ill — the definition of moral panic — make them instead responsible for sharing data to feed that research. And let that research concentrate not on technology and its impact on people — which too often gives people too little credit and agency. Instead let research and reporting look more carefully at how people are using the technology to have an impact on each other. Start by respecting those people and learning from them before condemning and dismissing them. Through fits and starts and missteps and mistakes — sometimes with, sometimes in spite of the companies involved — we the users are building a new society on the net. Watch, listen, and learn before criticizing, dismissing, and condemning. If it sounds like I want journalism to learn from anthropology, I do. More on that soon.
Around the world, news industry trade associations are corruptly cashing in their political capital — which they have because their members are newspapers, and politicians are scared of them — in desperate acts of protectionism to attack platform companies. The result is a raft of legislation that will damage the internet and in the end hurt everyone, including journalists and especially citizens.
As I was sitting in the airport leaving Newsgeist Europe, a convening for journalists and publishers [disclosure: Google pays for the venue, food, and considerable drink; participants pay their own travel], my Twitter feed lit up like the Macy’s fireworks as The New York Times reported — or rather, all but photocopied — a press release from the News Media Alliance (née Newspaper Association of America) contending that Google makes $4.7 billion a year from news, at the expense of news publishers.
The Times story itself is appalling as it swallowed the News Media Alliance’s PR whole, quoting people from the association and not including comment from Google until hours later. Many on Twitter were aghast at the poor journalism. I contacted Google PR, who said The Times did not reach out to the person who normally speaks on these matters or anyone in the company’s Washington office. Google sent me their statement:
These back of the envelope calculations are inaccurate as a number of experts are pointing out. The overwhelming number of news queries do not show ads. The study ignores the value Google provides. Every month Google News and Google Search drives over 10 billion clicks to publishers’ websites, which drive subscriptions and significant ad revenue. We’ve worked very hard to be a collaborative and supportive technology and advertising partner to news publishers worldwide.
The “study” upon which The Times (and others) relied is, to say the least, specious. No, it’s humiliating. I want to dispense with its fallacies quickly — to get to my larger point, about the danger legacy news publishers are posing to the future of news and the internet — and that won’t be hard. The study collapses in its second paragraph:
Google has emerged as a major gateway for consumers to access news. In 2011, Google Search combined with Google News accounted for the majority (approximately 75%) of referral traffic to top news sites. Since January 2017, traffic from Google Search to news publisher sites has risen by more than 25% to approximately 1.6 billion visits per week in January 2018. Corresponding with consumers’ shift towards Google for news consumption, news is becoming increasingly important to Google, as demonstrated by an increase in Google searches about news.
And that, ladies and gentlemen, is great news for news. For as anyone under the age of 99 understands, Google sends readers to sites based on links from search and other products. That Google is emphasizing news and currency more is good for publishers, as that sends them readers. (That 10-billion-click number Google cited above is eight years old and so I have little doubt it is much higher now thanks to all its efforts around news.)
The problem has long been that publishers aren’t competent at exploiting the full value of these clicks by creating meaningful and valuable ongoing relationships with the people sent their way. So what does Google do? It tries to help publishers by, for example, starting a subscription service that drives more readers to easily subscribe — and join and contribute — to news sites directly from Google pages. The NMA study cites that subscription service as an example of Google emphasizing news and by implication exploiting publishers. It is the opposite. Google started the subscription service because publishers begged for it — I was in the room when they did — and Google listened. The same goes for most every product change the study lists in which Google emphasizes news more. That helps publishers. The study then uses ridiculously limited data (including, crucially, an offhand and often disputed remark 10 years ago by a then-exec at Google about the conceptual value of news) to make leaps over logic to argue that news is important on its services and thus Google owes news publishers a cut of its revenue (which Google gains by offering publishers’ former customers, advertisers, a better deal; it’s called competition). By this logic, Instagram should be buying cat food for every kitty in the land and Reddit owes a fortune to conspiracy theorists.
The real problem here is news publishers’ dogged refusal to understand how the internet has changed their world, throwing the paradigm they understood into the grinder. In the US and Europe, they still contend that Google is taking their “content,” as if quoting and linking to their sites is like a camera stealing their soul. They cannot grok that value on the internet is concentrated not in a product or property called content — articles, headlines, snippets, thumbnails, words — but instead in relationships. Journalism is no longer a factory valued by how many widgets and words it produces but instead by how much it accomplishes for people in their lives. I have tried here and here and in many a meeting in newsrooms and journalism conferences to offer this advice to news publishers — with tangible ideas about how to build a new journalistic business around relationships — but most prove incapable of shifting mindset and strategy beyond valuing content for content’s sake. Editors who do understand are often stymied by their short-sighted publishers and KPIs and soon quit.
Most legacy publishers have come up with no sustainable business strategy for a changing world. So they try to stop the world from changing by unleashing their trade associations [read: lobbyists] on capitals from Brussels to Berlin to London to Melbourne to Washington (see: the NMA’s effort to get an antitrust exemption to go after the platforms; its study was prepared to hand to Congress in time for its hearings this week). These trade associations attack the platforms without ever acknowledging the fault of their own members in our current polarization in society. (Yes, I’m talking about, for example, Fox News and other Murdoch properties, dues-paying members of many a trade association. By our silence in journalism and its trade associations in not criticizing their worst, we endorse it.)
The efforts of lobbyists for my industry are causing irreparable harm to the internet. No, Google, Facebook, and Twitter are not the internet, but what is done to them is done to the net. And what’s been done includes horrendous new copyright legislation in the EU that tries to force Google et al. to negotiate payment for quoting snippets of content to which they link. Google won’t; it would be a fool to. So I worry that platforms will link to news less and less, resulting in self-inflicted harm for the news industry and journalists but, more importantly, hurting the public conversation at exactly the wrong moment. Thanks, publishers. At Newsgeist Europe, I sat in a room filled with journalists terribly worried about the impact of the EU’s copyright directive on their work and their business, but I have to say they have no one but their own publishers and lobbyists to blame.
I am tempted to say that I am ashamed of my own industry. But I won’t for two reasons: First, I want to believe that the industry’s lobbyists do not speak for journalists themselves — but I damned well better start hearing the protests of journalists to what their companies are doing. (That includes journalists on the NMA board.) Second, I am coming to see that I’m not part of the media industry but instead that we are all part of something larger, which we now see as the internet. (I’ll be writing more about this idea later.) That means we have a responsibility to criticize and help improve both technology and news companies. What I see instead is too many journalists stirring up moral panic about the internet and its current (by no means permanent) platforms, serving — inadvertently or not — the protectionist strategies of their own bosses, without examining media’s culpability in many of the sins they attribute to technology. (I wish I could discuss this with The New York Times’ ombudsman or any ombudsman in our field, but we know what happened to them.)
My point: We’re in this together. That is why I go to events put on by both the technology and news industries, why I try to help both, why I criticize both, why I try to help build bridges between them. It’s why I am devoting time and effort to my least favorite subject: internet regulation. It is why I am so exasperated at leaders in my own industry for their failure to recognize, adapt to, and exploit the change they try to deny. It’s why I’m disappointed in my own industry for not criticizing itself. Getting politicians who are almost all painfully ignorant about technology to try to define, limit, and regulate that technology and what we can do with it is the last thing we should do. It is irresponsible and dangerous of my industry to try.
Here are three intertwined posts in one: a report from inside a workshop on Facebook’s Oversight Board; a follow-up on the working group on net regulation I’m part of; and a brief book report on Jeff Kosseff’s new and very good biography of Section 230, The Twenty-Six Words That Created the Internet.
Facebook’s Oversight Board
Last week, I was invited — with about 40 others from law, media, civil society, and the academe — to one of a half-dozen workshops Facebook is holding globally to grapple with the thicket of thorny questions associated with the external oversight board Mark Zuckerberg promised.
(Disclosures: I raised money for my school from Facebook. We are independent and I receive no compensation personally from any platform. The workshop was held under the Chatham House rule. I declined to sign an NDA and none was required, but details about two real case studies were off the record.)
You may judge the oversight board as you like: as an earnest attempt to bring order and due process to Facebook’s moderation; as an effort by Facebook to slough off its responsibility onto outsiders; as a PR stunt. Through the two-day workshop, the group kept trying to find an analog for Facebook’s vision of this: Is it an appeals court, a small-claims court, a policy-setting legislature, an advisory council? Facebook said the board will have final say on content moderation appeals regarding Facebook and Instagram and will advise on policy. It’s two mints in one.
The devil is in the details. Who is appointed to the board and how? How diverse and by what definitions of diversity are the members of the board selected? Who brings cases to the board (Facebook? people whose content was taken down? people who complained about content? board members?)? How does the board decide what cases to hear? Does the board enforce Facebook policy or can it countermand it? How much access to data about cases and usage will the board have? How much authority will the board have to bring in experts and researchers and what access to data will they have? How does the board scale its decision-making when Facebook receives 3 million reports against content a day? How is consistency found among the decisions of three-member panels in the 40ish-member board? How can a single board in a single global company be consistent across a universe of cultural differences and sensitive to them? As is Facebook’s habit, the event was tightly scheduled with presentations and case studies and so — at least before I had to leave on day two — there was less open debate of these fascinating questions than I’d have liked.
Facebook starts with its 40 pages of community standards, updated about every two weeks, which are in essence its statutes. I recommend you look through them. They are thoughtful and detailed. For example:
A hate organization is defined as: Any association of three or more people that is organized under a name, sign or symbol and that has an ideology, statements or physical actions that attack individuals based on characteristics, including race, religious affiliation, nationality, ethnicity, gender, sex, sexual orientation, serious disease or disability.
At the workshop, we heard how a policy team sets these rules, how product teams create the tools around them, and how operations — with people in 20 offices around the world, working 24/7, in 50 languages — are trained to enforce them.
But rules — no matter how detailed — are proving insufficient to douse the fires around Facebook. Witness the case, only days after the workshop, of the manipulated Nancy Pelosi video and subsequent cries for Facebook to take it down. I was amazed that so many smart people thought it was an easy matter for Facebook to take down the video because it was false, without acknowledging the precedent that would set, requiring Facebook henceforth to rule on the truth of everything everyone says on its platform — something no one should want. Facebook VP for Product Policy and Counterterrorism Monika Bickert (FYI: I interviewed her at a Facebook safety event the week before) said the company demoted the video in News Feed and added a warning to it. But that wasn’t enough for those out for Facebook’s hide. Here’s a member of the UK Parliament (who was responsible for the Commons report on the net I criticized here):
Jeff it’s already been independently certified as being fake. What Facebook are saying is that they won’t take down known sources of malicious political disinformation.
Damian, are you then going to expect them to take down any other video–or anything else–certified as fake? Certified by whom? Do you also want destruction of the evidence of this manipulation? Beware: slope slippery ahead.
So by Collins’ standard: UK politicians in his own party claimed, as a matter of what could well be called malicious political disinformation, that the country pays £350m per week to the EU, money that Brexit would free up for the National Health Service. Journalists certified that claim to be a “willful distortion.” Should Facebook then be required to take the statement down? Just asking. It’s not hard to see where this notion of banning falsity goes off the rails and chills freedom of expression and political discussion.
But politicians want to take bites out of Facebook’s butt. They want to blame Facebook for the ill-informed state of political debate. They want to ignore their own culpability. They want to blame technology and technology companies for what people — citizens — are doing.
Ditto media. Here’s Kara Swisher tearing off her bit of Facebook flesh regarding the Pelosi video: “Would a broadcast network air this? Never. Would a newspaper publish it? Not without serious repercussions. Would a marketing campaign like this ever pass muster? False advertising.”
Sigh. The internet is not media. Facebook is not news (only 4% of what appears there is). What you see there is not content. It is conversation. The internet and Facebook are means for the vast majority of citizenry forever locked out of media and of politics to discuss whatever they want, whether you like it or not. Those who want to control that conversation are the privileged and powerful who resent competition from new voices.
By the way, media people: Beware what you wish for when you declare that platforms are media and that they must do this or that, for your wishes could blow back on you and open the door for governments and others to demand that media also erase that which someone declares to be false.
Facebook’s oversight board is trying to mollify its critics — and forestall regulation of it — by meeting their demands to regulate content. Therein lies its weakness, I think: regulating content.
Regulating Actors, Behaviors, or Content
A week before the Facebook workshop, I attended a second meeting of a Transatlantic High Level Working Group on Content Moderation and Freedom of Expression (read: regulation), which I wrote about earlier. At the first meeting, we looked at separating treatment of undesirable content (dealt with under community standards such as Facebook’s) from illegal content (which should be the purview of government and of an internet court; details on that proposal here.)
At this second meeting, one of the brilliant members of the group (held under Chatham House, so I can’t say who) proposed a fundamental shift in how to look at efforts to regulate the internet, proposing an ABC rule separating actors from behaviors from content. (Here’s another take on the latest meeting from a participant.)
It took me time to understand this, but it became clear in our discussion that regulating content is a dangerous path. First, making content illegal is making speech illegal. As long as we have a First Amendment and a Section 230 (more on that below) in the United States, that is a fraught notion. In the UK, the government recently released an Online Harms White Paper that demonstrates just how dangerous the idea of regulating content can be. The white paper wants to require — under pain of huge financial penalty for companies and executives — that platforms exercise a duty of care to take down “threats to our way of life” that include not only illegal and harmful content (child porn, terrorism) but also legal and harmful content (including trolling [please define] and disinformation [see above]). Can’t they see that government requiring the takedown of legal content makes it illegal? Can’t they see that by not defining harmful content, they put a chill on all speech? For an excellent takedown of the report, see this post by Graham Smith, who says that what the White Paper proposes is impossibly vague. He writes:
‘Harm’ as such has no identifiable boundaries, at least none that would pass a legislative certainty test.
This is particularly evident in the White Paper’s discussion of Disinformation. In the context of anti-vaccination the White Paper notes that “Inaccurate information, regardless of intent, can be harmful”.
Having equated inaccuracy with harm, the White Paper contradictorily claims that the regulator and its online intermediary proxies can protect users from harm without policing truth or accuracy…
See: This is the problem when you try to identify, regulate, and eliminate bad content. Smith concludes: “This is a mechanism for control of individual speech such as would not be contemplated offline and is fundamentally unsuited to what individuals do and say online.” Never mind the common analogy to regulation of broadcast. Would we ever suffer such talk about regulating the contents of bookstores or newspapers or — more to the point — conversations in the corner bar?
What becomes clear is that these regulatory methods — private (at Facebook) and public (in the UK and across Europe) — are aimed not at content but ultimately at behavior, only they don’t say so. It is nearly impossible to judge content in isolation. For example, my liberal world is screaming about the slowed-down Pelosi video. But then what about this video from three years ago?
What makes one abhorrent and one funny? The eye of the beholder? The intent of the creator? Both. Thus content can’t be judged on its own. Context matters. Motive matters. But who is to judge intent and impact and how?
The problem is that politicians and media do not like certain behavior by certain citizens. They cannot figure out how to regulate it at scale (and would prefer not to make the often unpopular decisions required), so they assign the task to intermediaries — platforms. Pols also cannot figure out how to define the bad behavior they want to forbid, so they decide instead to turn an act into a thing — content — and outlaw that under vague rules they expect intermediaries to enforce … or else.
The intermediaries, in turn, cannot figure out how to take this task on at scale and without risk. In an excellent Harvard Law Review paper called The New Governors: The People, Rules, and Processes Governing Online Speech, legal scholar Kate Klonick explains that the platforms began by setting standards. Facebook’s early content moderation guide was a page long, “so it was things like Hitler and naked people,” says early Facebook community exec Dave Willner. Charlotte Willner, who worked in customer service then (they’re now married), said moderators were told “if it makes you feel bad in your gut, then go ahead and take it down.” But standards — or statements of values — don’t scale as they are “often vague and open ended” and can be “subject to arbitrary and/or prejudiced enforcement.” And algos don’t grok values. So the platforms had to shift from standards to rules. “Rules are comparatively cheap and easy to enforce,” says Klonick, “but they can be over- and underinclusive and, thus, can lead to unfair results. Rules permit little discretion and in this sense limit the whims of decisionmakers, but they also can contain gaps and conflicts, creating complexity and litigation.” That’s where we are today. Thus Facebook’s systems, algorithmic and human, followed its rules when they came across the historic photo of a child in a napalm attack. Child? Check. Naked? Check. At risk? Check. Take it down. The rules and the systems of enforcement could not cope with the idea that what was indecent in that photo was the napalm.
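The over-inclusiveness Klonick describes can be sketched in a few lines. This is a purely hypothetical illustration, not Facebook’s actual system or code: a rule checks attributes of a piece of content in isolation, so context, the very thing that makes the napalm photo newsworthy rather than indecent, is simply not among its inputs.

```python
# Hypothetical sketch of rule-based moderation (not Facebook's real system).
# Rules evaluate content attributes in isolation; context never enters.

def rule_based_review(item: dict) -> str:
    """Rule: take down any image of a child who is naked and at risk."""
    if item.get("child") and item.get("naked") and item.get("at_risk"):
        return "take down"
    return "leave up"

# The historic napalm photo, reduced to the only attributes the rule sees:
napalm_photo = {
    "child": True,
    "naked": True,
    "at_risk": True,
    # The rule has no input for this key, so it is ignored entirely:
    "context": "historic war photojournalism",
}

print(rule_based_review(napalm_photo))  # → "take down"
```

The rule fires correctly by its own logic and wrongly by any human standard, which is exactly why the pendulum now swings back toward bodies, boards, and courts, that can weigh context.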
Thus the platforms found their rule-led moderators and especially their algorithms needed nuance. Thus the proposal for Facebook’s Oversight Board. Thus the proposal for internet courts. These are attempts to bring human judgment back into the process. They attempt to bring back the context that standards provide over rules. As they do their work, I predict these boards and courts will inevitably shift from debating the acceptability of speech to trying to discern the intent of speakers and the impact on listeners. They won’t be regulating a thing: content. They will be regulating the behavior of actors: us.
There are additional weaknesses to the rules-based, content-based approach. One is that community standards are rarely set by the communities themselves; they are imposed on communities by companies. How could it be otherwise? I remember long ago that Zuckerberg proposed creating a crowdsourced constitution for Facebook but that quickly proved unwieldy. I still wonder whether there are creative ways to get intentional and explicit judgments from communities as to what is and isn’t acceptable for them — if not in a global service, then user-by-user or community-by-community. A second weakness of the community standards approach is that these rules bind users but not platforms. I argued in a prior post that platforms should create two-way covenants with their communities, making assurances of what the company will deliver so it can be held accountable.
Earlier this month, the French government proposed an admirably sensible scheme for regulation that tries to address a few of those issues. French authorities spent months embedded in Facebook in a role-playing exercise to understand how they could regulate the platform. I met a regulator in charge of this effort and was impressed with his nuanced, sensible, smart, and calm sense of the task. The proposal does not want to regulate content directly — as the Germans do with their hate speech law, called NetzDG, and as the Brits propose to do going after online harms.
Instead, the French want to hold the platforms accountable for enforcing the standards and promises they set: say what you do, do what you say. That enables each platform and community to have its own appropriate standards (Reddit ain’t Facebook). It motivates platforms to work with their users to set standards. It enables government and civil society to consult on how standards are set. It requires platforms to provide data about their performance and impact to regulators as well as researchers. And it holds companies accountable for whether they do what they say they will do. It enables the platforms to still self-regulate and brings credibility through transparency to those efforts. Though simpler than other schemes, this is still complex, as the world’s most complicated PowerPoint slide illustrates:
I disagree with some of what the French argue. They call the platforms media (see my argument above). They also want to regulate only the three to five largest social platforms — Facebook, YouTube, Twitter — because they have greater impact (and because that’s easier for the regulators). Except as soon as certain groups are shooed out of those big platforms, they will dig into small platforms, feeling marginalized and perhaps radicalized, and do their damage from there. The French think some of those sites are toxic and can’t be regulated.
All of these efforts — Facebook’s oversight board, the French regulator, any proposed internet court — need to be undertaken with a clear understanding of the complexity, size, and speed of the task. I do not buy cynical arguments that social platforms want terrorism and hate speech kept up because they make money on it; bull. In Facebook’s workshop and in discussions with people at various platforms, I’ve gained respect for the difficulty of their work and the sincerity of their efforts. I recommend Klonick’s paper as she attempts to start with an understanding of what these companies do, arguing that
platforms have created a voluntary system of self-regulation because they are economically motivated to create a hospitable environment for their users in order to incentivize engagement. This regulation involves both reflecting the norms of their users around speech as well as keeping as much speech as possible. Online platforms also self-regulate for reasons of social and corporate responsibility, which in turn reflect free speech norms.
She quotes Lawrence Lessig predicting that a “code of cyberspace, defining the freedoms and controls of cyberspace, will be built. About that there can be no doubt. But by whom, and with what values? That is the only choice we have left to make.”
And we’re not done making it. I think we will end up with a many-tiered approach, including:
Community standards that govern matters of acceptable and unacceptable behavior. I hope they are made with more community input.
Platform covenants that make warranties to users, the public, and government about what they will endeavor to deliver in a safe and hospitable environment, protecting users’ human rights.
Algorithmic means of identifying potentially violating behavior at scale.
Human appeals that operate like small claims courts.
High-level oversight boards that rule and advise on policy.
Regulators that hold companies accountable for the guarantees they make.
National internet courts that rule on questions of legality in takedowns in public, with due process. Companies should not be forced to judge legality.
Legacy courts to deal with matters of illegal behavior. Note that platforms often judge a complaint first against their terms of service and issue a takedown before reaching questions about illegality, meaning that the miscreants who engage in that illegal behavior are not reported to authorities. I expect that governments will complain platforms aren’t doing enough of their policing — and that platforms will complain that’s government’s job.
Numbers 1–5 occur on the private, company side; the rest must be the work of government. Klonick calls the platforms “the New Governors,” explaining that
online speech platforms sit between the state and speakers and publishers. They have the role of empowering both individual speakers and publishers … and their transnational private infrastructure tempers the power of the state to censor. These New Governors have profoundly equalized access to speech publication, centralized decentralized communities, opened vast new resources of communal knowledge, and created infinite ways to spread culture. Digital speech has created a global democratic culture, and the New Governors are the architects of the governance structure that runs it.
What we are seeking is a structure of checks and balances. We need to protect the human rights of citizens to speak and to be shielded from such behaviors as harassment, threat, and malign manipulation (whether by political or economic actors). We need to govern the power of the New Governors. We also need to protect the platforms from government censorship and legal harassment. That’s why we in America have Section 230.
Section 230 and ‘The Twenty-Six Words that Created the Internet’
We are having this debate at all because we have the “online speech platforms,” as Klonick calls them — and we have those platforms thanks to the protection given to technology companies as well as others (including old-fashioned publishers that go online) by Section 230, a law written by Oregon Sen. Ron Wyden (D) and former California Rep. Chris Cox (R) and passed as part of the 1996 telecommunications reform. Jeff Kosseff wrote an excellent biography of the law that pays tribute to these 26 words in it:
No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.
Those words give online companies safe harbor from legal liability for what other people say on their sites and services. Without that protection, online site operators would have been motivated to cut off discussion and creativity by the public. Without 230, I doubt we would have Facebook, Twitter, Wikipedia, YouTube, Reddit, news comment sections, blog platforms, even blog comments. “The internet,” Kosseff writes, “would be little more than an electronic version of a traditional newspaper or TV station, with all the words, pictures, and videos provided by a company and little interaction among users.” Media might wish for that. I don’t.
In Wyden’s view, the 26 words give online companies not only this shield but also a sword: the power and freedom to moderate conversation on their sites and platforms. Before Section 230, a court ruling in a Prodigy case (Stratton Oakmont v. Prodigy) held that if an online proprietor moderated conversation and failed to catch something bad, the operator would be more liable than if it had not moderated at all. Section 230 reversed that so that online companies would be free to moderate without moderating perfectly — a necessity to encourage moderation at scale. Lately, Wyden has pushed the platforms to use their sword more.
In the debate on 230 on the House floor, Cox said his law “will establish as the policy of the United States that we do not wish to have content regulation by the Federal Government of what is on the internet, that we do not wish to have a Federal Computer Commission with an army of bureaucrats regulating the internet….”
In his book, Kosseff takes us through the prehistory of 230 and why it was necessary, then the case law of how 230 has been tested again and again and, so far, survived.
But Section 230 is at risk from many quarters. From the far right, we hear Trump and his cultists whine that they are being discriminated against because their hateful disinformation (see: Infowars) is being taken down. From the left, we see liberals and media gang up on the platforms in a fit of what I see as moral panic to blame them for every ill in the public conversation (ignoring politicians’ and media’s fault). Thus they call for regulating and breaking up technology companies. In Europe, countries are holding the platforms — and their executives and potentially even their technologists — liable for what the public does through their technology. In other nations — China, Iran, Russia — governments are directly controlling the public conversation.
So Section 230 stands alone. It has suffered one slice in the form of the FOSTA/SESTA ban on online sex trafficking. In a visit to the Senate with the regulation working group I wrote about above, I heard a staffer warn that there could be further carve-outs regarding opioids, bullying, political extremism, and more. Meanwhile, the platforms themselves didn’t have the guts to testify in defense of 230 and against FOSTA/SESTA (who wants to seem to be on the other side of banning sex trafficking?). If these companies will not defend the internet, who will? No, Facebook and Google are not the internet. But what you do to them, you do to the net.
I worry for the future of the net and thus of the public conversation it enables. That is why I take so seriously the issues I outline above. If Section 230 is crippled; if the UK succeeds in demanding that Facebook ban undefined harmful but legal content; if Europe’s right to be forgotten expands; if France and Singapore lead to the spread of “fake news” laws that require platforms to adjudicate truth; if the authoritarian net of China and Iran continues to spread to Russia, Turkey, Hungary, the Philippines, and beyond; if …
If protections of the public conversation on the net are killed, then the public conversation will suffer and voices who could never be heard in big, old media and in big, old, top-down institutions like politics will be silenced again, which is precisely what those who used to control the conversation want. We’re in early days, friends. After five centuries of the Gutenberg era, society is just starting to relearn how to hold a conversation with itself. We need time, through fits and starts, good times and bad, to figure that out. We need our freedom protected.
Without online speech platforms and their protection and freedom, I do not think we would have had #metoo or #blacklivesmatter or #livingwhileblack. Just to see one example of what hashtags as platforms have enabled, please watch this brilliant talk by Baratunde Thurston and worry about what we could be missing.
None of this is simple and so I distrust all the politicians and columnists who think they have simple solutions: Just make Facebook kill this or Twitter that or make them pay or break them up. That’s simplistic, naive, dangerous, and destructive. This is hard. Democracy is hard.