Finding the line

In his interview with Kara Swisher, Mark Zuckerberg at last drew a line around what is not acceptable on Facebook.* I think he drew the line in the wrong place. So do many commentators.

So where do you think the line should be drawn? Where do I? If we cannot agree on where it should be, can we expect Facebook to determine this on its own, in every one of the millions — perhaps billions — of questions it faces on its platform for human behavior?

Here I will try to put this discussion in the context of Facebook’s role in the world versus the role it has perceived for itself. More to the point, I will explore some of the standards that could be used to set the line: harm, threat, conspiracy, incivility, bigotry, hate, manipulation. Warning: I will fail. But especially because I raised money from Facebook for my school (disclosures below*), I need to address these questions myself.

Let us start with Zuckerberg’s pronouncements. With Swisher, he seemed to defend Holocaust denial as free speech. He said:

I’m Jewish, and there’s a set of people who deny that the Holocaust happened. I find that deeply offensive. But at the end of the day, I don’t believe that our platform should take that down because I think there are things that different people get wrong. I don’t think that they’re intentionally getting it wrong…. It’s hard to impugn intent and to understand the intent.

That could not be more wrong: denying something so overwhelmingly documented and universally accepted as true can only be an attempt to intentionally mislead. In a week marked by people walking back things they’d said, Zuckerberg emailed Swisher to walk this back:

I personally find Holocaust denial deeply offensive, and I absolutely didn’t intend to defend the intent of people who deny that. Our goal with fake news is not to prevent anyone from saying something untrue — but to stop fake news and misinformation spreading across our services. If something is spreading and is rated false by fact checkers, it would lose the vast majority of its distribution in News Feed. And of course if a post crossed the line into advocating for violence or hate against a particular group, it would be removed. These issues are very challenging but I believe that often the best way to fight offensive bad speech is with good speech.

As an example of what does cross the line, Zuckerberg discussed Myanmar and said Facebook would take down content that incited imminent violence or physical harm against people there.

That’s his line.

He also said he would not take down Infowars — but instead promote it less — because it did not cross that line. Infowars was the topic of a heated discussion between Facebook and journalists days before, when the new head of News Feed, John Hegeman, said Facebook would not take down Infowars because Facebook does not “take down false news…. I guess just for being false that doesn’t violate the community standards.”

Right reasoning, perhaps, but wrong decision. I do not think we should expect Facebook to take down anything and everything that is false. For the 1000th time, can we agree that no one — least of all Facebook — wants them to be the arbiters of truth in society?

But I do think Facebook should take down Infowars. For me, that is an easy decision to make anecdotally as Infowars is so notoriously putrid. But as I said in a Twitter discussion on the topic the other day, the harder question is: What is the ongoing and enforceable standard that justifies that decision and that can be applied elsewhere at scale? Where’s the line?

In a lengthy discussion of all this on the latest This Week in Google, host Leo Laporte shifted the line, I think, to a better place: harm. Infowars may not harm bodies and take lives as disinformation, propaganda, hate speech, and incitement do in Myanmar, but with his despicable conspiracy theories and rabid lies Alex Jones certainly causes harm — to the families of Sandy Hook, to democracy, to decency.

I am not arguing that Facebook should take Infowars down as a matter of law or leave it up as a matter of free speech. I am arguing that taking down Infowars is an act of enlightened self-interest for Facebook: the service (being yelled at by an insane hate-monger is what I’d call a bad user experience), the brand (does Facebook really want to enable and be associated with such as this?), and the company (if Facebook loses users and advertisers because of this kind of crap, its bottom line and equity suffer).

I would also say this to Zuckerberg: Facebook is not the internet and should not want to be (though it is often accused of exactly that ambition, especially in developing markets). In this context, what I mean is that free speech is not Facebook’s burden. It’s not as if Alex Jones has no place else to nest with his cockroaches. That is the internet and it is uncontrollable. Facebook is controllable, by you, Mark. People are begging you to control it. That responsibility — and right — are yours. You need to decide not whether speech is acceptable (of course it is) but whether Infowars is (I say it is not).

That is far from a universally held opinion; many do not trust Facebook to make decisions and in any case do not believe it should. Back in the day, I might have agreed, being a dogmatist on the side of openness and free speech. But the platforms — and I — have had to learn that pure openness inevitably breeds manipulation of economic, psychological, and political origins. I’ve come to see that my friend Dov Seidman, founder of LRN and the How Institute, is right when he says that neutrality is not an option.

I should add that, no, I do not believe that if Zuckerberg and company choose what is and is not appropriate for their distribution, promotion, and monetization, that makes Facebook a media company. Media people tend to look at the world, Godlike, in their own image and think anyone who does anything they ever did is media. No. Seeing Facebook in the analog of the past is what is getting us into this mess, for it blinds us to how the internet and Facebook are new and different and require a new perspective and new solutions. Facebook is a company built not around content or information but around people. You can’t expect Facebook to be the Columbus Dispatch: neatly and cleanly packaged and produced. Facebook reflects life’s messes.

One could say that is a reason to leave Infowars up: it reflects society’s mess. Except Infowars is made to manipulate Facebook and YouTube and their algorithms — as well as every media outlet and their editors and every politician. It screams for attention and gets it. We are its chumps. We do not have to be. We can urge Facebook and YouTube to take it down, because it harms, and they are free to act.

I recognize the political hazards, of course. You have a member of Congress, allegedly Republican, wanting to interfere in the market and declare the platforms public utilities because his favored right-wing fake-news factory doesn’t get enough traffic online! This isn’t easy. I get that. Doing the right thing oftentimes is not.

So now we return to the hard question that remains: How to define harm in a way in which Myanmar incitement and Infowars conspiracy theories end up on one side of the line and mere controversy on the other? I wish I had a neat formula. I don’t. That is why the platforms — Facebook, Google, Twitter, all of them — avoid this decision, because they can’t turn it into a rule set, a formula, an algorithm. On Twitter just now, legendary VC Vinod Khosla — responding, it so happens, to a Swisher tweet (she is everywhere) — asked, “Is there a mathematical optimization for societal good?”

No. Humanity doesn’t scale. Civility isn’t a formula. Decency isn’t an algorithm. I’m afraid my first useful suggestion to Facebook is not formulaic but procedural, not technological but human, not cheap but very expensive. Facebook needs humans making human judgments. Facebook’s community standards clearly are not working, as they allow all kinds of horrid behavior in — behavior any decent, mature, responsible human being would recognize as unacceptable in civil company. Facebook probably needs a very large customer service department and a means to communicate with it. (If you want to ask someone with experience in this field, I’d suggest Craig Newmark,* whose job was customer service.) To understand the need, see Casey Newton’s stories here about people who have tried to reach Facebook to alert them to the harm they are experiencing. Yes, I understand that this is a problem of scale. At Facebook’s F8, Zuckerberg said the company is killing a million fake accounts a day. The bad guys are aggressive. So must Facebook be. But I still don’t have a clear standard — a line — they can point to.

If reporting harm is difficult to manage then does any other standard make it easier for Facebook to make its own judgments? As I said above, I don’t think truth is the test. Witness the pile of new books on my desk, each trying to figure out what truth is: The Death of Truth, Orwell on Truth, A Short History of Truth, Truth Matters, Truth, and Post-Truth (three of those). Truth is hard.

The flip-side of truth is conspiracy. This is Infowars’ specialty. Is it possible to judge conspiracy theories without also judging truth? I’m not sure. It could be possible to develop catalogs of harmful conspiracy theories: anti-vaccination does harm people; 9/11 conspiracy theories do harm education and society. Who makes that catalog in each nation and society? Data & Society* is doing a good job of ferreting out such manipulation of truth online in the U.S., and the European Union is doing well against Russia in Europe. It might be possible. It starts by reversing Zuckerberg’s view that denialist conspiracy theories do not intend to misinform. They most certainly do.

How about incivility as a standard? Twitter has said it will pay more attention to its impact on the health of the public conversation. Facebook has said its aim now is to encourage “meaningful interactions between people.” I worry that the term “incivility” (like “fake news”) has been coopted by the Orwellian uncivil among us to cut off criticism of them. When I said that to author Yascha Mounk as we began his podcast, he urged me to hold onto the term, advising that it is possible to be civil while also meeting one’s obligation to call out evil. So civility matters; it is a precondition to becoming informed and to holding a productive conversation in society. But civility is too low a bar for this discussion. If the uncivil were banned, I’m afraid that many of us — including me sometimes, I’m sorry to say — would be doomed.

Then how about bigotry? Hate speech is forbidden in Facebook’s community standards. I believe that Infowars and its confederates engage in it. But they’re still on Facebook. So something’s not calibrated correctly here. In Germany, Facebook now has to enforce a ban on hate speech under the NetzDG law. Because of this — and laws in other countries — Facebook is hiring 20,000 crap detectors to get rid of hate speech, among other things. That might sound like what I’m asking for above: people with judgment and authority. But the law requires these people to act within 24 hours and so there’s no time for consideration. Because the fines are considerable — $60 million for a failure — the unintended consequence of legally required zealousness is that satire and legitimate speech are imperiled; caution wins every time. This is why I would prefer Facebook taking responsibility in its own self-interest over legislation.

That leaves me with manipulation. This is the criterion I think is most comfortable for the platforms, as they already make this judgment at scale when it comes to economically motivated bad guys: spammers and fraudsters. They decide what is made just to game them. Can they do the same with psychologically and politically motivated manipulators?

So I have failed. I, like many, disagree with where Zuckerberg drew the line. I want a test that Infowars will fail. But until we can formulate a rule set that does that, I fear we will be stuck with Alex Jones and I hate that.

I am not letting Zuckerberg or Facebook off the hook. I believe they must set standards for what is and what is not acceptable on their platform (ditto for Twitter, YouTube, Instagram, Snap, and other platforms that deal in the messiness that is us). They need to stand by their decisions. They need to invest in systems and people to reliably enforce those standards.

But we’re still only at the first stage: Where’s the line?

* Disclosures: I raised funds from Facebook as well as the Craig Newmark Philanthropies, the Ford Foundation, and others to start the News Integrity Initiative at CUNY. We are independent of Facebook. I personally receive no funds from any platform.
Craig Newmark Philanthropies recently gave a large endowment to the school where I work and it is being rechristened the Craig Newmark Graduate School of Journalism at CUNY.
Data & Society is a grantee of the News Integrity Initiative.