Here comes the judge

The first decisions of Facebook’s independent Oversight Board make Facebook’s judgment look good by comparison. Who saw that coming?

The Board has in essence said that it is OK to insult Muslim men as a group — yet not Azerbaijanis — and that freedom of expression justifies spreading medical misinformation. How in any logic does that make for a better Facebook, a better internet, and a better world?

The problem is that the Oversight Board is interpreting Facebook’s community standards, which are intended to guide moderators and algorithms in their decisions about which posts to take down. The rules are not — as my friend Jasper Jackson put it — fit for purpose as the basis of interpretation and enforcement by a court of ultimate authority, the Board.

I have said again and again (and again and again and again) that Facebook — and other technology companies (and journalistic enterprises) — need to establish and be held accountable to Constitutions, Bills of Rights, North Stars (call them what you will) to act as a covenant of mutual obligation with users, customers, and the public, answering the fundamental question, “Why are we here?”

Because Facebook does not have that higher-level expression of principles, the Oversight Board is left to judge its moderation decisions against the company’s nitty-gritty statutes on one end, or on the other, overly broad concepts like “hate speech” and “human rights,” with nothing in between. The Board acted like Supreme Court strict constructionists without a Constitution to call upon, so it depended on the exact wording of statutes to set bad precedents that will make bad policy.

The Board said that criticizing Muslim men did not rise to the standard of “hate speech.” If only Facebook had a principle — an article in a Bill of Rights — that said it expected users to respect each other as individuals and as members of groups of many identities, then it would have been impossible, in my view, for Facebook, the Board, or the community of users to condone a post that says there is “something wrong with Muslims psychologically.” As the organization Muslim Advocates said: “Facebook’s Oversight Board bent over backwards to excuse hate in Myanmar — a country where Facebook has been complicit in a genocide against Muslims.”

As for the medical disinformation: The Board said that a post endorsing hydroxychloroquine as a COVID-19 treatment did not rise to Facebook’s standard of “imminent physical harm,” because one needs a prescription to get it. Good Lord. We saw in the United States how Donald Trump inspired people to get the drug — and ignore other precautions — risking the health of themselves and others. The Board properly criticizes Facebook for some of its guidelines being too broad. But in this case, the guideline is too specific and created a loophole that allowed the Board to require — require! — Facebook to post medical misinformation. The Board suggested Facebook could have taken other steps, like adding context — but unfortunately, experience and data have shown that fact-checks of misinformation tend to amplify the misinformation. This is not about free expression and debate; there are no two sides to this — medicine has spoken. This decision ill informs, ill serves, and endangers the public.

I am glad — relieved — that after the Board’s decisions, Monika Bickert, Facebook’s head of content policy, said the company would still stick by science: “We do believe, however, that it is critical for everyone to have access to accurate information, and our current approach in removing misinformation is based on extensive consultation with leading scientists, including from the CDC and WHO. During a global pandemic this approach will not change.”

The problem with much of this discussion about bad shit online is that the bad shit then monopolizes our attention. Look at the news: The Q conspirators are still getting far more attention on cable news, their messages amplified every day, while the Black women of Georgia who did so much to save our election and our democracy are not heard (exactly what they feared and foretold: that they would be exploited for this victory and then their circumstances and issues would be ignored). This is what comes of a journalism that focuses on the bad and a debate — I say a moral panic — about the net that obsesses over the awful. Every intervention we see aims to find something more to forbid, as if one day we’ll be done. Not. Thus Facebook’s community standards are expressed in the negative, as statutes, as commandments: Thou shalt not. What about: Thou shalt?

How could we express our expectations in the positive? If I could get a bunch of Facebook executives in a room with a whiteboard, I would start by asking them why Facebook exists. What is it here to do? How do you want its presence to make a positive influence in the world? How would you like people to treat each other? What might you expect them to accomplish? “A connected world is a better world” is fine and I agree (not everyone would), but that’s a bumper sticker, not a Constitution. I thus would press them to express Facebook’s raison d’être. At a less high-falutin’ level, I’d ask who Facebook wants in its garden party and how they should be expected to behave. Out of this discussion might come principles such as users being expected to treat each other with respect. And then I’d ask them what the company warrants to foster and support such an atmosphere. Perhaps out of that comes Facebook’s promise to follow science. Statutes — the community standards — should be based on these principles. Oversight Board decisions should call on these principles. Regulators should expect data from the company to hold it accountable for these principles (this is the basis of the regulatory framework proposed by a high-level working group of which I was a part and which I endorse).

But such a covenant does not exist. So users, moderators, engineers writing algorithms, the Board, regulators, and media are left to interpret and enforce a set of rules posted on the playground.

I now dread the Oversight Board’s upcoming decision on whether Facebook should reinstate Donald Trump. I fear they will call upon freedom of expression — even of a white-supremacist authoritarian ruler inciting violence and rebellion to tear down the sacred institutions of a democracy — and have little more to go on than Facebook’s vague description of what it may do in cases of incitement and violence. I further fear how other heads of state will use this decision, even if Facebook does not, as a precedent. As I said in an earlier post, I am concerned that Germany, the EU, the UK, and most worryingly Poland are contemplating forcing platforms to carry their speech. I will repeat: Compelled speech is not free speech.

I wish to stop a cycle of reaction: user does something new and bad; Facebook reacts by creating a rule against it; the next time a user does something similar a moderator reacts by taking it down; the user reacts by appealing to Facebook and the Board; the Board reacts by ruling according to the statute, and so on. Jane, stop this crazy thing.

Facebook has already reinstated the posts the Board ordered it to reinstate (including one in a case involving a Hermann Göring quote and another involving naked breasts in the context of breast cancer, which Facebook had already put back up). Facebook will now react in turn, working out how to carry out the Board’s enforcement of its statutes.

I hope instead that Facebook will use this opportunity to see the weakness of its community standards as the basis for governing the behavior of communities and users online and in society. I hope they will not just sit with someone like me in a room with a whiteboard but will call upon the communities to help draw up their own standards and will work with academics and civil society to imagine a better Facebook in a better world and the principles that would undergird that. I further hope that the Oversight Board will stand back and ask whether by ruling according to the letter of inadequate law it is making Facebook and the world better or worse. I hope for a lot.

Disclosure: Facebook has funded activities at my school regarding journalism and disinformation.

Facebook: Constitution before statutes

[Image: the U.S. Constitution — “We the People”]

The Facebook Oversight Board is now open for cases and I look forward to seeing the results. But I have the same question I’ve had since the planning for its creation began, and I asked that question in a web call today with board leadership:

What higher principles will the Board call upon in making its decisions? It will be ruling on Facebook’s content decisions based on the company’s own statutes — that is, the “community standards” Facebook sets for the community. 

The Board says it will also decide cases on the basis of international human rights standards. This could mean the Board might find that Facebook correctly enforced its statute but that the statute violates a principle of human rights, which would result in a policy recommendation to Facebook. Good.

But there remains a huge gap between community statutes and international human rights law. What is missing, I have argued, is a Constitution for Facebook: a statement of why it exists, what kind of community it wants to serve, what it expects of its community, in short: a north star. That doesn’t exist. 

But the Oversight Board might — whether it and Facebook know it or not — end up writing that Constitution, one in the English model, set by precedent, rather than the American model, set down in a document. That will be primarily in Facebook’s control. Though the Oversight Board can pose policy questions and make recommendations, it is limited by what cases come its way — from users and Facebook — and it does not set policy for the company; it only decides appeals and makes policy recommendations. 

It’s up to Facebook to decide how it treats the larger policy questions raised by the Oversight Board and the cases. In reacting to recommendations, Facebook can begin to build a set of principles that in turn begin to define Facebook’s raison d’être, its higher goals, its north star, its Constitution. That’s what I’ve told people at Facebook I want to see happen. 

The problem is, that’s not how Facebook or any of the technology companies think. Since, as Larry Lessig famously decreed, code is law, what the technologists want is rules — laws — to feed their code — their algorithms — to make consistent decisions at scale. 

The core problem of the technology companies and their relationship with society today is that they do not test that code and the laws behind it against higher principles other than posters on the wall: “Don’t be evil.” “Move fast and break things.” Those do not make for a good Constitution.

But now is their chance to create one. And now, perhaps, is our chance. I didn’t realize that every Oversight Board case will begin with a public comment period. So we can raise issues with the Board. Indeed, community standards should come from the community, damnit, or they’re not community standards; they’re company standards. So we should speak up. 

And the Board will consult experts. They can raise issues with the Board. And the Board can, in turn, raise issues not just for Facebook but, by example, for all the technology companies. That discussion could be useful. 

Imagine if — as I so wish had been the case — the Board had been in operation when Twitter and Facebook decided what to do about blocking the blatant attempt at election interference by the New York Post and Rupert Murdoch in cahoots with Rudy Giuliani. The Board could have raised, addressed, and proposed policy recommendations based on principles useful to many internet companies and to the media that love to poke them. 

Regulators could also get involved productively more than punitively. I was a member of a Transatlantic Working Group on Content Moderation and Freedom of Expression, which recommended a flexible framework for regulation that would have government hold companies accountable for their own assurances, requiring the companies to share data on usage and impact so researchers and regulators can monitor their performance. This, in my view, would be far better than government trying to tell companies how to operate, especially when it comes to interference in free speech. But government can’t hold companies accountable to keeping promises if there are no promises to keep. A Constitution is a promise, a covenant with users and the public. Every company should have one. Every company should be held accountable for meeting its requirements. And the public discussion should revolve around those principles, not around whether Johnny is allowed to use a bad word. 

I make no predictions here. The Board could end up answering a handful of picayune complaints among tens of thousands of possible cases a week and produce the script of an online soap opera. Facebook could follow the letter of the law set down by the Board and miss the opportunity to set higher goals. Media, experts, and the public could be ignored or, worse, could simply continue to snipe instead of contributing constructively.

But I can hope. The net is young. We — all of us — are still designing it by how we use it.