Attacks on the People’s Press

Donald Trump’s war on TikTok in the U.S. and Rupert Murdoch’s on Facebook in Australia are not being seen for their true import: as government attacks on the people’s press, on freedom of expression, on human rights.

In Australia, Facebook just said that if Murdoch-backed legislation requiring platforms to pay for news is enacted, the company will stop media companies — and users — from posting news on Facebook and Instagram.

Who is hurt there? The public and its conversation. The public loses access to its means of sharing and debating news. Never before in history — never before the internet — has everyone had access to a press; only the privileged had it and now the privileged will rob the people of theirs. Without the people’s press, we would not have #BlackLivesMatter, #MeToo, #OccupyWallStreet and the voices of so many too long not heard. This is a matter of human rights. 

The Australian legislation is a cynical mess. It is bald protectionism by Murdoch and the old, corporate press, requiring platforms to “negotiate” with guns to their heads for the privilege of quoting, promoting, and sending traffic, audience, and tremendous value to news sites. It is illogical. Facebook, Google, et al. did not steal a penny from old media. They competed. To say that Facebook owes newspapers is a white plutocrat’s regressive view of reparations; by this logic Amazon owes Walmart who owes A&P who owes the descendants of Luigi’s corner grocery who owes a pushcart vegetable vendor on Hester Street. Facebook owes news nothing.

This is a case of outrageous regulatory capture on Murdoch’s part. He doesn’t give a rat’s ass about news and informed democracy. He, more than any human being alive, has been the scourge of democracy in the English-speaking world. The Australian legislation aims to give money only to large publishers, like Murdoch. If Facebook makes good on its threat and bans news, then the news business as a whole will suffer but the largest players in the field, who have brand recognition — i.e., Murdoch — will gain market share over smaller and newer competitors. Murdoch will be even freer to spread his propaganda. This is an attempt by the old press to impose a Stamp Tax on the new. Facebook is right to resist, just as Google was when Spain imposed its Stamp Tax on links (and Google News left the country). 

Now to Trump’s war on TikTok. This, too, is a matter of freedom of expression. TikTok is, to my mind, the first platform to begin to make us rethink media and the line separating producer and audience, for TikTok is a collaborative platform where people do not just comment on each other’s content but create together. It is the one social network that Trump and his cultists have not managed to game. It is the platform that has enabled Sarah Cooper and countless citizens to mock Trump. So he hates it and wants to abuse his power to kill it.

If TikTok goes because of government fiat, so goes Sarah Cooper’s ability to criticize the man who killed it. What could be a clearer violation of the First Amendment? Why is no one screaming this? It’s because, I think, the old press still thinks the meaning of the “press” is a machine that spreads ink. No. The internet is the people’s press. It is a machine that spreads power. 

Keep in mind that none of these platforms was built for news and their lives would all, frankly, be easier without it and the controversy and advertiser repellant it brings. Facebook was built for hookups and party pix. The people decided to use it to share and discuss news. Twitter was built to tell friends where you were drinking. The people decided to use it to share what they witness with the world, to discuss public policy, and to organize movements. Google was built to find web sites, not news, but it added the ability to find news when the people showed they wanted that. YouTube was built to stream silly videos. The people decided they would use it for everything from education to news. TikTok was built to lip-sync music. The people decided they would use it to mock the fool in the White House. 

In every case, media could have built what the platforms did. They could have provided people a place to share what they witness and discuss public issues; instead, they provided dark, dank, neglected corners in which to comment on the journalist’s content. They could have provided a place for communities to meet, to gather, to share, to assemble and act. They did not. They could have provided a place for creators to collaborate, but instead they cared only about their own creations. News media blew every opportunity. Their publics — their readers, viewers, listeners, users, customers — went elsewhere to take advantage of the power the internet offered them. Platforms shared that power with the public. Publishers did not. The platforms owe the publishers nothing. The publishers owe their publics apologies.

Now, of course, cynical Murdoch and his media mates found an ideal foil in Mark Zuckerberg because, these days, nobody likes Mark, right? Why is that? In part, of course, it’s because Mark is incredibly rich and not terribly telegenic and because he cannot control the bucking bronco he is riding. But it is also because of media’s narrative about him: that he is suddenly the cause of societal ills that have been around since man learned to talk. Please keep in mind when you read media stories about Facebook that even if subconsciously, reporters are writing from a position of jealous conflict of interest. Murdoch, more than any publisher this side of Germany, has sicced his troops on Facebook, Google, Twitter, and the internet, which they believe has robbed them of their manifest destiny and dollars. 

Necessary disclosure: Facebook has funded projects related to disinformation and news at my school, some of them reaching an end. I receive nothing personally from Facebook or any technology company, other than free drinks at the conferences they hold to help the news industry. I am accused of defending Facebook, though Facebook does not always make it easy to defend and I’m often critical of it. What I am defending is the internet and the power it gives citizens at last. What I am defending is the people’s press.

I would like to hear First Amendment lawyers and scholars in the U.S. and human-rights advocates the world around defend the people’s press from attacks in the Philippines, Russia, China, Hong Kong, Hungary, Turkey, Belarus, Brazil — and in the United States and Australia. 

None of this is new. Every time there is a new technology that enables more people to speak, those who controlled the old technology — and the power it afforded — try to prevent the people they see as interlopers from sharing that power. It happened when scribe Filippo de Strata tried to convince the doge of Venice to outlaw the press and the drunken Germans who brought it to Italy. Princes tried to grant printing monopolies to allies. Popes and kings and autocrats of late banned and burned books and the people who wrote them. England had the Stationers’ Company license and censor authorized publishing. Charles II tried to close coffeehouses to shut off the discussion of news in them. American newspaper publishers tried to have new radio competitors banned from broadcasting news. Each time, eventually, they lost. For speech will out.

Teapot and lid, marked “America: Liberty Restored” on one side and “No Stamp Act” on the other.

Mark Zuckerberg: Now is the time for your Oversight Board

Like Mark Zuckerberg, I defend freedom of expression. Two days ago, I wrote this post about the value of hearing many voices, about history’s lessons regarding the protection of speech.

But Donald Trump’s unfettered use of Facebook to sow division and encourage violence is not a matter of freedom of expression. There is no requirement that Facebook be his platform for noxious speech. This is a question of what Facebook stands for and what Mark Zuckerberg stands for. As I have asked before, what is Facebook’s North Star? Why does it exist?

Now is the moment for Facebook to convene its new Oversight Board — or for that board to convene itself to deliberate the issues raised and standards required to address this challenge. I don’t care that the systems and bureaucracy are not in place. This is urgent. Get on Zoom. If this independent Board does not meet on this issue of all issues, then why does it exist?

The Board has 20 smart and experienced members: leaders in freedom of expression and human rights, a former prime minister, a former Guardian editor (my friend, Alan Rusbridger), a Nobel prize winner. I would make a bad member of the Board (I was not asked) for if I were there I would be doing just what I am doing here: arguing in public for a public discussion at this critical time to deliberate Facebook’s public responsibility.

The Board isn’t necessary to do that. Facebook’s employees are starting to rise up to make their dissent heard. Zuckerberg can decide on his own or with the help of his Oversight Board, his employees, his users, and the public. But he can no longer not decide.

What is that decision? Perhaps to illustrate the choice it’s easier to take this out of the high-minded realm of freedom of expression and democracy, for that is where the company trips over itself. If Facebook did not exist tomorrow, we would find other ways to express ourselves.

Instead, try thinking of Facebook as a dinner at Mark Zuckerberg’s house. Let’s say that Donald Trump shows up. Donald starts insulting the other guests, shouting that he will bring violence down upon the heads of people who criticize him; blaming the troubles in this country on the Chinese; insulting African-Americans by insisting racists like them; attacking the journalists in the room, shouting that they’re all fake and enemies of the people. What is the host to do — and Mark Zuckerberg is undoubtedly the host? I would expect a host to ask rude Donald to leave. What are the guests to do? I would leave and never return.

So I repeat: Why does Facebook exist? Does it not have a vision for a better neighborhood, a connected world? How does it ever get there if it does not set an example? Does it have no norms of respectfulness? I don’t mean its statutes, its community standards; I mean an ethic, a moral foundation.

In disclosure, Facebook has contributed to my school to undertake various activities, including supporting others’ work around disinformation (I receive nothing personally from Facebook). I advocate that the news industry should work with Facebook, Google, Twitter, and other technology companies because I do not believe we can go our own way anymore; that is the path to obscurity. I defend the platforms against ill-conceived regulation for I worry about its impact on the net and our freedoms there. I think of myself as a defender of speech and thus a friend of the internet. Others call me a friend of the platforms. OK, then, friends tell friends when they’re screwing up. I’ve done that before and I’ll do it now.

Facebook: It is time to listen to friends and foes and reconsider what you are here to do. It is time to stop hiding behind freedom of expression, especially as Donald Trump threatens that very freedom. It is time to have the courage to stand for something. What do you stand for?

I was glad that Medium killed an ill-informed post about COVID by an armchair epidemiologist. I support Twitter’s decisions to begin adding warnings and fact-checks to Donald Trump’s tweets and to stop promoting them. Those are just starts, but they are starts. I will not let Google off the hook, for YouTube has much to do as well.

Facebook needs to take a stand against Donald Trump’s racism, incitement, and lies. It cannot stand apart any longer. Our nation is burning. Yes, I am saying this now that it’s my nation on fire. Should I have raised my voice sooner and louder when other nations burned: Myanmar, the Philippines? Yes.

What do I want Facebook to do? Not much, actually. I don’t think Facebook should necessarily kill Trump’s account, for Zuckerberg has a point that citizens should see what their head of state is saying. I don’t think the internet is media nor do I believe that Facebook is a publisher or editor responsible for his words; I say it’s pointless to fact-check Trump. What I do want is for Facebook to separate itself from his vile behavior. Facebook should say: We do not agree. We do not approve. We say this is wrong.

If it does not, by its silence and with its power, it endorses what Trump is saying and becomes his willing agent — every bit as much as when a major newspaper quotes Trump’s posts and tweets without telling its users when he is lying and calling on his racist allies, and every bit as much as Republicans enabling him for their ends.

Trump attacked women and you did not protest. Trump went after immigrants and you did not stop him. Trump came for African-Americans and you stood back. Now Trump is coming for you, technology companies. He is attacking Section 230, the best protection we have for the freedom of expression you all say you hold dear. Will you stand up for that and your users? That should be easy. Will you then stand up for your users who are women and immigrants and African-American? What will you stand for?

In defense of targeting

In defending targeting, I am not defending Facebook, I am attacking mass media and what its business model has done to democracy — including on Facebook.

With targeting, a small business, a new candidate, a nascent movement can efficiently and inexpensively reach people who would be interested in their messages so they may transact or assemble and act. Targeted advertising delivers greater relevance at lower cost and democratizes advertising for those who could not previously afford it, whether that is to sell homemade jam or organize a march for equality.* Targeting has been the holy grail all media and all advertisers have sought since I’ve been in the business. But mass media could never accomplish it, offering only crude approximations like “zoned” newspaper and magazine editions in my day or cringeworthy buys for impotence ads on the evening news now. The internet, of course, changed that.

Without targeting, we are left with mass media — at the extreme, Super Bowl commercials — and the people who can afford them: billionaires and those loved by them. Without targeting, big money will forever be in charge of commerce and politics. Targeting is an antidote.

With the mass-media business model, the same message is delivered to all without regard for relevance. The clutter that results means every ad screams, cajoles, and fibs for attention and every media business cries for the opportunity to grab attention for its advertisers, and we are led inevitably to cats and Kardashians. That is the attention-advertising market mass media created and it is the business model the internet apes so long as it values, measures, and charges for attention alone.

Facebook and the scareword “microtargeting” are blamed for Trump. But listen to Andrew Bosworth, who knows whereof he speaks, as he managed advertising on Facebook during the 2016 election. In a private post made public, he said:

So was Facebook responsible for Donald Trump getting elected? I think the answer is yes, but not for the reasons anyone thinks. He didn’t get elected because of Russia or misinformation or Cambridge Analytica. He got elected because he ran the single best digital ad campaign I’ve ever seen from any advertiser. Period….

They weren’t running misinformation or hoaxes. They weren’t microtargeting or saying different things to different people. They just used the tools we had to show the right creative to each person.

I disagree with him about Facebook deserving full blame or credit for electing Trump; that’s a bit of corporate hubris on the part of him and Facebook, touting the power of what they sell. But he makes an important point: Trump’s people made better use of the tools than their competitors, who had access to the same tools and the same help with them.

But they’re just tools. Bad guys and pornographers tend to be the first to exploit new tools and opportunities because they are smart and devious and cheap. Trump used them to sell the ultimate elixir: anger. Cambridge Analytica acted as if it were brilliant at using these tools, but as Bosworth also says in the post — and as every single campaign data expert I know has said — CA was pure bullshit and did not sway so much as a dandelion in the wind in 2016. Says Bosworth: “This was pure snake oil and we knew it; their ads performed no better than any other marketing partner (and in many cases performed worse).” But the involvement of evil CA and its evil backers and clients fed the media narrative of moral panic about the corruption and damnation of microtargeting.

Hillary Clinton &co. could have used the same tools well and at the time — and still — I have lamented that they did not. They relied on traditional presumptions about campaigning and media and the culture in a changed world. Richard Nixon was the first to make smart use of direct mail — targeting! — and then everyone learned how to. Trump &co. used targeting well and in this election I sure as hell hope his many opponents have learned the lesson.

Unless, that is, well-meaning crusaders take that tool away by demonizing and even banning micro — call it effective — targeting. I have sat in too many rooms with too many of these folks who think that there is a single devil and that a single messiah can rescue us all. I call this moral panic because it fits Ashley Crossman’s excellent definition of it:

A moral panic is a widespread fear, most often an irrational one, that someone or something is a threat to the values, safety, and interests of a community or society at large. Typically, a moral panic is perpetuated by news media, fueled by politicians, and often results in the passage of new laws or policies that target the source of the panic. In this way, moral panic can foster increased social control.

The corollary is moral messianism: that outlawing this one thing will solve it all. I’ve heard lots of people proclaiming that microtargeting and targeting — as well as the data that powers it — should be banned. (“Data” has also become a scare word, which scares me, for data are information.) We’ve also seen media — in cahoots with similarly threatened legacy politicians — gang up on Facebook and Google for their power to target because media have been too damned stubborn and stupid, lo these two decades, to finally learn how to use the net to respect and serve people as individuals, not a mass, and learn information about people to deliver greater relevance and value for both users and advertisers. I wrote a book arguing for this strategy and tried to convince every media executive I know to compete with the platforms by building their own focused products to gather their own first-party data to offer advertisers their own efficient and effective value and to collaborate as an industry to do this. Instead, the industry prefers to whine. Mass media must mass.

Over the years, every time I’ve said that the net could enable a positive, I’ve been accused of technological determinism. Funny thing is, it’s the dystopians who are the determinists for they believe that a technology corrupts people. It is patronizing, paternalistic, and insulting to the public and robs them of agency to believe they can be transformed from decent, civilized human beings into raging lunatics and idiots by exposure to a Facebook ad. If we believe that and believe our problems are so easily fixed then we miss the real problems this country has: its long-standing racism; media’s exploitation and fueling of conflict and fear; and growing anti-intellectualism and hostility to education.

We also need to fix advertising — in mass media and on the internet in the platforms, especially on Facebook. Advertising needs to shift from mass-media measures of audience and attention and clicks to value-based measures of relevance and utility and efficacy — which will only occur with, yes, targeting. It also must become transparent, making clear who is advertising to us (Facebook may confirm the identity of an advertiser but that confirmed information is not shared with us) and on what basis we are being targeted (Facebook reveals only rough demographics, not targeting goals) and giving us the power to have some control over what we are shown. Instead of banning political advertising, I wish Twitter had endeavored to fix how advertising works.

I hear the more extreme moral messianists say their cure is to ban advertising. That’s not only naive, it’s dangerous, for without advertising journalists will starve and we will return to the age of the Medicis and their avvisi: private information for the privileged few who can afford it. Paywalls are no paradise.

What’s really happening here — and this is a post and a book for another day — is a reflexive desire to control speech. I’ve been doing a lot of reading lately about the spread of printing in early-modern Europe and I am struck by how every attempt to control the press and outlaw forms of speech failed and backfired. At some point, we must have faith in our fellow citizens and shift our attention from playing Whac-a-Mole with the bad guys to instead finding, sharing, and supporting expertise, education, authority, intelligence, and quality so we can have a healthy, deliberative democracy in a marketplace of ideas. The alternatives are all worse.

* I leave you with a few ads I found in Facebook’s library that could work only via targeting and never on expensive mass media: the newspaper, TV, or radio. I searched on “march.”

When you eliminate targeting, you risk silencing these movements.

Opening photo credit and link: https://wellcomecollection.org/works/wagakkh5

Governance: Facebook designs its oversight board (should journalism?)

Facebook is devoting impressive resources — months of time and untold millions of dollars — to developing systems of governance, of its users and of itself, raising fascinating questions about who governs whom according to what rules and principles, with what accountability. I’d like to ask similar questions about journalism.

I just spent a day at Facebook’s fallen skyscraper of a headquarters attending one of the last of more than two dozen workshops it has held to solicit input on its plans to start an Oversight Board. [Disclosures: Facebook paid for participants’ hotel rooms and I’ve raised money from Facebook for my school.] Weeks ago, I attended another such meeting in New York. In that time, the concept has advanced considerably. Most importantly, in New York, the participants were worried that the board would be merely an appeals court for disputes over content take-downs. Now it is clear that Facebook knows such a board must advise and openly press Facebook on bigger policy issues.

Facebook’s team showed the latest group of academics and others a near-final draft of a board charter (which will be released in a few weeks, in 20-plus languages). They are working on by-laws and finalizing legal structures for independence. They’ve thought through myriad details about how cases will rise (from users and Facebook) and be taken up by the board (at the board’s discretion); about conflict resolution and consensus; about transparency in board membership but anonymity in board decisions; about how members will be selected (after the first members join, the board will select its own members); about what the board will start with (content takedowns) and what it can tackle later (content demotion and taking down users, pages, groups — and ads); about how to deal with GDPR and other privacy regulation in sharing information about cases with the board; about how the board’s precedents will be considered but will not prevent the board from changing its mind; even about how other platforms could join the effort. They have grappled with most every structural, procedural, and legal question the 2,000 people they’ve consulted could imagine.

But as I sat there I saw something missing: the larger goal and soul of the effort and thus of the company and the communities it wants to foster. They have structured this effort around a belief, which I share, in the value of freedom of expression, and the need — recognized too late — to find ways to monitor and constrain that freedom when it is abused and used to abuse. But that is largely a negative: how and why speech (or as Facebook, media, and regulators all unfortunately refer to it: content) will be limited.

Facebook’s Community Standards — in essence, the statutes the Oversight Board will interpret and enforce and suggest to revise — are similarly expressed in the negative: what speech is not allowed and how the platform can maintain safety and promote voice and equality among its users by dealing with violations. In its Community Standards (set by Facebook and not by the community, by the way), there are nods to higher ends — sharing stories, seeing the world through others’ eyes, diversity, equity, empowerment. But then the Community Standards become a document about what users should not do. And none of the documents says much if anything about Facebook’s own obligations.

So in California, I wondered aloud what principles the Oversight Board would call upon in its decisions. More crucially, I wondered whom the board is meant to serve and represent: does it operate in loco civitas (in place of the community), publico (public), imperium (government and regulators), or Deus (God — that is, higher ethics and standards)? [Anybody with better schooling than I had, please correct my effort at Latin.]

I think these documents, this effort, and this company — along with other tech companies — need a set of principles that should set forth:

  • Higher goals. Why are people coming to Facebook? What do they want to create? What does the company want to build? What good will it bring to the world? Why does it exist? For whose benefit? Zuckerberg issued a new mission statement in 2017: “To give people the power to build community and bring the world closer together.” And that is fine as far as it goes, but that’s not very far. What does this mean? What should we expect Facebook to be? This statement of goals should be the North Star that guides not just the Oversight Board but every employee and every user at Facebook.
  • A covenant with users and the public in which Facebook holds itself accountable for its own responsibilities and goals. As an executive from another tech company told me, terms of service and community standards are written to regulate the behavior of users, not companies. Well, companies should put forth their own promises and principles and draw them up in collaboration with users (civitas), the public (publico), and regulators (imperium). And that gives government — as in the case of proposed French legislation — the basis for holding the company accountable.

I’ll explore these ideas further in a moment, but first let me address the elephant on my keyboard: whether Facebook and its founder and executives and employees have a soul. I’ve been getting a good dose of crap on Twitter the last few days from people who blithely declare — and others who retweet the declaration — that Zuckerberg is the most dangerous man on earth. I respond: Oh, come on. My dangerous-person list nowadays starts with Trump, Murdoch, Putin, Xi, Kim, Duterte, Orbán, Erdoğan, MBS…you get the idea. To which these people respond: But you’re defending Facebook. I will defend it and its founder from ridiculous, click-bait trolling that devalues the real danger our world is in today. I also criticize Facebook publicly and did at the meetings I attended there. Facebook has fucked up plenty lately and that’s why it needs oversight. At least they realize it.

When I defend internet platforms against what I see as media’s growing moral panic, irresponsible reporting, and conflict of interest, I’m defending the internet itself and the freedoms it affords from what I fear will be continuing regulation of our own speech and freedom. I don’t oppose regulation; I have been proposing what I see as reasonable regimes. But I worry about where a growing unholy alliance against the internet between the far right and technophobes in media will end.

That is why I attend meetings such as the ones that Facebook convenes and why I just spent two weeks in California meeting with both platform and newspaper executives, to try to build bridges and constructive relationships. That’s why I take Facebook’s effort to build its Oversight Board seriously, to hold them to account.

Indeed, as I sat in a conference room at Facebook hearing its plans, it occurred to me that journalism as a profession and news organizations individually would do well to follow this example. We in journalism have no oversight, having ousted most ombudsmen who tried to offer at least some self-reflection and -criticism (and having failed in the UK to come up with a press council that isn’t a sham). We journalists make no covenants with the public we serve. We refuse to acknowledge — as Facebook executives did acknowledge about their own company — our “trust deficit.”

We in journalism do love to give awards to each other. But we do not have a means to systematically identify and criticize bad journalism. That job has now fallen to, of all unlikely people, politicians, as Beto O’Rourke, Alexandria Ocasio-Cortez, and Julian Castro offer quite legitimate criticism of our field. It also falls to technologists, lawyers, and academics who have been appalled at, for example, The New York Times’ horrendously erroneous and dangerous coverage of Section 230, our best protection of freedom of expression on the internet in America. I’m delighted that CJR has hired independent ombudsmen for The Times, The Post, CNN, and MSNBC. But what about Fox and the rest of the field?

I’ve been wondering how one might structure an oversight board for journalism to take the place of all those lost ombudsmen, to take complaints about bad journalism, to deliberate thoughtful and constructive responses, and to build data about the journalistic performance and responsibility of specific outlets. That will be a discussion for another day, soon. But even with such a structure, journalism, too — and each news outlet — should offer covenants with the public containing their own promises and statements of higher goals. I don’t just mean following standards for behavior; I mean sharing our highest ambitions.

I think such covenants for Facebook (and social networks and internet platforms) and journalism would do well to start with the mission of journalism that I teach: to convene communities into respectful, informed, and productive conversation. Democracy is conversation. Journalism is — or should be — conversation. The internet is built for conversation. The institutions and companies that serve the public conversation should promise they will do everything in their power to serve and improve that conversation. So here is the beginning of the kind of covenant I would like to see from Facebook:

Facebook should promise to create a safe environment where people can share their stories with each other to build bridges to understanding and to make strangers less strange. (So should journalism.)

Facebook should promise to enable and empower new and diverse voices that have been deprived of privilege and power by existing, entrenched institutions. (Including journalism.)

Facebook should promise to build systems that reward positive, productive, useful, respectful behavior among communities. (So should journalism.)

Facebook should promise not to build mechanisms to polarize people and inflame conflict. (So should journalism.)

Facebook should promise to help inform conversations by providing the means to find reliable information. (Journalism should provide that information.)

Facebook should promise not to build its business upon and enable others to benefit from crass attempts to exploit attention. (So should the news and media industries.)

Facebook should warrant to protect and respect users’ privacy, agency, and dignity.

Facebook should recognize that malign actors will exploit weak systems of protection to drive people apart and so it should promise to guard against being used to manipulate and deceive. (So should journalism.)

Facebook should share data about its performance against these goals, about its impact on the public conversation, and about the health of that conversation with researchers. (If only journalism had such data to share.)

Facebook should build its business, its tools, its rewards, and its judgment of itself around new metrics that measure its contributions to the health and constructive vitality of the public conversation and the value it brings to communities and people’s lives. (So should journalism.)

Clearly, journalism’s covenants with the public should contain more: about investigating and holding power to account, about educating citizens and informing the public conversation, and more. That’s for another day. But here’s a start for both institutions. They have more in common than they know.

Regulating the net is regulating us

Here are three intertwined posts in one: a report from inside a workshop on Facebook’s Oversight Board; a follow-up on the working group on net regulation I’m part of; and a brief book report on Jeff Kosseff’s new and very good biography of Section 230, The Twenty-Six Words That Created the Internet.

Facebook’s Oversight Board

Last week, I was invited — with about 40 others from law, media, civil society, and the academy — to one of a half-dozen workshops Facebook is holding globally to grapple with the thicket of thorny questions associated with the external oversight board Mark Zuckerberg promised.

(Disclosures: I raised money for my school from Facebook. We are independent and I receive no compensation personally from any platform. The workshop was held under the Chatham House rule. I declined to sign an NDA and none was then required, but details about two real case studies were off the record.)

You may judge the oversight board as you like: as an earnest attempt to bring order and due process to Facebook’s moderation; as an effort by Facebook to slough off its responsibility onto outsiders; as a PR stunt. Through the two-day workshop, the group kept trying to find an analog for Facebook’s vision of this: Is it an appeals court, a small-claims court, a policy-setting legislature, an advisory council? Facebook said the board will have final say on content moderation appeals regarding Facebook and Instagram and will advise on policy. It’s two mints in one.

The devil is in the details. Who is appointed to the board and how? How diverse and by what definitions of diversity are the members of the board selected? Who brings cases to the board (Facebook? people whose content was taken down? people who complained about content? board members?)? How does the board decide what cases to hear? Does the board enforce Facebook policy or can it countermand it? How much access to data about cases and usage will the board have? How much authority will the board have to bring in experts and researchers and what access to data will they have? How does the board scale its decision-making when Facebook receives 3 million reports against content a day? How is consistency found among the decisions of three-member panels in the 40ish-member board? How can a single board in a single global company be consistent across a universe of cultural differences and sensitive to them? As is Facebook’s habit, the event was tightly scheduled with presentations and case studies and so — at least before I had to leave on day two — there was less open debate of these fascinating questions than I’d have liked.

Facebook starts with its 40 pages of community standards, updated about every two weeks, which are in essence its statutes. I recommend you look through them. They are thoughtful and detailed. For example:

A hate organization is defined as: Any association of three or more people that is organized under a name, sign or symbol and that has an ideology, statements or physical actions that attack individuals based on characteristics, including race, religious affiliation, nationality, ethnicity, gender, sex, sexual orientation, serious disease or disability.

At the workshop, we heard how a policy team sets these rules, how product teams create the tools around them, and how operations — with people in 20 offices around the world, working 24/7, in 50 languages — are trained to enforce them.

But rules — no matter how detailed — are proving insufficient to douse the fires around Facebook. Witness the case, only days after the workshop, of the manipulated Nancy Pelosi video and the subsequent cries for Facebook to take it down. I was amazed that so many smart people thought it was an easy matter for Facebook to take down the video because it was false, without acknowledging the precedent that would set, requiring Facebook henceforth to rule on the truth of everything everyone says on its platform — something no one should want. Facebook VP for Product Policy and Counterterrorism Monika Bickert (FYI: I interviewed her at a Facebook safety event the week before) said the company demoted the video in News Feed and added a warning to it. But that wasn’t enough for those out for Facebook’s hide, among them Damian Collins, the member of the UK Parliament who was responsible for the Commons report on the net I criticized here.

So by Collins’ standard, if UK politicians in his own party claim, as a matter of malicious political disinformation, that the country pays £350m per week to the EU that would be freed up for the National Health Service with Brexit, and journalists certify that to be “willful distortion,” should Facebook be required to take that statement down? Just asking. It’s not hard to see where this notion of banning falsity goes off the rails and has a deleterious impact on freedom of expression and political discussion.

But politicians want to take bites out of Facebook’s butt. They want to blame Facebook for the ill-informed state of political debate. They want to ignore their own culpability. They want to blame technology and technology companies for what people — citizens — are doing.

Ditto media. Here’s Kara Swisher tearing off her bit of Facebook flesh regarding the Pelosi video: “Would a broadcast network air this? Never. Would a newspaper publish it? Not without serious repercussions. Would a marketing campaign like this ever pass muster? False advertising.”

Sigh. The internet is not media. Facebook is not news (only 4% of what appears there is). What you see there is not content. It is conversation. The internet and Facebook are means for the vast majority of citizenry forever locked out of media and of politics to discuss whatever they want, whether you like it or not. Those who want to control that conversation are the privileged and powerful who resent competition from new voices.

By the way, media people: Beware what you wish for when you declare that platforms are media and that they must do this or that, for your wishes could blow back on you and open the door for governments and others to demand that media also erase that which someone declares to be false.

Facebook’s oversight board is trying to mollify its critics — and forestall regulation of it — by meeting their demands to regulate content. Therein lies its weakness, I think: regulating content.

Regulating Actors, Behaviors, or Content

A week before the Facebook workshop, I attended a second meeting of a Transatlantic High Level Working Group on Content Moderation and Freedom of Expression (read: regulation), which I wrote about earlier. At the first meeting, we looked at separating treatment of undesirable content (dealt with under community standards such as Facebook’s) from illegal content (which should be the purview of government and of an internet court; details on that proposal here).

At this second meeting, one of the brilliant members of the group (held under the Chatham House rule, so I can’t say who) proposed a fundamental shift in how to look at efforts to regulate the internet: an ABC rule separating actors from behaviors from content. (Here’s another take on the latest meeting from a participant.)

It took me time to understand this, but it became clear in our discussion that regulating content is a dangerous path. First, making content illegal is making speech illegal. As long as we have a First Amendment and a Section 230 (more on that below) in the United States, that is a fraught notion. In the UK, the government recently released an Online Harms White Paper that demonstrates just how dangerous the idea of regulating content can be. The white paper wants to require — under pain of huge financial penalty for companies and executives — that platforms exercise a duty of care to take down “threats to our way of life” that include not only illegal and harmful content (child porn, terrorism) but also legal and harmful content (including trolling [please define] and disinformation [see above]). Can’t they see that government requiring the takedown of legal content makes it illegal? Can’t they see that by not defining harmful content, they put a chill on all speech? For an excellent takedown of the report, see this post by Graham Smith, who says that what the white paper proposes is impossibly vague. He writes:

‘Harm’ as such has no identifiable boundaries, at least none that would pass a legislative certainty test.

This is particularly evident in the White Paper’s discussion of Disinformation. In the context of anti-vaccination the White Paper notes that “Inaccurate information, regardless of intent, can be harmful”.

Having equated inaccuracy with harm, the White Paper contradictorily claims that the regulator and its online intermediary proxies can protect users from harm without policing truth or accuracy…

See: This is the problem when you try to identify, regulate, and eliminate bad content. Smith concludes: “This is a mechanism for control of individual speech such as would not be contemplated offline and is fundamentally unsuited to what individuals do and say online.” Never mind the common analogy to regulation of broadcast. Would we ever suffer such talk about regulating the contents of bookstores or newspapers or — more to the point — conversations in the corner bar?

What becomes clear is that these regulatory methods — private (at Facebook) and public (in the UK and across Europe) — are aimed not at content but ultimately at behavior, only they don’t say so. It is nearly impossible to judge content in isolation. For example, my liberal world is screaming about the slow-Pelosi video. But then what about this video from three years ago?

What makes one abhorrent and one funny? The eye of the beholder? The intent of the creator? Both. Thus content can’t be judged on its own. Context matters. Motive matters. But who is to judge intent and impact and how?

The problem is that politicians and media do not like certain behavior by certain citizens. They cannot figure out how to regulate it at scale (and would prefer not to make the often unpopular decisions required), so they assign the task to intermediaries — platforms. Pols also cannot figure out how to define the bad behavior they want to forbid, so they decide instead to turn an act into a thing — content — and outlaw that under vague rules they expect intermediaries to enforce … or else.

The intermediaries, in turn, cannot figure out how to take this task on at scale and without risk. In an excellent Harvard Law Review paper called The New Governors: The People, Rules, and Processes Governing Online Speech, legal scholar Kate Klonick explains that the platforms began by setting standards. Facebook’s early content moderation guide was a page long, “so it was things like Hitler and naked people,” says early Facebook community exec Dave Willner. Charlotte Willner, who worked in customer service then (they’re now married), said moderators were told “if it makes you feel bad in your gut, then go ahead and take it down.” But standards — or statements of values — don’t scale as they are “often vague and open ended” and can be “subject to arbitrary and/or prejudiced enforcement.” And algos don’t grok values. So the platforms had to shift from standards to rules. “Rules are comparatively cheap and easy to enforce,” says Klonick, “but they can be over- and underinclusive and, thus, can lead to unfair results. Rules permit little discretion and in this sense limit the whims of decisionmakers, but they also can contain gaps and conflicts, creating complexity and litigation.” That’s where we are today. Thus Facebook’s systems, algorithmic and human, followed its rules when they came across the historic photo of a child in a napalm attack. Child? Check. Naked? Check. At risk? Check. Take it down. The rules and the systems of enforcement could not cope with the idea that what was indecent in that photo was the napalm.
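
To make the standards-versus-rules distinction concrete, here is a minimal, purely illustrative sketch (not Facebook’s actual system; the attribute names and the rule itself are invented for this example) of how a checklist-style rule is trivial to enforce at scale yet blind to the context a standard would ask a human to weigh:

```python
# Purely illustrative toy: a checklist-style moderation rule.
# The attributes and the rule are hypothetical, not Facebook's real logic.

from dataclasses import dataclass

@dataclass
class Post:
    description: str
    depicts_minor: bool
    depicts_nudity: bool
    depicts_person_at_risk: bool
    historical_or_newsworthy: bool  # context a rigid rule never consults

def rule_based_decision(post: Post) -> str:
    # Rules scale because each check is a cheap yes/no box.
    if post.depicts_minor and post.depicts_nudity and post.depicts_person_at_risk:
        return "take down"  # Child? Check. Naked? Check. At risk? Check.
    return "leave up"

napalm_photo = Post(
    description="1972 news photo of a child fleeing a napalm attack",
    depicts_minor=True,
    depicts_nudity=True,
    depicts_person_at_risk=True,
    historical_or_newsworthy=True,  # the context that makes the photo matter
)

print(rule_based_decision(napalm_photo))  # -> "take down"
```

The point of the toy is only this: the rule fires on the attributes it can check, and newsworthiness never enters the decision; restoring it requires judgment of context, which is exactly what rules trade away for scale.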

Thus the platforms found their rule-led moderators and especially their algorithms needed nuance. Thus the proposal for Facebook’s Oversight Board. Thus the proposal for internet courts. These are attempts to bring human judgment back into the process. They attempt to bring back the context that standards provide over rules. As they do their work, I predict these boards and courts will inevitably shift from debating the acceptability of speech to trying to discern the intent of speakers and the impact on listeners. They won’t be regulating a thing: content. They will be regulating the behavior of actors: us.

There are additional weaknesses to the rules-based, content-based approach. One is that community standards are rarely set by the communities themselves; they are imposed on communities by companies. How could it be otherwise? I remember long ago that Zuckerberg proposed creating a crowdsourced constitution for Facebook but that quickly proved unwieldy. I still wonder whether there are creative ways to get intentional and explicit judgments from communities as to what is and isn’t acceptable for them — if not in a global service, then user-by-user or community-by-community. A second weakness of the community standards approach is that these rules bind users but not platforms. I argued in a prior post that platforms should create two-way covenants with their communities, making assurances of what the company will deliver so it can be held accountable.

Earlier this month, the French government proposed an admirably sensible scheme for regulation that tries to address a few of those issues. French authorities spent months embedded in Facebook in a role-playing exercise to understand how they could regulate the platform. I met a regulator in charge of this effort and was impressed with his nuanced, sensible, smart, and calm sense of the task. The proposal does not want to regulate content directly — as the Germans do with their hate speech law, called NetzDG, and as the Brits propose to do going after online harms.

Instead, the French want to hold the platforms accountable for enforcing the standards and promises they set: say what you do, do what you say. That enables each platform and community to have its own appropriate standards (Reddit ain’t Facebook). It motivates platforms to work with their users to set standards. It enables government and civil society to consult on how standards are set. It requires platforms to provide data about their performance and impact to regulators as well as researchers. And it holds companies accountable for whether they do what they say they will do. It enables the platforms to still self-regulate and brings credibility through transparency to those efforts. Though simpler than other schemes, this is still complex, as the world’s most complicated PowerPoint slide illustrates.

I disagree with some of what the French argue. They call the platforms media (see my argument above). They also want to regulate only the three to five largest social platforms — Facebook, YouTube, Twitter — because they have greater impact (and because that’s easier for the regulators). Except as soon as certain groups are shooed out of those big platforms, they will dig into small platforms, feeling marginalized and perhaps radicalized, and do their damage from there. The French think some of those sites are toxic and can’t be regulated.

All of these efforts — Facebook’s oversight board, the French regulator, any proposed internet court — need to be undertaken with a clear understanding of the complexity, size, and speed of the task. I do not buy cynical arguments that social platforms want terrorism and hate speech kept up because they make money on it; bull. In Facebook’s workshop and in discussions with people at various of the platforms, I’ve gained respect for the difficulty of their work and the sincerity of their efforts. I recommend Klonick’s paper as she attempts to start with an understanding of what these companies do, arguing that

platforms have created a voluntary system of self-regulation because they are economically motivated to create a hospitable environment for their users in order to incentivize engagement. This regulation involves both reflecting the norms of their users around speech as well as keeping as much speech as possible. Online platforms also self-regulate for reasons of social and corporate responsibility, which in turn reflect free speech norms.

She quotes Lawrence Lessig predicting that a “code of cyberspace, defining the freedoms and controls of cyberspace, will be built. About that there can be no doubt. But by whom, and with what values? That is the only choice we have left to make.”

And we’re not done making it. I think we will end up with a many-tiered approach, including:

  1. Community standards that govern matters of acceptable and unacceptable behavior. I hope they are made with more community input.
  2. Platform covenants that make warranties to users, the public, and government about what they will endeavor to deliver in a safe and hospitable environment, protecting users’ human rights.
  3. Algorithmic means of identifying potentially violating behavior at scale.
  4. Human appeals that operate like small claims courts.
  5. High-level oversight boards that rule and advise on policy.
  6. Regulators that hold companies accountable for the guarantees they make.
  7. National internet courts that rule in public, with due process, on questions of legality in takedowns. Companies should not be forced to judge legality.
  8. Legacy courts to deal with matters of illegal behavior. Note that platforms often judge a complaint first against their terms of service and issue a takedown before reaching questions about illegality, meaning that the miscreants who engage in that illegal behavior are not reported to authorities. I expect that governments will complain platforms aren’t doing enough of their policing — and that platforms will complain that’s government’s job.

Numbers 1–5 occur on the private, company side; the rest must be the work of government. Klonick calls the platforms “the New Governors,” explaining that

online speech platforms sit between the state and speakers and publishers. They have the role of empowering both individual speakers and publishers … and their transnational private infrastructure tempers the power of the state to censor. These New Governors have profoundly equalized access to speech publication, centralized decentralized communities, opened vast new resources of communal knowledge, and created infinite ways to spread culture. Digital speech has created a global democratic culture, and the New Governors are the architects of the governance structure that runs it.

What we are seeking is a structure of checks and balances. We need to protect the human rights of citizens to speak and to be shielded from such behaviors as harassment, threat, and malign manipulation (whether by political or economic actors). We need to govern the power of the New Governors. We also need to protect the platforms from government censorship and legal harassment. That’s why we in America have Section 230.

Section 230 and ‘The Twenty-Six Words that Created the Internet’

We are having this debate at all because we have the “online speech platforms,” as Klonick calls them — and we have those platforms thanks to the protection given to technology companies as well as others (including old-fashioned publishers that go online) by Section 230, a law written by Oregon Sen. Ron Wyden (D) and former California Rep. Chris Cox (R) and passed as part of the 1996 telecommunications reform. Jeff Kosseff wrote an excellent biography of the law that pays tribute to these 26 words in it:

No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.

Those words give online companies safe harbor from legal liability for what other people say on their sites and services. Without that protection, online site operators would have been motivated to cut off discussion and creativity by the public. Without 230, I doubt we would have Facebook, Twitter, Wikipedia, YouTube, Reddit, news comment sections, blog platforms, even blog comments. “The internet,” Kosseff writes, “would be little more than an electronic version of a traditional newspaper or TV station, with all the words, pictures, and videos provided by a company and little interaction among users.” Media might wish for that. I don’t.

In Wyden’s view, the 26 words give online companies not only this shield but also a sword: the power and freedom to moderate conversation on their sites and platforms. Before Section 230, a Prodigy case held that if an online proprietor moderated conversation and failed to catch something bad, the operator would be more liable than if it had not moderated at all. Section 230 reversed that so that online companies would be free to moderate without moderating perfectly — a necessity to encourage moderation at scale. Lately, Wyden has pushed the platforms to use their sword more.

In the debate on 230 on the House floor, Cox said his law “will establish as the policy of the United States that we do not wish to have content regulation by the Federal Government of what is on the internet, that we do not wish to have a Federal Computer Commission with an army of bureaucrats regulating the internet….”

In his book, Kosseff takes us through the prehistory of 230 and why it was necessary, then the case law of how 230 has been tested again and again and, so far, survived.

But Section 230 is at risk from many quarters. From the far right, we hear Trump and his cultists whine that they are being discriminated against because their hateful disinformation (see: Infowars) is being taken down. From the left, we see liberals and media gang up on the platforms in a fit of what I see as moral panic to blame them for every ill in the public conversation (ignoring politicians’ and media’s fault). Thus they call for regulating and breaking up technology companies. In Europe, countries are holding the platforms — and their executives and potentially even their technologists — liable for what the public does through their technology. In other nations — China, Iran, Russia — governments are directly controlling the public conversation.

So Section 230 stands alone. It has suffered one slice in the form of the FOSTA/SESTA ban on online sex trafficking. In a visit to the Senate with the regulation working group I wrote about above, I heard a staffer warn that there could be further carve-outs regarding opioids, bullying, political extremism, and more. Meanwhile, the platforms themselves didn’t have the guts to testify in defense of 230 and against FOSTA/SESTA (who wants to seem to be on the other side of banning sex trafficking?). If these companies will not defend the internet, who will? No, Facebook and Google are not the internet. But what you do to them, you do to the net.

I worry for the future of the net and thus of the public conversation it enables. That is why I take so seriously the issues I outline above. If Section 230 is crippled; if the UK succeeds in demanding that Facebook ban undefined harmful but legal content; if Europe’s right to be forgotten expands; if France and Singapore lead to the spread of “fake news” laws that require platforms to adjudicate truth; if the authoritarian net of China and Iran continues to spread to Russia, Turkey, Hungary, the Philippines, and beyond; if …

If protections of the public conversation on the net are killed, then the public conversation will suffer and voices who could never be heard in big, old media and in big, old, top-down institutions like politics will be silenced again, which is precisely what those who used to control the conversation want. We’re in early days, friends. After five centuries of the Gutenberg era, society is just starting to relearn how to hold a conversation with itself. We need time, through fits and starts, good times and bad, to figure that out. We need our freedom protected.

Without online speech platforms and their protection and freedom, I do not think we would have had #metoo or #blacklivesmatter or #livingwhileblack. Just to see one example of what hashtags as platforms have enabled, please watch this brilliant talk by Baratunde Thurston and worry about what we could be missing.

None of this is simple and so I distrust all the politicians and columnists who think they have simple solutions: Just make Facebook kill this or Twitter that or make them pay or break them up. That’s simplistic, naive, dangerous, and destructive. This is hard. Democracy is hard.