Facebook: Constitution before statutes

Constitution of America, We the People.

The Facebook Oversight Board is now open for cases and I look forward to seeing the results. But I have the same question I’ve had since the planning for its creation began, and I asked that question in a web call today with board leadership:

What higher principles will the Board call upon in making its decisions? It will be ruling on Facebook’s content decisions based on the company’s own statutes — that is, the “community standards” Facebook sets for the community. 

The Board says it will also decide cases on the basis of international human rights standards. This could mean the board might find that Facebook correctly enforced its statute but that the statute violates a principle of human rights, which would result in a policy recommendation to Facebook. Good. 

But there remains a huge gap between community statutes and international human rights law. What is missing, I have argued, is a Constitution for Facebook: a statement of why it exists, what kind of community it wants to serve, what it expects of its community, in short: a north star. That doesn’t exist. 

But the Oversight Board might — whether it and Facebook know it or not — end up writing that Constitution, one in the English model, set by precedent, rather than the American model, set down in a document. That will be primarily in Facebook’s control. Though the Oversight Board can pose policy questions and make recommendations, it is limited by what cases come its way — from users and Facebook — and it does not set policy for the company; it only decides appeals and makes policy recommendations. 

It’s up to Facebook to decide how it treats the larger policy questions raised by the Oversight Board and the cases. In reacting to recommendations, Facebook can begin to build a set of principles that in turn begin to define Facebook’s raison d’être, its higher goals, its north star, its Constitution. That’s what I’ve told people at Facebook I want to see happen. 

The problem is, that’s not how Facebook or any of the technology companies think. Since, as Larry Lessig famously decreed, code is law, what the technologists want is rules — laws — to feed their code — their algorithms — to make consistent decisions at scale. 

The core problem of the technology companies and their relationship with society today is that they do not test that code and the laws behind it against higher principles other than posters on the wall: “Don’t be evil.” “Move fast and break things.” Those do not make for a good Constitution. 

But now is their chance to create one. And now, perhaps, is our chance. I didn’t realize that every Oversight Board case will begin with a public comment period. So we can raise issues with the Board. Indeed, community standards should come from the community, damnit, or they’re not community standards; they’re company standards. So we should speak up. 

And the Board will consult experts. They can raise issues with the Board. And the Board can, in turn, raise issues not just for Facebook but, by example, for all the technology companies. That discussion could be useful. 

Imagine if — as I so wish had been the case — the Board had been in operation when Twitter and Facebook decided what to do about blocking the blatant attempt at election interference by the New York Post and Rupert Murdoch in cahoots with Rudy Giuliani. The Board could have raised, addressed, and proposed policy recommendations based on principles useful to many internet companies and to the media that love to poke them. 

Regulators could also get involved productively more than punitively. I was a member of a Transatlantic Working Group on Content Moderation and Freedom of Expression, which recommended a flexible framework for regulation that would have government hold companies accountable for their own assurances, requiring the companies to share data on usage and impact so researchers and regulators can monitor their performance. This, in my view, would be far better than government trying to tell companies how to operate, especially when it comes to interference in free speech. But government can’t hold companies accountable to keeping promises if there are no promises to keep. A Constitution is a promise, a covenant with users and the public. Every company should have one. Every company should be held accountable for meeting its requirements. And the public discussion should revolve around those principles, not around whether Johnny is allowed to use a bad word. 

I make no predictions here. The Board could end up answering a handful of picayune complaints among tens of thousands of possible cases a week and produce the script of an online soap opera. Facebook could follow the letter of the law set down by the Board and miss the opportunity to set higher goals. Media, experts, and the public could be ignored or, worse, could just continue to snipe instead of contribute constructively. 

But I can hope. The net is young. We — all of us — are still designing it by how we use it. 

Attacks on the People’s Press

Donald Trump’s war on TikTok in the U.S. and Rupert Murdoch’s on Facebook in Australia are not being seen for their true import: as government attacks on the people’s press, on freedom of expression, on human rights. 

In Australia, Facebook just said that if Murdoch-backed legislation requiring platforms to pay for news is enacted, the company will stop media companies — and users — from posting news on Facebook and Instagram.

Who is hurt there? The public and its conversation. The public loses access to its means of sharing and debating news. Never before in history — never before the internet — has everyone had access to a press; only the privileged had it and now the privileged will rob the people of theirs. Without the people’s press, we would not have #BlackLivesMatter, #MeToo, #OccupyWallStreet and the voices of so many too long not heard. This is a matter of human rights. 

The Australian legislation is a cynical mess. It is bald protectionism by Murdoch and the old, corporate press, requiring platforms to “negotiate” with guns to their heads for the privilege of quoting, promoting, and sending traffic, audience, and tremendous value to news sites. It is illogical. Facebook, Google, et al. did not steal a penny from old media. They competed. To say that Facebook owes newspapers is a white plutocrat’s regressive view of reparations; by this logic Amazon owes Walmart, who owes A&P, who owes the descendants of Luigi’s corner grocery, who owes a pushcart vegetable vendor on Hester Street. Facebook owes news nothing. 

This is a case of outrageous regulatory capture on Murdoch’s part. He doesn’t give a rat’s ass about news and informed democracy. He, more than any human being alive, has been the scourge of democracy in the English-speaking world. The Australian legislation aims to give money only to large publishers, like Murdoch. If Facebook makes good on its threat and bans news, then the news business as a whole will suffer but the largest players in the field, who have brand recognition — i.e., Murdoch — will gain market share over smaller and newer competitors. Murdoch will be even freer to spread his propaganda. This is an attempt by the old press to impose a Stamp Tax on the new. Facebook is right to resist, just as Google was when Spain imposed its Stamp Tax on links (and Google News left the country). 

Now to Trump’s war on TikTok. This, too, is a matter of freedom of expression. TikTok is, to my mind, the first platform to begin to make us rethink media and the line separating producer and audience, for TikTok is a collaborative platform where people do not just comment on each other’s content but create together. It is the one social network that Trump and his cultists have not managed to game. It is the platform that has enabled Sarah Cooper and countless citizens to mock Trump. So he hates it and wants to abuse his power to kill it. 

If TikTok goes because of government fiat, so goes Sarah Cooper’s ability to criticize the man who killed it. What could be a clearer violation of the First Amendment? Why is no one screaming this? It’s because, I think, the old press still thinks the meaning of the “press” is a machine that spreads ink. No. The internet is the people’s press. It is a machine that spreads power. 

Keep in mind that none of these platforms was built for news, and their lives would all, frankly, be easier without it and the controversy and advertiser repellent it brings. Facebook was built for hookups and party pix. The people decided to use it to share and discuss news. Twitter was built to tell friends where you were drinking. The people decided to use it to share what they witness with the world, to discuss public policy, and to organize movements. Google was built to find web sites, not news, but it added the ability to find news when the people showed they wanted that. YouTube was built to stream silly videos. The people decided they would use it for everything from education to news. TikTok was built to lip-sync music. The people decided they would use it to mock the fool in the White House. 

In every case, media could have built what the platforms did. They could have provided people a place to share what they witness and discuss public issues; instead, they provided dark, dank, neglected corners in which to comment on the journalist’s content. They could have provided a place for communities to meet, gather together, to share, to assemble and act. They did not. They could have provided a place for creators to collaborate but instead they care only about their own creation. News media blew every opportunity. Their publics— their readers, viewers, listeners, users, customers — went elsewhere to take advantage of the power the internet offered them. Platforms shared that power with the public. Publishers did not. The platforms owe the publishers nothing. The publishers owe their publics apologies. 

Now, of course, cynical Murdoch and his media mates found an ideal foil in Mark Zuckerberg because, these days, nobody likes Mark, right? Why is that? In part, of course, it’s because Mark is incredibly rich and not terribly telegenic and because he cannot control the bucking bronco he is riding. But it is also because of media’s narrative about him: that he is suddenly the cause of societal ills that have been around since man learned to talk. Please keep in mind when you read media stories about Facebook that even if subconsciously, reporters are writing from a position of jealous conflict of interest. Murdoch, more than any publisher this side of Germany, has sicced his troops on Facebook, Google, Twitter, and the internet, which they believe has robbed them of their manifest destiny and dollars. 

Necessary disclosure: Facebook has funded projects related to disinformation and news at my school, some of them reaching an end. I receive nothing personally from Facebook or any technology company, other than free drinks at the conferences they hold to help the news industry. I am accused of defending Facebook, though Facebook does not always make it easy to defend and I’m often critical of it. What I am defending is the internet and the power it gives citizens at last. What I am defending is the people’s press. 

I would like to hear First Amendment lawyers and scholars in the U.S. and human-rights advocates the world around defend the people’s press from attacks in the Philippines, Russia, China, Hong Kong, Hungary, Turkey, Belarus, Brazil — and in the United States and Australia. 

None of this is new. Every time there is a new technology that enables more people to speak, those who controlled the old technology — and the power it afforded — try to prevent the people they see as interlopers from sharing that power. It happened when the scribe Filippo de Strata tried to convince the doge of Venice to outlaw the press and the drunken Germans who brought it to Italy. Princes tried to grant printing monopolies to allies. Popes and kings and autocrats of late banned and burned books and the people who wrote them. England had the Stationers’ Company license and censor authorized publishing. Charles II tried to close coffeehouses to shut off the discussion of news in them. American newspaper publishers tried to have new radio competitors banned from broadcasting news. Each time, eventually, they lost. For speech will out. 

Teapot and lid. Left side is marked “America: Liberty Restored” and right side is marked “No Stamp Act.”

Mark Zuckerberg: Now is the time for your Oversight Board

Like Mark Zuckerberg, I defend freedom of expression. Two days ago, I wrote this post about the value of hearing many voices, about history’s lessons regarding the protection of speech.

But Donald Trump’s unfettered use of Facebook to sow division and encourage violence is not a matter of freedom of expression. There is no requirement that Facebook be his platform for noxious speech. This is a question of what Facebook stands for and what Mark Zuckerberg stands for. As I have asked before, what is Facebook’s North Star? Why does it exist?

Now is the moment for Facebook to convene its new Oversight Board — or for that board to convene itself to deliberate the issues raised and standards required to address this challenge. I don’t care that the systems and bureaucracy are not in place. This is urgent. Get on Zoom. If this independent Board does not meet on this issue of all issues, then why does it exist?

The Board has 20 smart and experienced members: leaders in freedom of expression and human rights, a former prime minister, a former Guardian editor (my friend, Alan Rusbridger), a Nobel prize winner. I would make a bad member of the Board (I was not asked) for if I were there I would be doing just what I am doing here: arguing in public for a public discussion at this critical time to deliberate Facebook’s public responsibility.

The Board isn’t necessary to do that. Facebook’s employees are starting to rise up to make their dissent heard. Zuckerberg can decide on his own or with the help of his Oversight Board, his employees, his users, and the public. But he can no longer not decide.

What is that decision? Perhaps to illustrate the choice it’s easier to take this out of the high-minded realm of freedom of expression and democracy, for that is where the company trips over itself. If Facebook did not exist tomorrow, we would find other ways to express ourselves.

Instead, try thinking of Facebook as a dinner at Mark Zuckerberg’s house. Let’s say that Donald Trump shows up. Donald starts insulting the other guests, shouting that he will bring violence down upon the heads of people who criticize him; blaming the troubles in this country on the Chinese; insulting African-Americans by insisting racists like them; attacking the journalists in the room, shouting that they’re all fake and enemies of the people. What is the host to do — and Mark Zuckerberg is undoubtedly the host? I would expect a host to ask rude Donald to leave. What are the guests to do? I would leave and never return.

So I repeat: Why does Facebook exist? Does it not have a vision for a better neighborhood, a connected world? How does it ever get there if it does not set an example? Does it have no norms of respectfulness? I don’t mean its statutes, its community standards; I mean an ethic, a moral foundation.

In disclosure, Facebook has contributed to my school to undertake various activities, including supporting others’ work around disinformation (I receive nothing personally from Facebook). I advocate that the news industry should work with Facebook, Google, Twitter, and other technology companies because I do not believe we can go our own way anymore; that is the path to obscurity. I defend the platforms against ill-conceived regulation for I worry about its impact on the net and our freedoms there. I think of myself as a defender of speech and thus a friend of the internet. Others call me a friend of the platforms. OK, then, friends tell friends when they’re screwing up. I’ve done that before and I’ll do it now.

Facebook: It is time to listen to friends and foes and reconsider what you are here to do. It is time to stop hiding behind freedom of expression, especially as Donald Trump threatens that very freedom. It is time to have the courage to stand for something. What do you stand for?

I was glad that Medium killed an ill-informed post about COVID by an armchair epidemiologist. I support Twitter’s decisions to begin to add warnings to, stop promoting, and fact-check Donald Trump’s tweets. Those are just starts, but they are starts. I will not let Google off the hook, for YouTube has much to do as well.

Facebook needs to take a stand against Donald Trump’s racism, incitement, and lies. It cannot stand apart any longer. Our nation is burning. Yes, I am saying this now that it’s my nation on fire. Should I have raised my voice sooner and louder when other nations burned: Myanmar, the Philippines? Yes.

What do I want Facebook to do? Not much, actually. I don’t think Facebook should necessarily kill Trump’s account, for Zuckerberg has a point that citizens should see what their head of state is saying. I don’t think the internet is media nor do I believe that Facebook is a publisher or editor responsible for his words; I say it’s pointless to fact-check Trump. What I do want is for Facebook to separate itself from his vile behavior. Facebook should say: We do not agree. We do not approve. We say this is wrong.

If it does not, by its silence and with its power, it endorses what Trump is saying and becomes his willing agent — every bit as much as when a major newspaper quotes Trump’s posts and tweets without telling its users when he is lying and calling on his racist allies, and every bit as much as Republicans enabling him for their ends.

Trump attacked women and you did not protest. Trump went after immigrants and you did not stop him. Trump came for African-Americans and you stood back. Now Trump is coming for you, technology companies. He is attacking Section 230, the best protection we have for the freedom of expression you all say you hold dear. Will you stand up for that and your users? That should be easy. Will you then stand up for your users who are women and immigrants and African-American? What will you stand for?

In defense of targeting

In defending targeting, I am not defending Facebook, I am attacking mass media and what its business model has done to democracy — including on Facebook.

With targeting, a small business, a new candidate, a nascent movement can efficiently and inexpensively reach people who would be interested in their messages so they may transact or assemble and act. Targeted advertising delivers greater relevance at lower cost and democratizes advertising for those who could not previously afford it, whether that is to sell homemade jam or organize a march for equality.* Targeting has been the holy grail all media and all advertisers have sought since I’ve been in the business. But mass media could never accomplish it, offering only crude approximations like “zoned” newspaper and magazine editions in my day or cringeworthy buys for impotence ads on the evening news now. The internet, of course, changed that.

Without targeting, we are left with mass media — at the extreme, Super Bowl commercials — and the people who can afford them: billionaires and those loved by them. Without targeting, big money will forever be in charge of commerce and politics. Targeting is an antidote.

With the mass-media business model, the same message is delivered to all without regard for relevance. The clutter that results means every ad screams, cajoles, and fibs for attention and every media business cries for the opportunity to grab attention for its advertisers, and we are led inevitably to cats and Kardashians. That is the attention-advertising market mass media created and it is the business model the internet apes so long as it values, measures, and charges for attention alone.

Facebook and the scareword “microtargeting” are blamed for Trump. But listen to Andrew Bosworth, who knows whereof he speaks, as he managed advertising on Facebook during the 2016 election. In a private post made public, he said:

So was Facebook responsible for Donald Trump getting elected? I think the answer is yes, but not for the reasons anyone thinks. He didn’t get elected because of Russia or misinformation or Cambridge Analytica. He got elected because he ran the single best digital ad campaign I’ve ever seen from any advertiser. Period….

They weren’t running misinformation or hoaxes. They weren’t microtargeting or saying different things to different people. They just used the tools we had to show the right creative to each person.

I disagree with him about Facebook deserving full blame or credit for electing Trump; that’s a bit of corporate hubris on the part of him and Facebook, touting the power of what they sell. But he makes an important point: Trump’s people made better use of the tools than their competitors, who had access to the same tools and the same help with them.

But they’re just tools. Bad guys and pornographers tend to be the first to exploit new tools and opportunities because they are smart and devious and cheap. Trump used it to sell the ultimate elixir: anger. Cambridge Analytica acted as if it were brilliant at using these tools, but as Bosworth also says in the post — and as every single campaign data expert I know has said — CA was pure bullshit and did not sway so much as a dandelion in the wind in 2016. Says Bosworth: “This was pure snake oil and we knew it; their ads performed no better than any other marketing partner (and in many cases performed worse).” But the involvement of evil CA and its evil backers and clients fed the media narrative of moral panic about the corruption and damnation of microtargeting.

Hillary Clinton &co. could have used the same tools well and at the time — and still — I have lamented that they did not. They relied on traditional presumptions about campaigning and media and the culture in a changed world. Richard Nixon was the first to make smart use of direct mail — targeting! — and then everyone learned how to. Trump &co. used targeting well and in this election I as sure as hell hope his many opponents have learned the lesson.

Unless, that is, well-meaning crusaders take that tool away by demonizing and even banning micro — call it effective — targeting. I have sat in too many rooms with too many of these folks who think that there is a single devil and that a single messiah can rescue us all. I call this moral panic because it fits Ashley Crossman’s excellent definition of it:

A moral panic is a widespread fear, most often an irrational one, that someone or something is a threat to the values, safety, and interests of a community or society at large. Typically, a moral panic is perpetuated by news media, fueled by politicians, and often results in the passage of new laws or policies that target the source of the panic. In this way, moral panic can foster increased social control.

The corollary is moral messianism: that outlawing this one thing will solve it all. I’ve heard lots of people proclaiming that microtargeting and targeting — as well as the data that powers it — should be banned. (“Data” has also become a scare word, which scares me, for data are information.) We’ve also seen media — in cahoots with similarly threatened legacy politicians — gang up on Facebook and Google for their power to target because media have been too damned stubborn and stupid, lo these two decades, to finally learn how to use the net to respect and serve people as individuals, not a mass, and learn information about people to deliver greater relevance and value for both users and advertisers. I wrote a book arguing for this strategy and tried to convince every media executive I know to compete with the platforms by building their own focused products to gather their own first-party data to offer advertisers their own efficient and effective value and to collaborate as an industry to do this. Instead, the industry prefers to whine. Mass media must mass.

Over the years, every time I’ve said that the net could enable a positive, I’ve been accused of technological determinism. Funny thing is, it’s the dystopians who are the determinists, for they believe that a technology corrupts people. It is patronizing, paternalistic, and insulting to the public, and it robs them of agency, to believe they can be transformed from decent, civilized human beings into raging lunatics and idiots by exposure to a Facebook ad. If we believe that and believe our problems are so easily fixed, then we miss the real problems this country has: its long-standing racism; media’s exploitation and fueling of conflict and fear; and growing anti-intellectualism and hostility to education.

We also need to fix advertising — in mass media and on the internet in the platforms, especially on Facebook. Advertising needs to shift from mass-media measures of audience and attention and clicks to value-based measures of relevance and utility and efficacy — which will only occur with, yes, targeting. It also must become transparent, making clear who is advertising to us (Facebook may confirm the identity of an advertiser but that confirmed information is not shared with us) and on what basis we are being targeted (Facebook reveals only rough demographics, not targeting goals) and giving us the power to have some control over what we are shown. Rather than ban political advertising, I wish Twitter had endeavored to fix how advertising works. 

I hear the more extreme moral messianists say their cure is to ban advertising. That’s not only naive, it’s dangerous, for without advertising journalists will starve and we will return to the age of the Medicis and their avvisi: private information for the privileged few who can afford it. Paywalls are no paradise.

What’s really happening here — and this is a post and a book for another day — is a reflexive desire to control speech. I’ve been doing a lot of reading lately about the spread of printing in early-modern Europe and I am struck by how every attempt to control the press and outlaw forms of speech failed and backfired. At some point, we must have faith in our fellow citizens and shift our attention from playing Whac-a-Mole with the bad guys to instead finding, sharing, and supporting expertise, education, authority, intelligence, and quality so we can have a healthy, deliberative democracy in a marketplace of ideas. The alternatives are all worse.

* I leave you with a few ads I found in Facebook’s library that could work only via targeting and never on expensive mass media: the newspaper, TV, or radio. I searched on “march.”

When you eliminate targeting, you risk silencing these movements.

Opening photo credit and link: https://wellcomecollection.org/works/wagakkh5

Governance: Facebook designs its oversight board (should journalism?)

Facebook is devoting impressive resources — months of time and untold millions of dollars — to developing systems of governance, of its users and of itself, raising fascinating questions about who governs whom according to what rules and principles, with what accountability. I’d like to ask similar questions about journalism.

I just spent a day at Facebook’s fallen skyscraper of a headquarters attending one of the last of more than two dozen workshops it has held to solicit input on its plans to start an Oversight Board. [Disclosures: Facebook paid for participants’ hotel rooms and I’ve raised money from Facebook for my school.] Weeks ago, I attended another such meeting in New York. In that time, the concept has advanced considerably. Most importantly, in New York, the participants were worried that the board would be merely an appeals court for disputes over content take-downs. Now it is clear that Facebook knows such a board must advise and openly press Facebook on bigger policy issues.

Facebook’s team showed the latest group of academics and others a near-final draft of a board charter (which will be released in a few weeks, in 20-plus languages). They are working on by-laws and finalizing legal structures for independence. They’ve thought through myriad details about how cases will rise (from users and Facebook) and be taken up by the board (at the board’s discretion); about conflict resolution and consensus; about transparency in board membership but anonymity in board decisions; about how members will be selected (after the first members join, the board will select its own members); about what the board will start with (content takedowns) and what it can tackle later (content demotion and taking down users, pages, groups — and ads); about how to deal with GDPR and other privacy regulation in sharing information about cases with the board; about how the board’s precedents will be considered but will not prevent the board from changing its mind; even about how other platforms could join the effort. They have grappled with most every structural, procedural, and legal question the 2,000 people they’ve consulted could imagine.

But as I sat there I saw something missing: the larger goal and soul of the effort and thus of the company and the communities it wants to foster. They have structured this effort around a belief, which I share, in the value of freedom of expression, and the need — recognized too late — to find ways to monitor and constrain that freedom when it is abused and used to abuse. But that is largely a negative: how and why speech (or as Facebook, media, and regulators all unfortunately refer to it: content) will be limited.

Facebook’s Community Standards — in essence, the statutes the Oversight Board will interpret, enforce, and suggest revisions to — are similarly expressed in the negative: what speech is not allowed and how the platform can maintain safety and promote voice and equality among its users by dealing with violations. In its Community Standards (set by Facebook and not by the community, by the way), there are nods to higher ends — sharing stories, seeing the world through others’ eyes, diversity, equity, empowerment. But then the Community Standards become a document about what users should not do. And none of the documents says much, if anything, about Facebook’s own obligations.

So in California, I wondered aloud what principles the Oversight Board would call upon in its decisions. More crucially, I wondered whom the board is meant to serve and represent: does it operate in loco civitas (in place of the community), publico (public), imperium (government and regulators), or Deus, (God — that is, higher ethics and standards)? [Anybody with better schooling than I had, please correct my effort at Latin.]

I think these documents, this effort, and this company — along with other tech companies — need a set of principles that should set forth:

  • Higher goals. Why are people coming to Facebook? What do they want to create? What does the company want to build? What good will it bring to the world? Why does it exist? For whose benefit? Zuckerberg issued a new mission statement in 2017: “To give people the power to build community and bring the world closer together.” And that is fine as far as it goes, but that’s not very far. What does this mean? What should we expect Facebook to be? This statement of goals should be the North Star that guides not just the Oversight Board but every employee and every user at Facebook.
  • A covenant with users and the public in which Facebook holds itself accountable for its own responsibilities and goals. As an executive from another tech company told me, terms of service and community standards are written to regulate the behavior of users, not companies. Well, companies should put forth their own promises and principles and draw them up in collaboration with users (civitas), the public (publico), and regulators (imperium). And that gives government — as in the case of proposed French legislation — the basis for holding the company accountable.

I’ll explore these ideas further in a moment, but first let me address the elephant on my keyboard: whether Facebook and its founder and executives and employees have a soul. I’ve been getting a good dose of crap on Twitter the last few days from people who blithely declare — and others who retweet the declaration — that Zuckerberg is the most dangerous man on earth. I respond: Oh, come on. My dangerous-person list nowadays starts with Trump, Murdoch, Putin, Xi, Kim, Duterte, Orbán, Erdoğan, MBS…you get the idea. To which these people respond: But you’re defending Facebook. I will defend it and its founder from ridiculous, click-bait trolling that devalues the real danger our world is in today. I also criticize Facebook publicly and did at the meetings I attended there. Facebook has fucked up plenty lately and that’s why it needs oversight. At least they realize it.

When I defend internet platforms against what I see as media’s growing moral panic, irresponsible reporting, and conflict of interest, I’m defending the internet itself and the freedoms it affords from what I fear will be continuing regulation of our own speech and freedom. I don’t oppose regulation; I have been proposing what I see as reasonable regimes. But I worry about where a growing unholy alliance against the internet between the far right and technophobes in media will end.

That is why I attend meetings such as the ones that Facebook convenes and why I just spent two weeks in California meeting with both platform and newspaper executives, to try to build bridges and constructive relationships. That’s why I take Facebook’s effort to build its Oversight Board seriously, to hold the company to account.

Indeed, as I sat in a conference room at Facebook hearing its plans, it occurred to me that journalism as a profession and news organizations individually would do well to follow this example. We in journalism have no oversight, having ousted most ombudsmen who tried to offer at least some self-reflection and -criticism (and having failed in the UK to come up with a press council that isn’t a sham). We journalists make no covenants with the public we serve. We refuse to acknowledge — as Facebook executives did acknowledge about their own company — our “trust deficit.”

We in journalism do love to give awards to each other. But we do not have a means to systematically identify and criticize bad journalism. That job has now fallen to, of all unlikely people, politicians, as Beto O’Rourke, Alexandria Ocasio-Cortez, and Julian Castro offer quite legitimate criticism of our field. It also falls to technologists, lawyers, and academics who have been appalled at, for example, The New York Times’ horrendously erroneous and dangerous coverage of Section 230, our best protection of freedom of expression on the internet in America. I’m delighted that CJR has hired independent ombudsmen for The Times, The Post, CNN, and MSNBC. But what about Fox and the rest of the field?

I’ve been wondering how one might structure an oversight board for journalism to take the place of all those lost ombudsmen, to take complaints about bad journalism, to deliberate over thoughtful and constructive responses, and to build data about the journalistic performance and responsibility of specific outlets. That will be a discussion for another day, soon. But even with such a structure, journalism, too — and each news outlet — should offer covenants with the public containing their own promises and statements of higher goals. I don’t just mean following standards for behavior; I mean sharing our highest ambitions.

I think such covenants for Facebook (and social networks and internet platforms) and journalism would do well to start with the mission of journalism that I teach: to convene communities into respectful, informed, and productive conversation. Democracy is conversation. Journalism is — or should be — conversation. The internet is built for conversation. The institutions and companies that serve the public conversation should promise they will do everything in their power to serve and improve that conversation. So here is the beginning of the kind of covenant I would like to see from Facebook:

Facebook should promise to create a safe environment where people can share their stories with each other to build bridges to understanding and to make strangers less strange. (So should journalism.)

Facebook should promise to enable and empower new and diverse voices that have been deprived of privilege and power by existing, entrenched institutions. (Including journalism.)

Facebook should promise to build systems that reward positive, productive, useful, respectful behavior among communities. (So should journalism.)

Facebook should promise not to build mechanisms to polarize people and inflame conflict. (So should journalism.)

Facebook should promise to help inform conversations by providing the means to find reliable information. (Journalism should provide that information.)

Facebook should promise not to build its business upon and enable others to benefit from crass attempts to exploit attention. (So should the news and media industries.)

Facebook should warrant to protect and respect users’ privacy, agency, and dignity.

Facebook should recognize that malign actors will exploit weak systems of protection to drive people apart and so it should promise to guard against being used to manipulate and deceive. (So should journalism.)

Facebook should share data about its performance against these goals, about its impact on the public conversation, and about the health of that conversation with researchers. (If only journalism had such data to share.)

Facebook should build its business, its tools, its rewards, and its judgment of itself around new metrics that measure its contributions to the health and constructive vitality of the public conversation and the value it brings to communities and people’s lives. (So should journalism.)

Clearly, journalism’s covenants with the public should contain more: about investigating and holding power to account, about educating citizens and informing the public conversation, and more. That’s for another day. But here’s a start for both institutions. They have more in common than they know.