Posts about facebook

Regulating the net is regulating us

Here are three intertwined posts in one: a report from inside a workshop on Facebook’s Oversight Board; a follow-up on the working group on net regulation I’m part of; and a brief book report on Jeff Kosseff’s new and very good biography of Section 230, The Twenty-Six Words That Created the Internet.

Facebook’s Oversight Board

Last week, I was invited — with about 40 others from law, media, civil society, and academia — to one of a half-dozen workshops Facebook is holding globally to grapple with the thicket of thorny questions associated with the external oversight board Mark Zuckerberg promised.

(Disclosures: I raised money for my school from Facebook. We are independent and I receive no compensation personally from any platform. The workshop was held under the Chatham House rule. I declined to sign an NDA and none was then required, but details about two real case studies were off the record.)

You may judge the oversight board as you like: as an earnest attempt to bring order and due process to Facebook’s moderation; as an effort by Facebook to slough off its responsibility onto outsiders; as a PR stunt. Through the two-day workshop, the group kept trying to find an analog for Facebook’s vision of this: Is it an appeals court, a small-claims court, a policy-setting legislature, an advisory council? Facebook said the board will have final say on content moderation appeals regarding Facebook and Instagram and will advise on policy. It’s two mints in one.

The devil is in the details. Who is appointed to the board and how? How diverse and by what definitions of diversity are the members of the board selected? Who brings cases to the board (Facebook? people whose content was taken down? people who complained about content? board members?)? How does the board decide what cases to hear? Does the board enforce Facebook policy or can it countermand it? How much access to data about cases and usage will the board have? How much authority will the board have to bring in experts and researchers and what access to data will they have? How does the board scale its decision-making when Facebook receives 3 million reports against content a day? How is consistency found among the decisions of three-member panels in the 40ish-member board? How can a single board in a single global company be consistent across a universe of cultural differences and sensitive to them? As is Facebook’s habit, the event was tightly scheduled with presentations and case studies and so — at least before I had to leave on day two — there was less open debate of these fascinating questions than I’d have liked.

Facebook starts with its 40 pages of community standards, updated about every two weeks, which are in essence its statutes. I recommend you look through them. They are thoughtful and detailed. For example:

A hate organization is defined as: Any association of three or more people that is organized under a name, sign or symbol and that has an ideology, statements or physical actions that attack individuals based on characteristics, including race, religious affiliation, nationality, ethnicity, gender, sex, sexual orientation, serious disease or disability.

At the workshop, we heard how a policy team sets these rules, how product teams create the tools around them, and how operations — with people in 20 offices around the world, working 24/7, in 50 languages — are trained to enforce them.

But rules — no matter how detailed — are proving insufficient to douse the fires around Facebook. Witness the case, only days after the workshop, of the manipulated Nancy Pelosi video and subsequent cries for Facebook to take it down. I was amazed that so many smart people thought it was an easy matter for Facebook to take down the video because it was false, without acknowledging the precedent that would set requiring Facebook henceforth to rule on the truth of everything everyone says on its platform — something no one should want. Facebook VP for Product Policy and Counterterrorism Monika Bickert (FYI: I interviewed her at a Facebook safety event the week before) said the company demoted the video in News Feed and added a warning to the video. But that wasn’t enough for those out for Facebook’s hide. Here’s a member of the UK Parliament (who was responsible for the Commons report on the net I criticized here):

So by Collins’ standard, if UK politicians in his own party claim as a matter of malicious political disinformation that the country pays £350m per week to the EU that would be freed up for the National Health Service with Brexit and that’s certified by journalists to be “willful distortion,” should Facebook be required to take that statement down? Just asking. It’s not hard to see where this notion of banning falsity goes off the rails and has a deleterious impact on freedom of expression and political discussion.

But politicians want to take bites out of Facebook’s butt. They want to blame Facebook for the ill-informed state of political debate. They want to ignore their own culpability. They want to blame technology and technology companies for what people — citizens — are doing.

Ditto media. Here’s Kara Swisher tearing off her bit of Facebook flesh regarding the Pelosi video: “Would a broadcast network air this? Never. Would a newspaper publish it? Not without serious repercussions. Would a marketing campaign like this ever pass muster? False advertising.”

Sigh. The internet is not media. Facebook is not news (only 4% of what appears there is). What you see there is not content. It is conversation. The internet and Facebook are means for the vast majority of citizenry forever locked out of media and of politics to discuss whatever they want, whether you like it or not. Those who want to control that conversation are the privileged and powerful who resent competition from new voices.

By the way, media people: Beware what you wish for when you declare that platforms are media and that they must do this or that, for your wishes could blow back on you and open the door for governments and others to demand that media also erase that which someone declares to be false.

With its oversight board, Facebook is trying to mollify its critics — and forestall regulation — by meeting their demands to regulate content. Therein lies its weakness, I think: regulating content.

Regulating Actors, Behaviors, or Content

A week before the Facebook workshop, I attended a second meeting of a Transatlantic High Level Working Group on Content Moderation and Freedom of Expression (read: regulation), which I wrote about earlier. At the first meeting, we looked at separating treatment of undesirable content (dealt with under community standards such as Facebook’s) from illegal content (which should be the purview of government and of an internet court; details on that proposal here.)

At this second meeting, one of the brilliant members of the group (held under the Chatham House rule, so I can’t say who) proposed a fundamental shift in how to look at efforts to regulate the internet, proposing an ABC rule separating actors from behaviors from content. (Here’s another take on the latest meeting from a participant.)

It took me time to understand this, but it became clear in our discussion that regulating content is a dangerous path. First, making content illegal is making speech illegal. As long as we have a First Amendment and a Section 230 (more on that below) in the United States, that is a fraught notion. In the UK, the government recently released an Online Harms White Paper that demonstrates just how dangerous the idea of regulating content can be. The white paper wants to require — under pain of huge financial penalty for companies and executives — that platforms exercise a duty of care to take down “threats to our way of life” that include not only illegal and harmful content (child porn, terrorism) but also legal and harmful content (including trolling [please define] and disinformation [see above]). Can’t they see that government requiring the takedown of legal content makes it illegal? Can’t they see that by not defining harmful content, they put a chill on all speech? For an excellent takedown of the report, see this post by Graham Smith, who says that what the white paper proposes is impossibly vague. He writes:

‘Harm’ as such has no identifiable boundaries, at least none that would pass a legislative certainty test.

This is particularly evident in the White Paper’s discussion of Disinformation. In the context of anti-vaccination the White Paper notes that “Inaccurate information, regardless of intent, can be harmful”.

Having equated inaccuracy with harm, the White Paper contradictorily claims that the regulator and its online intermediary proxies can protect users from harm without policing truth or accuracy…

See: This is the problem when you try to identify, regulate, and eliminate bad content. Smith concludes: “This is a mechanism for control of individual speech such as would not be contemplated offline and is fundamentally unsuited to what individuals do and say online.” Never mind the common analogy to regulation of broadcast. Would we ever suffer such talk about regulating the contents of bookstores or newspapers or — more to the point — conversations in the corner bar?

What becomes clear is that these regulatory methods — private (at Facebook) and public (in the UK and across Europe) — are aimed not at content but ultimately at behavior, only they don’t say so. It is nearly impossible to judge content in isolation. For example, my liberal world is screaming about the slow-Pelosi video. But then what about this video from three years ago?

What makes one abhorrent and one funny? The eye of the beholder? The intent of the creator? Both. Thus content can’t be judged on its own. Context matters. Motive matters. But who is to judge intent and impact and how?

The problem is that politicians and media do not like certain behavior by certain citizens. They cannot figure out how to regulate it at scale (and would prefer not to make the often unpopular decisions required), so they assign the task to intermediaries — platforms. Pols also cannot figure out how to define the bad behavior they want to forbid, so they decide instead to turn an act into a thing — content — and outlaw that under vague rules they expect intermediaries to enforce … or else.

The intermediaries, in turn, cannot figure out how to take this task on at scale and without risk. In an excellent Harvard Law Review paper called The New Governors: The People, Rules, and Processes Governing Online Speech, legal scholar Kate Klonick explains that the platforms began by setting standards. Facebook’s early content moderation guide was a page long, “so it was things like Hitler and naked people,” says early Facebook community exec Dave Willner. Charlotte Willner, who worked in customer service then (they’re now married), said moderators were told “if it makes you feel bad in your gut, then go ahead and take it down.” But standards — or statements of values — don’t scale as they are “often vague and open ended” and can be “subject to arbitrary and/or prejudiced enforcement.” And algos don’t grok values. So the platforms had to shift from standards to rules. “Rules are comparatively cheap and easy to enforce,” says Klonick, “but they can be over- and underinclusive and, thus, can lead to unfair results. Rules permit little discretion and in this sense limit the whims of decisionmakers, but they also can contain gaps and conflicts, creating complexity and litigation.” That’s where we are today. Thus Facebook’s systems, algorithmic and human, followed its rules when they came across the historic photo of a child in a napalm attack. Child? Check. Naked? Check. At risk? Check. Take it down. The rules and the systems of enforcement could not cope with the idea that what was indecent in that photo was the napalm.

Thus the platforms found their rule-led moderators and especially their algorithms needed nuance. Thus the proposal for Facebook’s Oversight Board. Thus the proposal for internet courts. These are attempts to bring human judgment back into the process. They attempt to bring back the context that standards provide over rules. As they do their work, I predict these boards and courts will inevitably shift from debating the acceptability of speech to trying to discern the intent of speakers and the impact on listeners. They won’t be regulating a thing: content. They will be regulating the behavior of actors: us.

There are additional weaknesses to the rules-based, content-based approach. One is that community standards are rarely set by the communities themselves; they are imposed on communities by companies. How could it be otherwise? I remember long ago that Zuckerberg proposed creating a crowdsourced constitution for Facebook but that quickly proved unwieldy. I still wonder whether there are creative ways to get intentional and explicit judgments from communities as to what is and isn’t acceptable for them — if not in a global service, then user-by-user or community-by-community. A second weakness of the community standards approach is that these rules bind users but not platforms. I argued in a prior post that platforms should create two-way covenants with their communities, making assurances of what the company will deliver so it can be held accountable.

Earlier this month, the French government proposed an admirably sensible scheme for regulation that tries to address a few of those issues. French authorities spent months embedded in Facebook in a role-playing exercise to understand how they could regulate the platform. I met a regulator in charge of this effort and was impressed with his nuanced, sensible, smart, and calm sense of the task. The proposal does not seek to regulate content directly — as the Germans do with their hate speech law, called NetzDG, and as the Brits propose to do in going after online harms.

Instead, the French want to hold the platforms accountable for enforcing the standards and promises they set: say what you do, do what you say. That enables each platform and community to have its own appropriate standards (Reddit ain’t Facebook). It motivates platforms to work with their users to set standards. It enables government and civil society to consult on how standards are set. It requires platforms to provide data about their performance and impact to regulators as well as researchers. And it holds companies accountable for whether they do what they say they will do. It enables the platforms to still self-regulate and brings credibility through transparency to those efforts. Though simpler than other schemes, this is still complex, as the world’s most complicated PowerPoint slide illustrates:

I disagree with some of what the French argue. They call the platforms media (see my argument above). They also want to regulate only the three to five largest social platforms — Facebook, YouTube, Twitter — because they have greater impact (and because that’s easier for the regulators). Except as soon as certain groups are shooed out of those big platforms, they will dig into small platforms, feeling marginalized and perhaps radicalized, and do their damage from there. The French think some of those sites are toxic and can’t be regulated.

All of these efforts — Facebook’s oversight board, the French regulator, any proposed internet court — need to be undertaken with a clear understanding of the complexity, size, and speed of the task. I do not buy cynical arguments that social platforms want terrorism and hate speech kept up because they make money on it; bull. In Facebook’s workshop and in discussions with people at various platforms, I’ve gained respect for the difficulty of their work and the sincerity of their efforts. I recommend Klonick’s paper as she attempts to start with an understanding of what these companies do, arguing that

platforms have created a voluntary system of self-regulation because they are economically motivated to create a hospitable environment for their users in order to incentivize engagement. This regulation involves both reflecting the norms of their users around speech as well as keeping as much speech as possible. Online platforms also self-regulate for reasons of social and corporate responsibility, which in turn reflect free speech norms.

She quotes Lawrence Lessig predicting that a “code of cyberspace, defining the freedoms and controls of cyberspace, will be built. About that there can be no doubt. But by whom, and with what values? That is the only choice we have left to make.”

And we’re not done making it. I think we will end up with a many-tiered approach, including:

  1. Community standards that govern matters of acceptable and unacceptable behavior. I hope they are made with more community input.
  2. Platform covenants that make warranties to users, the public, and government about what they will endeavor to deliver in a safe and hospitable environment, protecting users’ human rights.
  3. Algorithmic means of identifying potentially violating behavior at scale.
  4. Human appeals that operate like small claims courts.
  5. High-level oversight boards that rule and advise on policy.
  6. Regulators that hold companies accountable for the guarantees they make.
  7. National internet courts that rule on questions of legality in takedowns in public, with due process. Companies should not be forced to judge legality.
  8. Legacy courts to deal with matters of illegal behavior. Note that platforms often judge a complaint first against their terms of service and issue a takedown before reaching questions about illegality, meaning that the miscreants who engage in that illegal behavior are not reported to authorities. I expect that governments will complain platforms aren’t doing enough of their policing — and that platforms will complain that’s government’s job.

Numbers 1–5 occur on the private, company side; the rest must be the work of government. Klonick calls the platforms “the New Governors,” explaining that

online speech platforms sit between the state and speakers and publishers. They have the role of empowering both individual speakers and publishers … and their transnational private infrastructure tempers the power of the state to censor. These New Governors have profoundly equalized access to speech publication, centralized decentralized communities, opened vast new resources of communal knowledge, and created infinite ways to spread culture. Digital speech has created a global democratic culture, and the New Governors are the architects of the governance structure that runs it.

What we are seeking is a structure of checks and balances. We need to protect the human rights of citizens to speak and to be shielded from such behaviors as harassment, threat, and malign manipulation (whether by political or economic actors). We need to govern the power of the New Governors. We also need to protect the platforms from government censorship and legal harassment. That’s why we in America have Section 230.

Section 230 and ‘The Twenty-Six Words that Created the Internet’

We are having this debate at all because we have the “online speech platforms,” as Klonick calls them — and we have those platforms thanks to the protection given to technology companies as well as others (including old-fashioned publishers that go online) by Section 230, a law written by Oregon Sen. Ron Wyden (D) and former California Rep. Chris Cox (R) and passed as part of the 1996 telecommunications reform. Jeff Kosseff wrote an excellent biography of the law that pays tribute to these 26 words in it:

No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.

Those words give online companies safe harbor from legal liability for what other people say on their sites and services. Without that protection, online site operators would have been motivated to cut off discussion and creativity by the public. Without 230, I doubt we would have Facebook, Twitter, Wikipedia, YouTube, Reddit, news comment sections, blog platforms, even blog comments. “The internet,” Kosseff writes, “would be little more than an electronic version of a traditional newspaper or TV station, with all the words, pictures, and videos provided by a company and little interaction among users.” Media might wish for that. I don’t.

In Wyden’s view, the 26 words give online companies not only this shield but also a sword: the power and freedom to moderate conversation on their sites and platforms. Before Section 230, a New York court held in Stratton Oakmont v. Prodigy that if an online proprietor moderated conversation and failed to catch something bad, the operator would be more liable than if it had not moderated at all. Section 230 reversed that so that online companies would be free to moderate without moderating perfectly — a necessity to encourage moderation at scale. Lately, Wyden has pushed the platforms to use their sword more.

In the debate on 230 on the House floor, Cox said his law “will establish as the policy of the United States that we do not wish to have content regulation by the Federal Government of what is on the internet, that we do not wish to have a Federal Computer Commission with an army of bureaucrats regulating the internet….”

In his book, Kosseff takes us through the prehistory of 230 and why it was necessary, then the case law of how 230 has been tested again and again and, so far, survived.

But Section 230 is at risk from many quarters. From the far right, we hear Trump and his cultists whine that they are being discriminated against because their hateful disinformation (see: Infowars) is being taken down. From the left, we see liberals and media gang up on the platforms in a fit of what I see as moral panic to blame them for every ill in the public conversation (ignoring politicians’ and media’s fault). Thus they call for regulating and breaking up technology companies. In Europe, countries are holding the platforms — and their executives and potentially even their technologists — liable for what the public does through their technology. In other nations — China, Iran, Russia — governments are directly controlling the public conversation.

So Section 230 stands alone. It has suffered one slice in the form of the FOSTA/SESTA ban on online sex trafficking. In a visit to the Senate with the regulation working group I wrote about above, I heard a staffer warn that there could be further carve-outs regarding opioids, bullying, political extremism, and more. Meanwhile, the platforms themselves didn’t have the guts to testify in defense of 230 and against FOSTA/SESTA (who wants to seem to be on the other side of banning sex trafficking?). If these companies will not defend the internet, who will? No, Facebook and Google are not the internet. But what you do to them, you do to the net.

I worry for the future of the net and thus of the public conversation it enables. That is why I take so seriously the issues I outline above. If Section 230 is crippled; if the UK succeeds in demanding that Facebook ban undefined harmful but legal content; if Europe’s right to be forgotten expands; if France and Singapore lead to the spread of “fake news” laws that require platforms to adjudicate truth; if the authoritarian net of China and Iran continues to spread to Russia, Turkey, Hungary, the Philippines, and beyond; if …

If protections of the public conversation on the net are killed, then the public conversation will suffer and voices who could never be heard in big, old media and in big, old, top-down institutions like politics will be silenced again, which is precisely what those who used to control the conversation want. We’re in early days, friends. After five centuries of the Gutenberg era, society is just starting to relearn how to hold a conversation with itself. We need time, through fits and starts, good times and bad, to figure that out. We need our freedom protected.

Without online speech platforms and their protection and freedom, I do not think we would have had #metoo or #blacklivesmatter or #livingwhileblack. Just to see one example of what hashtags as platforms have enabled, please watch this brilliant talk by Baratunde Thurston and worry about what we could be missing.

None of this is simple and so I distrust all the politicians and columnists who think they have simple solutions: Just make Facebook kill this or Twitter that or make them pay or break them up. That’s simplistic, naive, dangerous, and destructive. This is hard. Democracy is hard.

Scorched Earth

I just gave a talk in Germany where a prominent editor charged me with being a doomsayer. No, I said, I’m an optimist … in the long run. In the meantime, we in media will see doom and death until we are brutally honest with ourselves about what is not working and cannot ever work again. Then we can begin to build anew and grow again. Then we will have cause for optimism.

Late last year in New York, I spoke with a talented journalist laid off from a digital news enterprise. She warned that there would be more blood on the streets and she was right: in January alone, more than 2,000 people lost their jobs at news companies old and new — Gannett, McClatchy, BuzzFeed, Vice, Verizon. She warned that we are still fooling ourselves about broken models and until we come to terms with that, more blood will flow.

So let us be blunt about what is doomed:

  • Advertising in its current forms is burning out — perhaps even for the lucky ones who still have it.
  • Paywalls will not work for more than a few — and their builders often do not account for the real motives of people who pay and who don’t.
  • There is not enough philanthropy from the rich — or charity from the rest of us — to pay for what is needed.
  • Government support — whether financial or regulatory — is a dangerous folly.

There are no messiahs. There are no devils to blame, either.

  • Google and Facebook did not rob the news industry; they only took up the opportunity we were blind to. Our fate is not their fault. Taking them to the woodshed will produce little but schadenfreude.
  • VCs, private equity, and the public markets are not to blame; like lions killing antelope and vultures eating the rest, they are doing only what nature commanded.

Are we to blame for our own destruction? I confess I used to think that was somewhat true — for the optimist in me believed there had to be something we could do to find opportunity in all this disruption, to rebuild an old industry in a new image, and if we didn’t we were at fault for the result. But perhaps we simply could not see the fallacies in our operating assumptions:

  • Information is a commodity.
  • Content is a commodity.
  • In an age of abundance, commodities are losing businesses.
  • Nobody owes us a damned thing: not technologists, not financiers, not philanthropists, not advertisers, not the public, and certainly not government. Instead, we are in debt to many of them and can’t pay it back.

Maybe there is nothing we could have done to save businesses built on now-outmoded models. Maybe nobody is to blame. Reality sucks until it doesn’t.

I believe we can and must build new models for journalism based on real value, understanding people’s needs and motives so we can serve them. But I’m getting ahead of myself again. I can’t help it: I’m an optimist. Before we can build the new, we must recognize what is past. Only then can we rise from the ashes. That process — when it begins — will not be easy or short. As I am fond of telling anyone who will listen, I believe we are at the start of a long, slow evolution, akin to the start of the Gutenberg Age, as we enter a new and still-unknown age. It’s only 1475 in Gutenberg years. There might be a few peasants’ wars, a Thirty Years War, and a Reformation between us and a Renaissance ahead. No guarantees that there’ll be a Renaissance, either. But there’ll definitely be no resurrection of what was.

Recently, Ben Thompson and Jeremy Littau shared cogent analyses of how we got in this hole. I want to examine why merely adjusting those same strategies will not get us out of it. I want to shift our gaze from the ashes below to a north star above — to optimism about the future — but I don’t think we can do that until we are honest about our present. So let’s examine each of the bullets (to our heads) above.

ADVERTISING IS BURNING OUT.

Mass marketing — that is, volume-based advertising — killed quality and injured trust in media because, as abundance grew and prices fell and desperation rose, every movement and metric was reduced to a click and inevitably a cat and a Kardashian. Programmatic advertising commodifies everything it touches: content, media, consumers, data, and even the products it sells. Personalization via retargeting — those ads that follow you everywhere — is insulting and stupid. (Hey, Amazon: why do you keep advertising things to me you know I bought?)

Advertising ultimately exists to fool people into thinking they want something they hadn’t thought they wanted. Thus every new form of advertising inevitably burns out when customers catch on, when the jig is up. That’s why advertisers always want something new. Clicks at volume worked for Business Insider, Upworthy, and many like them until it no longer did. Native advertising worked for Quartz — which, I think, did the best and least fraudulent job implementing it — until it no longer paid all the bills. So-called influencer marketing sort of worked until customers learned that even their friends can’t be trusted. Axios is proud that it is breaking even on corporate responsibility advertising — which will work until people remember that some evil empires are all still evil. Facing advertising’s limits, each of these companies is resorting to a paywall. We’ll discuss how well that will work in a minute.

Many years ago — at the start of all this — I said that by definition, advertising is failure. Every maker and every marketer wants to be loved, its products bought because its customers are already sold or because its customers sell its products with honest recommendations. When that doesn’t work, you advertise. The net puts seller and buyer into direct contact and advertisers will explore every possible way to avoid advertising.

Amazon finds another path by exploiting others’ cost structures — manufacturers, marketers, distributors — to arrive at pure sales. Then Amazon can eliminate all those middlemen by making products that require neither brand nor advertising, recommending them to customers based on their behavior and intent (and robots will eventually take care of distribution).

If advertising and brands are diminished, even Google and Facebook may suffer and fall because arbitraging data to intuit intent — like every other advertising business model so far — might be short-lived. I think the definition of “short” might be decades, and so I’m not ready to short their stock (disclosure: I own Google’s). I also expect no end of glee at their pain. My point: The platforms are not invincible.

I think that BuzzFeed was onto something before it pivoted to pivoting. It didn’t sell audience per se but instead sold expertise: We know how to make our shit viral and we can make your shit viral. If we in journalism have any hope of holding onto any scraps of advertising that still exist, I believe we need to think similarly and understand the expertise we could bring to others. I like to think that could be understanding how to serve communities. But first we have to learn how to do that.

The bottom line: Because it enables anyone to speak as an individual, the net kills the idea of the faceless mass and with it mass media and mass marketing and possibly mass manufacturing. It’s over, people. The mass was a myth and the net exposed that.

PAYWALLS WILL NOT WORK FOR MORE THAN A FEW

Not long ago, every time I encountered a paywall for an article I wanted to read, I recorded the annual cost. I stopped after two weeks when the total hit $3,650. NFW. Oh, I know: I’ve been Twitter-scolded along with the rest of the cheap-bastard masses for not comparing the intrinsic, moral worth of a news think piece to a latte. What entitlement it takes for journalists to lecture people on how they spend their own hard-earned money. Scolding is no business strategy.

Yes, at least for some years, some media properties will make money by charging readers for access to content — until the idea of “content” disappears (more on that in a minute) along with the concept of the “mass” and the industry called “advertising.” But let’s be honest about a few things:

  1. Consumer willingness to pay for content is a scarcity and we’ve likely already hit its limits. A recent Reuters Institute survey found that more than half of the executives polled said paywalls would be their main focus for 2019. The line on the other side of the cash register is going to get mighty crowded.
  2. Much of the content behind many of the paywalls out there is not worth the price charged.
  3. Most of the information in that content is duplicative of what exists elsewhere for less or free.

Paywalls are an attempt to create a false scarcity in an age of abundance. They will work for the few that sell speed (see Bloomberg v. Reuters and also Michael Lewis’ Flash Boys — though time is a diminishing asset) or unique value (which inevitably means a limited audience of people who can make money on that value) or loyalty and quality (yes, the strategy is working well for The New York Times because it is the fucking New York Times — and you’re not).

The mistake that many paywallers make is that they don’t understand what might motivate people to pay. I pay for The Washington Post because I think it is the best newspaper in America and because Jeff Bezos gave me a great price. Personally, I pay for the New York Times and The Guardian out of patronage but only one of them is clever enough to realize that (more on that in a minute). You might be paying for social capital or access to journalists or to other members of a community or out of social responsibility. The product, the offering, and the marketing all need to take into account your motive.

The economics of subscriptions and paywalls are never discussed in full. I learned from my first day in the magazine business that you have to spend money in marketing to earn money in subscriptions. I’ve been privy to the subscriber acquisition cost of some news organizations and it is staggering. Yes, some of the fees news orgs are charging are high but the subscriber acquisition cost can be two or three times the cost to the consumer or more. And churn rates are higher than most will admit.

I do think we need to explore more sources of revenue from consumers. At Newmark’s Tow-Knight Center, we have brought together media companies trying commerce. Some companies are selling their own ancillary products — everything from books to wine to cooktops to gravity blankets. High-end media companies are surprised at how much people will spend through them on travel. The Telegraph is making financial services and sports betting a priority. Texas Tribune and others find success in events. I’m in favor of trying all these paths to consumer revenue but each one brings the need for expertise, resources, and risk. As for micropayments: dream on.

Those abandoning advertising — or rather, those abandoned by advertising — often argue for the moral superiority of paywalls. But every revenue source brings moral hazards to beware of, as Jay Rosen explores regarding dependence on readers. In the end, the arguments in favor of paywalls are often fatally tautological: They must be working because everyone is building them. Good luck with that.

There is not enough philanthropy from the rich — or charity from the rest of us

The Reuters Institute survey found that a third of executives expected more largesse from foundations this year. Well, last year, Harvard and Northeastern published a study of foundation support of journalism, totaling up $1.8 billion in grants over six years. Not counting support for education (but thanking those who give it), I calculate that comes to less than $200 million a year. For the sake of comparison, The New York Times’ costs add up to almost $1.5 billion. The grants are a drop in the empty bucket. Foundations can be wonderful but they cannot support all the efforts that think they are worthy. They also tend to have ADD, wanting to support the next new thing. They are not our salvation.

How about wealthy individuals? Depends on the wealthy individual. G’bless Jeff Bezos for bringing innovation to The Washington Post and giving Marty Baron the freedom to excel. It’s nice that Marc Benioff bought Time, though I’m not sure why he did and whether that was the best investment in journalism. Pierre Omidyar is funding ideologically diverse efforts from The Intercept to The Bulwark; good for him. Good for all that. But there are also many bad billionaires. Sugar daddies are not our salvation.

Then what about charity — patronage — from the public? I have been a proponent of membership over paywalls, of creating services that serve the affinities of people and communities. Jay Rosen’s Membership Puzzle Project is helping De Correspondent bring its lessons to the U.S. and key among them is that people give money not for access to content but to support the work of a journalist. I advocated a membership strategy for The Guardian but when its readers said they didn’t want a paywall because they wanted to support The Guardian’s journalism for the good of society, it became evident that the relationship was actually charity or contribution. And it works. The Guardian will finally break even thanks to the generosity of its readers. Is this for everyone? No, because everyone is not The Guardian. I give to The Guardian. I consider my payments to The New York Times patronage. I give to Talking Points Memo simply because I want to support its work. But just as with subscriptions, there is a finite pool of generosity. Charity won’t save us.

Government support — whether financial or regulatory — is a dangerous folly

I could go on and on about the lessons learned from regulatory protectionism in Europe but I won’t because I already did.

Should government support journalism in the U.S.? I have a two-word response to that.

Enough said.


So now onto the devils who get the blame for ruining news. Alexandria Ocasio-Cortez, whom I greatly admire and often agree with, identifies what she says are the biggest threats to journalism.

The platforms often — more often every day — deserve criticism for their behavior. [Full disclosure: I raised money from Facebook for my J-school but we are independent of them and I receive no money personally from any platform.]

But their success is not the cause of our failure. As is often the case, Stratechery’s Ben Thompson said it better than I could:

While I know a lot of journalists disagree, I don’t think Facebook or Google did anything untoward: what happened to publishers was that the Internet made their business models — both print advertising and digital advertising — fundamentally unviable. That Facebook and Google picked up the resultant revenue was effect, not cause. To that end, to the extent there is concern about how dominant these companies are, even the most extreme remedies (like breakups) would not change the fact that publishers face infinite competition and have uncompetitive advertising offerings.

Optimist that I am, I still think there is reason to work with the platforms because the public we serve is often there and because I believe together we should share what I now define as journalism’s mission: to convene communities into civil, informed, and productive conversation. That would be in the enlightened self-interest of the platforms. But they have no obligation to pay media companies and we have learned the hard way that depending on platforms for stability appears impossible as they experiment and proudly fail with new models.

Siva Vaidhyanathan, a brilliant and harsh critic of the platforms, has argued to me that it is foolish to expect a Google to behave as anything other than a company, in the interest of shareholder return. That realism applies as well to the venture capitalists who sometimes invest in media and, lord knows, the hedge fund and private equity organizations that capitalize on news media’s debt and weakness. We can decry them all we want. I’m just saying that it’s foolish to think that we can change their ways via badgering, begging, or regulation. I strongly believe that innovation in news will require investment but we should enter into those arrangements with eyes wide open, recognizing that unless we can promise a return on investment, we should not knock on their doors. That return will come only when we concentrate on the real value of what we do.


Information is a commodity. Content is a commodity.

Our value does lie in the information we provide. But that is also our problem because information is a commodity. I tried to explain this in Geeks Bearing Gifts:

Information is less valuable in the market because it flows freely. Once a bit of information, a fact, appears in a newspaper, it can be repeated and spread, citizen to citizen, TV anchor to audience: “Oyez, oyez, oyez” shouts the town crier. “The king is dead. Long live the king. Pass it on.” Information itself cannot and must not be owned. Under copyright law, a creator cannot protect ownership of underlying facts or knowledge, only of their treatment. That is, you cannot copyright the fact that the Higgs boson was discovered at CERN in 2012, you can copyright only your treatment of that information: your cogent backgrounder or natty graphic that explains WTF a boson is. A well-informed society must protect and celebrate the easy sharing of information even if that does support freeloaders like TV news, which build businesses on the repetition of information others have uncovered. Society cannot find itself in a position in which information is property to be owned, for then the authorities will tell some people — whether they are academics or scientists or students or citizens — what they are not allowed to know because they didn’t buy permission to know it. Therein lies a fundamental flaw in the presumption that the public should and will pay for access to information — a fundamental flaw in the business model of journalism. I’m not saying that information wants to be free. I agree that information often is expensive to gather. Instead I am saying that the mission of journalism is to inform society by unlocking and spreading information. Journalism frees information.

In news, we copy and rewrite each other because our mass-media business models make us fill pages with content as inexpensively as possible (rewriting is cheaper than reporting — see The Hill) so we can place an ad and get a pageview and get a penny. What we complain Google and Facebook do — taking advantage of the commodification of information — is the basis of much of our own business model. We have hoisted ourselves on our own petard.

I believe information will remain the core of the public demand for and the value of journalism. But we cannot build our business models on that alone.

I also believe that we are not in the content business — and that’s a good thing because it, too, is a commodity, and in an age of abundance, commodities are bad businesses. I think we too often try to save the wrong business.


So what business are we in? Will I allow a bit of optimism at the end of all this doomsaying? Can I point up to a north star?

I return to my definition of journalism, with a debt to James Carey, whom I quoted at length in my recent defense of tweeting. Journalism exists to be of service to the public conversation. What does that look like? How will that serve society? How will it be sustained? I’m not sure.

I have long argued that local journalism needs to rise from communities. I thought that could take the form of hyperlocal blogs but I was wrong because I was still thinking of local journalism in terms of content. I confessed my error here, where I also acknowledged the difficulty — perhaps the impossibility — of building a new house while the old one is burning down around existing newsrooms. Is it possible to turn a content-based, information-based business into one that is built on and begins with the public conversation and is based on service? I don’t know.

I think I’ve seen a bare sprout of what one such model might look like rising from the ashes in the form of Spaceship Media’s plans for local journalism. Spaceship does just what I say journalism should do: convene communities into civil, informed, and productive conversation. So far, it has done that in collaboration with newsrooms, notably Advance’s in Alabama, learning how to rebuild trust between journalist and public. I recently spoke with Spaceship’s cofounder, Eve Pearlman, about how journalists convening, listening to, serving, and valuing local conversation could be a service and a business. Above I said that we could follow BuzzFeed’s lead by selling a skill and I wish that skill were serving communities. I hope Spaceship could teach us that. So I will watch its work with interest and enthusiasm. But I want to be careful and not present that as the salvation of journalism, only as one small experiment that could begin to teach us to rethink what journalism can and should be, not based on our old presumptions of mass media but on our essential value.

In the meantime, I think it is vital — as that unemployed journo told me on the streets of New York — that we be brutally honest with ourselves about our failures so we can learn from them. I hope that conversation continues.

The kids are alright. Grandpa’s the problem.

NYU and Princeton professors just released an important study that took a set of fake news domains identified by BuzzFeed’s Craig Silverman and others and asked who shares them on Facebook. They found that:

  1. Sharing so-called fake news appears to be rare. “The vast majority of Facebook users in our data” — more than 90% — “did not share any articles from fake news domains in 2016 at all.”
  2. Most of the sharing is done by old people, not young people. People over 65 shared fake news at a rate seven times higher than people 18–29. This held across controls for education, party affiliation and ideology, sex, race, and income.
  3. It is also true that conservatives — and, interestingly, those calling themselves independent — shared most of the fake news (18.1% of Republicans vs. 3.5% of Democrats), though the researchers caution that the sample of fake news was predominantly pro-Trump.
  4. Interestingly, people who share more on Facebook are less likely to share fake news than others, “consistent with the hypothesis that people who share many links are more familiar with what they are seeing and are able to distinguish fake news from real news.”
Compare this with accepted wisdom: That fake news is everywhere and that everyone on Facebook is sharing it. That Facebook users can’t tell fake from true. That young people are sharing this stuff and don’t understand how media work and thus are in need of news literacy training. Not so much.

Instead, we need other interventions: start by worrying about Grandpa. But I will argue this is not about dealing with Grandpa’s inability to discern facts. Fact-checking won’t enlighten Gramps. Instead, we have to examine Grandpa’s misplaced sense of anger, victimhood, paranoia, and general grumpiness. Grandpa grew up in a great time in this country and saw tremendous progress. So what’s making Grandpa into such an angry, loud-mouthed jerk?

Well, there’s another external factor that this study could not deal with. The factor I want to examine is how many fake-news sharers — how many Grandpas — are influenced by media, namely Fox News and talk radio.

I’d love to see more research such as this. I want to see Facebook and the platforms cooperate and hand over more data.

The researchers — Princeton’s Andrew Guess and NYU’s Jonathan Nagler and Joshua Tucker — point out that they lack data on what these older users are seeing in their feeds. To get some insight on that, go to Facebook’s new, open political ad archive and search on any contentious topic — say, “wall” — and you will see how money battles money for the minds of America. Looking at Trump’s latest ads, I found that many of them were directed mostly at people over the age of 65.

Research such as this is critical to inform our discussion and fend off stupid interventions and decisions fueled by bad presumption and moral panic. More, please.

* Thanks to Josh Tucker for alerting me to this research — and for the joke in the headline.

Hot Trump. Cool @aoc.

I’ve been rereading a lot of Marshall McLuhan lately and I’m as confounded as ever by his conception of hot vs. cool media. And so I decided to try to test my thinking by comparing the phenomena of Donald Trump and Alexandria Ocasio-Cortez at this millennial media Wendepunkt (turning point), as text and television give way to the net and whatever it becomes. I’ll also try to address the question: Why is @aoc driving the GOP mad?

McLuhan said that text and radio were hot media in that they were high-definition; they monopolized a sense (text the eye, radio the ear); they filled in all the blanks for the reader/listener and neither required nor brooked real interaction; they created — as we see with newspapers and journalism — a separation of creator from consumer. Television, he said, was a cool medium for it was low-definition across multiple senses, requiring the viewer to interact by filling in the blanks, starting quite literally with the blanks between the raster lines on the cathode-ray screen. “Low-definition invites participation,” explains McLuhan’s recently departed son Eric. (Thanks to Eric’s son, Andrew McLuhan, for sending me to this delightful video:)

Given that McLuhan formulated his theory at the fuzzy, black-and-white, rabbit-ears genesis of television, I wonder how much the label would be readjusted with 4K video and huge, wrap-around screens and surround sound. Eric McLuhan answers that hot v. cool is a continuum. I also wonder — as does every McLuhan follower — what the master would say about the internet. That presumes we can yet call the internet a thing unto itself and define it, which we can’t; it’s too early. So I’ll narrow the question to social media today.

And that brings us to Trump v. Ocasio-Cortez. Recall that McLuhan said that Richard Nixon lost his debate with John F. Kennedy because Nixon was too hot for the cool medium of TV. He told Playboy:

Kennedy was the first TV president because he was the first prominent American politician to ever understand the dynamics and lines of force of the television iconoscope. As I’ve explained, TV is an inherently cool medium, and Kennedy had a compatible coolness and indifference to power, bred of personal wealth, which allowed him to adapt fully to TV. Any political candidate who doesn’t have such cool, low-definition qualities, which allow the viewer to fill in the gaps with his own personal identification, simply electrocutes himself on television — as Richard Nixon did in his disastrous debates with Kennedy in the 1960 campaign. Nixon was essentially hot; he presented a high-definition, sharply-defined image and action on the TV screen that contributed to his reputation as a phony — the “Tricky Dicky” syndrome that has dogged his footsteps for years. “Would you buy a used car from this man?” the political cartoon asked — and the answer was no, because he didn’t project the cool aura of disinterest and objectivity that Kennedy emanated so effortlessly and engagingly.

As TV became hotter — as it became high-definition — it found its man in Trump, who is as hot and unsubtle as a thermonuclear blast. Trump burns himself out with every appearance before crowds and cameras, never able to go far enough past his last performance — and it is a performance — to find a destination. He is destruction personified and that’s why he won, because his voters and believers yearn to destroy the institutions they do not trust, which is every institution we have today. Trump then represents the destruction of television itself. He’s so hot, he blew it up, ruining it for any candidate to follow, who cannot possibly top him on it. Kennedy was the first cool television politician. Obama was the last cool TV politician. Trump is the hot politician, the one who then took the medium’s every weakness and nuked it. TV amused itself to death.

Alexandria Ocasio-Cortez was not a candidate of television or radio or text because media — that is, journalists — completely missed her presence and success, didn’t cover her, and had to trip over each other to discover her long after voters had. How did voters discover her? How did she succeed? Social media: Twitter, Facebook, Instagram, YouTube….

I think McLuhan’s analysis here would be straightforward: Social media are cool. Twitter in particular is cool because it provides such low-fidelity and requires the world to fill in so much, not only in interpretation and empathy but also in distribution (sharing). And Ocasio-Cortez herself is cool in every definition.

She handles her opponents brilliantly on social media, always flying above, never taking flak from them. Some people say she’s trolling the Republicans but I disagree. Trolling’s sole purpose is to get a rise out of an opponent, to make them angry and force them to react. She does not do that. She consistently states her positions and policies with confidence; let the haters hate. Yes, she shoots at her opponents, but like a sniper, always from her position, her platform.

She uses the net not only to make pronouncements but to build a community, a constituency that is larger than her district.

 

And her constituents respond.

 

Now I know some of you will argue that Trump is also a genius at Twitter because, after all, he governs by it. But I disagree. Trump’s tweets get the impact they get only because they are amplified by big, old media making stories in print and TV every single time he hits the big, blue button. Trump treats cool Twitter like he treats cool TV: with a flamethrower. On Twitter, he doesn’t win anything he hasn’t already won. Indeed, in his desperation to outdo himself, I think (or hope), it is by Twitter that he destroys himself through revealing too much of his ignorance and hate. That’s not cool.

Trump and his allies don’t know how to tweet but Ocasio-Cortez does — and that’s what so disturbs and confounds the GOP about @aoc. They think it should be so simple: just tweet your press releases — your “social media statements,” as their leader recently said — plus your best lines from speeches that get the loudest, hottest applause and rack up the most followers like the highest TV ratings and you will win. No. Twitter, Facebook, et al are not means to make a mass, like TV was. They are means to develop relationships and trust and to gather people around not just a person but also an idea, a cause, a common goal. That’s how Ocasio-Cortez uses them.

I want to be careful not to diminish Ocasio-Cortez as merely a social-media phenom, nor to build her up into some omniscient political demigod who will not stumble; she will. She is a talented, insightful politician who has the courage of her progressive and socialist convictions. Even when old media try to goad a fight — because old media feed on the fight — over Ocasio-Cortez’s college dancing video, she still manages to bring the discussion back to her stands, her agenda. That is what drives them nuts.

 

And then:

 

Everyone ends up dancing to her tune. But they don’t talk about the dancing. They talk about the policy — her foes and her allies alike. She suggests a 70% tax rate for the richest and here come her enemies and then some experts, who have her back:

 

So what lessons do we learn from the early days of @aoc as possibly the first true, native politician of social media, not old media?

I think the GOP will eventually learn that anger is a flame that runs out of fuel. Anger stands against everything, for nothing. Anger builds nothing, not even a wall. Oh, anger is easy to exploit and media will help you exploit it, but that takes you nowhere. Lots of people might want to scream with the screamy guy, but who wants to invite him home for dinner? Trump is the angry celebrity and you end up knowing everything you want to know about him by watching him; there is nothing to fill in because he is so hot. “If somebody starts screaming at you, you don’t move in closer, you back up a little. And if they get a little rowdy and scream a little louder, you back up a little more. You don’t move in closer and start hugging,” Eric McLuhan explains in the video above. “A really hot situation like that… doesn’t require or even invite involvement.”

@aoc is a little mysterious, someone you want to know better; she is cool. The GOP has no cool politicians. The Democrats do not need their Trump, their celebrity, their hot personality. They should be grateful they have someone like Ocasio-Cortez to teach them how to be cool, if they are smart enough to watch and learn.

Media, too, have much to learn. We in journalism must see that our old, hot media — text and TV — are of the past. They won’t go away but they probably won’t be trusted again. If we journalists have any hope of meeting our mission of informing the public, we have to use our new tools of the net to build relationships of authenticity and trust as humans, not institutions. We need to measure our success not based on mass but instead based on value and trust. Then we have to find a place to stand — on the platform of facts would be a lovely spot — and stay there, relying on principle and not on a mushy foundation built of fake balance or fleeting popularity or our own savvy. This is social journalism.

Oh, and we also need to learn that the next politician worth paying attention to won’t come to us with press releases and press people trying to get them on TV as that won’t matter to them. They are already out there building relationships with their constituents on social media and we need new means to listen to what is happening there.

There is one more confounding McLuhan lesson to grapple with here: that the medium is the message, that content is meaningless but it’s the medium itself that models a way to see the world. McLuhan argued that linear, bounded text by its very form taught us how to think. The line, he said — and this sentence is an example — became our organizing principle. Books have borders and so do nations. This, I’ll argue, is why Trump wants to build his wall: a last, desperate border as all borders crumble.

McLuhan said electricity broke that linearity and he saw the beginnings of what could happen to our worldviews with the impact of television upon us. But that was only the beginning. Imagine what he would say about Twitter, Facebook, et al. I think he would tell us to pay attention not to the content — see: fake news! — but instead to learn from the form. What does social media teach us to do? What does the net itself teach us to do? To connect.

Facebook. Sigh.

I’d rather like to inveigh against Facebook right now as it would be convenient, given that ever since I raised money for my school from the company, it keeps sinking deeper in a tub of hot, boiling bile in every media story and political pronouncement about its screwups. Last week’s New York Times story about Facebook sharing data with other companies seemed to present a nice opportunity to thus bolster my bona fides. But then not so much.

The most appalling revelation in The Times story was that Facebook “gave Netflix and Spotify the ability to read Facebook users’ private messages.” I was horrified when I read that and was ready to raise the hammer. But then I read Facebook’s response.

Specifically, we made it possible for people to message their friends what music they were listening to in Spotify or watching on Netflix directly from the Spotify or Netflix apps….

In order for you to write a message to a Facebook friend from within Spotify, for instance, we needed to give Spotify “write access.” For you to be able to read messages back, we needed Spotify to have “read access.” “Delete access” meant that if you deleted a message from within Spotify, it would also delete from Facebook. No third party was reading your private messages, or writing messages to your friends without your permission. Many news stories imply we were shipping over private messages to partners, which is not correct.

And I read other background, including from Alex Stamos, Facebook’s former head of security, who has been an honest broker in these discussions:

And there’s James Ball, a respected London journalist — ex Guardian and BuzzFeed — who is writing a critical book about the internet:

In short: Of course, Netflix and Spotify had to be given the ability to send, receive, and delete messages as that was the only way the messaging feature could work for users. Thus in its story The Times comes off like a member of Congress grandstanding at a hearing, willfully misunderstanding basic internet functionality. Its report begins on a note of sensationalism. And not until way down in the article does The Times fess up that it similarly received a key to Facebook data. So this turns out not to be the ideal opening for inveighing. But I won’t pass up the opportunity.

The moral net

I’ve had a piece in the metaphorical typewriter for many months trying to figure out how to write about the moral responsibility of technology (and media) companies. It has given me an unprecedented case of writer’s block as I still don’t know how to attack the challenge. I interviewed a bunch of people I respect, beginning with my friend and mentor Jay Rosen, who said that we don’t even have agreement on the terms of the discussion. I concur. People seem to assume there are easy answers to the questions facing the platforms, but when the choices get specific — free speech vs. control, authority vs. diversity, civility as censorship — the answers no longer look so easy.

None of this is to say that Facebook is not fucking up. It is. But its fuckups are not so much of the kind The Times, The Guardian, cable news, and others in media dream of in their dystopias: grand theft user data! first-degree privacy murder! malignant corporate cynicism! war on democracy! No, Facebook’s fuckups are cultural in the company — as in the Valley — which is to say they are more complex and might go deeper.

For example, I was most appalled recently when Facebook — with three Jewish executives at the head — hired a PR company to play into the anti-Semitic meme of attacking George Soros because he criticized Facebook. What the hell were they thinking? Why didn’t they think?

This case, I think, revealed the company’s hubristic opacity, the belief that it could and should get away with something in secret. I’m sure I needn’t point out the irony of a company celebrating publicness being so — to understate the case — taciturn. Facebook must learn transparency, starting with openness about its past sins. I’ve been saying the company needs to perform an audit of its past performance and clear the decks once and for all. But transparency is not just about confession. Transparency should be about pride and value. From the top, Facebook needs to infuse its culture with the idea that everything everyone does should shine in the light of public scrutiny. The company has to learn that secrecy is neither a cloak nor a competitive advantage (hell, who are its competitors anyway?) but a severe liability.

Facebook and its leaders are often accused of cynicism. I have a different diagnosis. I think they are infected with latent and lazy optimism. I do believe that they believe a connected world is a better world — and I agree with that. But Facebook, like its neighbors in Silicon Valley, harbored too much faith in mankind and — apart from spam — did not anticipate how it would be manipulated and thus did not guard against that and protect the public from it. I often hear Facebook accused of leaving trolling and disinformation online because it makes money from those pageviews. Nonsense. Shitstorms are bad for business. I think it’s the opposite: Facebook and the other platforms have not calculated the full cost of finding and compensating for manipulation, fraud, and general assholery. And in some fairness to them, we as a society have not yet agreed on what we want the platforms to do, for I often hear people say — in the same breath or paragraph — that Facebook and Twitter and YouTube must clean up their messes … but also that no one trusts them to make these judgments. What’s a platform to do?

If Facebook and its league had acted with transparent good faith in enacting their missions — and had anticipated bad faith from some small segment of malignant mankind — then perhaps when Russian or other manipulation reared its head the platforms would have been on top of the problem and would even have garnered sympathy for being victims of these bad actors. But no. They acted as though they were impervious when they weren’t, and that made it easier to yank them down off their high horses. Media — once technoboosters — now treat the platforms, especially Facebook, as malign actors whose every move and motive is to destroy society.

I have argued for a few years now that Facebook should hire an editor to bring a sense of public responsibility to the company and its products. Coming from a journalist, that’s rather conceited, for as I’ll confess shortly, journalists have issues, too. Then perhaps Facebook should hire ethicists or philosophers or clergy or an Obama or two. It needs a strong, empowered, experienced, trusted, demanding, tough force in its executive suite with the authority to make change. While I’m giving unsolicited advice, I will also suggest that when Facebook replaces its outgoing head of comms and policy, Elliot Schrage, it should separate those functions. The head of policy should ask and demand answers to tough questions. The head of PR is hired to avoid tough questions. The tasks don’t go together.

So, yes, I’ll criticize Facebook. But I also believe it’s important for us in journalism to work with Facebook, Twitter, Google, YouTube, et al. because they are running the internet of the day; they are the gateways to the public we serve; and they need our help to do the right thing. (That’s why I do what I do in the projects I linked to in the first sentence above.)

Moral exhibitionism

Instead, I see journalists tripping over each other to brag on social media about leaving social media. “I’m deleting Facebook — find me on Instagram,” they proclaim, without irony. “I deleted Facebook” is the new “I don’t own a TV.” This led me to tweet:

People with discernible senses of humor got the gag. One person attacked me for not attacking Facebook. And meanwhile, a few journalists agonized about the choice. A reporter whose work I greatly respect, Julia Ioffe, was visibly torn, asking:

I responded that Facebook enriches her reporting and that journalists need more — not fewer — ways to listen to the public we serve. She said she agreed with that. (I just asked what she decided and Ioffe said she is staying on Facebook.)

Quitting Facebook is often an act of the privileged. (Note that lower-income teens are about twice as likely to use Facebook as teens from richer families.) It’s fine for white men like me to get pissy and leave because we have other outlets for our grievances and newsrooms are filled with people who look like us and report on our concerns. Without social media, the nation would not have had #metoo or #blacklivesmatter or most tellingly #livingwhileblack, which reported nothing that African-Americans haven’t experienced but which white editors didn’t report because it wasn’t happening to them. The key reason I celebrate social media is that it gives voice to people who for too long have not been heard. And so it is a mark of privilege to condemn all social media — and the supposed unwashed masses using them — as uncivilized. I find that’s simply not true. My Facebook and Twitter feeds are full of smart, concerned, witty, constructive people with a wide (which could always be wider) diversity of perspective. I respect them. I learn by listening to them.

When I talked about all this on the latest This Week in Google, I received this tweet in response:

I thanked Jeff and immediately followed him on Facebook.

A moral mirror

These days, too much of the reporting about the internet is done without knowledge of how technology works and without evidence behind the accusations made. I fear this is fueling a moral panic that will lead to legislation and regulation that will affect not just a few Silicon Valley technology companies but everyone on the net. This is why I so appreciate voices like Rasmus Kleis Nielsen, now head of the Reuters Institute for the Study of Journalism at Oxford, who often meets polemical presumptions about the net — for example, that we are supposedly all hermetically sealed in filter bubbles — with demands for evidence as well as research that dares contradict the pessimistic assertion. This is why I plan to convene a meeting of similarly disciplined researchers to examine how bad — or good — life on the net really is and to ask what yet needs to be asked to learn more.

Dave Winer, a pioneer in helping to create so much of the web we enjoy today (podcasts, RSS, blogging…), is quite critical of the closed systems the platforms create but also was very frustrated with the New York Times story that inspired this post:

Could this be why usually advocacy-allergic news organizations are oddly taking it upon themselves to try to convince people to delete Facebook?

This is also why Dave had a suggestion for journalists covering technology and for journalism schools:

A bunch of us journo profs jumped on his idea and I hope we all make it happen soon.

But there’s more journalists need to do. As we in news and media attack the platforms and their every misstep — and there are many — we need to turn the mirror on ourselves. It was news media that polarized the nation into camps of red v. blue, white v. black, 1 percent v. 99 percent long before Facebook was born. It was our business model in media that favored confrontation over resolution. It was our business model in advertising that valued volume, attention, page views, and eyeballs — the business model that then corrupted the internet. It was our failure to inform the public that enabled people to vote against their self-interest for Trump or Brexit. We bear much responsibility for the horrid mess we are in today.

So as we demand transparency of Facebook I ask that we demand it of ourselves. As we expect ethical self-examination in Silicon Valley, we should do likewise in journalism. As we criticize the skewed priorities and moral hazards of technology’s business model, let us also recognize the pitfalls of our own — and that includes not just clickbait advertising but also paywalls and patronage (which will redline journalism for the privileged). Let us also be honest with ourselves about why trust in journalism is at historic lows and why people chose to leave the destinations we built for them, instead preferring to talk among themselves on social media. Let him who should live in a glass house — and expects everyone else to live in glass houses — think before throwing stones.

I’m neither defending nor condemning Facebook and the other platforms. My eyes are wide open about their faults — and also ours. They and the internet they are helping to build are our new reality and it is our mutual responsibility to build a better net and a better future together. These are difficult, nuanced problems and opportunities.