Perspective, please

I’m going to straddle a sword by on the one hand criticizing the platforms for not taking their public responsibility seriously enough, and on the other hand pleading for some perspective before we descend into a moral panic with unintended consequences for the net and the future.

[Disclosure: I raised $14 million for the News Integrity Initiative at CUNY from Facebook, Craig Newmark, the Ford Foundation, AppNexus, and others. We are independent of Facebook and I personally receive no money from any platform.]

The Observer’s reporting on Cambridge Analytica’s exploitation of Facebook data on behalf of Donald Trump has raised what the Germans call a shitstorm. There are nuances to this story I’ll get to below. But to begin, suffice it to say that Facebook is in a mess. As much as the other platforms would like to hide behind their schadenfreude, they can’t. Google has plenty of problems with YouTube (I write this the night before Google is set to announce new mitzvahs to the news industry). And Twitter is wisely begging for help in counteracting the ill effects it now concedes it has had on the health of the public conversation.

The platforms need to realize that they are not trusted. (And before media wrap themselves up in their own blanket of schadenfreude, I will remind them that they are not trusted either.) The internet industry’s cockiness cannot stand. They must listen to and respect concerns about them. They must learn humility and admit how hard that will be for them. They need to perform harsh and honest self-examinations of their cultures and moral foundations. Underlying all this, I believe they must adopt an ethic of radical transparency.

For a few years, I’ve been arguing that Facebook and its fellows should hire journalists not just to build relationships with media companies but more importantly to embrace a sense of public responsibility in decisions about their products, ranking, experiments, and impact. Now they would do well to also hire ethicists, psychologists, philosophers, auditors, prosecutors, and the Pope himself to help them understand not how to present themselves to the world — that’s PR — but instead to fully comprehend the responsibility they hold for the internet, society, and the future.

I still believe that most people in these companies themselves believe that they are creating and harnessing technology for the good. What they have not grokked is the greater responsibility that has fallen on them based on how their technologies are used. In the early days of the internet, the citizens of the net — myself included — and the platforms that served them valued openness über alles. And it was good. What we all failed to recognize was — on the good side — how much people would come to depend on these services for information and social interaction and — on the bad side — how much they would be manipulated at scale. “When we built Twitter,” Ev Williams said at South by Southwest, “we weren’t thinking about these things. We laid down fundamental architectures that had assumptions that didn’t account for bad behavior. And now we’re catching on to that.”

This means that the platforms must be more aware of that bad behavior and take surer steps to counteract it. They must make the judgments they feared making when they defended openness as a creed. I will contend again that this does not make them media companies; we do not want them to clean and polish our internet as if the platforms were magazines and the world were China. We also must recognize the difficulty that scale brings to the task. But they now have little choice but to define and defend quality on their platforms and in the wider circles of impact they have on society in at least these areas:

  • Civility of the public conversation. Technology companies need to set and enforce standards for basic, civilized behavior. I still want to err on the side of openness but I see no reason to condone harassment and threats, bigotry and hate speech, and lies as incitement. (By these considerations, Infowars, for example, should be toast.)
  • An informed public conversation. Whether they wanted it or not, Facebook and Twitter particularly — and Google, YouTube, Snap, and others as well — became the key mechanisms by which the public informs itself. Here, too, I’ll err on the side of openness, but the platforms need to set standards for quality and credibility and build paths that lead users to both. They cannot walk away from the news because it is messy and inconvenient, for we depend upon them now.
  • A healthy public sphere. One could argue that Facebook, Twitter, et al are the victims of manipulation by Russia, Cambridge Analytica, trolls, the alt-right, and conspiracy theorists. Except that they are not the bad guys’ real targets. We are. The platforms have an obligation to detect, measure, reveal, and counteract this manipulation. For a definition of manipulation, I give you C. Wright Mills in The Power Elite: “Authority is power that is explicit and more or less ‘voluntarily’ obeyed; manipulation is the ‘secret’ exercise of power, unknown to those who are influenced.”

Those are broad categories regarding the platforms’ external responsibilities. Internally they need to examine the ethical and moral bases for their decisions about what they do with user data, about what kinds of behaviors they reward and exploit, about the impact of their (and mass media’s) volume-based business model in fostering clickbait, and so on.

If the internet companies do not get their ethical and public acts together and quickly — making it clear that they are capable of governing their behavior for the greater good — I fear that the growing moral panic overtaking discussion of technology will lead to harmful legislation and legal precedent, hampering the internet’s potential for us all. In the rush to regulation, I worry that we will end up with more bad law (like Germany’s NetzDG hate-speech law and Europe’s right-to-be-forgotten court ruling — each of which, paradoxically, fights the platforms’ power by giving them more power to censor speech). My greater fear is that the regulatory mechanisms installed for good governments will be used by bad ones — and these days, what country does not worry about bad government? — leading to a lowest common denominator of freedom on the net.

So now let me pose a few challenges to the platforms’ critics.

On the current Cambridge Analytica story, I’ll agree that Facebook is foolish to split hairs about the use of the word “breach” even if Facebook is right that it wasn’t one. But it behooves us all to get the story right. Please read the complete threads (by opening each tweet) from Jay Pinho and Patrick Ruffini.

Note well that Facebook created mechanisms to benefit all campaigns, including Barack Obama’s. At the time, this was generally thought to be a good: using a social platform to enable civic participation. What went wrong in the meantime was (1) a researcher broke Facebook’s rules and shared data intended for research with his own company and then with Cambridge Analytica and (2) Donald Trump.

So do you think that Facebook should be forbidden from helping political campaigns? If we want television and the unlimited money behind it to lose influence in our elections, shouldn’t we desire more mechanisms to directly, efficiently, and relevantly reach voters by candidates and movements? If you agree, then what should be the limits of that? Should Facebook choose good and bad candidates as we expect them to choose good and bad news? I could argue in favor of banning or not aiding, say, a racist, admitted sexual abuser who incites hatred with conspiracy theories and lies. But what if such a person becomes the candidate of one of two major parties and ultimately the victor? Was helping candidates good before Trump and bad afterwards?

Before arguing that Facebook should never share data with anyone, know that there are many researchers who are dying to get their hands on this data to better understand how information and disinformation spread and how society is changing. I was among many such researchers some weeks ago at a valuable event on disinformation at the University of Pennsylvania (where, by the way, most of the academics in attendance scoffed at the idea that Cambridge Analytica actually had a secret sauce and any great power to influence elections … but now’s not the time for that argument). So what are the standards you expect from Facebook et al when it comes to sharing data? To whom? For what purposes? With what protections and restrictions?

I worry that if we reach a strict data crackdown — no data ever shared or used without explicit permission for the exact purpose — we will cut off the key to the only sustainable future for journalism and media that I see: one built on a foundation of delivering relevant and valuable services to people as individuals and members of communities, no longer as an anonymous mass. So please be careful about the laws, precedents, and unintended consequences you set.

When criticizing the platforms — and yes, they deserve criticism — I would ask you to examine whether their sins are unique. The advertising model we now blame for all the bad behavior we see on the net originated with and is still in use by mass media. We in news invented clickbait; we just called it headlines. We in media also set in motion the polarization that plagues society today with our chronic desire to pit simplistic stereotypes of red v. blue in news stories and cable-news arguments. Mass media is to blame for the idea of the mass and its results.

When demanding more of the platforms — as we should — I also would urge us to ask more of ourselves, to recognize our responsibility as citizens in encouraging a civil and informed conversation. The platforms should define bad behavior and enable us to report it. Then we need to report it. Then they need to act on what we report. And given the scale of the task, we need to be realistic in our expectations: On any reasonably open platform, someone will game the system and shit will rise — we know that. The question is how quickly and effectively the platforms respond.

I’ll repeat what I said in a recent post: No one — not platforms, not ad agencies and networks, not brands, not media companies, not government, not users — can stand back and say that disinformation, hate, and incivility are someone else’s problem to solve. We all bear responsibility. We all must help by bringing pressure and demanding quality; by collaborating to define what quality is; by fixing systems that enable manipulation and exploitation; and by contributing whatever resources we have (ad dollars to links to reporting bad actors).

Finally, let’s please base our actions and our pressure on platforms and government on research, facts, and data. Is Facebook polarizing or depolarizing society? We do not know enough about how Facebook and Twitter affected our election, and we would be wise to learn more before we think we can prescribe treatments that could be worse than the disease. That’s not to say there isn’t plenty we already know that Facebook, Google, Twitter, media, and society need to fix now. But treating technology companies as agents of ill intent that maliciously ruin our elections, split us apart, and addict us to our devices is simplistic and ultimately won’t get us to the real problems we all must address.

Today I talked about this with my friend and mentor Jay Rosen — who four years ago wrote this wise piece about the kind of legitimacy platforms rely upon. Jay said we really don’t have the terms and concepts we need for this discussion. I agree.

I’ve been doing a lot of reading lately about the idea of the mass and its reputed manipulation at the hands of powerful and bad actors at other key moments in history: the French and American revolutions; the Industrial Revolution; the advent of mass media. At each Wendepunkt — each turning point — scholars and commentators worried about the impact of the change and struggled to find the language to describe and understand it. Now, in the midst of the digital revolution, we worry and struggle again. Facebook, Google, Twitter, and many of the people who created the internet we use today have no way to fully understand what their machines really do. Neither do we. I, for example, preached the openness that became the architecture and religion of the platforms without understanding the inevitability of that openness breeding trolls. We cannot use our analogs of the past to explain this future. That can be frightening. But I will continue to argue — optimist to a fault — that we can figure this out together.