Posts about facebook

Facebook. Sigh.

I’d rather like to inveigh against Facebook right now, as it would be convenient: ever since I raised money for my school from the company, Facebook has been sinking deeper into a tub of hot, boiling bile with every media story and political pronouncement about its screwups. Last week’s New York Times story about Facebook sharing data with other companies seemed to present a nice opportunity to bolster my bona fides. But then, not so much.

The most appalling revelation in The Times story was that Facebook “gave Netflix and Spotify the ability to read Facebook users’ private messages.” I was horrified when I read that and was ready to raise the hammer. But then I read Facebook’s response.

Specifically, we made it possible for people to message their friends what music they were listening to in Spotify or watching on Netflix directly from the Spotify or Netflix apps….

In order for you to write a message to a Facebook friend from within Spotify, for instance, we needed to give Spotify “write access.” For you to be able to read messages back, we needed Spotify to have “read access.” “Delete access” meant that if you deleted a message from within Spotify, it would also delete from Facebook. No third party was reading your private messages, or writing messages to your friends without your permission. Many news stories imply we were shipping over private messages to partners, which is not correct.
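In API terms, what Facebook describes is scope-gated access: each partner app holds explicit grants, and every operation is checked against them before it runs. Here is a minimal sketch of that idea — all names, scopes, and the grants table are invented for illustration and are not Facebook’s actual API:

```python
# Hypothetical sketch of scope-gated message access, illustrating the
# read/write/delete grants described above. All names here are invented.

GRANTS = {
    "spotify": {"read", "write", "delete"},  # a messaging-integration partner
    "some_other_app": set(),                 # an app with no message access
}

def check(app: str, scope: str) -> None:
    """Refuse any operation the app was not explicitly granted."""
    if scope not in GRANTS.get(app, set()):
        raise PermissionError(f"{app} lacks '{scope}' access")

def send_message(app: str, user: str, text: str) -> str:
    check(app, "write")   # "write access": compose a message on the user's behalf
    return f"{user}: {text}"

def read_messages(app: str, user: str) -> list:
    check(app, "read")    # "read access": show replies inside the partner app
    return []             # the user's message thread would be returned here

def delete_message(app: str, user: str, msg_id: int) -> None:
    check(app, "delete")  # "delete access": mirror in-app deletions back
```

Under this model, the grant belongs to the integration, and nothing happens outside a user-initiated call — which is the distinction Facebook is drawing between enabling a feature and shipping messages wholesale to partners.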

And I read other background, including from Alex Stamos, Facebook’s former head of security, who has been an honest broker in these discussions:

And there’s James Ball, a respected London journalist — ex Guardian and BuzzFeed — who is writing a critical book about the internet:

In short: Of course, Netflix and Spotify had to be given the ability to send, receive, and delete messages as that was the only way the messaging feature could work for users. Thus in its story The Times comes off like a member of Congress grandstanding at a hearing, willfully misunderstanding basic internet functionality. Its report begins on a note of sensationalism. And not until way down in the article does The Times fess up that it similarly received a key to Facebook data. So this turns out not to be the ideal opening for inveighing. But I won’t pass up the opportunity.

The moral net

I’ve had a piece in the metaphorical typewriter for many months trying to figure out how to write about the moral responsibility of technology (and media) companies. It has given me an unprecedented case of writer’s block as I still don’t know how to attack the challenge. I interviewed a bunch of people I respect, beginning with my friend and mentor Jay Rosen, who said that we don’t even have agreement on the terms of the discussion. I concur. People seem to assume there are easy answers to the questions facing the platforms, but when the choices get specific — free speech vs. control, authority vs. diversity, civility as censorship — the answers no longer look so easy.

None of this is to say that Facebook is not fucking up. It is. But its fuckups are not so much of the kind The Times, The Guardian, cable news, and others in media dream of in their dystopias: grand theft user data! first-degree privacy murder! malignant corporate cynicism! war on democracy! No, Facebook’s fuckups are cultural in the company — as in the Valley — which is to say they are more complex and might go deeper.

For example, I was most appalled recently when Facebook — with three Jewish executives at the head — hired a PR company to play into the anti-Semitic meme of attacking George Soros because he criticized Facebook. What the hell were they thinking? Why didn’t they think?

This case, I think, revealed the company’s hubristic opacity, the belief that it could and should get away with something in secret. I’m sure I needn’t point out the irony of a company celebrating publicness being so — to understate the case — taciturn. Facebook must learn transparency, starting with openness about its past sins. I’ve been saying the company needs to perform an audit of its past performance and clear the decks once and for all. But transparency is not just about confession. Transparency should be about pride and value. From the top, Facebook needs to infuse its culture with the idea that everything everyone does should shine in the light of public scrutiny. The company has to learn that secrecy is neither a cloak nor a competitive advantage (hell, who are its competitors anyway?) but a severe liability.

Facebook and its leaders are often accused of cynicism. I have a different diagnosis. I think they are infected with latent and lazy optimism. I do believe that they believe a connected world is a better world — and I agree with that. But Facebook, like its neighbors in Silicon Valley, harbored too much faith in mankind and — apart from spam — did not anticipate how it would be manipulated and thus did not guard against that and protect the public from it. I often hear Facebook accused of leaving trolling and disinformation online because it makes money from those pageviews. Nonsense. Shitstorms are bad for business. I think it’s the opposite: Facebook and the other platforms have not calculated the full cost of finding and compensating for manipulation, fraud, and general assholery. And in some fairness to them, we as a society have not yet agreed on what we want the platforms to do, for I often hear people say — in the same breath or paragraph — that Facebook and Twitter and YouTube must clean up their messes … but also that no one trusts them to make these judgments. What’s a platform to do?

If Facebook and its league had acted with transparent good faith in enacting their missions — and had anticipated bad faith from some small segment of malignant mankind — then perhaps when Russian or other manipulation reared its head the platforms would have been on top of the problem and would even have garnered sympathy for being victims of these bad actors. But no. They acted as if impervious when they weren’t, and that made it easier to yank them down off their high horses. Media — once technoboosters — now treat the platforms, especially Facebook, as malign actors whose every move and motive is to destroy society.

I have argued for a few years now that Facebook should hire an editor to bring a sense of public responsibility to the company and its products. As a journalist, that’s rather conceited, for as I’ll confess shortly, journalists have issues, too. Then perhaps Facebook should hire ethicists or philosophers or clergy or an Obama or two. It needs a strong, empowered, experienced, trusted, demanding, tough force in its executive suite with the authority to make change. While I’m giving unsolicited advice, I will also suggest that when Facebook replaces its outgoing head of communications and policy, Elliot Schrage, it should separate those functions. The head of policy should ask and demand answers to tough questions. The head of PR is hired to avoid tough questions. The tasks don’t go together.

So, yes, I’ll criticize Facebook. But I also believe it’s important for us in journalism to work with Facebook, Twitter, Google, YouTube, et al because they are running the internet of the day; they are the gateways to the public we serve; and they need our help to do the right thing. (That’s why I do what I do in the projects I linked to in the first sentence above.)

Moral exhibitionism

Instead, I see journalists tripping over each other to brag on social media about leaving social media. “I’m deleting Facebook — find me on Instagram,” they proclaim, without irony. “I deleted Facebook” is the new “I don’t own a TV.” This led me to tweet:

People with discernible senses of humor got the gag. One person attacked me for not attacking Facebook. And meanwhile, a few journalists agonized about the choice. A reporter whose work I greatly respect, Julia Ioffe, was visibly torn, asking:

I responded that Facebook enriches her reporting and that journalists need more — not fewer — ways to listen to the public we serve. She said she agreed with that. (I just asked what she decided and Ioffe said she is staying on Facebook.)

Quitting Facebook is often an act of the privileged. (Note that lower-income teens are about twice as likely to use Facebook as teens from richer families.) It’s fine for white men like me to get pissy and leave because we have other outlets for our grievances and newsrooms are filled with people who look like us and report on our concerns. Without social media, the nation would not have had #metoo or #blacklivesmatter or, most tellingly, #livingwhileblack, which reported nothing that African-Americans haven’t experienced but which white editors didn’t report because it wasn’t happening to them. The key reason I celebrate social media is that it gives voice to people who for too long have not been heard. And so it is a mark of privilege to condemn all social media — and the supposed unwashed masses using them — as uncivilized. I find that’s simply not true. My Facebook and Twitter feeds are full of smart, concerned, witty, constructive people with a wide (though it could always be wider) diversity of perspective. I respect them. I learn by listening to them.

When I talked about all this on the latest This Week in Google, I received this tweet in response:

I thanked Jeff and immediately followed him on Facebook.

A moral mirror

These days, too much of the reporting about the internet is done without knowledge of how technology works and without evidence behind the accusations made. I fear this is fueling a moral panic that will lead to legislation and regulation that will affect not just a few Silicon Valley technology companies but everyone on the net. This is why I so appreciate voices like Rasmus Kleis Nielsen, now head of the Reuters Institute for the Study of Journalism at Oxford, who often meets polemical presumptions about the net — for example, that we are supposedly all hermetically sealed in filter bubbles — with demands for evidence as well as research that dares contradict the pessimistic assertion. This is why I plan to convene a meeting of similarly disciplined researchers to examine how bad — or good — life on the net really is and to ask what yet needs to be asked to learn more.

Dave Winer, a pioneer in helping to create so much of the web we enjoy today (podcasts, RSS, blogging…), is quite critical of the closed systems the platforms create but was also very frustrated with the New York Times story that inspired this post:

Could this be why usually advocacy-allergic news organizations are oddly taking it upon themselves to try to convince people to delete Facebook?

This is also why Dave had a suggestion for journalists covering technology and for journalism schools:

A bunch of us journo profs jumped on his idea and I hope we all make it happen soon.

But there’s more journalists need to do. As we in news and media attack the platforms and their every misstep — and there are many — we need to turn the mirror on ourselves. It was news media that polarized the nation into camps of red v. blue, white v. black, 1 percent v. 99 percent long before Facebook was born. It was our business model in media that favored confrontation over resolution. It was our business model in advertising that valued volume, attention, page views, and eyeballs — the business model that then corrupted the internet. It was our failure to inform the public that enabled people to vote against their self-interest for Trump or Brexit. We bear much responsibility for the horrid mess we are in today.

So as we demand transparency of Facebook, I ask that we demand it of ourselves. As we expect ethical self-examination in Silicon Valley, we should do likewise in journalism. As we criticize the skewed priorities and moral hazards of technology’s business model, let us also recognize the pitfalls of our own — and that includes not just clickbait advertising but also paywalls and patronage (which will redline journalism for the privileged). Let us also be honest with ourselves about why trust in journalism is at historic lows and why people chose to leave the destinations we built for them, instead preferring to talk among themselves on social media. Let him who should live in a glass house — and expects everyone else to live in glass houses — think before throwing stones.

I’m neither defending nor condemning Facebook and the other platforms. My eyes are wide open about their faults — and also ours. They and the internet they are helping to build are our new reality and it is our mutual responsibility to build a better net and a better future together. These are difficult, nuanced problems and opportunities.

Congratulations, America. Victory against Infowars!

 

You did it, O, you denizens of social media, you sharers of cats, you time-wasters, you. With every appalled tweet and retweet and angry emoji on Facebook, you vanquished the foe, Infowars. You got it banished from Facebook, Apple, YouTube, and Spotify. Congratulations.

I have no inside information to know what made the platforms finally come to their senses. But I will bet that it was the cover provided by the public on social media that gave them the courage to do the right thing.

Consider what Sleeping Giants and Shannon Coulter’s #GrabYourWallet did to get thousands of advertisers to drop Breitbart. After Kellogg dropped Breitbart back in 2016, right-wingers tried to declare a cereal boycott. It fizzled like stale Rice Krispies. Then the social pressure started on every advertiser that appeared on Breitbart and by the hundreds they flew away. I spoke with advertisers who did not resent Sleeping Giants for this. No, they were grateful for the cover.

Meanwhile, #GrabYourWallet also put pressure on retailers to stop carrying the merchandise of the enabler-in-chief and éminence greed, Ivanka Trump, and she killed her company. Last week, many of us brought a shitstorm down on the Newseum for selling fake news T-shirts and they relented. Many of us keep screaming about cable news shows inviting on Trump’s liars, and at least a few listened: Morning Joe stopped inviting Kellyanne Conway, and Joy Reid, Nicolle Wallace, and Rachel Maddow stopped giving free airtime, unencumbered with context, to Trump rallies, press briefings, and tweets.

So that is your job, America: Keep demanding the best of platforms when it comes to distributing extremist bile. Demand the best of brands, ad networks, ad agencies, and retailers when it comes to supporting their shit. And demand the best of media — I’m looking at you, cable news — when it comes to inviting pathological liars and extremist nut jobs on your air to amplify their hate and disinformation.

Now it would be nice if the companies that run the internet had long since shown the decency, good sense, and courage to do this on their own. But it seems they feared blowback from the other side, the indecent side, the allies of Infowars and you-know-who. Well, we showed them who is more powerful.

Now I know there’s a risk here. A tool is a tool and bad guys can use them just as well as good guys. Indeed, it was the far right that first went after Facebook with accusations that it was disadvantaging conservative news in its (now gone, thank you) Trending feature. Facebook caved and then cowered — until now. I don’t want to see mobs going after voices because of disagreement. But that’s not what happened here. Citizens went after companies to uphold basic standards of decency. Big difference.

 

My message here is simple: Keep it up, social media. Keep it up, America. Demand the best of ourselves, our technology companies and media companies and their advertisers. Then come November, demand the best of the women and men who represent you in government.

What you’re seeing is democracy and civilization in action — and civilization is winning. At last.

 

What’s Wrong With This Picture?

Facebook is on its way to hiring 20,000 people to identify the hate and bile that we, the people, leave there because laws — Germany’s NetzDG, among others — and media demand it. Let me repeat that: 20,000 employees.

Now consider that the total number of daily newspaper journalists in America was 32,900 in 2015 and is probably below 30,000 today.

20,000 shit-pickers vs. 30,000 journalists.

What does that say about our priorities as a society? Yes, I know, I’m mixing a worldwide number (the 20,000 conversational janitors) with a U.S. number (journalists) but the scale is telling — not so much about Facebook or technology or business models but about us.

By these numbers, it is clear that we as a society are more concerned about policing playground twits who thereby get just what they want — attention — than about policing the truly powerful. How screwed up is that?

Now there are plenty of people who wish that Facebook would pay for journalists. Though I have argued that Facebook should hire journalists to bring a sense of public responsibility to the company, I do not believe Facebook, Google, or Twitter should build newsrooms to compete with news organizations. And, like many, I hope we find more ways for all the platforms to share more revenue and value with news companies to help pay for more journalism. If Facebook et al were not wasting so much money on the garbage crew, could they afford to be more generous to news? That depends on the value news brings to their users.

What can we do about this? Well, start here: Stop blaming everything we do on the platforms and expecting them to clean up our every mess. Maybe we, the users, should stop giving the trolls, twits, assholes, and Russians attention to rob them of their reasons to belch. Maybe we, the users, should ignore their crap (I have very little of it in my feed and I’ll bet that’s true for you, too) so we can see more resources devoted to watching the powerful. Maybe we, the users, should take more responsibility for reporting bad behavior — which will work only if the platforms, in turn, take the responsibility to listen to and act on what we say. Maybe media can recognize their role in polarizing society and valuing arguments over enlightenment. And, yes, the platforms should worry about the quality of conversation and information on their platforms. But can we also get them to pay attention to quality over crap? That is the real question I raise here.

Think of the problem this way: Every time some shithead spews hate, bigotry, lies, and idiocy, he (yes, I’m sure most are men) diverts societal resources from positive impact to cleaning the sewers. Being too optimistic about the behavior of our fellow citizens is what got us — platforms, society, citizens — in this mess. But expecting and devoting resources to the worst behavior is little better.

We can all do better.

Perspective, please

I’m going to straddle a sword: on the one hand criticizing the platforms for not taking their public responsibility seriously enough, and on the other pleading for some perspective before we descend into a moral panic with unintended consequences for the net and the future.

[Disclosure: I raised $14 million for the News Integrity Initiative at CUNY from Facebook, Craig Newmark, the Ford Foundation, AppNexus, and others. We are independent of Facebook and I personally receive no money from any platform.]

The Observer’s reporting on Cambridge Analytica’s exploitation of Facebook data on behalf of Donald Trump has raised what the Germans call a shitstorm. There are nuances to this story I’ll get to below. But to begin, suffice it to say that Facebook is in a mess. As much as the other platforms would like to hide behind their schadenfreude, they can’t. Google has plenty of problems with YouTube (I write this the night before Google is set to announce new mitzvahs to the news industry). And Twitter is wisely begging for help in counteracting the ill effects it now concedes it has had on the health of the public conversation.

The platforms need to realize that they are not trusted. (And before media wrap themselves up in their own blanket of schadenfreude, I will remind them that they are not trusted either.) The internet industry’s cockiness cannot stand. They must listen to and respect concerns about them. They must learn humility and admit how hard that will be for them. They need to perform harsh and honest self-examinations of their cultures and moral foundations. Underlying all this, I believe they must adopt an ethic of radical transparency.

For a few years, I’ve been arguing that Facebook and its fellows should hire journalists not just to build relationships with media companies but more importantly to embrace a sense of public responsibility in decisions about their products, ranking, experiments, and impact. Now they would do well to also hire ethicists, psychologists, philosophers, auditors, prosecutors, and the Pope himself to help them understand not how to present themselves to the world — that’s PR — but instead to fully comprehend the responsibility they hold for the internet, society, and the future.

I still believe that most people in these companies themselves believe that they are creating and harnessing technology for the good. What they have not grokked is the greater responsibility that has fallen on them based on how their technologies are used. In the early days of the internet, the citizens of the net — myself included — and the platforms that served them valued openness über alles. And it was good. What we all failed to recognize was — on the good side — how much people would come to depend on these services for information and social interaction and — on the bad side — how much they would be manipulated at scale. “When we built Twitter,” Ev Williams said at South by Southwest, “we weren’t thinking about these things. We laid down fundamental architectures that had assumptions that didn’t account for bad behavior. And now we’re catching on to that.”

This means that the platforms must be more aware of that bad behavior and take surer steps to counteract it. They must make the judgments they feared making when they defended openness as a creed. I will contend again that this does not make them media companies; we do not want them to clean and polish our internet as if the platforms were magazines and the world were China. We also must recognize the difficulty that scale brings to the task. But they now have little choice but to define and defend quality on their platforms and in the wider circles of impact they have on society in at least these areas:

  • Civility of the public conversation. Technology companies need to set and enforce standards for basic, civilized behavior. I still want to err on the side of openness but I see no reason to condone harassment and threats, bigotry and hate speech, and lies as incitement. (By these considerations, Infowars, for example, should be toast.)
  • An informed public conversation. Whether they wanted it or not, Facebook and Twitter particularly — and Google, YouTube, Snap and others as well — became the key mechanisms by which the public informs itself. Here, too, I’ll err on the side of openness but the platforms need to set standards for quality and credibility and build paths that lead users to both. They cannot walk away from the news because it is messy and inconvenient for we depend upon them now.
  • A healthy public sphere. One could argue that Facebook, Twitter, et al are the victims of manipulation by Russia, Cambridge Analytica, trolls, the alt-right, and conspiracy theorists. Except that they are not the bad guys’ real targets. We are. The platforms have an obligation to detect, measure, reveal, and counteract this manipulation. For a definition of manipulation, I give you C. Wright Mills in The Power Elite: “Authority is power that is explicit and more or less ‘voluntarily’ obeyed; manipulation is the ‘secret’ exercise of power, unknown to those who are influenced.”

Those are broad categories regarding the platforms’ external responsibilities. Internally they need to examine the ethical and moral bases for their decisions about what they do with user data, about what kinds of behaviors they reward and exploit, about the impact of their (and mass media’s) volume-based business model in fostering clickbait, and so on.

If the internet companies do not get their ethical and public acts together and quickly — making it clear that they are capable of governing their behavior for the greater good — I fear that the growing moral panic overtaking discussion of technology will lead to harmful legislation and legal precedent, hampering the internet’s potential for us all. In the rush to regulation, I worry that we will end up with more bad law (like Germany’s NetzDG hate-speech law and Europe’s right-to-be-forgotten court ruling — each of which, paradoxically, fights the platforms’ power by giving them more power to censor speech). My greater fear is that the regulatory mechanisms installed for good governments will be used by bad ones — and these days, what country does not worry about bad government? — leading to a lowest common denominator of freedom on the net.


So now let me pose a few challenges to the platforms’ critics.

On the current Cambridge Analytica story, I’ll agree that Facebook is foolish to split hairs about the use of the word “breach” even if Facebook is right that it wasn’t one. But it behooves us all to get the story right. Please read the complete threads (by opening each tweet) from Jay Pinho and Patrick Ruffini:

Note well that Facebook created mechanisms to benefit all campaigns, including Barack Obama’s. At the time, this was generally thought to be a good: using a social platform to enable civic participation. What went wrong in the meantime was (1) a researcher broke Facebook’s rules and shared data intended for research with his own company and then with Cambridge Analytica and (2) Donald Trump.

So do you think that Facebook should be forbidden from helping political campaigns? If we want television and the unlimited money behind it to lose influence in our elections, shouldn’t we desire more mechanisms to directly, efficiently, and relevantly reach voters by candidates and movements? If you agree, then what should be the limits of that? Should Facebook choose good and bad candidates as we expect them to choose good and bad news? I could argue in favor of banning or not aiding, say, a racist, admitted sexual abuser who incites hatred with conspiracy theories and lies. But what if such a person becomes the candidate of one of two major parties and ultimately the victor? Was helping candidates good before Trump and bad afterwards?

Before arguing that Facebook should never share data with anyone, know that there are many researchers who are dying to get their hands on this data to better understand how information and disinformation spread and how society is changing. I was among many such researchers some weeks ago at a valuable event on disinformation at the University of Pennsylvania (where, by the way, most of the academics in attendance scoffed at the idea that Cambridge Analytica actually had a secret sauce and any great power to influence elections … but now’s not the time for that argument). So what are the standards you expect from Facebook et al when it comes to sharing data? To whom? For what purposes? With what protections and restrictions?

I worry that if we reach a strict data crackdown — no data ever shared or used without explicit permission for the exact purpose — we will cut off the key to the only sustainable future for journalism and media that I see: one built on a foundation of delivering relevant and valuable services to people as individuals and members of communities, no longer as an anonymous mass. So please be careful about the laws, precedents, and unintended consequences you set.

When criticizing the platforms — and yes, they deserve criticism — I would ask you to examine whether their sins are unique. The advertising model we now blame for all the bad behavior we see on the net originated with and is still in use by mass media. We in news invented clickbait; we just called it headlines. We in media also set in motion the polarization that plagues society today with our chronic desire to pit simplistic stereotypes of red v. blue in news stories and cable-news arguments. Mass media is to blame for the idea of the mass and its results.

When demanding more of the platforms — as we should — I also would urge us to ask more of ourselves, to recognize our responsibility as citizens in encouraging a civil and informed conversation. The platforms should define bad behavior and enable us to report it. Then we need to report it. Then they need to act on what we report. And given the scale of the task, we need to be realistic in our expectations: On any reasonably open platform, someone will game the system and shit will rise — we know that. The question is how quickly and effectively the platforms respond.

I’ll repeat what I said in a recent post: No one — not platforms, not ad agencies and networks, not brands, not media companies, not government, not users — can stand back and say that disinformation, hate, and incivility are someone else’s problem to solve. We all bear responsibility. We all must help by bringing pressure and demanding quality; by collaborating to define what quality is; by fixing systems that enable manipulation and exploitation; and by contributing whatever resources we have (ad dollars to links to reporting bad actors).

Finally, let’s please base our actions and our pressure on platforms and government on research, facts, and data. Is Facebook polarizing or depolarizing society? We do not know enough about how Facebook and Twitter affected our election and we would be wise to know more before we think we can prescribe treatments that could be worse than the disease. That’s not to say there isn’t plenty we know that Facebook, Google, Twitter, media, and society need to fix now. But treating technology companies as the agents of ill intent that maliciously ruin our elections and split us apart and addict us to our devices is simplistic and ultimately won’t get us to the real problems we all must address.

Today I talked about this with my friend and mentor Jay Rosen — who four years ago wrote this wise piece about the kind of legitimacy platforms rely upon. Jay said we really don’t have the terms and concepts we need for this discussion. I agree.

I’ve been doing a lot of reading lately about the idea of the mass and its reputed manipulation at the hands of powerful and bad actors at other key moments in history: the French and American revolutions; the Industrial Revolution; the advent of mass media. At each turning point, scholars and commentators worried about the impact of the change and struggled to find the language to describe and understand it. Now, in the midst of the digital revolution, we worry and struggle again. Facebook, Google, Twitter, and many of the people who created the internet we use today have no way to fully understand what their machines really do. Neither do we. I, for example, preached the openness that became the architecture and religion of the platforms without understanding the inevitability of that openness breeding trolls. We cannot use our analogs of the past to explain this future. That can be frightening. But I will continue to argue — optimist to a fault — that we can figure this out together.

The Flight to Quality is on the Runway

Sometimes, things need to get bad before they can get good. Such is the case, I fear, with content, conversation, and advertising on the net. But I see signs of progress.

First let’s be clear: No one — not platforms, not ad agencies and networks, not brands, not media companies, not government, not users — can stand back and say that disinformation, hate, and incivility are someone else’s problem to solve. We all bear responsibility. We all must help by bringing pressure and demanding quality; by collaborating to define what quality is; by fixing systems that enable manipulation and exploitation; and by contributing whatever resources we have (ad dollars to links to reporting bad actors).

Last May, I wrote about fueling a flight to quality. Coming up on a year later, here’s what I see happening:

Platforms:

  • Twitter CEO Jack Dorsey recently posted a thread acknowledging his company’s responsibility for the health and civility of the public conversation and asking for help with a bunch of very knotty issues: balancing openness with civility, and free speech with identifying and stopping harassment and disinformation. It is an important step.
  • Facebook made what I have now come to understand was an unintended but critical mistake at the start of News Feed when it threw all “public content” into one big tub with few means for identifying differences in quality. Like Twitter and like the net culture itself, Facebook valued openness and equality. But when some of that content — especially Russian disinformation and manipulation campaigns — got them in trouble, they threw out the entire tub of bathwater. Now they’re trying to bring some of the better babies back by defining and promoting quality news. This, too, involves many difficult questions about the definitions of quality and diversity. But when it is done, I hope that good content can stand out.
  • In that post last May, I wrote about how Google Search would thenceforth account for the reliability, authority, and quality of sources in ranking. Bravo. I believe we will see that as a critical moment in the development of the net. But as we see in the news about Logan Paul and Alex Jones on YouTube, there is still work to be done on the ad side of the company. A system that enables platforms to give an audience, and major brands to give financial support, to the likes of Jones is broken. Can we start there?

Advertising:

  • Through the News Integrity Initiative,* we helped start an effort called Open Brand Safety to identify the worst, low-hanging, rotten fruit of disinformation sites to help advertisers shun them. It’s still just a bare beginning. But through it, we have seen that not-insignificant amounts of ad dollars still go to known crap sites.
  • That is why I’ve joined an effort to organize a meeting later this month, bringing together the many organizations trying to identify signals of quality v. crap with representatives from platforms, ad networks, ad agencies, brands, NGOs, and others. I do not believe the solution is one-size-fits-all blacklists and whitelists, for it is impossible to define trust and quality for everyone — Gerber and Red Bull have different standards for advertising, as they should. What I’ve been arguing for is a network made up of all these constituencies to share signals of quality or a lack of it so each company can use that information to inform decisions about ranking, promotion, ad buys, and so on. I’ll report more when this happens.

I’ve spoken with the people in these companies and I believe they are sincere in trying to tackle this problem. I also see the complexity of the issues involved. We all want to preserve the openness of our internet, but we also have to acknowledge that that openness makes the net vulnerable to manipulation by bad actors. So, to start, we need to recognize, reveal, and counteract that manipulation while also identifying and supporting good content.

It is because I believe in the need for openness that I will continue to argue that the internet is not a medium and the platforms are not publishers. When the net is viewed as a next-generation medium like a newspaper or TV network, that brings perilous presumptions — namely that the net should be edited and packaged like a media property. I don’t want that. I treasure the openness and freedom that allow me to blog and say whatever I want and to find and hear voices I never was able to hear through the eye of media’s needle.

I also think it’s important to recognize that scale is a double-edged sword: It is the scale of the net and the platforms that enables anyone anywhere to speak to anyone else without need of capital, technical expertise, or permission. But it is also scale that makes the problems being addressed here so difficult to attack. No, the platforms should not — I do not want them to — pass judgment on everything that is posted on the net through them. I do not want the platforms to be my or your editor, to act like media or to build a Chinese internet.

But the platforms — and media companies like them — can no longer sit back and argue that they are just mirrors to society. Society warped and cracked itself to exploit their weaknesses. Facebook is not blameless in enabling Russian disinformation campaigns; YouTube is not blameless in creating a mechanism that allows and pays for Alex Jones to spew his bile; Twitter is not blameless in helping to foster incivility. Add to that: news organizations are not blameless in helping to spread disinformation and give it attention, and in fueling polarization and incivility. The ad industry is not blameless in helping to support the manipulators, spammers, trolls, and haters. Law enforcement is not blameless when it does not alert platforms and media companies to intelligence about bad actors. And you — yes, you and I — are not blameless when we share, click on, laugh at, encourage, and fail to report the kind of behavior that threatens our net.

Every effort I mention here is just a beginning. Every one of them is entangled with knotty questions. We need to help each other tackle this problem and protect our net. We need to discuss our mutual, moral responsibility to society and to an informed, civil, and productive public conversation.

There is much more to be done: Journalists and news organizations need to help the platforms define quality (wisely but generously to continue to encourage new and diverse voices). Journalists should also get smarter about not being exploited by manipulators. And news organizations need to do much more to build bridges between communities in conflict to foster understanding and empathy. The platforms, researchers, law enforcement, and NGOs should share alerts about manipulation they see to cut the bad guys off at the pass. Ad networks and platforms have to make it possible for advertisers to support the quality not the crap (and not claim ignorance of where their dollars go). Consumers — now banded together by campaigns like Sleeping Giants and Grab Your Wallet — need to continue to put pressure on platforms, brands, agencies, and networks and thereby give them cover so they are empowered to do what’s right.

Above all, let’s please remember that the internet is not ruined just because there are a few assholes on it. This, too, is why I insist on not seeing the net as a medium. It is Times Square. On Times Square, you can find pickpockets and bad Elmos and idiots, to be sure. But you also find many more nice tourists from Missoula and Mexico City and New Yorkers trying to dodge them on their way to work. Let’s bring some perspective to the media narrative about the net today. Please go take a look at your Facebook or Twitter or Instagram feeds or any Google search. I bet you will not find them infested with Nazis and Russians and trolls, oh, my. I bet you will still find, on the whole, decent people like you and me. I fear that if we get carried away by moral panic we will end up not with a bustling Times Square of an internet but with China or Singapore or Iran as the model for a controlled digital future.

The net is good. We can and should make it better. We must protect it. That’s what these efforts are about.

*Disclosure: NII is funded by Facebook, Craig Newmark, the Ford Foundation, AppNexus, and others.