Posts about regulation

How the Dystos Defeated the Technos: A dystopian vision

History 302 — Fall semester, 2032 — Final essay

[This is not a prediction about tomorrow; it is extrapolation from today -Ed.]

Summary

This paper explores the victory of technological dystopians over technologists in regulation, legislation, courts, media, business, and culture across the United States, Europe, and other nations in the latter years of what is now known as the Trump Time.

The key moment for the dystos came a decade ago, with what Wired.com dubbed the Unholy Compromise of 2022, which found Trumpist conservatives and Warrenite liberals joining forces to attack internet companies. Each had their own motives — the Trumpists complaining about alleged discrimination against them when what they said online was classified as hate speech; the liberals inveighing against data and large corporations. It is notable that in the sixteen years of Trump Time, virtually nothing else was accomplished legislatively — not regarding climate, health care, or guns — other than passing anti-tech, anti-net, and anti-data laws.

In the aftermath, the most successful internet companies — Alphabet/Google, Facebook, Amazon — were broken up by regulators (but interestingly Comcast, Verizon, AT&T, Microsoft, Twitter, and the news behemoth Fox-Gatehouse-Sinclair were not). Collection and use of data by commercial entities as well as by academic researchers were severely curtailed by new laws. Moderation requirements and consequent liability for copyright violations, hate, falsehood, unauthorized memories, and other forbidden speech were imposed on social-media companies, and then, via court rulings, on media and news organizations as well as individuals online. New speech courts were established in the U.S., the European Union, and the former United Kingdom countries to adjudicate disputes of falsehood and hate as well as information ownership, excessive expression, and political neutrality by net companies. Cabinet-level technology regulators in the U.S., the E.U., Canada, and Australia established mechanisms to audit algorithms, supported by software taxes as well as fines against technology companies, executives, and individual programmers. Certain technologies — most notably facial recognition — were outright outlawed. And in many American states, new curricula were mandated to educate middle- and high-school students about the dangers of technology.

Unintended consequences

The impact of all this has been, in my opinion, a multitude of unintended consequences. The eight companies resulting from the big net breakups are all still profitable and leading their now-restricted sectors with commanding market shares, and many have quietly expanded into new fields as new technologies have developed. Their aggregate market value has increased manyfold and no serious challengers have emerged.

Academic studies of divisiveness, hate, and harassment — though limited in their scope by data laws — have shown no improvement and most have found a steady decline in online decency and respect, especially as trolls and sowers of discord and disinformation took to nesting in the smaller, off-shore platforms that regularly sprout up from Russia, lately China, and other nations unknown. Other studies have found that with the resurrection of media gatekeepers in a more controlled ecosystem of expression, minority voices are heard less often in mainstream media than before the Compromise.

Even though news and media companies and their lobbyists largely won political battles by cashing in their political capital to gain protectionist legislation, these legacy companies have nonetheless continued and accelerated their precipitous declines into bankruptcy and dissolution, with almost half of legacy news organizations ceasing operation in the last decade even as legislatively blessed media consolidation continues.

I would not go so far as to declare that we have reached the dystopia of the dystopians, though some would. In his final book, The Last Optimist, Jeff Jarvis wrote:

Far too early in the life of the internet and its possibilities, the dystos have exhibited the hubris of the self-declared futurist to believe they could foretell everything that could go wrong — and little that could go right — with internet and data technologies. Thus in their moral panic they prematurely defined and limited these technologies and cut off unforeseen opportunities. We now live in their age of fear: fear of technology, fear of data (that is, information and knowledge), fear of each other, fear of the future.

There are many reasons to be angry with the technology companies of the early internet. They were wealthy, hubristic, optimistic, expansionist, and isolated, thus deaf to the concerns — legitimate and not — of the public, media, and government. They were politically naive, not understanding how and why the institutions the net challenged — journalism, media, finance, politics, government, even nations — would use their collaborative clout and political capital to fight back and restrain the net at every opportunity. They bear but also share responsibility for the state of the net and society today with those very institutions.

Doctrines of dystos

In examining the legislation and precedents that came before and after the Compromise, certain beliefs, themes, and doctrines emerged:

The Doctrine of Dangerous Data: It would be simplistic to attribute a societal shift against “data” solely to Facebook’s Cambridge Analytica scandal of 2016, but that certainly appears to have been a key moment triggering the legislative landslide that followed. Regulation of data shifted from its use to its collection as laws were enacted to limit the generation, gathering, storage, and analysis of information associated with the internet. Scores of laws now require that data be used only for the single purpose stated at collection, and others impose strict limits on the life of data, mandating its expiration and erasure. Academics and medical researchers — as well as some journalists — have protested such legislation, contending that it all but kills their ability to find correlation and causation in their fields, but they have failed to overturn a single law. Note well that similar data collection offline — by stores through loyalty cards, banks through credit cards, and so on — has seen no increase in regulation; marketers and publishers still make use of mountains of offline data in their businesses.

News companies and their trade associations demonized the use of data by their competitors, the platforms. In our Geoffrey Nunberg reading, “Farewell to the Information Age,” he quotes Philip Agre saying that “the term ‘information’ rarely evokes the deep and troubling questions of epistemology that are usually associated with terms like ‘knowledge’ and ‘belief.’ One can be a skeptic about knowledge but not about information. Information, in short, is a strikingly bland substance.” “Data,” on the other hand, became a dystopian scare word thanks to campaigns led by news and media companies and their trade associations and lobbyists, using their own outlets.

The Debate over Preeminent Data Ownership: In the 2020 American elections and every one following, countless politicians have vowed to protect consumers’ ownership of “their data” — and passed many laws as a result — but courts have still not managed to arrive at a consistent view of what owning one’s data means. Data generation is so often transactional — that is, involving multiple parties — that it has proven difficult to find a Solomonic compromise in deciding who has preeminent rights over a given piece of data and thus the legal right to demand its erasure. In California v. Amazon Stores, Inc. — which arose from a customer’s embarrassment about purchases of lubricating gels — the Supreme Court decided, in an expansion of its long-held Doctrine of Corporate Personhood, that a company has equal rights and cannot be forced to forget its own transactions. In Massachusetts v. Amazon Web Services, Inc., an appellate panel ruled that AWS could be forced to notify individuals included in databases it hosted and in one case could be forced to erase entire databases upon demand by aggrieved individuals. Despite friend-of-the-court filings by librarians, educators, and civil libertarians, claims of a countervailing right to know or remember by parties to transactions — or by the public itself — have failed to dislodge the preeminence of the right to be forgotten.

Privacy Über Alles: Privacy legislation — notably Europe’s General Data Protection Regulation (GDPR) — led the way for all net legislation to follow. Every effort to track any activity by people — whether by cameras or cookies — was defined as “surveillance” and was banned under a raft of net laws worldwide. In every single case, though, these technologies were reserved for government use. Thus “surveillance” lost its commercial meaning and regained its more focused definition as an activity of governments, which continue to track citizens. Separate legislation in some nations granted people the expectation of privacy in public, which led to severe restrictions on photography by not only journalists but also civilians, requiring that the face of every unknown person in a photo or video who has not given written consent — at a birthday party in a restaurant, say — be blurred.

The Doctrine of Could Happen: A pioneering 2024 German law that has been — to use our grandparents’ term — xeroxed by the European Commission and then the United States, Canada, Australia, and India requires that companies file Technology Impact Reports (TIRs) for any new technology patent, algorithm, or device introduced to the market. In brief, the TIR laws give limited liability protection for any possible impact that has been revealed before the introduction of a technology; if a possible outcome is not anticipated and listed and then occurs, there is no limit to liability. Thus an entirely new industry — complete with conventions, consultants, and newsletters — has exploded to help any and every company using technology to imagine and disclose everything that could go wrong with any software or device. There is no comparable industry of consultants ready to imagine everything that could go right, for the law does not require or even suggest that as a means to balance decisions.

Laws of Forbidden Technologies: As an outcome of the Doctrine of Could Happen, some entire technologies — most notably facial recognition and bodily tracking of individuals’ movements in public places, such as malls — have been outright banned from commercial, consumer, or (except with severe restrictions) academic use in Germany, France, Canada, and some American states. In every case, the right to use such technologies is reserved to government, raising fears of misuse by those with the greatest power to abuse them. There are also statutes banning and providing penalties for algorithms that discriminate on various bases, though in a number of cases, courts are struggling to define precisely what statutory discrimination is (against race, certainly, but also against opinion and ideology?). Similarly, statutes requiring algorithmic transparency are confounding courts, which have proven incapable of understanding formulae and code. Not only technologies are subject to these laws; so are the technologists who create them. English duty-of-care online harms laws (which were not preserved in Scotland, Wales, and Northern Ireland after the post-Brexit dissolution of the United Kingdom) place substantial personal liability and career-killing fines not only on internet company executives but also on technologists, including software engineers.

The Law of China: The paradox is lost on no one that China and Russia now play host to the most vibrant online capitalism in the world, as companies in either country are not bound by Western laws, only by fealty to their governments. Thus, in the last decade, we have seen an accelerated reverse brain-drain of technologists and students to companies and universities in China, Russia, and other politically authoritarian but technologically inviting countries. Similarly, venture investment has fled England entirely, and the U.S. and E.U. in great measure. A 2018 paper by Kieron O’Hara and Wendy Hall posited the balkanization of the internet into four nets: the open net of Silicon Valley, the capitalist net of American business, the bourgeois and well-behaved net of the E.U., and the authoritarian net of China. The fear then was that China — as well as Iran, Brazil, Russia, and other nations that demanded national walls around their data — would balkanize the net. Instead, it was the West that balkanized the net with its restrictive laws. Today, China’s authoritarian (and, many would argue, truly dystopian) net — as well as Russia’s disinformation net — appears victorious: it is growing while the West’s net is, by all measures, shrinking.

The Law of Truth and Falsity: Beginning in France and Singapore, “fake news” laws were instituted to outlaw the telling and spreading of lies online. As these truth laws spread to other countries, online speakers and the platforms that carried their speech became liable for criminal and civil fines under newly enhanced libel laws. Public internet courts in some nations — as well as Facebook’s Oversight Board, in essence a private internet court — were established to rule on disputes over content takedowns. The original idea was to bring decisions about content or speech — for example, violations of laws regarding copyright and hate speech — out into the open, where they could be adjudicated with due process and where legal norms could be negotiated in public. It was not long before the remits of these courts were expanded to rule on truth and falsity in online claims. In nation after nation, a new breed of internet judges resisted this yoke, but higher courts forced them to take on the task. In case law, the burden of proof has increasingly fallen on the speaker, for demonstrating falsity is, by definition, proving a negative. Thus, for all practical effect, when a complaint is filed, the speaker is presumed guilty until proven innocent — or truthful. Attempts to argue the unconstitutionality of this doctrine even in the United States proved futile once the internet was ruled to be a medium, subject to regulation like the medium of broadcast. Though broadcast itself (radio and television towers and signals using public “airwaves”) is now obsolete and gone, the regulatory regime that oversaw it in Europe — and that excepted it from the First Amendment in America — now carries over to the net.

Once internet courts were forced to rule on illegal speech and falsity, it was not a big step to also require them to rule on matters of political neutrality under laws requiring platforms to be symmetrical in content takedowns (no matter how asymmetrical disinformation and hate might be). And once that was done, the courts were expanded further to rule on such matters as data and information ownership, unauthorized sharing, and the developing field of excessive expression (below). In a few nations, especially those that are more authoritarian and lacking in irony, separate truth and hate courts have been established.

The Doctrine of Excessive Expression: In reading the assigned, archived Twitter threads, Medium posts, academic papers, and podcast transcripts from the late twenty-teens, we see the first stirrings of a then- (but no longer) controversial question: Is there too much speech? In 2018, one communications academic wrote a paper questioning the then-accepted idea that the best answer to bad speech is more speech, even arguing that so-called “fake news” and the since-debunked notion of the filter bubble (see the readings by Axel Bruns) put into play the sanctity of the First Amendment. At the same time, a well-respected professor asked whether the First Amendment was — this is his word — obsolete. As we have discussed in class, even to raise that question a generation before the internet and its backlash would have been unthinkable. Also in 2018, one academic wrote a book contending that Facebook’s goal of connecting the world (said founder Mark Zuckerberg at the time: “We believe the more people who have the power to express themselves, the more progress our society makes together”) was fundamentally flawed, even corrupt; what does that say about our expectations of democracy and inclusion, let alone freedom of speech? The following year, a prominent newspaper columnist essentially told fellow journalists to abandon Twitter because it was toxic — while others argued that in doing so, journalists would be turning their back on voices enabled and empowered by Twitter through the relatively recent invention of the hashtag.

None of these doctrines of the post-technology dysto age has been contested more vigorously than this, the Doctrine of Excessive Expression (also known as the Law of Over-Sharing). But the forces of free expression largely lost when the American Section 230 and the European E-Commerce Directive were each repealed, thus making intermediaries in the public conversation — platforms as well as publishers and anyone playing host to others’ creativity, comment, or conversation — liable for everything said on their domains. As a result, countless news sites shut down fora and comments, grateful for the excuse to get out of the business of moderation and interaction with the public. Platforms that depended on interactivity — chief among them Twitter and the various divisions of the former Facebook — at first hired tens of thousands of moderators and empowered algorithms to hide any questionable speech, but this proved ineffective as the chattering public in the West learned lessons from Chinese users and invented coded languages, references, and memes to still say what they wanted, even and especially if hateful. As a result, the social platforms forced users to indemnify them against damages, which led not only to another new industry in speech insurance but also to the requirement that all users verify their identities. Prior to this, many believed that eliminating anonymity would all but eliminate trolling and hateful speech online. As we now know, they were wrong. Hate abounds. The combination of the doctrines of privacy, data ownership, and expression respect anonymity for the subjects of speech but not for the speakers, who are at risk for any uttered and outlawed thought. “Be careful what you say” is the watchword of every media literacy course taught today.

One result of the drive against unfettered freedom of expression has been the return of the power of the gatekeeper, long wished for and welcomed by the gatekeepers themselves — newspaper, magazine, and book editors as well as authors of old — who believed their authority would be reestablished. But the effect was not what they’d imagined. Resentment against these gatekeepers by those who once again found themselves outside the gates of media only increased as trust in media continued to plummet and, as I said previously, the business prospects of news and other legacy media only darkened yet further.

Wide impact

The impact of the dystos’ victory can be seen in almost every sector of society.

In business, smaller is now better as companies worry about becoming “too big” (never knowing the definition of “too”) and being broken up. As a result, the merger and acquisition market, especially in tech, has diminished severely. With fewer opportunities for exit, there is less appetite for investment in new ventures, at least in America and Europe. In what is being called the data dark ages in business, executives in many fields — especially in marketing — are driving blind, making advertising, product, and strategic decisions without the copious data they once had, which many blame for the falling value of much of the consumer sector of the economy. After a decade and a half of trade and border wars of the Donald/Ivanka Trump Time [Hey, I said it’s a dystopia -Ed.], it would be simplistic to blame history’s longest recession on a lack of data, but it certainly was a contributing factor to the state of the stock market. Business schools have widely abandoned teaching “change management” and are shifting to teaching “stability management.” One sector of business known in the past for rolling with punches and finding opportunity in adversity — pornography — has hit a historic slump thanks to data and privacy laws. One might have expected an era of privacy to be a boon for porn, but identity and adult verification laws have put a chill on demand. Other businesses to suffer are those offering consumers analysis of their and even their pets’ DNA and help with genealogy (in some nations, courts have held that the dead have a right to privacy and others have ruled in favor of pets’ privacy). But as is always the case in business, what is a loss for one industry is an opportunity for another to exploit; witness the explosion not only in Technology Impact Report Optimization but also in a new growth industry for fact-certifiers, speech insurers, and photo blurrers.

In culture, the dystos long since won the day. The streaming series Black Mirror has been credited by dystos and blamed by technos for teaching the public to expect doom with every technology. It is worth noting that in my Black Mirror Criticism class last semester, we were shown optimistic films about technology such as You’ve Got Mail and Tomorrowland to disbelieving hoots from students. We were told that many generations before, dystopian films such as Reefer Madness — meant to frighten youth about the perils of marijuana — inspired similar derision by the younger generation, just as it still would today. It is fascinating to see how optimism and pessimism can, by turns, be taken seriously or mocked in different times.

I also believe we have seen the resurgence of narrative over data in media. In the early days of machine learning and artificial intelligence — before they, along with the data that fed them, also became scare words — it was becoming clear that dependence on story and narrative and the theory of mind were being superseded by the power of data to predict human actions. But when data-based artificial intelligence and machine learning predicted human actions, they provided no explanation, no motive, no assuring arc of a story. This led, some argued, to a crisis of cognition, a fear that humans would be robbed of purpose by data, just as the activities of the universe were robbed of divine purpose by Newtonian science and the divine will of creation was foiled by Darwin and evolution. So it was that a cultural version of the regulatory Unholy Compromise developed between religious conservatives, who feared that data would deny God His will, and cultural liberals, who feared that data would deny them their own will. So in cultural products just as in news stories and political speeches, data and its fruits came to be portrayed as objects of fear and resentment and the uplifting story of latter-day, triumphal humanism rose again. This has delighted the storytellers of journalism, fiction, drama, and comedy, who feed on a market for attention. Again, it has done little to reverse the business impact of abundance on their industries. Even with YouTube gone, there is still more than enough competition in media to drive prices toward zero.

The news industry, as I’ve alluded to above, deserves much credit or blame for the dysto movement and its results, having lobbied against internet companies and their collection of data and for protectionist legislation and regulation. But the story has not turned out in their favor, as they had hoped. Cutting off the collection of data affected news companies also. Without the ability to use data to target advertising, that revenue stream imploded, forcing even the last holdouts in the industry to retreat behind paywalls. But without the ability to use data to personalize their services, news media returned to their mass-media, one-size-fits-all, bland roots, which has not been conducive to grabbing subscription market share in what turns out to be a very small market of people willing to pay for news or content overall. The one bright spot in the industry is the fact that the platforms are licensing content as their only way to deal with paywalls. Thus these news outlets that fought the platforms are dependent on the platforms for their most reliable source of revenue. Be careful what you wish for.

In education, the rush to require the teaching of a media literacy curriculum at every level of schooling led to unforeseen consequences. Well, actually, the consequences were not unforeseen by internet researcher danah boyd, who argued in our readings from the 2000-teens that teachers and parents were succeeding all too well at instructing young people to distrust everything they heard and read. This and the universal distrust of others engendered by media and politicians in the same era were symptoms of what boyd called an epistemological war — that is: ‘If I don’t like you, I won’t like your facts.’ The elderly and retired journalists who spoke to our class still believe that facts alone, coming from trusted sources, would put an end to the nation’s internal wars. Back in the early days of the net, it seemed as if we were leaving an age overseen by gatekeepers controlling the scarcities of attention and information and returning to a pre-mass-media era built on the value of conversation and relationships. As Nunberg put it in 1996, just as the web was born: “One of the most pervasive features of these media is how closely they seem to reproduce the conditions of discourse of the late seventeenth and eighteenth centuries, when the sense of the public was mediated through a series of transitive personal relationships — the friends of one’s friends, and so on — and anchored in the immediate connections of clubs, coffee-houses, salons, and the rest.” Society looked as if it would trade trust in institutions for trust in family, friends, and neighbors via the net. Instead, we came to distrust everyone, as we were taught to. Now we have neither healthy institutions nor the means to connect with people in healthy relationships. The dystos are, indeed, victorious.

[If I may be permitted an unorthodox personal note in a paper: Professor, I am grateful that you had the courage to share optimistic as well as pessimistic readings with us and gave us the respect and trust to decide for ourselves. I am sorry this proved to be controversial and I am devastated that you lost your quest for tenure. In any case, thank you for a challenging class. I wish you luck in your new career in the TIRO industry.]

[This paper will get an A. -Ed.]

News Publishers Go To War With the Internet — and We All Lose

Around the world, news industry trade associations are corruptly cashing in their political capital — which they have because their members are newspapers, and politicians are scared of them — in desperate acts of protectionism to attack platform companies. The result is a raft of legislation that will damage the internet and in the end hurt everyone, including journalists and especially citizens.

As I was sitting in the airport leaving Newsgeist Europe, a convening for journalists and publishers [disclosure: Google pays for the venue, food, and considerable drink; participants pay their own travel], my Twitter feed lit up like the Macy’s fireworks as The New York Times reported — or rather, all but photocopied — a press release from the News Media Alliance (née Newspaper Association of America) contending that Google makes $4.7 billion a year from news, at the expense of news publishers.

Bullshit.

The Times story itself is appalling as it swallowed the News Media Alliance’s PR whole, quoting people from the association and not including comment from Google until hours later. Many on Twitter were aghast at the poor journalism. I contacted Google PR, who said The Times did not reach out to the person who normally speaks on these matters or anyone in the company’s Washington office. Google sent me their statement:

These back of the envelope calculations are inaccurate as a number of experts are pointing out. The overwhelming number of news queries do not show ads. The study ignores the value Google provides. Every month Google News and Google Search drives over 10 billion clicks to publishers’ websites, which drive subscriptions and significant ad revenue. We’ve worked very hard to be a collaborative and supportive technology and advertising partner to news publishers worldwide.

The “study” upon which The Times (and others) relied is, to say the least, specious. No, it’s humiliating. I want to dispatch its fallacies quickly — to get to my larger point, about the danger legacy news publishers are posing to the future of news and the internet — and that won’t be hard. The study collapses in its second paragraph:

Google has emerged as a major gateway for consumers to access news. In 2011, Google Search combined with Google News accounted for the majority (approximately 75%) of referral traffic to top news sites. Since January 2017, traffic from Google Search to news publisher sites has risen by more than 25% to approximately 1.6 billion visits per week in January 2018. Corresponding with consumers’ shift towards Google for news consumption, news is becoming increasingly important to Google, as demonstrated by an increase in Google searches about news.

And that, ladies and gentlemen, is great news for news. For as anyone under the age of 99 understands, Google sends readers to sites based on links from search and other products. That Google is emphasizing news and currency more is good for publishers, as that sends them readers. (That 10-billion-click number Google cited above is eight years old and so I have little doubt it is much higher now thanks to all its efforts around news.)

The problem has long been that publishers aren’t competent at exploiting the full value of these clicks by creating meaningful and valuable ongoing relationships with the people sent their way. So what does Google do? It tries to help publishers by, for example, starting a subscription service that drives more readers to easily subscribe — and join and contribute — to news sites directly from Google pages. The NMA study cites that subscription service as an example of Google emphasizing news and by implication exploiting publishers. It is the opposite. Google started the subscription service because publishers begged for it — I was in the room when they did — and Google listened. The same goes for most every product change the study lists in which Google emphasizes news more. That helps publishers. The study then uses ridiculously limited data (including, crucially, an offhand and often disputed remark 10 years ago by a then-exec at Google about the conceptual value of news) to make leaps over logic to argue that news is important on its services and thus Google owes news publishers a cut of its revenue (which Google gains by offering publishers’ former customers, advertisers, a better deal; it’s called competition). By this logic, Instagram should be buying cat food for every kitty in the land and Reddit owes a fortune to conspiracy theorists.

The real problem here is news publishers’ dogged refusal to understand how the internet has changed their world, throwing the paradigm they understood into the grinder. In the US and Europe, they still contend that Google is taking their “content,” as if quoting and linking to their sites is like a camera stealing their soul. They cannot grok that value on the internet is concentrated not in a product or property called content — articles, headlines, snippets, thumbnails, words — but instead in relationships. Journalism is no longer a factory valued by how many widgets and words it produces but instead by how much it accomplishes for people in their lives. I have tried here and here and in many a meeting in newsrooms and journalism conferences to offer this advice to news publishers — with tangible ideas about how to build a new journalistic business around relationships — but most prove incapable of shifting mindset and strategy beyond valuing content for content’s sake. Editors who do understand are often stymied by their short-sighted publishers and KPIs and soon quit.

Most legacy publishers have come up with no sustainable business strategy for a changing world. So they try to stop the world from changing by unleashing their trade associations [read: lobbyists] on capitals from Brussels to Berlin to London to Melbourne to Washington (see: the NMA’s effort to get an antitrust exemption to go after the platforms; its study was prepared to hand to Congress in time for its hearings this week). These trade associations attack the platforms without ever acknowledging the fault of their own members in our current polarization in society. (Yes, I’m talking about, for example, Fox News and other Murdoch properties, dues-paying members of many a trade association. By the silence of journalism and its trade associations in not criticizing their worst, we endorse it.)

The efforts of lobbyists for my industry are causing irreparable harm to the internet. No, Google, Facebook, and Twitter are not the internet, but what is done to them is done to the net. And what’s been done includes horrendous new copyright legislation in the EU that tries to force Google et al to negotiate payment for quoting snippets of content to which they link. Google won’t; it would be a fool to. So I worry that platforms will link to news less and less, resulting in self-inflicted harm for the news industry and journalists but, more important, hurting the public conversation at exactly the wrong moment. Thanks, publishers. At Newsgeist Europe, I sat in a room filled with journalists terribly worried about the impact of the EU’s copyright directive on their work and their business, but I have to say they have no one but their own publishers and lobbyists to blame.

I am tempted to say that I am ashamed of my own industry. But I won’t for two reasons: First, I want to believe that the industry’s lobbyists do not speak for journalists themselves — but I damned well better start hearing the protests of journalists to what their companies are doing. (That includes journalists on the NMA board.) Second, I am coming to see that I’m not part of the media industry but instead that we are all part of something larger, which we now see as the internet. (I’ll be writing more about this idea later.) That means we have a responsibility to criticize and help improve both technology and news companies. What I see instead is too many journalists stirring up moral panic about the internet and its current (by no means permanent) platforms, serving — inadvertently or not — the protectionist strategies of their own bosses, without examining media’s culpability in many of the sins they attribute to technology. (I wish I could discuss this with The New York Times’ ombudsman or any ombudsman in our field, but we know what happened to them.)

My point: We’re in this together. That is why I go to events put on by both the technology and news industries, why I try to help both, why I criticize both, why I try to help build bridges between them. It’s why I am devoting time and effort to my least favorite subject: internet regulation. It is why I am so exasperated at leaders in my own industry for their failure to recognize, adapt to, and exploit the change they try to deny. It’s why I’m disappointed in my own industry for not criticizing itself. Getting politicians who are almost all painfully ignorant about technology to try to define, limit, and regulate that technology and what we can do with it is the last thing we should do. It is irresponsible and dangerous of my industry to try.

Regulating the net is regulating us

Here are three intertwined posts in one: a report from inside a workshop on Facebook’s Oversight Board; a follow-up on the working group on net regulation I’m part of; and a brief book report on Jeff Kosseff’s new and very good biography of Section 230, The Twenty-Six Words That Created the Internet.

Facebook’s Oversight Board

Last week, I was invited — with about 40 others from law, media, civil society, and academe — to one of a half-dozen workshops Facebook is holding globally to grapple with the thicket of thorny questions associated with the external oversight board Mark Zuckerberg promised.

(Disclosures: I raised money for my school from Facebook. We are independent and I receive no compensation personally from any platform. The workshop was held under Chatham House rule. I declined to sign an NDA and none was then required, but details about two real case studies were off the record.)

You may judge the oversight board as you like: as an earnest attempt to bring order and due process to Facebook’s moderation; as an effort by Facebook to slough off its responsibility onto outsiders; as a PR stunt. Through the two-day workshop, the group kept trying to find an analog for Facebook’s vision of this: Is it an appeals court, a small-claims court, a policy-setting legislature, an advisory council? Facebook said the board will have final say on content moderation appeals regarding Facebook and Instagram and will advise on policy. It’s two mints in one.

The devil is in the details. Who is appointed to the board and how? How diverse and by what definitions of diversity are the members of the board selected? Who brings cases to the board (Facebook? people whose content was taken down? people who complained about content? board members?)? How does the board decide what cases to hear? Does the board enforce Facebook policy or can it countermand it? How much access to data about cases and usage will the board have? How much authority will the board have to bring in experts and researchers and what access to data will they have? How does the board scale its decision-making when Facebook receives 3 million reports against content a day? How is consistency found among the decisions of three-member panels in the 40ish-member board? How can a single board in a single global company be consistent across a universe of cultural differences and sensitive to them? As is Facebook’s habit, the event was tightly scheduled with presentations and case studies and so — at least before I had to leave on day two — there was less open debate of these fascinating questions than I’d have liked.
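One of those scale questions can at least be made concrete with arithmetic. Here is a rough back-of-envelope sketch (my own, not Facebook's, using only the figures quoted above) showing why a board of this size can only ever hear a tiny, precedent-setting fraction of cases:

```python
# Back-of-envelope: why a ~40-member board cannot hear appeals at scale.
# The report volume and board figures come from the paragraph above;
# everything else is plain arithmetic.

reports_per_day = 3_000_000
board_members = 40
panel_size = 3

concurrent_panels = board_members // panel_size           # ~13 panels
reports_per_panel = reports_per_day / concurrent_panels   # ~230,000 per panel per day

print(f"{concurrent_panels} panels, ~{reports_per_panel:,.0f} reports each per day")
# Even deciding one case per panel per minute, around the clock, the board
# would touch well under 1% of a single day's reports; its real job, then,
# must be setting precedent, not clearing the queue.
```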

Facebook starts with its 40 pages of community standards, updated about every two weeks, which are in essence its statutes. I recommend you look through them. They are thoughtful and detailed. For example:

A hate organization is defined as: Any association of three or more people that is organized under a name, sign or symbol and that has an ideology, statements or physical actions that attack individuals based on characteristics, including race, religious affiliation, nationality, ethnicity, gender, sex, sexual orientation, serious disease or disability.
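To make the statute-like character of these standards concrete, here is a minimal sketch (my own illustration, not Facebook's actual tooling, with invented field names) that encodes the definition above as a literal predicate:

```python
from dataclasses import dataclass

# Hypothetical encoding of the community-standards definition quoted above.
# All names here are my own invention, not Facebook's schema.

PROTECTED_CHARACTERISTICS = {
    "race", "religious affiliation", "nationality", "ethnicity",
    "gender", "sex", "sexual orientation", "serious disease", "disability",
}

@dataclass
class Group:
    member_count: int                    # "three or more people"
    has_identifier: bool                 # "organized under a name, sign or symbol"
    attacked_characteristics: set[str]   # basis of attacks in ideology, statements, or actions

def is_hate_organization(group: Group) -> bool:
    """A mechanical reading of the rule: every clause must hold."""
    return (
        group.member_count >= 3
        and group.has_identifier
        and bool(group.attacked_characteristics & PROTECTED_CHARACTERISTICS)
    )
```

Even this toy version shows where the hard work hides: every input conceals a judgment call (what counts as a "symbol," an "attack," an "ideology"?) that the rule itself cannot resolve.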

At the workshop, we heard how a policy team sets these rules, how product teams create the tools around them, and how operations — with people in 20 offices around the world, working 24/7, in 50 languages — are trained to enforce them.

But rules — no matter how detailed — are proving insufficient to douse the fires around Facebook. Witness the case, only days after the workshop, of the manipulated Nancy Pelosi video and subsequent cries for Facebook to take it down. I was amazed that so many smart people thought it was an easy matter for Facebook to take down the video because it was false, without acknowledging the precedent that would set requiring Facebook henceforth to rule on the truth of everything everyone says on its platform — something no one should want. Facebook VP for Product Policy and Counterterrorism Monika Bickert (FYI: I interviewed her at a Facebook safety event the week before) said the company demoted the video in News Feed and added a warning to the video. But that wasn’t enough for those out for Facebook’s hide. Here’s a member of the UK Parliament (who was responsible for the Commons report on the net I criticized here):

So by Collins’ standard, if UK politicians in his own party claim as a matter of malicious political disinformation that the country pays £350m per week to the EU that would be freed up for the National Health Service with Brexit and that’s certified by journalists to be “willful distortion,” should Facebook be required to take that statement down? Just asking. It’s not hard to see where this notion of banning falsity goes off the rails and has a deleterious impact on freedom of expression and political discussion.

But politicians want to take bites out of Facebook’s butt. They want to blame Facebook for the ill-informed state of political debate. They want to ignore their own culpability. They want to blame technology and technology companies for what people — citizens — are doing.

Ditto media. Here’s Kara Swisher tearing off her bit of Facebook flesh regarding the Pelosi video: “Would a broadcast network air this? Never. Would a newspaper publish it? Not without serious repercussions. Would a marketing campaign like this ever pass muster? False advertising.”

Sigh. The internet is not media. Facebook is not news (only 4% of what appears there is). What you see there is not content. It is conversation. The internet and Facebook are means for the vast majority of citizenry forever locked out of media and of politics to discuss whatever they want, whether you like it or not. Those who want to control that conversation are the privileged and powerful who resent competition from new voices.

By the way, media people: Beware what you wish for when you declare that platforms are media and that they must do this or that, for your wishes could blow back on you and open the door for governments and others to demand that media also erase that which someone declares to be false.

Facebook’s oversight board is trying to mollify its critics — and forestall regulation of it — by meeting their demands to regulate content. Therein lies its weakness, I think: regulating content.

Regulating Actors, Behaviors, or Content

A week before the Facebook workshop, I attended a second meeting of a Transatlantic High Level Working Group on Content Moderation and Freedom of Expression (read: regulation), which I wrote about earlier. At the first meeting, we looked at separating treatment of undesirable content (dealt with under community standards such as Facebook’s) from illegal content (which should be the purview of government and of an internet court; details on that proposal here.)

At this second meeting, one of the brilliant members of the group (held under Chatham House, so I can’t say who) proposed a fundamental shift in how to look at efforts to regulate the internet, proposing an ABC rule separating actors from behaviors from content. (Here’s another take on the latest meeting from a participant.)

It took me time to understand this, but it became clear in our discussion that regulating content is a dangerous path. First, making content illegal is making speech illegal. As long as we have a First Amendment and a Section 230 (more on that below) in the United States, that is a fraught notion. In the UK, the government recently released an Online Harms White Paper that demonstrates just how dangerous the idea of regulating content can be. The white paper wants to require — under pain of huge financial penalty for companies and executives — that platforms exercise a duty of care to take down “threats to our way of life” that include not only illegal and harmful content (child porn, terrorism) but also legal and harmful content (including trolling [please define] and disinformation [see above]). Can’t they see that government requiring the takedown of legal content makes it illegal? Can’t they see that by not defining harmful content, they put a chill on all speech? For an excellent takedown of the report, see this post by Graham Smith, who says that what the white paper proposes is impossibly vague. He writes:

‘Harm’ as such has no identifiable boundaries, at least none that would pass a legislative certainty test.

This is particularly evident in the White Paper’s discussion of Disinformation. In the context of anti-vaccination the White Paper notes that “Inaccurate information, regardless of intent, can be harmful”.

Having equated inaccuracy with harm, the White Paper contradictorily claims that the regulator and its online intermediary proxies can protect users from harm without policing truth or accuracy…

See: This is the problem when you try to identify, regulate, and eliminate bad content. Smith concludes: “This is a mechanism for control of individual speech such as would not be contemplated offline and is fundamentally unsuited to what individuals do and say online.” Never mind the common analogy to regulation of broadcast. Would we ever suffer such talk about regulating the contents of bookstores or newspapers or — more to the point — conversations in the corner bar?

What becomes clear is that these regulatory methods — private (at Facebook) and public (in the UK and across Europe) — are aimed not at content but ultimately at behavior, only they don’t say so. It is nearly impossible to judge content in isolation. For example, my liberal world is screaming about the slow-Pelosi video. But then what about this video from three years ago?

What makes one abhorrent and one funny? The eye of the beholder? The intent of the creator? Both. Thus content can’t be judged on its own. Context matters. Motive matters. But who is to judge intent and impact and how?

The problem is that politicians and media do not like certain behavior by certain citizens. They cannot figure out how to regulate it at scale (and would prefer not to make the often unpopular decisions required), so they assign the task to intermediaries — platforms. Pols also cannot figure out how to define the bad behavior they want to forbid, so they decide instead to turn an act into a thing — content — and outlaw that under vague rules they expect intermediaries to enforce … or else.

The intermediaries, in turn, cannot figure out how to take this task on at scale and without risk. In an excellent Harvard Law Review paper called The New Governors: The People, Rules, and Processes Governing Online Speech, legal scholar Kate Klonick explains that the platforms began by setting standards. Facebook’s early content moderation guide was a page long, “so it was things like Hitler and naked people,” says early Facebook community exec Dave Willner. Charlotte Willner, who worked in customer service then (they’re now married), said moderators were told “if it makes you feel bad in your gut, then go ahead and take it down.” But standards — or statements of values — don’t scale as they are “often vague and open ended” and can be “subject to arbitrary and/or prejudiced enforcement.” And algos don’t grok values. So the platforms had to shift from standards to rules. “Rules are comparatively cheap and easy to enforce,” says Klonick, “but they can be over- and underinclusive and, thus, can lead to unfair results. Rules permit little discretion and in this sense limit the whims of decisionmakers, but they also can contain gaps and conflicts, creating complexity and litigation.” That’s where we are today. Thus Facebook’s systems, algorithmic and human, followed its rules when they came across the historic photo of a child in a napalm attack. Child? Check. Naked? Check. At risk? Check. Take it down. The rules and the systems of enforcement could not cope with the idea that what was indecent in that photo was the napalm.
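The napalm-photo failure is easy to reproduce in miniature. Here is an illustrative sketch (invented flag names, nobody's real pipeline) of a rules-based pass that, following its rules exactly, removes the historic photo because no rule ever consults context:

```python
# Illustrative only: a rules-based moderation pass in the spirit of
# Klonick's rules-vs-standards distinction. Flag names are invented.

def rule_based_decision(post: dict) -> str:
    """Apply cheap, literal rules: no context, no intent, no history."""
    if post.get("depicts_child") and post.get("depicts_nudity"):
        return "take down"    # child + naked -> remove, no exceptions
    if post.get("depicts_graphic_violence"):
        return "take down"
    return "leave up"

napalm_photo = {
    "depicts_child": True,
    "depicts_nudity": True,
    "depicts_graphic_violence": True,
    "historic_significance": True,   # the rules never consult this field
}

print(rule_based_decision(napalm_photo))  # -> "take down"
```

The one field the rules never read, historic significance, is exactly the context a standard ("if it makes you feel bad in your gut") would have weighed, and the gap that oversight boards and internet courts are meant to fill.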

Thus the platforms found their rule-led moderators and especially their algorithms needed nuance. Thus the proposal for Facebook’s Oversight Board. Thus the proposal for internet courts. These are attempts to bring human judgment back into the process. They attempt to bring back the context that standards provide over rules. As they do their work, I predict these boards and courts will inevitably shift from debating the acceptability of speech to trying to discern the intent of speakers and the impact on listeners. They won’t be regulating a thing: content. They will be regulating the behavior of actors: us.

There are additional weaknesses to the rules-based, content-based approach. One is that community standards are rarely set by the communities themselves; they are imposed on communities by companies. How could it be otherwise? I remember long ago that Zuckerberg proposed creating a crowdsourced constitution for Facebook but that quickly proved unwieldy. I still wonder whether there are creative ways to get intentional and explicit judgments from communities as to what is and isn’t acceptable for them — if not in a global service, then user-by-user or community-by-community. A second weakness of the community standards approach is that these rules bind users but not platforms. I argued in a prior post that platforms should create two-way covenants with their communities, making assurances of what the company will deliver so it can be held accountable.

Earlier this month, the French government proposed an admirably sensible scheme for regulation that tries to address a few of those issues. French authorities spent months embedded in Facebook in a role-playing exercise to understand how they could regulate the platform. I met a regulator in charge of this effort and was impressed with his nuanced, sensible, smart, and calm sense of the task. The proposal does not want to regulate content directly — as the Germans do with their hate speech law, called NetzDG, and as the Brits propose to do going after online harms.

Instead, the French want to hold the platforms accountable for enforcing the standards and promises they set: say what you do, do what you say. That enables each platform and community to have its own appropriate standards (Reddit ain’t Facebook). It motivates platforms to work with their users to set standards. It enables government and civil society to consult on how standards are set. It requires platforms to provide data about their performance and impact to regulators as well as researchers. And it holds companies accountable for whether they do what they say they will do. It enables the platforms to still self-regulate and brings credibility through transparency to those efforts. Though simpler than other schemes, this is still complex, as the world’s most complicated PowerPoint slide illustrates.

I disagree with some of what the French argue. They call the platforms media (see my argument above). They also want to regulate only the three to five largest social platforms — Facebook, YouTube, Twitter — because they have greater impact (and because that’s easier for the regulators). Except as soon as certain groups are shooed out of those big platforms, they will dig into small platforms, feeling marginalized and perhaps radicalized, and do their damage from there. The French think some of those sites are toxic and can’t be regulated.

All of these efforts — Facebook’s oversight board, the French regulator, any proposed internet court — need to be undertaken with a clear understanding of the complexity, size, and speed of the task. I do not buy cynical arguments that social platforms want terrorism and hate speech kept up because they make money on it; bull. In Facebook’s workshop and in discussions with people at various of the platforms, I’ve gained respect for the difficulty of their work and the sincerity of their efforts. I recommend Klonick’s paper as she attempts to start with an understanding of what these companies do, arguing that

platforms have created a voluntary system of self-regulation because they are economically motivated to create a hospitable environment for their users in order to incentivize engagement. This regulation involves both reflecting the norms of their users around speech as well as keeping as much speech as possible. Online platforms also self-regulate for reasons of social and corporate responsibility, which in turn reflect free speech norms.

She quotes Lawrence Lessig predicting that a “code of cyberspace, defining the freedoms and controls of cyberspace, will be built. About that there can be no doubt. But by whom, and with what values? That is the only choice we have left to make.”

And we’re not done making it. I think we will end up with a many-tiered approach, including:

  1. Community standards that govern matters of acceptable and unacceptable behavior. I hope they are made with more community input.
  2. Platform covenants that make warranties to users, the public, and government about what they will endeavor to deliver in a safe and hospitable environment, protecting users’ human rights.
  3. Algorithmic means of identifying potentially violating behavior at scale.
  4. Human appeals that operate like small claims courts.
  5. High-level oversight boards that rule and advise on policy.
  6. Regulators that hold companies accountable for the guarantees they make.
  7. National internet courts that rule on questions of legality in takedowns in public, with due process. Companies should not be forced to judge legality.
  8. Legacy courts to deal with matters of illegal behavior. Note that platforms often judge a complaint first against their terms of service and issue a takedown before reaching questions about illegality, meaning that the miscreants who engage in that illegal behavior are not reported to authorities. I expect that governments will complain platforms aren’t doing enough of their policing — and that platforms will complain that’s government’s job.

Numbers 1–5 occur on the private, company side; the rest must be the work of government. Klonick calls the platforms “the New Governors,” explaining that

online speech platforms sit between the state and speakers and publishers. They have the role of empowering both individual speakers and publishers … and their transnational private infrastructure tempers the power of the state to censor. These New Governors have profoundly equalized access to speech publication, centralized decentralized communities, opened vast new resources of communal knowledge, and created infinite ways to spread culture. Digital speech has created a global democratic culture, and the New Governors are the architects of the governance structure that runs it.

What we are seeking is a structure of checks and balances. We need to protect the human rights of citizens to speak and to be shielded from such behaviors as harassment, threat, and malign manipulation (whether by political or economic actors). We need to govern the power of the New Governors. We also need to protect the platforms from government censorship and legal harassment. That’s why we in America have Section 230.

Section 230 and ‘The Twenty-Six Words that Created the Internet’

We are having this debate at all because we have the “online speech platforms,” as Klonick calls them — and we have those platforms thanks to the protection given to technology companies as well as others (including old-fashioned publishers that go online) by Section 230, a law written by Oregon Sen. Ron Wyden (D) and former California Rep. Chris Cox (R) and passed as part of the 1996 telecommunications reform. Jeff Kosseff wrote an excellent biography of the law that pays tribute to these 26 words in it:

No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.

Those words give online companies safe harbor from legal liability for what other people say on their sites and services. Without that protection, online site operators would have been motivated to cut off discussion and creativity by the public. Without 230, I doubt we would have Facebook, Twitter, Wikipedia, YouTube, Reddit, news comment sections, blog platforms, even blog comments. “The internet,” Kosseff writes, “would be little more than an electronic version of a traditional newspaper or TV station, with all the words, pictures, and videos provided by a company and little interaction among users.” Media might wish for that. I don’t.

In Wyden’s view, the 26 words give online companies not only this shield but also a sword: the power and freedom to moderate conversation on their sites and platforms. Before Section 230, a Prodigy case held that if an online proprietor moderated conversation and failed to catch something bad, the operator would be more liable than if it had not moderated at all. Section 230 reversed that so that online companies would be free to moderate without moderating perfectly — a necessity to encourage moderation at scale. Lately, Wyden has pushed the platforms to use their sword more.

In the debate on 230 on the House floor, Cox said his law “will establish as the policy of the United States that we do not wish to have content regulation by the Federal Government of what is on the internet, that we do not wish to have a Federal Computer Commission with an army of bureaucrats regulating the internet….”

In his book, Kosseff takes us through the prehistory of 230 and why it was necessary, then the case law of how 230 has been tested again and again and, so far, survived.

But Section 230 is at risk from many quarters. From the far right, we hear Trump and his cultists whine that they are being discriminated against because their hateful disinformation (see: Infowars) is being taken down. From the left, we see liberals and media gang up on the platforms in a fit of what I see as moral panic to blame them for every ill in the public conversation (ignoring politicians’ and media’s fault). Thus they call for regulating and breaking up technology companies. In Europe, countries are holding the platforms — and their executives and potentially even their technologists — liable for what the public does through their technology. In other nations — China, Iran, Russia — governments are directly controlling the public conversation.

So Section 230 stands alone. It has suffered one slice in the form of the FOSTA/SESTA ban on online sex trafficking. In a visit to the Senate with the regulation working group I wrote about above, I heard a staffer warn that there could be further carve-outs regarding opioids, bullying, political extremism, and more. Meanwhile, the platforms themselves didn’t have the guts to testify in defense of 230 and against FOSTA/SESTA (who wants to seem to be on the other side of banning sex trafficking?). If these companies will not defend the internet, who will? No, Facebook and Google are not the internet. But what you do to them, you do to the net.

I worry for the future of the net and thus of the public conversation it enables. That is why I take so seriously the issues I outline above. If Section 230 is crippled; if the UK succeeds in demanding that Facebook ban undefined harmful but legal content; if Europe’s right to be forgotten expands; if France and Singapore lead to the spread of “fake news” laws that require platforms to adjudicate truth; if the authoritarian net of China and Iran continues to spread to Russia, Turkey, Hungary, the Philippines, and beyond; if …

If protections of the public conversation on the net are killed, then the public conversation will suffer and voices who could never be heard in big, old media and in big, old, top-down institutions like politics will be silenced again, which is precisely what those who used to control the conversation want. We’re in early days, friends. After five centuries of the Gutenberg era, society is just starting to relearn how to hold a conversation with itself. We need time, through fits and starts, good times and bad, to figure that out. We need our freedom protected.

Without online speech platforms and their protection and freedom, I do not think we would have had #metoo or #blacklivesmatter or #livingwhileblack. Just to see one example of what hashtags as platforms have enabled, please watch this brilliant talk by Baratunde Thurston and worry about what we could be missing.

None of this is simple and so I distrust all the politicians and columnists who think they have simple solutions: Just make Facebook kill this or Twitter that or make them pay or break them up. That’s simplistic, naive, dangerous, and destructive. This is hard. Democracy is hard.

Europe Against the Net

I’ve spent a worrisome weekend reading three documents from Europe about regulating the net:

  • Articles 11 and 13 of the European Union’s proposed copyright directive;
  • the Cairncross Review, the UK’s report on the state of journalism; and
  • the report on disinformation from the UK Commons Digital, Culture, Media and Sport Committee.

In all this, I see danger for the net and its freedoms posed by corporate protectionism and a rising moral panic about technology. One at a time:

Articles 11 & 13: Protectionism gone mad

Article 11 is the so-called link tax, the bastard son of the German Leistungsschutzrecht, or ancillary copyright, that publishers tried to use to force Google to pay for snippets. They failed. They’re trying again. Julia Reda, a member of the European Parliament, details the dangers:

Reproducing more than “single words or very short extracts” of news stories will require a licence. That will likely cover many of the snippets commonly shown alongside links today in order to give you an idea of what they lead to….

No exceptions are made even for services run by individuals, small companies or non-profits, which probably includes any monetised blogs or websites.

European journalists protest that this will serve media corporations, not journalists. Absolutely.

But the danger to free speech, to the public conversation, and to facts and evidence is greater. Journalism and academia have long depended on the ability to quote — at length — source material in order to challenge or expand upon or explain it. This legislation begins to make versions of that act illegal. You’d have to pay a license to a news property to quote it. Never mind that 99.9 percent of journalism quotes others. The results: Links become blind alleys sending you to god-knows-what dark holes exploited by spammers and conspiracy theorists. News sites lose audience and impact (witness how a link tax forced Google News out of Spain). Even bloggers like me could be restricted from quoting others as I did above, killing the web’s magnificent ability to foster conversation with substance.

Why do this? Because publishers think they can use their clout to get legislators to bully the platforms into paying them for their “content,” refusing to come to grips with the fact that the real value now is in the audience the platforms send to the publishers. It is corporate protectionism born of political capital. It is corrupt and corrupting of the net. It is a crime.

Article 13 is roughly Europe’s version of the SOPA/PIPA fight in the U.S.: protectionism on behalf of entertainment media companies. It requires sites where users might post material — isn’t that every interactive site on the net? — to “preemptively buy licenses for anything that users may possibly upload,” in Reda’s explanation. They will also have to deploy upload filters — which are expensive to operate and notoriously full of false positives — to detect anything that is not licensed. The net result: Sites will not allow anyone to post any media that could possibly come from anywhere.
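
To see why such filters misfire, here is a deliberately naive sketch in Python — every fingerprint and name in it is hypothetical. An exact-hash matcher can recognize only byte-identical copies, so real filters must match approximately, and approximate matching is precisely where the false positives — the blocked quotation, the blocked parody, the blocked meme — come from.

```python
import hashlib

# Hypothetical fingerprints of licensed works. In reality these would come
# from rightsholder databases a site has paid to license.
LICENSED_FINGERPRINTS = {
    # SHA-256 of the stand-in "licensed work" b"test"
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def fingerprint(data: bytes) -> str:
    """Exact fingerprint: changing a single byte yields a different hash."""
    return hashlib.sha256(data).hexdigest()

def allow_upload(data: bytes) -> bool:
    """Article 13 logic, naively: block anything not matched to a license."""
    return fingerprint(data) in LICENSED_FINGERPRINTS

print(allow_upload(b"test"))   # True: exact match to a licensed work
print(allow_upload(b"test!"))  # False: one byte off, so it is blocked --
                               # even a licensed clip re-encoded, trimmed, or
                               # quoted fails an exact match, which is why real
                               # filters go fuzzy and sweep up fair uses
```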

So we won’t be able to quote or adapt. Death to the meme. Yes, there are exceptions for criticism, but, as Lawrence Lessig famously said, “fair use is the right to hire a lawyer.” This legislation attempts to kill what the net finally brought to society: diverse and open conversation.

Cairncross Review: Protecting journalism as it was

The UK dispatched Dame Frances Cairncross, a former journalist and economist, to review the imperiled state of news and she returned with a long and well-intentioned but out-of-date document. A number of observations:

  • She fails — along with many others — to define quality journalism. “Ultimately, ‘high quality journalism’ is a subjective concept that depends neither solely on the audience nor the news provider. It must be truthful and comprehensive and should ideally — but not necessarily — be edited. You know it when you see it….” (Just like porn, but porn’s easier.) Thus she cannot define the very thing her report strives to defend. A related frustration: she offers little criticism of the state of journalism or of the reasons trust in it is foundering, only noting its fall.
  • I worry greatly about her conclusion that “intervention may be needed to determine what, and how, news is presented online.” So you can’t define quality but you’re going to regulate how platforms present it? Oh, the platforms are trying to understand quality in news. (Disclosure: I’m working on just such a project, funded by but independent of Facebook.) But the solutions are not obvious. Cairncross wants the platforms to have an obligation “to nudge people towards reading news of high quality” and even to impose quotas for quality news on the platforms. Doesn’t that make the platforms the editors? Is that what editors really want? Elsewhere in the report, she argues that “this task is too important to leave entirely to the judgment of commercial entities.” But BBC aside, that is where the task of news lies today: in commercial entities. Bottom line: I worry about *any* government intervention in speech and especially in journalism.
  • She rightly focuses less on national publications and more on the loss of what she calls “public interest news,” which really means local reporting on government. Agreed. She also glances past the paradox that public-interest news “is often of limited interest to the public.” Well, then, I wish she had looked at the problem and opportunity from the perspective of what the net makes possible. Why not start with new standards to require radical transparency of government, making every piece of legislation, every report, every budget public? There have been pioneering projects in the UK to do just that. That would make the task of any journalist more efficient and it would enable collaborative effort by the community: citizens, librarians, teachers, classes…. She wants a government fund to pay for innovations in this arena. Fine, then be truly innovative. She further calls for the creation of an Institute for Public Interest News. Do we need another such organization? Journalism has so many.
  • She explores a VAT tax break for subscriptions to online publications. Sounds OK, but I worry that this would motivate more publications to put up paywalls, further reserving quality journalism for those who can afford it.
  • She often talked about “the unbalanced relationship between publishers and online platforms.” This assumes that there is some natural balance, some stasis that can be reestablished, as if history should be our only guide. No, life changed with the internet.
  • She recommends that the platforms be required to set out codes of conduct that would be overseen by a regulator “with powers to insist on compliance.” She wants the platforms to commit “not to index more than a certain amount of a publisher’s content without an explicit agreement.” First, robots.txt and such already put that in publishers’ control (see the sketch after this list). Second, Cairncross acknowledges that links from platforms are beneficial. She worries about — but does not define — too much linking. I see a slippery slope to Article 11 (above) and, really, so does Cairncross: “There are grounds for worrying that the implementation of Article 11 in the EU may backfire and restrict access to news.” In her code of conduct, platforms should not impose their ad platforms on publishers — but if publishers want revenue from the platforms, they pretty much have to use them. She wants platforms to give early warnings of changes in algorithms, but such warnings will be gamed by spammers. She wants transparency of advertising terms (what other industries negotiate in public?).
  • Cairncross complains that “most newspapers have lacked the skills and resources to make good use of data on their readers” and she wants the platforms to share user data with publishers. I agree heartily. This is why I worry that another European regulatory regime — GDPR — makes that nigh unto impossible.
  • She wants a study of the competitive landscape around advertising. Yes, fine. Note, though, that advertising is becoming less of a force in publishers’ business plans by the day.
  • Good news: She rejects direct state support for journalism because “the effect may be to undermine trust in the press still further, at a time when it needs rebuilding.” She won’t endorse throttling the BBC’s digital efforts just because commercial publishers resent the competition. She sees danger in giving the publishing industry an antitrust exception to negotiate with the platforms (as is also being proposed in the U.S.) because that would likely lead to higher prices. And she thinks government should help publishers adapt by “encouraging the development and distribution of new technologies and business models.” OK, but which publishers and which technologies and models? If we knew which ones would work, we’d already be using them.
  • Finally, I note a subtle paternalism in the report. “The stories people want to read may not always be the ones they ought to read in order to ensure that a democracy can hold its public servants properly to account.” Or the news people need in their lives might not be the news that news organizations are reporting. Also: Poor people — who would be cut off by paywalls — “are not just more likely to have lower levels of literacy than the better-off; their digital skills also tend to be lower.” Class distinctions never end.
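
On the indexing point above: a minimal sketch using Python’s standard library (the publisher’s domain, paths, and crawler name are all hypothetical) shows how robots.txt already lets a publisher dictate what a crawler may fetch — no regulator required.

```python
from urllib.robotparser import RobotFileParser

# A hypothetical publisher's robots.txt: allow headline pages, block full articles.
robots_txt = """\
User-agent: *
Allow: /headlines/
Disallow: /articles/
"""

rp = RobotFileParser()
rp.parse(robots_txt.splitlines())

# Any well-behaved crawler checks permission before fetching:
print(rp.can_fetch("ExampleBot", "https://publisher.example/headlines/today"))  # True
print(rp.can_fetch("ExampleBot", "https://publisher.example/articles/scoop"))   # False
```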

It’s not a bad report. It is cautious. But it’s also not visionary, not daring to imagine a new journalism for a new society. That is what is really needed.

The Commons report: Finding fault

The Digital, Culture, Media and Sport Committee is famously the body Mark Zuckerberg refused to testify before. And, boy, are they pissed. Most of this report is an indictment of Facebook on many sins, most notably Cambridge Analytica. For the purposes of this post, about possible regulation, I won’t indulge in further prosecuting or defending the case against Facebook (see my broader critique of the company’s culture here). What interests me in this case is the set of committee recommendations that could have an impact on the net, including our net outside of the UK.

The committee frets — properly — over the malicious disinformation behind Brexit. And where did much of the disinformation that led to that disaster come from? From politicians: Nigel Farage, Boris Johnson, et al. This committee, headed by a Conservative, makes no mention of those colleagues. As with the Cairncross report, why not start at home and ask what government needs to do to improve the state of its contribution to the information ecosystem? A few more notes:

  • Just as Cairncross has trouble defining quality journalism, the Commons committee has trouble defining the harm it sees everywhere on the internet. It puts off that critical and specific task to an upcoming Online Harms white paper from the government. (Will there also be an Online Benefits white paper?) The committee calls for holding social media companies — “which is not necessarily either a ‘platform’ or a ‘publisher’,” the report cryptically says — liable for “content identified as harmful after it has been posted by users.” The committee then goes much farther, threatening not just tech companies but technologists. My emphasis: “If tech companies (including technological engineers involved in creating the software for the companies) are found to have failed to meet their obligations under such a Code [of Ethics], and not acted against the distribution of harmful and illegal content, the independent regulator should have the ability to launch legal proceedings against them, with the prospect of large fines being administered….” Them’s fightin’ words, demonizing not just the technology and the technology company but the technologist.
  • Again and again in reading the committee’s report, I wrote in the margin “China” or “Iran,” wondering how the precedents and tools wished for here could be used by authoritarian regimes to control speech on the net. For example: “There is now an urgent need to establish independent regulation. We believe that a compulsory Code of Ethics should be established, overseen by an independent regulator, setting out what constitutes harmful content.” How — except in the details — does that differ from China deciding what is harmful to the minds of the masses? Do we really believe that a piece of “harmful content” can change the behavior of a citizen for the worse without many other underlying causes? Who knows best for those citizens? The state? Editors? Technologists? Or citizens themselves? The committee notes — with apparent approval — a new French law that “allows judges to order the immediate removal of online articles that they decide constitute disinformation.” All this sounds authoritarian to me and antithetical to the respect and freedom the net gives people.
  • The committee expands the definition of personal data — which, under GDPR, is already ludicrously broad, including, for example, your IP address — to cover “inferred data.” I hate to think what that could do to the disciplines of machine learning and artificial intelligence — to the patterns discerned and knowledge produced by machines (see the toy sketch after this list).
  • The committee wants to impose a 2% “digital services tax on UK revenues of big technology companies.” On what basis, besides vendetta against big (American) companies?
  • The Information Commissioner told the committee that “Facebook needs to significantly change its business model and its practices to maintain trust.” How often does government get into the nitty-gritty of companies’ business models? And let’s be clear: The problem with Facebook’s business model — click-based, volume-based, attention-based advertising — is precisely what drove media into the abyss of mistrust. So should the government tell media to change its business model? They wouldn’t dare.
  • The report worries about the “pernicious nature of micro-targeted political adverts” and quotes the Coalition for Reform in Political Advertising recommending that “all factual claims used in political ads be pre-cleared; an existing or new body should have the power to regulate political advertising content.” So government in power would clear the content of ads of challengers? What could possibly go wrong? And micro-targeting of one sort or another is also what enables small communities with specific interests to find each other and organize. Give up your presumptions of the mass.
  • The report argues “there needs to be absolute transparency of online political campaigning.” I agree. Facebook, under pressure, created a searchable database of political ads. I think Facebook should do more and make targeting data public. And I think every — every — other sector of media should match Facebook. Having said that, I still think we need to be careful about setting precedents that might not work so well in countries like, say, Hungary or Turkey, where complete transparency in political advertising and activism could lead to danger for opponents of authoritarian regimes.
  • The committee, like Cairncross, expresses affection for eliminating VAT on digital subscriptions. “This would eliminate the false incentive for news companies against developing more paid-for digital services.” Who is to say which business model is true or false? I repeat my concern that government meddling in subscription models could have a deleterious impact on news for the public at large, especially the poor. It would also put more news behind paywalls, with smaller audiences and thus less impact. (A hidden agenda, perhaps?)
  • “The Government should put pressure on social media companies to publicize any instances of disinformation,” the committee urges. OK. But define “disinformation.” You’ll find it just as challenging as defining “quality news” and “harm.”
  • The committee, like Cairncross, salutes the flag of media literacy. I remain dubious.
  • And the committee, like Cairncross, sometimes reveals its condescension. “Some believe that friction should be reintroduced into the online experience, by both tech companies and by individual users themselves, in order to recognize the need to pause and think before generating or consuming content.” They go so far as to propose that this friction could include “the ability to share a post or a comment, only if the sharer writes about the post; the option to share a post only when it has been read in its entirety.” Oh, for God’s sake: How about politicians pausing and thinking before they speak, creating the hell that is Brexit or Trump?
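
On the “inferred data” point above, a toy Python sketch — every page name and weight here is invented — shows what the committee would be sweeping in: the user supplies only a list of liked pages; the affinity label is manufactured by the model, and it is that manufactured inference that would become regulated personal data.

```python
# Toy illustration of "inferred data." All page names and weights are
# hypothetical; a real system would learn such weights from behavioral data.
AFFINITY_WEIGHTS = {
    "gardening_weekly": 0.05,
    "union_news_daily": 0.40,
    "climate_action_now": 0.35,
}

def infer_political_affinity(liked_pages):
    """Derive a label the user never provided from innocuous declared data."""
    score = sum(AFFINITY_WEIGHTS.get(page, 0.0) for page in liked_pages)
    return "probable-progressive" if score > 0.5 else "unclassified"

declared = ["gardening_weekly", "union_news_daily", "climate_action_now"]
label = infer_political_affinity(declared)
print(label)  # "probable-progressive" -- the inference itself, not the likes,
              # is what the committee would now classify as personal data
```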

In the end, I fear all this is hubris: to think that we know what the internet is and what its impact will be before we dare to define and limit the opportunities it presents. I fear the paternalistic unto authoritarian worldview that those with power know better than those without. I fear the unintended — and intended — consequences of all this regulation and protectionism. I trust the public to figure it out eventually. We figured out printing and steam power and the telegraph and radio and television. We will figure out the internet if given half a chance.

And I didn’t even begin to examine what they’re up to in Australia…

Leave our net alone*

The internet’s not broken.

So then why are there so many attempts to regulate it? Under the guises of piracy, privacy, pornography, predators, indecency, and security, not to mention censorship, tyranny, and civilization, governments from the U.S. to France to Germany to China to Iran to Canada — as well as the European Union and the United Nations — are trying to exert control over the internet.

Why? Is it not working? Is it presenting some new danger to society? Is it fundamentally operating any differently today than it was five or ten years ago? No, no, and no.

So why are governments so eager to claim authority over it? Why would legacy corporations, industries, and institutions egg them on? Because the net is working better than ever. Because they finally recognize how powerful it is and how disruptive it is to their power.

And that is precisely why we must fight against their attempts to regulate it, to change it, to throttle it, to oversee it, to insert controls into it, to grant them sovereignty over it. We also must resist the temptation to compromise, to accept the lesser of evils. Last week, Federal Communications Commissioner Robert McDowell warned of the danger of the U.N. asserting governance over the net, but then he turned around and argued that “merely saying ‘no’ to any changes to the current structure of Internet governance is likely to be a losing proposition.”

Why? I repeat: It’s not broken. This is why I urged French President Nicolas Sarkozy to take a Hippocratic oath for the net. This is why I have come to side with Sen. Al Franken on at least this: Net neutrality is not regulation; it is protecting the net from companies trying to change it. This is why the Reddit community is writing the Free Internet Act.

This is why I argued in Public Parts that we must have a discussion of the principles of an open society and the tools of publicness that enable it. This is why I wrote Public Parts. And that is why I’m posting the last chapter of the book, which argues that governments and companies are not protectors of the net and that we must be.

It’s not broken. Don’t fix it. Leave our net alone.

*Sung to the tune of….

We don’t need no regulation.
We don’t need no thought control
No dark sarcasm in the network
Government: Leave our net alone
Hey! Government! Leave our net alone!
All in all it’s just another brick in the wall.
All in all you’re just another brick in the wall.