For the last year, I’ve been engaged in a project to aggregate signals of quality in news so platforms and advertisers can recognize and give greater promotion and support to good journalism over crap.
I’ve seen that we’re missing a key — the key — signal of quality in journalism. We don’t judge journalism ourselves.
Oh, we give each other lots and lots of awards. But we have no systematized way for the public we serve to question or complain about our work, no way to judge journalistic failure, and no way to provide guidance in matters of journalistic quality and ethics.
I would like to see a structure that would enable anyone — citizen, journalist, subject — to file a question or complaint to this organization — call it a board, a jury, a court, a council, a something — that would select cases to consider.
Who would be on that board? I doubt that working journalists would be willing to judge colleagues and competitors, lest they be judged. So I’d start with journalism professors — fully aware that’s a self-serving suggestion (I would not serve, as I’d be a runaway juror) and that we the ivy-covered can be accused of being either revolutionaries or sticks-in-the-mud; it’s a place to start. I would include journalism grandees who’ve retired or branched out to other callings but bring experience, authority, and credibility with journalists. I would add representatives of civil society, assuring diversity of community, background, and perspective. Will these people have biases? Of course, they will; judge their judgment accordingly.
How would cases be taken up? Anyone could file a case. Yes, some will try to game the system: ten thousand complaints against a given outlet; volume is meaningless. The jury must have full freedom and authority to grant certiorari to specific cases. Time would be limited, so they would pick cases based on whether they are particularly important, representative, instructive, or new.
What would jurors produce? I would want to see thoughtful debate and consideration of difficult questions about journalistic quality yielding constructive and useful criticism of present practice. There is no better model than Margaret Sullivan’s tenure as public editor of The New York Times.
Isn’t this the wrong time to do this, just as the president is attacking the press as the enemy of the people? It’s precisely the right time to do this, to show how we uphold standards, are not afraid of legitimate criticism, and learn from our mistakes. There is no better way to begin to build trust than to address our own faults with honesty and openness.
How would this be supported? Such an effort cannot be ad hoc and volunteer. Jurors’ time needs to be respected and compensated. There would need to be at least one administrative person to handle incoming cases and output of judgments. Calling all philanthropists.
Do journalists need to pay attention to the judgments? No. This is not a press council like the ones the UK keeps trying — and failing — to establish. It brings no obligation to news organizations. It is an independent organization itself with a responsibility to debate key issues in our rapidly changing field.
Would it enforce a given set of standards? I don’t think it should. There are as many journalistic codes of conduct and ethics as there are journalists, and I think the jurors should feel free to call on any of them — or tread new territory, as demanded by the cases. I’m not sure that legacy standards will always be relevant as new circumstances evolve. It is also important to judge publications in their own context, against their own promises and standards. I have argued that news organizations (and internet platforms) should offer covenants to their users and the public; judge them against those covenants.
Whom does it serve, journalists or the public? I think it must serve the interests of the public journalism serves. But I recognize that very few members of the public would read or necessarily give a damn about its opinions. The audience for the jury’s work would be primarily journalists as well as journalism students and teachers.
Will it convince our haters to love us? Of course not.
Isn’t Twitter the new ombudsperson? When The New York Times eliminated its public editor position, it said that social media would pick up the slack. “But today,” wrote then-publisher Arthur Sulzberger, “our followers on social media and our readers across the internet have come together to collectively serve as a modern watchdog, more vigilant and forceful than one person could ever be.” That should be the case. But in reality, when The Times is criticized, reporters there tend to unleash a barrage of defensiveness rather than dialog.
Now I don’t want to pick on The Times. I subscribe to and honor it as a critical institution; I disagree with those who react to Times’ missteps with public vows to cancel subscriptions. Indeed, I hope that this jury can act as a pressure-relief valve that leads to dialog over defensiveness and debate instead of our reflexive cancel culture.
Having said that, unfortunately The Times does give us a wealth of recent examples of the kinds of questions this jury could take up and debate. The latest is the paper’s decision to reveal details about the Trump-Ukraine whistleblower, potentially endangering the person, and the justification by editor Dean Baquet. I want to see a debate about the ethics and implications of such a decision and I believe we need a forum where that can happen. That case is why I decided to post this idea now.
This is not easy. It’s not simple. It’s not small. It might be a terrible idea. So make your suggestions, please. In the end, I believe we need to address trust in news not with media literacy that tries to teach the public how to use and trust what they don’t use and trust now; not with the codification of our processes and procedures; not with closing in around the few who love us and pay for admission behind our walls; not by hectoring our legitimate critics with defensive whining; not with false balance in an asymmetrical media ecosystem; not with blaming others for our faults. No, I believe we need a means to listen to warranted criticism and gain value from it by grappling with our shortcomings so we can learn and improve. We don’t have that now. How could we build it?
Disclosure: NewsQA, the aggregator of news quality signals I helped start, has been funded in its first phase by Facebook. It is independent and will provide its data for free to all major platforms, ad networks, ad agencies, and advertisers as well as researchers.
This week, I wrote a dystopia of the dystopians, an extrapolation of current wishes among the anti-tech among us about the dangers and regulation of technology, data, and the net. I tried to be detailed, and in doing so I feared I may have gone too far. But now The Guardian shows me I wasn’t nearly dystopian enough, for a columnist there has beaten me to hell.
He essentially makes the argument that computers use a lot of energy; consumption of energy is killing the planet; ergo we should destroy the computers to save the planet.
But that is a smokescreen for his true argument against his real devil: data. And that frightens me. For to argue against data overall — its creation, its gathering, its analysis, its use — is to argue against information and knowledge. The columnist, Ben Tarnoff, isn’t just trying to reverse the Digital Revolution and the Industrial Revolution. He’s trying to roll back the fucking Enlightenment.
That he is doing this in the pages of The Guardian, a paper I admire and love (and have worked and written for), saddens me doubly, for this is a news organization that once explored the opportunities — and risks — of technology with open eyes and curiosity in its reporting and with daring in its own strategy. Now its writers cry doom at every turn:
Digitization is a climate disaster: if corporations and governments succeed in making vastly more of our world into data, there will be less of a world left for us to live in.
It’s all digitization’s fault. That is textbook moral panic. To call on Ashley Crossman’s definition: “A moral panic is a widespread fear, most often an irrational one, that someone or something is a threat to the values, safety, and interests of a community or society at large. Typically, a moral panic is perpetuated by news media, fuelled by politicians, and often results in the passage of new laws or policies that target the source of the panic. In this way, moral panic can foster increased social control.”
The Bogeyman, in Tarnoff’s nightmare, is machine learning, for it creates an endless hunger for data to learn from. He acknowledges that computer scientists are working to run more of their machines off renewable energy rather than fossil fuel — see today’s announcement by Jeff Bezos. But again, computers consuming electricity isn’t Tarnoff’s real target.
But it’s clear that confronting the climate crisis will require something more radical than just making data greener. That’s why we should put another tactic on the table: making less data. We should reject the assumption that our built environment must become one big computer. We should erect barriers against the spread of “smartness” into all of the spaces of our lives.
To decarbonize, we need to decomputerize.
This proposal will no doubt be met with charges of Luddism. Good: Luddism is a label to embrace. The Luddites were heroic figures and acute technological thinkers.
Tarnoff admires the Luddites not for any care about improving the future but because they fought to hold off that future over their present complaints. They smashed looms. He wants to “destroy machinery hurtful to the common good.” He wants to smash computers. He wants to control and curtail data. He wants to reduce information.
No. Controlling information — call it data or call it knowledge — is never the solution, not in a free and enlightened society (not especially at the call of a journalist). If regulate you must, then regulate information’s use: You are free to know that I am 65 years old but you are not free to discriminate against me on the basis of that knowledge. Don’t outlaw facial recognition for police — as Bernie Sanders now proposes — instead, police how they use it. Don’t turn “machine learning” into a scare word and forbid it — when it can save lives — and be specific, bringing real evidence of the harms you anticipate, before cutting off the benefits. On this particular topic, I recommend Benedict Evans’ wise piece comparing today’s issues with facial recognition to those we had with databases at their introduction.
Here is where Tarnoff ends. Am I the only one who sees the irony in the greatest progressive newspaper of the English-speaking world coming out against progress?
The zero-carbon commonwealth of the future must empower people to decide not just how technologies are built and implemented, but whether they’re built and implemented. Progress is an abstraction that has done a lot of damage over the centuries. Luddism urges us to consider: progress towards what and progress for whom? Sometimes a technology shouldn’t exist. Sometimes the best thing to do with a machine is to break it.
[This is not prediction about tomorrow; it is extrapolation from today -Ed.]
This paper explores the victory of technological dystopians over technologists in regulation, legislation, courts, media, business, and culture across the United States, Europe, and other nations in the latter years of what is now known as the Trump Time.
The key moment for the dystos came a decade ago, with what Wired.com dubbed the Unholy Compromise of 2022, which found Trumpist conservatives and Warrenite liberals joining forces to attack internet companies. Each had their own motives — the Trumpists complaining about alleged discrimination against them when what they said online was classified as hate speech; the liberals inveighing against data and large corporations. It is notable that in the sixteen years of Trump Time, virtually nothing else was accomplished legislatively — not regarding climate, health care, or guns — other than passing anti-tech, anti-net, and anti-data laws.
In the aftermath, the most successful internet companies — Alphabet/Google, Facebook, Amazon — were broken up by regulators (but interestingly Comcast, Verizon, AT&T, Microsoft, Twitter, and the news behemoth Fox-Gatehouse-Sinclair were not). Collection and use of data by commercial entities as well as by academic researchers was severely curtailed by new laws. Moderation requirements and consequent liability for copyright violations, hate, falsehood, unauthorized memories, and other forbidden speech were imposed on social-media companies, and then, via court rulings, on media and news organizations as well as individuals online. New speech courts were established in the U.S., the European Union, and the former United Kingdom countries to adjudicate disputes of falsehood and hate as well as information ownership, excessive expression, and political neutrality by net companies. Cabinet-level technology regulators in the U.S., the E.U., Canada, and Australia established mechanisms to audit algorithms, supported by software taxes as well as fines against technology companies, executives, and individual programmers. Certain technologies — most notably facial recognition — were outright outlawed. And in many American states, new curricula were mandated to educate middle- and high-school students about the dangers of technology.
The impact of all this has been, in my opinion, a multitude of unintended consequences. The eight companies resulting from the big net breakups are all still profitable and leading their now-restricted sectors with commanding market shares, and many have quietly expanded into new fields as new technologies have developed. Their aggregate market value has increased manyfold and no serious challengers have emerged.
Academic studies of divisiveness, hate, and harassment — though limited in their scope by data laws — have shown no improvement and most have found a steady decline in online decency and respect, especially as trolls and sowers of discord and disinformation took to nesting in the smaller, off-shore platforms that regularly sprout up from Russia, lately China, and other nations unknown. Other studies have found that with the resurrection of media gatekeepers in a more controlled ecosystem of expression, minority voices are heard less often in mainstream media than before the Compromise.
Even though news and media companies and their lobbyists largely won their political battles by cashing in their political capital to gain protectionist legislation, these legacy companies have nonetheless continued and accelerated their precipitous declines into bankruptcy and dissolution, with almost half of legacy news organizations ceasing operation in the last decade even as legislatively blessed media consolidation continues.
I would not go so far as to declare that we have reached the dystopia of the dystopians, though some would. In his final book, The Last Optimist, Jeff Jarvis wrote:
Far too early in the life of the internet and its possibilities, the dystos have exhibited the hubris of the self-declared futurist to believe they could foretell everything that could go wrong — and little that could go right — with internet and data technologies. Thus in their moral panic they prematurely defined and limited these technologies and cut off unforeseen opportunities. We now live in their age of fear: fear of technology, fear of data (that is, information and knowledge), fear of each other, fear of the future.
There are many reasons to be angry with the technology companies of the early internet. They were wealthy, hubristic, optimistic, expansionist, and isolated, thus deaf to the concerns — legitimate and not — of the public, media, and government. They were politically naive, not understanding how and why the institutions the net challenged — journalism, media, finance, politics, government, even nations — would use their collaborative clout and political capital to fight back and restrain the net at every opportunity. They bear but also share responsibility for the state of the net and society today with those very institutions.
Doctrines of dystos
In examining the legislation and precedents that came before and after the Compromise, certain beliefs, themes, and doctrines emerged:
The Doctrine of Dangerous Data: It would be simplistic to attribute a societal shift against “data” solely to Facebook’s Cambridge Analytica scandal of 2016, but that certainly appears to have been a key moment triggering the legislative landslide that followed. Regulation of data shifted from its use to its collection as laws were enacted to limit the generation, gathering, storage, and analysis of information associated with the internet. Scores of laws now require that data be used only for the single purpose stated at collection, and others impose strict limits on the life of data, mandating expiration and erasure. Academics and medical researchers — as well as some journalists — have protested such legislation, contending that it all but kills their ability to find correlation and causation in their fields, but they have failed to overturn a single law. Note well that similar data collection offline — by stores through loyalty cards, banks through credit cards, and so on — has seen no increase in regulation; marketers and publishers still make use of mountains of offline data in their businesses.
News companies and their trade associations demonized the use of data by their competitors, the platforms. In our Geoffrey Nunberg reading, “Farewell to the Information Age,” he quotes Philip Agre saying that “the term ‘information’ rarely evokes the deep and troubling questions of epistemology that are usually associated with terms like ‘knowledge’ and ‘belief.’ One can be a skeptic about knowledge but not about information. Information, in short, is a strikingly bland substance.” “Data,” on the other hand, became a dystopian scare word thanks to campaigns led by news and media companies and their trade associations and lobbyists, using their own outlets.
The Debate over Preeminent Data Ownership: In the 2020 American elections and every one following, countless politicians have vowed to protect consumers’ ownership of “their data” — and passed many laws as a result — but courts have still not managed to arrive at a consistent view of what owning one’s data means. Data generation is so often transactional — that is, involving multiple parties — that it has proven difficult to find a Solomonic compromise in deciding who has preeminent rights over a given piece of data and thus the legal right to demand its erasure. In California v. Amazon Stores, Inc. — which arose from a customer’s embarrassment about purchases of lubricating gels — the Supreme Court decided, in an expansion of its long-held Doctrine of Corporate Personhood, that a company has equal rights and cannot be forced to forget its own transactions. In Massachusetts v. Amazon Web Services, Inc., an appellate panel ruled that AWS could be forced to notify individuals included in databases it hosted and in one case could be forced to erase entire databases upon demand by aggrieved individuals. Despite friend-of-the-court filings by librarians, educators, and civil libertarians, claims of a countervailing right to know or remember by parties to transactions — or by the public itself — have failed to dislodge the preeminence of the right to be forgotten.
Privacy Über Alles: Privacy legislation — namely Europe’s General Data Protection Regulation (GDPR) — led the way for all net legislation to follow. Every effort to track any activity by people — whether by cameras or cookies — was defined as “surveillance” and banned under a raft of net laws worldwide. In every single case, though, these technologies were reserved for government use. Thus “surveillance” lost its commercial meaning and regained its more focused definition as an activity of governments, which continue to track citizens. Separate legislation in some nations granted people the expectation of privacy in public, which led to severe restrictions on photography by not only journalists but also civilians, requiring that the face of every unknown person in a photo or video who has not given written consent — say, at a birthday party in a restaurant — be blurred.
The Doctrine of Could Happen: A pioneering 2024 German law that has been — to use our grandparents’ term — xeroxed by the European Commission and then the United States, Canada, Australia, and India requires companies to file Technology Impact Reports (TIRs) for any new technology patent, algorithm, or device introduced to the market. To recount, the TIR laws give limited liability protection for any possible impact that has been revealed before the introduction of a technology; if a possible outcome is not anticipated and listed and then occurs, there is no limit to liability. Thus an entirely new industry — complete with conventions, consultants, and newsletters — has exploded to help any and every company using technology to imagine and disclose everything that could go wrong with any software or device. There is no comparable industry of consultants ready to imagine everything that could go right, for the law does not require or even suggest that as a means to balance decisions.
Laws of Forbidden Technologies: As an outcome of the Doctrine of Could Happen, some entire technologies — most notably facial recognition and the bodily tracking of individuals’ movements in public places, such as malls — have been outright banned from commercial, consumer, or (except with severe restrictions) academic use in Germany, France, Canada, and some American states. In every case, the right to use such technologies is reserved to government, leading to fears of misuse by those with the greatest power to do so. There are also statutes banning and providing penalties for algorithms that discriminate on various bases, though in a number of cases, courts are struggling to define precisely what statutory discrimination is (against race, certainly, but also against opinion and ideology?). Similarly, statutes requiring algorithmic transparency are confounding courts, which have proven incapable of understanding formulae and code. Not only technologies are subject to these laws but so are the technologists who create them. English duty-of-care online harms laws (which were not preserved in Scotland, Wales, and Northern Ireland after the post-Brexit dissolution of the United Kingdom) place substantial personal liability and career-killing fines not only on internet company executives but also on technologists, including software engineers.
The Law of China: The paradox is lost on no one that China and Russia now play host to the most vibrant online capitalism in the world, as companies in those countries are bound not by Western laws but only by fealty to their governments. Thus, in the last decade, we have seen an accelerated reverse brain-drain of technologists and students to companies and universities in China, Russia, and other politically authoritarian but technologically inviting countries. Similarly, venture investment has fled England entirely, and the U.S. and E.U. in great measure. A 2018 paper by Kieron O’Hara and Wendy Hall posited the balkanization of the internet into four nets: the open net of Silicon Valley, the capitalist net of American business, the bourgeois and well-behaved net of the E.U., and the authoritarian net of China. The fear then was that China — as well as Iran, Brazil, Russia, and other nations that demanded national walls around their data — would balkanize the net. Instead, it was the West that balkanized the net with its restrictive laws. Today, China’s authoritarian (and, many would argue, truly dystopian) net — as well as Russia’s disinformation net — appears victorious, growing while the West’s net is, by all measures, shrinking.
The Law of Truth and Falsity: Beginning in France and Singapore, “fake news” laws were instituted to outlaw the telling and spreading of lies online. As these truth laws spread to other countries, online speakers and the platforms that carried their speech became liable for criminal and civil fines under newly enhanced libel laws. Public internet courts in some nations — as well as Facebook’s Oversight Board, in essence a private internet court — were established to rule on disputes over content takedowns. The original idea was to bring decisions about content or speech — for example, violations of laws regarding copyright and hate speech — out into the open, where they could be adjudicated with due process and where legal norms could be negotiated in public. It was not long before the remits of these courts were expanded to rule on truth and falsity in online claims. In nation after nation, a new breed of internet judges resisted this yoke, but higher courts forced them to take on the task. In case law, the burden of proof has increasingly fallen on the speaker, for demonstrating falsity is, by definition, proving a negative. Thus, for all practical effect, when a complaint is filed, the speaker is presumed guilty until proven innocent — or truthful. Attempts to argue the unconstitutionality of this doctrine even in the United States proved futile once the internet was ruled to be a medium, subject to regulation like the medium of broadcast. Though broadcast itself (radio and television towers and signals using public “airwaves”) is now obsolete and gone, the regulatory regime that oversaw it in Europe — and that excepted it from the First Amendment in America — now carries over to the net.
Once internet courts were forced to rule on illegal speech and falsity, it was not a big step to also require them to rule on matters of political neutrality under laws requiring platforms to be symmetrical in content takedowns (no matter how asymmetrical disinformation and hate might be). And once that was done, the courts were expanded further to rule on such matters as data and information ownership, unauthorized sharing, and the developing field of excessive expression (below). In a few nations, especially those that are more authoritarian and lacking in irony, separate truth and hate courts have been established.
The Doctrine of Excessive Expression: In reading the assigned, archived Twitter threads, Medium posts, academic papers, and podcast transcripts from the late teens, we see the first stirrings of a then- (but no longer) controversial question: Is there too much speech? In 2018, one communications academic wrote a paper questioning the then-accepted idea that the best answer to bad speech is more speech, even arguing that so-called “fake news” and the since-debunked notion of the filter bubble (see the readings by Axel Bruns) put into play the sanctity of the First Amendment. At the same time, a well-respected professor asked whether the First Amendment was — this is his word — obsolete. As we have discussed in class, even to raise that question a generation before the internet and its backlash would have been unthinkable. Also in 2018, one academic wrote a book contending that Facebook’s goal of connecting the world (said founder Mark Zuckerberg at the time: “We believe the more people who have the power to express themselves, the more progress our society makes together”) was fundamentally flawed, even corrupt; what does that say about our expectations of democracy and inclusion, let alone freedom of speech? The following year, a prominent newspaper columnist essentially told fellow journalists to abandon Twitter because it was toxic — while others argued that in doing so, journalists would be turning their backs on voices enabled and empowered by Twitter through the relatively recent invention of the hashtag.
None of these doctrines of the post-technology dysto age has been contested more vigorously than this, the Doctrine of Excessive Expression (also known as the Law of Over-Sharing). But the forces of free expression largely lost when the American Section 230 and the European E-Commerce Directive were each repealed, thus making intermediaries in the public conversation — platforms as well as publishers and anyone playing host to others’ creativity, comment, or conversation — liable for everything said on their domains. As a result, countless news sites shut down fora and comments, grateful for the excuse to get out of the business of moderation and interaction with the public. Platforms that depended on interactivity — chief among them Twitter and the various divisions of the former Facebook — at first hired tens of thousands of moderators and empowered algorithms to hide any questionable speech, but this proved ineffective as the chattering public in the West learned lessons from Chinese users and invented coded languages, references, and memes to still say what they wanted, even and especially if hateful. As a result, the social platforms forced users to indemnify them against damages, which led not only to another new industry in speech insurance but also to the requirement that all users verify their identities. Prior to this, many believed that eliminating anonymity would all but eliminate trolling and hateful speech online. As we now know, they were wrong. Hate abounds. The combination of the doctrines of privacy, data ownership, and expression respects anonymity for the subjects of speech but not for the speakers, who are at risk for any uttered and outlawed thought. “Be careful what you say” is the watchword of every media literacy course taught today.
One result of the drive against unfettered freedom of expression has been the return of the power of the gatekeeper, long wished for by the gatekeepers themselves — newspaper, magazine, and book editors as well as authors of old, who believed their authority would be reestablished and welcomed. But the effect was not what they’d imagined. Resentment against these gatekeepers by those who once again found themselves outside the gates of media only increased as trust in media continued to plummet and, as I said previously, the business prospects of news and other legacy media only darkened yet further.
The impact of the dystos’ victory can be seen in almost every sector of society.
In business, smaller is now better as companies worry about becoming “too big” (never knowing the definition of “too”) and being broken up. As a result, the merger and acquisition market, especially in tech, has diminished severely. With fewer opportunities for exit, there is less appetite for investment in new ventures, at least in America and Europe. In what is being called the data dark ages in business, executives in many fields — especially in marketing — are driving blind, making advertising, product, and strategic decisions without the copious data they once had, which many blame for the falling value of much of the consumer sector of the economy. After a decade and a half of trade and border wars of the Donald/Ivanka Trump Time [Hey, I said it’s a dystopia -Ed.], it would be simplistic to blame history’s longest recession on a lack of data, but it certainly was a contributing factor to the state of the stock market. Business schools have widely abandoned teaching “change management” and are shifting to teaching “stability management.” One sector of business known in the past for rolling with the punches and finding opportunity in adversity — pornography — has hit a historic slump thanks to data and privacy laws. One might have expected an era of privacy to be a boon for porn, but identity and adult verification laws have put a chill on demand. Other businesses to suffer are those offering consumers analysis of their and even their pets’ DNA and help with genealogy (in some nations, courts have held that the dead have a right to privacy and others have ruled in favor of pets’ privacy). But as is always the case in business, what is a loss for one industry is an opportunity for another to exploit; witness the explosion not only in Technology Impact Report Optimization but also in a new growth industry for fact-certifiers, speech insurers, and photo blurrers.
In culture, the dystos long since won the day. The streaming series Black Mirror has been credited by dystos and blamed by technos for teaching the public to expect doom with every technology. It is worth noting that in my Black Mirror Criticism class last semester, we were shown optimistic films about technology such as You’ve Got Mail and Tomorrowland to disbelieving hoots from students. We were told that many generations before, dystopian films such as Reefer Madness — meant to frighten youth about the perils of marijuana — inspired similar derision from the younger generation, just as it still would today. It is fascinating to see how optimism and pessimism can, by turns, be taken seriously or mocked in different times.
I also believe we have seen the resurgence of narrative over data in media. In the early days of machine learning and artificial intelligence — before they, along with the data that fed them, also became scare words — it was becoming clear that dependence on story and narrative and the theory of mind were being superseded by the power of data to predict human actions. But when data-based artificial intelligence and machine learning predicted human actions, they provided no explanation, no motive, no assuring arc of a story. This led, some argued, to a crisis of cognition, a fear that humans would be robbed of purpose by data, just as the activities of the universe were robbed of divine purpose by Newtonian science and the divine will of creation was foiled by Darwin and evolution. So it was that a cultural version of the regulatory Unholy Compromise developed between religious conservatives, who feared that data would deny God His will, and cultural liberals, who feared that data would deny them their own will. So in cultural products just as in news stories and political speeches, data and its fruits came to be portrayed as objects of fear and resentment and the uplifting story of latter-day, triumphal humanism rose again. This has delighted the storytellers of journalism, fiction, drama, and comedy, who feed on a market for attention. Again, it has done little to reverse the business impact of abundance on their industries. Even with YouTube gone, there is still more than enough competition in media to drive prices toward zero.
The news industry, as I’ve alluded to above, deserves much credit or blame for the dysto movement and its results, having lobbied against internet companies and their collection of data and for protectionist legislation and regulation. But the story has not turned out as they had hoped. Cutting off the collection of data affected news companies, too. Without the ability to use data to target advertising, that revenue stream imploded, forcing even the last holdouts in the industry to retreat behind paywalls. And without the ability to use data to personalize their services, news media returned to their mass-media, one-size-fits-all, bland roots, which has not been conducive to grabbing subscription market share in what turns out to be a very small market of people willing to pay for news or content overall. The one bright spot in the industry is the fact that the platforms are licensing content as their only way to deal with paywalls. Thus the news outlets that fought the platforms are now dependent on the platforms for their most reliable source of revenue. Be careful what you wish for.
In education, the rush to require the teaching of media literacy curricula at every level of schooling led to unforeseen consequences. Well, actually, the consequences were not unforeseen by internet researcher danah boyd, who argued in our readings from the 2000-teens that teachers and parents were succeeding all too well at instructing young people to distrust everything they heard and read. This, and the universal distrust of others engendered by media and politicians in the same era, were symptoms of what boyd called an epistemological war — that is: ‘If I don’t like you, I won’t like your facts.’ The elderly and retired journalists who spoke to our class still believe that facts alone, coming from trusted sources, would put an end to the nation’s internal wars. Back in the early days of the net, it seemed as if we were leaving an age overseen by gatekeepers controlling the scarcities of attention and information and returning to a pre-mass-media era built on the value of conversation and relationships. As Nunberg put it in 1996, just as the web was born: “One of the most pervasive features of these media is how closely they seem to reproduce the conditions of discourse of the late seventeenth and eighteenth centuries, when the sense of the public was mediated through a series of transitive personal relationships — the friends of one’s friends, and so on — and anchored in the immediate connections of clubs, coffee-houses, salons, and the rest.” Society looked as if it would trade trust in institutions for trust in family, friends, and neighbors via the net. Instead, we came to distrust everyone, as we were taught to. Now we have neither healthy institutions nor the means to connect with people in healthy relationships. The dystos are, indeed, victorious.
[If I may be permitted an unorthodox personal note in a paper: Professor, I am grateful that you had the courage to share optimistic as well as pessimistic readings with us and gave us the respect and trust to decide for ourselves. I am sorry this proved to be controversial and I am devastated that you lost your quest for tenure. In any case, thank you for a challenging class. I wish you luck in your new career in the TIRO industry.]
Professor Rosen gave me homework. He told me he wanted me to prepare a list like his, of the top problems I see in journalism. I do not take an assignment from my academic mentor lightly and so it took me time to contemplate my greatest worries. When I did, I found links among them: Trump and all that comes with him, of course; race; the opportunity at last to listen to unheard voices; fear of criticism; fear of change — there’s a bit of all that in all of them. I second Jay’s current concerns and add my own:
The need to study our impact and consider our outcomes
Oh, I hear a lot of talk about impact in journalism but it is reliably egocentric: ‘What did my story accomplish?’ Impact starts with journalists, not the public. And it’s always positive in discussion. I rarely hear talk of our negative impact, how we in media polarize, fabricating and pitting sides against each other, exploiting attention with appeals to base instincts.
Coming to a university, I learned the need to begin a curriculum with outcomes: What should students learn? I wonder about outcomes-based journalism, which would begin by asking not just what the public needs to know (our supposed mission) but how we can improve the quality of the public conversation, how we can bring out voices rarely heard, how we can build bridges among communities in conflict, how we can appeal to the better nature of our citizens, how we can help build a better society.
If we did that, our metrics of success would be entirely different — not audience, attention, pageviews, clicks, even subscriptions. Thus our business models must change; more on that below. We cannot begin this process until we respect the public’s voices and build means to better listen to them. We also need research to understand communities’ needs and our impact on them. This is not nearly so practical a worry as Jay’s are, but it’s my biggest concern.
The need for self-criticism in journalism
What troubled me most about New York Times Executive Editor Dean Baquet’s round of interviews after the Unity vs. Racism headline debacle is an apparent unwillingness to hear outside critics, even while arguing that the paper doesn’t need an ombudsman because it has outside critics. Baquet dismissed politicians — Beto, AOC, Castro — who had legitimate criticism of the paper, saying: “I don’t need the entire political field to tell me we wrote a bad headline.” When told that Twitterati were criticizing the headline, Baquet told his staff: “My reaction was to essentially say, ‘Fuck ’em, we’re already working on it.’” (Dismissing what citizens have to say on Twitter is a Times sport.) More worrisome to me from Slate’s transcript of the newsroom meeting was the evidence (as I said in a comment on Jay’s post) that Timespeople are scared of talking with each other. So one wonders how this family will ever work it all out. The most eloquent statement in the meeting came from a journalist who chose to remain anonymous in his own newsroom. Though I want to keep this short, I will quote it in full:
Saying something like divisive or racially charged is so euphemistic. Our stylebook would never allow it in other circumstances. I am concerned that the Times is failing to rise to the challenge of a historical moment. What I have heard from top leadership is a conservative approach that I don’t think honors the Times’ powerful history of adversarial journalism. I think that the NYT’s leadership, perhaps in an effort to preserve the institution of the Times, is allowing itself to be boxed in and hamstrung. This obviously applies to the race coverage. The headline represented utter denial, unawareness of what we can all observe with our eyes and ears. It was pure face value. I think this actually ends up doing the opposite of what the leadership claims it does. A headline like that simply amplifies without critique the desired narrative of the most powerful figure in the country. If the Times’ mission is now to take at face value and simply repeat the claims of the powerful, that’s news to me. I’m not sure the Times’ leadership appreciates the damage it does to our reputation and standing when we fail to call things like they are.
I don’t mean to join the Times pile-on; like Jay, I remain a loyalist and a subscriber. I also don’t mean to make The Times emblematic of all journalism; it is the grand exception. I use this episode as one example of how we journalists who criticize anyone do not let just anyone criticize us. Here I argue we need to consider — as Facebook, of all institutions, is — a systematic means of oversight of the quality of journalism as a necessity to build (not rebuild) trust. Instead, we tend to codify the way we’ve always done things — and wonder at the daily miracle of a front page — as if the goal is to recapture some Golden Age that never was.
Race is not the story of the moment. It is the story of the age that is finally in the moment in media. As a child of white privilege who grew up being taught the mythical ideal of the melting pot, I unlearn those lessons and learn more about racism in America every day. I learn mostly from the voices who were not heard in mass, mainstream media. I hear them now because they have a path around media (and then sometimes into media) thanks in considerable measure to the internet.
Race is a big story in media now not because of Donald Trump and his white nationalists. That gets things in the wrong order and gives credit to the devil. First, race is the story now because people of color can be heard and that is what scares the old, white men in power so much that they would rather burn down our institutions than share them — which is what has finally grabbed the attention of old, white media, so race is now news.
But it is apparent that media do not know how to cover this story. I don’t know how to, either. I am grateful for the publication — as I write this — of The New York Times’ and Nikole Hannah-Jones’ profoundly important 1619 Project and its curriculum. That’s not a worry; that’s gratitude. Yet it comes even as The Times itself grapples (above) with how to cover race and how to hear new voices. This is that hard.
Because I treasure those new voices I can now hear, because I value the expression the net brings to so many more communities, I want to protect the net and its freedoms. I see attacks on those freedoms from the right — from authoritarians abroad and right-wing white nationalists here. I also see attacks on the net and its freedoms from media (who never acknowledge their conflict of interest and jealousy over lost attention and revenue) and the left (who are attacking big corporations). I complained about the quality of tech-policy coverage here.
Those simple words are now being flagrantly misinterpreted across the political spectrum as a way to threaten companies like Facebook and Twitter. But make no mistake: if the law is repealed, the real casualties will not be the tech giants; it will be the hundreds of millions of Americans who use the internet to communicate.
I have been worrying about moral panic over technology in media, which is helping to fuel an exploitive and cynical moral panic among politicians, one that threatens to damage the net and the new companies that challenge all of them and their power. My worries only worsen.
Here I lump together my fears about the state of political journalism, campaign coverage, disinformation, and manipulation. As Jay has been arguing, and strenuously, the press has no strategy for covering the intentional aberration that is Donald Trump or the racism he exploits and propels. The press continues to insist on covering his “base,” a minority, rather than his opponents, a majority, which only gives more attention to the angry white man and less to voices still ignored. As many of us have been arguing, predictions do nothing to inform the electorate, but predicting is what pundits do (usually incorrectly). As James Carey argued, the polls upon which the pundits hang their predictions are anathema to democracy, for they preempt the public conversation they are meant to measure. Trump, the Russians, right-wing trolls, and too many others to imagine are playing the press for chumps, exploiting their weaknesses (“we just report, we don’t decide”) to turn news organizations into amplifiers for their dangerous ideas. (See the Times discussion of face value above.) I see nothing to say that the political press has learned a single lesson. I’m plenty worried about that.
Of course, no list of worries about journalism is complete without existential fretting over business and the lack of any clear path to sustainability. There likely is no path to profitability for journalism as it was. The only way we are going to save journalism is to fundamentally reconsider it: to recognize at last all the new opportunities technology brings us to do more than produce a product we call content but instead to provide a service to the public; to build the means to listen to voices not heard before and, as I said above, to build bridges among communities; to bring value to people’s lives and communities and find value ourselves in that, basing our metrics of success there. The business of journalism is what I worry and write about more than anything else, so I won’t go on at length here. I join with Jay’s concern. I worry that newspapers continue to believe they can find new ways to sell their old ways; see Josh Benton’s frightening and insightful analysis of the news on the L.A. Times’ subscriptions. I fear that Gannett and Gatehouse have no strategy and neither do most newspaper companies. I even worry that Google, Facebook, and the rest of the net are still built on mass media’s faulty, volume-based business model. I worry a lot. Then I remind myself that it’s still early days.
As I write this, I’m halfway through teaching our incoming class at the Newmark J-School about the context of their upcoming study and work: the history of media and journalism, the business and how we got here, and the new opportunities we have to reconsider journalism. I tell them it is their responsibility to reinvent journalism.
My favorite moments come when students challenge me. Friday one student did that, asking what I — and my generation in journalism — did wrong to get us in this fix. It was a good question and sadly I had many answers: about not listening to communities, about importing our flawed business model onto the net, about my overblown optimism for hyperlocal blogs as building blocks for new ecosystems. (I will try to post audio of the discussion soon.)
In that spirit, I should anticipate the question about my worries here: And what are you doing about them? These worries do inform my work. One thread you see in everything above is the need to listen to, respect, empathize with, and serve communities who for too long were not heard; this is what inspired the start of Social Journalism at my school. Now I am working on bringing other disciplines into a journalism school — anthropology, neuroscience, psychology, economics, philosophy, design — to consider how they would address society’s problems and the outcomes they would work toward. I am proud to work at a school where diversity is at the core of our strategy and we are starting new programs to address racial equity and inclusion in media leadership and ownership. Regarding moral panic in media coverage, I am working to organize training for reporters in coverage of major policy issues like Section 230. Regarding disinformation, I am working on projects to bring more attention and support to quality news. Whether any of those are the right paths, I will leave to others to judge.
Jay Rosen updates his list of concerns and problems and I will try to do the same as warranted. In the meantime, tell me: What problems worry you? What do you want to do about them?
Facebook is devoting impressive resources — months of time and untold millions of dollars — to developing systems of governance, of its users and of itself, raising fascinating questions about who governs whom according to what rules and principles, with what accountability. I’d like to ask similar questions about journalism.
I just spent a day at Facebook’s fallen skyscraper of a headquarters attending one of the last of more than two dozen workshops it has held to solicit input on its plans to start an Oversight Board. [Disclosures: Facebook paid for participants’ hotel rooms and I’ve raised money from Facebook for my school.] Weeks ago, I attended another such meeting in New York. In that time, the concept has advanced considerably. Most importantly, in New York, the participants were worried that the board would be merely an appeals court for disputes over content take-downs. Now it is clear that Facebook knows such a board must advise and openly press Facebook on bigger policy issues.
Facebook’s team showed the latest group of academics and others a near-final draft of a board charter (which will be released in a few weeks, in 20-plus languages). They are working on by-laws and finalizing legal structures for independence. They’ve thought through myriad details about how cases will rise (from users and Facebook) and be taken up by the board (at the board’s discretion); about conflict resolution and consensus; about transparency in board membership but anonymity in board decisions; about how members will be selected (after the first members join, the board will select its own members); about what the board will start with (content takedowns) and what it can tackle later (content demotion and taking down users, pages, groups — and ads); about how to deal with GDPR and other privacy regulation in sharing information about cases with the board; about how the board’s precedents will be considered but will not prevent the board from changing its mind; even about how other platforms could join the effort. They have grappled with most every structural, procedural, and legal question the 2,000 people they’ve consulted could imagine.
But as I sat there I saw something missing: the larger goal and soul of the effort and thus of the company and the communities it wants to foster. They have structured this effort around a belief, which I share, in the value of freedom of expression, and the need — recognized too late — to find ways to monitor and constrain that freedom when it is abused and used to abuse. But that is largely a negative: how and why speech (or as Facebook, media, and regulators all unfortunately refer to it: content) will be limited.
Facebook’s Community Standards — in essence, the statutes the Oversight Board will interpret, enforce, and suggest revisions to — are similarly expressed in the negative: what speech is not allowed and how the platform can maintain safety and promote voice and equality among its users by dealing with violations. In its Community Standards (set by Facebook and not by the community, by the way), there are nods to higher ends — sharing stories, seeing the world through others’ eyes, diversity, equity, empowerment. But then the Community Standards become a document about what users should not do. And none of the documents says much, if anything, about Facebook’s own obligations.
So in California, I wondered aloud what principles the Oversight Board would call upon in its decisions. More crucially, I wondered whom the board is meant to serve and represent: does it operate in loco civitas (in place of the community), publico (public), imperium (government and regulators), or Deus, (God — that is, higher ethics and standards)? [Anybody with better schooling than I had, please correct my effort at Latin.]
I think these documents, this effort, and this company — along with other tech companies — need a set of principles that should set forth:
Higher goals. Why are people coming to Facebook? What do they want to create? What does the company want to build? What good will it bring to the world? Why does it exist? For whose benefit? Zuckerberg issued a new mission statement in 2017: “To give people the power to build community and bring the world closer together.” And that is fine as far as it goes, but that’s not very far. What does this mean? What should we expect Facebook to be? This statement of goals should be the North Star that guides not just the Oversight Board but every employee and every user at Facebook.
A covenant with users and the public in which Facebook holds itself accountable for its own responsibilities and goals. As an executive from another tech company told me, terms of service and community standards are written to regulate the behavior of users, not companies. Well, companies should put forth their own promises and principles and draw them up in collaboration with users (civitas), the public (publico), and regulators (imperium). And that gives government — as in the case of proposed French legislation — the basis for holding the company accountable.
I’ll explore these ideas further in a moment, but first let me address the elephant on my keyboard: whether Facebook and its founder and executives and employees have a soul. I’ve been getting a good dose of crap on Twitter the last few days from people who blithely declare — and others who retweet the declaration — that Zuckerberg is the most dangerous man on earth. I respond: Oh, come on. My dangerous-person list nowadays starts with Trump, Murdoch, Putin, Xi, Kim, Duterte, Orbán, Erdoğan, MBS…you get the idea. To which these people respond: But you’re defending Facebook. I will defend it and its founder from ridiculous, click-bait trolling that devalues the real danger our world is in today. I also criticize Facebook publicly and did at the meetings I attended there. Facebook has fucked up plenty lately and that’s why it needs oversight. At least they realize it.
When I defend internet platforms against what I see as media’s growing moral panic, irresponsible reporting, and conflict of interest, I’m defending the internet itself and the freedoms it affords from what I fear will be continuing regulation of our own speech and freedom. I don’t oppose regulation; I have been proposing what I see as reasonable regimes. But I worry about where a growing unholy alliance against the internet between the far right and technophobes in media will end.
That is why I attend meetings such as the ones that Facebook convenes and why I just spent two weeks in California meeting with both platform and newspaper executives, to try to build bridges and constructive relationships. That’s why I take Facebook’s effort to build its Oversight Board seriously, to hold them to account.
Indeed, as I sat in a conference room at Facebook hearing its plans, it occurred to me that journalism as a profession and news organizations individually would do well to follow this example. We in journalism have no oversight, having ousted most ombudsmen who tried to offer at least some self-reflection and -criticism (and having failed in the UK to come up with a press council that isn’t a sham). We journalists make no covenants with the public we serve. We refuse to acknowledge — as Facebook executives did acknowledge about their own company — our “trust deficit.”
We in journalism do love to give awards to each other. But we do not have a means to systematically identify and criticize bad journalism. That job has now fallen to, of all unlikely people, politicians, as Beto O’Rourke, Alexandria Ocasio-Cortez, and Julian Castro offer quite legitimate criticism of our field. It also falls to technologists, lawyers, and academics who have been appalled at, for example, The New York Times’ horrendously erroneous and dangerous coverage of Section 230, our best protection of freedom of expression on the internet in America. I’m delighted that CJR has hired independent ombudsmen for The Times, The Post, CNN, and MSNBC. But what about Fox and the rest of the field?
I’ve been wondering how one might structure an oversight board for journalism to take the place of all those lost ombudsmen, to take complaints about bad journalism, to deliberate thoughtful and constructive responses, and to build data about the journalistic performance and responsibility of specific outlets. That will be a discussion for another day, soon. But even with such a structure, journalism, too — and each news outlet — should offer covenants with the public containing their own promises and statements of higher goals. I don’t just mean following standards for behavior; I mean sharing our highest ambitions.
I think such covenants for Facebook (and social networks and internet platforms) and journalism would do well to start with the mission of journalism that I teach: to convene communities into respectful, informed, and productive conversation. Democracy is conversation. Journalism is — or should be — conversation. The internet is built for conversation. The institutions and companies that serve the public conversation should promise they will do everything in their power to serve and improve that conversation. So here is the beginning of the kind of covenant I would like to see from Facebook:
Facebook should promise to create a safe environment where people can share their stories with each other to build bridges to understanding and to make strangers less strange. (So should journalism.)
Facebook should promise to enable and empower new and diverse voices that have been deprived of privilege and power by existing, entrenched institutions. (Including journalism.)
Facebook should promise to build systems that reward positive, productive, useful, respectful behavior among communities. (So should journalism.)
Facebook should promise not to build mechanisms to polarize people and inflame conflict. (So should journalism.)
Facebook should promise to help inform conversations by providing the means to find reliable information. (Journalism should provide that information.)
Facebook should promise not to build its business upon and enable others to benefit from crass attempts to exploit attention. (So should the news and media industries.)
Facebook should warrant to protect and respect users’ privacy, agency, and dignity.
Facebook should recognize that malign actors will exploit weak systems of protection to drive people apart and so it should promise to guard against being used to manipulate and deceive. (So should journalism.)
Facebook should share data about its performance against these goals, about its impact on the public conversation, and about the health of that conversation with researchers. (If only journalism had such data to share.)
Facebook should build its business, its tools, its rewards, and its judgment of itself around new metrics that measure its contributions to the health and constructive vitality of the public conversation and the value it brings to communities and people’s lives. (So should journalism.)
Clearly, journalism’s covenants with the public should contain more: about investigating and holding power to account, about educating citizens and informing the public conversation, and more. That’s for another day. But here’s a start for both institutions. They have more in common than they know.