The net is yet young and needs to learn from its present failures to build a better infrastructure not just for speaking but also for listening and finding that which is worth listening to, from experts and people with authority, intelligence, education, experience, erudition, good taste, and good sense.
Here is a preview of one nascent example of such a system built on expertise from Samir Arora, the former CEO of Glam. Samir and I bonded a dozen years ago over the power of networks, when he came to my office to show me one of history’s ugliest PowerPoint slides, illustrating how open networks of independent blogs at Glam bested closed systems of owned content in a media company. I had been beating the same drum. Glam later imploded for many reasons, among them the shift of the ad market to programmatic networks, and investor politics.
Next Arora quietly set to work on Sage, a network of experts. It’s not public yet — he and his team plan to open it up in the first or second quarter — and it’s a complex undertaking whose full implications I won’t understand until we see how people use it. But as before, I think Arora is onto an important insight.
He started with a manageable arena: travel and food — that is, expertise about places. That topic made it easier to connect what someone says with a specific destination, hotel, or restaurant; to compare what others said about these entities; and to judge whether someone was actually there and spoke from experience as a test of credibility and accuracy.
I’ve begged Arora to also tackle news, but watching him grapple with expertise — at the same time as I’ve watched the platforms struggle with quality in news — has made it apparent that every sector of knowledge will need its own methodology. Sports can probably operate similarly to travel. Health and science will depend on accredited institutions (i.e., people with degrees). Culture will be sensitive to, well, cultural perspective. International affairs will require linguistic abilities. Politics and expertise is probably oxymoronic.
Sage began with 100 manually curated experts in travel. Asking the experts to in turn recommend experts and analyzing their connections yielded a network of 1,000 experts with opinions about 10,000 places. Then AI took that learning set to scale the system, finding 250k experts and influencers with their judgments about 5 million places. 250k is a lot of sources, but it’s not 1.74 billion, which is how many web sites there are. It’s a manageable set from which to judge and rank quality.
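The seed-and-expand approach described above (hand-pick a trusted core, ask them to vouch for others, then analyze the connections) is essentially snowball sampling over a recommendation graph. A minimal sketch of the idea, with hypothetical names and data rather than Sage's actual code:

```python
from collections import defaultdict

# Hypothetical illustration of seed-and-expand ("snowball") curation:
# start from hand-picked experts, add whoever they recommend, and use
# incoming recommendations as a crude credibility signal.

def expand_network(seeds, recommendations, rounds=2):
    """seeds: list of expert ids; recommendations: dict id -> list of ids."""
    known = set(seeds)
    frontier = set(seeds)
    endorsements = defaultdict(int)  # id -> recommendations received
    for _ in range(rounds):
        next_frontier = set()
        for expert in frontier:
            for rec in recommendations.get(expert, []):
                endorsements[rec] += 1
                if rec not in known:
                    known.add(rec)
                    next_frontier.add(rec)
        frontier = next_frontier
    # rank everyone found by how often already-vetted experts vouched for them
    ranked = sorted(known, key=lambda e: -endorsements[e])
    return ranked, endorsements

recs = {"ana": ["ben", "cho"], "ben": ["cho", "dee"], "cho": ["dee"]}
ranked, scores = expand_network(["ana"], recs)
```

The endorsement counts are the sort of human-vetted learning set an AI system could then generalize from to find many more candidates at scale.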
I probably would have stopped there and released this service to the public as a new travel and restaurant search engine built on a next-generation version of Google’s PageRank, one that could identify expertise and control for quality and gaming. Or I’d license the technology to social platforms: Twitter’s Jack Dorsey has been talking about finding ways to surface expertise in topics, and Facebook is in constant search of ways to improve its ranking.
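For readers who haven't looked under the hood, PageRank scores a node by the scores of the nodes pointing at it, computed by power iteration. A toy sketch, treating recommendations among experts as the links (this is the textbook algorithm, not Sage's or Google's production ranking):

```python
def pagerank(links, damping=0.85, iterations=50):
    """links: dict node -> list of nodes it endorses."""
    nodes = set(links) | {n for outs in links.values() for n in outs}
    rank = {n: 1.0 / len(nodes) for n in nodes}
    for _ in range(iterations):
        # every node keeps a small "teleport" share of rank...
        new = {n: (1 - damping) / len(nodes) for n in nodes}
        for node, outs in links.items():
            if outs:
                # ...and passes the rest along its endorsements
                share = damping * rank[node] / len(outs)
                for target in outs:
                    new[target] += share
            else:
                # dangling node: spread its rank evenly
                for target in nodes:
                    new[target] += damping * rank[node] / len(nodes)
        rank = new
    return rank

rank = pagerank({"a": ["c"], "b": ["c"], "c": ["a"]})
```

In the tiny example, “c” is endorsed by both “a” and “b” and ends up with the highest score; an expertise ranker would layer credibility tests on top of a graph signal like this.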
But Arora did not. His reflex is to create tools for creators, going back to his cofounding of NetObjects in 1995, which built tools to build web sites. At Glam, he bought Ning, a tool that let any organization build its own social network.
So, at Sage, Arora is creating a suite of tools that enable an expert who has been vetted in the system to interact with users in a number of ways. The system starts by finding the content one has already shared on the web — on media sites, on Instagram or Pinterest, on YouTube, on blogs, in books, wherever — allowing that person to claim a profile and organize and present the content they’ve made. It enables them to create their own channels — e.g., best sushi — and their own lists within them — best sushi in L.A. — and their own reviews there. They can create content on Sage or link to content off Sage. They can interact with their users in Q&As and chats on Sage.
Because Arora is also reflexively social, he has built tools for each expert to invite in others in a highly moderated system. The creator can bring in contributors, who may create content, and curators, who may make lists. Sage is also going to offer closed networks among the experts themselves so they will have a quality environment for private discussions, a Slack for experts. So creators can interact with the people they invite in, with a larger circle of experts, with the public in a closed system for members and subscribers, or with the public at large.
And because the real challenge here is to support creativity and expertise, Sage is building a number of monetization models. You can link to your content elsewhere on the web with whatever business models are in force there. You can offer content for free. You can set up a subscription model with previews (the meter) and one-time purchase (an article or a book). You can sell access to an event: an online Q&A, an individual consulting session, an in-person appearance, and so on. And you can sell physical products — a cookbook, a box of hot sauces — directly or via affiliate arrangements. Plus you can accept donations at various tiers as a kind of purchase. Note that Sage will begin without advertising.
I hope this platform could become a place where newly independent journalists covering certain topics can build an online presence and make money from it.
All this is mobile-first. Experts can build content within Sage, on the open web, and — if they have sufficient followers — in their own apps. The current iteration is built for iOS with Android and web sites in development. (Since I live la vida Google in Android, I haven’t been able to dig into it as much as I’d like.)
Users will discover content on Sage via search on topics or by links to experts’ channels there.
After starting with restaurants and travel, Sage is expanding into culture — reviews of books and movies. Next comes lifestyle, which can include health. News, I fear, will be harder.
So what is expertise? The answer in old media and legacy institutions was whatever they decided and whomever they hired. In a more open, networked world, there will be many answers, yours and mine: I will rely on one person’s taste in restaurants, you another. The problem with this — as, indeed, we see in news and political views today — is that this extreme relativity leads to epistemological warfare, especially when institutions old (journalism) and new (platforms) are so allergic to making judgments and everyone’s opinions are considered equal. I am not looking for gatekeepers to return to decide once and for all. Neither do I want a world in which we are all our own experts in everything and thus nothing. Someone will have to draw lines somewhere separating the knowledgeable from the ignorant, evidence from fiction, experience and education from imagination and uninformed opinion.
Will Sage be any good at this task? We can’t know until it starts and we judge its judgments. But Arora gave me one anecdote as a preview: About 18 months ago, he said, Sage’s systems sent up an alert about a sudden decline in the quality and consistency of reviews from a well-known travel brand. Staff investigated and, sure enough, they found that the brand had fired all its critics and relied on user-driven reviews from an online supplier.
This is not to say that users cannot be experts. As Dan Gillmor famously said in the halcyon early days of blogs and online interaction: “My readers know more than I do.” In aggregate and in specific cases, he’s right. I will take that attitude over that of an anonymous journalist quoted in a paper I just read about foundations requiring the newsrooms they now help support to engage with the public:
The people are not as knowing about a story as I am. They haven’t researched the topic. They haven’t talked to a lot of people outside of social circles. I read legal briefs or other places’ journalism. I don’t think people do that. It can become infuriating when my bosses or Columbia Journalism Review or Jeff Jarvis tells me I’m missing an opportunity by not letting people tell me what to do. I get the idea, you know, but most people are ignorant or can’t be expected to know as much as I do. It’s not their job to look into something. They aren’t journalists.
No. Expertise will be collaborative and additive. That is the lesson we in journalism never learned from academia and science. Reporters are too much in the habit of anointing the one expert they find to issue the final word on whatever subject they’re covering: “Wine will kill us!” says the expert. “Wine will save us!” says the expert. As opposed to: “Here’s what we know and don’t know about wine and health and here is the evidence these experts use to test their hypotheses.”
What we will need in the next phase of the net is the means to help us find people with the qualifications, experience, education, and means to make judgments and answer questions, giving us the evidence they use to reach their conclusions so we can judge them. Artificial intelligence will not do this; neither will it replace expertise. The question is whether it can help us sift through the magnificent abundance of voice and view to which we now have access to help us decide who knows what the fuck they’re talking about.
[This is not prediction about tomorrow; it is extrapolation from today -Ed.]
This paper explores the victory of technological dystopians over technologists in regulation, legislation, courts, media, business, and culture across the United States, Europe, and other nations in the latter years of what is now known as the Trump Time.
The key moment for the dystos came a decade ago, with what Wired.com dubbed the Unholy Compromise of 2022, which found Trumpist conservatives and Warrenite liberals joining forces to attack internet companies. Each had their own motives — the Trumpists complaining about alleged discrimination against them when what they said online was classified as hate speech; the liberals inveighing against data and large corporations. It is notable that in the sixteen years of Trump Time, virtually nothing else was accomplished legislatively — not regarding climate, health care, or guns — other than passing anti-tech, anti-net, and anti-data laws.
In the aftermath, the most successful internet companies — Alphabet/Google, Facebook, Amazon — were broken up by regulators (but interestingly Comcast, Verizon, AT&T, Microsoft, Twitter, and the news behemoth Fox-Gatehouse-Sinclair were not). Collection and use of data by commercial entities as well as by academic researchers was severely curtailed by new laws. Moderation requirements and consequent liability for copyright violations, hate, falsehood, unauthorized memories, and other forbidden speech were imposed on social-media companies, and then, via court rulings, on media and news organizations as well as individuals online. New speech courts were established in the U.S., the European Union, and the former United Kingdom countries to adjudicate disputes of falsehood and hate as well as information ownership, excessive expression, and political neutrality by net companies. Cabinet-level technology regulators in the U.S., the E.U., Canada, and Australia established mechanisms to audit algorithms, supported by software taxes as well as fines against technology companies, executives, and individual programmers. Certain technologies — most notably facial recognition — were outright outlawed. And in many American states, new curricula were mandated to educate middle- and high-school students about the dangers of technology.
The impact of all this has been, in my opinion, a multitude of unintended consequences. The eight companies resulting from the big net breakups are all still profitable and leading their now-restricted sectors with commanding market shares, and many have quietly expanded into new fields as new technologies have developed. Their aggregate market value has increased manyfold and no serious challengers have emerged.
Academic studies of divisiveness, hate, and harassment — though limited in their scope by data laws — have shown no improvement and most have found a steady decline in online decency and respect, especially as trolls and sowers of discord and disinformation took to nesting in the smaller, off-shore platforms that regularly sprout up from Russia, lately China, and other nations unknown. Other studies have found that with the resurrection of media gatekeepers in a more controlled ecosystem of expression, minority voices are heard less often in mainstream media than before the Compromise.
Even though news and media companies and their lobbyists largely won political battles by cashing in their political capital to gain protectionist legislation, these legacy companies have nonetheless continued and accelerated their precipitous declines into bankruptcy and dissolution, with almost half of legacy news organizations ceasing operation in the last decade even as legislatively blessed media consolidation continues.
I would not go so far as to declare that we have reached the dystopia of the dystopians, though some would. In his final book, The Last Optimist, Jeff Jarvis wrote:
Far too early in the life of the internet and its possibilities, the dystos have exhibited the hubris of the self-declared futurist to believe they could foretell everything that could go wrong — and little that could go right — with internet and data technologies. Thus in their moral panic they prematurely defined and limited these technologies and cut off unforeseen opportunities. We now live in their age of fear: fear of technology, fear of data (that is, information and knowledge), fear of each other, fear of the future.
There are many reasons to be angry with the technology companies of the early internet. They were wealthy, hubristic, optimistic, expansionist, and isolated, thus deaf to the concerns — legitimate and not — of the public, media, and government. They were politically naive, not understanding how and why the institutions the net challenged — journalism, media, finance, politics, government, even nations — would use their collaborative clout and political capital to fight back and restrain the net at every opportunity. They bear but also share responsibility for the state of the net and society today with those very institutions.
Doctrines of dystos
An examination of the legislation and precedents that came before and after the Compromise reveals certain recurring beliefs, themes, and doctrines:
The Doctrine of Dangerous Data: It would be simplistic to attribute a societal shift against “data” solely to Facebook’s Cambridge Analytica scandal of 2016, but that certainly appears to have been a key moment triggering the legislative landslide that followed. Regulation of data shifted from its use to its collection as laws were enacted to limit the generation, gathering, storage, and analysis of information associated with the internet. Scores of laws now require that data be used only for the single purpose stated at collection, and others impose strict limits on the life of data, mandating its expiration and erasure. Academics and medical researchers — as well as some journalists — have protested such legislation, contending that it all but kills their ability to find correlation and causation in their fields, but they have failed to overturn a single law. Note well that similar data collection offline — by stores through loyalty cards, banks through credit cards, and so on — has seen no increase in regulation; marketers and publishers still make use of mountains of offline data in their businesses.
News companies and their trade associations demonized the use of data by their competitors, the platforms. In our Geoffrey Nunberg reading, “Farewell to the Information Age,” he quotes Philip Agre saying that “the term ‘information’ rarely evokes the deep and troubling questions of epistemology that are usually associated with terms like ‘knowledge’ and ‘belief.’ One can be a skeptic about knowledge but not about information. Information, in short, is a strikingly bland substance.” “Data,” on the other hand, became a dystopian scare word thanks to campaigns led by news and media companies and their trade associations and lobbyists, using their own outlets.
The Debate over Preeminent Data Ownership: In the 2020 American elections and every one following, countless politicians have vowed to protect consumers’ ownership of “their data” — and passed many laws as a result — but courts have still not managed to arrive at a consistent view of what owning one’s data means. Data generation is so often transactional — that is, involving multiple parties — that it has proven difficult to find a Solomonic compromise in deciding who has preeminent rights over a given piece of data and thus the legal right to demand its erasure. In California v. Amazon Stores, Inc. — which arose from a customer’s embarrassment about purchases of lubricating gels — the Supreme Court decided, in an expansion of its long-held Doctrine of Corporate Personhood, that a company has equal rights and cannot be forced to forget its own transactions. In Massachusetts v. Amazon Web Services, Inc., an appellate panel ruled that AWS could be forced to notify individuals included in databases it hosted and in one case could be forced to erase entire databases upon demand by aggrieved individuals. Despite friend-of-the-court filings by librarians, educators, and civil libertarians, claims of a countervailing right to know or remember by parties to transactions — or by the public itself — have failed to dislodge the preeminence of the right to be forgotten.
Privacy Über Alles: Privacy legislation — namely Europe’s General Data Protection Regulation (GDPR) — led the way for all net legislation to follow. Every effort to track any activity by people — whether by cameras or cookies — was defined as “surveillance” and was banned under a raft of net laws worldwide. In every single case, though, these technologies were reserved for government use. Thus “surveillance” lost its commercial meaning and regained its more focused definition as an activity of governments, which continue to track citizens. Separate legislation in some nations granted people the expectation of privacy in public, which led to severe restrictions on photography by not only journalists but also civilians, requiring that the faces of every unknown person in a photo or video who has not given written consent — such as a birthday party in a restaurant — be blurred.
The Doctrine of Could Happen: A pioneering 2024 German law that has been — to use our grandparents’ term — xeroxed by the European Commission and then the United States, Canada, Australia, and India requires that companies must file Technology Impact Reports (TIRs) for any new technology patent, algorithm, or device introduced to the market. To recap, the TIR laws give limited liability protection for any possible impact that has been revealed before the introduction of a technology; if a possible outcome is not anticipated and listed and then occurs, there is no limit to liability. Thus an entirely new industry — complete with conventions, consultants, and newsletters — has exploded to help any and every company using technology to imagine and disclose everything that could go wrong with any software or device. There is no comparable industry of consultants ready to imagine everything that could go right, for the law does not require or even suggest that as a means to balance decisions.
Laws of Forbidden Technologies: As an outcome of the Doctrine of Could Happen, some entire technologies — most notably facial recognition and bodily tracking of individuals’ movements in public places, such as malls — have been outright banned from commercial, consumer, or (except with severe restrictions) academic use in Germany, France, Canada, and some American states. In every case, the right to use such technologies is reserved to government, leading to fears of misuse by those with greater power to misuse them. There are also statutes banning and providing penalties for algorithms that discriminate on various bases, though in a number of cases, courts are struggling to define precisely what statutory discrimination is (against race, certainly, but also against opinion and ideology?). Similarly, statutes requiring algorithmic transparency are confounding courts, which have proven incapable of understanding formulae and code. Not only technologies are subject to these laws but so are the technologists who create them. English duty-of-care online harms laws (which were not preserved in Scotland, Wales, and Northern Ireland after the post-Brexit dissolution of the United Kingdom) place substantial personal liability and career-killing fines on not only internet company executives but also on technologists, including software engineers.
The Law of China: The paradox is lost on no one that China and Russia now play host to the most vibrant online capitalism in the world, as companies in either country are not bound by Western laws, only by fealty to their governments. Thus, in the last decade, we have seen an accelerated reverse brain-drain of technologists and students to companies and universities in China, Russia, and other politically authoritarian but technologically inviting countries. Similarly, venture investment has fled England entirely, and the U.S. and E.U. in great measure. A 2018 paper by Kieron O’Hara and Wendy Hall posited the balkanization of the internet into four nets: the open net of Silicon Valley, the capitalist net of American business, the bourgeois and well-behaved net of the E.U., and the authoritarian net of China. The fear then was that China — as well as Iran, Brazil, Russia, and other nations that demanded national walls around their data — would balkanize the net. Instead, it was the West that balkanized the net with its restrictive laws. Today, China’s authoritarian (and, many would argue, truly dystopian) net — as well as Russia’s disinformation net — appear victorious as they are growing and the West’s net is, by all measures, shrinking.
The Law of Truth and Falsity: Beginning in France and Singapore, “fake news” laws were instituted to outlaw the telling and spreading of lies online. As these truth laws spread to other countries, online speakers and the platforms that carried their speech became liable for criminal and civil fines, under newly enhanced libel laws. Public internet courts in some nations — as well as Facebook’s Oversight Board, in essence a private internet court — were established to rule on disputes over content takedowns. The original idea was to bring decisions about content or speech — for example, violations of laws regarding copyright and hate speech — out into the open where they could be adjudicated with due process and where legal norms could be negotiated in public. It was not long before the remits of these courts were expanded to rule on truth and falsity in online claims. In nation after nation, a new breed of internet judges resisted this yoke but higher courts forced them to take on the task. In case law, the burden of proof has increasingly fallen on the speaker, for demonstrating falsity is, by definition, proving the negative. Thus, for all practical effect, when a complaint is filed, the speaker is presumed guilty until proven innocent — or truthful. Attempts to argue the unconstitutionality of this doctrine even in the United States proved futile once the internet was ruled to be a medium, subject to regulation like the medium of broadcast. Though broadcast itself (radio and television towers and signals using public “airwaves”) is now obsolete and gone, the regulatory regime that oversaw it in Europe — and that excepted it from the First Amendment in America — now carries over to the net.
Once internet courts were forced to rule on illegal speech and falsity, it was not a big step to also require them to rule on matters of political neutrality under laws requiring platforms to be symmetrical in content takedowns (no matter how asymmetrical disinformation and hate might be). And once that was done, the courts were expanded further to rule on such matters as data and information ownership, unauthorized sharing, and the developing field of excessive expression (below). In a few nations, especially those that are more authoritarian and lacking in irony, separate truth and hate courts have been established.
The Doctrine of Excessive Expression: In reading the assigned, archived Twitter threads, Medium posts, academic papers, and podcast transcripts from the late twenty-tens, we see the first stirrings of a then- (but no longer) controversial question: Is there too much speech? In 2018, one communications academic wrote a paper questioning the then-accepted idea that the best answer to bad speech is more speech, even arguing that so-called “fake news” and the since-debunked notion of the filter bubble (see the readings by Axel Bruns) put into play the sanctity of the First Amendment. At the same time, a well-respected professor asked whether the First Amendment was — this is his word — obsolete. As we have discussed in class, even to raise that question a generation before the internet and its backlash would have been unthinkable. Also in 2018, one academic wrote a book contending that Facebook’s goal of connecting the world (said founder Mark Zuckerberg at the time: “We believe the more people who have the power to express themselves, the more progress our society makes together”) was fundamentally flawed, even corrupt; what does that say about our expectations of democracy and inclusion, let alone freedom of speech? The following year, a prominent newspaper columnist essentially told fellow journalists to abandon Twitter because it was toxic — while others argued that in doing so, journalists would be turning their back on voices enabled and empowered by Twitter through the relatively recent invention of the hashtag.
None of these doctrines of the post-technology dysto age has been contested more vigorously than this, the Doctrine of Excessive Expression (also known as the Law of Over-Sharing). But the forces of free expression largely lost when the American Section 230 and the European E-Commerce Directive were each repealed, thus making intermediaries in the public conversation — platforms as well as publishers and anyone playing host to others’ creativity, comment, or conversation — liable for everything said on their domains. As a result, countless news sites shut down fora and comments, grateful for the excuse to get out of the business of moderation and interaction with the public. Platforms that depended on interactivity — chief among them Twitter and the various divisions of the former Facebook — at first hired tens of thousands of moderators and empowered algorithms to hide any questionable speech, but this proved ineffective as the chattering public in the West learned lessons from Chinese users and invented coded languages, references, and memes to still say what they wanted, even and especially if hateful. In response, the social platforms forced users to indemnify them against damages, which led not only to another new industry in speech insurance but also to the requirement that all users verify their identities. Prior to this, many believed that eliminating anonymity would all but eliminate trolling and hateful speech online. As we now know, they were wrong. Hate abounds. The combination of the doctrines of privacy, data ownership, and expression respects anonymity for the subjects of speech but not for the speakers, who are at risk for any uttered and outlawed thought. “Be careful what you say” is the watchword of every media literacy course taught today.
One result of the drive against unfettered freedom of expression has been the return of the power of the gatekeeper, long wished for by the gatekeepers themselves — newspaper, magazine, and book editors as well as authors of old, who believed their authority would be reestablished and welcomed. But the effect was not what they’d imagined. Resentment against these gatekeepers by those who once again found themselves outside the gates of media only increased as trust in media continued to plummet and, as I said previously, the business prospects of news and other legacy media only darkened yet further.
The impact of the dystos’ victory can be seen in almost every sector of society.
In business, smaller is now better as companies worry about becoming “too big” (never knowing the definition of “too”) and being broken up. As a result, the merger and acquisition market, especially in tech, has diminished severely. With fewer opportunities for exit, there is less appetite for investment in new ventures, at least in America and Europe. In what is being called the data dark ages in business, executives in many fields — especially in marketing — are driving blind, making advertising, product, and strategic decisions without the copious data they once had, which many blame for the falling value of much of the consumer sector of the economy. After a decade and a half of trade and border wars of the Donald/Ivanka Trump Time [Hey, I said it’s a dystopia -Ed.], it would be simplistic to blame history’s longest recession on a lack of data, but it certainly was a contributing factor to the state of the stock market. Business schools have widely abandoned teaching “change management” and are shifting to teaching “stability management.” One sector of business known in the past for rolling with punches and finding opportunity in adversity — pornography — has hit a historic slump thanks to data and privacy laws. One might have expected an era of privacy to be a boon for porn, but identity and adult verification laws have put a chill on demand. Other businesses to suffer are those offering consumers analysis of their and even their pets’ DNA and help with genealogy (in some nations, courts have held that the dead have a right to privacy and others have ruled in favor of pets’ privacy). But as is always the case in business, what is a loss for one industry is an opportunity for another to exploit; witness the explosion not only in Technology Impact Report Optimization but also in a new growth industry for fact-certifiers, speech insurers, and photo blurrers.
In culture, the dystos long since won the day. The streaming series Black Mirror has been credited by dystos and blamed by technos for teaching the public to expect doom with every technology. It is worth noting that in my Black Mirror Criticism class last semester, we were shown optimistic films about technology such as You’ve Got Mail and Tomorrowland to disbelieving hoots from students. We were told that many generations before, dystopian films such as Reefer Madness — meant to frighten youth about the perils of marijuana — inspired similar derision by the younger generation, just as it still would today. It is fascinating to see how optimism and pessimism can, by turns, be taken seriously or mocked in different times.
I also believe we have seen the resurgence of narrative over data in media. In the early days of machine learning and artificial intelligence — before they, along with the data that fed them, also became scare words — it was becoming clear that dependence on story and narrative and the theory of mind were being superseded by the power of data to predict human actions. But when data-based artificial intelligence and machine learning predicted human actions, they provided no explanation, no motive, no assuring arc of a story. This led, some argued, to a crisis of cognition, a fear that humans would be robbed of purpose by data, just as the activities of the universe were robbed of divine purpose by Newtonian science and the divine will of creation was foiled by Darwin and evolution. So it was that a cultural version of the regulatory Unholy Compromise developed between religious conservatives, who feared that data would deny God His will, and cultural liberals, who feared that data would deny them their own will. So in cultural products just as in news stories and political speeches, data and its fruits came to be portrayed as objects of fear and resentment and the uplifting story of latter-day, triumphal humanism rose again. This has delighted the storytellers of journalism, fiction, drama, and comedy, who feed on a market for attention. Again, it has done little to reverse the business impact of abundance on their industries. Even with YouTube gone, there is still more than enough competition in media to drive prices toward zero.
The news industry, as I’ve alluded to above, deserves much credit or blame for the dysto movement and its results, having lobbied against internet companies and their collection of data and for protectionist legislation and regulation. But the story has not turned out in the industry’s favor as it had hoped. Cutting off the collection of data affected news companies also. Without the ability to use data to target advertising, that revenue stream imploded, forcing even the last holdouts in the industry to retreat behind paywalls. But without the ability to use data to personalize their services, news media returned to their mass-media, one-size-fits-all, bland roots, which has not been conducive to grabbing subscription market share in what turns out to be a very small market of people willing to pay for news or content overall. The one bright spot in the industry is the fact that the platforms are licensing content as their only way to deal with paywalls. Thus the news outlets that fought the platforms are dependent on the platforms for their most reliable source of revenue. Be careful what you wish for.
In education, the rush to require the teaching of a media-literacy curriculum at every level of schooling led to unforeseen consequences. Well, actually, the consequences were not unforeseen by internet researcher danah boyd, who argued in our readings from the 2000-teens that teachers and parents were succeeding all too well at instructing young people to distrust everything they heard and read. This and the universal distrust of others engendered by media and politicians in the same era were symptoms of what boyd called an epistemological war — that is: ‘If I don’t like you, I won’t like your facts.’ The elderly and retired journalists who spoke to our class still believe that facts alone, coming from trusted sources, will put an end to the nation’s internal wars. Back in the early days of the net, it seemed as if we were leaving an age overseen by gatekeepers controlling the scarcities of attention and information and returning to a pre-mass-media era built on the value of conversation and relationships. As Nunberg put it in 1996, just as the web was born: “One of the most pervasive features of these media is how closely they seem to reproduce the conditions of discourse of the late seventeenth and eighteenth centuries, when the sense of the public was mediated through a series of transitive personal relationships — the friends of one’s friends, and so on — and anchored in the immediate connections of clubs, coffee-houses, salons, and the rest.” Society looked as if it would trade trust in institutions for trust in family, friends, and neighbors via the net. Instead, we came to distrust everyone, as we were taught to. Now we have neither healthy institutions nor the means to connect with people in healthy relationships. The dystos are, indeed, victorious.
[If I may be permitted an unorthodox personal note in a paper: Professor, I am grateful that you had the courage to share optimistic as well as pessimistic readings with us and gave us the respect and trust to decide for ourselves. I am sorry this proved to be controversial, and I am devastated that you lost your quest for tenure. In any case, thank you for a challenging class. I wish you luck in your new career in the TIRO industry.]
Another video from the World Editors Forum in South Africa, this one about producing multiple newspapers from a single newsroom. The editor of Germany’s Die Welt, Andrea Seibel, makes a strong statement at the end:
One of the most important, if not bitter discoveries for journalists over the past decade has been that they do not operate in the ivory tower of intellectual pursuit but are subject to the laws and mechanisms of the market. They simply have to open up and realize that the reader is their actual employer. Which means saying goodbye to the hallowed notion of journalism as a cultural good worthy of protection. You may smile at this statement, but that is exactly what still bedevils the thinking amongst many in Germany. In a recent essay, the respected philosopher Jürgen Habermas actually stipulated that quality journalism has to be shielded from the brutality of the marketplace. We beg to differ. Protectionism is not the answer.
ALSO: I’m delighted to see video coverage of WEF here, here, and here.