[Disclosure: I raised money for my school from Facebook to aggregate signals of quality in news. I also have attended events convened by Google. I am independent of and receive no compensation personally from any technology company.]
Too many momentous decisions about the future of the internet and its regulation — as well as media coverage of it — are being made on the basis of assumptions, fears, theories, myths, mere metaphors, isolated incidents, and hidden self-interest, not evidence. The discussion about the internet and its future should begin with questions and research and end with demonstrable facts, not with presumption or with what I fear most in media: moral panic. I beg journalists to adopt academics’ discipline of evidence over anecdote.
But first, let me praise an example of the kind of analysis we need. Axel Bruns, a professor at Queensland University of Technology, just presented an excellent paper at the International Association for Media and Communication Research conference in Madrid, sticking a pin in the idea of the filter bubble. He argues:
that echo chambers and filter bubbles principally constitute an unfounded moral panic that presents a convenient technological scapegoat (search and social platforms and their affordances and algorithms) for a much more critical problem: growing social and political polarisation. But this is a problem that has fundamentally social and societal causes, and therefore cannot be solved by technological means alone. [My emphasis]
Based on his reading of available research, Bruns notes that these two metaphors — echo chamber and filter bubble — are not consistently defined, “making them moving targets in both public discourse and scholarly inquiry,” which also makes it impossible to “assess more systematically exactly how disconnected the denizens of such suspected echo chambers and filter bubbles really are.” In his upcoming book, Are Filter Bubbles Real?, Bruns will examine definitions of both metaphors and methodologies for measurement of their alleged impact.
In his paper, Bruns provides perspective and context, pointing out that well before the net, “different groups in society have always already informed themselves from different sources that suited their specific informational interests, needs, or literacies.” He asks: “Given that society and democracy have persisted nonetheless, should we even worry about them?” In short, the burden is on those who propagate these notions to answer the question: “What is new here, and how different is it from before?”
Further, Bruns points out that we live in a “complex and interwoven media ecology” and so it is foolhardy to argue that one factor in it — just Facebook, for example — is the direct cause of behavioral change. Too many rants about the impact of the internet in media ignore the impact of media. Wonder why.
As an academic, Bruns reads existing literature in search of evidence of filter bubbles and echo chambers in prior research. He doesn’t find much at all. Instead, he cites (with links here and full citations in Bruns’ paper):
- Three separate studies found the opposite of what Eli Pariser reported in his book, The Filter Bubble: Google Search and News users are not presented with unique and isolating worldviews.
- Earlier studies of the bifurcated blog world 15 years ago uncovered “only mild echo chambers.”
- The Pew Research Center found that Facebook users do not select friends based on political leaning and thus are exposed to other worldviews in social media.
- Two studies looked at already divisive topics — abortion, vaccination, Obamacare, gun control — and found, of course, they were also divisive online, though non-political but debatable topics — Game of Thrones and food porn — did not lead to polarization online. Is divisiveness online the cause or the effect?
- “Social media users generally encounter a greater diversity of news sources than non-users do.”
- “Those users frequenting the most extremely partisan conservative sites in the United States have been found also to be more likely than ordinary internet users to visit the centrist New York Times.”
- “Exposure to highly partisan political information … does not come at the expense of contact with other viewpoints.”
- In sum, a half-dozen academics argue, “at present there is little empirical evidence that warrants any worries about filter bubbles.”
Yet in media, no end of stories still warn of filter bubbles. Though not all: some journalists are reporting on studies that question the filter bubble. Good. A new study comes out and sometimes it gets coverage. But that leads to another journalistic weakness in reporting academic studies: stories that take the latest word as the last word. Look at all the perennial, flip-flopping reports that wine will kill or save us. Journalists should do what academics do in their literature reviews: put the latest word in context. They should also do what, for example, Oxford’s Rasmus Kleis Nielsen does on Twitter, responding to assumptions with research findings.
Now that we have tools like Google Scholar — and many scholarly (if, unfortunately, costly) databases — I urge reporters and editors to do their own academic literature reviews when a story is pitched or assigned, to make sure its premise is upheld by research thus far, to provide context and nuance, and to grapple with what will surely appear: contradictory information.
But I urge them to begin — as Bruns ends his paper — with questions before answers.
The central question now is what [people] do with such information when they encounter it: do they dismiss it immediately as running counter to their own views? Do they engage in a critical reading, turning it into material to support their own worldview, perhaps as evidence for their own conspiracy theories? Do they respond by offering counter-arguments, by vocally and even violently disagreeing, by making ad hominem attacks, or by knowingly disseminating all-out lies as ‘alternative facts’? More important yet, why do they do so? What is it that has so entrenched and cemented their beliefs that they are no longer open to contestation? This is the debate we need to have: not a proxy argument about the impact of platforms and algorithms, but a meaningful discussion about the complex and compound causes of political and societal polarisation. The ‘echo chamber’ and ‘filter bubble’ metaphors have kept us from pursuing that debate, and must now be put to rest.
These easy metaphors carry ill-defined presumptions that do not inform debate. Neither do terms that media love to appropriate and escalate. “Surveillance capitalism” is an extreme name for advertising cookies, and the use of the word devalues the seriousness of actual surveillance by governments, including my own. See also this very good commentary from Andrew Przybylski and Amy Orben of the Oxford Internet Institute, arguing that internet use is by no means “addiction.”
The state of media coverage of technology and society sucks. It sucked before by being utopian. It sucks now by being dystopian. I tire of the Damascene conversions of both former technologists (having safely cashed out) and tech reporters who signal their virtue by distancing themselves from what they helped build or build up. I am disappointed that I never see media folk acknowledge their own conflicts of interest: competing with the technology companies they cover, and their employers’ attempts to cash in political capital for the sake of protectionism against the platforms. I worry about the impact of this technology coverage on the future and freedoms of the net. (What interventions are being legislated based on emotional and vague concepts like filter bubble, echo chamber, surveillance, and addiction?) I worry, too, as Bruns does, that we are missing the real problem and real story: the roots of anger and polarization in society today. (It ain’t Twitter and you know it; start by examining racism.) I am angry to see journalists condescend to the public they serve, treating people as gullible fools who can be corrupted by a mere meme. I am even angrier to see journalists abandon social media and with it all the new voices who were never heard in mass media but now can speak. And I’m sad to see such simplistic, lazy, and poor-quality coverage from my field.
Yes, of course, the technology companies have garnered power and wealth that merit close scrutiny. Yes, those companies fuck up, and so I, too, am looking for useful regulatory regimes. But our coverage of society’s problems today should not begin and end on El Camino Real. We too often cover the effect over the cause.
I wish both media and policymakers would follow the example of academics like Bruns (I use him just as an example; there are so many more). Begin with questions. Study the research that exists. Use data. Call for more research. Before making technology companies responsible for every modern ill — the definition of moral panic — make them instead responsible for sharing data to feed that research. And let that research and reporting concentrate not on technology and its impact on people, a framing that too often gives people too little credit and agency, but on how people are using the technology to have an impact on each other. Start by respecting those people and learning from them before condemning and dismissing them. Through fits and starts and missteps and mistakes — sometimes with, sometimes in spite of the companies involved — we the users are building a new society on the net. Watch, listen, and learn before criticizing, dismissing, and condemning. If it sounds like I want journalism to learn from anthropology, I do. More on that soon.