Artificial general bullshit

I began writing this as a report from a useful conference on AI that I just attended, where experts and representatives of concerned sectors of society had serious discussion about the risks, benefits, and governance of the technology.

But, of course, I first must deal with the ludicrous news playing out now at leading AI generator, OpenAI. So let me begin by saying that in my view, the company is pure bullshit. Sam Altman’s contention that they are building “artificial general intelligence” or “artificial superintelligence”: Bullshit. Board members’ cult of effective altruism and AI doomerism: Bullshit. The output of ChatGPT: Bullshit. It’s all hallucinations: Pure bullshit. I even fear that the discussion of AI safety in relation to OpenAI could be bullshit. 

This is not to say that AI and its capabilities, as practiced there and elsewhere, are not to be taken seriously, even with wonder. And we should take seriously discussion of AI's impact and safety, its speed of development and adoption, and its governance. 

These topics were on the agenda of the AI conference I attended at the San Francisco outpost of the World Economic Forum (Davos). Snipe if you will at this fraternity of the rich and powerful, but this is one thing the Forum does consistently well: convene multistakeholder conversations about important topics, because people accept its invitations. At this meeting, there were representatives of technology companies, governments, and the academy. I sat next to an honest-to-God philosopher who is leading a program in ethical AI. At last. 

I knew I was in the right place when I heard AGI brought up and quickly dismissed. Artificial general intelligence is the purported goal of OpenAI and other boys in the AI fraternity: that they are so smart they can build a machine that is smarter than all of us, even them — a machine so powerful it could destroy humankind unless we listen to its creators. I call bullshit. 

In the public portion of the conference, panel moderator Ian Bremmer said he had no interest in discussing AGI. I smiled. Andrew Ng, cofounder of Google Brain and Coursera, said he finds claims of imminent AGI doom “vague and fluffy…. I can’t prove that AI won’t wipe us out any more than I could prove that radio waves won’t attract aliens that would wipe us out.” Gary Marcus — a welcome voice of sanity in discourse about AI — talked of trying to get Elon Musk to back his prediction that AGI will arrive by 2029 with a $100,000 bet. What exactly Musk means by that is no clearer than anything else he says. Keep in mind that Musk has also said that by now cars would drive themselves and Twitter would be successful and he would soon (not soon enough) be on his way to Mars. One participant not only doubted the arrival of AGI but also suggested that large language models might prove to be a parlor trick.

With that BS out of the way, this turned out to be a practical meeting, intended to bring various perspectives together to begin to formulate frameworks for discussion of responsible use of AI. The first results will be published from the mountaintop in January. 

I joined a breakout session that had its own breakouts (life is breakouts all the way down). The circle I sat in was charged with outlining benefits and risks of generative AI. Their first order of business was to question the assignment and insist on addressing AI as a whole. The group emphasized that neither benefits nor risks are universal, as each will fall unevenly on different populations: individuals, organizations (companies to universities), communities, sectors, and society. They did agree on a framework for that impact, asserting that for some, AI could:

  • raise the floor (allowing people to engage in new skills and tasks to which they might not have had access — e.g., coding computers or creating illustrations);
  • scale (that is, enabling people and organizations to take on certain tasks much more efficiently); and
  • raise the ceiling (performing tasks — such as analyzing protein folding — that heretofore were not attainable by humans alone). 

On the negative side, the group said AI would:

  • bring economic hardship; 
  • enable evil at scale (from exploding disinformation to inventing new diseases); and
  • for some, result in a loss of purpose or identity (see the programmer who laments in The New Yorker that “bodies of knowledge and skills that have traditionally taken lifetimes to master are being swallowed at a gulp. Coding has always felt to me like an endlessly deep and rich domain. Now I find myself wanting to write a eulogy for it”).

This is not to say that the effects of AI will fit neatly into such a grid, for what is wondrous for one can be dreadful for another. But this gives us a way to begin to define responsible deployment. While we were debating in our circle, other groups at the meeting tackled questions of technology and governance. 

There has been a slew of guidelines for responsible AI — most recently the White House issued its executive order, and tech companies, eager to play a game of regulatory catch, are writing their own. Here are Google’s, these are Microsoft’s, and Meta has its own pillars. OpenAI has a charter built on its hubristic presumption that it is building AGI. Anthropic is crowdsourcing a “constitution” for AI, filled with vague generalities about AI characterized as “reliable,” “honest,” “truth,” “good,” and “fair.” (I challenge either an algorithm or a court to define and enforce those terms.) Meanwhile, the EU, hoping to lead in regulation if not technology, is writing its AI Act.

Rather than principles or statutes chiseled permanently on tablets, I say we need ongoing discussion to react to rapid development and changing impact; to consider unintended consequences (of both the technology and regulation of it); and to make use of what I hope will be copious research. That is what WEF’s AI Governance Alliance says it will do. 

As I argue in The Gutenberg Parenthesis regarding the internet — and print — the full effect of a new technology can take generations to be realized. The timetable that matters is not so much invention and development but adaptation. As I will argue in my next book, The Web We Weave: Why We Must Reclaim the Internet from Moguls, Misanthropes, and Moral Panic (out from Basic Books next year), this debate must occur less in the context of technology than of humanity, which is why the humanities and social sciences must be in the circle.

At the meeting, there was much discussion about where we are in the timeline of AI’s gestation. Most agreed that there is no distinction between generative AI and AI. Generative AI looks different — momentous, even — to those of us not deeply engaged in the technology because now, suddenly, the program speaks — and, more importantly, can compute — our language. Code was a language; now language is code. Some said that AI is progressing from its beginning, with predictive capabilities, to its current generative abilities, and next will come autonomous agents — like those in the GPT store Altman announced only a week before. Before allowing AI agents to go off on their own, we must trust them. 

That leads to the question of safety. One participant at WEF quoted Altman in a recent interview, saying that the company’s mission is to figure out how to make AGI, then figure out how to make it safe, and then figure out its benefits. This, the participant said, is the wrong order. What we need is not to make AI safe but to make safe AI. There was much talk about “shifting left” — not a political manifesto but instead a promise to move safety, transparency, and ethics to the start of the development process, rather than coming to them as afterthoughts. I, too, will salute that flag, but….

I have come to believe there is no sure way to guarantee safety with the use of this new technology — as became all too clear to princes and popes at the birth of print. “What is safe enough?” asked one participant. “You give me a model that can do anything, I can’t answer your question.” We talk of requiring AI companies to build in guardrails. But it is impossible for any designer, no matter how smart, to anticipate every nefarious use that every malign actor could invent, let alone every unintended consequence that could arise. 

That doesn’t mean we should not try to build safety into the technology. Nor does it mean that we should not use the technology. It just means that we must be realistic in our expectations, not about the technology but about our fellow humans. Have we not learned by now that some people will always find new ways to do bad things? It is their behavior more than technology that laws regulate. As another participant said, a machine that is trained to imitate human linguistic behavior is fundamentally unsafe. See: print. 

So do we hold the toolmaker responsible for what users have it do? I know, this is the endless argument we have about whether guns (and cars and chemicals and nukes) kill people or the people who wield them do. Laws are about fixing responsibility, thus liability. This is the same discussion we are having about Section 230: whom do we blame for “harmful speech” — those who say it, those who carry it, those who believe it? Should we hold the makers of the AI models themselves responsible for everything anyone does with them, as is being discussed in Europe? That is unrealistic. Should we instead hold users to account — like the schmuck lawyer who used ChatGPT to write his brief — when they might not know that the technology or its makers are lying to them? That could be unfair. There was much discussion at this meeting about regulating not the technology itself but its applications.

The most contentious issue at the event was whether large language models should be open-sourced. Ng said he can’t believe that he is having to work so hard to convince governments not to outlaw open source — as is also being bandied about in the EU. A good number of people in the room — I include myself among them — believe AI models must be open to provide competition to the big companies like OpenAI, Microsoft, and Google, which now control the technology; access to the technology for researchers and countries that otherwise could not afford to use it; and a transparent means to audit compliance with regulations and safety. Others fear that bad actors will take open-source models, such as Meta’s LLaMA, and detour around guardrails. But see the prior discussion about the ultimate effectiveness of such guardrails. 

I hope that not only AI models but also data sets used for training will be open-sourced and held in public commons. (Note the work of MLCommons, which I learned about at the meeting.) In my remarks to another breakout group about information integrity, I said I worried about our larger knowledge ecosystem when books, newspapers, and art are locked up by copyright behind paywalls, leaving machines to learn only from the crap that is free. Garbage in; garbage multiplied. 

At the event’s opening reception high above San Francisco in Salesforce headquarters, I met an executive from Norway who told me that his nation wants to build large language models in the Norwegian language. That is made possible because — this being clever Norway — all its books and newspapers from the past are already digitized, so the models can learn from them. Are publishers objecting? I asked. He thought my question odd; why would they? Indeed, see this announcement from much-admired Norwegian news publisher Schibsted: “At the Nordic Media Days in Bergen in May, [Schibsted Chief Data & Technology Officer Sven Størmer Thaulow] invited all media companies in Norway to contribute content to the work of building a solid Norwegian language model as a local alternative to ChatGPT. The response was overwhelmingly positive.” I say we need a similar discussion in the anglophone world about our responsibility to the health of the information ecosystem — not to submit to the control and contribute to the wealth of AI giants but instead to create a commons of mutual benefit and control. 

At the closing of the WEF meeting, during a report-out from the breakout group working on governance (where there are breakout groups, there must be report-outs; it’s the law) one professor proposed that public education about AI is critical and media must play a role. I intervened (as we say in circles) and said that first journalists must be educated about AI because too much of their coverage amounts to moral panic (as in their prior panics about the telegraph, talkies, radio, TV, and video games). And too damned often, journalists quote the same voices — namely, the same boys who are making AI — instead of the scholars who study AI. The issue of The New Yorker I referenced above has yet another interview with former Google computer scientist Geoffrey Hinton, who has already been on 60 Minutes and everywhere. 

Where are the authors of the Stochastic Parrots paper, former Google AI safety chiefs Timnit Gebru and Margaret Mitchell, along with linguists Emily Bender and Angelina McMillan-Major? Where are the women and scholars of color who have been warning of the present-tense costs and risks of AI, instead of the future-shock doomsaying of the AI boys? Where is Émile Torres, who studies the faux philosophies that guide AI’s proponents and doomsayers, which Torres and Gebru group under the acronym TESCREAL? (See the video below.)

The problem is that the press and policymakers alike are heeding the voices of the AI boys who are proponents of these philosophies instead of the scholars who hold them to account. The afore-fired Sam Altman gets invited to Congress. When UK PM Rishi Sunak held his AI summit, whom did he invite on stage but Elon Musk, the worst of them. Whom did Sunak appoint to his AI task force but another adherent of these philosophies. 

To learn more about TESCREAL, watch this conversation with Torres that Jason Howell and I had on our podcast, AI Inside, so we can separate the bullshit from the necessary discussion. This is why we need more meetings like the one WEF held, with stakeholders besides AI’s present proponents so we might debate the issues, the risks — and the benefits — they could bring. 

Gibberish from the machine

I’m honored that Germany’s Stern asked me to write about AI and journalism for a 75th anniversary edition. Here’s a version prior to final editing and trimming for print and translation. And I learned a new word: Kauderwelsch (the variety of Romansch spoken in the Swiss town of Chur (Kauder) in canton Graubünden) means gibberish. 

We have Gutenberg to blame. It is because of his invention, print, that society came to think of public discourse, creativity, and news as “content,” a commodity to fill the products we call publications or lately websites. Journalists believe that their value resides primarily in making content. To fill the internet’s insatiable maw, reporters at some online sites are given content quotas, and their news organizations no longer appoint editors-in-chief but instead “chief content officers.” For the record, Stern still has actual editors, many of them.

And now here comes a machine — generative artificial intelligence or large language models (LLMs), such as ChatGPT — that can create no end of content: text that sounds just like us because it has been trained on all our words. An LLM maps the trillions of relationships among billions of words, turning them and their connections into numbers a computer can calculate. LLMs have no understanding of the words, no conception of truth. They are programmed only to predict the next most likely word to occur in a sentence.
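That prediction machinery can be sketched in miniature. The toy below counts which word follows which in a training text, then emits the statistically likeliest next word. Real LLMs learn billions of parameters over token relationships rather than keeping raw counts, but the underlying move is the same: predict the next word, with no notion of truth. (An illustrative sketch, not anyone's production code.)

```python
from collections import Counter, defaultdict

def train_bigrams(text):
    """Count, for each word, which words follow it and how often."""
    words = text.lower().split()
    follows = defaultdict(Counter)
    for cur, nxt in zip(words, words[1:]):
        follows[cur][nxt] += 1
    return follows

def predict_next(follows, word):
    """Return the statistically likeliest next word. No understanding, no truth."""
    if word not in follows:
        return None
    return follows[word].most_common(1)[0][0]

model = train_bigrams("the cat sat on the mat and the cat ran")
print(predict_next(model, "the"))  # "cat" -- it follows "the" most often in the training text
```

The sketch "hallucinates" by design: ask it to continue any sentence and it will oblige with whatever word co-occurred most often, whether or not the result is true.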

A New York lawyer named Steven Schwartz had to learn his lesson about ChatGPT’s factual fallibility the hard way. In a now-infamous case, attorney Schwartz asked ChatGPT for precedents in a lawsuit involving an errant airline snack cart and his client’s allegedly injured knee. Schwartz needed to find cases relating to highly technical issues of international treaties and bankruptcy. ChatGPT dutifully delivered more than a half-dozen citations.

As soon as Schwartz’s firm filed the resulting legal brief in federal court, opposing counsel said they could not find the cases, and the judge, P. Kevin Castel, directed the lawyers to produce them. Schwartz returned to ChatGPT. The machine is programmed to tell us what we want to hear, so when Schwartz asked whether the cases were real, ChatGPT said they were. Schwartz then asked ChatGPT to show him the complete cases; it did, and he sent them to the court. The judge called them “gibberish” and ordered Schwartz and his colleagues into court to explain why they should not be sanctioned. I was there, along with many more journalists, to witness the humbling of the attorneys at the hands of technology and the media.

“The world now knows about the dangers of ChatGPT,” the lawyers’ lawyer told the judge. “The court has done its job warning the public of these risks.” Judge Castel interrupted: “I did not set out to do that.” The problem here was not with the technology but with the lawyers who used it, who failed to heed warnings about the dubious citations, who failed to use other tools — even Google — to verify them, and who failed to serve their clients. The lawyers’ lawyer said Schwartz “was playing with live ammo. He didn’t know because technology lied to him.”

But ChatGPT did not lie because, again, it has no conception of truth. Nor did it “hallucinate,” in the description of its creators. It simply predicted strings of words, which sounded right but were not. The judge fined the lawyers $5,000 each and acknowledged that they had suffered humiliation enough in news coverage of their predicament.

Herein lies a cautionary tale for news organizations that are rushing to have large language models write stories — because they want to be cool and trendy, or save work, or perhaps to eliminate jobs, and manufacture ever more content. The news companies CNET and G/O Media have gotten into hot water for using AI to produce content that turned out to be less than factual. America’s largest newspaper chain, Gannett, just turned off artificial intelligence that was producing embarrassing sports stories that would call a football game “a close encounter of the athletic kind.” I have heard online editors plead that they are in a war to produce more and more content to attract more likes and clicks so they may earn more digital advertising pennies. Their problem is that they think their mission is only to make content.

My advice to editors and publishers is to steer clear of large language models for writing the news, except in well-proven use cases, such as turning highly structured financial reports into basic news stories, which must be checked before release. I would give the same advice to Microsoft and Google about connecting LLMs with their search engines. Fact-free gibberish coming out of the machine could ruin the authority and credibility of both news and technology companies — and affect the reputation of artificial intelligence overall.

There are good uses for AI. I benefit from it every day in, for example, Google Translate, Maps, Assistant, and autocomplete. As for large language models, they could be useful to augment — not replace — journalists’ work. I recently tested a new Google tool called NotebookLM, which can take a folder filled with a journalist’s research and summarize it, organize it, and allow the writer to ask questions of it. LLMs could also be used in, for example, language education, where what matters is fluency, not facts. My international students use these programs to smooth out their English for school and work. I even believe LLMs could be used to extend literacy, to help people who are intimidated by writing to communicate more effectively and tell their own stories.

Ah, but therein lies the rub for writers, like me. We believe we are special, that we hold a skill — a talent for writing — that few others can boast. We are storytellers and wield the power to tell others’ tales, to decide what tales are told, who shall be heard in them, and how they will begin and neatly end. We think that gives us the ability to explain the world in what journalists like to call the first draft of history — the news.

Now writers and journalists see both the internet and AI as competition. The internet enables the silent mass of citizens who were not heard in media to at last have their say — and to create a lot of content. And by producing credible prose in seconds, AI devalues writing and robs writers of their special status.

This is one reason why I believe we see hostile coverage of technology in media these days. News organizations and their proprietors claim that Google, Facebook, et al steal away audience, attention, and advertising money (as if God granted publishers those assets in perpetuity). Journalists are engaged in their latest moral panic — another in a long line of panics over movies, television, comic books, rock lyrics, and video games. They warn about the dangers of the internet, social media, our phones, and now AI, claiming that these technologies will make us stupid, addict us, take away our jobs, and destroy democracy under a deluge of disinformation.

They should calm down. A 2020 study found that in the US no age group “spent more than an average of a minute a day engaging with fake news, nor did it occupy more than 0.2% of their overall media consumption.” The issue for democracy isn’t so much disinformation but the willingness — the eagerness — of some citizens to believe lies that stoke their own fears and hatreds. Journalism should be reporting on the roots of bigotry and extremism rather than simplistically blaming technology.

In my book, The Gutenberg Parenthesis, I track society’s entry into the age of print as we now leave it for the digital age that follows. Print’s development as an institution of authority took time. Not until fifty years after Gutenberg’s Bible, around 1500, did the book take the shape we know today, with titles, title pages, and page numbers. It took another century, a few years either side of 1600, before the technology and its technologists — printers — faded into the background, making way for tremendous innovation with print: the birth of the modern novel with Cervantes, the essay with Montaigne, and the newspaper. A business model for print did not arrive until one century more, in 1710, with the advent of copyright. Come the 1800s, the technology of print — which had hardly changed since Gutenberg — evolved at last with the arrival of steam-powered presses and typesetting machines, leading to the birth of mass media. The twentieth century brought print’s first competitors, radio and television. And here we are today, just over a quarter century past the introduction of the commercial web browser. This is to say that we are likely at just the beginning of a long transition into the digital age. It is only 1480 in Gutenberg years.

In the beginning, rumor was trusted more than print because any anonymous printer could produce a book or pamphlet — just as anyone today can make a web site or tweet. In 1470 — only fifteen years after Gutenberg’s Bible came off the press — Latin scholar Niccolò Perotti made what is said to be the first call for censorship of print. Offended by a bad translation of Pliny, he wrote to the Pope demanding that a censor be assigned to approve all text before it came off the press. As I thought about this, I realized Perotti was not seeking censorship. Instead, he was anticipating the establishment of the institutions of editing and publishing, which would assure quality and authority in print for centuries.

Like Perotti in his day, media and politicians today demand that something must be done about harmful content online. Governments — like editors and publishers — cannot cope with the scale of speech now, so they deputize platforms to police and censor all that is said online. It is an impossible task.

Journalists must be careful using AI to produce the news. At the same time, there is a danger in demonizing the technology. In the best case, the rise of AI might force journalists to examine their role in society, to ask how they improve public discourse. The internet provides them with many new ways to connect with communities, to build relationships of trust and authority with them, to listen to their needs, to discover and share voices too long not heard in the public sphere, to expand the work of journalism past publishing to the wider canvas of the internet.

Journalists think their content is what makes them valuable, and so publishers and their lawyers and lobbyists are threatening to sue AI companies, dreaming of huge payments for machines that read their content. That is no strategy for the future of journalism. Neither is Axel Springer’s plan to replace journalists in content factories with AI. That is not where the value of journalism lies. It lies with reporting on and serving communities. Like Niccolò Perotti, we should anticipate the creation of new services to help internet users cope with the abundance of content today, to verify the truth and falsity of what we see online, to assess authority, to discover more diverse voices, to nurture new talent, to recommend content that is worth our time and attention. Could such a service be the basis of a new journalism for the online, AI age?

A generation later: What have we learned?

The date sneaked up on me this year, attacking from behind. Every year on 9/11 I reflect, grateful that I survived the attack. This year, though, I find myself angry. Some of that might be my own loss: my father to COVID this year; my imminent unemployment.

But I am angry on this 22nd anniversary at what has befallen us since: at the authoritarianism that overtook this country and threatens the world, at racism and bigotry set loose, at the pandemic killing still, at my own field — journalism — failing to meet these challenges. 

A generation has passed since 9/11/01 and what have we learned? Authoritarians attacked us that day and now authoritarians attack from within. My failing field — journalism — elevates the evil as if it is merely another side in a spectator sport.

Since 9/11/01, our only popularly elected presidents succeeded in strengthening the nation. Under Biden, the economy & nation are strong. But journalism fails at informing the public and wants to make jet lag an election issue while normalizing the fascism in the house. WTF. 

It was on 9/11/01, on my way to work through the World Trade Center, that I decided it was time to leave my job. I would teach. Now I leave that role and I ask what I have accomplished. I pray my students will turn around journalism, for we, their elders, have failed. 

I am, of course, still grateful to have survived 9/11/01. The images and lessons of that day are seared into my soul and will never leave me; they define me. I regret that the spirit in the nation was perverted into war in Iraq. I worry about the state of politics everywhere. 

But on this day I will try to rise above my anger and remember the names of the souls lost and the faces of the selfless first responders I saw rushing toward danger and mercy. This is a day for memorial and gratitude to them.

The only suitable memorial to those lost on 9/11/01 is to recognize the evil that took them and for our institutions — government, politics, journalism, education — to protect present and future generations from further fascism.

Moving on

I have news: I am leaving CUNY’s Newmark Graduate School of Journalism at the end of this term. Technically I’m retiring, though if you know me you know I will never retire. I’m looking at some things to do next and I’m open to others. More on that later. Now, I want to recollect — brag — about my time there.

Eighteen years ago, in 2005, I was the first professor hired at our new school. The New York Times was dubious:

For some old-school journalists, blogging is the worst thing to hit the print medium since, well, journalism school. They may want to avert their eyes today, when Stephen B. Shepard, dean of the new Graduate School of Journalism at the City University of New York, is to name Jeff Jarvis director of the new-media program and associate professor.

On my first day on the job, after attending my first faculty meeting, I quit. I had suggested that faculty needed to learn the new tools of online and digital journalism and some of them jumped down my throat: How dare I tell them what to learn? This festered in me, as things do, and I emailed Steve Shepard and Associate Dean Judith Watson saying that we had made a mistake. I’d already quit my job as president of Advance.net. But, oh well. 

Steve emailed me asking WTF I was doing. That curriculum committee was a temporary body. They weren’t on the faculty of the school. I was. Over lunch, Steve and Judy salved my neuroses and said I could teach that entrepreneurial journalism thing the committee had killed. I stayed. 

Steve took a flier on me. It wasn’t just that I was a blogger and a neurotic but that I had only a bachelor’s degree. I’ve always said that I am a poseur in the academy, a fake academic. Nonetheless, I’ve had the privilege of starting three master’s degrees at the school. (Recently, visiting with actual academics at the University of St Andrews, I said I had started three degrees and they looked at me cock-eyed and asked why I hadn’t finished any of them.) 

With Steve, I took our entrepreneurial class and turned it into the nation’s first Advanced Certificate and M.A. in Entrepreneurial Journalism, to prepare journalists to be responsible stewards of our field. The program has been run brilliantly ever since by my colleague Jeremy Caplan, a most generous educator. It has evolved into an online program for independent journalists. 

I’m grateful that our next dean, Sarah Bartlett, also took a flier on involving me in her strategy for growth and we built much together. This week, I’m teaching the fourth cohort in our News Innovation and Leadership executive program. I’d long seen the need for such a degree, so news people would not be corrupted getting MBAs, and so our school, dedicated to diversity, would have an impact not just at the entry level in newsrooms but also on their management. I had to wait to recruit the one person who could build this program, Anita Zielina, and she has done a phenomenal job; she is the leaders’ leader. The program is in great hands with her successor, Niketa Patel. (And I plan to stick around to teach with them in this program after I leave.) 

My proudest accomplishment at the school and indeed in my career has been creating the Engagement Journalism degree in 2014, inspired when Sarah read what I’d written about building relationships with communities as the proper basis of journalism. She asked whether we taught that at the school. Not really, I said. How about a new degree? Cool, I said. We scribbled curricula on napkins. By the end of that week in California we had seed funding from Reid Hoffman, and by that fall we had students in class. I had the great good fortune of hiring, once again, the one person who could build the program, Dr. Carrie Brown, with whom I’ve had the privilege of teaching and learning ever since. She is a visionary in journalism. 

The program is, I’m sad to say, on pause right now. But after having just attended preconferences at AEJMC and ONA on Engagement Journalism, I am gratified to report that the movement is spreading widely. Each gathering was filled with journalists, educators, and community leaders dedicated to centering our work on communities, to building trust through listening and collaboration, to valuing the experience-as-expertise of the public over the tired doctrine of journalistic objectivity, and to repairing the damage journalism has done. I have told our Engagement students that they would be Trojan horses in newsrooms and they have been just that, getting important jobs and reimagining and rebuilding journalism from within.

I am proud of those graduates, as I am of those from the executive and Entrepreneurial programs. Since arriving at the school, I have said to each class that I am too old to change journalism. Instead, I would watch and try to help students take on that responsibility. It is wonderful to witness their success. Of course, there is much yet to do.

Lately, I have turned my attention to internet studies and the wider canvas on which journalism should work in our connected world. What interests me most is bringing the humanities into the discussion of this most human enterprise, which has for too long been dominated (as print was in its first half-century) by the technologists. This is work I hope to continue. 

I love starting things. In my career, I have had the honor of founding Entertainment Weekly at Time Inc., and lots of web sites at Advance. Here I had the great opportunity to help start a school. At the Tow-Knight Center, which I direct, we started communities of practice for new roles in newsrooms; two of these organizations have flown the nest to become independent and sustainable: the News Product Alliance and the Lenfest Institute’s Audience Community of Practice. I’m also proud to have had a small role in helping at the start of Montclair State’s Center for Cooperative Media, which is doing amazing work in Engagement under Stefanie Murray and our alum, Joe Amditis. Those are activities I expected from our Center.

What I had not imagined was that the Center would become an incubator for new degrees. That was made possible by funders. I also never thought that I’d be in the business of fundraising. But without funders’ support, none of these programs would have been born. 

Sarah Bartlett taught me much about raising money, because she’s so good at it. I haven’t heard her say it just this way, but from her I learned that fundraising is about friendship. I am grateful for the friendship of so many supporters of the school and of my work there. 

My friend Leonard Tow challenged Steve and me — with a $3 million challenge grant — when we said we wanted to start a center dedicated to exploring sustainability for news. Emily Tow, who heads the family’s foundation, took us under her wise wing and patiently taught us how to tell our story. It worked. Our friend Alberto Ibargüen, CEO of the Knight Foundation, asked Steve what would make his new school stand apart. Steve said entrepreneurial journalism. Alberto matched the Tows’ grant and the Tow-Knight Center was born. Knight’s Eric Newton was the one who insisted we should make our Entrepreneurial Journalism program a degree and later Jennifer Preston supported our work there. 

As time went on, Len Tow also endowed the Tow Chair in Journalism Innovation, which I am honored to hold. 

When my long-time friend Craig Newmark decided to make it his life’s mission to support journalism (and veterans and cybersecurity and women in tech and pigeons), he generously told me to bring him an idea and thus was born Tow-Knight’s News Integrity Initiative, also supported by my Facebook friends (literally), Áine Kerr, Meredith Carden, and Campbell Brown. Next, Craig most generously endowed the school that now proudly carries his name. His endowment has been a life-saver in the crisis years of the pandemic. His friendship, support, and guidance are invaluable to me. And we love nerding out about gadgets.

I have more friends to thank for their support: John Bracken, way back when he was at the MacArthur Foundation, gave me my first grant to support Entrepreneurial Journalism students’ enterprises. Ford, Carnegie, McCormick, and others contributed to what has added up to — I’m amazed to say — about $53 million in support in which I had a hand. 

And I am grateful for the latest support of the Center, thanks to my friend Richard Gingras of Google. (By way of disclosure, I’ll add that I have not been paid by any technology company.)

I must give my thanks to Hal Straus and Peter Hauck, who worked alongside me — that is to say, tolerated my every inefficiency and eccentricity — managing Tow-Knight, as well as other colleagues (especially Jesenia De Moya Correa), who made possible the convenings the Center brought to the school. The latest were a Black Twitter Summit convened by Meredith Clark, André Brock, Charlton McIlwain, and Johnathan Flowers, and a gathering of internet researchers led by Siva Vaidhyanathan. I have learned so much from such scholars, journalists, technologists, and business and community leaders who have lent their time to the school and the Center.

Finally, I’d like to thank my friend Jay Rosen of NYU, who from the start has taught me much about teaching and scholarship. 

Having subjected you to my Oscar speech, I won’t burden you now with valedictory thoughts on the fate of journalism. That, too, awaits another day. But there’s one more thing I’m grateful for: the opportunity teaching has given me to research and write. I didn’t just blog, to the consternation of our neighbors at The Times, but also got to write books: What Would Google Do? (Harper 2009), Public Parts (Simon & Schuster 2011), and Geeks Bearing Gifts: Imagining New Futures for News (published by the CUNY Journalism Press in 2014). 

I spent the last decade digging into and geeking out about Gutenberg and the vast sweep of media history, leading to The Gutenberg Parenthesis: The Age of Print and Its Lessons for the Age of the Internet, recently published by Bloomsbury Academic.

I have another brief work of media history, Magazine, in Bloomsbury’s Object Lessons series, coming out this fall (in which I finally tell my story of the founding of Entertainment Weekly). I have a forthcoming book about the internet and media’s moral panic over it — and AI; I just submitted the manuscript to Basic Books. And I have a few more books I want to work on after that. So, yes, I’ll be busy.

I do hope to continue teaching — perhaps internet studies or even book and media history — and to get back out speaking and consulting and helping start more things. I’d like a fellowship and would welcome the chance to return to serving on boards. Feel free to ping me if you have thoughts. 

I am grateful for my time at CUNY and the privilege to teach there and wish nothing but the best future for the Newmark School.

Copyright and AI and journalism

The US Copyright Office just put out a call for comment on copyright and artificial intelligence. It is a thoughtful document based on listening sessions already held, with thirty-four questions on rights regarding inclusion in learning sets, transparency, the copyrightability of generative AI’s output, and use of likeness. Some of the questions — for example, on whether legislation should require assent or licensing — frighten me, for reasons I set forth in my comments, which I offer to the Office in the context of journalism and its history:

I am a journalist and journalism professor at the City University of New York. I write — speaking for myself — in reply to the Copyright Office’s queries regarding AI, to bring one perspective from my field, as well as the context of history. I will warn that precedents set in regulating this technology could impinge on freedom of expression and quality of information for all. I also will share a proposal for an updated framework for copyright that I call creditright, which I developed in a project with the World Economic Forum at Davos.

First, some context from present practice and history in journalism. It is ironic that newspaper publishers would decry AI reading and learning from their text when journalists themselves read, learn from, rewrite, and repurpose each other’s work in their publications every day. They do the same with sources and experts, without remuneration and often without credit. This is the time-honored tradition in the field.

The 1792 US Post Office Act provided for newspapers to send copies to each other for free for the express purpose of allowing them to copy each other, creating a de facto network of news in the new nation. In fact, many newspapers employed “scissors editors” — their actual job title — to cut out stories to reprint. As I recount in my book, The Gutenberg Parenthesis: The Age of Print and Its Lessons for the Age of the Internet (Bloomsbury Academic, 2023, 217), the only thing that would irritate publishers was if they were not credited.

As the Office well knows, the Copyright Act of 1790 covered only books, charts, and maps, and not newspapers or magazines. Not until 1909 did copyright law include newspapers, but even then, according to Will Slauter in Who Owns the News?: A History of Copyright (Stanford University Press, 2019), there was debate as to whether news articles, as opposed to literary features, were to be protected, for they were often anonymous, the product of business interest more than authorship. Thus the definition of authorship — whether by person, publication, or now machine — remains unsettled.

As to Question 1, regarding the benefits and risks of this technology (in the context of news), I have warned editors away from using generative AI to produce news stories. I covered the show-cause hearing for the attorney who infamously asked ChatGPT for citations for a federal court filing. I use that tale as an object lesson for news organizations (and search platforms) to keep large language models far away from any use involving the expectation of facts and credibility. However, I do see many uses for AI in journalism and I worry that the larger technological field of artificial intelligence and machine learning could be swept up in regulation because of the misuse, misrepresentation, factual fallibility, and falling reputation of generative AI specifically.

AI is invaluable in translation, allowing both journalists and users to read news around the world. I have tested Google’s upcoming product, NotebookLM; augmentative tools such as this, used to summarize and organize a writer’s research, could be quite useful in improving journalists’ work. In discussing the tool with the project’s editorial director, author Steven Johnson, we saw another powerful use and possible business model for news: allowing readers to query and enter into dialogue with a publisher’s content. Finally, I have speculated that generative AI could extend literacy, helping those who are intimidated by the act of writing to help tell — and illustrate — their own stories.

In reviewing media coverage of AI, I ask you to keep in mind that journalists and publishers see the internet and now artificial intelligence as competition. In an upcoming book, I assert that media are embroiled in a full-fledged moral panic over these technologies. The arrival of a machine that can produce no end of fluent prose commodifies the content media produce and robs writers of our special status. This is why I teach that journalists must understand that their value is not resident in the commodity they produce, content, but instead in qualities of authority, credibility, independence, service, and empathy.

As for Question 8 on fair use, I am no lawyer, but it is hard to see how reading and learning from text and images to produce transformative works would not be fair use. I worry that if these activities — indeed, these rights — are restricted for the machine as an agent for users, precedent is set that could restrict use for us all. As a journalist, I fear that by restricting learning sets to viewing only free content, we will end up with a problem parallel to that created by the widespread use of paywalls in news: authoritative, fact-based reporting will be restricted to the privileged few who can and choose to pay for it, leaving too much of public discourse vulnerable to the misinformation, disinformation, and conspiracies available for free, without restriction.

I see another potential use for large language models: to provide researchers and scholars with a window on the presumptions, biases, myths, and misapprehensions reflected in the relationships of all the words analyzed by them — the words of those who had the power and privilege of publishing them. To restrict access skews that vision and potentially harms scholarly uses that have not yet been imagined.

The speculation in Question 9, about requiring affirmative permission for any copyrighted material to be used in training AI models, and in Question 10, regarding collective management organizations or legislatively establishing a compulsory licensing scheme, frightens me. AI companies already offer a voluntary opt-out mechanism, in the model of robots.txt. As media report, many news organizations are availing themselves of that option. To legally require opt-in or licensing sets up unimaginable complications.
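To make the mechanism concrete: the opt-out works through the same plain-text file sites have long used to instruct crawlers. A minimal sketch follows, using OpenAI’s published GPTBot crawler token as the example; token names vary by AI company and may change, so publishers should consult each vendor’s current documentation.

```text
# robots.txt at a site's root — a sketch of the voluntary opt-out
# described above. "GPTBot" is OpenAI's published crawler token;
# other AI companies publish their own tokens.

# Ask OpenAI's training crawler not to fetch any pages:
User-agent: GPTBot
Disallow: /

# Ordinary crawlers (e.g., for search indexing) remain unaffected:
User-agent: *
Allow: /
```

The point of the example is that opting out today is a two-line edit under the publisher’s own control, requiring no licensing apparatus at all.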

Such complication raises the barrier to entry for new and open-source competitors and the spectre of regulatory capture — as does discussion in the EU of restricting open-source AI models (Question 25.1). The best response to the rising power of the already-huge incumbent companies involved in AI is to open the door — not close it — to new competition and open development.

As for Questions 18–21 on copyrightability, I would suggest a different framework for considering both the input and output of generative AI: as an intellectual, cultural, and informational commons, whose use and benefits we cannot yet predict. Shouldn’t policy encourage at least a period of development, research, and experimentation?

Finally, permit me to propose another framework for consideration of copyright in this new age in which connected technologies enable collaborative creation and communal distribution. In 2012, I led a series of discussions with multiple stakeholders — media executives, creative artists, policymakers — for a project with the World Economic Forum in Davos on rethinking intellectual property and the support of creativity in the digital age. In the safe space of the mountains, even entertainment executives would concede that copyright law could be considered outmoded and is due for reconsideration. The WEF report is available here.

Out of that work, I conceived of a framework I call “creditright,” which I write about in Geeks Bearing Gifts (CUNY Journalism Press, 2014) and in The Gutenberg Parenthesis (221–2): “This is not the right to copy text but the right to receive credit for contributions to a chain of collaborative inspiration, creation, and recommendation of creative work. Creditright would permit the behaviors we want to encourage to be recognized and rewarded. Those behaviors might include inspiring a work, creating that work, remixing it, collaborating in it, performing it, promoting it. The rewards might be payment or merely credit as its own reward. I didn’t mention blockchain; but the technology and its automated contracts could be useful to record credit and trigger rewards.” I do not pretend that this is a fully thought-through solution, only one idea to spark discussion on alternatives for copyright.

The idea of creditright has some bearing on your Questions 15–17 on transparency and recordkeeping — what might ledgers of credit in creation look like? — though I am trying to make a larger argument about the underpinnings of copyright. As I have come to learn, 1710’s Statute of Anne was not formulated at the urging of — or to protect the rights of — authors, so much as it was in response to the demands of publishers and booksellers, to create a marketplace for creativity as a tradable asset. Said historian Peter Baldwin in The Copyright Wars: Three Centuries of Trans-Atlantic Battle (Princeton University Press, 2016, 53–6): “The booksellers claimed to be supporting authors’ just and natural right to property. But in fact their aim was to take for themselves what nature had supposedly granted their clients.”

I write in my book that the metaphor of creativity as property — of art as artifact rather than an act — “might be appropriate for land, buildings, ships, and tangible possessions, but is it for such intangibles as creativity, inspiration, information, education, and art? Especially once electronics — from broadcast to digital — eliminated the scarcity of the printed page or the theater seat, one need ask whether property is still a valid metaphor for such a nonrivalrous good as culture.”

Around the world, copyright law and doctrine are being mangled to suit the protectionist ends of those lobbying on behalf of incumbent publishers and producers, who remain flummoxed by the challenges and opportunities of technology, of both the internet and now artificial intelligence. In the context of journalism and news, Germany’s Leistungsschutzrecht or ancillary copyright law, Spain’s recently superseded link tax, Australia’s News Media Bargaining Code, the proposed Journalism Competition and Preservation Act in the US, and lately Canada’s C-18 Online News Act do nothing to protect the public’s interest in informed discourse. In Canada’s case, the law will end up harming news consumers, journalists, and platforms alike as Facebook and Google are forced to take down links to news.

I urge the Copyright Office to continue its process of study as exemplified by this request for comments and not to rush into the frenzied discussion in media over artificial intelligence, large language models, and generative AI. It is too soon. Too little is known. Too much is at stake.