
Artificial general bullshit

I began writing this as a report from a useful conference on AI that I just attended, where experts and representatives of concerned sectors of society had serious discussion about the risks, benefits, and governance of the technology.

But, of course, I first must deal with the ludicrous news playing out now at OpenAI, the leading generative AI company. So let me begin by saying that in my view, the company is pure bullshit. Sam Altman’s contention that they are building “artificial general intelligence” or “artificial superintelligence”: Bullshit. Board members’ cult of effective altruism and AI doomerism: Bullshit. The output of ChatGPT: Bullshit. It’s all hallucinations: Pure bullshit. I even fear that the discussion of AI safety in relation to OpenAI could be bullshit.

This is not to say that AI and its capabilities, as practiced there and elsewhere, are not something to be taken seriously, even with wonder. And we should take seriously discussion of AI’s impact and safety, its speed of development and adoption, and its governance.

These topics were on the agenda of the AI conference I attended at the San Francisco outpost of the World Economic Forum (Davos). Snipe if you will at this fraternity of the rich and powerful, but this is one thing the Forum does consistently well: convene multistakeholder conversations about important topics, because people accept their invitations. At this meeting, there were representatives of technology companies, governments, and the academy. I sat next to an honest-to-God philosopher who is leading a program in ethical AI. At last.

I knew I was in the right place when I heard AGI brought up and quickly dismissed. Artificial general intelligence is the purported goal of OpenAI and other boys in the AI fraternity: that they are so smart they can build a machine that is smarter than all of us, even them — a machine so powerful it could destroy humankind unless we listen to its creators. I call bullshit. 

In the public portion of the conference, panel moderator Ian Bremmer said he had no interest in discussing AGI. I smiled. Andrew Ng, cofounder of Google Brain and Coursera, said he finds claims of imminent AGI doom “vague and fluffy…. I can’t prove that AI won’t wipe us out any more than I could prove that radio waves won’t attract aliens that would wipe us out.” Gary Marcus — a welcome voice of sanity in discourse about AI — talked of trying, with a $100,000 bet, to get Elon Musk to make good on his prediction that AGI will arrive by 2029. What exactly Musk means by that is no clearer than anything he says. Keep in mind that Musk has also said that by now cars would drive themselves and Twitter would be successful and he would soon (not soon enough) be on his way to Mars. One participant not only doubted the arrival of AGI but said large language models might prove to be a parlor trick.

With that BS out of the way, this turned out to be a practical meeting, intended to bring various perspectives together to begin to formulate frameworks for discussion of responsible use of AI. The first results will be published from the mountaintop in January.

I joined a breakout session that had its own breakouts (life is breakouts all the way down). The circle I sat in was charged with outlining benefits and risks of generative AI. Its first order of business was to question the assignment and insist on addressing AI as a whole. The group emphasized that neither benefits nor risks are universal, as each will fall unevenly on different populations: individuals, organizations (companies to universities), communities, sectors, and society. They did agree on a framework for that impact, asserting that for some, AI could:

  • raise the floor (allowing people to engage in new skills and tasks to which they might not have had access — e.g., coding computers or creating illustrations);
  • scale (that is, enabling people and organizations to take on certain tasks much more efficiently); and
  • raise the ceiling (performing tasks — such as analyzing protein folding — that heretofore were not attainable by humans alone). 

On the negative side, the group said AI would:

  • bring economic hardship; 
  • enable evil at scale (from exploding disinformation to inventing new diseases); and
  • for some, result in a loss of purpose or identity (see the programmer who laments in The New Yorker that “bodies of knowledge and skills that have traditionally taken lifetimes to master are being swallowed at a gulp. Coding has always felt to me like an endlessly deep and rich domain. Now I find myself wanting to write a eulogy for it”).

This is not to say that the effects of AI will fit neatly into such a grid, for what is wondrous for one can be dreadful for another. But this gives us a way to begin to define responsible deployment. While we were debating in our circle, other groups at the meeting tackled questions of technology and governance. 

There have been a slew of guidelines for responsible AI — most lately, the White House issued its executive order, and tech companies, eager to play a game of regulatory catch-up, are writing their own. Here are Google’s, these are Microsoft’s, and Meta has its own pillars. OpenAI has had a charter built on its hubristic presumption that it is building AGI. Anthropic is crowdsourcing a “constitution” for AI, filled with vague generalities about AI characterized as “reliable,” “honest,” “truth,” “good,” and “fair.” (I challenge either an algorithm or a court to define and enforce the terms.) Meanwhile, the EU, hoping to lead in regulation if not technology, is writing its AI Act.

Rather than principles or statutes chiseled permanently on tablets, I say we need ongoing discussion to react to rapid development and changing impact; to consider unintended consequences (of both the technology and regulation of it); and to make use of what I hope will be copious research. That is what WEF’s AI Governance Alliance says it will do. 

As I argue in The Gutenberg Parenthesis regarding the internet — and print — the full effect of a new technology can take generations to be realized. The timetable that matters is not so much invention and development but adaptation. As I will argue in my next book, The Web We Weave: Why We Must Reclaim the Internet from Moguls, Misanthropes, and Moral Panic (out from Basic Books next year), this debate must occur less in the context of technology than of humanity, which is why the humanities and social sciences must be in the circle.

At the meeting, there was much discussion about where we are in the timeline of AI’s gestation. Most agreed that there is no distinction between generative AI and AI. Generative AI looks different — momentous, even — to those of us not deeply engaged in the technology because now, suddenly, the program speaks — and, more importantly, can compute — our language. Code was a language; now language is code. Some said that AI is progressing from its beginning, with predictive capabilities, to its current generative abilities, and next will come autonomous agents — as with the GPT store Altman announced only a week before. Before allowing AI agents to go off on their own, we must trust them. 

That leads to the question of safety. One participant at WEF quoted Altman in a recent interview, saying that the company’s mission is to figure out how to make AGI, then figure out how to make it safe, and then figure out its benefits. This, the participant said, is the wrong order. What we need is not to make AI safe but to make safe AI. There was much talk about “shifting left” — not a political manifesto but instead a promise to move safety, transparency, and ethics to the start of the development process, rather than coming to them as afterthoughts. I, too, will salute that flag, but….

I have come to believe there is no sure way to guarantee safety with the use of this new technology — as became all too clear to princes and popes at the birth of print. “What is safe enough?” asked one participant. “You give me a model that can do anything, I can’t answer your question.” We talk of requiring AI companies to build in guardrails. But it is impossible for any designer, no matter how smart, to anticipate every nefarious use that every malign actor could invent, let alone every unintended consequence that could arise.

That doesn’t mean we should not try to build safety into the technology. Nor does it mean that we should not use the technology. It just means that we must be realistic in our expectations, not about the technology but about our fellow humans. Have we not learned by now that some people will always find new ways to do bad things? It is their behavior more than technology that laws regulate. As another participant said, a machine that is trained to imitate human linguistic behavior is fundamentally unsafe. See: print. 

So do we hold the toolmaker responsible for what users have it do? I know, this is the endless argument we have about whether guns (and cars and chemicals and nukes) kill people or the people who wield them do. Laws are about fixing responsibility, thus liability. This is the same discussion we are having about Section 230: whom do we blame for “harmful speech” — those who say it, those who carry it, those who believe it? Should we hold the makers of the AI models themselves responsible for everything anyone does with them, as is being discussed in Europe? That is unrealistic. Should we instead hold to account users — like the schmuck lawyer who used ChatGPT to write his brief — when they might not know that the technology or its makers are lying to them? That could be unfair. There was much discussion at this meeting about regulating not the technology itself but its applications.

The most contentious issue at the event was whether large language models should be open-sourced. Ng said he can’t believe that he is having to work so hard to convince governments not to outlaw open source — as is also being bandied about in the EU. A good number of people in the room — I include myself among them — believe AI models must be open to provide competition to the big companies like OpenAI, Microsoft, and Google, which now control the technology; access to the technology for researchers and countries that otherwise could not afford to use it; and a transparent means to audit compliance with regulations and safety. But others fear that bad actors will take open-source models, such as Meta’s LLaMA, and detour around guardrails. But see the prior discussion about the ultimate effectiveness of such guardrails.

I hope that not only AI models but also data sets used for training will be open-sourced and held in public commons. (Note the work of MLCommons, which I learned about at the meeting.) In my remarks to another breakout group about information integrity, I said I worried about our larger knowledge ecosystem when books, newspapers, and art are locked up by copyright behind paywalls, leaving machines to learn only from the crap that is free. Garbage in; garbage multiplied. 

At the event’s opening reception high above San Francisco in Salesforce headquarters, I met an executive from Norway who told me that his nation wants to build large language models in the Norwegian language. That is made possible because — this being clever Norway — all its books and newspapers from the past are already digitized, so the models can learn from them. Are publishers objecting? I asked. He thought my question odd; why would they? Indeed, see this announcement from much-admired Norwegian news publisher Schibsted: “At the Nordic Media Days in Bergen in May, [Schibsted Chief Data & Technology Officer Sven Størmer Thaulow] invited all media companies in Norway to contribute content to the work of building a solid Norwegian language model as a local alternative to ChatGPT. The response was overwhelmingly positive.” I say we need a similar discussion in the anglophone world about our responsibility to the health of the information ecosystem — not to submit to the control and contribute to the wealth of AI giants but instead to create a commons of mutual benefit and control.

At the closing of the WEF meeting, during a report-out from the breakout group working on governance (where there are breakout groups, there must be report-outs; it’s the law), one professor proposed that public education about AI is critical and that media must play a role. I intervened (as we say in circles) and said that first journalists must be educated about AI, because too much of their coverage amounts to moral panic (as in their prior panics about the telegraph, talkies, radio, TV, and video games). And too damned often, journalists quote the same voices — namely, the same boys who are making AI — instead of the scholars who study AI. The issue of The New Yorker I referenced above has yet another interview with former Google computer scientist Geoffrey Hinton, who has already been on 60 Minutes and everywhere.

Where are the authors of the Stochastic Parrots paper, former Google AI safety chiefs Timnit Gebru and Margaret Mitchell, along with linguists Emily Bender and Angelina McMillan-Major? Where are the women and scholars of color who have been warning of the present-tense costs and risks of AI, instead of the future-shock doomsaying of the AI boys? Where is Émile Torres, who studies the faux philosophies that guide AI’s proponents and doomsayers, which Torres and Gebru group under the acronym TESCREAL? (See the video below.)

The problem is that the press and policymakers alike are heeding the voices of the AI boys who are proponents of these philosophies instead of the scholars who hold them to account. The afore-fired Sam Altman gets invited to Congress. When UK PM Rishi Sunak held his AI summit, whom did he invite on stage but Elon Musk, the worst of them. Whom did Sunak appoint to his AI task force but another adherent of these philosophies. 

To learn more about TESCREAL, watch this conversation with Torres that Jason Howell and I had on our podcast, AI Inside, so we can separate the bullshit from the necessary discussion. This is why we need more meetings like the one WEF held, with stakeholders besides AI’s present proponents so we might debate the issues, the risks — and the benefits — they could bring. 

Gibberish from the machine


I’m honored that Germany’s Stern asked me to write about AI and journalism for a 75th anniversary edition. Here’s a version prior to final editing and trimming for print and translation. And I learned a new word: Kauderwelsch (originally the variety of Romansh spoken in the Swiss town of Chur — Kauder — in canton Graubünden) means gibberish.


We have Gutenberg to blame. It is because of his invention, print, that society came to think of public discourse, creativity, and news as “content,” a commodity to fill the products we call publications or lately websites. Journalists believe that their value resides primarily in making content. To fill the internet’s insatiable maw, reporters at some online sites are given content quotas, and their news organizations no longer appoint editors-in-chief but instead “chief content officers.” For the record, Stern still has actual editors, many of them.

And now here comes a machine — generative artificial intelligence or large language models (LLMs), such as ChatGPT — that can create no end of content: text that sounds just like us because it has been trained on all our words. An LLM maps the trillions of relationships among billions of words, turning them and their connections into numbers a computer can calculate. LLMs have no understanding of the words, no conception of truth. They are programmed only to predict the next most likely word to occur in a sentence.
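That next-word mechanic can be illustrated with a deliberately tiny sketch — a toy bigram counter over a handful of words, not the neural network over billions of parameters that a real LLM uses, but the same idea in miniature: count which word tends to follow which, then emit the most likely continuation, with no notion of whether it is true.

```python
from collections import Counter, defaultdict

# Toy "training corpus" — a real model trains on trillions of words.
corpus = "the cat sat on the mat and the cat slept".split()

# Count, for each word, which words follow it and how often.
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word):
    """Return the statistically most likely next word seen in training."""
    counts = following.get(word)
    return counts.most_common(1)[0][0] if counts else None

# "cat" followed "the" twice, "mat" only once, so "cat" wins —
# a prediction about word frequency, not a claim about the world.
print(predict_next("the"))
```

The sketch makes the essay’s point concrete: the program never decides whether “the cat sat” is true; it only reports which continuation was most frequent in its training data.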

A New York lawyer named Steven Schwartz had to learn his lesson about ChatGPT’s factual fallibility the hard way. In a now-infamous case, attorney Schwartz asked ChatGPT for precedents in a lawsuit involving an errant airline snack cart and his client’s allegedly injured knee. Schwartz needed to find cases relating to highly technical issues of international treaties and bankruptcy. ChatGPT dutifully delivered more than a half-dozen citations.

As soon as Schwartz’s firm filed the resulting legal brief in federal court, opposing counsel said they could not find the cases, and the judge, P. Kevin Castel, directed the lawyers to produce them. Schwartz returned to ChatGPT. The machine is programmed to tell us what we want to hear, so when Schwartz asked whether the cases were real, ChatGPT said they were. Schwartz then asked ChatGPT to show him the complete cases; it did, and he sent them to the court. The judge called them “gibberish” and ordered Schwartz and his colleagues into court to explain why they should not be sanctioned. I was there, along with many more journalists, to witness the humbling of the attorneys at the hands of technology and the media.

“The world now knows about the dangers of ChatGPT,” the lawyers’ lawyer told the judge. “The court has done its job warning the public of these risks.” Judge Castel interrupted: “I did not set out to do that.” The problem here was not with the technology but with the lawyers who used it, who failed to heed warnings about the dubious citations, who failed to use other tools — even Google — to verify them, and who failed to serve their clients. The lawyers’ lawyer said Schwartz “was playing with live ammo. He didn’t know because technology lied to him.”

But ChatGPT did not lie because, again, it has no conception of truth. Nor did it “hallucinate,” in the description of its creators. It simply predicted strings of words, which sounded right but were not. The judge fined the lawyers $5,000 each and acknowledged that they had suffered humiliation enough in news coverage of their predicament.

Herein lies a cautionary tale for news organizations that are rushing to have large language models write stories — because they want to be cool and trendy, or save work, or perhaps to eliminate jobs, and manufacture ever more content. The news companies CNET and G/O Media have gotten into hot water for using AI to produce content that turned out to be less than factual. America’s largest newspaper chain, Gannett, just turned off artificial intelligence that was producing embarrassing sports stories that would call a football game “a close encounter of the athletic kind.” I have heard online editors plead that they are in a war to produce more and more content to attract more likes and clicks so they may earn more digital advertising pennies. Their problem is that they think their mission is only to make content.

My advice to editors and publishers is to steer clear of large language models for writing the news, except in well-proven use cases, such as turning highly structured financial reports into basic news stories, which must be checked before release. I would give the same advice to Microsoft and Google about connecting LLMs with their search engines. Fact-free gibberish coming out of the machine could ruin the authority and credibility of both news and technology companies — and affect the reputation of artificial intelligence overall.

There are good uses for AI. I benefit from it every day in, for example, Google Translate, Maps, Assistant, and autocomplete. As for large language models, they could be useful to augment — not replace — journalists’ work. I recently tested a new Google tool called NotebookLM, which can take a folder filled with a journalist’s research and summarize it, organize it, and allow the writer to ask questions of it. LLMs could also be used in, for example, language education, where what matters is fluency, not facts. My international students use these programs to smooth out their English for school and work. I even believe LLMs could be used to extend literacy, to help people who are intimidated by writing to communicate more effectively and tell their own stories.

Ah, but therein lies the rub for writers, like me. We believe we are special, that we hold a skill — a talent for writing — that few others can boast. We are storytellers and wield the power to tell others’ tales, to decide what tales are told, who shall be heard in them, and how they will begin and neatly end. We think that gives us the ability to explain the world in what journalists like to call the first draft of history — the news.

Now writers and journalists see both the internet and AI as competition. The internet enables the silent mass of citizens who were not heard in media to at last have their say — and to create a lot of content. And by producing credible prose in seconds, AI devalues writing and robs writers of their special status.

This is one reason why I believe we see hostile coverage of technology in media these days. News organizations and their proprietors claim that Google, Facebook, et al steal away audience, attention, and advertising money (as if God granted publishers those assets in perpetuity). Journalists are engaged in their latest moral panic — another in a long line of panics over movies, television, comic books, rock lyrics, and video games. They warn about the dangers of the internet, social media, our phones, and now AI, claiming that these technologies will make us stupid, addict us, take away our jobs, and destroy democracy under a deluge of disinformation.

They should calm down. A 2020 study found that in the US no age group “spent more than an average of a minute a day engaging with fake news, nor did it occupy more than 0.2% of their overall media consumption.” The issue for democracy isn’t so much disinformation but the willingness — the eagerness — of some citizens to believe lies that stoke their own fears and hatreds. Journalism should be reporting on the roots of bigotry and extremism rather than simplistically blaming technology.

In my book, The Gutenberg Parenthesis, I track society’s entry into the age of print as we now leave it for the digital age that follows. Print’s development as an institution of authority took time. Not until fifty years after Gutenberg’s Bible, around 1500, did the book take the shape we know today, with titles, title pages, and page numbers. It took another century, a few years either side of 1600, before the technology and its technologists — printers — faded into the background, making way for tremendous innovation with print: the birth of the modern novel with Cervantes, the essay with Montaigne, and the newspaper. A business model for print did not arrive until one century more, in 1710, with the advent of copyright. Come the 1800s, the technology of print — which had hardly changed since Gutenberg — evolved at last with the arrival of steam-powered presses and typesetting machines, leading to the birth of mass media. The twentieth century brought print’s first competitors, radio and television. And here we are today, just over a quarter century past the introduction of the commercial web browser. This is to say that we are likely at just the beginning of a long transition into the digital age. It is only 1480 in Gutenberg years.

In the beginning, rumor was trusted more than print because any anonymous printer could produce a book or pamphlet — just as anyone today can make a web site or tweet. In 1470 — only fifteen years after Gutenberg’s Bible came off the press — Latin scholar Niccolò Perotti made what is said to be the first call for censorship of print. Offended by a bad translation of Pliny, he wrote to the Pope demanding that a censor be assigned to approve all text before it came off the press. As I thought about this, I realized Perotti was not seeking censorship. Instead, he was anticipating the establishment of the institutions of editing and publishing, which would assure quality and authority in print for centuries.

Like Perotti in his day, media and politicians today demand that something must be done about harmful content online. Governments — like editors and publishers — cannot cope with the scale of speech now, so they deputize platforms to police and censor all that is said online. It is an impossible task.

Journalists must be careful using AI to produce the news. At the same time, there is a danger in demonizing the technology. In the best case, the rise of AI might force journalists to examine their role in society, to ask how they improve public discourse. The internet provides them with many new ways to connect with communities, to build relationships of trust and authority with them, to listen to their needs, to discover and share voices too long not heard in the public sphere, to expand the work of journalism past publishing to the wider canvas of the internet.

Journalists think their content is what makes them valuable, and so publishers and their lawyers and lobbyists are threatening to sue AI companies, dreaming of huge payments for machines that read their content. That is no strategy for the future of journalism. Neither is Axel Springer’s plan to replace journalists in content factories with AI. That is not where the value of journalism lies. It lies with reporting on and serving communities. Like Niccolò Perotti, we should anticipate the creation of new services to help internet users cope with the abundance of content today, to verify the truth and falsity of what we see online, to assess authority, to discover more diverse voices, to nurture new talent, to recommend content that is worth our time and attention. Could such a service be the basis of a new journalism for the online, AI age?

Copyright and AI and journalism

The US Copyright Office just put out a call for comment on copyright and artificial intelligence. It is a thoughtful document based on listening sessions already held, with thirty-four questions on rights regarding inclusion in learning sets, transparency, the copyrightability of generative AI’s output, and use of likeness. Some of the questions — for example, on whether legislation should require assent or licensing — frighten me, for reasons I set forth in my comments, which I offer to the Office in the context of journalism and its history:

I am a journalist and journalism professor at the City University of New York. I write — speaking for myself — in reply to the Copyright Office’s queries regarding AI, to bring one perspective from my field, as well as the context of history. I will warn that precedents set in regulating this technology could impinge on freedom of expression and quality of information for all. I also will share a proposal for an updated framework for copyright that I call creditright, which I developed in a project with the World Economic Forum at Davos.

First, some context from present practice and history in journalism. It is ironic that newspaper publishers would decry AI reading and learning from their text when journalists themselves read, learn from, rewrite, and repurpose each other’s work in their publications every day. They do the same with sources and experts, without remuneration and often without credit. This is the time-honored tradition in the field.

The 1792 US Post Office Act provided for newspapers to send copies to each other for free for the express purpose of allowing them to copy each other, creating a de facto network of news in the new nation. In fact, many newspapers employed “scissors editors” — their actual job title — to cut out stories to reprint. As I recount in my book, The Gutenberg Parenthesis: The Age of Print and Its Lessons for the Age of the Internet (Bloomsbury Academic, 2023, 217), the only thing that would irritate publishers was if they were not credited.

As the Office well knows, the Copyright Act of 1790 covered only books, charts, and maps, and not newspapers or magazines. Not until 1909 did copyright law include newspapers, but even then, according to Will Slauter in Who Owns the News?: A History of Copyright (Stanford University Press, 2019), there was debate as to whether news articles, as opposed to literary features, were to be protected, for they were often anonymous, the product of business interest more than authorship. Thus the definition of authorship — whether by person, publication, or now machine — remains unsettled.

As to Question 1, regarding the benefits and risks of this technology (in the context of news), I have warned editors away from using generative AI to produce news stories. I covered the show-cause hearing for the attorney who infamously asked ChatGPT for citations for a federal court filing. I use that tale as an object lesson for news organizations (and search platforms) to keep large language models far away from any use involving the expectation of facts and credibility. However, I do see many uses for AI in journalism and I worry that the larger technological field of artificial intelligence and machine learning could be swept up in regulation because of the misuse, misrepresentation, factual fallibility, and falling reputation of generative AI specifically.

AI is invaluable in translation, allowing both journalists and users to read news around the world. I have tested Google’s upcoming product, NotebookLM; augmentative tools such as this, used to summarize and organize a writer’s research, could be quite useful in improving journalists’ work. In discussing the tool with the project’s editorial director, author Steven Johnson, we saw another powerful use and possible business model for news: allowing readers to query and enter into dialogue with a publisher’s content. Finally, I have speculated that generative AI could extend literacy, helping those who are intimidated by the act of writing to help tell — and illustrate — their own stories.

In reviewing media coverage of AI, I ask you to keep in mind that journalists and publishers see the internet and now artificial intelligence as competition. In an upcoming book, I assert that media are embroiled in a full-fledged moral panic over these technologies. The arrival of a machine that can produce no end of fluent prose commodifies the content media produce and robs writers of our special status. This is why I teach that journalists must understand that their value is not resident in the commodity they produce, content, but instead in qualities of authority, credibility, independence, service, and empathy.

As for Question 8 on fair use, I am no lawyer, but it is hard to see how reading and learning from text and images to produce transformative works would not be fair use. I worry that if these activities — indeed, these rights — are restricted for the machine as an agent for users, precedent is set that could restrict use for us all. As a journalist, I fear that by restricting learning sets to viewing only free content, we will end up with a problem parallel to that created by the widespread use of paywalls in news: authoritative, fact-based reporting will be restricted to the privileged few who can and choose to pay for it, leaving too much of public discourse vulnerable to the misinformation, disinformation, and conspiracies available for free, without restriction.

I see another potential use for large language models: to provide researchers and scholars with a window on the presumptions, biases, myths, and misapprehensions reflected in the relationships of all the words analyzed by them — the words of those who had the power and privilege of publishing them. To restrict access skews that vision and potentially harms scholarly uses that have not yet been imagined.

The speculation in Question 9, about requiring affirmative permission for any copyrighted material to be used in training AI models, and in Question 10, regarding collective management organizations or legislatively establishing a compulsory licensing scheme, frightens me. AI companies already offer a voluntary opt-out mechanism, in the model of robots.txt. As media report, many news organizations are availing themselves of that option. To legally require opt-in or licensing sets up unimaginable complications.
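The voluntary opt-out mechanism mentioned above rides on the decades-old robots.txt convention: a site lists crawler user-agents it wishes to exclude, and compliant crawlers honor it. As a hedged illustration (using OpenAI’s documented GPTBot crawler and Google’s Google-Extended training-control token, both announced in 2023), a publisher opting out of AI training might serve something like:

```text
# robots.txt — opt this site out of AI training crawls
# GPTBot is OpenAI's published training crawler
User-agent: GPTBot
Disallow: /

# Google-Extended controls use of content for Google's AI training
User-agent: Google-Extended
Disallow: /

# Ordinary search indexing remains unaffected
User-agent: *
Allow: /
```

Note that this is advisory, not enforceable: it depends on the crawler choosing to comply, which is part of why the essay contrasts it with legally mandated opt-in or licensing regimes.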

Such complication raises the barrier to entry for new and open-source competitors and the spectre of regulatory capture — as does discussion in the EU of restricting open-source AI models (Question 25.1). The best response to the rising power of the already-huge incumbent companies involved in AI is to open the door — not close it — to new competition and open development.

As for Questions 18–21 on copyrightability, I would suggest a different framework for considering both the input and output of generative AI: as an intellectual, cultural, and informational commons, whose use and benefits we cannot yet predict. Shouldn’t policy encourage at least a period of development, research, and experimentation?

Finally, permit me to propose another framework for consideration of copyright in this new age in which connected technologies enable collaborative creation and communal distribution. In 2012, I led a series of discussions with multiple stakeholders — media executives, creative artists, policymakers — for a project with the World Economic Forum in Davos on rethinking intellectual property and the support of creativity in the digital age. In the safe space of the mountains, even entertainment executives would concede that copyright law could be considered outmoded and is due for reconsideration. The WEF report is available here.

Out of that work, I conceived of a framework I call “creditright,” which I write about in Geeks Bearing Gifts (CUNY Journalism Press, 2014) and in The Gutenberg Parenthesis (221–2): “This is not the right to copy text but the right to receive credit for contributions to a chain of collaborative inspiration, creation, and recommendation of creative work. Creditright would permit the behaviors we want to encourage to be recognized and rewarded. Those behaviors might include inspiring a work, creating that work, remixing it, collaborating in it, performing it, promoting it. The rewards might be payment or merely credit as its own reward. I didn’t mention blockchain; but the technology and its automated contracts could be useful to record credit and trigger rewards.” I do not pretend that this is a fully thought-through solution, only one idea to spark discussion on alternatives for copyright.

The idea of creditright has some bearing on your Questions 15–17 on transparency and recordkeeping — what might ledgers of credit in creation look like? — though I am trying to make a larger argument about the underpinnings of copyright. As I have come to learn, 1710’s Statute of Anne was not formulated at the urging of — or to protect the rights of — authors, so much as it was in response to the demands of publishers and booksellers, to create a marketplace for creativity as a tradable asset. Said historian Peter Baldwin in The Copyright Wars: Three Centuries of Trans-Atlantic Battle (Princeton University Press, 2016, 53–6): “The booksellers claimed to be supporting authors’ just and natural right to property. But in fact their aim was to take for themselves what nature had supposedly granted their clients.”

I write in my book that the metaphor of creativity as property — of art as artifact rather than an act — “might be appropriate for land, buildings, ships, and tangible possessions, but is it for such intangibles as creativity, inspiration, information, education, and art? Especially once electronics — from broadcast to digital — eliminated the scarcity of the printed page or the theater seat, one need ask whether property is still a valid metaphor for such a nonrivalrous good as culture.”

Around the world, copyright law and doctrine are being mangled to suit the protectionist ends of those lobbying on behalf of incumbent publishers and producers, who remain flummoxed by the challenges and opportunities of technology, of both the internet and now artificial intelligence. In the context of journalism and news, Germany’s Leistungsschutzrecht or ancillary copyright law, Spain’s recently superseded link tax, Australia’s News Media Bargaining Code, the proposed Journalism Competition and Preservation Act in the US, and lately Canada’s C-18 Online News Act do nothing to protect the public’s interest in informed discourse and, in Canada’s case, will end up harming news consumers, journalists, and platforms alike as Facebook and Google are forced to take down links to news.

I urge the Copyright Office to continue its process of study as exemplified by this request for comments and not to rush into the frenzied discussion in media over artificial intelligence, large language models, and generative AI. It is too soon. Too little is known. Too much is at stake.

A few unpopular opinions about AI

In a conversation with Jason Howell for his upcoming AI podcast on the TWiT network, I came to wonder whether ChatGPT and large language models might give all of artificial intelligence cultural cooties, for the technology is being misused by companies and miscast by media such that the public may come to wonder whether they can ever trust the output of a machine. That is the disaster scenario the AI boys do not account for.

While AI’s boys are busy thumping their chests about their power to annihilate humanity, if they are not careful — and they are not — generative AI could come to be distrusted for misleading users (the companies’ fault more than the machine’s); filling our already messy information ecosystem with the data equivalent of Styrofoam peanuts and junk mail; making news worse; making customer service even worse; making education worse; threatening jobs; and hurting the environment. What’s not to dislike?

Below I will share my likely unpopular opinions about large language models — how they should not be used in search or news, how building effective guardrails is improbable, how we already have enough fucking content in the world. But first, a few caveats:

I do see limited potential uses for synthetic text and generative AI. Watch this excellent talk by Emily Bender, one of the authors of the seminal Stochastic Parrots paper and a leading critic of AI hype, suggesting criteria for acceptable applications: cases where language form and fluency matter but facts do not (e.g., foreign language instruction), where bias can be filtered, and where originality is not required.

Here I explored the idea that large language models could help extend literacy for those who are intimidated by writing and thus excluded from discourse. I am impressed with Google’s NotebookLM (which I’ve seen thanks to Steven Johnson, its editorial director), as an augmentative tool designed not to create content but to help writers organize research and enter into dialog with text (a possible new model for interaction with news, by the way). Gutenberg can be blamed for giving birth to the drudgery of bureaucracy and perhaps LLMs can save us some of the grind of responding to it.

I value much of what machine learning makes possible today — in, for example, Google’s Search, Translate, Maps, Assistant, and autocomplete. I am a defender of the internet (subject of my next book) and, yes, social media. Yet I am cautious about this latest AI flavor of the month, not because generative AI itself is dangerous but because the uses to which it is being put are stupid and its current proprietors are worrisome.

So here are a few of my unpopular opinions about large language models like ChatGPT:

It is irresponsible to use generative AI models as presently constituted in search or anywhere users are conditioned to expect facts and truthful responses. Presented with the empty box on Bing’s or Google’s search engines, one expects at least a credible list of sites relevant to one’s query, or a direct response based on a trusted source: Wikipedia or services providing the weather, stock prices, or sports scores. To have an LLM generate a response — knowing full well that the program has no understanding of fact — is simply wrong.
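The objection is mechanical, not rhetorical: a language model selects each next word from a probability distribution over its vocabulary, and truth never enters that computation. A toy sketch, with an invented context and invented probabilities, orders of magnitude simpler than any real model:

```python
import random

# Invented toy distribution: all the "model" holds is conditional word
# probabilities, not facts about the world.
NEXT_WORD_PROBS = {
    "the capital of France is": {"Paris": 0.85, "Lyon": 0.10, "Mars": 0.05},
}

def generate_next(context: str, rng: random.Random) -> str:
    """Sample the next word by probability alone; there is no fact-checking step."""
    dist = NEXT_WORD_PROBS[context]
    words = list(dist)
    return rng.choices(words, weights=[dist[w] for w in words], k=1)[0]

rng = random.Random(0)
samples = [generate_next("the capital of France is", rng) for _ in range(20)]
# Mostly "Paris", but nothing in the mechanism forbids "Mars":
# the sampler optimizes plausibility, not truth.
```

Scale the table up to a trillion parameters and the principle is unchanged: a fluent answer and a false one are produced by exactly the same operation.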

No news organization should use generative AI to write news stories, except in very circumscribed circumstances. For years now, wire services have used artificial intelligence software to generate simple news stories from limited, verified, and highly structured data — finance, sports, weather — and that works because of the strictly bounded arena in which such programs work. Using LLMs trained on the entire web to generate news stories from the ether is irresponsible, for it only predicts words, it cannot discern facts, and it reflects biases. I endorse experimenting with AI to augment journalists’ work, organizing information or analyzing data. Otherwise, stay away.
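The distinction can be made concrete: the wire-service approach is essentially templating over verified fields. A minimal sketch, with an invented box score and template rather than any agency’s actual system:

```python
def game_recap(box: dict) -> str:
    """Fill a fixed template from verified, structured box-score data.
    Every word comes either from the template or from a checked field;
    nothing is predicted."""
    margin = box["winner_score"] - box["loser_score"]
    verb = "edged" if margin <= 3 else "beat"
    return (f"{box['winner']} {verb} {box['loser']} "
            f"{box['winner_score']}-{box['loser_score']} on {box['date']}.")

story = game_recap({
    "winner": "River City", "loser": "Bayside",
    "winner_score": 21, "loser_score": 20, "date": "Sunday",
})
# story == "River City edged Bayside 21-20 on Sunday."
```

The strictly bounded arena is the point: the template can only say what its verified inputs contain, a guarantee an LLM sampling from the open web cannot make.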

The last thing the world needs is more content. This, too, we can blame on Gutenberg (and I do, in The Gutenberg Parenthesis), for printing brought about the commodification of conversation and creativity as a product we call content. Journalists and other writers came to believe that their value resides entirely in content, rather than in the higher, human concepts of service and relationships. So my industry, at its most industrial, thinks its mission is to extrude ever more content. The business model encourages that: more stuff to fill more pages to get more clicks and more attention and a few more ad pennies. And now comes AI, able to manufacture no end of stuff. No. Tell the machine to STFU.

There will be no way to build foolproof guardrails against people making AI do bad things. We regularly see news articles reporting that an LLM lied about — even libeled — someone. First, note well that LLMs do not lie or hallucinate, for they have no conception of truth or meaning. Thus they can be made to say anything about anyone. The only limit on such behavior is the developers’ ability to predict and forbid everything bad that anyone could do with the software. (See, for example, how ChatGPT at first refused to go where The New York Times’ Kevin Roose wanted it to go and even scolded him for trying to draw out its dark side. But Roose persevered and led it astray anyway.) No policy, no statute, no regulation, no code can prevent this. So what do we do? We try to hold accountable the user who gets the machine to say bad shit and then spread it, just as we would if you printed out nasty shit on your HP printer and posted it around the neighborhood. Not much else we can do.

AI will not ruin democracy. We see regular alarms that AI will produce so much disinformation that democracy is in peril — see a recent warning from John Naughton of The Guardian that “a tsunami of AI misinformation will shape next year’s knife-edge elections.” But hold on. First, we already have more than enough misinformation; who’s to say that any more will make a difference? Second, research finds again and again that online disinformation played a small role in the 2016 election. We have bigger problems to address about the willful credulity of those who want to signal their hatreds with misinformation, and we should not let tropes of techno moral panic distract us from that greater peril.

Perhaps LLMs should have been introduced as fiction machines. ChatGPT is a nice parlor trick, no doubt. It can make shit up. It can sound like us. Cool. If that entertaining power were used to write short stories or songs or poems and if it were clearly understood that the machine could do little else, I’m not sure we’d be in our current dither about AI. Problem is, as any novelist or songwriter or poet can tell you, there’s little money in creativity anymore. That wouldn’t attract billions in venture capital and the stratospheric valuations that go with it whenever AI is associated with internet search, media, and McKinsey finding a new way to kill jobs. As with so much else today, the problem isn’t with the tool or the user but with capitalism. (To those who would correct me and say it’s late-stage capitalism, I respond: How can you be so sure it is in its last stages?)

Training artificial intelligence models on existing content could be considered fair use. Their output is generally transformative. If that is true, then training machines on content would not be a violation of copyright or theft. It will take years for courts to adjudicate the implications of generative AI on outmoded copyright doctrine and law. As Harvard Law Professor Lawrence Lessig famously said, fair use is the right to hire an attorney. Media moguls are rushing to do just that, hiring lawyers to force AI companies to pay for the right to use news content to train their machines — just as the publishers paid lobbyists to get legislators to pass laws to get search engines and social media platforms to pay to link to news content. (See how well that’s working out in Canada.) I am no lawyer but I believe training machines on any content that is lawfully acquired so it can be inspired to produce new content is not a violation of copyright. Note my italics.

Machines should have the same right to learn as humans; to say otherwise is to set a dangerous precedent for humans. If we say that a machine is not allowed to learn, to read, to extract knowledge from existing content and adapt it to other uses, then I fear it would not be a long leap to declaring that we as humans are not allowed to read, see, or know some things. This puts us in the odd position of having to defend the machine’s rights so as to protect our own.

Stopping large language models from having access to quality content will make them even worse. Same problem we have in our democracy: Paywalls restrict quality information to the already rich and powerful, leaving the field — whether that is news or democracy or machine learning — free to bad actors and their disinformation.

Does the product of the machine deserve copyright protection? I’m not sure. A federal court just upheld the US Copyright Office’s refusal to grant copyright protection to the product of AI. I’m just as happy as the next copyright revolutionary to see the old doctrine fenced in for the sake of a larger commons. But the agency’s ruling was limited to content generated solely by the machine and in most cases (in fact, all cases) people are involved. So I’m not sure where we will end up. The bottom line is that we need a wholesale reconsideration of copyright (which I also address in The Gutenberg Parenthesis). Odds of that happening? About as high as the odds that AI will destroy mankind.

The most dangerous prospect arising from the current generation of AI is not the technology, but the philosophy espoused by some of its technologists. I won’t venture deep down this rat hole now, but the faux philosophies espoused by many of the AI boys — in the acronym of Émile Torres and Timnit Gebru, TESCREAL, or longtermism for short — are noxious and frightening, serving as self-justification for their wealth and power. Their philosophizing might add up to a glib freshman’s essay on utilitarianism if it did not also border on eugenics and if these boys did not have the wealth and power they wield. See Torres’ excellent reporting on TESCREAL here. Media should be paying attention to this angle instead of acting as the boys’ fawning stenographers. They must bring the voices of responsible scholars — from many fields, including the humanities — into the discussion. And government should encourage truly open-source development and investment to bring on competitors that can keep these boys, more than their machines, in check.

ChatGPT goes to court

I attended a show-cause hearing for two attorneys and their firm who submitted nonexistent citations and then entirely fictitious cases manufactured by ChatGPT to federal court, and then tried to blame the machine. “This case is Schadenfreude for any lawyer,” said the attorneys’ attorney, misusing a word as ChatGPT might. “There but for the grace of God go I…. Lawyers have always had difficulty with new technology.”

The judge, P. Kevin Castel, would have none of it. At the end of the two-hour hearing in which he meticulously and patiently questioned each of the attorneys, he said it is “not fair to pick apart people’s words,” but he noted that the actions of the lawyers were “repeatedly described as a mistake.” The mistake might have been the first submission with its nonexistent citations. But “that is the beginning of the narrative, not the end,” as again and again the attorneys failed to do their work, to follow through once the fiction was called to their attention by opposing counsel and the court, to even Google the cases ChatGPT manufactured to verify their existence, let alone to read what “gibberish” — in the judge’s description — ChatGPT fabricated. And ultimately, they failed to fully take responsibility for their own actions.

Over and over again, Steven Schwartz, the attorney who used ChatGPT to do his work, testified to the court that “I just never could imagine that ChatGPT would fabricate cases…. It never occurred to me that it would be making up cases.” He thought it was a search engine — a “super search engine.” And search engines can be trusted, yes? Technology can’t be wrong, right?

Now it’s true that one may fault some large language models’ creators for giving people the impression that generative AI is credible when we know it is not — and especially Microsoft for later connecting ChatGPT with its search engine, Bing, no doubt misleading more people. But Judge Castel’s point stands: It was the lawyers’ responsibility — to themselves, their client, the court, and truth itself — to check the machine’s work. This is not a tale of technology’s failures but of humans’, as most are.

Technology got blamed for much this day. Lawyers faulted their legal search engine, Fastcase, for not giving this personal-injury firm, accustomed to state courts, access to federal cases (a billing screwup). They blamed Microsoft Word for their cut-and-paste of a bolloxed notarization. In a lovely Gutenberg-era moment, Judge Castel questioned them about the odd mix of fonts — Times Roman and something sans serif — in the fake cases, and the lawyer blamed that, too, on computer cut-and-paste. The lawyers’ lawyer said that with ChatGPT, Schwartz “was playing with live ammo. He didn’t know because technology lied to him.” When Schwartz went back to ChatGPT to “find” the cases, “it doubled down. It kept lying to him.” It made them up out of digital ether. “The world now knows about the dangers of ChatGPT,” the lawyers’ lawyer said. “The court has done its job warning the public of these risks.” The judge interrupted: “I did not set out to do that.” For the issue here is not the machine, it is the men who used it.

The courtroom was jammed, sending some to an overflow courtroom to listen. There were some reporters there, whose presence the lawyers noted as they lamented their public humiliation. The room was also filled with young, dark-suited law students and legal interns. I hope they listened well to the judge (and I hope the journalists did, too) about the real obligations of truth.

ChatGPT is designed to tell you what you want it to say. It is a personal propaganda machine that strings together words to satisfy the ear, with no expectation that it is right. Kevin Roose of The New York Times asked ChatGPT to reveal a dark soul and he was then shocked and disturbed when it did just what he had requested. Same for attorney Schwartz. In his questioning of the lawyer, the judge noted this important nuance: Schwartz did not ask ChatGPT for explanation and case law regarding the somewhat arcane — especially to a personal-injury lawyer usually practicing in state courts — issues of bankruptcy, statutes of limitation, and international treaties in this case of an airline passenger’s knee and an errant snack cart. “You were not asking ChatGPT for an objective analysis,” the judge said. Instead, Schwartz admitted, he asked ChatGPT to give him cases that would bolster his argument. Then, when doubted about the existence of the cases by opposing counsel and judge, he went back to ChatGPT and it produced the cases for him, gibberish and all. And in a flash of apparent incredulity, when he asked ChatGPT “are the other cases you provided fake?”, it responded as he doubtless hoped: “No, the other cases I provided are real.” It instructed that they could be found on reputable legal databases such as LexisNexis and Westlaw, which Schwartz did not consult. The machine did as it was told; the lawyer did not. “It followed your command,” noted the judge. “ChatGPT was not supplementing your research. It was your research.”

Schwartz gave a choked-up apology to the court and his colleagues and his opponents, though as the judge pointedly remarked, he left out of that litany his own ill-served client. Schwartz took responsibility for using the machine to do his work but did not take responsibility for the work he did not do to verify the meaningless strings of words it spat out.

I have some empathy for Schwartz and his colleagues, for they will likely be a long-time punchline in jokes about the firm of Nebbish, Nebbish, & Luddite and the perils of technological progress. All its associates are now undergoing continuing legal education courses in the proper use of artificial intelligence (and there are lots of them already). Schwartz has the ill luck of being the hapless pioneer who came upon this new tool when it was three months in the world, and was merely the first to find a new way to screw up. His lawyers argued to the judge that he and his colleagues should not be sanctioned because they did not operate in bad faith. The judge has taken the case under advisement, but I suspect he might not agree, given their negligence to follow through when their work was doubted.

I also have some anthropomorphic sympathy for ChatGPT, as it is a wronged party in this case: wronged by the lawyers and their blame, wronged by the media and their misrepresentations, wronged by the companies — Microsoft especially — that are trying to tell users just what Schwartz wrongly assumed: that ChatGPT is a search engine that can supply facts. It can’t. It supplies credible-sounding — but not credible — language. That is what it is designed to do. That is what it does, quite amazingly. Its misuse is not its fault.

I have come to believe that journalists should stay away from ChatGPT, et al., for creating that commodity we call content. Yes, AI has long been used to produce stories from structured and limited data: sports games and financial results. That works well, for in these cases, stories are just another form of data visualization. Generative AI is something else again. It picks any word in the language to place after another word based not on facts but on probability. I have said that I do see uses for this technology in journalism: expanding literacy, helping people who are intimidated by writing and illustration to tell their own stories rather than having them extracted and exploited by journalists, for example. We should study and test this technology in our field. We should learn about what it can and cannot do with experience, rather than misrepresenting its capabilities or perils in our reporting. But we must not have it do our work for us.

Besides, the world already has more than enough content. The last thing we need is a machine that spits out yet more. What the world needs from journalism is research, reporting, service, solutions, accountability, empathy, context, history, humanity. I dare tell my journalism students who are learning to write stories that writing stories is not their job; it is merely a useful skill. Their job as journalists is to serve communities and that begins with listening and speaking with people, not machines.


Image: Lady Justice casts off her scale for the machine, by DreamStudio