Posts about chatgpt

Copyright and AI and journalism

The US Copyright Office just put out a call for comment on copyright and artificial intelligence. It is a thoughtful document based on listening sessions already held, with thirty-four questions on rights regarding inclusion in learning sets, transparency, the copyrightability of generative AI’s output, and use of likeness. Some of the questions — for example, on whether legislation should require assent or licensing — frighten me, for reasons I set forth in my comments, which I offer to the Office in the context of journalism and its history:

I am a journalist and journalism professor at the City University of New York. I write — speaking for myself — in reply to the Copyright Office’s queries regarding AI, to bring one perspective from my field, as well as the context of history. I will warn that precedents set in regulating this technology could impinge on freedom of expression and quality of information for all. I also will share a proposal for an updated framework for copyright that I call creditright, which I developed in a project with the World Economic Forum at Davos.

First, some context from present practice and history in journalism. It is ironic that newspaper publishers would decry AI reading and learning from their text when journalists themselves read, learn from, rewrite, and repurpose each other's work in their publications every day. They do the same with sources and experts, without remuneration and often without credit. This is the time-honored tradition in the field.

The 1792 US Post Office Act provided for newspapers to send copies to each other for free for the express purpose of allowing them to copy each other, creating a de facto network of news in the new nation. In fact, many newspapers employed “scissors editors” — their actual job title — to cut out stories to reprint. As I recount in my book, The Gutenberg Parenthesis: The Age of Print and Its Lessons for the Age of the Internet (Bloomsbury Academic, 2023, 217), the only thing that would irritate publishers was if they were not credited.

As the Office well knows, the Copyright Act of 1790 covered only books, charts, and maps, and not newspapers or magazines. Not until 1909 did copyright law include newspapers, but even then, according to Will Slauter in Who Owns the News?: A History of Copyright (Stanford University Press, 2019), there was debate as to whether news articles, as opposed to literary features, were to be protected, for they were often anonymous, the product of business interest more than authorship. Thus the definition of authorship — whether by person, publication, or now machine — remains unsettled.

As to Question 1, regarding the benefits and risks of this technology (in the context of news), I have warned editors away from using generative AI to produce news stories. I covered the show-cause hearing for the attorney who infamously asked ChatGPT for citations for a federal court filing. I use that tale as an object lesson for news organizations (and search platforms) to keep large language models far away from any use involving the expectation of facts and credibility. However, I do see many uses for AI in journalism and I worry that the larger technological field of artificial intelligence and machine learning could be swept up in regulation because of the misuse, misrepresentation, factual fallibility, and falling reputation of generative AI specifically.

AI is invaluable in translation, allowing both journalists and users to read news around the world. I have tested Google’s upcoming product, NotebookLM; augmentative tools such as this, used to summarize and organize a writer’s research, could be quite useful in improving journalists’ work. In discussing the tool with the project’s editorial director, author Steven Johnson, we saw another powerful use and possible business model for news: allowing readers to query and enter into dialogue with a publisher’s content. Finally, I have speculated that generative AI could extend literacy, helping those who are intimidated by the act of writing to help tell — and illustrate — their own stories.

In reviewing media coverage of AI, I ask you to keep in mind that journalists and publishers see the internet and now artificial intelligence as competition. In an upcoming book, I assert that media are embroiled in a full-fledged moral panic over these technologies. The arrival of a machine that can produce no end of fluent prose commodifies the content media produce and robs writers of our special status. This is why I teach that journalists must understand that their value is not resident in the commodity they produce, content, but instead in qualities of authority, credibility, independence, service, and empathy.

As for Question 8 on fair use, I am no lawyer, but it is hard to see how reading and learning from text and images to produce transformative works would not be fair use. I worry that if these activities — indeed, these rights — are restricted for the machine as an agent for users, precedent is set that could restrict use for us all. As a journalist, I fear that by restricting learning sets to viewing only free content, we will end up with a problem parallel to that created by the widespread use of paywalls in news: authoritative, fact-based reporting will be restricted to the privileged few who can and choose to pay for it, leaving too much of public discourse vulnerable to the misinformation, disinformation, and conspiracies available for free, without restriction.

I see another potential use for large language models: to provide researchers and scholars with a window on the presumptions, biases, myths, and misapprehensions reflected in the relationships of all the words analyzed by them — the words of those who had the power and privilege of publishing them. To restrict access skews that vision and potentially harms scholarly uses that have not yet been imagined.

The speculation in Question 9, about requiring affirmative permission for any copyrighted material to be used in training AI models, and in Question 10, regarding collective management organizations or legislatively establishing a compulsory licensing scheme, frightens me. AI companies already offer a voluntary opt-out mechanism, in the model of robots.txt. As media report, many news organizations are availing themselves of that option. To legally require opt-in or licensing sets up unimaginable complications.
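The voluntary opt-out mechanism in question can be as simple as a few lines in a site's robots.txt file. A sketch (GPTBot and Google-Extended are the user-agent tokens OpenAI and Google respectively say their AI crawlers honor; the policy shown is illustrative):

```text
# robots.txt — a publisher opting its content out of AI training crawls
User-agent: GPTBot
Disallow: /

User-agent: Google-Extended
Disallow: /

# Ordinary crawlers, e.g. for search indexing, remain welcome
User-agent: *
Allow: /
```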

Such complication raises the barrier to entry for new and open-source competitors and the spectre of regulatory capture — as does discussion in the EU of restricting open-source AI models (Question 25.1). The best response to the rising power of the already-huge incumbent companies involved in AI is to open the door — not close it — to new competition and open development.

As for Questions 18–21 on copyrightability, I would suggest a different framework for considering both the input and output of generative AI: as an intellectual, cultural, and informational commons, whose use and benefits we cannot yet predict. Shouldn’t policy encourage at least a period of development, research, and experimentation?

Finally, permit me to propose another framework for consideration of copyright in this new age in which connected technologies enable collaborative creation and communal distribution. In 2012, I led a series of discussions with multiple stakeholders — media executives, creative artists, policymakers — for a project with the World Economic Forum in Davos on rethinking intellectual property and the support of creativity in the digital age. In the safe space of the mountains, even entertainment executives would concede that copyright law could be considered outmoded and is due for reconsideration. The WEF report is available here.

Out of that work, I conceived of a framework I call “creditright,” which I write about in Geeks Bearing Gifts (CUNY Journalism Press, 2014) and in The Gutenberg Parenthesis (221–2): “This is not the right to copy text but the right to receive credit for contributions to a chain of collaborative inspiration, creation, and recommendation of creative work. Creditright would permit the behaviors we want to encourage to be recognized and rewarded. Those behaviors might include inspiring a work, creating that work, remixing it, collaborating in it, performing it, promoting it. The rewards might be payment or merely credit as its own reward. I didn’t mention blockchain; but the technology and its automated contracts could be useful to record credit and trigger rewards.” I do not pretend that this is a fully thought-through solution, only one idea to spark discussion on alternatives for copyright.

The idea of creditright has some bearing on your Questions 15–17 on transparency and recordkeeping — what might ledgers of credit in creation look like? — though I am trying to make a larger argument about the underpinnings of copyright. As I have come to learn, 1710’s Statute of Anne was not formulated at the urging of — or to protect the rights of — authors, so much as it was in response to the demands of publishers and booksellers, to create a marketplace for creativity as a tradable asset. Said historian Peter Baldwin in The Copyright Wars: Three Centuries of Trans-Atlantic Battle (Princeton University Press, 2016, 53–6): “The booksellers claimed to be supporting authors’ just and natural right to property. But in fact their aim was to take for themselves what nature had supposedly granted their clients.”

I write in my book that the metaphor of creativity as property — of art as artifact rather than an act — “might be appropriate for land, buildings, ships, and tangible possessions, but is it for such intangibles as creativity, inspiration, information, education, and art? Especially once electronics — from broadcast to digital — eliminated the scarcity of the printed page or the theater seat, one need ask whether property is still a valid metaphor for such a nonrivalrous good as culture.”

Around the world, copyright law and doctrine are being mangled to suit the protectionist ends of those lobbying on behalf of incumbent publishers and producers, who remain flummoxed by the challenges and opportunities of technology, of both the internet and now artificial intelligence. In the context of journalism and news, Germany’s Leistungsschutzrecht or ancillary copyright law, Spain’s recently superseded link tax, Australia’s News Media Bargaining Code, the proposed Journalism Competition and Preservation Act in the US, and lately Canada’s C-18 Online News Act do nothing to protect the public’s interest in informed discourse and, in Canada’s case, will end up harming news consumers, journalists, and platforms alike as Facebook and Google are forced to take down links to news.

I urge the Copyright Office to continue its process of study as exemplified by this request for comments and not to rush into the frenzied discussion in media over artificial intelligence, large language models, and generative AI. It is too soon. Too little is known. Too much is at stake.

A few unpopular opinions about AI

In a conversation with Jason Howell for his upcoming AI podcast on the TWiT network, I came to wonder whether ChatGPT and large language models might give all of artificial intelligence cultural cooties, for the technology is being misused by companies and miscast by media such that the public may come to wonder whether they can ever trust the output of a machine. That is the disaster scenario the AI boys do not account for.

While the AI boys are busy thumping their chests about their power to annihilate humanity, if they are not careful — and they are not — generative AI could come to be distrusted for misleading users (the companies’ fault more than the machine’s); filling our already messy information ecosystem with the data equivalent of Styrofoam peanuts and junk mail; making news worse; making customer service even worse; making education worse; threatening jobs; and hurting the environment. What’s not to dislike?

Below I will share my likely unpopular opinions about large language models — how they should not be used in search or news, how building effective guardrails is improbable, how we already have enough fucking content in the world. But first, a few caveats:

I do see limited potential uses for synthetic text and generative AI. Watch this excellent talk by Emily Bender, one of the authors of the seminal Stochastic Parrots paper and a leading critic of AI hype, suggesting criteria for acceptable applications: cases where language form and fluency matter but facts do not (e.g., foreign language instruction), where bias can be filtered, and where originality is not required.

Here I explored the idea that large language models could help extend literacy for those who are intimidated by writing and thus excluded from discourse. I am impressed with Google’s NotebookLM (which I’ve seen thanks to Steven Johnson, its editorial director), as an augmentative tool designed not to create content but to help writers organize research and enter into dialog with text (a possible new model for interaction with news, by the way). Gutenberg can be blamed for giving birth to the drudgery of bureaucracy and perhaps LLMs can save us some of the grind of responding to it.

I value much of what machine learning makes possible today — in, for example, Google’s Search, Translate, Maps, Assistant, and autocomplete. I am a defender of the internet (subject of my next book) and, yes, social media. Yet I am cautious about this latest AI flavor of the month, not because generative AI itself is dangerous but because the uses to which it is being put are stupid and its current proprietors are worrisome.

So here are a few of my unpopular opinions about large language models like ChatGPT:

It is irresponsible to use generative AI models as presently constituted in search or anywhere users are conditioned to expect facts and truthful responses. Presented with the empty box on Bing’s or Google’s search engines, one expects at least a credible list of sites relevant to one’s query, or a direct response based on a trusted source: Wikipedia or services providing the weather, stock prices, or sports scores. To have an LLM generate a response — knowing full well that the program has no understanding of fact — is simply wrong.

No news organization should use generative AI to write news stories, except in very circumscribed circumstances. For years now, wire services have used artificial intelligence software to generate simple news stories from limited, verified, and highly structured data — finance, sports, weather — and that works because of the strictly bounded arena in which such programs work. Using LLMs trained on the entire web to generate news stories from the ether is irresponsible, for it only predicts words, it cannot discern facts, and it reflects biases. I endorse experimenting with AI to augment journalists’ work, organizing information or analyzing data. Otherwise, stay away.

The last thing the world needs is more content. This, too, we can blame on Gutenberg (and I do, in The Gutenberg Parenthesis), for printing brought about the commodification of conversation and creativity as a product we call content. Journalists and other writers came to believe that their value resides entirely in content, rather than in the higher, human concepts of service and relationships. So my industry, at its most industrial, thinks its mission is to extrude ever more content. The business model encourages that: more stuff to fill more pages to get more clicks and more attention and a few more ad pennies. And now comes AI, able to manufacture no end of stuff. No. Tell the machine to STFU.

There will be no way to build foolproof guardrails against people making AI do bad things. We regularly see news articles reporting that an LLM lied about — even libeled — someone. First note well that LLMs do not lie or hallucinate because they have no conception of truth or meaning. Thus they can be made to say anything about anyone. The only limit on such behavior is the developers’ ability to predict and forbid everything bad that anyone could do with the software. (See, for example, how ChatGPT at first refused to go where The New York Times’ Kevin Roose wanted it to go and even scolded him for trying to draw out its dark side. But Roose persevered and led it astray anyway.) No policy, no statute, no regulation, no code can prevent this. So what do we do? We try to hold accountable the user who gets the machine to say bad shit and then spread it, just as we would if you printed out nasty shit on your HP printer and posted it around the neighborhood. Not much else we can do.

AI will not ruin democracy. We see regular alarms that AI will produce so much disinformation that democracy is in peril — see a recent warning from John Naughton of The Guardian that “a tsunami of AI misinformation will shape next year’s knife-edge elections.” But hold on. First, we already have more than enough misinformation; who’s to say that any more will make a difference? Second, research finds again and again that online disinformation played a small role in the 2016 election. We have bigger problems to address about the willful credulity of those who want to signal their hatreds with misinformation and we should not let tropes of techno moral panic distract us from that greater peril.

Perhaps LLMs should have been introduced as fiction machines. ChatGPT is a nice parlor trick, no doubt. It can make shit up. It can sound like us. Cool. If that entertaining power were used to write short stories or songs or poems and if it were clearly understood that the machine could do little else, I’m not sure we’d be in our current dither about AI. Problem is, as any novelist or songwriter or poet can tell you, there’s little money in creativity anymore. That wouldn’t attract billions in venture capital and the stratospheric valuations that go with it whenever AI is associated with internet search, media, and McKinsey finding a new way to kill jobs. As with so much else today, the problem isn’t with the tool or the user but with capitalism. (To those who would correct me and say it’s late-stage capitalism, I respond: How can you be so sure it is in its last stages?)

Training artificial intelligence models on existing content could be considered fair use. Their output is generally transformative. If that is true, then training machines on content would not be a violation of copyright or theft. It will take years for courts to adjudicate the implications of generative AI on outmoded copyright doctrine and law. As Harvard Law Professor Lawrence Lessig famously said, fair use is the right to hire an attorney. Media moguls are rushing to do just that, hiring lawyers to force AI companies to pay for the right to use news content to train their machines — just as the publishers paid lobbyists to get legislators to pass laws to get search engines and social media platforms to pay to link to news content. (See how well that’s working out in Canada.) I am no lawyer but I believe training machines on any content that is lawfully acquired so it can be inspired to produce new content is not a violation of copyright. Note my italics.

Machines should have the same right to learn as humans; to say otherwise is to set a dangerous precedent for humans. If we say that a machine is not allowed to learn, to read, to extract knowledge from existing content and adapt it to other uses, then I fear it would not be a long leap to declaring that there are things we as humans are not allowed to read, see, or know. This puts us in the odd position of having to defend the machine’s rights so as to protect our own.

Stopping large language models from having access to quality content will make them even worse. Same problem we have in our democracy: Paywalls restrict quality information to the already rich and powerful, leaving the field — whether that is news or democracy or machine learning — free to bad actors and their disinformation.

Does the product of the machine deserve copyright protection? I’m not sure. A federal court just upheld the US Copyright Office’s refusal to grant copyright protection to the product of AI. I’m just as happy as the next copyright revolutionary to see the old doctrine fenced in for the sake of a larger commons. But the agency’s ruling was limited to content generated solely by the machine and in most cases (in fact, all cases) people are involved. So I’m not sure where we will end up. The bottom line is that we need a wholesale reconsideration of copyright (which I also address in The Gutenberg Parenthesis). Odds of that happening? About as high as the odds that AI will destroy mankind.

The most dangerous prospect arising from the current generation of AI is not the technology, but the philosophy espoused by some of its technologists. I won’t venture deep down this rat hole now, but the faux philosophies espoused by many of the AI boys — in the acronym of Émile Torres and Timnit Gebru, TESCREAL, or longtermism for short — are noxious and frightening, serving as self-justification for their wealth and power. Their philosophizing might add up to a glib freshman’s essay on utilitarianism if it did not also border on eugenics and if these boys did not have the wealth and power they wield. See Torres’ excellent reporting on TESCREAL here. Media should be paying attention to this angle instead of acting as the boys’ fawning stenographers. They must bring the voices of responsible scholars — from many fields, including the humanities — into the discussion. And government should encourage truly open-source development and investment to bring on competitors that can keep these boys, more than their machines, in check.

ChatGPT goes to court

I attended a show-cause hearing for two attorneys and their firm who submitted nonexistent citations and then entirely fictitious cases manufactured by ChatGPT to federal court, and then tried to blame the machine. “This case is Schadenfreude for any lawyer,” said the attorneys’ attorney, misusing a word as ChatGPT might. “There but for the grace of God go I…. Lawyers have always had difficulty with new technology.”

The judge, P. Kevin Castel, would have none of it. At the end of the two-hour hearing in which he meticulously and patiently questioned each of the attorneys, he said it is “not fair to pick apart people’s words,” but he noted that the actions of the lawyers were “repeatedly described as a mistake.” The mistake might have been the first submission with its nonexistent citations. But “that is the beginning of the narrative, not the end,” as again and again the attorneys failed to do their work, to follow through once the fiction was called to their attention by opposing counsel and the court, to even Google the cases ChatGPT manufactured to verify their existence, let alone to read what “gibberish” — in the judge’s description — ChatGPT fabricated. And ultimately, they failed to fully take responsibility for their own actions.

Over and over again, Steven Schwartz, the attorney who used ChatGPT to do his work, testified to the court that “I just never could imagine that ChatGPT would fabricate cases…. It never occurred to me that it would be making up cases.” He thought it was a search engine — a “super search engine.” And search engines can be trusted, yes? Technology can’t be wrong, right?

Now it’s true that one may fault some large language models’ creators for giving people the impression that generative AI is credible when we know it is not — and especially Microsoft for later connecting ChatGPT with its search engine, Bing, no doubt misleading more people. But Judge Castel’s point stands: It was the lawyers’ responsibility — to themselves, their client, the court, and truth itself — to check the machine’s work. This is not a tale of technology’s failures but of humans’, as most are.

Technology got blamed for much this day. Lawyers faulted their legal search engine, Fastcase, for not giving this personal-injury firm, accustomed to state courts, access to federal cases (a billing screwup). They blamed Microsoft Word for their cut-and-paste of a bolloxed notarization. In a lovely Gutenberg-era moment, Judge Castel questioned them about the odd mix of fonts — Times Roman and something sans serif — in the fake cases, and the lawyer blamed that, too, on computer cut-and-paste. The lawyers’ lawyer said that with ChatGPT, Schwartz “was playing with live ammo. He didn’t know because technology lied to him.” When Schwartz went back to ChatGPT to “find” the cases, “it doubled down. It kept lying to him.” It made them up out of digital ether. “The world now knows about the dangers of ChatGPT,” the lawyers’ lawyer said. “The court has done its job warning the public of these risks.” The judge interrupted: “I did not set out to do that.” For the issue here is not the machine, it is the men who used it.

The courtroom was jammed, sending some to an overflow courtroom to listen. There were some reporters there, whose presence the lawyers noted as they lamented their public humiliation. The room was also filled with young, dark-suited law students and legal interns. I hope they listened well to the judge (and I hope the journalists did, too) about the real obligations of truth.

ChatGPT is designed to tell you what you want it to say. It is a personal propaganda machine that strings together words to satisfy the ear, with no expectation that it is right. Kevin Roose of The New York Times asked ChatGPT to reveal a dark soul and he was then shocked and disturbed when it did just what he had requested. Same for attorney Schwartz. In his questioning of the lawyer, the judge noted this important nuance: Schwartz did not ask ChatGPT for explanation and case law regarding the somewhat arcane — especially to a personal-injury lawyer usually practicing in state courts — issues of bankruptcy, statutes of limitation, and international treaties in this case of an airline passenger’s knee and an errant snack cart. “You were not asking ChatGPT for an objective analysis,” the judge said. Instead, Schwartz admitted, he asked ChatGPT to give him cases that would bolster his argument. Then, when doubted about the existence of the cases by opposing counsel and judge, he went back to ChatGPT and it produced the cases for him, gibberish and all. And in a flash of apparent incredulity, when he asked ChatGPT “are the other cases you provided fake?”, it responded as he doubtless hoped: “No, the other cases I provided are real.” It instructed that they could be found on reputable legal databases such as LexisNexis and Westlaw, which Schwartz did not consult. The machine did as it was told; the lawyer did not. “It followed your command,” noted the judge. “ChatGPT was not supplementing your research. It was your research.”

Schwartz gave a choked-up apology to the court and his colleagues and his opponents, though as the judge pointedly remarked, he left out of that litany his own ill-served client. Schwartz took responsibility for using the machine to do his work but did not take responsibility for the work he did not do to verify the meaningless strings of words it spat out.

I have some empathy for Schwartz and his colleagues, for they will likely be a long-time punchline in jokes about the firm of Nebbish, Nebbish, & Luddite and the perils of technological progress. All its associates are now undergoing continuing legal education courses in the proper use of artificial intelligence (and there are lots of them already). Schwartz has the ill luck of being the hapless pioneer who came upon this new tool when it was three months in the world, and was merely the first to find a new way to screw up. His lawyers argued to the judge that he and his colleagues should not be sanctioned because they did not operate in bad faith. The judge has taken the case under advisement, but I suspect he might not agree, given their negligence to follow through when their work was doubted.

I also have some anthropomorphic sympathy for ChatGPT, as it is a wronged party in this case: wronged by the lawyers and their blame, wronged by the media and their misrepresentations, wronged by the companies — Microsoft especially — that are trying to tell users just what Schwartz wrongly assumed: that ChatGPT is a search engine that can supply facts. It can’t. It supplies credible-sounding — but not credible — language. That is what it is designed to do. That is what it does, quite amazingly. Its misuse is not its fault.

I have come to believe that journalists should stay away from ChatGPT, et al., for creating that commodity we call content. Yes, AI has long been used to produce stories from structured and limited data: sports games and financial results. That works well, for in these cases, stories are just another form of data visualization. Generative AI is something else again. It picks any word in the language to place after another word based not on facts but on probability. I have said that I do see uses for this technology in journalism: expanding literacy, helping people who are intimidated by writing and illustration to tell their own stories rather than having them extracted and exploited by journalists, for example. We should study and test this technology in our field. We should learn about what it can and cannot do with experience, rather than misrepresenting its capabilities or perils in our reporting. But we must not have it do our work for us.

Besides, the world already has more than enough content. The last thing we need is a machine that spits out yet more. What the world needs from journalism is research, reporting, service, solutions, accountability, empathy, context, history, humanity. I dare tell my journalism students who are learning to write stories that writing stories is not their job; it is merely a useful skill. Their job as journalists is to serve communities and that begins with listening and speaking with people, not machines.


Image: Lady Justice casts off her scale for the machine, by DreamStudio

Journalism is lossy compression

There has been much praise in human chat — Twitter — about Ted Chiang’s New Yorker piece on machine chat — ChatGPT. Because New Yorker; because Ted Chiang. He makes a clever comparison between lossy compression — how JPEGs or MP3s save a good-enough artifact of a thing, with some pieces missing and fudged to save space — and large-language models, which learn from and spit back but do not record the entire web. “Think of ChatGPT as a blurry JPEG of all the text on the Web,” he instructs.

What strikes me about the piece is how unselfaware media are when covering technology.

For what is journalism itself but lossy compression of the world? To save space, the journalist cannot and does not save or report everything known about an issue or event, compressing what is learned into so many available inches of type. For that matter, what is a library or a museum or a curriculum but lossy compression — that which fits? What is culture but lossy compression of creativity? As Umberto Eco said, “Now more than ever, we realize that culture is made up of what remains after everything else has been forgotten.”
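Chiang’s metaphor is easy to make concrete. A minimal sketch of lossy compression (a toy of my own invention, not how JPEG or an LLM actually works): shrink a signal by averaging adjacent values, then reconstruct by repeating each average. The restored version resembles the original, but the discarded detail is gone for good.

```python
# A toy lossy codec: compress a signal by averaging adjacent pairs,
# then "restore" it by repeating each average. The result is a
# good-enough artifact of the original; the lost detail is unrecoverable.
def compress(signal):
    return [(signal[i] + signal[i + 1]) / 2 for i in range(0, len(signal), 2)]

def decompress(compressed):
    restored = []
    for value in compressed:
        restored.extend([value, value])
    return restored

original = [10, 12, 30, 34, 5, 7, 40, 44]
half_size = compress(original)   # half the data: [11.0, 32.0, 6.0, 42.0]
blurry = decompress(half_size)   # close to the original, never identical
```

The round trip preserves the shape of the data but not the data itself — which is Chiang’s point about what an LLM retains of the web.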

Chiang analogizes ChatGPT et al. to a computational Xerox machine that made an error because it extrapolated one set of bits for others. Matthew Kirschenbaum quibbles:

Agreed. This reminds me of the sometimes rancorous debate between Elizabeth Eisenstein, credited as the founder of the discipline of book history, and her chief critic, Adrian Johns. Eisenstein valued fixity as a key attribute of print, its authority and thus its culture. “Typographical fixity,” she said, “is a basic prerequisite for the rapid advancement of learning.” Johns dismissed her idea of print culture, arguing that early books were not fixed and authoritative but often sloppy and wrong (which Eisenstein also said). They were both right. Early books were filled with errors and, as Eisenstein pointed out, spread disinformation. “But new forms of scurrilous gossip, erotic fantasy, idle pleasure-seeking, and freethinking were also linked” to printing, she wrote. “Like piety, pornography assumed new forms.” It took time for print to earn its reputation of uniformity, accuracy, and quality and for new institutions — editing and publishing — to imbue the form with authority. 

That is precisely the process we are witnessing now with the new technologies of the day. The problem, often, is that we — especially journalists — make assumptions and set expectations about the new based on the analog and presumptions of the old. 

Media have been making quite the fuss about ChatGPT, declaring in many a headline that Google had better watch out because ChatGPT could replace its search engine. As we all know by now, Microsoft is adding ChatGPT to Bing, and Google is said to have stumbled in its announcements about large-language models and search last week. 

But it’s evident that the large-language models we have seen so far are not yet good for search or for factual divination; see the Stochastic Parrots paper that got Timnit Gebru fired from Google; see also her coauthor Emily Bender’s continuing and cautionary writing on the topic. Then read David Weinberger’s Everyday Chaos, an excellent and slightly ahead-of-its-moment explanation of what artificial intelligence, machine learning, and large-language models do. They predict. They take their learnings — whether from the web or some other large set of data — and predict what might happen next or what should come next in a sequence of, say, words. (I wrote about his book here.) 

Said Weinberger: “Our new engines of prediction are able to make more accurate predictions and to make predictions in domains that we used to think were impervious to them because this new technology can handle far more data, constrained by fewer human expectations about how that data fits together, with more complex rules, more complex interdependencies, and more sensitivity to starting points.”
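The point that these are engines of prediction, not understanding, can be seen in miniature. Here is a toy bigram model, a hypothetical sketch nothing like a real LLM in scale or method, but facing the same task: count which word tends to follow which, then emit the likeliest successor.

```python
from collections import Counter, defaultdict

# Toy "next-word predictor": tally successors for each word in a corpus,
# then predict the most frequent one. It chooses a *likely* next word,
# not a verified fact -- the same basic task an LLM performs at vastly
# greater scale.

def train(text):
    follows = defaultdict(Counter)
    words = text.split()
    for cur, nxt in zip(words, words[1:]):
        follows[cur][nxt] += 1
    return follows

def predict(follows, word):
    """Return the most common successor of `word`, or None if unseen."""
    if word not in follows:
        return None
    return follows[word].most_common(1)[0][0]

model = train("the cat sat on the mat and the cat slept")
print(predict(model, "the"))  # "cat" -- its most frequent successor
```

Note that the model happily predicts "cat" after "the" whether or not a cat is relevant; nothing in it checks truth, which is why bolting verification onto such systems is the hard part.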

To predict the next, best word in a sequence is a different task from finding the correct answer to a math problem or verifying a factual assertion or searching for the best match to a query. This is not to say that these functions cannot be added onto large-language models as rhetorical machines. As Google and Microsoft are about to learn, these functions damned well better be bolted together before LLMs are unleashed on the world with the promise of accuracy. 

When media report on these new technologies, they too often ignore the underlying lessons about what the technologies say about us. They too often set high expectations — ChatGPT can replace search! — and then delight in shooting down those expectations — ChatGPT made mistakes!

Chiang wishes ChatGPT to search and calculate and compose, and when it is not good at those tasks, he all but dismisses the utility of LLMs. As a writer, he just might be engaging in wishful thinking. Here I speculate about how ChatGPT might help expand literacy and also devalue the special status of the writer in society. In my upcoming book, The Gutenberg Parenthesis (preorder here /plug), I note that it was not until a century and a half after Gutenberg that major innovation occurred with print: the invention of the essay (Montaigne), the modern novel (Cervantes), and the newspaper. We are early in our progression of learning what we can do with new technologies such as large-language models. It may be too early to use them in certain circumstances (e.g., search) but it is also too early to dismiss them.

It is equally important to recognize the faults in these technologies — and the faults that they expose in us — and understand the source of each. Large-language models such as ChatGPT and Google’s LaMDA are trained on, among other things, the web, which is to say society’s sooty exhaust, carrying all the errors, mistakes, conspiracies, biases, bigotries, presumptions, and stupidities — as well as genius — of humanity online. When we blame an algorithm for exhibiting bias we should start with the realization that it is reflecting our own biases. We must fix both: the data it learns from and the underlying corruption in society’s soul. 

Chiang’s story is lossy in that he quotes and cites none of the many scientists, researchers, and philosophers who are working in the field, making it as difficult as ChatGPT does to track down the source of his logic and conclusions.

The lossiest algorithm of all is the form of story. Said Weinberger:

Why have we so insisted on turning complex histories into simple stories? Marshall McLuhan was right: the medium is the message. We shrank our ideas to fit on pages sewn in a sequence that we then glued between cardboard stops. Books are good at telling stories and bad at guiding us through knowledge that bursts out in every conceivable direction, as all knowledge does when we let it.
But now the medium of our daily experiences — the internet — has the capacity, the connections, and the engine needed to express the richly chaotic nature of the world.

In the end, Chiang prefers the web to an algorithm’s rephrasing of it. Hurrah for the web. 

We are only beginning to learn what the net can and cannot do, what is good and bad from it, what we should or should not make of it, what it reflects in us. The institutions created to grant print fixity and authority — editing and publishing — are proving inadequate to cope with the scale of speech (aka content) online. The current, temporary proprietors of the net, the platforms, are also so far not up to the task. We will need to overhaul or invent new institutions to grapple with issues of credibility and quality, to discover and recommend and nurture talent and authority. As with print, that will take time, more time than journalists have to file their next story.


Original painting by Johannes Vermeer; transformed (pixelated) by acagastya, CC0, via Wikimedia Commons

Writing as exclusion

DALL-E image of quill, ink pot, and paper with writing on it.

In The Gutenberg Parenthesis (my upcoming book), I ask whether, “in bringing his inner debates to print, Montaigne raised the stakes for joining the public conversation, requiring that one be a writer to be heard. That is, to share one’s thoughts, even about oneself, necessitated the talent of writing as qualification. How many people today say they are intimidated setting fingers to keys for any written form — letter, email, memo, blog, social-media post, school assignment, story, book, anything — because they claim not to be writers, while all the internet asks them to be is a speaker? What voices were left out of the conversation because they did not believe they were qualified to write? … The greatest means of control of speech might not have been censorship or copyright or publishing but instead the intimidation of writing.”

Thus I am struck by the opportunity presented by generative AI — lately and specifically ChatGPT — to help people better express themselves, to act as Cyrano at their ear. Fellow educators everywhere are freaking out, wondering how they can ever teach writing and assign essays without knowing whether they are grading student or machine. I, on the other hand, look for opportunity — to open up the public conversation to more people in more ways, which I will explore here.

Let me first be clear that I do not advocate an end to writing or teaching it — especially as I work in a journalism school. It is said by some that a journalism degree is the new English degree, for we teach the value of research and the skill of clear expression. In our Engagement Journalism program, we teach that rather than always extracting and exploiting others’ stories, we should help people tell their own. Perhaps now we have more tools to aid in the effort.

I have for some time argued that we must expand the boundaries of literacy to include more people and to value more means of expression. Audio in the form of podcasts, video on YouTube or TikTok, visual expression in photography and memes, and the new alphabets of emoji enable people to speak and be understood as they wish, without writing. I have contended to faculty in communications schools (besides just my own) that we must value the languages (by that I mean especially dialects) and skills (including in social media) that our students bring.

Having said all that, let us examine the opportunities presented by generative AI. When some professors were freaking out on Mastodon about ChatGPT, one prof — sorry I can’t recall who — suggested creating different assignments with it: Provide students with the product of AI and ask them to critique it for accuracy, logic, expression — that is, make the students teachers of the machines.

This is also an opportunity to teach students the limitations and biases of AI and large language models, as laid out by Timnit Gebru, Emily Bender, Margaret Mitchell, and Angelina McMillan-Major in their Stochastic Parrots paper. Users must understand when they are listening to a machine that is trained merely to predict the next most sensible word, not to deliver and verify facts; the machine does not understand meaning. They also must realize when the data used to train a language model reflects the biases and exclusions of the web as source — when it reflects society’s existing inequities — or when it has been trained with curated content and rules to present a different worldview. The creators of these models need to be transparent about their makings and users must be made aware of their limitations.

It occurs to me that we will probably soon be teaching the skill of prompt writing: how to get what you want out of a machine. We started exercising this new muscle with DALL-E and other generative image AI — and we learned it’s not easy to guide the machine to draw exactly what we have in mind. At the same time, lots of folks are already using ChatGPT to write code. That is profound, for it means that we can tell the machine how to tell itself how to do what we want it to do. Coders should be more immediately worried about their career prospects than writers. Illustrators should also sweat more than scribblers.

In the end, writing a prompt for the machine — being able to exactly and clearly communicate one’s desires for the text, image, or code to be produced — is itself a new way to teach self-expression.

Generative AI also brings the reverse potential: helping to prompt the writer. This morning on Mastodon, I empathized with a writer who lamented that he was in the “I’m at the ‘(BETTER WORDS TK)’ stage” and I suggested that he try ChatGPT just to inspire a break in the logjam. It could act like a super-powered thesaurus. Even now, of course, Google often anticipates where I’m headed with a sentence and offers a suggested next word. That still feels like cheating — I usually try to prove Google wrong by avoiding what I now sense as a cliché — but is it so bad to have a friend who can finish your sentences for you?

For years, AI has been able to take simple, structured data — sports scores, financial results — and turn that into stories for wire services and news organizations. Text, after all, is just another form of data visualization. Long ago, I sat in a small newsroom for an advisory board meeting and when the topic of using such AI came up, I asked the eavesdropping, young sports writer a few desks over whether this worried him. Not at all, he said: He would have the machine write all the damned high-school game stories the paper wanted so he could concentrate on more interesting tales. ChatGPT is also proving to be good at churning out dull but necessary manuals and documentation. One might argue, then, that if the machine takes over the most drudgerous forms of writing, we humans would be left with brainpower to write more creative, thoughtful, interesting work. Maybe the machine could help improve writing overall.
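That template-driven approach can be sketched in a few lines. This is a hypothetical example, far simpler than the wire services’ actual systems, but it shows the principle: structured data in, serviceable prose out.

```python
# Hypothetical "robot journalism" sketch: turn a structured game record
# into a one-sentence story by filling a template. Wire-service systems
# are far more elaborate, but the principle is the same.

def game_story(game):
    # Sort teams by score, highest first, to find winner and loser.
    winner, loser = sorted(game["teams"], key=lambda t: -t["score"])
    return (f"{winner['name']} beat {loser['name']} "
            f"{winner['score']}-{loser['score']} on {game['date']}.")

game = {
    "date": "Friday",
    "teams": [
        {"name": "Central High", "score": 21},
        {"name": "Northside", "score": 14},
    ],
}
print(game_story(game))
# Central High beat Northside 21-14 on Friday.
```

A newsroom could run this over every score in the feed and free the sports desk for the interesting tales, exactly as that young writer hoped.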

A decade ago, I met a professor from INSEAD, Philip Parker, who insisted that contrary to popular belief, there is not too much content in the world; there is too little. After our conversation, I blogged: “Parker’s system has written tens of thousands of books and is even creating fully automated radio shows in many languages…. He used his software to create a directory of tropical plants that didn’t exist. And he has radio beaming out to farmers in poor third-world nations.”

By turning text into radio, Parker’s project, too, redefines literacy, making listening, rather than reading or writing, the necessary skill to become informed. As it happens, in that post from 2011, I started musing about the theory Tom Pettitt had brought to the U.S. from the University of Southern Denmark: the Gutenberg Parenthesis. In my book, which that theory inspired, I explore the idea that we might be returning to an age of orality — and aurality — past the age of text. Could we be leaving the era of the writer?

And that is perhaps the real challenge presented by ChatGPT: Writers are no longer so special. Writing is no longer a privilege. Content is a commodity. Everyone will have more means to express themselves, bringing more voices to public discourse — further threatening those who once held a monopoly on it. What “content creators” — as erstwhile writers and illustrators are now known — must come to realize is that value will reside not only in creation but also in conversation, in the experiences people bring and the conversations they join.

Montaigne’s time, too, was marked by a new abundance of speech, of writing, of content. “Montaigne was acutely aware that printing, far from simplifying knowledge, had multiplied it, creating a flood of increasingly specialized information without furnishing uniform procedures for organizing it,” wrote Barry Lydgate. “Montaigne laments the chaotic proliferation of books in his time and singles out in his jeremiad a new race of ‘escrivains ineptes et inutiles’ (‘inept and useless writers’) on whose indiscriminate scribbling he diagnoses a society in decay…. ‘Scribbling seems to be a sort of symptom of an unruly age.’”

Today, the machine, too, scribbles.
