Artificial general bullshit

I began writing this as a report from a useful conference on AI that I just attended, where experts and representatives of concerned sectors of society had serious discussion about the risks, benefits, and governance of the technology.

But, of course, I first must deal with the ludicrous news playing out now at leading AI generator, OpenAI. So let me begin by saying that in my view, the company is pure bullshit. Sam Altman’s contention that they are building “artificial general intelligence” or “artificial superintelligence”: Bullshit. Board members’ cult of effective altruism and AI doomerism: Bullshit. The output of ChatGPT: Bullshit. It’s all hallucinations: Pure bullshit. I even fear that the discussion of AI safety in relation to OpenAI could be bullshit. 

This is not to say that AI and its capabilities, as practiced there and elsewhere, are not to be taken seriously, even with wonder. And we should take seriously discussion of AI’s impact and safety, its speed of development and adoption, and its governance.

These topics were on the agenda of the AI conference I attended at the San Francisco outpost of the World Economic Forum (Davos). Snipe if you will at this fraternity of the rich and powerful, but this is one thing the Forum does consistently well: convene multistakeholder conversations about important topics, because people accept their invitations. At this meeting, there were representatives of technology companies, governments, and the academy. I sat next to an honest-to-God philosopher who is leading a program in ethical AI. At last.

I knew I was in the right place when I heard AGI brought up and quickly dismissed. Artificial general intelligence is the purported goal of OpenAI and other boys in the AI fraternity: that they are so smart they can build a machine that is smarter than all of us, even them — a machine so powerful it could destroy humankind unless we listen to its creators. I call bullshit. 

In the public portion of the conference, panel moderator Ian Bremmer said he had no interest in discussing AGI. I smiled. Andrew Ng, cofounder of Google Brain and Coursera, said he finds claims of imminent AGI doom “vague and fluffy…. I can’t prove that AI won’t wipe us out any more than I could prove that radio waves won’t attract aliens that would wipe us out.” Gary Marcus — a welcome voice of sanity in discourse about AI — talked of trying to get Elon Musk to make good on his prediction that AGI will arrive by 2029 with a $100,000 bet. What exactly Musk means by that is no clearer than anything he says. Keep in mind that Musk has also said that by now cars would drive themselves and Twitter would be successful and he would soon (not soon enough) be on his way to Mars. One participant not only doubted the arrival of AGI but also said large language models might prove to be a parlor trick.

With that BS out of the way, this turned out to be a practical meeting, intended to bring various perspectives together to begin to formulate frameworks for discussion of responsible use of AI. The first results will be published from the mountaintop in January.

I joined a breakout session that had its own breakouts (life is breakouts all the way down). The circle I sat in was charged with outlining benefits and risks of generative AI. Their first order of business was to question the assignment and insist on addressing AI as a whole. The group emphasized that neither benefits nor risks are universal, as each will fall unevenly on different populations: individuals, organizations (companies to universities), communities, sectors, and society. They did agree on a framework for that impact, asserting that for some, AI could:

  • raise the floor (allowing people to engage in new skills and tasks to which they might not have had access — e.g., coding computers or creating illustrations);
  • scale (that is, enabling people and organizations to take on certain tasks much more efficiently); and
  • raise the ceiling (performing tasks — such as analyzing protein folding — that heretofore were not attainable by humans alone). 

On the negative side, the group said AI would:

  • bring economic hardship; 
  • enable evil at scale (from exploding disinformation to inventing new diseases); and
  • for some, result in a loss of purpose or identity (see the programmer who laments in The New Yorker that “bodies of knowledge and skills that have traditionally taken lifetimes to master are being swallowed at a gulp. Coding has always felt to me like an endlessly deep and rich domain. Now I find myself wanting to write a eulogy for it”).

This is not to say that the effects of AI will fit neatly into such a grid, for what is wondrous for one can be dreadful for another. But this gives us a way to begin to define responsible deployment. While we were debating in our circle, other groups at the meeting tackled questions of technology and governance. 

There has been a slew of guidelines for responsible AI — most lately, the White House issued its executive order, and tech companies, eager to play a game of regulatory catch-up, are writing their own. Here are Google’s, these are Microsoft’s, and Meta has its own pillars. OpenAI has had a charter built on its hubristic presumption that it is building AGI. Anthropic is crowdsourcing a “constitution” for AI, filled with vague generalities about AI characterized as “reliable,” “honest,” “truth,” “good,” and “fair.” (I challenge either an algorithm or a court to define and enforce the terms.) Meanwhile, the EU, hoping to lead in regulation if not technology, is writing its AI Act.

Rather than principles or statutes chiseled permanently on tablets, I say we need ongoing discussion to react to rapid development and changing impact; to consider unintended consequences (of both the technology and regulation of it); and to make use of what I hope will be copious research. That is what WEF’s AI Governance Alliance says it will do. 

As I argue in The Gutenberg Parenthesis regarding the internet — and print — the full effect of a new technology can take generations to be realized. The timetable that matters is not so much invention and development but adaptation. As I will argue in my next book, The Web We Weave: Why We Must Reclaim the Internet from Moguls, Misanthropes, and Moral Panic (out from Basic Books next year), this debate must occur less in the context of technology than of humanity, which is why the humanities and social sciences must be in the circle.

At the meeting, there was much discussion about where we are in the timeline of AI’s gestation. Most agreed that there is no distinction between generative AI and AI. Generative AI looks different — momentous, even — to those of us not deeply engaged in the technology because now, suddenly, the program speaks — and, more importantly, can compute — our language. Code was a language; now language is code. Some said that AI is progressing from its beginning, with predictive capabilities, to its current generative abilities, and next will come autonomous agents — as with the GPT store Altman announced only a week before. Before allowing AI agents to go off on their own, we must trust them. 

That leads to the question of safety. One participant at WEF quoted Altman in a recent interview, saying that the company’s mission is to figure out how to make AGI, then figure out how to make it safe, and then figure out its benefits. This, the participant said, is the wrong order. What we need is not to make AI safe but to make safe AI. There was much talk about “shifting left” — not a political manifesto but instead a promise to move safety, transparency, and ethics to the start of the development process, rather than coming to them as afterthoughts. I, too, will salute that flag, but….

I come to believe there is no sure way to guarantee safety with the use of this new technology — as became all too clear to princes and popes at the birth of print. “What is safe enough?” asked one participant. “You give me a model that can do anything, I can’t answer your question.” We talk of requiring AI companies to build in guardrails. But it is impossible for any designer, no matter how smart, to anticipate every nefarious use that every malign actor could invent, let alone every unintended consequence that could arise.

That doesn’t mean we should not try to build safety into the technology. Nor does it mean that we should not use the technology. It just means that we must be realistic in our expectations, not about the technology but about our fellow humans. Have we not learned by now that some people will always find new ways to do bad things? It is their behavior more than technology that laws regulate. As another participant said, a machine that is trained to imitate human linguistic behavior is fundamentally unsafe. See: print. 

So do we hold the toolmaker responsible for what users have it do? I know, this is the endless argument we have about whether guns (and cars and chemicals and nukes) kill people or the people who wield them do. Laws are about fixing responsibility, thus liability. This is the same discussion we are having about Section 230: whom do we blame for “harmful speech” — those who say it, those who carry it, those who believe it? Should we hold the makers of the AI models themselves responsible for everything anyone does with them, as is being discussed in Europe? That is unrealistic. Should we instead hold to account users — like the schmuck lawyer who used ChatGPT to write his brief — when they might not know that the technology or its makers are lying to them? That could be unfair. There was much discussion at this meeting about regulating not the technology itself but its applications.

The most contentious issue at the event was whether large language models should be open-sourced. Ng said he can’t believe that he is having to work so hard to convince governments not to outlaw open source — as is also being bandied about in the EU. A good number of people in the room — I include myself among them — believe AI models must be open to provide competition to the big companies like OpenAI, Microsoft, and Google, which now control the technology; access to the technology for researchers and countries that otherwise could not afford to use it; and a transparent means to audit compliance with regulations and safety. But others fear that bad actors will take open-source models, such as Meta’s LLaMA, and detour around guardrails. But see the prior discussion about the ultimate effectiveness of such guardrails.

I hope that not only AI models but also data sets used for training will be open-sourced and held in public commons. (Note the work of MLCommons, which I learned about at the meeting.) In my remarks to another breakout group about information integrity, I said I worried about our larger knowledge ecosystem when books, newspapers, and art are locked up by copyright behind paywalls, leaving machines to learn only from the crap that is free. Garbage in; garbage multiplied. 

At the event’s opening reception high above San Francisco in Salesforce headquarters, I met an executive from Norway who told me that his nation wants to build large language models in the Norwegian language. That is made possible because — this being clever Norway — all its books and newspapers from the past are already digitized, so the models can learn from them. Are publishers objecting? I asked. He thought my question odd; why would they? Indeed, see this announcement from much-admired Norwegian news publisher Schibsted: “At the Nordic Media Days in Bergen in May, [Schibsted Chief Data & Technology Officer Sven Størmer Thaulow] invited all media companies in Norway to contribute content to the work of building a solid Norwegian language model as a local alternative to ChatGPT. The response was overwhelmingly positive.” I say we need a similar discussion in the anglophone world about our responsibility to the health of the information ecosystem — not to submit to the control and contribute to the wealth of AI giants but instead to create a commons of mutual benefit and control.

At the closing of the WEF meeting, during a report-out from the breakout group working on governance (where there are breakout groups, there must be report-outs; it’s the law) one professor proposed that public education about AI is critical and media must play a role. I intervened (as we say in circles) and said that first journalists must be educated about AI because too much of their coverage amounts to moral panic (as in their prior panics about the telegraph, talkies, radio, TV, and video games). And too damned often, journalists quote the same voices — namely, the same boys who are making AI — instead of the scholars who study AI. The issue of The New Yorker I referenced above has yet another interview with former Google computer scientist Geoffrey Hinton, who has already been on 60 Minutes and everywhere. 

Where are the authors of the Stochastic Parrots paper, former Google AI safety chiefs Timnit Gebru and Margaret Mitchell, along with linguists Emily Bender and Angelina McMillan-Major? Where are the women and scholars of color who have been warning of the present-tense costs and risks of AI, instead of the future-shock doomsaying of the AI boys? Where is Émile Torres, who studies the faux philosophies that guide AI’s proponents and doomsayers, which Torres and Gebru group under the acronym TESCREAL? (See the video below.)

The problem is that the press and policymakers alike are heeding the voices of the AI boys who are proponents of these philosophies instead of the scholars who hold them to account. The afore-fired Sam Altman gets invited to Congress. When UK PM Rishi Sunak held his AI summit, whom did he invite on stage but Elon Musk, the worst of them. Whom did Sunak appoint to his AI task force but another adherent of these philosophies. 

To learn more about TESCREAL, watch this conversation with Torres that Jason Howell and I had on our podcast, AI Inside, so we can separate the bullshit from the necessary discussion. This is why we need more meetings like the one WEF held, with stakeholders besides AI’s present proponents so we might debate the issues, the risks — and the benefits — they could bring. 

Efficiency over growth (and jobs)

The hook to every song sung at Davos is “jobs, jobs, jobs.” The chorus of machers on the stages here operates under an article of faith that growth can come back, that they can stimulate it, that that will create jobs, and that all will eventually be well.

What if that’s not the case? I am coming to believe, more and more, that technology is leading to efficiency over growth. I’ve written about that here. This notion is obviously true in some sectors of society: see news and media, retail, travel sales, and other arenas. But how many more sectors will this rule strike: universities? government? banking? delivery? even manufacturing?

As I write this, I’m watching a WEF panel moderated by Reuters’ editor, Steve Adler, with Larry Summers and government and business leaders. They’re discussing growth strategies and so far we’re hearing the same notions we hear elsewhere in Davos, the complete trick bag: spend money on infrastructure, be nice to business, regulate less, reform taxes, reform immigration. OK and OK.

“The problems of job creation are more complicated than that. They are more complicated than wealth creation,” says one of the panelists (operating under Chatham House Rule, so I won’t attribute*). “This is a group that understands wealth creation better than job creation.” He says “there are inherent limits” to the number of people employed in various sectors.

I haven’t heard any strategy yet that reverses the trends underway in the transition from the industrial economy to the digital economy. What will offset the shrinking of vast industries? New industries? Well, we have new, digital industries, but they are even more efficient than restructured old industries. Compare Google’s staff size to GM’s, even now. Facebook serves almost a billion people with a staff the size of a large newspaper’s. Amazon employs far fewer people than the bookstores it put out of business did. So those new industries will bring growth, profit, and wealth, but not many jobs.

“There are fewer jobs for regular people because those innovations happened than there would have been if those innovations hadn’t happened,” the panelist says. It would be “a delusion” to think that encouraging this innovation will increase jobs.

So what if the key business strategy of the near-term future becomes efficiency over growth? Productivity will improve. Companies will be more profitable. Wealth will be created. But employment will suffer.

I’m hearing no strategies focused on this larger transition in a gathering about the transition. I think that’s because the institutions’ trick bags are empty. They ran an industrial society. That’s over. And the entrepreneurs who will create new companies but also new efficiency aren’t yet in power to solve the problem they create.

I ask the panel whether all this talk of jobs, jobs, jobs is so much empty rhetoric. I ask whether there are other tricks in the bag.

The panelist I’ve been quoting says that there are two sets of economic issues: In the short term, for the next five years, we are dealing with demand and macroeconomic policy. “Employment today has nothing to do with the Kindle,” he says. “It has everything to do with the financial system, deleveraging, and macroeconomic policy.”

It’s in the long term that the issues I’m addressing here come to bear. “For the longer term, we don’t have nearly as good answers as we would like to,” he says. “We are going to have to embrace the idea that we are going to have growing numbers of people involved in the provision of fundamental services to other people, services like health care and education. We’re going to need to make that work for society.”

That is to say, health and education don’t directly create wealth; they are services funded in great measure by taxes of one sort or another. Employing people in those sectors amounts to a redistribution of wealth with the fringe benefit of providing helpful services. Is a service-sector economy the secret to growth? Who pays for that when fewer people have jobs in the productive economy? I still don’t see an answer. This is not an economic policy so much as it is a social policy.

Another panelist says that we will have fewer people and we will need to retrain people throughout their lives for new jobs. I agree. But that doesn’t create jobs (except in schools); it just helps fill the ones we have.

One more panelist, from Europe, suggests that nations here will end up making stuff for the growing economies and consuming middle classes of China, India, Brazil, etc. In a globalized world with maximum price competition, I’m not so sure that’s a strategy for growth, only survival. I’d hate to place my strategic bets on continuing — or returning to — the industrial economy. And at some point, that strategy bumps up against the question of sustainability: is there enough stuff to go around?

Indeed, in a globalized society, we need to look at total jobs, the sum of work and productivity and demand, not country-by-country. The question is: Will jobs on the whole increase in this digital economy?

If instead efficiency increases — and with it, again, productivity and profit — then great wealth can be created: see Google, and the technology economy. But that means the disparity of income and capital will only widen yet more. And it’s just wide enough today to cause unrest around the world. That’s much of what #Occupy_WEF et al is about. That’s what is causing such tsuris and uncertainty on the stages of the world (Economic Forum). That’s what is causing the institutions represented here to fear, resist, and regulate technology in the hopes of forestalling the change it is bringing. There is the root of the disruption we’re witnessing now even in Davos.

* I saw Summers later and he gave me permission to quote him by name. He is the quotable panelist.

Davos, disrupted

I’m among the disrupted of Davos. Outside, there’s an #OccupyDavos encampment in igloos (really). Down the road, someone will be giving out an award to the worst company of the world. But the disruption is no longer outside. That’s what I sensed in past years; that’s what they wanted to believe here. Now the disruption is inside. Every institution is challenged. Every.

The World Economic Forum issued a list of global risks (though Google’s Eric Schmidt countered on his Google+ page that he’s optimistic; that’s because he’s a disruptor). I’m sitting in a room here with a debate on capitalism about to begin. Even the sacred science is disrupted. I’m having conversations and sessions about disrupted banking and retail and education and media, of course.

I began this trip to Europe with my pilgrimage to the Gutenberg Museum in Mainz (blogged earlier). I recall John Naughton’s Observer column in which he asked us to imagine that we are pollsters in Mainz in 1472 asking whether we thought this invention of Gutenberg’s would disrupt the Catholic church, fuel the Reformation, spark the Scientific Revolution, change our view of education and thus childhood, and change our view of societies and nations and cultures. Pshaw, they must have said.

Ask those questions today. How likely do you think it is that every major institution of society — every industry, all of education, all of government — will be disrupted; that we will rethink our idea of nations and cultures; that we will reimagine education; that we will again alter even economics? Pshaw?

Welcome to Davos 1472.

LATER: Thanks to Andy Sternberg, here is a Storify of my tweets from an opening session at Davos, a Time debate on the future of capitalism (sorry for the long link; having trouble with the WordPress app on my iPad; also can’t get the embed code from Storify on the iPad; will fix it later):

The disruptors arrive at Davos

Last year at Davos, I said I was among the disrupted when I preferred to be among the disruptors.

The disruptor arrived last night. Daniel Domscheit-Berg, former spokesman for Wikileaks and founder of the competing OpenLeaks, came to a dinner about transparency at which I was a panelist, alongside the Guardian’s Timothy Garton-Ash, Human Rights Watch’s Ken Roth, and Harvard’s David Kennedy, led by the NY Times’ Arthur Sulzberger.

Sad irony: the session on transparency was off-the-record. I asked for it to be open; Sulzberger asked in turn; no go. Fill in your punchline here.

But Dan Perry of the AP was there and interviewed the hyphenates, Domscheit-Berg and Garton-Ash, on the record. Under Chatham House Rule, we can summarize the talk without attributing it.

In truth, there was little disagreement — until we switched from transparent government to transparent business.

About government, the speakers put forward the expected enthusiasm about forcing more transparency upon government with the expected hesitation about potential harm resulting from incomplete redaction and about making government more secret rather than less. No surprises. One person in the room — a journalist I’ve heard here before who inevitably supports power structures — actually opposed transparent government (preferring mere accountability … though how one gets to the latter without the former, I have no idea).

About business, we did disagree. The question was posed: is secrecy a competitive advantage? Most of the panelists and the room said it was. I disagreed, as did one other person you might expect to disagree. I argued that transparency is not just about malfeasance but also about a new and necessary way to operate in collaboration with one’s customers and public. Old, institutional companies will miss another boat as new, transparent companies take advantage of the age of openness to do business in a new way.

What I see is that when corporations are subjected to leaks, the reaction will be different. They’ll have more defenders from the power structure. They’ll too rarely see the opportunity in operating as open companies. But it won’t stop the leaks and the march of transparency.

Tomorrow, I’m going to an awards ceremony held by PublicEye.CH, naming the worst corporation in the world (you can still vote) and there, Domscheit-Berg will present OpenLeaks. This is the counterweight to the congregation of the Davos Man.

* Note also that one of my entrepreneurial journalism students at CUNY, Matt Terenzio, just launched Localeaks, which will enable any newspaper in the U.S. to receive leaks from whistleblowers. Very cool. More about it here.

Davos: Too little content

The one interesting thing I’ve heard so far at Davos this year is that the world doesn’t have too much content. It has too little. So says Philip Parker of INSEAD, who is doing fascinating work with automatic creation of content. He’s not doing it for evil purposes: content farms and spam. He is doing it to fill in knowledge that is missing in the world, especially in smaller cultures and languages.

Parker’s system has written tens of thousands of books and is even creating fully automated radio shows in many languages, some of which have never been used for weather reports (they don’t have words for “degree” or “Celsius”). He used his software to create a directory of tropical plants that didn’t exist. And he has radio beaming out to farmers in poor third-world nations.

I’m fascinated by what Parker’s project says about our attitudes toward content: that we in the West think there’s too much of it (we’re overloaded); that content is that which content creators create; that content has to be owned; that it has to be inefficient and expensive to be good and useful.

In the U.S., there already is a company that automates the writing of sports stories (another straight line). Thomson Reuters has been automatically spitting out formatted financial stories since 2006. So this is nothing new, except that Parker is putting the notion to new use.
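To make the idea concrete: this kind of formulaic story generation is, at bottom, structured data poured into a narrative template. Here is my own toy sketch in Python — an illustrative example only, not Thomson Reuters’ or anyone else’s actual system, with a made-up company and numbers:

```python
# Toy sketch of template-driven story generation, the technique behind
# automated financial and sports recaps. Structured data in, a fixed
# narrative template out. Purely illustrative; not any real system.

def earnings_story(company: str, quarter: str, eps: float, estimate: float) -> str:
    """Fill a fixed narrative template from structured earnings data."""
    delta = eps - estimate
    if delta > 0:
        verb = "beat"
    elif delta < 0:
        verb = "missed"
    else:
        verb = "matched"
    return (
        f"{company} reported {quarter} earnings of ${eps:.2f} per share, "
        f"which {verb} analyst estimates of ${estimate:.2f}."
    )

print(earnings_story("Acme Corp", "Q3", 1.25, 1.10))
```

Crude as it is, the sketch shows why formulaic content is the easiest to automate: the “story” is just slots and a conditional verb, which is roughly what Parker exploits at far greater scale.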

I’m intrigued by the potential uses of Parker’s content extruder. For example, I am on the board of Recording for the Blind & Dyslexic, and I imagine this technology could be used to deliver content, especially more current content — aurally — to its clients, whom I say don’t have learning disabilities but who learn differently.

Now tie that notion to the third world and we can even come to define literacy differently. If we can inform and educate people in their own languages through listening — rather than insisting on reading text — then haven’t we expanded the world of the literate greatly? Don’t we have better-informed nations and economies?

Academics from the University of Southern Denmark say that we are passing through the other side of the Gutenberg Parenthesis, returning to oral exchange and distribution of knowledge. Parker can serve that shift with his audio content.

He also helps us expand the reach and use of content, for his technology can gather bits of information from here and there that fit together and put them in a new form that is newly usable. It’s the Wikipedia worldview. Indeed, I suggested to Parker that he could help Wikipedia meet one of its key strategic goals — creating deeper content in more languages — through the automated generation of the first draft of articles, paving the way for editors.

Parker looks for content that is formulaic. That’s what his technology can replace. He studied TV news and found that 70% of its content is formulaic. No surprise. Most of it could be replaced with a machine.

That’s not just my joke and insult. The more efficient we make the creation of content, the less we will waste on repetitive tasks with commodified results, and the more we can concentrate our valuable and scarce resources on necessity and quality. Certain people will likely screech that such thinking and technology further deprofessionalizes the alleged art of creating content. So be it.