Posts about davos

Artificial general bullshit

I began writing this as a report from a useful conference on AI that I just attended, where experts and representatives of concerned sectors of society had serious discussion about the risks, benefits, and governance of the technology.

But, of course, I first must deal with the ludicrous news playing out now at leading AI generator, OpenAI. So let me begin by saying that in my view, the company is pure bullshit. Sam Altman’s contention that they are building “artificial general intelligence” or “artificial superintelligence”: Bullshit. Board members’ cult of effective altruism and AI doomerism: Bullshit. The output of ChatGPT: Bullshit. It’s all hallucinations: Pure bullshit. I even fear that the discussion of AI safety in relation to OpenAI could be bullshit. 

This is not to say that AI and its capabilities, as practiced there and elsewhere, are not something to be taken seriously, even with wonder. And we should take seriously discussion of AI’s impact and safety, its speed of development and adoption, and its governance. 

These topics were on the agenda of the AI conference I attended at the San Francisco outpost of the World Economic Forum (Davos). Snipe if you will at this fraternity of the rich and powerful, but this is one thing the Forum does consistently well: convene multistakeholder conversations about important topics, because people accept its invitations. At this meeting, there were representatives of technology companies, governments, and the academy. I sat next to an honest-to-God philosopher who is leading a program in ethical AI. At last. 

I knew I was in the right place when I heard AGI brought up and quickly dismissed. Artificial general intelligence is the purported goal of OpenAI and other boys in the AI fraternity: that they are so smart they can build a machine that is smarter than all of us, even them — a machine so powerful it could destroy humankind unless we listen to its creators. I call bullshit. 

In the public portion of the conference, panel moderator Ian Bremmer said he had no interest in discussing AGI. I smiled. Andrew Ng, cofounder of Google Brain and Coursera, said he finds claims of imminent AGI doom “vague and fluffy…. I can’t prove that AI won’t wipe us out any more than I could prove that radio waves won’t attract aliens that would wipe us out.” Gary Marcus — a welcome voice of sanity in discourse about AI — talked of trying to get Elon Musk to put $100,000 behind his prediction that AGI will arrive by 2029. What exactly Musk means by that is no clearer than anything else he says. Keep in mind that Musk has also said that by now cars would drive themselves and Twitter would be successful and he would soon (not soon enough) be on his way to Mars. One participant not only doubted the arrival of AGI but said large language models might prove to be a parlor trick.

With that BS out of the way, this turned out to be a practical meeting, intended to bring various perspectives together to begin to formulate frameworks for discussion of responsible use of AI. The first results will be published from the mountaintop in January. 

I joined a breakout session that had its own breakouts (life is breakouts all the way down). The circle I sat in was charged with outlining benefits and risks of generative AI. Its first order of business was to question the assignment and insist on addressing AI as a whole. The group emphasized that neither benefits nor risks are universal, as each will fall unevenly on different populations: individuals, organizations (companies to universities), communities, sectors, and society. They did agree on a framework for that impact, asserting that for some, AI could:

  • raise the floor (allowing people to engage in new skills and tasks to which they might not have had access — e.g., coding computers or creating illustrations);
  • scale (that is, enabling people and organizations to take on certain tasks much more efficiently); and
  • raise the ceiling (performing tasks — such as analyzing protein folding — that heretofore were not attainable by humans alone). 

On the negative side, the group said AI would:

  • bring economic hardship; 
  • enable evil at scale (from exploding disinformation to inventing new diseases); and
  • for some, result in a loss of purpose or identity (see the programmer who laments in The New Yorker that “bodies of knowledge and skills that have traditionally taken lifetimes to master are being swallowed at a gulp. Coding has always felt to me like an endlessly deep and rich domain. Now I find myself wanting to write a eulogy for it”).

This is not to say that the effects of AI will fit neatly into such a grid, for what is wondrous for one can be dreadful for another. But this gives us a way to begin to define responsible deployment. While we were debating in our circle, other groups at the meeting tackled questions of technology and governance. 

There has been a slew of guidelines for responsible AI — most recently the White House issued its executive order, and tech companies, eager to play a game of regulatory catch-up, are writing their own. Here are Google’s, these are Microsoft’s, and Meta has its own pillars. OpenAI has had a charter built on its hubristic presumption that it is building AGI. Anthropic is crowdsourcing a “constitution” for AI, filled with vague generalities about AI characterized as “reliable,” “honest,” “truthful,” “good,” and “fair.” (I challenge either an algorithm or a court to define and enforce the terms.) Meanwhile, the EU, hoping to lead in regulation if not technology, is writing its AI Act.

Rather than principles or statutes chiseled permanently on tablets, I say we need ongoing discussion to react to rapid development and changing impact; to consider unintended consequences (of both the technology and regulation of it); and to make use of what I hope will be copious research. That is what WEF’s AI Governance Alliance says it will do. 

As I argue in The Gutenberg Parenthesis regarding the internet — and print — the full effect of a new technology can take generations to be realized. The timetable that matters is not so much invention and development as adaptation. As I will argue in my next book, The Web We Weave: Why We Must Reclaim the Internet from Moguls, Misanthropes, and Moral Panic (out from Basic Books next year), this debate must occur less in the context of technology than of humanity, which is why the humanities and social sciences must be in the circle.

At the meeting, there was much discussion about where we are in the timeline of AI’s gestation. Most agreed that there is no distinction between generative AI and AI. Generative AI looks different — momentous, even — to those of us not deeply engaged in the technology because now, suddenly, the program speaks — and, more importantly, can compute — our language. Code was a language; now language is code. Some said that AI is progressing from its beginning, with predictive capabilities, to its current generative abilities, and next will come autonomous agents — as with the GPT store Altman announced only a week before. Before allowing AI agents to go off on their own, we must trust them. 

That leads to the question of safety. One participant at WEF cited a recent interview in which Altman said that the company’s mission is first to figure out how to make AGI, then to figure out how to make it safe, and then to figure out its benefits. This, the participant said, is the wrong order. What we need is not to make AI safe but to make safe AI. There was much talk about “shifting left” — not a political manifesto but a promise to move safety, transparency, and ethics to the start of the development process, rather than coming to them as afterthoughts. I, too, will salute that flag, but….

I have come to believe there is no sure way to guarantee safety with the use of this new technology — as became all too clear to princes and popes at the birth of print. “What is safe enough?” asked one participant. “You give me a model that can do anything, I can’t answer your question.” We talk of requiring AI companies to build in guardrails. But it is impossible for any designer, no matter how smart, to anticipate every nefarious use that every malign actor could invent, let alone every unintended consequence that could arise. 

That doesn’t mean we should not try to build safety into the technology. Nor does it mean that we should not use the technology. It just means that we must be realistic in our expectations, not about the technology but about our fellow humans. Have we not learned by now that some people will always find new ways to do bad things? It is their behavior more than technology that laws regulate. As another participant said, a machine that is trained to imitate human linguistic behavior is fundamentally unsafe. See: print. 

So do we hold the toolmaker responsible for what users have it do? I know, this is the endless argument we have about whether guns (and cars and chemicals and nukes) kill people or the people who wield them do. Laws are about fixing responsibility, thus liability. This is the same discussion we are having about Section 230: whom do we blame for “harmful speech” — those who say it, those who carry it, those who believe it? Should we hold the makers of the AI models themselves responsible for everything anyone does with them, as is being discussed in Europe? That is unrealistic. Should we instead hold users to account — like the schmuck lawyer who used ChatGPT to write his brief — when they might not know that the technology or its makers are lying to them? That could be unfair. There was much discussion at this meeting about regulating not the technology itself but its applications.

The most contentious issue at the event was whether large language models should be open-sourced. Ng said he can’t believe that he is having to work so hard to convince governments not to outlaw open source — as is also being bandied about in the EU. A good number of people in the room — I include myself among them — believe AI models must be open to provide competition to the big companies like OpenAI, Microsoft, and Google, which now control the technology; access to the technology for researchers and countries that otherwise could not afford to use it; and a transparent means to audit compliance with regulations and safety. Others fear that bad actors will take open-source models, such as Meta’s LLaMA, and detour around guardrails. But see the prior discussion about the ultimate effectiveness of such guardrails. 

I hope that not only AI models but also data sets used for training will be open-sourced and held in public commons. (Note the work of MLCommons, which I learned about at the meeting.) In my remarks to another breakout group about information integrity, I said I worried about our larger knowledge ecosystem when books, newspapers, and art are locked up by copyright behind paywalls, leaving machines to learn only from the crap that is free. Garbage in; garbage multiplied. 

At the event’s opening reception high above San Francisco in Salesforce headquarters, I met an executive from Norway who told me that his nation wants to build large language models in the Norwegian language. That is made possible because — this being clever Norway — all its books and newspapers from the past are already digitized, so the models can learn from them. Are publishers objecting? I asked. He thought my question odd; why would they? Indeed, see this announcement from much-admired Norwegian news publisher Schibsted: “At the Nordic Media Days in Bergen in May, [Schibsted Chief Data & Technology Officer Sven Størmer Thaulow] invited all media companies in Norway to contribute content to the work of building a solid Norwegian language model as a local alternative to ChatGPT. The response was overwhelmingly positive.” I say we need to have a similar discussion in the anglophone world about our responsibility to the health of the information ecosystem — not to submit to the control and contribute to the wealth of AI giants but instead to create a commons of mutual benefit and control. 

At the closing of the WEF meeting, during a report-out from the breakout group working on governance (where there are breakout groups, there must be report-outs; it’s the law), one professor proposed that public education about AI is critical and that media must play a role. I intervened (as we say in circles) and said that first journalists must be educated about AI, because too much of their coverage amounts to moral panic (as in their prior panics about the telegraph, talkies, radio, TV, and video games). And too damned often, journalists quote the same voices — namely, the same boys who are making AI — instead of the scholars who study AI. The issue of The New Yorker I referenced above has yet another interview with former Google computer scientist Geoffrey Hinton, who has already been on 60 Minutes and everywhere. 

Where are the authors of the Stochastic Parrots paper, former Google AI safety chiefs Timnit Gebru and Margaret Mitchell, along with linguists Emily Bender and Angelina McMillan-Major? Where are the women and scholars of color who have been warning of the present-tense costs and risks of AI, instead of the future-shock doomsaying of the AI boys? Where is Émile Torres, who studies the faux philosophies that guide AI’s proponents and doomsayers, which Torres and Gebru group under the acronym TESCREAL? (See the video below.)

The problem is that the press and policymakers alike are heeding the voices of the AI boys who are proponents of these philosophies instead of the scholars who hold them to account. The afore-fired Sam Altman gets invited to Congress. When UK PM Rishi Sunak held his AI summit, whom did he invite on stage but Elon Musk, the worst of them. Whom did Sunak appoint to his AI task force but another adherent of these philosophies. 

To learn more about TESCREAL, watch this conversation with Torres that Jason Howell and I had on our podcast, AI Inside, so we can separate the bullshit from the necessary discussion. This is why we need more meetings like the one WEF held, with stakeholders besides AI’s present proponents so we might debate the issues, the risks — and the benefits — they could bring. 

Efficiency over growth (and jobs)

The hook to every song sung at Davos is “jobs, jobs, jobs.” The chorus of machers on the stages here operates under an article of faith that growth can come back, that they can stimulate it, that that will create jobs, and that all will eventually be well.

What if that’s not the case? I am coming to believe, more and more, that technology is leading to efficiency over growth. I’ve written about that here. This notion is obviously true in some sectors of society: see news and media, retail, travel sales, and other arenas. But how many more sectors will this rule strike: universities? government? banking? delivery? even manufacturing?

As I write this, I’m watching a WEF panel moderated by Reuters’ editor, Steve Adler, with Larry Summers and government and business leaders. They’re discussing growth strategies and so far we’re hearing the same notions we hear elsewhere in Davos, the complete trick bag: spend money on infrastructure, be nice to business, regulate less, reform taxes, reform immigration. OK and OK.

“The problems of job creation are more complicated than that. They are more complicated than wealth creation,” says one of the panelists (operating under Chatham House Rule, so I won’t attribute*). “This is a group that understands wealth creation better than job creation.” He says “there are inherent limits” to the number of people employed in various sectors.

I haven’t heard any strategy yet that reverses the trends underway in the transition from the industrial economy to the digital economy. What will offset the shrinking of vast industries? New industries? Well, we have new, digital industries, but they are even more efficient than restructured old industries. Compare Google’s staff size to GM’s, even now. Facebook serves almost a billion people with a staff the size of a large newspaper’s. Amazon employs far fewer people than the bookstores it put out of business did. So those new industries will bring growth, profit, and wealth, but not many jobs.

“There are fewer jobs for regular people because those innovations happened than there would have been if those innovations hadn’t happened,” the panelist says. It would be “a delusion” to think that encouraging this innovation will increase jobs.

So what if the key business strategy of the near-term future becomes efficiency over growth? Productivity will improve. Companies will be more profitable. Wealth will be created. But employment will suffer.

I’m hearing no strategies focused on this larger transition in a gathering about the transition. I think that’s because the institutions’ trick bags are empty. They ran an industrial society. That’s over. And the entrepreneurs who will create new companies but also new efficiency aren’t yet in power to solve the problem they create.

I ask the panel whether all this talk of jobs, jobs, jobs is so much empty rhetoric. I ask whether there are other tricks in the bag.

The panelist I’ve been quoting says that there are two sets of economic issues: In the short term, for the next five years, we are dealing with demand and macroeconomic policy. “Employment today has nothing to do with the Kindle,” he says. “It has everything to do with the financial system, deleveraging, and macroeconomic policy.”

It’s in the long term that the issues I’m addressing here come to bear. “For the longer term, we don’t have nearly as good answers as we would like to,” he says. “We are going to have to embrace the idea that we are going to have growing numbers of people involved in the provision of fundamental services to other people, services like health care and education. We’re going to need to make that work for society.”

That is to say, health and education don’t directly create wealth; they are services funded in great measure by taxes of one sort or another. Employing people in those sectors amounts to a redistribution of wealth with the fringe benefit of providing helpful services. Is a service-sector economy the secret to growth? Who pays for that when fewer people have jobs in the productive economy? I still don’t see an answer. This is not an economic policy so much as it is a social policy.

Another panelist says that we will have fewer people and we will need to retrain people throughout their lives for new jobs. I agree. But that doesn’t create jobs (except in schools); it just helps fill the ones we have.

One more panelist, from Europe, suggests that nations here will end up making stuff for the growing economies and consuming middle classes of China, India, Brazil, etc. In a globalized world with maximum price competition, I’m not so sure that’s a strategy for growth, only survival. I’d hate to place my strategic bets on continuing — or returning to — the industrial economy. And at some point, that strategy bumps up against the question of sustainability: is there enough stuff to go around?

Indeed, in a globalized society, we need to look at total jobs, the sum of work and productivity and demand, not country-by-country. The question is: Will jobs on the whole increase in this digital economy?

If instead efficiency increases — and with it, again, productivity and profit — then great wealth can be created: see Google, and the technology economy. But that means the disparity of income and capital will only widen yet more. And it’s just wide enough today to cause unrest around the world. That’s much of what #Occupy_WEF et al is about. That’s what is causing such tsuris and uncertainty on the stages of the world (Economic Forum). That’s what is causing the institutions represented here to fear, resist, and regulate technology in the hopes of forestalling the change it is bringing. There is the root of the disruption we’re witnessing now even in Davos.

* I saw Summers later and he gave me permission to quote him by name. He is the quotable panelist.

Studying the web

At the end of this video from this year’s Davos (at 2:30), Sir Tim Berners-Lee proposes the need to create an academic discipline — cutting across technology, psychology, anthropology, and other fields — to study and understand the web:

Now Rensselaer Polytechnic Institute announces that it is creating the first undergraduate degree in web science. (I found out about it through an email to my son, who was accepted there and is now deciding among the University of Rochester, NYU, George Washington University, and Boston University, plus Case Western, Drexel, and Northwestern…. if any of you have any advice and experience, let me know.) RPI says its students will “investigate issues on the Web related to security, trust, privacy, content value, and the development of the Web of the future.”

Sir Tim himself praised the RPI program in its press release. He has also helped start the Web Science Trust.

Of course, there are many good minds studying the web today, from danah boyd to Clay Shirky to Jay Rosen to Jonathan Zittrain. But I agree that it is time to pull together study and thinking and questions under a discipline that treats the web as the enormous social force it is. Says the Web Science site:

Nothing like the Web has ever happened in all of human history. The scale of its impact and the rate of its adoption are unparalleled. This is a great opportunity as well as an obligation. If we are to ensure the Web benefits the human race we must first do our best to understand it.

The Web is the largest human information construct in history. The Web is transforming society. In order to understand what the Web is, engineer its future and ensure its social benefit we need a new interdisciplinary field that we call Web Science.

The Flip dance

At the Google party at Davos, I was enticed into doing the Flip dance with none other than Sir Tim Berners-Lee:

Another Sir Tim video, from a session on social media: the first half of this 3:44 clip is him talking about the need for authority signals in social networks. In the middle, he takes pains to correct people who say that he invented the internet or created the web (no, he invented the web). The second half is his intriguing call for academic study of the web:

And, yes, it was a thrill to meet the man. It was wonderful seeing people come across him, spy his name tag, and gasp with glee and gratitude.

The disrupted of Davos

The theme of this year’s World Economic Forum meeting at Davos was “rethink, redesign, rebuild.” When a friend recited that list for me, I responded that given the institutions there, the more appropriate slogan is “replace.”

Last year when I arrived at Davos, I wondered whether we were among the problem or the solution. This year, I wondered whether we were among the future or the past. Well, actually, I don’t wonder.

We were among the disrupted. The only distinction among us is that some know it and some don’t. At Davos, I fear, most don’t.

I ran a session with international organizations about transparency and new ways they can govern themselves. I didn’t get far. “Oh, yes, we understand Twitter and all that,” they said. “We have people who do that for us.” Don’t you want to read what your constituents and the world are saying about you? “We don’t have time.” Oy. I invited a young disrupter into the room who talked about his ability to organize efforts to help people quickly — not so much breaking rules as discovering new ones — but he didn’t get far either.

I sat in a session about the future of journalism that was set in the past. Through no fault of the moderator, the panel pretty much recited the same old saws. The internet is filled with trivia, sniffed one: “The stuff that goes on the web is just suffocating.” The free market will not support a free press, declared another. (How do we know that already?) Thus their conclusion, said a few: the only hope for journalism is state and foundation support. Oy again.

At the end of the week, I sat in on a session trying to brainstorm under WEF’s theme of the three re’s. They said the point of the exercise was to get soundbites (as they used to be known; tweets as they are now known) and that’s what they got: PowerPoint (actually, Tumblr) platitudes. There were good points: We need to change what we measure, said one table, for now we get what we measure (true from media to economies). But there was also insipidness: “We are what we allow to happen.” And: “Ecology means caring. Equity means sharing.” Put that on your T-shirt and wash it.

Then a 17-year-old from Iraq scolded the entire room, telling them that these were just sayings. Where’s the action? he asked. Where are the specifics? That moment gave me hope: another disrupter, this one from the future.

The World Economic Forum actually does an admirable job trying to push its members into that future. I got involved — and got my ticket into Davos — because I helped them venture into blogging, to show institutions by example how to benefit from social media; that effort continues in video (YouTube is there) and Twitter (so is Ev Williams).

But one must wonder whether they can go fast enough — given this crowd’s resistance to change — and thus whether they are helping the right people. That’s why I didn’t blog during this meeting (my fourth): I simply didn’t hear much new. WEF does try to bring in new voices, its young global leaders and tech pioneers, but they are viewed by the entrenched powers as curiosities — sideshows — when they should be seen as the new bosses.

After one SOS (same old…) session, I told a WEF person that I dreamed of a new organization and event, a stepchild: the World Entrepreneurs Forum. Let’s bring together only the disrupters, only the people building the future rather than trying (desperately) to protect the past. Just as the old WEF forces its members to at least ask questions about their impact — on environment, values, trust, foresight — so should this new WEF push its participants to make sure they use their power of change responsibly, strategically, openly.

I have said of journalism that its future is entrepreneurial (not institutional). At this Davos, I came to see the same is true of much of our world. The shift from the industrial economy to whatever follows is well underway; only the leaders of the old order are largely blind to it, and in that willful ignorance there is great risk.

Entire industries are in various stages of disruption and destruction: news, media, entertainment, advertising, automotive, manufacturing, retail, real estate, telecommunications, transportation, health care…. The same will come to institutions, including government, nongovernmental and international organizations, and the academy. One university president fretted at Davos: “Just think what the world would be like if we left what universities do to the free market.” Well, yes, many companies are doing more than thinking about just that; they are building a new and needed future for education.

The disruption is everywhere. What makes technology a model is that it is in a state of constant disruption; it disrupts and deflates and rethinks and rebuilds itself constantly. But that 1000-r.p.m. Great Mandala is now buzz-sawing through the rest of society. Only the rest of society isn’t built for change. Neither is WEF — though it tries — because the change is too profound and too fast.

There’s a clear dividing line here: Do you fear and resist this change (WEF I) or do you create and enable it (WEF II… and note that I didn’t use “2.0”!)? That’s why I think there’s a need for a new WEF. I wouldn’t suggest transforming the first into the second. I’ve learned from a decade and a half of trying — naively, I now see — to do that with newspapers that it’s rarely if ever going to succeed, and for understandable reasons (the cost — in money, pain, and culture — is just too great). It is easier to build up than tear down.

We are seeing parallel worlds emerge — the disrupted and the disrupters — and they are not meant to share a fondue pot. So let’s pull together the disrupters and challenge them — as WEF has its institutions — to more fully understand the impact of their work, to use their power of change to solve problems, to collaborate (as is their reflex already). Let’s encourage them to look forward, not back, and let’s support their needs (in education, governance, infrastructure). Let’s rethink our priorities around those needs (in media, for example, let’s stop defaulting to government subsidies of dying institutions and instead encourage government to provide ubiquitous broadband to enable a new future; let’s start with the market).

Is WEF the organization to bring this together? Is there a need for an organization at all? When I pulled together a conference (call) of people planning to teach entrepreneurial journalism from around the world, one participant suggested creating a body but Sree Sreenivasan of Columbia protested: “We have enough organizations.” Right. So what structure would support the disrupters? If it’s a meeting, don’t hold it in the high mountains of Switzerland or the low valley of Silicon. Hold it in a place awaiting progress. Or just hold it online. Make it open. As Dave Winer says, the people who should be there are there.

I see the value in Davos: smart people with the power to get things done (well, once upon a time) able to mix and meet and sometimes learn and even act. I see similar benefit for the people who indeed are rethinking, redesigning, and rebuilding by replacing.

Next year in India or Africa or Brazil or at an IP address to be named…