Posts about the Guardian

The article and the future of print

This week, Guardian Editor-in-Chief Alan Rusbridger declared that the paper would go “digital first,” following John Paton’s lead and stopping a step short of his strategy at Journal Register: “digital first … print last.”

My Guardian friends are getting a bit tetchy about folks trying to tell them how to fix the institution, but given that it lost £34.4m last year, I’d say the intervention is warranted and should be seen only as loving care: chicken soup for the strategy. So I will join in.

My thoughts about the Guardian have something to do with my thoughts on the article. That’s a logical connection because the means of production and distribution of print are what mandated the invention of the article. So it is fitting that we consider its fate in that context.

But first let’s examine what it means to be digital first. It does not mean just putting one’s stories online before the presses roll. In that case, print still dictates the form and rhythm of news: everything in the process of a newsroom is still aimed at fitting round stories into squared holes on pages. That, as Jay Rosen says, is the key skill newsroom residents think they have (and the skill journalism schools prepare them for): the production cycle of print.

Digital first, aggressively implemented, means that digital drives all decisions: how news is covered, in what form, by whom, and when. It dictates that as soon as a journalist knows something, she is prepared to share it with her public. It means that she may share what she knows before she knows everything (there’s a vestige of the old culture, which held that we could know everything … and by deadline to boot) so she can get help from her public to fill in what she doesn’t know. That resets the journalistic relationship to the community, making the news organization a platform first, enabling a community to share its information and inviting the journalist to add value to that process. It means using the most appropriate media to impart information because we are no longer held captive to only one: text. We now use data, audio, video, graphics, search, applications, and wonders not yet imagined. Digital first is the realization that news happens with or without us — it mimics the architecture of the internet, end-to-end — and we must use all the tools available to add value where we can.

Digital first, from a business perspective, means driving the strategy to a digital future, no longer depending on the print crutch. That means creating a likely smaller and more efficient enterprise that can survive, then prosper post-monopoly, post-scarcity in an abundance-based media economy. It means serving the commercial needs of businesses in our communities in new ways: not just by selling space but by providing services (helping them with their own online strategies — including Google, Facebook, Groupon, craigslist, et al; training them; perhaps holding events with them). It means finding new efficiencies in the collaborative link economy. It means outrunning the grim reaper and getting past risky dependency on free-standing inserts (the coupons and circulars that will one day, sooner than we know — zap! — disappear) and retail advertising (which continues to implode) and the last vestiges of classified (how quaint) and circulation revenue (sorry!). It means getting rid of the cost of the analog business (“iron and real estate,” as Paton says).

Print last. Note that none of us — no, not even I — is saying print dead. Print, at least for a time, still has a place in serving content and advertising. But let’s re-examine that place even as we re-examine the role of the article, the journalist, and the advertisement in digital.

Since I spoke about this with Rusbridger the last time he was in New York to herald the coming of the Guardian for Yanks, I’ve refined my thinking. As I understand the Guardian’s well-known business — unlike that of many US papers and of at least one of its UK competitors, the Times — its Sunday paper, the Observer, is an economic burden. My thought earlier had been to give it up, just as many American papers are contemplating giving up other days of the week but keeping Sunday (and Thursdays and perhaps another … because they are still useful to wrap around those free-standing inserts). No, they won’t keep publishing on those days for journalistic purposes but because they have distribution value. Cynical, perhaps, but true.

But all this talk about the article has made me contemplate a new future: What if the Guardian became an online-only and international brand of news, multimedia, and comment and the Observer became a once-a-week (who cares what day of the week?) print brand of analysis, context, comment, and narrative? The Guardian has 37 million users, two-thirds of them outside the UK. Going online-only would enable it to become a truly international brand. The Observer could compete with the master of the article, the one publication that adds great value through the form: the Economist. As a newspaper of depth, this Observer could mimic Die Zeit in Germany, an amazing journal of reporting and commentary that is still growing in circulation. The print Observer could be printed in America, competing with weak-tea Sunday newspapers in markets across the country. Prior efforts to consider a print Guardian in the U.S. have stopped short. Could this succeed? Dunno.

The point is that the article as a high form of journalistic practice could succeed in a high-value print form while the Guardian could become a journal of news and comment in text, photo, video, audio, graphics, data….

What also makes me wonder about this is The New York Times’ proud announcement that it will remake its Week in Review into the Sunday Review next week. Truth be told, I haven’t read the Sunday Times in ages. I used to hang on its arrival at newsstands on Saturday nights in Manhattan and Brooklyn, but now I find it to be day-old bread, yeasty but stiff. Could The Times turn its plans for Sunday Review into an American Economist? I’m less sanguine about its chances than the Guardian’s. In either case, the winner would be the one that finds the greatest value in the old form of the article.

See, it’s not dead. It just needs a savior.

MORE: I meant to add a few thoughts on the form the article takes in these media. In digital, articles are still valuable to synthesize a story, to summarize a complex day’s news, to add context, and so on. Again, not all stories need such articles, but many will. In this vision of print, the article takes on a different raison d’être and a higher calling: It needs to add perspective. Bill Keller says it this way in his preview of the new Review:

Jonathan Landman, who took over the section from Dan Lewis, put it this way: The news sections’ job is to inform. (The desired reader reaction: “I didn’t know that!”) The opinion section’s job is to persuade. (“Yes, I see the light!”) The job of the Review is to help people see things in unexpected ways. (“I never thought of it that way!”)

I’d say The Economist presents the model for that kind of article. It is a high, a very high bar to reach. Can the Guardian attain that? Yes. The New York Times? Yes. The workaday local paper?

Related: Charlie Beckett on Wikileaks and the threat of new news. Terry Heaton on news and the story.

The distraction trope

In the Guardian, Jonathan Freedland is the latest curmudgeon to recycle Nick Carr’s distraction trope, microwave it, and serve it with gravy. The argument is that Twitter—though possibly a wonderful thing for Egyptian revolutionaries (we can argue that trope another day)—is distracting us Westerners from our important work of deep reading and deep thinking and something simply must be done. We have a crisis of concentration brought on by a crisis of distraction, he tells us. Some people I respect react and call this matter urgent.

Bollocks, as my Guardian friends would say.

I want you to think back with me now—I’m hypnotizing you, which should alleviate the stress of distraction, at least momentarily—to the moment in 1994 or soon thereafter when you discovered the World Wide Web and a new activity: browsing. Didn’t we all, every one of us, waste hours—days, even—aimlessly, purposelessly clicking links from one site to the next, not knowing where we would go and then not knowing where our hours went? Oh my God, we would never get anything done again, we fretted. We are all too distracted. We were hypnotized.

I know from market research I did back then that it was not long before browsing diminished and died as our primary behavior online. We became directed in our searches. We came to the web looking for something, got it, and moved on. That’s partly because the tools improved: Yahoo gave us a directory; brands took on the role of serving expected content; Google gave us search. But this change in behavior came mainly because we got over the newness of browsing and had other, more important things to do and we learned how to prioritize our time again.

It is ever thus. Think back to the early days of TV and cable: My God, with so much to watch, will we ever get anything done? The exact same argument can be made—indeed, one wishes it were made—about books: With so many of them unread, how can we possibly ever do anything else? But, of course, we do.

Twitter addiction shall pass. Have faith—faith in your fellow man and woman. I was busy doing other things yesterday, important things, and so I pretty much did not tweet. I survived without it. So, I’m depressed to say, did all of you without me. I just wrote in my book that Twitter indeed created a distraction to writing the book, as I was tempted by the siren call of the conversation that never ends. But it also helped with my writing that I always had ready researchers and editors, friends willing to help when I got stuck or needed inspiration.

Twitter is a tool to manage and we learn how to do that, once the new-car smell wears off. That’s exactly what has happened with blogging. And here is the moment the curmudgeons triumphantly declare the triumphalists wrong and blogging—which, remember, was also going to destroy us—dead or dying. What killed blogging? Twitter. Ah, the circle of life, the great mandala.

But I can guarantee that the distraction trope will be pulled out of the refrigerator and reheated again and again as the curmudgeons raise alarms about the destructive power of the next shiny thing. I’m loving reading a long-awaited new book by the esteemed Gutenberg scholar Elizabeth Eisenstein. In Divine Art, Infernal Machine, she takes us back to the exact same arguments over the printing press among the “triumphalists” and the “catastrophists.” That is perhaps a better title for our curmudgeons. She quotes Erasmus arguing that

the benefits of printing were almost eclipsed by complaints about increased output: swarms of new books were glutting the market and once venerated authors were being neglected. “To what corner of the world do they not fly, these swarms of new books?… the very multitude of them is hurting scholarship, because it creates a glut, and even in good things satiety is most harmful.” The minds of men “flighty and curious of anything new” are lured “away from the study of old authors.”

And isn’t the old authors’ real fear that they are being replaced? Control in culture is shifting.

What are our catastrophists really saying when they argue that Twitter is ruining us and Western (at least) civilization? They are branding us all sheeple. Ah, but you might say: Jarvis, aren’t you and your triumphalists making similarly overbroad statements when you say that these tools unlock new wonders in us? Perhaps. But there is a fundamental difference in our claims.

We triumphalists—I don’t think I am one but, what the hell, I’ll don the uniform—argue that these tools unlock some potential in us, help us do what we want to do and better. The catastrophists are saying that we can be easily led astray to do stupid things and become stupid. One is an argument of enablement. One is an argument of enslavement. Which reveals more respect for humanity? That is the real dividing line. I start with faith in my fellow man and woman. The catastrophists start with little or none.

Ah, but some will say, these tools are neutral. They can be used by bad actors as well. That’s certainly true. But bad actors are usually already bad. The tools don’t make them bad.

Take the Great Distractor of the age: Mark Zuckerberg and Facebook. The real debate over him in The Social Network and among privacy regulators and between catastrophists and triumphalists is about his motives. I write in Public Parts:

If, as the movie paints him, he acts out of his own cynical goals—getting attention, getting laid, getting rich—then manipulating us to reveal ourselves smells of exploitation. But if instead he has a higher aim—to help us share and connect and to make the world more open—then it’s easier to respect him, as Jake [my son] and I do. . . .

There is the inherent optimism that fuels the likes of him: that with the right tools and power in the right hands, the world will keep getting better. “On balance, making the world more open is good,” Zuckerberg says. “Our mission is to make the world more open and connected.” The optimist has to believe in his fellow man, in empowering him more than protecting against him. . . .

He believes he is creating the tools that help people to do what they naturally want to do but couldn’t do before. In his view, he’s not changing human nature. He’s enabling it.

I talked with Ev Williams at Twitter and he says similar things. He’s not trying to distract us to death. (That would be Evil Ev.) He’s trying to help us connect with each other and information, instantly, relevantly. (That is Good Ev.) It’s up to us how we use the tool well—indeed, we the community of users are the ones who helped invent the power of @ and # and $ and RT to refine the gift Ev et al gave us. I heard a similar mission from Dennis Crowley at Foursquare: helping us make serendipitous connections we otherwise wouldn’t.

Sir Tim Berners-Lee, the one who started this whole mess in the beginning (damn you, Sir!), is trying to push all the toolmakers to the next level, to better understand the science of what they are doing and to unlock the data layer of our world. Wonderful possibilities await—if you believe that the person next to you isn’t a distractable dolt but instead someone with unmet potential. There’s the real argument, my friends. And you are my friends, for remember that I’m the one who respects you.

New molecules

Guardian editor-in-chief Alan Rusbridger asked for help with his view of the fourth estate’s separation (outside the U.S.) into three sub-estates: legacy media, public media, our media (my wording). My response:

Pardon my metaphors:

I had a bunch of public broadcasters from Sweden at my school last week. They’re quite successful—audience is up; market share is up—and so it may be difficult for them to feel the urgency of the winds of change and move with them. I suggested that we are only beginning to feel the storm (/metaphor) and I argued that if we are coming out the other side of what some Danish researchers call (metaphor) the Gutenberg Parenthesis then our concepts of media and our consequent cognition of society will change profoundly over years yet to come.

In her amazing history of Gutenberg’s influence, Elizabeth Eisenstein argues that it took 50 years for books to come into their own and not merely copy the scribes and another 50 years or so for the impact of the press to become clear. The Gutenberg Parenthesis team argues that we are entering a period of confusion as great as the one Gutenberg caused. Granted, we are operating in internet years, not Gutenberg years. Still, we’ve only seen the beginning. And so I asked the Swedes to pull back and consider their role more broadly.

So I urged the Swedes to think of media as the essential tool of publicness, one that is no longer mediated. And given their role of being publicly supported (but not — I’ll grant them and the BBC their fig leaves — tax-supported), I suggested the best thing they could do is enable and protect the voice of the public. They could curate, train, promote, and collaborate with new people using new tools in new ways, for example. They could establish platforms that make that possible and networks that help make it sustainable. They could see it as their role to support a lively, healthy ecosystem and all of its members, including not only the new kids but also the struggling legacy media. (By that view, I’ve long argued that the BBC should make it its mission to use its powerful megaphone to promote and support the best of journalism and media in the UK, no matter who makes it; that is a public good.)

All of which is to say that I think your trilogy-view of media today is correct but temporary. We are still in the phase when the printers are copying the scribes’ fonts and content. New wine, old skins. We are also still in a phase of separating the old-media folks from the new-media folks, the public from the private, and for that matter, the media (the journalists) from the public. I think those distinctions must melt away when we move past the stage of copying the copyists and invent entirely new forms.

We see content as that which we make. Google sees content everywhere. Twitter creates content even Twitter doesn’t understand yet (our useless chatter has real value as a predictor of movies’ success). Blippy creates a transparent marketplace for stuff. Google Goggles with Foursquare and Yelp and Facebook and Google Maps and the devices we carry that are always connected and location-aware and us-aware force us to rethink our definitions of both local and news. The Guardian turns data into news by collaborating with the people formerly known as its audience. We ain’t seen nothin’ yet.

So I don’t think we’re yet at a stage of stasis where we can find three estates out of the fourth estate and count on the tensions among them to support a new dynamic of media.

Overlaying this view, I think we are entering a phase in the economy in which industries — filled with closed, centralized corporations that own their means of production or distribution — are replaced by ecosystems — filled with entities that must collaborate and cooperate and complement each other to find efficiencies and through those efficiencies profitability and sustainability. So the idea that your three sub-estates will compete won’t be sustainable; they will have to specialize and then collaborate and as that occurs there may still be separations of roles — e.g., creator v. curator, platform v. network, local v. national — but they are new separations.

What you are identifying is the start of an atomization of media. But I see those atoms reforming into new molecules. (/metaphor)

Human in the throne?

In March 2007, for a Guardian column, I asked the then-head of now-PM David Cameron’s web strategy whether the man would continue making his personal, folksy videos if he moved into No. 10. Sam Roake replied: “If it suddenly stopped, that would be seen as a very cynical move . . . You can’t stop communicating.” This, he argued, is “a new stage of politics” that is about “sustained dialogue with the public.”

We shall see.

The new No. 10 moved to new YouTube, Flickr, and Twitter addresses. They are putting up press conference videos and linking to photos of the PM.

Yes, but will he talk with the people from the kitchen, as he used to? His last Webcameron video asks people to vote (you’d think he’d at least have one saying thank you). We haven’t yet seen the PM buttering toast. Will he? Can he?

Barack Obama got to office using the internet to be human and then he took on the imperial form of the office, mostly giving pronouncements. And he now inexplicably tries to paint himself as a technofuddy. Nicolas Sarkozy also got to office using video to present himself as human. Now, I suppose, he’s more human than ever — though inadvertently; when I search YouTube for his name, the first video is of him drunk at the G8. Germany’s Angela Merkel, who frankly never came off as terribly warm and human, nonetheless makes a podcast.

Can a politician who takes the highest office stay human? In this age, can he or she afford not to? I think Roake was right: to stop communicating eye-to-eye turns the persona of the campaign into theater, or turns the office itself into theater.

Here’s a video I did of Cameron in Davos in 2008 asking him about talking to small cameras:

Rusbridger v. walls

Just as The New York Times announces its pay wall, Guardian Editor Alan Rusbridger gives an important speech on the topic — indeed, on the very nature of journalism — arguing against pay walls.

Charging, Rusbridger says, “removes you from the way people the world over now connect with each other. You cannot control distribution or create scarcity without becoming isolated from this new networked world.”

In an industry in which we get used to every trend line pointing to the floor, the growth of newspapers’ digital audience should be a beacon of hope. During the last three months of 2009 the Guardian was being read by 40 per cent more people than during the same period in 2008. That’s right, a mainstream media company – you know, the ones that should admit the game’s up because they are so irrelevant and don’t know what they are doing in this new media landscape – has grown its audience by 40 per cent in a year. More Americans are now reading the Guardian than read the Los Angeles Times. This readership has found us, rather than the other way round. Our total marketing spend in America in the past 10 years has been $34,000. . . .

This is the opposite of newspaper decline-ism, the doctrine which compels us to keep telling the world the editorial proposition and tradition we represent are in desperate trouble. When I think of the Guardian’s journey and its path of growth and reach and influence my instincts at the moment – at this stage of the revolution – are to celebrate this trend and seek to accelerate it rather than cut it off. The more we can spread the Guardian, embed it in the way the world talks to each other, the better.

Rusbridger warns The NY Times that if it shrinks behind its wall, The Guardian could become the biggest newspaper brand online. He imagines start-ups that “begin each day with a prayer session for all national newspapers to follow Rupert Murdoch behind a pay wall. That’s their business model.” His warning continues: “Let’s not leave the field so that the digital un-bundlers can come in, dismantle and loot what we have built up, including our audiences and readers.”

Rusbridger argues, as do I, that this is about more than a revenue line:

There is an irreversible trend in society today which rather wonderfully continues what we as an industry started – here, in newspapers, in the UK. It’s not a “digital trend” – that’s just shorthand. It’s a trend about how people are expressing themselves, about how societies will choose to organise themselves, about a new democracy of ideas and information, about changing notions of authority, about the releasing of individual creativity, about an ability to hear previously unheard voices; about respecting, including and harnessing the views of others. About resisting the people who want to close down free speech.

As [legendary Guardian editor C.P.] Scott said 90 years ago: “What a chance for the newspaper!” If we turn our back on all this and at the same time conclude that there is nothing to learn from it because what ‘they’ do is different – ‘we are journalists, they aren’t: we do journalism; they don’t’ – then, never mind business models, we will be sleepwalking into oblivion.