Posts about beta

Beta-think (and ending malaria)

The organizers of the End Malaria campaign are making one more push to sell the book they put together (at least $20 from every sale goes to buying nets), so they suggested that contributors post their essays as an enticement: you’ll want to read more like (or better than) this. Here’s my contribution on beta-think:

Voltaire was half right. “Le mieux est l’ennemi du bien,” he said: The best is the enemy of the good. The best is also the enemy of the better. Striving for perfection complicates and delays getting things done. Worse, the myth of perfection can shut off the process of improvement and the possibility of collaboration.

That myth of perfection is a byproduct of the industrial revolution and the efficiencies of mass production, distribution, and marketing. A product that takes a long time to design and produce is sold to a large market with a claim of perfection. Its manufacturer can’t have customers think otherwise. The distribution chain invests in large quantities of the product and can’t afford for it to be flawed. Mass marketing is spent to convince customers it is the best it can be. Thus perfection becomes our standard or at least our presumption. But perfection is delusion. Nothing and no one is perfect.

The modern cure to Voltaire’s paradox—and a gift of the digital age—is the beta: the unfinished and imperfect product or process that is opened up so customers can offer advice and improvements. Releasing a beta is a public act, an invitation to customers to help complete and improve it. It is an act of transparency and an admission of humility. It is also an act of generosity and trust, handing over a measure of control to others.

Google vice-president Marissa Mayer tells the story of the launch of Google News. It was near the end of the week. No online product is ever released on a Friday (if it breaks, your Saturday is ruined). So the team had just enough time before the weekend to add one more feature. They were debating whether to add a function to sort the news by date or by place. They never got past debating. Come Monday, Google News came out as a beta (and stayed a beta for three years). That afternoon, Mayer says, the team received 305 emails from users, 300 of which begged for sort by date.

By admitting they weren’t finished, the company heard from customers what to do next. “We make mistakes every time, every day,” Mayer confesses. “But if you launch things and iterate really quickly, people forget about those mistakes and have a lot of respect for how quickly you build the product up and make it better.” Beta is Google’s way of never having to say they’re sorry.

Beta-think can benefit more than technology products. I see betas coming from companies in fashion, restaurants, even chocolate and automotive. I wish we’d see more beta-think—more innovation, experimentation and risk—from government, but bureaucrats and politicians are loath to admit imperfection. I also wish that education would operate under beta-think, encouraging learning by failure rather than teaching to a test and a perfect score of right answers. Beta-think can change how we think as managers. It can even change marriage (so much for trying to find the perfect husband or fix all his imperfections).

Beta-think opens an enterprise to the surprising generosity of the public. Look at the value users build in Wikipedia, TripAdvisor, Yelp and other services they control. Beta-think improves an institution’s relationship with its public. Making errors—and confessing and correcting them quickly—will enhance rather than diminish credibility. Once the fear of imperfection is taken out of the equation, innovation can flourish. Look at how Zappos improved customer service by letting employees make their own decisions (and mistakes). What does beta-think do to competitiveness? How can you show your hand to your rivals? That depends on where you see your real value: in keeping secrets from customers or in building strong relationships of trust by listening to and collaborating with them. Beta-think also brings speed. Even perfectionist Apple released its iPhone aware that it was incomplete, promising missing pieces in future updates.

Here’s the wonderful irony of beta-think: It says that we can make what we do ever-better because we are never done, never satisfied, always seeking ways to improve by working in public.

This essay, too, is a beta. It’s not perfect. I’m not done with it. So please come help make it better.

Beta-think and ending malaria

Amazon, Seth Godin’s Domino, and other good folks collaborated to come out with a book of essays whose proceeds go to buy mosquito nets to end malaria. My essay for End Malaria Day is actually the topic of the next book I was going to do until I got all hopped up on publicness and privacy and wrote Public Parts. The essay is on beta-think. Here’s a snippet from the start:

* * *

Voltaire was half right. “Le mieux est l’ennemi du bien,” he said: The best is the enemy of the good. The best is also the enemy of the better. Striving for perfection complicates and delays getting things done. Worse, the myth of perfection can shut off the process of improvement and the possibility of collaboration.

That myth of perfection is a byproduct of the industrial revolution and the efficiencies of mass production, distribution, and marketing. A product that takes a long time to design and produce is sold to a large market with a claim of perfection. Its manufacturer can’t have customers think otherwise. The distribution chain invests in large quantities of the product and can’t afford for it to be flawed. Mass marketing is spent to convince customers it is the best it can be. Thus perfection becomes our standard or at least our presumption. But perfection is delusion. Nothing and no one is perfect.

The modern cure to Voltaire’s paradox—and a gift of the digital age—is the beta: the unfinished and imperfect product or process that is opened up so customers can offer advice and improvements. Releasing a beta is a public act, an invitation to customers to help complete and improve it. It is an act of transparency and an admission of humility. It is also an act of generosity and trust, handing over a measure of control to others.

Editing your customers

“Almost everything you see in Twitter today was invented by our users,” its creator, Jack Dorsey, said in this video (found via his investor, Fred Wilson). RT, #, @, & $ were conventions created by users that were then—sometimes reluctantly—embraced by Twitter, to Twitter’s benefit. Dorsey said it is the role of a company to edit its users.

Edit. His word. I’m ashamed that I haven’t been using it in this context, being an editor myself and writing about the need for companies to collaborate with their customers.

I have told editors at newspapers that, as aggregators and curators, they will begin to edit not just the work of their staffs but the creations of their publics. But that goes only so far; it sees the creations of that public as contributions to what we, as journalists, do. And that speaks only to media organizations. Dorsey talks about any company as editor.

I have also told companies—it was a key moral to the story in What Would Google Do?—that they should become platforms that enable others to take control of their fates and succeed.

Twitter is such a platform. As Dorsey said in the video, it constantly iterates and that enables it to take in the creations of users. Months ago, when I wished for a Twitter feature, Fred Wilson tweeted back that that’s what the independent developers and applications are for. Indeed, Twitter enabled developers to create not only features but businesses atop it. But then when Twitter bought or created its own versions of these features created by developers, it went into competition with those developers, on whom Twitter depended to improve—to complete, really—its service. That’s a new kind of channel conflict—competing with your co-creators—that companies will also have to figure out as they become not just producers but editors.

Anyway, I like Dorsey’s conception of company as editor because it requires openness—operating and developing in public; it assumes process over product; it values iteration; it implies collaboration with one’s public; it still maintains the company’s responsibility for quality. An editor has nothing to edit if others haven’t created anything, so it is in the editor’s interest to enable others to create. And the better the creations that public makes, the better off the editor is, so it’s also in the company-as-editor’s interest to improve what that public creates, through better tools and often training and economic incentives.

Privacy, publicness & penises

Here is video of the talk I gave at re:publica 2010 in Berlin on The German Paradox: Privacy, publicness, and penises. (Don’t be frightened by the first moments in German; it’s just an introduction and a joke — with fire extinguisher — about how I had threatened to Hendrix my iPad on the stage in Berlin.)

My subject is all the more relevant given this week’s letter to Google from privacy czars in a handful of countries trying to argue that Google Street View taking pictures in public violates privacy. In my talk, I argue that what is public belongs to us, the public, and efforts to reduce what’s public steal from us. Journalists should be particularly protective of what is public; so should we all. (The czars also argued, amazingly, that Google shouldn’t release betas. They come, you see, from an old world of centralized control — and the myth that processes can be turned into products, finished, complete, even perfect — instead of the new world of openness and collaboration.)

With so much discussion — even panic — about privacy today, I fear that we risk losing the benefits of publicness, of the connections enabled by the internet and our interconnected world. If we shift to a default of private, we lose much, and I argue that we should weigh that choice when we decide what to put behind a wall — and there are too many walls being built today. But we’re not discussing the benefits of the public vs. the private. I want to spark that discussion.

I use Germany as a laboratory and illustration of the topic not only because I was there but because Germans have something nearing a cultural obsession with privacy. What’s true there is true elsewhere, including the U.S., though to a different degree. I also only skim the surface of the topic in this video; there is so much more to talk about: how publicness benefits the ways we can and now must do business; how it affects government; how it alters education; how it changes our relationships; how young people bring a new culture that cuts across all national boundaries and expectations; how it multiplies our knowledge; how it creates value; how it leads to a new set of ethics; and much more. But that’s for another time and medium.

In the talk, this all leads up to the Bill of Rights in Cyberspace, which is really about openness and protecting that.

At the end of my time on stage, I invited the room to continue the discussion next door in the sauna. Four guys did show up. Here’s the proof.

If you prefer, here are my slides with the audio of my talk and discussion, thanks to Slideshare:

The coverage of the talk in German media amazes me. It made the front page of three papers, with coverage in more, plus a prime-time TV show and radio. Coverage included Welt Kompakt and Welt, Welt again, Berliner Zeitung, Berliner Zeitung again, Zeit Online covers the talk, then Zeit feels compelled to respond and start a reader-debate, Spiegel, the German press agency, the Evangelical News Service, Berliner Morgenpost, Berliner Morgenpost again, Bild, Tagesspiegel, taz, taz again, taz again, Berliner Kurier, Berliner Kurier again, 3sat, Süddeutsche Zeitung, BZ, Frankfurter Rundschau, business magazine WirtschaftsWoche, L’Express in France, ORF TV in Austria, and more than one blog. And today add der Freitag. A week later comes an interview in the Berliner Zeitung.

Coverage of my re:publica talk

And here is a slice of an illustration of my talk by the artist (who tweeted beforehand about having to draw a penis for the first time in her talk-illustration career) that appeared in the German paper Der Freitag this week:

der Freitag re re:publica

Yet more: Here’s an interview, conducted in Berlin, that summarizes my views:

: LATER: Penelope Trunk, who lives in public, writes: “The value of your privacy is very little in the age of transparency and authenticity. Privacy is almost always a way of hiding things that don’t need hiding. … And transparency trumps privacy every time. So put your ideas in social media, not email.”

: AND: I just got a message on Facebook from the woman I talk about in the sauna in Davos, the one I said was an American freaked by the mixed, nude crowd of sweaty Russians and me. She thought it was quite funny … especially because she’s French (living in America).

iPad danger: app v. web, consumer v. creator

I tweeted earlier that after having slept with her (Ms. iPad), I woke up with morning-after regrets. She’s sweet and pretty but shallow and vapid.

Cute line, appropriate for retweets. But as my hangover settles in, I realize that there’s something much more basic and profound that worries me about the iPad — and not just the iPad but the architecture upon which it is built. I see danger in moving from the web to apps.

The iPad is retrograde. It tries to turn us back into an audience again. That is why media companies and advertisers are embracing it so fervently, because they think it returns us all to their good old days when we just consumed, we didn’t create, when they controlled our media experience and business models and we came to them. The most absurd, extreme illustration is Time Magazine’s app, which is essentially a PDF of the magazine (with the odd video snippet). It’s worse than the web: we can’t comment; we can’t remix; we can’t click out; we can’t link in, and they think this is worth $4.99 a week. But the pictures are pretty.

That’s what we keep hearing about the iPad as the justification for all its purposeful limitations: it’s meant for consumption, we’re told, not creation. We also hear, as in David Pogue’s review, that this is our grandma’s computer. That cant is inherently snobbish and insulting. It assumes grandma has nothing to say. But after 15 years of the web, we know she does. I’ve long said that the remote control, cable box, and VCR gave us control of the consumption of media; the internet gave us control of its creation. Pew says that a third of us create web content. But all of us comment on content, whether through email or across a Denny’s table. At one level or another, we all spread, react, remix, or create. Just not on the iPad.

The iPad’s architecture supports these limitations in a few ways:

First, in its hardware design, it does not include a camera — the easiest and in some ways most democratic means of creation (you don’t have to write well) — even though its smaller cousin, the iPhone, has one. Equally important, it does not include a simple (fucking) USB port, which means that I can’t bring in and take out content easily. If I want to edit a document in Apple’s Pages, I have to go through many hoops of moving and syncing and emailing or using Apple’s own services. Cloud? I see no cloud, just Apple’s blue skies. Why no USB? Well, I can only imagine that Apple doesn’t want us to think what Walt Mossberg did in his review — the polar opposite of Pogue’s — that this pad could replace its more expensive laptops. The iPad is purposely handicapped, but it doesn’t need to be. See the German WePad, which comes with USB port(s!), a camera, multitasking, and the more open Android operating system and marketplace.

Second, the iPad is built on apps. So are phones, Apple’s and others’. Apps can be wonderful things because they are built to a purpose. I’m not anti-app, let’s be clear. But I also want to stop and examine the impact of shifting from a page- and site-based internet to one built on apps. I’ve been arguing that we are, indeed, moving past a page-, site-, and search-based web to one also built on streams and flows, to a distributed web where you can’t expect people to come to you but you must go to them; you must get yourself into their streams. This shift to apps is a move in precisely the opposite direction. Apps are more closed, contained, controlling. That, again, is why media companies like them. But they don’t interoperate — they don’t play well — with other apps and with the web itself; they are hostile to links and search. What we do in apps is less open to the world. I just want to consider the consequences.

So I see the iPad as a Bizarro Trojan Horse. Instead of importing soldiers into the kingdom to break down its walls, in this horse, we, the people, are stuffed inside and wheeled into the old walls; the gate is shut and we’re welcomed back into the kingdom of controlling media that we left almost a generation ago.

There are alternatives. I now see the battle between Apple and Google Android in clearer focus. At Davos, Eric Schmidt said that phones (and he saw the iPad as just a big phone… which it is, just without the phone and a few other things) will be defined by their apps. The mobile (that is to say, constantly connected) war will be won on apps. Google is competing with openness, Apple with control; Google will have countless manufacturers and brands spreading its OS, Apple will have media and fanboys (including me) do the work for it.

But Google has a long way to go if it hopes to win this war. I’m using my Nexus One phone (which I also had morning-after doubts about) and generally liking it but I still find it awkward. Google has lost its way, its devotion to profound simplicity. Google Wave and Buzz are confusing and generally unusable messes; Android needed to be thought through more (I shouldn’t have to think about what a button does in this use case before using it); Google Docs could be more elegant; YouTube’s redesign is halfway to clean. Still, Google and Apple’s competition presents us with choices.

I find it interesting that though many commercial brands — from Amazon to Bank of America to Fandango — have written apps for both Apple and Android, many media brands — most notably The New York Times and my Guardian — have written only for Apple, and they are now devoting many resources to recreating their apps for the iPad. The audience on Android is bigger than the audience on the iPad, but the sexiness and control Apple offers is alluring. This, I think, is why Salon CEO Richard Gingras calls the iPad a fatal distraction for publishers. They are deluding themselves into thinking that the future lies in their past.

On This Week in Google last night, I went too far slavering over the iPad and some of its very neat apps (ABC’s is great; I watched the Modern Family episode about the iPad on the iPad and smugly loved being so meta). I am a toy boy at heart and didn’t stop to cast a critical eye, as TWiG’s iPadless Gina Trapani did. This morning on Twitter, I went too far the other way, kvetching about the iPad’s inconvenient limitations (just a fucking USB port, please!) to compensate. That’s the problem with Twitter, at least for my readers: it’s thinking out loud.

I’ll sleep with the iPad a few more nights. I might well rebox and return it; I don’t have $500 to throw away. But considering what I do for a living, I perhaps should hold onto it so I can understand its implications. And that’s the real point of this post: there are implications.

: MORE: Of course, I must link to Cory Doctorow’s eloquent examination of the infantilization of technology. I’m not quite as principled, I guess, as Cory is on the topic; I’m not telling people they should not buy the iPad; I don’t much like that verb in any context. But on the merits and demerits, we agree.

And Dave Winer: “Today it’s something to play with, not something to use. That’s the kind way to say it. The direct way: It’s a toy.”

: By the way, back in the day, about a decade ago, I worked with Intel (through my employer, Advance) on a web pad that was meant to be used for consumption in the home (we knew then that the on-screen keyboard sucked; it was meant to be a couch satellite to the desk’s PC). Intel lost its nerve and didn’t launch it. Besides, the technology was early (they built the wireless on Intel AnyPoint, not wi-fi or even Bluetooth). Here’s the pad in the flesh. I have it in my basement museum of dead technology, next to my CueCat.

: More, Monday: NPR’s related report and Jonathan Zittrain’s worries.

TEDxNYed: This is bullshit

Here are my notes for my talk at the TEDxNYed gathering this past weekend. I used the opportunity of a TED event to question the TED format, especially in relation to education, where — as in media — we must move past the one-way lecture to collaboration. I feared I’d get tomatoes — organic — thrown at me at the first line, but I got a laugh and so everything was OK from there. The video won’t be up for a week or two so I’ll share my notes. It’s not word-for-word what I delivered, but it’s close….

* * *

This is bullshit.

Why should you be sitting there listening to me? To paraphrase Dan Gillmor, you know more than I do. Will Richardson should be up here instead of me. And to paraphrase Jay Rosen, you should be the people formerly known as the audience.

But right now, you’re the audience and I’m lecturing.

That’s bullshit.

What does this remind us of? The classroom, of course, and the entire structure of an educational system built for the industrial age, turning out students all the same, convincing them that there is one right answer — and that answer springs from the lectern. If they veer from it they’re wrong; they fail.

What else does this remind us of? Media, old media: one-way, one-size-fits-all. The public doesn’t decide what’s news and what’s right. The journalist-as-speaker does.

But we must question this very form. We must enable students to question the form.

I, too, like lots of TED talks. But having said that….

During the latest meeting of Mothership TED, I tweeted that I didn’t think I had ever seen any TEDster tweet anything negative about a talk given there, so enthralled are they all to be there, I suppose. I asked whether they were given soma in their swag bags.

But then, blessed irony, a disparaging tweet came from none other than TED’s curator, dean, editor, boss, Chris Anderson. Sarah Silverman had said something that caused such a kerfuffle that Anderson apologized and then apologized for the apology, so flummoxed was he by someone coming into the ivory tower of TED to shake things up with words.

When I tweeted about this, trying to find out what Silverman had said, and daring to question the adoration TEDsters have for TED, one of its acolytes complained about my questioning the wonders of TED. She explained that TED gave her “validation.”


Good God, that’s the last thing we should want. We should want questions, challenges, discussion, debate, collaboration, quests for understanding and solutions. Has the internet taught us any less?

But that is what education and media do: they validate.

They also repeat. In news, I have argued that we can no longer afford to repeat the commodified news the public already knows because we want to tell the story under our byline, exuding our ego; we must, instead, add unique value.

The same can be said of the academic lecture. Does it still make sense for countless teachers to rewrite the same essential lecture about, say, capillary action? Used to be, they had to. But not now, not since open curricula and YouTube. Just as journalists must become more curator than creator, so must educators.

A few years ago, I had this conversation with Bob Kerrey at the New School. He asked what he could do to compete with brilliant lectures now online at MIT. I said don’t compete, complement. I imagined a virtual Oxford based on a system of lecturers and tutors. Maybe the New School should curate the best lectures on capillary action from MIT and Stanford or from a brilliant teacher who explains it well even if not from a big-school brand; that could be anyone in YouTube U. And then the New School adds value by tutoring: explaining, answering, probing, enabling.

The lecture does have its place to impart knowledge and get us to a shared starting point. But it’s not the be-all and end-all of education – or journalism. Now the shared lecture is a way to find efficiency in ending repetition, to make the best use of the precious teaching resource we have, to highlight and support the best. I’ll give the same advice to the academy that I give to news media: Do what you do best and link to the rest.

But that still hasn’t moved past the lecture and the teacher as the starting point. I think we must make the students the starting point.

At a Carnegie event at the Paley Center a few weeks ago, I moderated a panel on teaching entrepreneurial journalism and it was only at the end of the session that I realized what I should have done: start with the room, not the stage. I asked the students in the room what they wished their schools were teaching them. It was a great list: practical yet visionary.

I tell media that they must become collaborative, because the public knows much, because people want to create, not just consume, because collaboration is a way to expand news, because it is a way to save expenses. I argue that news is a process, not a product. Indeed, I say that communities can now share information freely – the marginal cost of their news is zero. We in journalism should ask where we can add value. But note that that in this new ecosystem, the news doesn’t start with us. It starts with the community.

I’ve been telling companies that they need to move customers up the design chain. On a plane this week, I sat next to a manufacturer of briefcases and asked: if, say, TechCrunch could get road warriors to design the ultimate laptop bag for them, would he build it? Of course, he would.

So we need to move students up the education chain. They don’t always know what they need to know, but why don’t we start by finding out? Instead of giving tests to find out what they’ve learned, we should test to find out what they don’t know. Their wrong answers aren’t failures, they are needs and opportunities.

But the problem is that we start at the end, at what we think students should learn, prescribing and preordaining the outcome: We have the list of right answers. We tell them our answers before they’ve asked the questions. We drill them and test them and tell them they’ve failed if they don’t regurgitate back our lectures as lessons learned. That is a system built for the industrial age, for the assembly line, stamping out everything the same: students as widgets, all the same.

But we are no longer in the industrial age. We are in the Google age. Hear Jonathan Rosenberg, Google’s head of product management, who advised students in a blog post. Google, he said, is looking for “non-routine problem-solving skills.” The routine way to solve the problem of misspelling is, of course, the dictionary. The non-routine way is to listen to all the mistakes and corrections we make and feed them back to us in the miraculous “Did you mean?”
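That feedback loop can be sketched in a few lines. To be clear, this is only a toy illustration of the principle — learning corrections from the rewrites users themselves make — not Google’s actual system; the function names, the sample sessions, and the threshold are all invented for the example.

```python
from collections import Counter

def learn_corrections(query_sessions):
    """Count how often one query is quickly rewritten as another.

    Each session is a list of consecutive queries from one user; a pair
    (first, second) is treated as a possible mistake-and-correction.
    """
    rewrites = Counter()
    for session in query_sessions:
        for first, second in zip(session, session[1:]):
            if first != second:
                rewrites[(first, second)] += 1
    return rewrites

def did_you_mean(query, rewrites, threshold=2):
    """Suggest the rewrite users most often made after this query."""
    candidates = [(count, fixed)
                  for (orig, fixed), count in rewrites.items()
                  if orig == query and count >= threshold]
    if not candidates:
        return None
    return max(candidates)[1]  # highest-count rewrite wins

# Invented sample data: three users retyped the same misspelling.
sessions = [
    ["restarant", "restaurant"],
    ["restarant", "restaurant"],
    ["capillary action"],
    ["restarant", "restaurant"],
]
rewrites = learn_corrections(sessions)
print(did_you_mean("restarant", rewrites))  # restaurant
```

The dictionary is the routine answer; this is the non-routine one, and the point is that the data does the work: nobody wrote a spelling rule, the crowd’s own corrections became the suggestion.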

“In the real world,” he said, “the tests are all open book, and your success is inexorably determined by the lessons you glean from the free market.”

One more from him: “It’s easy to educate for the routine, and hard to educate for the novel.” Google sprang from seeing the novel. Is our educational system preparing students to work for or create Googles? Googles don’t come from lectures.

So if not the lecture hall, what’s the model? I mentioned one: the distributed Oxford: lectures here, teaching there.

Once you’re distributed, you have to ask: why have a university? Why have a school? Why have a newspaper? Why have a place or a thing? Perhaps, like a new news organization, the tasks shift from creating and controlling content and managing scarcity to curating people and content and enabling an abundance of students and teachers and of knowledge: a world where anyone can teach and everyone will learn. We must stop selling scarce chairs in lecture halls and thinking that is our value.

We must stop our culture of standardized testing and standardized teaching. Fuck the SATs.* In the Google age, what is the point of teaching memorization?

We must stop looking at education as a product – in which we turn out every student giving the same answer – and start seeing it as a process, in which every student looks for new answers. Life is a beta.

Why shouldn’t every university – every school – copy Google’s 20% rule, encouraging and enabling creation and experimentation, every student expected to make a book or an opera or an algorithm or a company? Rather than showing our diplomas, shouldn’t we show our portfolios of work as a far better expression of our thinking and capability? The school becomes not a factory but an incubator.

There’s another model for an alternative to the lecture and it’s Dave Winer’s view of the unconference. At the first Bloggercon, Dave had me running a panel on politics and when I said something about “my panel,” he jumped down my throat, as only Dave can. “There is no panel,” he decreed. “The room is the panel.” Ding. It was in the moment that I learned to moderate events, including those in my classroom, by drawing out the conversation and knowledge of the wise crowd in the room.

So you might ask why I didn’t do that here today. I could blame the form; I didn’t want to break the form. But we all know there’s another reason:


* That was an ad-lib

Buzz: A beta too soon

As soon as Buzz was announced — before I could try it — I tried to intuit its goals and I found profound opportunities.

Now that I’ve tried it, reality and opportunity are a fer piece apart. It’s awkward. I’d thought I wanted Twitter to be threaded but I was wrong; the simplest point quickly passes into an overdose of add-ons. Worse, Google didn’t think through critical issues of privacy — and it only gets worse (via danah boyd). I won’t go as far as Steve Rubel and some others, who instantly declared Buzz DOA; there is the essence of something important here (which I think will come out in mobile more than the web). But there’s no question: Buzz has kinks.

I was going to use that line in the headline — that Buzz is a beta too soon — but the irony is that Buzz is the one product Google did not release as a beta. Big mistake, I think.

In fact, even if Buzz had been released as a beta to a small audience, I’m not sure all the problems would have surfaced because it takes a lot of people using it to surface those problems: unwanted connections and too much noise.

So I wonder whether Google should have moved the users up the design chain — something I’ve been advising retailers and manufacturers to do. The sooner one can learn from one’s customers/users/public (not turning design into democracy but enabling the target to help make you smarter and make what you’re creating better), the better. What if Google had released screenshots and wireframes of Buzz? It’s not as if someone else was going to steal it; Buzz was Google catching up to Twitter, Facebook, and Foursquare anyway. Very few people would have bothered to dig into the design of the product but enough might have — the 1% rule — to warn Google off the worst of Buzz’s bloopers.

Then again, isn’t that what Google did with Wave? Some — many of the same insta-critics — declared it too difficult and DOA while I reminded people that Google specifically said it released a version very early in the process so people could use it and, more importantly, develop new products atop it and through that, Google would learn what Wave really was.

So where’s the happy medium? Or as I ask in the presentation I’ve been making on Beta (likely next book): When’s the beta baked? How done is done?

I’ll be contemplating the answer to those questions, and I ask for your help and opinions and stories and examples.

Were I to give Google advice on Buzz — what the heck, everyone else is — I think I’d release a product plan for comment, then put out a clearly labeled beta, then invite only volunteers to try it, and then make sure that at every step there’s a clear opportunity for me to opt out of a choice and tell Google why I was doing so, so Google could learn. I’d listen better.

MORE: This is a video I did for the release of What Would Google Do?, summarizing the beta section in the book, which in turn inspired the thinking above:

Media after the site

Tweet: What does the post-page, post-site, post-media media world look like? @stephenfry, that’s what.

The next phase of media, I’ve been thinking, will be after the page and after the site. Media can’t expect us to go to it all the time. Media has to come to us. Media must insinuate itself into our streams.

I’ve been trying to imagine what that would be and then I was Skype-chatting with Nick Denton (an inspirational pastime I’ve had too little of lately) and he knew exactly what it looks like:


Spot on. Fry insinuated himself into my stream. He comes to us. We distribute him. He has been introduced to and acquired new fans. He now has a million followers, surely more than for any old web site of his. He did it by his wit(s) alone. His product is his ad, his readers his agency. How will he benefit? I have full faith that he of all people will find the way to turn this into a show and a book. He is media with no need for media. I was trying to avoid using Ashton Kutcher as my example, but he’s on the cover of Fast Company making the same point: “He intends to become the first next-generation media mogul, using his own brand as a springboard…. ‘The algorithm is awesome,’ Kutcher says…”

That’s media post-media.

This view of the future makes it all the more silly and retrograde for publishers like Murdoch to complain about the value of the readers Google sends to them. Who says readers will or should come to us at all? We were warned of this future by that now-legendary college student who said in Brian Stelter’s New York Times story (which foretold the end of the medium in which it appeared): “If the news is that important, it will find me.”

If a page (or a site) becomes anything, it will be a repository, an archive, a collecting pool in which to gather permalinks and Googlejuice: an article plus links plus streams of comments and updates and tweets and collaboration via tools like Wave. Content will insinuate itself into streams and streams will insinuate themselves back into content. The great Mandala.

The notion of the stream takes on more importance when you think about your always-connected and always-on device, whatever the hell you call it (phone, tablet, netbook, eyeglasses, connector….). I recently saw a telecommunications technology exec show off a prototype of a screen he says will be here in a year or so that not only has color and full-motion video and can be seen in ambient light but that takes so little power that it can and will be on all the time. So rather than hitting that button on the iPhone to see what’s new, your post-phone post-PC device is always on and always connected. You don’t sneak it under the table to turn it on now and again. You leave it on the table and it constantly streams.

Is that stream news? Only a small portion of your stream – whatever you want, whatever you allow in – will be. Just as publishers’ news is only a small portion of the value of what Google returns in search, we mustn’t be so hubristic as to think that the streams flowing by readers’ eyes will be owned, controlled, and filled by media with what they declare to be news. They will be filled with life.

The real value waiting to be created in the stream-based web is prioritization. That’s part of what Clay Shirky is driving at when he talks about algorithmic authority and what Marissa Mayer talks about when she says news streams will be hyperpersonal. The opportunity in news is not to try to mass-prioritize it for everyone at once – impossible! – but to help each of us do it. To make that work, it will have to be personal, and personal will scale only if it’s algorithmic, and the algorithm will work only if we trust and value what it delivers. So how do you learn enough about me, who I am, what I do, and what I need so you can solve my personal filter failure and show me the emails and tweets and updates and, yes, news I’ll most want to read? What tricks can you bring to bear, as Google did and Facebook did: the wisdom of a crowd – perhaps my crowd? the value of editors still?

So imagine this future without pages and sites, this future that’s all built on process over product. If you’re what used to be called a content creator – if you’re Stephen Fry, post-media – you’re all about insinuating yourself into that stream. If you’re about content curation – formerly known as editing – then you’re all about prioritizing streams for people; that’s how you add value now.

Getting people to come to you so you can tell them what you say they should know while showing them ads they didn’t want from advertisers who bear the cost and risk of the entire experience? That’s just so 2008. Now it’s time to go with the stream.