Recommending the tail

Wharton and O’Reilly just released two provocative reports on whether social distribution and recommendation really get into the long tail.

First, O’Reilly’s report on the distribution of Facebook apps:

The good news has already been widely disseminated: there are nearly 5000 Facebook applications, and the top applications have tens of millions of installs and millions of active users. The bad news, alas, is in our report: 87% of the usage goes to only 84 applications! Only 45 applications have more than 100,000 active users. This is a long tail marketplace with a vengeance — but unfortunately, the economic models (for developers at least, though not for Facebook itself) all rely on getting into the very short head.

I think there are a few reasons for that. First, the Facebook platform is so damned new. If the same analysis of the entire web had been made in December, 1994, two months after Netscape’s release, it would have shown that Netscape got most of the attention along with a camera on a coffee pot. It took a long time for the Web to develop its incredible depth: its tail. The Facebook platform is very much in its infancy. It’s far too soon to draw any grand conclusions.

More substantively, I think one reason for this undistributed distribution is the nature of social apps: They gain in value the more that people (especially people you know) use them, and so the community is uniquely motivated to create blockbusters. It’s one thing to simply recommend things to people (more on that from Wharton in a minute); it doesn’t really affect you if more people watch the movie you recommend, except that you feel as if you’re part of a trend and maybe you can discuss it with them. Those are light motives. By contrast, many Facebook apps are all but useless if your friends don’t use them; that’s the social in it. This creates more of a gathering point than mere recommendation.

I think there’s a lesson in this for old, blockbuster-oriented economies, mainly entertainment and media: How do you improve your product for all by having more people involved in it? And how does that motivate people to spread it for you? We have seen this happening in online forums: the more people who are involved, the more people get involved (though there is a tipping point; you can have too many people). I wonder whether collaborative media could take on this effect. Lonelygirl15 may be an example: people made media around the media and spread the original along with their creations. How can newspapers and TV shows do likewise? How do the collaboration and the involvement of your friends improve the product, and how then do you get your friends involved? If I were trying to produce a social news or entertainment product, I’d investigate that formula.

Now shift to mere recommendation. The Wharton report (via PaidContent) says that as presently implemented, automated recommendation systems tend to cluster people around products and create blockbusters.

Online retailers may be shooting themselves in the tail — the long tail, that is, according to Kartik Hosanagar, Wharton professor of operations and information management, and Dan Fleder, a Wharton doctoral candidate, in new research on the “recommenders” that many of these retailers use on their websites. Recommenders — perhaps the best known is Amazon’s — tend to drive consumers to concentrate their purchases among popular items rather than allow them to explore and buy whatever piques their curiosity, the two scholars suggest in a working paper titled, “Blockbuster Culture’s Next Rise or Fall: The Impact of Recommender Systems on Sales Diversity.”

Hosanagar and Fleder argue that online recommenders “reinforce the blockbuster nature of media.” And they warn that, by deploying standard designs, online retailers may be recreating the very phenomenon — circumscribed media purchasing choices — that some of them have bragged about helping consumers escape.

The problem is with automated recommendations, and that’s a critical point:

“Because common recommenders recommend products based on sales and [consumer] ratings, they cannot recommend products with limited historical data, even if they would be rated favorably,” they write. “This can create rich-get-richer effects for popular products and vice-versa for unpopular ones, which results in less diversity.”
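To make that rich-get-richer loop concrete, here’s a toy simulation of my own; it is not the researchers’ model, and the catalog size, number of shoppers, shelf size, and follow rate are all made-up parameters. The recommender only ever shows the current best sellers, and every purchase of a best seller makes it more likely to be shown again:

```python
import random
from collections import Counter

# Toy "recommend what's already popular" loop. All numbers are illustrative;
# this is not the model from the Hosanagar/Fleder paper.
NUM_ITEMS = 500        # catalog size
NUM_SHOPPERS = 20_000  # simulated purchases
FOLLOW_RATE = 0.8      # chance a shopper buys what the recommender shows
SHELF_SIZE = 10        # the recommender only surfaces the current best sellers

sales = Counter()

for _ in range(NUM_SHOPPERS):
    if sales and random.random() < FOLLOW_RATE:
        # Show the current best sellers; buying one reinforces its lead.
        shelf = [item for item, _ in sales.most_common(SHELF_SIZE)]
        choice = random.choice(shelf)
    else:
        # Shopper ignores the recommendation and browses the whole catalog.
        choice = random.randrange(NUM_ITEMS)
    sales[choice] += 1

head_sales = sum(count for _, count in sales.most_common(SHELF_SIZE))
print(f"{SHELF_SIZE} of {NUM_ITEMS} items account for "
      f"{head_sales / NUM_SHOPPERS:.0%} of all purchases")
```

Quality never enters the loop: whichever items happen to sell first get onto the shelf, stay there, and soak up most of the traffic. That is the concentration the researchers are warning about.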

That could be solved or balanced, I think, if you shift to reliance on human recommendations: ‘My friend Fred finds good stuff for me…. My friend Sally finds better stuff than Fred…. My friend Jeff has no taste.’ Then a critical mass of historical data doesn’t really matter; relationships and taste and shared knowledge do. And we find the friends who like the stuff we like. We live in the tail. We can also live in the head of the curve: We all watch American Idol, too. More on this later…
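For the technically minded, here is a back-of-the-envelope sketch of what friend-weighted recommendation might look like; the names, trust weights, and picks are all hypothetical, and this is my illustration rather than any shipping system. The point is that a brand-new item with zero sales history surfaces instantly if someone you trust vouches for it:

```python
from collections import defaultdict

# Score items by how much you trust the friends recommending them,
# not by how many copies the items have already sold. (Hypothetical data.)
trust = {"Fred": 0.6, "Sally": 0.9, "Jeff": 0.1}   # how good their past picks were

friend_picks = {
    "Fred":  ["obscure-documentary", "indie-album"],
    "Sally": ["indie-album", "small-press-novel"],
    "Jeff":  ["summer-blockbuster"],
}

scores = defaultdict(float)
for friend, picks in friend_picks.items():
    for item in picks:
        scores[item] += trust[friend]   # no global sales history required

for item, score in sorted(scores.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{score:.1f}  {item}")
```

The trust numbers themselves could come from how often you ended up liking a friend’s past picks, which is historical data of a different, more personal kind.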