
Moral Authority as a Platform

[See my disclosures below.*]

Since the election, I have been begging the platforms to be transparent about efforts to manipulate them — and thus the public. I wish they had not waited so long, until they were under pressure from journalists, politicians, and prosecutors. I wish they would realize the imperative to make these decisions based on higher responsibility. I wish they would see the need and opportunity to thus build moral authority.

Too often, technology companies hide behind the law as a minimal standard. At a conference in Vienna called Darwin’s Circle, Palantir CEO Alexander Karp (an American speaking impressive German) told Austrian Chancellor Christian Kern that he supports the primacy of the state and that government must set moral standards. Representatives of European institutions were pleasantly surprised not to be challenged with Silicon Valley libertarian dogma. But as I thought about it, I came to see that Karp was copping out, delegating his and his company’s ethical responsibility to the state.

At other events recently, I’ve watched journalists quiz representatives of platforms about what they reveal about manipulation and also what they do and do not distribute and promote on behalf of the manipulators. Again I heard the platforms duck under the law — “We follow the laws of the nations we are in,” they chant — while the journalists pushed them for a higher moral standard. So what is that standard?

Transparency should be easy. If Facebook, Twitter, and Google had revealed that they were the objects of Russian manipulation as soon as they knew it, then the story would have been Russia. Instead the story is the platforms.

I’m glad that Mark Zuckerberg has said that in the future, if you see a political ad in your feed, you will be able to link to the page or user that bought it. I’d like all the platforms to go further:

  • First, internet platforms should make every political ad available for public inspection, setting a standard that goes far beyond the transparency required of political advertising on broadcast and certainly beyond what we can find out about dark political advertising in direct mail and robocalls. Why shouldn’t the platforms lead the way?
  • Second, I think it is critical that the platforms reveal the targeting criteria used for these political ads so we can see what messages (and lies and hate) are aimed at whom.
  • Third, I’d like to see all this data made available to researchers and journalists so the public — the real target of manipulation — can learn more about what is aimed at them.

The reason to do this is not just to avoid bad PR or merely to follow the law, to meet minimal expectations. The reason to do all this is to establish public responsibility commensurate with the platforms’ roles as the proprietors of so much of the internet and thus the future.

In What Would Google Do?, I praised the Google founders’ admonition to their staff — “Don’t be evil” — as a means to keep the company honest. The cost of doing evil in business has risen as customers have gained the ability to talk about a company and as anyone could move to a competitor with a click. But that, too, was a minimal standard. I now see that Google — and its peers — should have evolved to a higher standard:

“Do good. Be good.”

I don’t buy the arguments of cynics who say it is impossible for a corporation to be anything other than greedy and evil and that we should give up on them. I believe in the possibility and wisdom of enlightened self-interest and I believe we can hold these companies to an expectation of public spirit if not benevolence. I also take Zuck at his word when he asks forgiveness “for the ways my work was used to divide people rather than bring us together,” and vows to do better. So let us help him define better.

The caveats are obvious: I agree with the platforms that we do not want them to become censors and arbiters of right v. wrong; to enforce prohibitions determined by the lowest common denominators of offensiveness; to set precedents that will be exploited by authoritarian governments; to make editorial judgments.

But doing good and being good as a standard led Google to its unsung announcement last April that it would counteract manipulation of search ranking by taking account of the reliability, authority, and quality of sources. Thus Google took the side of science over crackpot conspirators, because it was the right thing to do. (But then again, I just saw that Alternet complains that it and other advocacy and radical sites are being hit hard by this change. We need to make clear that fighting racism and hate is not to be treated like spreading racism and hate. We must be able to have an open discussion about how these standards are being executed.)

Doing good and being good would have led Facebook to transparency about Russian manipulation sooner.

Doing good and being good would have led Twitter to devote resources to understanding and revealing how it is being used as a tool of manipulation — instead of merely following Facebook’s lead and disappointing Congressional investigators. More importantly, I believe a standard of doing good and being good would lead Twitter to set a higher bar of civility and take steps to stop the harassment, stalking, impersonation, fraud, racism, misogyny, and hate directed at its own innocent users.

Doing good and being good would also lead journalistic institutions to examine how they are being manipulated, how they are allowing Russians, trolls, and racists to set the agenda of the public conversation. It would lead us to decide what our real job is and what our outcomes should be in informing productive and civil civic conversation. It would lead us to recognize new roles and responsibilities in convening communities in conflict into uncomfortable but necessary conversation, starting with listening to those communities. It should lead us to collaborate with and set an example for the platforms, rather than reveling in schadenfreude when they get in trouble. It should also lead us all — media companies and platforms alike — to recognize the moral hazards embedded in our business models.

I don’t mean to oversimplify even as I know I am. I mean only to suggest that we must raise up not only the quality of public conversation but also our own expectations of ourselves in technology and media, of our roles in supporting democratic deliberation and civil (all senses of the word) society. I mean to say that this is the conversation we should be having among ourselves: What does it mean to do and be good? What are our standards and responsibilities? How do we set them? How do we live by them?

Building and then operating from that position of moral authority becomes the platform more than the technology. See how long it is taking news organizations to learn that they should be defined not by their technology — “We print content” — but instead by their trust and authority. That must be the case for technology companies as well. They aren’t just code; they must become their missions.


* Disclosure: The News Integrity Initiative, operated independently at CUNY’s Tow-Knight Center, which I direct, received funding from Facebook, the Craig Newmark Philanthropic Fund, and the Ford Foundation and support from the Knight and Tow foundations, Mozilla, Betaworks, AppNexus, and the Democracy Fund.

A Call for Cooperation Against Fake News

We — John Borthwick and Jeff Jarvis — want to offer constructive suggestions for what the platforms — Facebook, Twitter, Google, Instagram, Snapchat, WeChat, Apple News, and others — as well as publishers and users can do now and in the future to grapple with fake news and build better experiences online and more civil and informed discussion in society.

Key to our suggestions is sharing more information to help users make better-informed decisions in their conversations: signals of credibility and authority from Facebook to users, from media to Facebook, and from users to Facebook. Collaboration between the platforms and publishers is critical. In this post we focus on Facebook, Twitter, and Google search, for two reasons: first, simplicity; second, these platforms matter the most today.

We do not believe that the platforms should be put in the position of judging what is fake or real, true or false, as censors for all. We worry about creating blacklists. And we worry that circular discussions about what is fake and what is truth and whose truth is more truthy mask the fact that there are things that can be done today. We start from the view that almost all of what we do online is valuable and enjoyable, but there are always things we can do to improve the experience and act more responsibly.

In that spirit, we offer these tangible suggestions for action and seek your ideas.

  1. Make it easier for users to report fake news, hate speech, harassment, and bots. Facebook does allow users to flag fake news but the function is buried so deep in a menu maze that it’s impossible to find; bring it to the surface. Twitter just added new means to mute harassment, but we think it would also be beneficial if users could report false and suspicious accounts and the service could feed that data back in some form to other users (e.g., “20 of your friends have muted this account” or “this account tweets 500 times a day”). The same would be helpful for Twitter search, Google News, Google search, Bing search, and other platforms.
  2. Create a system for media to send metadata about their fact-checking, debunking, confirmation, and reporting on stories and memes to the platforms. It happens now: Mouse over fake news on Facebook and there’s a chance the related content that pops up below can include a news site or Snopes reporting that the item is false. Please systematize this (a sketch of what such a signal might look like follows this list): Give trusted media sources and fact-checking agencies a path to report their findings so that Facebook and other social platforms can surface this information to users when they read these items and — more importantly — as they consider sharing them. The Trust Project is working on getting media to generate such signals. Thus we can cut off at least some viral lies at the pass. The platforms need to give users better information and media need to help them. Obviously, the platforms can use such data from both users and media to inform their standards, ranking, and other algorithmic decisions in displaying results to users.
  3. Expand systems of verified sources. As we said, we don’t endorse blacklists or whitelists of sites and sources (though when lists of sites are compiled to support a service — as with Google News — we urge responsible, informed selection). But it would be good if users could know that the creator of a post has been online for only three hours with 35 followers, or that this is a site with a known brand and proven track record. Twitter verifies users. We ask whether Twitter, Facebook, Google, et al. could consider means to verify sources as well, so users know the Denver Post is well-established while the Denver Guardian was just established.
  4. Make the brands of those sources more visible to users. Media have long worried that the net commoditizes their news such that users learn about events “on Facebook” or “on Twitter” instead of “from the Washington Post.” We urge the platforms, all of them, to more prominently display media brands so users can know and judge the source — for good or bad — when they read and share. Obviously, this also helps the publishers as they struggle to be recognized online.
  5. Track back to original sources of news items and memes. We would like to see these technology platforms use their considerable computing power to help track back and find the source of news items, photos and video, and memes. For example, one of us saw an almost-all-blue map with 225K likes that was being passed around as evidence that millennials voted for Clinton when, in fact, at its origin the map was labeled as the results of a single, liberal site’s small online poll. It would not be difficult for any platform to find all instances of that graphic and pinpoint where it began (one plausible approach is sketched after this list). The source matters! Similarly, when memes are born and bred, it would be useful to know whether one or another started at a site with a certain frog as an avatar. While this is technically complicated, it’s far less complicated than the facial recognition that social platforms have today.
  6. Address the echo-chamber problem with recommendations from outside users’ conversational spheres. We understand why Facebook, Twitter, and others surface so-called trending news: not only to display a heat map but also to bring serendipity to users, to show them what their feeds might not. We think there are other, perhaps better, ways to do this. Why not be explicit about the filter-bubble problem and present users with recommended items, accounts, and sources that do *not* usually appear in their feeds, so a reader of The Nation sees a much-talked-about column from the National Review, and a Clinton voter can begin — just begin — to connect with and perhaps better understand the worldview of a Trump voter? Users will opt in or out, but let’s give them the chance to choose.
  7. Recognize the role of autocomplete in search requests to spread impressions without substance. Type “George Soros is…” into a Google search box and you’re made to wonder whether he’s dead. He’s not. We well understand the bind the platforms are in: They are merely reflecting what people are asking and searching for. Google has been threatened with suits over what that data reveals. We know it is impossible and undesirable to consider editing autocomplete results. However, it would be useful to investigate whether even in autocomplete, more information could be surfaced to the user (e.g., “George Soros is dead” is followed by an asterisk and a link to its debunking). These are the kinds of constructive discussions we would like to see, rather than just volleys of complaint.
  8. Recognize how design choices can surface information that might be better left under the rock. We hesitate to suggest doing this, but if you dare to search Google for the Daily Stormer, the extended listing for the site at the moment we write this includes a prominent link to “Jewish Problem: Jew Jake Tapper Triggered by Mention of Black …” Is that beneficial, revealing the true nature of the site? Or is that deeper information better revealed by getting quicker to the next listing in the search results: Wikipedia explaining that “The Daily Stormer is an American neo-Nazi and white supremacist news and commentary website. It is part of the alt-right movement …”? These design decisions have consequences.
  9. Create reference sites to enable users to investigate memes and dog whistles. G’bless Snopes; it is the cure for that email your uncle sends that has been forwarded a hundred times. Bless also Google for making it easy to search to learn the meanings of Pepe the frog and Wikipedia for building entries to explain the origins. We wonder whether it would be useful for one of these services or a media organization to also build a constantly updated directory of ugly memes and dog whistles to help those users — even if few — who will look into what is happening so they can pass it on. Such a resource would also help media and platforms recognize and understand the hidden meanings and secret codes their platforms are being used to spread.
  10. Establish the means to subscribe to and distribute corrections and updates. We would love it if we could edit a mistaken tweet. We understand the difficulty of that, once tweets have flown the nest to apps and firehoses elsewhere. But imagine you share a post you later find out to be false and then imagine if you could at least append a link to the tweet in the archive. Better yet, imagine if you could send a followup message that alerts people who shared your tweet, Facebook post, or Instagram image to the fact that you were mistaken. Ever since the dawn of blogging, we’ve wished for such a means to subscribe to and send updates, corrections, and alerts around what we’ve posted. It is critical that Twitter as well as the other platforms do everything they can to enable responsible users who want to correct their mistakes to do so.
  11. Media must learn and use the lesson of memes to spread facts over lies. Love ’em or hate ’em, meme-maker Occupy Democrats racked up 100 to 300 million impressions a week on Facebook, according to its cofounder, by providing users with the social tokens to use in their own conversations, the thing they share because it speaks for them. Traditional media should learn a lesson from this: that they must adapt to their new reality and bring their journalism — their facts, fact-checking, reporting, explanation, and context — to the public where the public is, in a form and voice that is appropriate to the context and use of each platform. Media cannot continue to focus only on their old business model, driving traffic back to their websites (that notion sounds more obsolete by the day). So, yes, we will argue that, say, Nick Kristof should take some of his important reporting, facts, arguments, and criticisms and try to communicate them not only in columns (which, yes, he should continue!) but also with memes, videos, photos, and the wealth of new tools we now have to communicate with and inform the public.
  12. Stop funding fake news. Google and Facebook have taken steps in the right direction to pull advertising and thus financial support (and motivation) for fake-news sites. Bing, Apple, and programmatic advertising platforms must follow suit. Publishers, meanwhile, should consider more carefully the consequences of promoting content — and sharing in revenue — from dubious sources distributed by the likes of Taboola and Outbrain.
  13. Support white-hat media hacking. The platforms should open themselves up to help from developers to address the problems we outline here. Look at what a group of students did in the midst of the fake-news brouhaha to meet the key goals we endorse: bringing more information to users about the sources of what they read and share. (Github here.) We urge the platforms to open up APIs and provide other help to developers and we urge funders to support work to improve not only the quality of discourse online but the quality of civic discourse and debate in society.
  14. Hire editors. We strongly urge the platforms to hire high-level journalists inside their organizations not to create content, not to edit, not to compete with the editors outside but instead to bring a sense of public responsibility to their companies and products; to inform and improve those products; to explain journalism to the technologists and technology to the journalists; to enable collaboration with news organizations such as we describe here; and foremost to help improve the experience for users. This is not a business-development function: deal-making. Nor is this a PR function: messaging. This sensibility and experience needs to be embedded in the core function in every one of these platform companies: product.
  15. Collaborate in an organization to support the cause of truth; research and develop solutions; and educate platforms, media companies, and the public. This is ongoing work that won’t be done with a new feature or option or tweak in an algo. This is important work. We urge that the platforms, media companies, and universities band together to continue it in an organization similar to but distinct from and collaborating with the First Draft Coalition, which concentrates on improving news, and the Trust Project, which seeks to gather more signals of authority around news. Similarly, the Coral Project works on improving comments on news sites. We also see the need to work on improving the quality of conversation where it occurs, on platforms and on the web. This would be an independent center for discussion and work around all that we suggest here. Think of it as the Informed Conversation Project.
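
On the second suggestion, here is a rough sense of what such a machine-readable fact-check signal might look like — a minimal sketch loosely modeled on schema.org’s ClaimReview vocabulary, which fact-checkers already use. All URLs, names, and ratings below are hypothetical, for illustration only:

```python
import json

# A hypothetical fact-check signal a publisher might send to a platform,
# loosely modeled on schema.org's ClaimReview vocabulary.
# Every URL and value here is illustrative, not a real endpoint or verdict.
fact_check = {
    "@context": "https://schema.org",
    "@type": "ClaimReview",
    "url": "https://example-factchecker.org/checks/blue-map-poll",
    "claimReviewed": "Map shows how millennials voted in the 2016 election",
    "itemReviewed": {
        "@type": "CreativeWork",
        "url": "https://example.com/viral-blue-map.png",
    },
    "author": {"@type": "Organization", "name": "Example Fact-Checking Desk"},
    "reviewRating": {
        "@type": "Rating",
        "ratingValue": 1,        # 1 = false on this outlet's 1-to-5 scale
        "bestRating": 5,
        "alternateName": "False",
    },
}

# A platform ingesting this JSON could attach the verdict to every share
# of the reviewed URL before a user reposts it.
print(json.dumps(fact_check, indent=2))
```
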
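And on the fifth suggestion, tracing copies of an image is far more tractable than facial recognition. One plausible approach is perceptual hashing, sketched minimally below using the open-source imagehash library; the filenames and the distance threshold are assumptions for illustration:

```python
from PIL import Image
import imagehash

# Perceptual hashes change little when an image is resized, recompressed,
# or lightly edited, so near-duplicates land within a small Hamming distance.
original = imagehash.phash(Image.open("blue_map_origin.png"))
candidate = imagehash.phash(Image.open("blue_map_shared_copy.jpg"))

# Subtracting two hashes gives the Hamming distance between them;
# small distance = very likely the same underlying image.
distance = original - candidate
if distance <= 8:  # threshold is a tunable assumption
    print(f"Likely a copy (distance {distance}); trace back to earliest upload.")
else:
    print(f"Probably different images (distance {distance}).")
```

A platform could index such hashes for every uploaded image and, given any viral graphic, look up its earliest appearance — and thus its source.
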

We will bring our resources to the task. John Borthwick at Betaworks will help invest in and nurture startups that tackle these problems and opportunities. Jeff Jarvis at CUNY’s Graduate School of Journalism will play host to meetings where that is helpful and seek support to build the organization we propose above.

We do this mostly to solicit your suggestions for a vital task: better informing our conversations, our elections, and our society. (See another growing list of ideas here.) Pile on. Help out.

To a faster — and distributed — web

[Screenshot: Google’s definition of AMP on GitHub]

Last May, shortly after Facebook announced its Instant Articles, Google held its first Newsgeist Europe and I walked in, saying obnoxiously (it’s what I do): “Facebook just leapfrogged you by a mile, Google. What you should do now is create an open-source version of Instant Articles.” Richard Gingras, head of Google News, has long been arguing for what he called portable content. I had been arguing since 2011 for embeddable content: If content could travel with its brand, revenue, analytics, and links attached, then it can go to the reader rather than making the reader come to it.

Today, fairy godmother Google delivered our wish — thanks to Gingras, Google engineering VP Dave Besbris, and media partners inside and outside of Google’s European Digital News Initiative. Hallelujah.

Accelerated Mobile Pages (AMP) is — as you can see from Google’s definition on Github, above — a simple way to dramatically speed up the serving of web pages (on mobile and on desktop) through several means, including:
(1) a shared library of web-page functions so that they can be cached and called and not downloaded with every new web page;
(2) the opportunity to cache content nearer the user — with Google or not, and inside apps on users’ devices;
(3) the beginnings of advertising standards to get rid of some of the junk that both slows down and jumbles the serving of web pages; and
(4) the sharing of some functions such as gathering data for analytics.

Note that the publisher’s revenue (that is, ads), analytics (that is, user data), brand, and links stay with the content. Google emphasized again and again: It’s just the web, done well. It’s just a web page — but way faster. A link is no longer an invitation to wait. A link is just a next page, instantly and fully visible.

You can get a demo here. So far, it’s just a sample of about 5,000 new pages per day from the launch partners. Open that URL on your phone. Search for something like Obama. Go through the carousel and you should be amazed at the speed.

But I think AMP and Instant Articles are more than that. They are a giant step toward a new, distributed content ecology on the web … and a better, faster web, especially in mobile.

Here are a few ways I see this changing the way content operates on the web:

Imagine an aggregator like Real Clear Politics or an app like Nuzzel. Today, every time you click on a link, you have to load a browser and all the cruft around the content on a page. With AMP, the page — every page made to the AMP standard — can load *instantly* because the architecture and functionality of the page can be prefetched and cached, the content can be cached closer to the user, and the advertising and analytics will not be allowed to screw up the loading of the page. So the experience of reading an aggregation of content will be like reading a web site: fast, clean, smooth. If I were in the aggregation business, I would build around AMP (a toy prefetching sketch follows below).
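
To make that concrete, here is a toy sketch of how such an aggregator might prefetch AMP documents before the user taps, assuming the cdn.ampproject.org/c/ cache-URL convention Google documented at launch; the article URLs are hypothetical:

```python
from concurrent.futures import ThreadPoolExecutor
from urllib.parse import urlparse
import urllib.request

def amp_cache_url(article_url: str) -> str:
    """Map an article URL to its Google AMP Cache URL.
    Assumes the launch-era convention: /c/s/ for https origins, /c/ for http."""
    parsed = urlparse(article_url)
    prefix = "c/s" if parsed.scheme == "https" else "c"
    return f"https://cdn.ampproject.org/{prefix}/{parsed.netloc}{parsed.path}"

def prefetch(article_url: str) -> bytes:
    # Fetch the cached AMP document so it renders instantly when tapped.
    with urllib.request.urlopen(amp_cache_url(article_url)) as resp:
        return resp.read()

# Hypothetical headlines an aggregator is about to display:
links = [
    "https://example-news.com/politics/story-one.amp.html",
    "https://example-paper.com/2015/10/story-two/amp/",
]
with ThreadPoolExecutor() as pool:
    pages = list(pool.map(prefetch, links))
print([len(page) for page in pages])
```

The point of the sketch: no deal with any publisher is needed. If the content is AMP-ready, the aggregator can serve it instantly, business model attached.
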

Imagine starting a new media service without a web site but built around content meant to be distributed so it goes directly to readers wherever they are: on Twitter (via users’ links there), on Facebook (in a community there), on Nuzzel (through recommendations there), and elsewhere — via Reddit, Mode aggregation, Tumblr, etc.

Now there are a few key things missing from the AMP architecture that will be critical to business success. But they can be added.

The first is that user interest data needs to flow back to the content creator — with proper privacy transparency and consent built in! — so that the publisher can build a direct relationship of relevance and value with the user, no matter where she is encountered. That is more complicated but vital.

The second — and this is a lesson I learned working with shared content and thus audience in the New Jersey news ecosystem — is that we must value and reward not just the creators of content but also those who build audience for that content.

That’s a small matter of deal making. AMP is built with *no* need to make deals, which is critical to its quick adoption. You make your content AMP-ready and anybody can serve it instantly to their audiences with your business model (advertising, etc.) attached. But there’s no reason two publishers can’t make a separate deal so, for example, the Washington Post could say to the Cincinnati Enquirer: You can take our AMP-ready content with our ads attached, but we will give you your own ad avail or we will give you a reward for the traffic you bring us and we can share a special, co-branded page. The Post is already getting ready to distribute all its content on Facebook. It is using its owner Jeff Bezos’ Amazon to distribute itself, too. (Speculation is that these alone will have it leap past The New York Times in audience.) Why not use AMP and make deals to reward other quality news services on the web to be its distributors? That is the new newsstand. That is the new site-less web.

I also see the opportunity to make AMP-ready modules and widgets that can be collected and aggregated *inside* web pages.

This is a big deal. It’s not just about speeding up the web. It’s about unbundling the web and web sites. If we in media are smart in exploiting its opportunities, if AMP and Amazon and others gather together around a single set of standards — which is quite possible — and if we add more data smarts to the process, this could be big for us in media or for upstarts in garages. Your choice, media.

AFTERTHOUGHT: How should Facebook respond? I would suggest they have nothing to lose by joining the standard so publishers can publish both ways. I would also suggest that Facebook can now leapfrog Google by helping publishers with interest data and user profiles — that is where the real value will be.

What Would Alphabet Do?

[Image: Thomas Eakins, Baby at Play, 1876]
You’d expect me to say this but Google’s transformation into Alphabet is a brilliant move that enables Page, Brin, and their company to escape the bonds of their past — They’re just a search company. Why are they working on self-driving cars and magical contact lenses and high-flying balloons? — and go where no one has thought they would go before.

To Wall Street and countless bleating analysts — not to mention its competitors and plenty of government regulators — Google was a search company, though long ago it became so much more. I don’t just mean that it also made a great browser, the best maps, killer email, an open phone operating system and some of the best phones, and a new operating system (and the damned fine computer I’m writing on right now) — and that it acquired the biggest video company and the best traffic data company. I don’t just mean that Google has for a long time really been the powerhouse advertising company.

No, Google long ago became a personal services company, the post-mass-market company that treats every user as a customer it knows individually. That is the heart of Google. When they say they “focus on the user and all else will follow,” they mean it.

But Google was also a technology company, working on projects that didn’t fit with that mission.

So this move lets Page and Brin move up to the strategic stratosphere where they are most comfortable. It lets them recognize the tremendous job Sundar Pichai has been doing running the company that is now “just” Google. It lets them invest in new experiments and new lines of business — cars, medical technology, automated homes, and energy so far, and then WTF they can imagine and whatever problems they yearn to solve. It lets them tell Wall Street not to freak at a blip in the ad market — though, of course, the vast majority of the parent company’s revenue will still come from Google’s advertising business.

A journalist asked me a few minutes ago whether there was any risk to the change. I couldn’t think of any then. I suppose one risk is that this will only freak out European media and regulatory technopanickers in particular, who will now go on a rampage warning that — SEE! — Google does want to rule the world. But what the hell. They were going to do that anyway.

A few weeks ago at Google I/O, I had the privilege of meeting Page. To introduce myself, I said that I wrote a book called What Would Google Do?. “Oh, I remember,” he said with an impish grin, and then he asked: “What would Google do? I want to know.”

See, I don’t think even Larry Page knows what Google — er, Alphabet — will do. He is now setting himself up for discoveries, surprises, exploration, experimentation, and a magnificently uncertain future. Who wants a certain future? That’d be so damned boring. So horribly conventional.

Disclosure: I own Google — er, Alphabet — stock. And I now lust after Alphabet swag.

How (not) to interview

Here’s an object lesson for journalism students in the art of the interview.

Poor Sundar Pichai, the No. 2 at Google, sat down for an interview with a New York Times technology reporter, only to find himself bombarded with the same question a half-dozen ways, to wit: Aren’t mobile phones bad for us?

First question: “Do you see mobile phones heading down a path of social unacceptability? Do we have a problem of overuse?”

After acknowledging that phones can do good things — goddamned miracles, I’d say — the reporter came back to his plaint: “But then people start doing things like checking their email at dinner. Are there things Google is doing to return people to where they are and reduce the temptation to look at their phone?” Like everything else, isn’t this your fault, Google?

Sundar tried to politely deflect: “You’re asking questions that have nothing to do with technology. Should kids check phones at dinner? I don’t know. To me that’s a parenting choice.”

The reporter tried again. And then again: “As you have risen in the ranks at Google, have you noticed that people use their phones less in meetings with you?”

And again: “Have you done anything to ease back? I have a policy that I’m not allowed to walk around the house with my phone. It has to stay in one room.”

Oh, jeesh. I imagine the reporter getting Grandma’s telephone table from the front hall and tying an iPhone to it. Some of us would say that eliminating the need for wires was progress.

It’s not hard to see what was happening here: The same reporter had an “analysis” published the same day on devices and programs meant to get users to crack the addiction the reporter thinks we have to our phones. He interviewed Pichai and decided to make a blog post out of the transcript, giving us a window into the sausage factory. The writer wanted a quote for his story. So he did what reporters often do: He asked the same question over and over … until he got the quote he wanted for his story. That’s how interviews are too often conducted: to fill in a blank the writer has already made rather than to really listen and be open to new information and new angles.

When a reporter does this to me, I finally say: You can ask the same thing as many times as you want but I’m not giving you the answer you want. Corporate executives trying to make nice can’t do that.