Posts about publicparts

Tech companies: Whose side are you on?

I wrote this for the Guardian. I’m crossposting it here for my archive. The post is all the more relevant a day later as Google, Apple, AT&T, and Public Knowledge attend a secret White House meeting about secrecy. I’d have a lot more respect for them if they refused, given that condition of secrecy.

Technology companies: Now is the moment when you must answer for us, your users, whether you are collaborators in the U.S. government’s efforts to collect it all — our every move on the internet — or whether you, too, are victims of its overreach.

Every company named in Edward Snowden’s revelations has said that it must comply with government demands, including requirements to keep secret court orders secret. True enough. But there’s only so long they can hide behind that cloak before making it clear whether they are resisting government’s demands or aiding in them. And now the time has come to go farther: to use both technology and political capital to actively protect the public’s privacy. Who will do that?

We now know, thanks to Snowden, of at least three tiers of technology companies enmeshed in the NSA’s hoovering of our net activity (we don’t yet know whether the NSA has co-opted companies from the financial, retail, data services, and other industries):

(1) Internet platforms that provide services directly to consumers, allowing government to demand access to signals about us: Google with search, mail, calendars, maps; Facebook with connections; Skype with conversations, and so on.

In its first Prism reporting, the Washington Post apparently unfairly fingered nine of these companies, accusing the NSA and FBI of “tapping directly into the central servers” that hold our “chats, photographs, e-mails, documents, and connection logs.” Quickly, the companies repudiated that claim and sought the right to report at least how many secret demands are made. But there’s more they can and should do.

(2) Communications brands with consumer relationships that hand over metadata and/or open taps on internet traffic for collection by the NSA and Britain’s GCHQ, creating vast databases that can then be searched via XKeyscore. Verizon leads that list, and we now know from the Süddeutsche Zeitung that it also includes BT and Vodafone.

(3) Bandwidth providers that enable the NSA and its international partners to snoop on the net, wholesale. The Süddeutsche lists the three telco brands above in addition to Level 3, Global Crossing, Viatel, and Interoute. Eric King, head of research for Privacy International, asked in the Guardian, “Were the companies strong-armed, or are they voluntary intercept partners?”

The bulk data carriers have no consumer brands or relationships and thus are probably the least likely to feel commercial pressure to protect the rights of the users at the edge. The telephone companies should care more but they operate as oligopolies with monopoly attitudes and rarely exhibit consumer empathy (which is a nice way of saying their business models are built on customer imprisonment).

A hodgepodge alliance of U.S. legislators is finally waking up to the need and opportunity to stand up for citizens’ rights, but they will be slow and, don’t we know, ineffective and often uninformed. The courts will be slower and jealous of their power. Diplomacy’s the slowest route to reform yet, dealing in meaningless symbolism.

So our strongest expectations must turn to the first tier above, the consumer internet platforms. They have the most to lose — in trust and thus value — in taking government’s side against us.

At the Guardian Activate conference in London last month, I asked Vint Cerf, an architect of the net and evangelist for Google, about encrypting our communication as a defense against NSA spying. He suggested that communication should be encrypted into and out of internet companies’ servers (thwarting, or so we’d hope, the eavesdropping on the net’s every bit over telcos’ fibre) but should be decrypted inside the companies’ servers so they could bring us added value based on the content: a boarding pass on our phone, a reminder from our calendar, an alert about a story we’re following (not to mention a targeted ad).

Now there are reports that Google is looking at encrypting at least documents stored in Google Drive. That is wise in any case, as often these can contain users’ sensitive company and personal information. I now think Google et al need to go farther and make encryption an option on any information. I don’t want encryption to be the default because, in truth, most of my digital life is banal and I’d like to keep getting those handy calendar reminders. But technology companies need to put the option and power of data security directly into users’ hands.
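To make that idea concrete, here is a minimal sketch of what user-controlled, client-side encryption looks like: the document is scrambled and authenticated before it ever leaves the user’s machine, so the service stores only ciphertext and the user alone holds the key. This is a toy construction for illustration only (a SHA-256 keystream with an HMAC integrity check, built from Python’s standard library); a real product would use a vetted cipher such as AES-GCM from an audited crypto library.

```python
import hashlib
import hmac
import secrets

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    # SHA-256 in counter mode as a toy keystream generator.
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def encrypt(key: bytes, plaintext: bytes) -> bytes:
    # Fresh nonce per document, then encrypt-then-MAC.
    nonce = secrets.token_bytes(16)
    ct = bytes(p ^ k for p, k in zip(plaintext, keystream(key, nonce, len(plaintext))))
    tag = hmac.new(key, nonce + ct, hashlib.sha256).digest()
    return nonce + ct + tag

def decrypt(key: bytes, blob: bytes) -> bytes:
    nonce, ct, tag = blob[:16], blob[16:-32], blob[-32:]
    # Reject anything tampered with in transit or in storage.
    if not hmac.compare_digest(tag, hmac.new(key, nonce + ct, hashlib.sha256).digest()):
        raise ValueError("tampered ciphertext")
    return bytes(c ^ k for c, k in zip(ct, keystream(key, nonce, len(ct))))

key = secrets.token_bytes(32)          # stays on the user's machine
doc = b"sensitive company memo"
blob = encrypt(key, doc)               # this is all the cloud ever sees
assert decrypt(key, blob) == doc
```

The design point, not the particular cipher, is what matters: if only the ciphertext crosses the wire and sits on the server, a demand served on the company yields nothing readable without a separate demand served on the user.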

That also means that the technology companies have to reach out and work with each other to enable encryption and other protections across their services. I learned the hard way how difficult it is to get simple answers to questions about how to encrypt email. The industry should work hard to make that an option on every popular service.

But let’s be clear that encryption is not the solution, probably only a speed bump to the NSA’s omnivorous ingesting. At the Activate conference, Cerf was asked whether the solution in the end will be technical or institutional. No doubt, institutional, he answered. That means that companies and government agencies must operate under stated principles and clear laws with open oversight.

Before Snowden’s leaks, technology CEOs would have had to balance cooperation and resistance just as the nation supposedly balances security and privacy. But now the tide of public opinion has clearly shifted — at least for now — and so this is the moment to grab control of the issue.

If they do not assert that clear control, these technology companies risk losing business not only from skittish consumers but also from corporate and foreign-government clients. The Cloud Security Alliance polled companies and found that 10% had canceled U.S. cloud business and 56% were less likely to do business with U.S. providers. “If businesses or governments think they might be spied on,” said European Commission Vice President Neelie Kroes, “they will have less reason to trust the cloud, and it will be cloud providers who ultimately miss out.”

Besides taking action to secure technology and oversight within their companies and the industry, right-thinking technology companies also need to band together to use their political capital to lobby governments across the world to protect the rights of users and the freedom and sanctity of privacy and speech on the net. They must take bold and open stands.

To do that, they must first decide on the principles they should protect. In my book Public Parts, I proposed some principles to discuss, among them:
* the idea that if any bit on the net is stopped or detoured — or spied upon — then no bit, and the net itself, can be presumed to be free;
* that the net must remain open and distributed, commandeered and corrupted by no government;
* that citizens have a right to speak, assemble, and act online and thus have a right to connect without fear;
* that privacy is an ethic of knowing someone else’s information and coming by it openly;
* and that government must become transparent by default and secret by necessity (there are necessary secrets). Edward Snowden has shown us all too clearly that the opposite is now true.

I also believe that we must see a discussion of principles and ethics from the technologists inside these companies. One reason I have given Google the benefit of the doubt — besides being an admirer — is that I believe the engineers I know inside Google would not stay if they saw it violating their ethics even if under government order.

Yonatan Zunger, the chief architect of Google+, said this after the Guardian’s and Glenn Greenwald’s first revelations were published:

I can tell you that it is a point of pride, both for the company and for many of us, personally, that we stand up to governments that demand people’s information…. I can categorically state that nothing resembling the mass surveillance of individuals by governments within our systems has ever crossed my plate. If it had, even if I couldn’t talk about it, in all likelihood I would no longer be working at Google.

In the end, it’s neither technologies nor institutions that will secure us from the inexorable overreach of government curiosity in the face of technical capability. Responsibility for oversight and correction begins with individuals, whether whistleblowers or renegade politicians or employees of conscience who finally remind those in power: “Don’t be evil.”

Matters of principle

America is supposed to be a nation governed by principles, which are undergirded by the Constitution and the Bill of Rights and carried into law. The discussion about the government and its capture of *our* data should be held on the level of principles.

* Privacy: Our direct and personal communication in any medium and by any means — mail, email, phone, VOIP, Twitter DM, and any technology yet to be invented — should be considered private, as our physical mail is, and subject to government intervention only through lawful warrant. That is not the case. Thus it is quite reasonable to be disturbed at the news that government can demand and receive communication we believe to be private. Government may call itself the protector of our privacy but it is our privacy’s worst enemy.

* Transparency: The actions of government should be known to citizens. I argue in Public Parts that our institutions should be public by default, secret by necessity; now they are secret by default and open by force. There are necessary secrets. There is a need for intelligence. There I agree with David Simon. I saw people die before me on 9/11 and I fault intelligence for not stopping it.

But we are left out of the discussion of where the line of necessity should be. If President Obama believes in the transparency he talks about and if he now says he welcomes the debate about security and freedom then it should have occurred *before* government took the actions now being reported and not by force through leaks. There I agree with James Fallows that this leak is not harmful — what bad guys didn’t already realize that their phones could be tracked? — and will be beneficial for democracy.

* Balance of powers: The best protection of our nation’s principles is the balance of powers. Yes, Congress passed the Patriot Act and yes, a FISA court does approve the executive branch’s actions. But both our representatives and our justices are prevented from sharing anything with us, as are the companies that are forced to be their accomplices. The true balance of powers is the exercise of democracy by citizens, but without information we have no power and government has it all.

* Freedom of speech and of the press: Information comes to the public from the press, which is now anyone with information to share. And citizens exercise power through speech. But in its jihad against leaks… that is whistleblowers… that is reporting… that is journalism and the public’s right to know, the White House is chilling both the press and speech. I pray that Glenn Greenwald doesn’t have a Verizon phone.

This discussion is less about privacy and more about transparency and speech. The principles most offended here are those embedded in the First Amendment for those are the principles we rely upon to take part in the debate that is democracy.

I am asking for government to behave according to principles. I am also asking companies to do so. Twitter — whose behavior toward developers and users can sometimes mystify me — is apparently the platform most stalwart in standing for its users’ rights as a matter of principle. They apparently refused to make it easier for government to get data. Now one could argue that helping government thwart terrorists is also behaving according to principle. But again we and these companies aren’t allowed to have that debate. So I’d now advise following what is apparently Twitter’s route in only responding to demands, nothing more. And I’d advise following Google’s example in revealing government demands for information (though under FISA, once again, they’re not allowed to reveal them all, even by a count).

There is much debate and sometimes conspiracy theorizing swirling around about what Google, Facebook, et al did and didn’t provide to government. I take Larry Page’s and Mark Zuckerberg’s statements at their literal word and agree with Declan McCullagh that I so far see no evidence that these companies handed the keys to their servers to the NSA. We know and they have long said that they comply with government orders, whether in the U.S. or China.

Though some are attacking him on this issue and though I often disagree with him on the state of the news business, I again say that I agree with David Simon on the unsophisticated and emotional interpretation of this news. Since the initial New York Times report on NSA “warrantless wiretapping,” I have understood that one of government’s goals is to use data to find anomalies, but to do that it has to have a baseline of normal behavior. We’re the normal. This has been going on for some time, as Simon says; we just haven’t known how.
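The baseline-then-anomaly logic Simon describes can be sketched in a few lines. The numbers below are invented for illustration: establish what “normal” looks like from bulk data, then flag whatever deviates sharply from that baseline.

```python
import statistics

# Hypothetical daily call counts for one subscriber; the last day spikes.
calls_per_day = [12, 9, 11, 10, 13, 8, 11, 12, 10, 9, 11, 10, 95]

# The baseline: what "normal" looks like, built from everyone's ordinary data.
baseline = calls_per_day[:-1]
mu = statistics.mean(baseline)
sigma = statistics.stdev(baseline)

def is_anomalous(x: float, threshold: float = 3.0) -> bool:
    # Flag anything more than `threshold` standard deviations from normal.
    return abs(x - mu) / sigma > threshold

flagged = [x for x in calls_per_day if is_anomalous(x)]
print(flagged)  # → [95]
```

The point of the sketch is the dependency it exposes: the anomaly detector only works because the ordinary, innocent traffic of everyone else was collected first to define “normal.”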

Are we as a nation OK with allowing government to make such an analysis to find the terrorists’ anomalous behaviour or not? That’s a discussion that should occur according to principles, properly informed about the risks and benefits. Are we OK with government using that same data to fish for other crimes — like, say, leaking a PowerPoint to the Guardian? I am not. Are we OK with government treating whistleblowers and leakers as traitors — starting with Bradley Manning? I am not. I agree with Bruce Schneier: “We need whistleblowers.” Are we OK with government having access to our private communications without warrants? I say: most definitely not, as a matter of principle.

Under a regime of secrecy, assuming the worst becomes the default in the discussion. We assume the worst of government because they keep from us even activities they say are harmless and beneficial. We see people who want to be suspicious of technology and technology companies assuming the worst of them because, after all, we can’t know precisely what they are doing. I agree with Farhad Manjoo about the danger. People in other nations — I’m looking at you, EU — already distrust both the American government and American technology companies, often in the past for emotional reasons or with anti-American roots but now with more cause. You can bet we’ll hear governments across Europe and elsewhere push harder for legislation now in process to require that their citizens’ data be held outside the U.S. and to European standards because, well, they assume the worst. We’ll hear calls to boycott American-made platforms because — even if they try not to go along — their acquiescence to our government means they cannot be trusted. This is bad for the net and bad for the country. The fault lies with government.

This is a story about transparency and the lack of it. It is a story about secrecy and its damages. It is a story about principles that are being flouted. It should be a discussion about upholding principles.

NY Times technobias

From the headline to the lede to the chosen sources to the writing to the page-one placement, today’s New York Times coverage of Google’s $7 million settlement for the drive-by capture of wifi data is one-sided, shallow, and technopanicky.

First, let’s remind ourselves of the facts. Google’s Street View cars captured wifi addresses as they drove by as a way to provide better geolocation on our phones (this is why your phone suggests you turn on wi-fi when using maps — so you can take advantage of the directory of wifi addresses and physical addresses that Google and other companies keep). Stupidly and for no good reason, the cars also recorded other data passing on *open* wifi networks. But that data was incredibly limited: just what was transmitted in the random few seconds in which the Google car happened to pass once by an address. There is no possible commercial use, no rationally imagined nefarious motive, no goldmine of Big Data to be had. Nonetheless, privacy’s industrial-regulator complex jumped into action to try to exploit the incident. But even Germany — the rabid dog of privacy protectors — dropped the case. And the U.S. case got pocket lint from Google.
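For the curious, the geolocation trick works roughly like the sketch below (every BSSID and coordinate here is made up for illustration): the phone reports which wifi networks it can currently see, and the directory averages the known positions of those networks to estimate where the phone is.

```python
# Toy directory mapping wifi BSSIDs (router MAC addresses) to known
# coordinates -- the kind of index the Street View cars helped build.
# All identifiers below are hypothetical.
wifi_directory = {
    "aa:bb:cc:00:00:01": (40.7580, -73.9855),
    "aa:bb:cc:00:00:02": (40.7582, -73.9850),
    "aa:bb:cc:00:00:03": (40.7579, -73.9858),
}

def estimate_location(visible_bssids):
    # Average the known positions of every network the phone can see;
    # unknown networks are simply ignored.
    hits = [wifi_directory[b] for b in visible_bssids if b in wifi_directory]
    if not hits:
        return None
    return (sum(lat for lat, _ in hits) / len(hits),
            sum(lon for _, lon in hits) / len(hits))

print(estimate_location(["aa:bb:cc:00:00:01", "aa:bb:cc:00:00:03",
                         "ff:ff:ff:00:00:99"]))
```

Note what the lookup needs: only the networks’ addresses and positions. The payload data the cars also recorded contributes nothing to this service, which is what made recording it so pointless.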

But that didn’t stop The Times from overplaying the story. Neither did it stop a CNN producer from calling me to try to whip up another technopanic story about privacy; I refused. I won’t pay into the panic.

Let’s dissect the Times story from the headline down:

* The Times calls what Google did “prying.” That implies an “improper curiosity” and an intentionality, as if Google were trying to open our drawers and find something there. It’s a loaded word.

* The lede by David Streitfeld says Google “casually scooped up passwords, e-mail and other personal information from unsuspecting computer users.” Later in the story, he says: “For several years, the company also secretly collected personal information — e-mail, medical and financial records, passwords — as it cruised by. It was data-scooping from millions of unencrypted wireless networks.”

The cars recorded whatever data was passing on these — again — *open* and *public* networks, which can be easily closed. Google was obviously not trying to vacuum up passwords. To say “unsuspecting computer users” is again loaded, as if these were victims. And to list particularly medical and financial records and not mention bits employed in playing Farmville is loaded as well.

* Here’s the worst of it: Streitfeld says unnamed “privacy advocates and Google critics characterized the overall agreement as a breakthrough for a company they say has become a serial violator of privacy.” A “serial violator of privacy”? Really? Where’s the link to this long and damning rap sheet? Facebook, maybe. But I doubt even Google’s vocal and reasonable critics would characterize the company this way. If Streitfeld found someone who said that, it should be in quotes and attributed to someone, or else he and the paper are the ones issuing this judgment.

* If anyone would say such a thing, it would certainly be the people Streitfeld did quote in the story, for he sought out only the worst of the company’s critics, including Scott Cleland, “a consultant for Google’s competitors” [cough] and Marc Rotenberg, self-styled protector of privacy at the so-called Electronic Privacy Information Center. Streitfeld also went to the attorneys general and a former FTC bureaucrat who went after Google. Nowhere in this story is there any sense of another side, let alone of context and perspective. That’s just not good reporting.

I have made it clear that I’m generally a fan of Google; I wrote a book about that. Nonetheless, I have frequently called Google’s recording of this data as its cars passed by — and this is my technical term — a fuckup. It was stupid. It was damaging to Google’s reputation. It played into the hands of the critics. That’s what I can’t stand.

I’m tired of media’s and governments’ attempts to raise undue panic about technology. Look at the silly, preemptive, and panicky coverage of Google Glass before the product is even out. A Seattle dive bar said it would ban Glass and media picked it up all over (8,000+ references at last check on Google News) — though the bar admitted, as any fool could see, that it was just a publicity stunt.

There are plenty of serious issues to discuss about protecting privacy and there is certainly a need to educate people about how to protect their privacy. But this simplistic, biased, anti-technology, panicked coverage does neither. I might expect this from other outlets. But I’m sad to see The Times join in.

Note that as part of its settlement, Google will educate people to close their open wifi networks. The Times found someone to ridicule even that when its ink would have been better put to telling people how to close their networks.

See also Phillip Dampier on the topic.

I see you: The technopanic over Google Glass

Google Glass isn’t available yet. Even so, the technopanic it’s inspiring is rising to full swivet. But I say there’s no need to panic. We’ll figure it out, just as we have with many technologies—from camera to cameraphone—that came before.

The greatest compilation of worries to date comes from Mark Hurst, who frets: “The most important Google Glass experience is not the user experience— it’s the experience of everyone else. The experience of being a citizen, in public, is about to change.” [His typography]

This is the fear we hear most: That someone wearing Glass will record you—because they can now—and you won’t know it. But isn’t that what we heard when cell phones added cameras? See The New York Times from a decade ago about Chicago Alderman Edward Burke:

But what Mr. Burke saw was the peril.
“If I’m in a locker room changing clothes,” he said, “there shouldn’t be some pervert taking photos of me that could wind up on the Internet.”
Accordingly, as early as Dec. 17, the Chicago City Council is to vote on a proposal by Mr. Burke to ban the use of camera phones in public bathrooms, locker rooms and showers.
His fear didn’t materialize. Why? Because we’re civilized. We’re not as rude and stupid—as perverted—as our representative, Mr. Burke, presumed us to be.

How will we deal with the Glass problem? I’ll bet that people wearing Glass will learn not to shoot those around them without asking or they’ll get in trouble; they’ll be scolded or shunned or sued, which is how we negotiate norms. I’d also bet that Google will end up adding a red light—the universal symbol for “You’re on!”—to Glass. And folks around Glass users will hear them shout instructions to their machines, like dorks, saying: “OK, Glass: Record video.”

That concern raised, Hurst escalates to the next: that pictures and video of you could be uploaded to Google’s servers, where they could be combined with facial recognition and the vastness of data about you. Facebook can’t wait to exploit this, he warns. But this is happening already. Every photo on my phone is automatically uploaded to Google; others do likewise to Facebook, each of which has facial recognition and information about us. Hurst acknowledges that we’re all recorded all day in public — remember: it is public — by security cameras. But the difference here, he argues, is that this data is held by companies. Big companies + Big Data = Big problems, right? That’s the alarm Siva Vaidhyanathan raises.

But what’s to investigate? Should governments have investigated Kodak cameras when they came out? Well, Teddy Roosevelt did briefly ban cameras in Washington parks. In 2010, Germany’s minister of consumer protection, Ilse Aigner, decreed that tying facial recognition to geolocation would be “taboo”—though one could certainly imagine such a combination being useful in, for example, finding missing children. To ban or limit a technology before it is even implemented and understood is the definition of short-sighted.

Hurst also fears that the fuzz and the Feds could get all this data about us, these days even without warrants. I fear that, too—greatly. But the solution isn’t to limit the power of technology but to limit the power of government. That we can’t is an indication of a much bigger problem than cameras at our eyelids.

I agree with Hurst that this is worth discussing and anticipating problems to solve them. But let us also discuss the benefits alongside the perils, the change we should welcome balanced against the change we fear — the ability to get relevant information and alerts constantly, the chance to capture an otherwise-lost moment with a baby, another way to augment our own memories, and other opportunities not yet imagined. Otherwise, if we manage only to our fears, only to the worst case, then we won’t get the best case. And let’s please start here: We are not uncivilized perverts.

Yes, I’m dying to get a Google Glass and get my head around it and vice versa. But rest assured, I will ask you whether it’s OK to take a picture of you in private—just as I ask whether it’s OK to take or share your picture now or to tweet or blog something you say to me. We figured all that out. We will figure this out. We have before. No need to technopanic.


Clippings from The New York Times

Cross-posted from Medium.

LATER: A good post from Jürgen Geuter that raises the point I also wrote about in Public Parts: let’s concentrate on the use of data rather than its gathering; if we regulate the gathering, we regulate what we’re allowed to know.