
Steptoe Cyberblog

Steptoe Cyberlaw Podcast – Interview with Orin Kerr

Posted in China, Cybersecurity and Cyberwar, Data Breach, International, Privacy Regulation, Security Programs & Policies

Our guest this week is Orin Kerr, professor of law at George Washington University and a well-known scholar in computer crime law and Internet surveillance.  Orin is our second return guest, and he demonstrates why, opining authoritatively on the future of NSA’s 215 program and the “mosaic” theory of Fourth Amendment privacy as well as joining in our news roundup.

We begin the podcast with this week in NSA, which again consists of news stories not written by Glenn Greenwald and the Snowdenistas.  Most prominent are the stories claiming that Snowden’s leaks contributed to US intelligence failures against ISIS, the decision by Justice and DNI officials to support Senator Leahy’s USA Freedom bill, and the release of a less-redacted version of Jack Goldsmith’s OLC opinion holding that the 215 program’s predecessor is not only legal but requires no FISA court approval, at least in time of war.  We find even more evidence that Snowden leaks harmed our ability to monitor ISIS, doubt that Senator Leahy’s bill will pass before the elections, and speculate about whether OLC has a macro that inserts its plenary Article II analysis into every opinion it produces.

Meanwhile, Yelp prevails in an extreme case claiming that the company suppresses bad reviews – but only for advertisers.  To which the Ninth Circuit says, “So what? It’s Yelp’s site.”  If only the aggrieved shopowner had sued under EU privacy law, which might require Yelp to forget those bad reviews.

Speaking of the right to be forgotten, I explain what I’ve learned by actually filing censorship demands of my own.  The headline?  Google will suppress European search results for anyone anywhere.  You don’t have to be a European to have your peccadilloes forgotten.  The full post is here.

And, speaking of foreign censorship of US information, LinkedIn is being accused of applying Chinese censorship to Chinese customers, even on LinkedIn’s U.S. site.  Three cases make a trend, and censoring the news that Americans read by threatening to hold their news suppliers liable abroad is definitely a trend.

This week in data breaches:  Home Depot is accused, and Senator Rockefeller calls on the company to respond.  Will “tokenization” solve the problem, at least for stores – or is that a solution only a lawyer could love?  We also look at the healthcare.gov hack and conclude that it’s been hyped.
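For readers wondering what “tokenization” actually involves: the merchant swaps the card number for a random stand-in, so a breach of the merchant’s systems yields nothing a thief can use.  Here is a minimal sketch of the idea (illustrative only; real payment tokenization follows PCI standards, with hardened, access-controlled vaults):

```python
import secrets

class TokenVault:
    """Toy token vault: maps random tokens back to card numbers (PANs).
    In practice this store is encrypted and tightly access-controlled."""

    def __init__(self):
        self._vault = {}  # token -> PAN

    def tokenize(self, pan: str) -> str:
        token = secrets.token_hex(8)  # random; carries no card information
        self._vault[token] = pan
        return token

    def detokenize(self, token: str) -> str:
        return self._vault[token]

vault = TokenVault()
token = vault.tokenize("4111111111111111")

# The merchant stores only the token; without the vault, a stolen
# token is worthless to an attacker.
assert token != "4111111111111111"
assert vault.detokenize(token) == "4111111111111111"
```

The lawyerly appeal is obvious: the sensitive data sits in one defensible place instead of every point-of-sale system.  Whether that solves the breach problem or just relocates it is the debate.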

In other regulatory action, Google takes a big hit for kids’ in-app purchases and Verizon agrees to pay $7.4 million for sending inadequate notices to customers.  But the class action bar isn’t likely to get rich off either case.

And Jason lays out the details of a Hasidic child abuse trial that has already produced not one but two noteworthy privacy rulings in New York.

Download the thirty-third episode (mp3).

Subscribe to the Cyberlaw Podcast here. We are also now on iTunes and Pocket Casts!

The views expressed in this podcast are those of the speakers and do not reflect the opinions of the firm.

Inside Europe’s Censorship Machinery

Posted in International, Privacy Regulation, Security Programs & Policies

Three months ago, I tried hacking Google’s implementation of Europe’s “right to be forgotten.”  For those of you who haven’t followed recent developments in censorship, the right to be forgotten is a European requirement that “irrelevant or outdated” information be excluded from searches about individuals.  The doctrine extends even to true information that remains on the internet.  And it is enforced by the search engines themselves, operating under a threat of heavy liability.  That makes the rules particularly hard to determine, since they’re buried in private companies’ decisionmaking processes.

So to find out how this censorship regime works in practice, I sent several takedown requests to Google’s British search engine, google.co.uk.  (Europe has not yet demanded compliance from US search engines, like Google.com, but there are persistent signs that it wants to.)

I’ve now received three answers from Google, all denying my requests.  Here’s what I learned.

The first question was whether Google would rule on my requests at all.  I didn’t hide that I was an American.  Google’s “right to be forgotten” request form requires that you provide ID, and I used my US driver’s license.  Would Google honor a takedown request made by a person who wasn’t a UK or EU national?

The answer appears to be yes.  Google’s response does not mention my nationality as a reason for denying my requests.  This is consistent with Europe’s preening view that its legal “mission civilisatrice” is to confer privacy rights on all mankind.  And it may be the single most important point turned up by this first set of hacks, because it means that lawyers all around the world can start cranking out takedown requests for Belorussian and Saudi clients who don’t like the way they look online.

But will the requests succeed?  The reasons Google gave for denying my requests tell us something about that as well.

1. I had asked that Google drop a link to a book claiming that in 2007 I had the “dubious honor” of being named the world’s “Worst Public Official” by Privacy International, beating out Vladimir Putin on the strength of my involvement with NSA and the USA Patriot Act.  It’s true that Privacy International announced I had won the award, but I argued that the book was inaccurate because in fact, I “had very little to do with either domestic surveillance activities at NSA or with the USA Patriot Act, and the trophy is a ‘dubious’ honor only in the sense that Privacy International never actually awarded it.”  (All true: I’ve been trying to collect the trophy for years but Privacy International has refused to deliver it.)

Google refused to drop the link, saying, “In this case, it appears that the URL(s) in question relate(s) to matters of substantial interest to the public regarding your professional life.  For example, these URLs may be of interest to potential or current consumers, users, or participants of your services.  Information about recent professions or businesses you were involved with may also be of interest to potential or current consumers, users, or participants of your services.  Accordingly, the reference to this document in our search results for your name is justified by the interest of the general public in having access to it.”

So it looks as though Google has adopted a rule that “information about recent professions or businesses you were involved with” is always relevant to consumers.  It would be impressive if the poor paralegal stuck with answering my email did enough online research to realize that I sell legal services, but I fear he or she may have thought that being the world’s worst public official was just one of the gigs I had tried my hand at in the last decade.

2. My second takedown request was a real long shot.  In an effort to see whether Google would let me get away with blatant censorship of my critics, I asked for deletion of a page from Techdirt that seems to be devoted to trashing me and my views; I claimed that it was “inappropriate” under European law to include the page in a list of links about me because it contains “many distorted claims about my political views, a particularly sensitive form of personal data.  The stories are written by men who disagree with me, and they are assembled for the purpose of making money for a website, a purpose that cannot outweigh my interest in controlling the presentation of sensitive data about myself.”

To American ears, such a claim is preposterous, but under European law, it’s not.  Google, thank goodness, still has an American perspective:  “Our conclusion is that the inclusion of the news article(s) in Google’s search results is/are – with regard to all the circumstances of the case we are aware of – still relevant and in the public interest.”

If I had to bet, I’d say that this rather vague statement is the one Google uses when other, more pointed reasons to deny relief don’t work.  But the reference to this page as a “news article” suggests that Google may be using a tougher standard in evaluating takedown requests for news media, a term that applies, at least loosely, to Techdirt.

3. The third denial was a little less interesting. I tried to get Google to take down an image showing me with a beard, arguing that it was out of date: “I don’t have a beard now. If you look at the picture, you’ll see why.”

But Google just gave me the same “professional life” rejection it gave to my “Worst Public Official” request.  I suspect that’s because the article that accompanies the picture is without question about my professional life; it’s published by the Blog of the Legal Times.  I can understand why Google would want to evaluate the complete link, not just the image, for this purpose but that’s going to make deletion of images harder, especially when a bad photo accompanies an unexceptionable article.

What next? With these results in hand, I’m preparing a second round of hacks to further explore the boundaries of the right to be forgotten, and I’ll resubmit my “does this search engine make me look fat?” request that Google take down a fourteen-year-old photo (unattached to a story) on the grounds that I weigh less now.

But to tell the truth, I’m having trouble finding stuff in my search history that is sufficiently inaccurate or outdated, especially now that we know Google is treating professional activities and news as per se relevant (at least if they’re “recent,” whatever that means).  So I hope that others will make their own searches and their own takedown requests and report what they find.  In fact, my second effort has shed some light on how Google decides someone is famous, but I’ll write that up separately, since this post is already long enough.

Steptoe Cyberlaw Podcast – Interview with David Hoffman

Posted in Cybersecurity and Cyberwar, Data Breach, International, Privacy Regulation, Security Programs & Policies

We’re back!  After a much needed hiatus, during which we shared wilderness paths with bison, woke up to wolf cries, and celebrated the value of ibuprofen, the Steptoe Cyberlaw Podcast is back on the net.

The hiatus allows us to cover this month in NSA, which is a good thing, because the Snowden News Machine is sputtering.  The most significant news was probably made by NSA itself, which released a redacted opinion of the FISC, shedding a lot of light on why the government abandoned its internet 215 program.  Judge Bates’s heavily redacted opinion criticizes the agency relentlessly for making promises about its technology and procedures that it just couldn’t keep.  My guess is that the agency heads and DOJ got so tired of explaining and apologizing to the court that they finally just killed the program.

In other NSA news, Snowdenista journalists try to make an issue of the fact that NSA has developed a search engine for metadata called ICREACH.  Public reaction: Well, duh.

More egregiously, Laura Poitras and Der Spiegel provided detailed information about US intelligence collection on Turkey in a scarcely veiled effort to sabotage the US-Turkey relationship – and to relieve the German government of the embarrassment of a leak showing that despite Angela Merkel’s claim that friends shouldn’t spy on friends, Germany spies enthusiastically on Turkey.

Mustn’t embarrass the German government, after all.  Its insistence on moral purity in intelligence collection is the main political/diplomatic support for what’s left of the Snowden campaign.  But that purity is looking a little sullied after revelations that German intelligence intercepted both Hillary Clinton and John Kerry as they carried out diplomatic efforts.

In other August news, the Microsoft case questioning the government’s authority to issue warrants for overseas data continued to evolve over the month, with the government greatly raising the stakes:  If Microsoft wants to appeal, the government says, its only option is to refuse compliance with the warrant and let the court hold it in contempt.  And it looks like the district court agrees.

Elsewhere, LinkedIn settles its data breach case for a relatively modest $1.25 million.  NIST seeks comment on how its Cybersecurity Framework is working out.  And a federal court in Massachusetts offers novel (and probably bad) advice to those hoping to avoid liability under federal computer abuse law:  Just make sure the computer’s been disconnected from the Internet before you attack it.  Finally, in what looks like an increasingly American-exceptionalist view, US courts continue to hold that search engines aren’t liable for the links they publish or their autocomplete suggestions.

Our guest for the week is David Hoffman, Intel’s Chief Privacy Officer and one of the most thoughtful privacy officials going, apart from his unaccountable fondness for the European Court of Justice’s decision on the right to be forgotten.  We debate the decision again, and I discover that David and I are famous by Google’s standards, while Michael is not.  I propose new ways to throw a legal spanner in the European data protection agencies’ works.

Download the thirty-second episode (mp3).



Are You a Google-Certified Public Figure?

Posted in China, International, Privacy Regulation

I am not a big fan of the EU’s “right to be forgotten,” but it has one silver lining.  I was noodling around with Google’s ever-more-baroque implementation of the principle this weekend, and I discovered that it offers a quick and cheap way to discover just how famous Google thinks you are.

Understanding how Google got into the “famous or not” business requires a dive into the search engine’s stutter-step implementation of the EU requirement.  In China, of course, when Google is required to suppress a link, it includes a warning on the results page, saying in essence that the results have been censored.  Google originally planned to do the same in response to European censorship.  But the European data protection censors didn’t like that kind of transparency.  They thought that the notice, even if it didn’t actually say what had been suppressed, would stigmatize Europeans who invoked the right to be forgotten.  (That, and it might remind searchers that their access to data was being restricted by European law.)

Google caved, mostly.  But it left in place a vestige of its original policy.  Now, it includes the following warning on its European results pages whenever any name is searched for:  “Some results may have been removed under data protection law in Europe.  Learn more.”

But that policy isn’t implemented across the board.  As Google’s global privacy counsel explained a month ago, “Most name queries are for famous people and such searches are very rarely affected by a removal, due to the role played by these persons in public life, we have made a pragmatic choice not to show this notice by default for known celebrities or public figures.”

So there you have it.  Somewhere, Google has an algorithm for deciding who is a celebrity or public figure and who is not.  To find out whether you made the grade, all you have to do is go to Google.co.uk, and type in your name.  Then look at the bottom of the page for the tag that says, “Some results may have been removed” etc.  If it’s not there, apparently you’re a public figure in Google’s eyes.  If it is, well, you’d better get working on your SEO techniques.
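The check described above can even be scripted against a saved copy of a results page.  A toy sketch (the notice text is the one quoted in this post; fetching Google results programmatically may violate its terms of service, so this operates only on HTML you’ve saved from your own browser):

```python
# The removal notice quoted earlier in this post.
NOTICE = "Some results may have been removed under data protection law in Europe"

def google_famous(results_html: str) -> bool:
    """Heuristic from the post: if a google.co.uk name search lacks the
    removal notice, Google apparently treats the name as belonging to a
    public figure.  Takes the saved HTML of a results page."""
    return NOTICE not in results_html

# Toy pages standing in for saved search results:
ordinary_page = (
    "<html><p>Some results may have been removed under data protection "
    "law in Europe. Learn more.</p></html>"
)
celebrity_page = "<html><p>No notice here.</p></html>"

assert google_famous(ordinary_page) is False   # notice shown: not famous
assert google_famous(celebrity_page) is True   # notice absent: Google-famous
```

Of course, this inherits all the roughness of Google’s own cut, as the census below shows.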

I found this when I searched for myself and didn’t see the “some results” tag-of-ignominy.  I thought that was weird, so I ran a few other names.  And it looks as though Google is making a cut based on number of name searches, but as Google’s counsel more or less admitted in his letter, the system is still pretty rough.  Maybe it will get better.  But why wait until it comes out of beta?  Knowing Google, that could be years.

Let’s ask now who makes it past Google’s equivalent of the red velvet rope.  Here’s my quick census:

Google-Famous:  Stewart Baker, Ben Wittes, Eugene Volokh, Jack Goldsmith, Orin Kerr, Kent Walker, Nicole Wong, Declan McCullagh, Peter Swire, Annie Anton, Dan Geer (cybersecurity guru), Jim Lewis (ditto), Raj De (NSA’s GC), Dianne Feinstein (Senate intelligence committee chair), David Hoffman (upcoming guest on the Steptoe Cyberlaw Podcast), Chris Soghoian, James X. Dempsey (CDT senior counsel, member of Privacy and Civil Liberties Oversight Board).

Not Google-Famous:  Nuala O’Connor (head of CDT), Michael Daniel (White House cybersecurity czar), Bob Litt (DNI’s general counsel), John P. Carlin (Assistant AG for National Security), Michael J. Rogers (chair of House intelligence committee), David Medine (chair of Privacy and Civil Liberties Oversight Board), Michael Vatis (cohost of the Steptoe Cyberlaw Podcast), Jason Weinstein (ditto), Ellen Nakashima (astonishingly prolific Washington Post national security reporter).

It’s pretty clear that Google is struggling with the old saw, “On the Internet, everyone is famous for fifteen people.”  But it’s still hard to see exactly where the line is being drawn.

For further irony, consider Max Mosley, who is internet-famous mainly for the video of his multi-hour, multi-hooker, sadomasochistic orgy and for his successful campaign to force Google to suppress links to those pictures.  His search results are being censored. But he’s now so famous that Google gives us no warning – not even that they might be bowdlerized.  That can’t make sense.

But why should I have all the fun?  Why not google yourself first (don’t pretend you won’t) and then your friends and acquaintances?  Then list any additional surprises in the comments.

Steptoe Cyberlaw Podcast – Debate with Harley Geiger

Posted in Cybersecurity and Cyberwar, Security Programs & Policies

The Steptoe Cyberlaw Podcast is on hiatus in August, but we’ve brought it back for a special appearance – a debate over Senator Leahy’s version of the USA Freedom Act sponsored by the Federalist Society.  Moderated by Christian Corrigan, the debate pitted me against Harley Geiger, Senior Counsel and Deputy Director for the Freedom, Security and Surveillance Project at the Center for Democracy and Technology.  Surprisingly, Harley and I manage to find some significant points of agreement, not only on the superiority of the Senate’s definition of “specific selection term” over the House’s but also on the need to deal with what ethical and conflicts standards should apply to special advocates appearing before the Foreign Intelligence Surveillance Court – a topic that neither the House nor the Senate bill now addresses.

Download the thirty-first episode (mp3).



As Evidence Mounts, It’s Getting Harder to Defend Edward Snowden

Posted in Cybersecurity and Cyberwar, Data Breach, Privacy Regulation, Security Programs & Policies

The evidence is mounting that Edward Snowden and his journalist allies have helped al Qaeda improve their security against NSA surveillance.  In May, Recorded Future, a web intelligence firm, published a persuasive timeline showing that Snowden’s revelations about NSA’s capabilities were followed quickly by a burst of new, robust encryption tools from al Qaeda and its affiliates.

This is hardly a surprise for those who live in the real world.  But it was an affront to Snowden’s defenders, who’ve long insisted that journalists handled the NSA leaks so responsibly that no one can identify any damage that they have caused.

In damage control mode, Snowden’s defenders first responded to the Recorded Future analysis by pooh-poohing the terrorists’ push for new encryption tools.  Bruce Schneier declared that the change might actually hurt al Qaeda: “I think this will help US intelligence efforts.  Cryptography is hard, and the odds that a home-brew encryption product is better than a well-studied open-source tool is slight.”

Schneier is usually smarter than this.  In fact, the product al Qaeda had been recommending until the leaks, Mujahidin Secrets, probably did qualify as “home-brew encryption.”  Indeed, Bruce Schneier dissed Mujahidin Secrets in 2008 on precisely that ground, saying “No one has explained why a terrorist would use this instead of PGP.”
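To see why cryptographers sneer at “home-brew,” consider a toy cipher of the kind amateurs keep reinventing: repeating-key XOR.  A single known-plaintext guess recovers the key instantly.  (This is purely illustrative; it makes no claim about how Mujahidin Secrets or any other actual tool worked.)

```python
def xor_cipher(data: bytes, key: bytes) -> bytes:
    """'Home-brew' repeating-key XOR: each byte is XORed with the key,
    cycling the key as needed.  Encryption and decryption are identical."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

key = b"secret"
plaintext = b"attack at dawn, attack at dawn"
ciphertext = xor_cipher(plaintext, key)

# Attack: XOR a guessed plaintext fragment against the start of the
# ciphertext; because XOR is self-inverse, this exposes the key directly.
guess = b"attack"
recovered = bytes(c ^ p for c, p in zip(ciphertext, guess))

assert recovered == key                          # key recovered instantly
assert xor_cipher(ciphertext, recovered) == plaintext
```

Well-studied designs like Twofish or the primitives inside GPG resist exactly this kind of shortcut, which is the point Schneier was making in 2008.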

But as a second Recorded Future post showed, the products that replaced Mujahidin Secrets relied heavily on open-source and proven encryption software.  Indeed, one of them uses Schneier’s own, well-tested encryption algorithm, Twofish.

Faced with facts that contradicted his original defense of Snowden, Schneier was quick to offer a new reason why Snowden’s leaks and al Qaeda’s response to them still wouldn’t make any difference:

Whatever the reason, Schneier says, al-Qaida’s new encryption program won’t necessarily keep communications secret, and the only way to ensure that nothing gets picked up is to not send anything electronically.  Osama bin Laden understood that.  That’s why he ended up resorting to couriers.

Upgrading encryption software might mask communications for al-Qaida temporarily, but probably not for long, Schneier said…. “It is relatively easy to find vulnerabilities in software,” he added.  “This is why cybercriminals do so well stealing our credit cards.  And it is also going to be why intelligence agencies are going to be able to break whatever software these al-Qaida operatives are using.”

So, if you were starting to think that Snowden and his band of journalist allies might actually be helping the terrorists, there’s no need to worry, according to Schneier, because all encryption software is so bad that NSA will still be able to break the terrorists’ communications and protect us.  Oddly, though, that’s not what he says when he isn’t on the front lines with the Snowden Defense Corps.  In a 2013 Guardian article entitled “NSA surveillance: A guide to staying secure,” for example, he offers very different advice, quoting Snowden:

“Encryption works.  Properly implemented strong crypto systems are one of the few things that you can rely on.”

Schneier acknowledges that hacking of communication endpoints can defeat even good encryption, but he’s got an answer for that, too:

Try to use public-domain encryption that has to be compatible with other implementations. …Since I started working with Snowden’s documents, I have been using GPG, Silent Circle, Tails, OTR, TrueCrypt, BleachBit, and a few other things I’m not going to write about.…

The NSA has turned the fabric of the internet into a vast surveillance platform, but they are not magical.  They’re limited by the same economic realities as the rest of us, and our best defense is to make surveillance of us as expensive as possible.

Trust the math.  Encryption is your friend.  Use it well, and do your best to ensure that nothing can compromise it.  That’s how you can remain secure even in the face of the NSA.

It sounds as though al Qaeda took Bruce Schneier’s advice to heart, thanks to leaks from Edward Snowden – even if Schneier is still doing everything he can to avoid admitting it.

UPDATE:  The description of Recorded Future was changed at the request of the company, which said, “While this may seem like splitting hairs, in the world of data analysis software “predictive analytics” has specific technical meaning which implies something different.  We use the term web intelligence to reduce this confusion.”

More On The Microsoft Search Warrant Case

Posted in International, Privacy Regulation, Security Programs & Policies

Few people are as widely cited as Orin Kerr when it comes to the Stored Communications Act, so in the Microsoft search warrant case it’s nice to have him as an ally – even (or perhaps especially) an ally who came to our side a bit reluctantly.

Earlier, I posted my response to Orin’s first two blog posts about the Microsoft case, pointing out where we agreed and disagreed.  Orin has now  “Fisked” my response (dissecting it and replying point by point), but try as he might, he can’t wriggle free of our embrace.  I won’t belabor the points on which we differ, but will just emphasize two key points of agreement.

First, Orin agrees that a government seizure of the emails would occur when Microsoft copied them in Ireland.  This is a critical point, because the government contends that the statute would not be applying extraterritorially since no search or seizure would be occurring outside the United States.  In the government’s view, “The warrant is served upon the provider here; the provider must produce its records to a law enforcement agent here; and if the provider fails to do so, the provider is subject to court sanction imposed here. There is no extraterritorial application of domestic law under these circumstances.”  But if, as Orin acknowledges, a seizure would occur outside the US at the moment Microsoft copied the emails in Ireland in order to comply with the warrant, then the government’s argument that no relevant action would take place outside the United States falls to pieces.

In his latest post, Orin seems to walk back from his earlier acknowledgment that a seizure would occur in Ireland.  He asserts now that a “Fourth Amendment seizure” would occur only if “the target is a US person with Fourth Amendment rights”—something we don’t yet know.  But this misses the point.  The question here is not whether the seizure would violate someone’s Fourth Amendment rights.  The relevant issue is whether the warrant is directing that an action take place outside the United States.  Orin agrees that it is.  Since the warrant is indeed directing that an action take place outside the United States, then the government is clearly seeking to have the SCA apply extraterritorially.

The US government stands alone, then, in thinking that nothing relevant would be occurring on Irish soil.  Certainly the European Union, the Irish government, and the owner of the email account would all agree with Orin and Microsoft (and Verizon and the Electronic Frontier Foundation) that the warrant is directing that action take place in Ireland.

Here’s an analogy: Imagine that the government wanted to obtain the contents of a file cabinet located in back of an office building in Mexico.  Instead of asking the Mexican authorities to seize and transfer the files to the US, or sending in a team of DEA agents under cover of night to steal the files and bring them home, the government decided on a third way: hiring a drone operator in Texas to send a drone over the border into Mexico, where the drone deployed a mechanical arm to lift up the cabinet and bring the files back to Texas for examination.  Would anyone seriously contend that no seizure took place in Mexico, just because the drone was operated by a person sitting in Texas and the files weren’t examined until they were in Texas?  I doubt it.  The answer shouldn’t be any different here just because the relevant evidence is in electronic form.  The evidence still has a physical location, and it has to be taken from that location and brought back to the US.

Second, Orin agrees that “the current version of Rule 41” does not “authorize[] warrants for searches abroad” except in rare and irrelevant circumstances (involving US diplomatic posts and the like).  That shouldn’t be surprising, given the plain language of the Rule and ample precedent saying that Rule 41 doesn’t authorize warrants for searches or seizures abroad.  So that brings us back to the question of whether the SCA clearly authorizes warrants for searches and seizures abroad, and, as I noted in my earlier post, Orin agrees that it does not.

Finally, one other point is worth mentioning.  Orin originally argued that if Microsoft won this case, the government could turn around and simply use a subpoena to get the same emails, under the Bank of Nova Scotia line of cases holding that grand jury subpoenas can be used to obtain company records held abroad.  This, he suggested, would result in less privacy protection for Microsoft’s subscribers than requiring the government to establish probable cause and get a warrant from a judge.  In response to my argument that it seemed unlikely that the government would or could use a subpoena to get a person’s emails abroad (which are in no sense Microsoft’s own business records), Orin now takes a new tack.  He suggests that the government could, instead of obtaining just a search warrant, use “a combined subpoena and warrant”—“(1) a grand jury subpoena ordering the provider to transport a copy of the emails to the grand jury inside the US together with (2) a warrant ordering the provider to disclose the emails to investigators.”  This is an interesting idea, but it doesn’t change the fact that the government has apparently never sought to use a subpoena to force a company to bring back into the US anything other than its own business records, and there’s no authority indicating that it could do so.

Moreover, even if it were viable, Orin’s alternative approach would actually provide more privacy protection than what the government is currently trying to do, not less.  For it would require the government both to use a subpoena—which would require passing the Bank of Nova Scotia balancing test where production would violate foreign law—and to obtain a warrant from a judge, after proving probable cause.  That’s what some might call a “belt and suspenders” approach.

Metaphors aside, ultimately this case is one of statutory interpretation.  Did Congress clearly express an intent that the SCA permit the government to use a warrant to obtain emails located outside the US?  The litigants and the bloggers have had their say.  Now it’s up to the courts.

Verizon’s Response to Orin Kerr’s Posts on the Microsoft Search Warrant Case

Posted in International, Privacy Regulation, Security Programs & Policies

As our readers and podcast listeners know, Steptoe filed an amicus brief for Verizon Communications Inc. in the case in which Microsoft has moved to vacate a search warrant seeking emails located in Ireland.  The issue in the case is whether a US search warrant can be used to obtain the content of emails stored outside the United States.  Microsoft and Verizon have argued that neither Rule 41 of the Federal Rules of Criminal Procedure (which outlines the rules governing search warrants generally) nor the Stored Communications Act (which sets out the rules governing access by law enforcement to electronic communications) authorizes a search warrant to be used to obtain emails stored abroad.

Orin Kerr has blogged (here and here) about the case, taking issue with some of the arguments raised by both sides, but ultimately agreeing with the companies’ central contention that the Stored Communications Act does not expressly address the question of whether warrants can be used to obtain communications located outside the United States.  Orin’s concession should resolve the case (in Microsoft’s favor), since statutes are presumed not to apply extraterritorially unless Congress expressly says otherwise.  My response to Orin’s posts is posted on The Volokh Conspiracy (part of The Washington Post) here, and is repeated in full below:

Why the government cannot use a search warrant to get e-mail located outside the US — unless Congress changes the law.

Orin Kerr has written two interesting posts about some of the legal issues raised by a case in which Microsoft has moved to vacate a US search warrant for a subscriber’s e-mails that are located in Ireland.  Microsoft’s central argument is that a US warrant cannot be used to obtain emails located abroad because warrants have no extraterritorial reach.  Steptoe filed an amicus brief in the case on behalf of Verizon Communications Inc., so I thought it would be helpful to provide our perspective on the legal issues.  (I won’t discuss here the profound business and policy implications of the government’s position.  For that, see the Verizon brief.)

While we disagree with Orin on some of his subsidiary points (as discussed below), we very much agree with the central thrust of his first post:  “[T]he Stored Communications Act just wasn’t drafted with the problem of territoriality in mind.  It assumed a US Internet with US servers and US users.”

This recognition that Congress wasn’t thinking about extraterritoriality when it passed the SCA is the crux of the Microsoft case.  There is a well-established doctrine called the “presumption against extraterritoriality,” which holds that “legislation of Congress, unless a contrary intent appears, is meant to apply only within the territorial jurisdiction of the United States.” Morrison v. Nat’l Austl. Bank Ltd., 561 U.S. 247, 248 (2010).  Thus, a statute is presumed not to have extraterritorial application unless Congress has “clearly expressed” its “affirmative intention … to give [the] statute extraterritorial effect.”  Id.  Orin’s acknowledgment that the SCA does not address the extraterritoriality issue should be the end of the story.  As the Supreme Court said in Morrison:  “When a statute gives no indication of an extraterritorial application, it has none.” Id.

Orin doesn’t discuss the presumption against extraterritoriality.  But it is at the core of the Microsoft case.

The government has sought to sidestep the presumption against extraterritoriality by arguing that the statute would not actually be applying outside the United States in this case, even though the e-mails it seeks are in Ireland, because the warrant was served on Microsoft in the United States and because the e-mails wouldn’t actually be seized or searched until they were in the government’s hands in the United States.  The government cites no cases supporting this novel argument.  In any event, the argument ignores two key facts:  Microsoft’s computers would be searched when Microsoft ‒ acting at the behest, and as an agent, of the government ‒ looks for the responsive e-mails in Ireland, and those e-mails would be seized in Ireland when they are copied.  On this point, Orin agrees that “the seizure would be occurring outside the United States.”  As a result, it seems undeniable that at least a seizure would be occurring in Ireland, meaning that the search warrant would indeed be applying extraterritorially.

Orin raises an argument different from the government’s, asserting that “recent amendments to [Federal] Rule [of Criminal Procedure] 41 … expressly allow extraterritorial warrants.”  But these amendments permit (in certain limited circumstances, such as terrorism investigations) only searches of property outside of the issuing court’s district.  They say nothing about searches or seizures of property located outside of the country.  Not surprisingly, then, courts have uniformly held that Rule 41 does not authorize searches or seizures outside of the territory of the United States.  See, e.g., US v. Odeh, 552 F.3d 157, 169 (2d Cir. 2008).  Moreover, the Supreme Court rejected a proposed amendment to Rule 41 that would have allowed warrants for searches and seizures of property located outside the United States.  See Fed. R. Crim. Proc. 41, Notes of Advisory Committee on Rules ‒ 1990 Amendment.  Accordingly, the US government has not advanced the argument that Rule 41 authorizes a search warrant for e-mails (or other property) located outside the United States.

There is one narrow exception ‒ Rule 41 authorizes warrants for searches conducted in United States territories, diplomatic missions, and residences owned by the US and used by diplomatic personnel outside the US.  But this is not what Orin seems to be talking about, and it is not what the Microsoft case is about.  Moreover, this exception shows that Congress knows how to make a warrant apply outside of the US when it wants to, which underscores that it did not do so for any other circumstances in Rule 41.

Thus, neither the SCA nor Rule 41 authorizes warrants for searches or seizures of e-mails (or anything else) outside of the United States.  The presumption against extraterritoriality therefore comes into play, and Microsoft wins.  Case closed.

A second, two-hundred-and-ten-year-old doctrine holds that “an act of Congress ought never to be construed to violate the law of nations if any other possible construction remains.”  Murray v. Schooner Charming Betsy, 6 U.S. (2 Cranch) 64, 118 (1804).  The Supreme Court has repeatedly re-affirmed this principle, stating that US laws should be interpreted “to avoid unreasonable interference with the sovereign authority of other nations.”  F. Hoffman-La Roche Ltd. v. Empagran S.A., 542 US 155, 164 (2004).  Orin doesn’t discuss this Charming Betsy doctrine, but it provides another, independent reason that the SCA should not be construed as authorizing warrants for e-mails located abroad.  For if it were construed in this manner, it could easily lead to conflicts with the laws of the nations where the e-mails are stored.

That is clearly the case here.  For example, EU officials such as Viviane Reding, the Vice-President of the European Commission, have stated that if Microsoft disclosed the e-mails in Ireland, it would run afoul of the EU Data Protection Directive.  It would also run counter to the Mutual Legal Assistance Treaty (MLAT) between the US and Ireland, which presupposes that the US will request assistance from the Irish government when it wants to get its hands on evidence located in Ireland.

So the case for Microsoft seems pretty clear.  Orin goes on to argue that if Microsoft wins, the government could just turn around and use a subpoena to get the same data, which might result in less privacy protection for e-mails than a probable-cause based warrant.  There are two problems with this argument.

First, it strikes me as doubtful that the government would actually try to use a subpoena to obtain the content of e-mails located abroad.  After all, the Justice Department has now given up using anything but warrants to get communications content in general, following the decisions of the Sixth Circuit (in US v. Warshak, 631 F.3d 266 (6th Cir. 2010)) and other courts holding that the Fourth Amendment requires the government to use a warrant to get any communications content.  The Attorney General and other Justice Department officials have also said the Department favors amending ECPA to require a warrant to obtain any communications content as part of a criminal investigation.  Thus, even if the Fourth Amendment’s warrant requirement doesn’t apply to property located outside the US, it seems doubtful to me that the government would try to use a subpoena to obtain e-mail content because of the privacy ramifications (Fourth Amendment aside).  Moreover, using a subpoena, based on a mere relevance standard, would only worsen the international uproar caused by the government’s attempt to unilaterally obtain communications stored abroad.  And it would be sure to generate intense opposition from US communications and cloud service providers.

Second, it is not at all clear that the government could use a subpoena to obtain the content of e-mails that have been in electronic storage for 180 days or less.  (The SCA allows certain other communications content to be obtained with a subpoena, but those are not at issue in this colloquy, so let’s set them aside.)  Orin asserts that if the court agrees that the SCA doesn’t authorize an extraterritorial warrant, then the SCA’s legal protections ‒ in particular, the statutory requirement to use a warrant to get e-mail content ‒ “necessarily … don’t apply,” either.  I don’t think that’s right.  Neither Rule 41 nor the SCA expressly authorizes warrants to be used to get data abroad, so the presumption against extraterritoriality and the Charming Betsy doctrine kick in.  But Section 2702 of the SCA does expressly say that an electronic communications service provider may not knowingly divulge communications content except as authorized by Section 2703 (and a few other provisions), and Section 2703 requires the government to get a warrant.  Section 2702 may not apply to communications providers located outside the United States.  But it clearly does apply to providers inside the United States.  So the SCA legally prohibits Microsoft from divulging any communications content to the government without a warrant.

Moreover, the cases in which the government has been able to get information stored abroad by serving a company in the United States all involve the business records of that company or an affiliate under that company’s control.  I’m not aware of any case in which a court has permitted the government to use a subpoena to a US company to obtain property belonging to someone else or the content of another person’s communications.  Thus, as Microsoft suggests in its reply brief, the government might be able to use a subpoena to a US bank to obtain the business records of the bank’s subsidiary in Switzerland, but it could not use it to obtain the contents of a customer’s safe deposit box there.  It might be able to use a subpoena to a US hotel company to get the records in France concerning one of the company’s properties in Paris, but it could not use one to obtain the belongings of a hotel guest from his room in that Paris hotel.  Similarly, the government might be able to use a subpoena to obtain an e-mail provider’s own business records stored in Dublin (if those records are under the US provider’s custody, possession, or control and the balancing test set out in the Restatement (Second) of Foreign Relations Law of the United States weighs in favor of the government).  But I don’t know of any authority that holds that a subpoena can be used to obtain the content of a subscriber’s e-mails stored abroad.

Does this smack of the providers’ wanting to have it both ways ‒ that is, the SCA doesn’t authorize warrants to obtain the content of e-mails abroad, but it forbids providers from disclosing e-mails in response to a subpoena, regardless of where the e-mails are located?  It may seem that way.  But all it really means is that Congress hasn’t addressed the extraterritoriality issue in the SCA.  This leads us back to the point Orin and I agree on:  if Congress wants search warrants to apply to data stored abroad, despite the negative impact that would have on the business of American e-mail and cloud providers and on the United States’ relationship with other countries, and despite the fact that the government can usually get the information it wants through assistance from foreign law enforcement, it needs to amend the statute to say so expressly.  Balancing the negative effects on business and foreign relations against the needs of law enforcement is a quintessential policy decision that should be made by Congress, not by a prosecutor or judge in the Southern District of New York.

Steptoe Cyberlaw Podcast – Interview with Richard Danzig

Posted in Cybersecurity and Cyberwar, Data Breach, International, Privacy Regulation, Security Programs & Policies

Wow, that was quick. I haven’t even turned on the air conditioning at home yet, and already we’ve done the last podcast of the summer.  The Steptoe Cyberlaw Podcast will go on hiatus for August and return after Labor Day!

This week in NSA: The Senate Judiciary Committee, the most anti-NSA of the Senate committees with jurisdiction over the agency, says that it has come up with a new version of the section 215 reform bill passed by the House.  Chairman Leahy says his draft does a better job of protecting privacy than the House bill, and privacy activists agree.  Ordinarily that would mean it’s worse for security, but based on press reports, the bill may actually be an improvement on the lame “selection term” menu proposed by the House.  (And, now that I’ve seen the Leahy bill, that prediction turns out to be right; its definition of “specific selection term” is much more workable.)

Looking distinctly like the proprietor of a fireworks display whose finale fizzled, Glenn Greenwald strains ever harder to find outrage in the quotidian.  NSA, he discloses, has a limited intelligence sharing arrangement with Saudi Arabia.  The Saudis, of course, have a lot of terrorists and jihadists, some of whom have also attacked the United States (Osama bin Laden, to name one).  But none of that matters to Greenwald, who seems to think we should learn about terrorists only from countries with no human rights violations.

The effort to cripple NSA’s overseas intelligence collection program almost as thoroughly as its section 215 program has picked up four Senators – Tester, Begich, Merkley, and Walsh – who sent a letter to that effect.

In other news: Sony settles its traumatic, service-suspending hack for $15 million worth of free stuff for users.  Hats off to Sony’s GC, who struck a brilliant deal.

The 9/11 Commission issues a soft endorsement of “direct action” by private parties who are hacked. Stewart Baker celebrates.

The phenomenon of dueling celebrity magistrates continues.  Is this the first time someone outside of the FISC has felt obliged to write an opinion granting a search warrant?  How sad is that?

Vladimir Putin signs legislation to keep Russian data in Russia.  And the Russian government offers a bounty for attacks on the TOR network.

The Washington Post tells us that the FBI’s “Going Dark” problem is real, quoting our own Jason Weinstein.   We’re sure there’s a drinking game to be built around the President’s plan to talk about drone privacy, but we’re not imaginative enough to find it.  And Congress votes to end DMCA protection for locked cell phones.

Our guest for the day is the eminent Richard Danzig, former Secretary of the Navy, and a defense intellectual’s defense intellectual.  Richard has at last turned his attention to cyber insecurity, with a paper entitled “Surviving on a Diet of Poisoned Fruit.”

Richard’s view is that we can’t treat cyber insecurity as a technical problem, or assume that there are technical solutions.  He advocates for limiting the use of digital technology when it comes to managing critical national security systems, and he defines critical national security assets in a refreshingly direct way.  If the deliberate crashing of a digital system could dissuade the US government from pursuing its national security interests, that system is critical to national security. Stewart wonders if we aren’t already past that point.

Richard argues for international norms limiting cyberattacks, focusing on those that would destabilize mutual assured nuclear destruction.  Stewart expresses doubts about the durability and verifiability of such norms.  We agree on the need for deterrence but not on the mechanisms.

It’s a great workout for cybersecurity wonks, and a good way to ease into Richard’s thoughtful paper.

Download the thirtieth episode (mp3).

Subscribe to the Cyberlaw Podcast here. We are also now on iTunes and Pocket Casts!

The views expressed in this podcast are those of the speakers and do not reflect the opinions of the firm.

9/11 Commission Gingerly Embraces “Direct Action” Against Hackers

Posted in Cybersecurity and Cyberwar, Security Programs & Policies

I’ve long been an advocate for fewer restraints on how the private sector responds to hacking attacks.  If the government can’t stop and can’t punish such attacks, in my view the least it could do is not threaten the victims with felony prosecution for taking reasonable measures in self-defense.  I debated the topic with co-blogger Orin Kerr here.  I’m pleased to note that my side of the debate continues to attract support, at least from those not steeped in the “leave this to the professionals” orthodoxy of the US Justice Department.

The members of the 9/11 Commission, who surely define bipartisan respectability on questions of national security, have issued a tenth anniversary update to the Commission’s influential report.  The update repeats some of the Commission’s earlier recommendations that have not been implemented.  But it also points to new threats, most notably the risk of attacks on the nation’s computer networks.  No surprise there, but I was heartened to see the commissioners’ tentative endorsement of private sector “direct action” as a response to attacks on private networks:

Congress should also consider granting private companies legal authority to take direct action in response to attacks on their networks.

This “should consider” formulation avoids a full embrace of particular measures, and in that respect it parallels another establishment endorsement of counterhacking.  The Commission on Theft of American Intellectual Property said in its 2013 report:

Finally, new laws might be considered for corporations and individuals to protect themselves in an environment where law enforcement is very limited.  Statutes should be formulated that protect companies seeking to deter entry into their networks and prevent exploitation of their own network information while properly empowered law-enforcement authorities are mobilized in a timely way against attackers.  Informed deliberations over whether corporations and individuals should be legally able to conduct threat-based deterrence operations against network intrusion, without doing undue harm to an attacker or to innocent third parties, ought to be undertaken.

If repeated tentative embraces are the way new policy ideas become respectable, “direct action” is well on its way.  The 9/11 Commission deserves credit, not just for moving the debate but for contributing a label that gives counterhacking a kind of anarcho-lefty frisson.