Steptoe Cyberblog

Steptoe Cyberlaw Podcast – Interview with Bruce Schneier

Posted in China, Cybersecurity and Cyberwar, Data Breach, International, Privacy Regulation, Security Programs & Policies

Bruce Schneier joins Stewart Baker and Alan Cohn for an episode recorded live in front of an audience of security and privacy professionals.  Appearing at the Privacy.Security.Risk. 2015 conference, sponsored by the IAPP and the Cloud Security Alliance, Schneier talks through recent developments in law and technology.

The three of us stare into the pit opened by an overwrought (and overdue and overweening) European Court of Justice advisor.  If the European Court of Justice follows his lead (and what seem to be its inclinations), we could face a true crisis in transatlantic relations.

Steptoe Cyberlaw Podcast – Interview with Jim Lewis

Posted in China, Cybersecurity and Cyberwar, Data Breach, International, Privacy Regulation, Security Programs & Policies

Cyberlaw negotiations are the theme of episode 82, as the US and China strike a potentially significant agreement on commercial cyberespionage and Europeans focus on tearing up agreements with the US and intruding on US sovereignty.

Our guest for the episode is Jim Lewis, a senior fellow and director of the Strategic Technologies Program at the Center for Strategic and International Studies.  Most importantly, Jim is one of the most deeply informed and insightful commentators on China and cybersecurity.  He offers new perspectives on the Obama-Xi summit and what it means for cyberespionage.

Steptoe Cyberlaw Podcast – Interview with Margie Gilbert

Posted in China, Cybersecurity and Cyberwar, Data Breach, International, Privacy Regulation, Security Programs & Policies

Episode 81 features China in the Bull Shop, as the White House prepares for President Xi’s visit and what could be ugly talks on cyber issues.  Our guest commentator, Margie Gilbert, is a network security professional with service at NSA, CIA, ODNI, Congress, and the NSC.  Now at Team Cymru, she’s able to offer a career’s worth of perspective on how three Presidents have tried to remedy the country’s unpreparedness for network intrusions.

In the news roundup, there’s a high likelihood that President Obama will be accusing, and President Xi denying, China’s role in cyberespionage.  You might say it’s a “he said, Xi said” issue.  Alan Cohn and I debate whether the US should settle for a “no first use” assurance to protect critical infrastructure in peacetime.

Steptoe Cyberlaw Podcast – Hostfull

Posted in China, Cybersecurity and Cyberwar, Data Breach, International, Privacy Regulation, Security Programs & Policies

Still trying to dig out from under our hiatus backlog, we devote episode 80 to our regulars.  We’ll bring back a guest next week.  This week it’s a double dose of Jason Weinstein, Michael Vatis, Stewart Baker, and Congress-watcher Doug Kantor.

Michael offers an analysis of the Second Circuit’s oral argument in the Microsoft lawsuit over producing data stored in Ireland.  The good news: it was a hot bench, deeply engaged, that let oral argument go to triple the usual length.  The bad news for Microsoft:  by far the hottest member of the panel was Judge Lynch, who made no secret of his deep opposition to Microsoft’s arguments.

I offered a skeptical view of the US-EU umbrella “deal” on exchange of law enforcement data and the “Judicial Redress Act” that Congress seems ready to rush through in support of the agreement.  The problem?  It looks as though DOJ sold out the rest of government and much of industry.  Justice promised to make the one change in US law the EU wants, granting Europeans a right of action under the Privacy Act, in exchange for, well, pretty much nothing except a bit of peace of mind for DOJ.  Since the EU is more a receiver than sender of data, it already has a lot of leverage in data exchanges and there haven’t been many attempts to thwart the exchange of strictly criminal evidence.  What the US really wants is for the EU to stop threatening the Safe Harbor, to stop penalizing US companies to pressure the US government about its use of data, and to guarantee that it isn’t holding the US to higher privacy standards than it imposes on EU governments.  The DOJ-led negotiations got none of those concessions.  And I’m willing to bet that the EU didn’t even give up the right to bitch, moan, and cut off data flows in the future if it doesn’t like how the umbrella applies.  (On top of everything, the agreement is still under wraps, so the rush to praise and implement it is particularly imprudent.)

Michael and Jason deliberate on why Justice would obtain a text intercept order for Apple and then not react to the utterly predictable claim by Apple that it had no way to implement such an intercept.  We note the further irony of Apple simultaneously defying the US government on privacy grounds while rushing to comply with Russia’s anti-privacy localization law.

The administration seems unable to impose sanctions on China’s cyberattackers or to stop talking about imposing sanctions on China’s cyberattackers.  Sounds like a job for Stewart Baker!  I offer my proposed sanctions for the Github attack, already laid out in detail here and here.

One barrier to sanctions may be the fear of hitting the wrong target, and in that regard, the Justice Department is wearing a full coat of egg after dropping its indictment of a purported Chinese spy amid allegations that it had simply misunderstood the technology in question.

Doug Kantor offers a detailed and surprisingly upbeat assessment of the information-sharing bills’ chances for passage later this year.  We also alert defense contractors to an expanded breach disclosure obligation.

And, finally, we decide to crowdsource the decision whether to keep our current theme music or to adopt one of three challengers.  One of the candidates gets a heart-tugging endorsement from Jason that you’ll have to listen to the podcast to hear.  Here’s the link to listen and vote for your favorite: www.steptoe.com/cybermusic.

The Cyberlaw Podcast is now open to feedback.  Send your questions, suggestions for interview candidates, or topics to CyberlawPodcast@steptoe.com.  If you’d like to leave a message by phone, contact us at +1 202 862 5785.

Download the eightieth episode (mp3).

Subscribe to the Cyberlaw Podcast here. We are also now on iTunes and Pocket Casts!

The views expressed in this podcast are those of the speakers and do not reflect the opinions of the firm.

Steptoe Cyberlaw Podcast – Interview with Peter Singer

Posted in China, Cybersecurity and Cyberwar, Data Breach, International, Privacy Regulation, Security Programs & Policies

The Cyberlaw Podcast is back from hiatus with a bang.  Our guest is Peter Singer, author of Ghost Fleet, a Tom Clancy-esque thriller designed to illustrate the author’s policy and military chops.  The book features a military conflict with China that uses all the weapons the United States and China are likely to deploy in the next decade.  These include China’s devilishly effective sabotage of the US defense supply chain, Silicon Valley’s deployment of a letter of marque, and some spot-on predictions of the likely response of our sometime allies.

Episode 79 also recaps some of the most significant cyberlaw developments of the past month.

First, to no one’s surprise, the cybersecurity disaster just keeps getting worse, and the climate for victims does too:  breach losses are being measured in the tens or even hundreds of millions of dollars, with a networking company losing $30 million and unlawful insider trading profits reaching $100 million.

Meanwhile, the courts are less than sympathetic.  The Seventh Circuit cleared the way for a breach suit against Neiman Marcus, while the FTC and the Third Circuit were kicking Wyndham around the courtroom and down the courthouse steps.  We wonder what exactly Wyndham did to earn the court’s ire.

Next, we savor the “long, withdrawing roar” of 215 metadata litigation, as privacy groups try with ever more desperation to pile a judicial ruling on top of their Congressional win.  We ask what the hell the DC Circuit’s splintered ruling means, and whether Judge Leon is really determined to jam still more exclamation points into the case despite its imminent mootness.  (Answer from Judge Leon:  Hell, yes!!!)  Privacy groups are agitating for the Second Circuit to issue an injunction against the program.  We ask:  is that as dumb and violative of ordinary judicial procedures as it sounds?  Stay tuned.

Finally, the messy fight over location data and the warrant requirement just won’t die, and may be metastasizing.  Judge Koh and the Fourth Circuit say a warrant is needed for location data, revitalizing a circuit conflict that looked as though it was curing itself.  Meanwhile, DOJ gets in on the act, declaring as a matter of policy that federal use of stingrays needs a warrant.  The result is that thousands of Baltimore cases could be at risk.  Luckily, Jason Weinstein hints, most of those cases wouldn’t have yielded a conviction.

The Cyberlaw Podcast is now open to feedback.  Send your questions, suggestions for interview candidates, or topics to CyberlawPodcast@steptoe.com.  If you’d like to leave a message by phone, contact us at +1 202 862 5785.

Download the seventy-ninth episode (mp3).

Subscribe to the Cyberlaw Podcast here. We are also now on iTunes and Pocket Casts!

The views expressed in this podcast are those of the speakers and do not reflect the opinions of the firm.

The GitHub Attack and Internet Self-defense

Posted in China, Cybersecurity and Cyberwar, Data Breach, International, Privacy Regulation, Security Programs & Policies

In an earlier post I talked about how the Chinese government has used its “Great Firewall” censorship machinery on an expanded list of targets – from its own citizens to ordinary Americans who happen to visit Internet sites in China.  By intercepting the ad and analytics scripts that Americans downloaded from Chinese sites, the Chinese government was able to infect the Americans’ machines with malware.  Then the government used that malware to create a “Great Cannon” that aimed a massive number of packets at the US company GitHub.  The goal was to force the company to stop making news sites like the New York Times and Greatfire.org available to Chinese citizens.  The Great Cannon violated a host of US criminal laws, from computer fraud to extortion. The victims included hundreds of thousands of Americans.  And to judge from a persuasive Citizen Lab report, China’s responsibility was undeniable.  Yet the US government has so far done nothing about it.

US inaction is thus setting a new norm for cyberspace.  In the future, it means that many more Americans can expect to be attacked in their homes and offices by foreign governments who don’t like their views.

The US government should be ashamed of its acquiescence.  Especially because the Great Cannon is surprisingly vulnerable. After all, it only works if foreigners continue to visit Chinese sites and continue to download scripts from Chinese ad networks.  They supply the ammunition that the Great Cannon fires.  If no one from outside China visits Chinese search sites or loads Chinese ads, the Cannon can’t shoot.
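To make the “stop supplying ammunition” point concrete, here is a minimal, hypothetical sketch, in TypeScript with a placeholder domain and hash, of how a site operator outside China who embeds a Chinese analytics script could blunt in-transit tampering: request the script over HTTPS and pin its expected contents with Subresource Integrity, so a swapped copy simply will not run. This illustrates standard browser features rather than any particular site’s setup, and SRI is only practical for scripts whose contents are stable enough to hash in advance.

```typescript
// Hypothetical defensive sketch: load a third-party analytics script over HTTPS
// and pin its expected contents with a Subresource Integrity (SRI) hash.
// If anything on the network path replaces the script, the hash check fails
// and the browser refuses to execute it.
// The domain and hash below are placeholders, not real values.
function loadAnalyticsSafely(): void {
  const script = document.createElement("script");
  script.src = "https://analytics.example.cn/tracker.js"; // placeholder third-party script
  script.integrity = "sha384-REPLACE_WITH_REAL_HASH";      // expected hash of tracker.js
  script.crossOrigin = "anonymous";                        // SRI on cross-origin scripts requires CORS
  script.async = true;
  document.head.appendChild(script);
}

loadAnalyticsSafely();
```

The simpler option, of course, is the one suggested above: drop the Chinese ad and analytics scripts altogether, so there is nothing for the Great Cannon to replace.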

The GitHub Attack, Part 1: Making International Cyber Law the Ugly Way

Posted in Cybersecurity and Cyberwar, International, Privacy Regulation, Security Programs & Policies

Over the past few years, the US government has invested heavily in trying to create international norms for cyberspace. We’ve endlessly cajoled other nations to agree on broad principles about internet freedom and how the law of war applies to cyberconflicts. Progress has been slow, especially with countries that might actually face us in a cyberwar. But the bigger problem with the US effort is simple: Real international law is not made by talking. It’s made by doing.

“If you want to know the law … you must look at it as a bad man,” Oliver Wendell Holmes Jr. once observed.  A bad man only cares whether he’ll be punished or not. If you tell him that an act is immoral but won’t be punished, Holmes argued, you’re telling him that it’s lawful.

When it comes to international law, Holmes nailed it. In dealings between nations, norms are established by what governments do.  If countries punish a novel attack effectively, that builds an international norm against the attack. And if they tolerate the attack without retaliating, they are creating an international norm that permits it.

By that measure, the United States has been establishing plenty of norms lately. After accusing North Korea of seeking to censor Sony with a cyberattack, the US announced meaningless sanctions; there’s no sign that the US has found, let alone frozen, any assets of North Korea’s secretive intelligence agency. Similarly, even though the US director of national intelligence long ago attributed the OPM hack to China, the National Security Council continues to dither about whether and how to retaliate.

When it comes to setting new norms through inaction, though, the most troubling incident is China’s denial of service attack on GitHub. Like lots of US tech successes, GitHub didn’t exist ten years ago, but it is now valued at more than $2 billion. Its value comes from creating a collaborative environment where software can be edited by dozens or hundreds of people around the world. Making information freely available is the core of its business. So when the Chinese government decided to block access to the New York Times, the paper provided access to Chinese readers via GitHub. China then tried to block GitHub, as it had the Times. But if Chinese programmers can’t access GitHub, they can’t do their jobs. The outcry from Chinese tech companies forced the Chinese government to drop its block within days.

It was a victory for free speech. Or so you’d think. But the Chinese didn’t give up that easily. They went looking for another way to punish GitHub. And found it. Earlier this year, GitHub was hit with a massive distributed denial of service attack. Computers in the US, Taiwan, and Hong Kong sent waves of meaningless requests to GitHub, swamping its servers and causing intermittent outages for days. The company’s IT costs skyrocketed. A similar attack was launched against Greatfire.org, a technically sophisticated anticensorship site.

A Citizen Lab report shows that this denial of service attack was actually a pathbreaking new use of China’s censorship infrastructure. Over the years, China has built a “Great Firewall” that intercepts every single internet communication between China and the rest of the world. Up to now, China has used that infrastructure to inspect Chinese users’ requests for content from abroad. Uncontroversial requests are allowed to proceed after inspection. But most requests for censored information trigger a reset signal that cuts the connection. The same infrastructure could be used to inspect foreign requests for data from Chinese sites, but there’s no obvious need to do so because the Chinese sites are already under the government’s thumb.

But the GitHub attack shows an imaginative repurposing of the censorship machinery. Instead of subtracting packets from the foreign data requests, China decided to add a few packets — of malware. Whenever foreigners — whether from the US, Taiwan, or Hong Kong — visited a site inside the Great Firewall, they were already downloading buckets of code to run on their machines. Called JavaScript, this code is now a standard part of almost all internet browsing. It’s JavaScript that makes your computer play those moving, talking ads you love so much, and its importance to advertisers means that it isn’t likely to fade away any time soon. That’s too bad, because JavaScript is code that actually runs on your machine, so it’s not just an annoyance, it’s a serious security risk.
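For readers who want to see what that looks like in practice, here is a minimal, hypothetical sketch, in TypeScript with an invented domain, of the kind of third-party script loading that ad-supported pages perform routinely. The point is simply that visiting the page means fetching someone else’s code and running it in your browser.

```typescript
// Hypothetical page code: pull in a third-party analytics script at runtime.
// "analytics.example.cn" is a placeholder, not a real provider.
function loadAnalytics(): void {
  const script = document.createElement("script");
  script.src = "http://analytics.example.cn/tracker.js"; // plain HTTP, so it can be altered in transit
  script.async = true;
  document.head.appendChild(script);
  // Whatever tracker.js turns out to contain now runs with the page's
  // privileges in the visitor's browser.
}

loadAnalytics();
```

Because the script travels over plain HTTP, anything sitting on the network path between the visitor and the server can rewrite it before it ever executes, which is exactly the opening described next.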

A risk China managed to exploit. How? Well, since China’s censorship infrastructure was already intercepting all the packets running between China and the outside world, it was easy enough for China to drop a little additional JavaScript into the stream of legitimate advertisers’ code that foreign users were already downloading. Once on the user’s machine, though, instead of stealing credit card information the way most JavaScript malware does, the Chinese government’s code started sending packets to GitHub. Soon, millions of infected machines were doing the same, and GitHub’s servers couldn’t keep up. The attack brought GitHub to its knees.

For several technical reasons, it’s also plain that the Chinese government could not have expected to keep its hand hidden. Indeed, the Citizen Lab report makes clear that no one other than the Chinese government could have used this technique or this infrastructure.

Think about that for a minute. This was an attack that was carried out largely on American soil, first by infecting hundreds of thousands of American computers and then by launching them at a US company, all with the goal of punishing Americans for hosting the content of a preeminent US newspaper. And China didn’t even bother to hide its actions from the US government.

As it turns out, the Chinese had taken our measure pretty well. Not until May, weeks after the attacks, did the State Department respond. And then it simply announced that it “has asked Chinese authorities to investigate” the attack. Really? What’s to investigate? Given the evidence of Chinese complicity, the request seems pointless. And now, months later, it appears that the Chinese have not deigned to respond.

The message is clear. The administration has decided to tolerate this kind of attack. As Justice Holmes reminds us, for bad men all that matters are the consequences of their acts. By imposing no consequences on the GitHub attack, the United States has done its bit to make such attacks lawful.

That’s a foolish choice, and one that needs to be reversed. We shouldn’t tolerate such contempt for both our values and our borders. Even if the US government won’t take action, Americans can still take action that will deter such attacks in the future.

I’ll talk about that in my next post.

Steptoe Cyberlaw Podcast – Atlantic Council Panel

Posted in Cybersecurity and Cyberwar, Data Breach, International, Privacy Regulation, Security Programs & Policies

Bonus Episode 78:  Dmitri Alperovitch, Harvey Rishikof, Stewart Baker, and Melanie Teplinsky debate whether the United States should start doing commercial espionage

I know, I know, we promised that the Cyberlaw Podcast would go on hiatus for the month of August.  But we also hinted that there might be a bonus episode.  And here it is, a stimulating panel discussion sponsored by the Atlantic Council and moderated by Melanie Teplinsky.  The topic is whether the United States should abandon its longstanding policy of refusing to steal the commercial secrets of foreigners to help American companies compete.  The discussion is lively, with plenty of disagreements and an audience vote at the start and finish of the discussion to gauge how persuasive we were.  Enjoy!

The Cyberlaw Podcast is now open to feedback.  Send your questions, suggestions for interview candidates, or topics to CyberlawPodcast@steptoe.com.  If you’d like to leave a message by phone, contact us at +1 202 862 5785.

Download the seventy-eighth episode (mp3).

Subscribe to the Cyberlaw Podcast here. We are also now on iTunes and Pocket Casts!

The views expressed in this podcast are those of the speakers and do not reflect the opinions of the firm.

FinTech Bits: What Does Donald Trump Think About Bitcoin?

Posted in Blockchain, Virtual Currency

This week featured interesting remarks from two of the most influential thought leaders in Bitcoin and the blockchain – Blythe Masters and Brian Forde.

During SourceMedia’s Convene conference, Masters, the CEO of Digital Asset Holdings, observed that while we are in the early days of development for Bitcoin and the blockchain, similar to where we were with the Internet in the early 1990s, “[t]he potential addressable markets for these types of technologies are gigantic.” For instance, Masters noted that blockchain technology could transform the way we trade and settle transactions for stocks, bonds, and derivatives.

Meanwhile, Brian Forde from MIT’s Digital Currency Initiative, with whom I was privileged to spend time at the Blockchain Summit, spoke at the Atlantic Aspen Ideas Festival about how digital currency and blockchain technology could improve public welfare. Forde observed that these technological innovations could improve the efficiency and security of government services. He noted that the technology also could benefit underserved populations by, among other things, increasing financial inclusion for the unbanked, helping secure property rights, and protecting identity.

The week also included thoughtful remarks about Bitcoin from a more unlikely source – former Texas governor and current presidential candidate Rick Perry. In a speech to the Committee to Unleash Economic Prosperity, Perry offered his take on the causes of the 2008 economic crisis, predicted that another economic crash is on the horizon, and challenged Donald Trump to a pull-up contest. But in the same speech, Perry called for “regulatory breathing room for banking with digital currencies, like Bitcoin.” Perry added that “[d]igital currencies harbor the possibility of reducing the cost and improving the quality of financial transactions in much the same way that the conventional Internet has done for consumer goods and services.” Regardless of what one thinks of Perry’s politics, it is a milestone of sorts that any presidential candidate was discussing Bitcoin in the course of a campaign event, and perhaps even more significant that the candidate was encouraging a regulatory approach that doesn’t stifle the growth of the technology.

No word yet on when, or if, Donald Trump will offer his position on Bitcoin and the blockchain. Or whether he’ll accept the challenge of a pull-up contest. Time will tell, as the campaign, like the technology, is still young.

On the Intelligence Authorization Bill

Posted in Security Programs & Policies

On July 28, Senator Ron Wyden objected to the Senate’s passage of the Intelligence Authorization Bill for Fiscal Year 2016. He objected not because he opposes the funding decisions included in the legislation but rather because of just 29 lines of text among the 41 pages of proposed legislation that have nothing to do with intelligence spending. Those 29 lines, found in Section 603 of S. 1705, would require Internet companies to report to the Attorney General (or her designee) “terrorist activity” on their platforms. In support of this idea, proponents have raised concerns about use of the Internet by terrorist organizations such as ISIS to promote terrorism and recruit new members. Of course such concerns are appropriate, but the proposed legislation creates too much collateral damage. Our client, the Internet Association, has raised concerns with Section 603. The views here, however, are my own.

The Supreme Court, among others, has noted, “[C]ontent on the Internet is as diverse as human thought.”  This means that, along with supercharged innovation, economic development, and democratic discourse, the Internet also amplifies the views of the intolerant, the hateful, and, yes, even criminal elements around the globe.

In the US, the First Amendment protects the rights of individuals to express intolerant and hateful ideas. We are often criticized for this, to which we respond that the best means of combating such speech is to ensure the ability of others to respond. In this dynamic, we believe that the marketplace of ideas is the best referee. Certainly, we can agree it is a better referee than a bureaucrat in a government agency deciding what should be censored. Put another way, the dangers of government-controlled speech far outweigh concerns over the promotion of speech we find objectionable.

Yet the First Amendment does not protect organizations from laws prohibiting them from conspiring to commit violent acts or raising money to fund criminal activities. Nor does it protect an individual’s advocacy that is directed to inciting imminent lawless action and is likely to produce such action.

When use of the Internet crosses the line from protected speech to criminal activity, law enforcement can and should intervene. In such cases, Internet companies can and do cooperate with lawful requests to assist efforts to investigate and prosecute criminal behavior.

A key problem with Section 603, however, is that the trigger for the reporting mandate is based on the vague and undefined term “terrorist activity.” This term is not a term of art in the US criminal code and arguably goes well beyond criminal activity to speech that is protected under the First Amendment.

Proponents of the provision compare the reporting obligation to the existing reporting obligation for child pornography images in 18 U.S.C. §2258A. That law requires intermediaries that obtain actual knowledge of any facts and circumstances from which there is an apparent violation of federal child exploitation crimes involving child pornography to file a report with the National Center for Missing and Exploited Children (NCMEC).

The NCMEC reporting obligations, however, relate to images that are per se unlawful and are never protected speech under the US Constitution. A government mandate that an Internet company report facts and circumstances connected to the vague and overbroad term “terrorist activity” certainly would result in overbroad reporting to the government of speech that is protected under the First Amendment.

More troubling, if adopted, the provision would serve as a global template for other countries to impose reporting requirements for activities those jurisdictions deem unlawful. This would be particularly problematic with countries that regulate speech, including political speech, and with authoritarian regimes that would demand that Internet companies police their citizens’ activities.

Section 603 also creates a practical compliance problem. Because no one knows the definition of “terrorist activity,” how does one counsel a client to establish a compliance protocol under the proposal?

Any company would face the risk that, if it did not report “terrorist activity,” it could be held liable after a subsequent event resulting in loss of life, limb, or property.  The likely response is a compliance protocol designed to over-report anything that could conceivably be considered “terrorist activity.”  Given the massive scale of content shared and created on the Internet daily, over-reporting would flood the government with items unlikely to be of material concern to public safety and would create a “needle in the haystack” problem for law enforcement.  This serves no one’s purposes and adds privacy concerns to the First Amendment concerns noted above.

The liability risk also creates a perverse incentive for a company to avoid obtaining knowledge of any activity that would trigger the reporting requirement—the exact opposite of what the proponents of the legislation want.  Yet designing such an avoidance protocol is nearly impossible.  If even one low-level employee received an over-the-transom email about a “terrorist activity,” knowledge of the activity could be imputed to the entire company – exacerbating the potential liability faced by an Internet company.

Section 603 has other problems. The scope of the kind of Internet platforms that would be covered by the proposal is enormous. The reporting mandate applies to an “electronic communication service” (ECS) and a “remote computing service” (RCS). An ECS is arguably any service that provides a person with the ability to communicate with others electronically. The definition of “remote computing service” is “the provision to the public of computer storage or processing services by means of an electronic communications system.” These terms create a huge universe of entities subject to the mandate, including but certainly not limited to social media companies, search engines, Internet service providers, blogs, community bulletin boards, universities, advocacy organizations, and religious institutions.

Further, the proposal would not limit the reporting requirement to publicly viewable sites. It would require a cloud storage provider to police a third party’s internal, stored communications to avoid the potential liability under the provision.

For all of the reasons above, Senator Wyden was right to object to the reporting mandate.

And the Senate Select Committee is right to raise concerns with the use of the Internet by terrorist organizations. Confronting such use, however, must not be done at the expense of the First Amendment and by requiring Internet companies to police and report on their users’ activities.