David Kris, Paul Rosenzweig, and I dive deep into the big tech issue of the COVID-19 contagion: whether (but mostly how) to use mobile phone location services to fight the virus. We cover the Israeli approach, as well as a host of solutions adopted in Singapore, Taiwan, South Korea, and elsewhere. I’m a big fan of Singapore, which produced in a week an app that Nick Weaver thought would take a year.

In our interview, evelyn douek, currently at the Berkman Klein Center and an SJD candidate at Harvard, takes us deep into content moderation. Displaying a talent for complexifying an issue we all want to simplify, she explains why we can’t live with social platform censorship and why we can’t live without it. She walks us through the growth of content moderation, from spam, through child porn, and on to terrorism and “coordinated inauthentic behavior” – the identification of which, evelyn assures me, does not require an existentialist dance instructor. Instead, it’s the latest and least easily defined category of speech to be suppressed by Big Tech. It’s a mare’s nest, but I, for one, intend to aggravate our new Tech Overlords for as long as possible.

That’s the question I debate with David Kris and Nick Weaver as we explore the ways in which governments are using location data to fight the spread of COVID-19. Phone location data is being used to enforce quarantines and to track contacts with infected people. It’s useful for both, but Nick thinks the second application may not really be ready for a year – too late for this outbreak.

Our interview subject is Jason Healey, who has a long history with Cyber Command and a deep recent oeuvre of academic commentary on cyber conflict. Jay explains Cyber Command’s doctrine of “persistent engagement” and “defending forward” in words that I finally understand. It makes sense in terms of Cyber Command’s aspirations as well as the limitations it labored under in the Obama Administration, but I end up wondering whether it’s going to be different from “deterrence through having the best offense.” Nothing wrong with that, in my view – as long as you have the best offense by a long shot, something that is by no means proven.

If your podcast feed has suddenly become a steady diet of more or less the same COVID-19 stories, here’s a chance to listen to cyber experts talk about what they know about – cyberlaw. Our interview is with Elsa Kania, adjunct senior fellow at the Center for a New American Security and one of the most prolific students of China, technology, and national security. We talk about the relative strengths and weaknesses of the artificial intelligence ecosystems in the two countries.

Our interview in this episode is with Glenn Gerstell, freed at last from some of the constraints that come with government service. We cover the Snowden leaks, how private and public legal work differs (hint: it’s the turf battles), Cyber Command, Russian election interference, reauthorization of FISA, and the daunting challenges the US (and its Intelligence Community) will face as China’s economy begins to reinforce its global security ambitions.

This episode features a lively (and – fair warning – long) interview with Daphne Keller, Director of the Program on Platform Regulation at Stanford University’s Cyber Policy Center. We explore themes from her recent paper on regulation of online speech. It turns out that more or less everyone has the ability to restrict users’ speech online, and pretty much no one has both the authority and an interest in fostering free-speech values. The ironies abound: Conservatives may be discriminated against, but so are Black Lives Matter activists. In fact, it looks to me as though any group that doesn’t think it’s the victim of biased content moderation would be well advised to scream about censorship as loudly as the others for fear of losing the victimization sweepstakes. Feeling a little like a carny at the sideshow, I serve up one solution for biased moderation after another, and Daphne methodically shoots them down. Transparency? None of the companies is willing, and the government may have a constitutional problem forcing them to disclose how they make their moderation decisions. Competition law? A long haul, and besides, most users like a moderated Internet experience. Regulation? Only if we take the First Amendment back to the heyday of broadcast regulation. As a particularly egregious example of foreign governments and platforms ganging up to censor Americans, we touch on the CJEU’s insufferable decision encouraging the export of European defamation law to the US – with an extra margin of censorship to keep the platform from any risk of liability. I offer to risk my Facebook account to see if that’s already happening.

This episode features an interview on the Bezos phone flap with David Kaye and Alex Stamos. David is a UN Special Rapporteur and clinical professor of law at UC Irvine who first drew attention to an FTI Consulting report concluding that the Saudis did hack Bezos’ phone. Alex is director of the Stanford Internet Observatory and was the CSO at Facebook; he thinks the technical case against the Saudis needs work, and he calls for a supplemental forensic review of the phone.

This week’s episode includes an interview with Bruce Schneier about his recent op-ed on privacy. Bruce and I are both dubious about the current media trope that facial recognition technology was spawned by the Antichrist. He notes that what we are really worried about is a lot bigger than facial recognition and offers ways in which the law could address our deeper worry. I’m less optimistic about our ability to write or enforce laws designed to restrict use of information that gets cheaper to collect, to correlate, and to store every year. It’s a good, civilized exchange.

There’s a fine line between legislation addressing deepfakes and legislation that is itself a deepfake. Nate Jones reports on the only federal legislation addressing the problem so far. I claim that it falls well short of a serious regulatory effort – and comes pretty close to a fake law.

In contrast, India seems serious about imposing liability on companies whose unbreakable end-to-end crypto causes harm, at least to judge from the howls of the usual defenders of such crypto. David Kris explains how the law will work. I ask why Silicon Valley gets to impose the externalities of encryption-facilitated crime on society without consequence when we’d never allow tech companies to say that society should pick up the tab for their pollution because their products are so cool. In related news, the FBI may be turning the Pensacola military terrorism attack into a slow-motion replay of the San Bernardino fight with Apple, this time with more top cover.

For this special edition of the Cyberlaw Podcast, we’ve convened a panel of experts on intelligence and surveillance legal matters. We take a look at the Department of Justice Inspector General’s report on the FBI’s use of FISA applications – and the many errors in those applications. We also touch on FBI Director Wray’s response, as well as a public order issued by the Foreign Intelligence Surveillance Court. We wrap up with thoughts on how to resolve some of the issues identified by the IG’s report and suggestions for improving the FISA process.