The apparent terror attack at Naval Air Station Pensacola spurs a debate among our panelists about whether the FISA Section 215 metadata program deserves to be killed, as Congress has increasingly signaled it intends to do. If the Pensacola attack involved multiple parties acting across US borders, still a live possibility as we talked, then it would be just about the first such attack since 9/11 – and exactly the kind of attack the metadata program was designed to identify in advance.
Algorithms are at the heart of the Big Data/machine learning/AI changes that are propelling computerized decision-making. In their book, The Ethical Algorithm, Michael Kearns and Aaron Roth, two computer science professors at Penn, flag some of the social and ethical choices these changes are forcing upon us. My interview with them touches on many of the hot-button issues surrounding algorithmic decision-making. I disclose my views early: I suspect that much of the fuss over bias in machine learning is a way of smuggling racial and gender quotas and other academic social values into the algorithmic outputs. Michael and Aaron may not agree with that formulation, but the conversation provides a framework for testing it – and leaves me more skeptical about “bias hacking” of algorithmic outputs.
This Week in the Great Decoupling: The Commerce Department has rolled out proposed telecom and supply chain security rules that never once mention China. More accurately, the Department has rolled out a sketch of its preliminary thinking about proposed rules. Brian Egan and I tackle the substance and history of the proposal and conclude that the government is still fighting about the content of a policy it’s already announced. And to show that decoupling can go both ways, a US-based chip-tech group is moving to Switzerland to reassure its Chinese participants. Nick Weaver and I conclude that there’s a little less here than Reuters seems to think.
Brad Smith is President of Microsoft and author (with Carol Ann Browne) of Tools and Weapons: The Promise and Peril of the Digital Age. The book is a collection of vignettes of the tech policy battles of the last decade or so. Smith had a ringside seat for most of them, and he recounts what he learned in a compelling and good-natured way in the book – and in this episode’s interview. Starting with the Snowden disclosures and the emotional reaction of Silicon Valley, through the CLOUD Act, Brad Smith and Microsoft displayed a relatively even keel while trying to reflect the interests of their many stakeholders. In that effort, Smith makes the case for more international cooperation in regulating digital technology. Along the way, he discloses how the Cyberlaw Podcast’s own Nate Jones and Amy Hogan-Burney became “Namy,” achieving a fame and moniker inside Microsoft that only Brangelina has achieved in the wider world. Finally, he sums up Microsoft’s own journey in the last quarter century as a recognition that humility is a better long-term strategy than hubris.
This Week in Mistrusting Google: Klon Kitchen points to a Wall Street Journal story about all the ways Google tweaks its search engine to yield results that look machine-made but aren’t. He and I agree that most of these tweaks have understandable justifications – but you have to trust Google not to misuse them. And increasingly no one does. The same goes for Google’s foray into amassing and organizing health data on millions of Americans. It’s a nothing-burger with mayo, unless you mistrust Google. Since mistrusting Google is a growth industry, it’s getting a lot of attention, including from HHS investigators. Matthew Heiman explains, and when he’s done, my money is on Google surviving that investigation comfortably. The capital of mistrusting Google is Brussels, and not surprisingly, Maury Shenk tells us that the EU has forced Google to modify its advertising protocols to exclude data on health-related sites visited by its customers.
The Foreign Agents Registration Act is having a moment – in fact, its best year since 1939 – as the Justice Department charges three people with spying on Twitter users for Saudi Arabia. Since they were clearly acting like spies but not stealing government secrets or company intellectual property, FARA seems to be the only law they could be charged with violating. Nate Jones and I debate whether the Justice Department can make the charges stick.
This episode is a wide-ranging interview with Andy Greenberg, author of Sandworm: A New Era of Cyberwar and the Hunt for the Kremlin’s Most Dangerous Hackers. The book contains plenty of original reporting, served up with journalistic flair. It digs deep into some of the most startling and destructive cyberattacks of recent years, from two dangerous attacks on Ukraine’s power grid, to the multibillion-dollar NotPetya worm, to a sophisticated but largely failed effort to bring down the Seoul Olympics and pin the blame on North Korea. Apart from sophisticated coding and irresponsibly indiscriminate targeting, all these episodes have one thing in common: They are all the work of Russia’s GRU.
Andy persuasively sets out the attribution and then asks what kind of corporate culture supports such adventurism – and whether there is a strategic vision behind the GRU’s attacks. The interview convinced me, at least, that the GRU is pursuing a strategy of muscular nihilism – “our system doesn’t work, but yours, too, is based on fragile illusions.” It’s a kind of global cyber intifada, with all the dangers and all the self-defeating tactics of the original intifadas. Don’t disagree until you’ve listened!
We open the episode with David Kris’s thoughts on the two-years-late CFIUS investigation of TikTok, its Chinese owner, ByteDance, and ByteDance’s US acquisition of the lip-syncing company Musical.ly. Our best guess is that this unprecedented reach-back investigation will end in a more or less precedented mitigation agreement.
You knew we’d go there. I talk about Congresswoman Katie Hill’s “throuple” pics and whether the rush to portray her as a victim of revenge porn raises questions about revenge porn laws themselves. Paul Rosenzweig, emboldened by twin tweets – from President Trump calling Never-Trumpers like him “human scum” and from Mark Hamill welcoming him to the Rebel Scum Alliance – takes issue with me.
Our interview is with Alex Joel, former Chief of the Office of Civil Liberties, Privacy, and Transparency at the Office of the Director of National Intelligence. Alex is now at the American University law school’s Tech, Law, and Security Program. We share stories about the difficulties of government startups and how the ODNI carved out a role for itself in the Intelligence Community (hint: It involved good lawyering). We dive pretty deep on recent FISA court opinions and the changes they forced in FBI procedures. In the course of that discussion, I realize that every “reform” of intelligence dreamed up by Congress in the last decade has turned out to be a self-licking compliance trap, and I take back some of my praise for the DNI’s lawyering.