Our interview with Ben Buchanan begins with his report on how artificial intelligence may influence national security and cybersecurity. Ben’s quick takes: better for defense than offense, and probably even better for propaganda. The best part, in my view, is Ben’s explanation of how to poison the AI that’s trying to hack you.
Our interview is with Mara Hvistendahl, investigative journalist at The Intercept and author of a new book, The Scientist and the Spy: A True Story of China, the FBI, and Industrial Espionage, as well as a deep WIRED article on the least known Chinese AI champion, iFlytek. Mara’s book raises…
David Kris, Paul Rosenzweig, and I dive deep on the big tech issue of the COVID-19 contagion: Whether (but mostly how) to use mobile phone location services to fight the virus. We cover the Israeli approach, as well as a host of solutions adopted in Singapore, Taiwan, South Korea, and elsewhere. I’m a big fan of Singapore, which produced in a week an app that Nick Weaver thought would take a year.
In our interview, evelyn douek, currently at the Berkman Klein Center and an SJD candidate at Harvard, takes us deep into content moderation. Displaying a talent for complexifying an issue we all want to simplify, she explains why we can’t live with social platform censorship and why we can’t live without it. She walks us through the growth of content moderation, from spam, through child porn, and on to terrorism and “coordinated inauthentic behavior” – the identification of which, evelyn assures me, does not require an existentialist dance instructor. Instead, it’s the latest and least easily defined category of speech to be suppressed by Big Tech. It’s a mare’s nest, but I, for one, intend to aggravate our new Tech Overlords for as long as possible.
If your podcast feed has suddenly become a steady diet of more or less the same COVID-19 stories, here’s a chance to listen to cyber experts talk about what they know about – cyberlaw. Our interview is with Elsa Kania, adjunct senior fellow at the Center for a New American Security and one of the most prolific students of China, technology, and national security. We talk about the relative strengths and weaknesses of the artificial intelligence ecosystems in the United States and China.
This episode features a lively (and – fair warning – long) interview with Daphne Keller, Director of the Program on Platform Regulation at Stanford University’s Cyber Policy Center. We explore themes from her recent paper on regulation of online speech. It turns out that more or less everyone has an ability to restrict users’ speech online, and pretty much no one has both the authority and an interest in fostering free-speech values. The ironies abound: Conservatives may be discriminated against, but so are Black Lives Matter activists. In fact, it looks to me as though any group that doesn’t think it’s the victim of biased content moderation would be well advised to scream about censorship as loudly as the others for fear of losing the victimization sweepstakes. Feeling a little like a carny at the sideshow, I serve up one solution for biased moderation after another, and Daphne methodically shoots them down. Transparency? None of the companies is willing, and the government may have a constitutional problem forcing them to disclose how they make their moderation decisions. Competition law? A long haul, and besides, most users like a moderated Internet experience. Regulation? Only if we take the First Amendment back to the heyday of broadcast regulation. As a particularly egregious example of foreign governments and platforms ganging up to censor Americans, we touch on the CJEU’s insufferable decision encouraging the export of European defamation law to the US – with an extra margin of censorship to keep the platform from any risk of liability. I offer to risk my Facebook account to see if that’s already happening.
We interview Ben Buchanan about his new book, The Hacker and the State: Cyber Attacks and the New Normal of Geopolitics. This is Ben’s second book and second interview on the podcast about international conflict and cyber weapons. It’s safe to say that America’s strategic posture hasn’t improved. We face more adversaries with more tools and a considerably greater appetite for cyber adventurism. Ben recaps some of the stories that were undercovered in the US press when they occurred. The second large attack on Ukraine’s grid, for example, was little noticed during the US election of 2016, but it appears more ominous after a recent analysis of the tools used, and perhaps most importantly, those available to the GRU but not used. Meanwhile, the US is not making much progress in cyberspace on the basic requirement of a great power, which is making our enemies fear us.
There’s a fine line between legislation addressing deepfakes and legislation that is itself a deepfake. Nate Jones reports on the only federal legislation addressing the problem so far. I claim that it falls well short of a serious regulatory effort – and pretty close to a fake law.
In contrast, India seems serious about imposing liability on companies whose unbreakable end-to-end crypto causes harm, at least to judge from the howls of the usual defenders of such crypto. David Kris explains how the law will work. I ask why Silicon Valley gets to impose the externalities of encryption-facilitated crime on society without consequence when we’d never allow tech companies to say that society should pick up the tab for their pollution because their products are so cool. In related news, the FBI may be turning the terrorist attack at Naval Air Station Pensacola into a slow-motion replay of the San Bernardino fight with Apple, this time with more top cover.
Algorithms are at the heart of the Big Data/machine learning/AI changes that are propelling computerized decision-making. In their book, The Ethical Algorithm, Michael Kearns and Aaron Roth, two computer science professors at Penn, flag some of the social and ethical choices these changes are forcing upon us. My interview with them touches on many of the hot-button issues surrounding algorithmic decision-making. I disclose my views early: I suspect that much of the fuss over bias in machine learning is a way of smuggling racial and gender quotas and other academic social values into the algorithmic outputs. Michael and Aaron may not agree with that formulation, but the conversation provides a framework for testing it – and leaves me more skeptical about “bias hacking” of algorithmic outputs.
Brad Smith is President of Microsoft and author (with Carol Ann Browne) of Tools and Weapons: The Promise and Peril of the Digital Age. The book is a collection of vignettes of the tech policy battles of the last decade or so. Smith had a ringside seat for most of them, and he recounts what he learned in a compelling and good-natured way in the book – and in this episode’s interview. From the Snowden disclosures and the emotional reaction of Silicon Valley, through the CLOUD Act, Brad Smith and Microsoft displayed a relatively even keel while trying to reflect the interests of their many stakeholders. In that effort, Smith makes the case for more international cooperation in regulating digital technology. Along the way, he discloses how the Cyberlaw Podcast’s own Nate Jones and Amy Hogan-Burney became “Namy,” achieving a fame and moniker inside Microsoft that only Brangelina has achieved in the wider world. Finally, he sums up Microsoft’s own journey over the last quarter century as a recognition that humility is a better long-term strategy than hubris.
The Foreign Agent Registration Act is having a moment – in fact its best year since 1939, as the Justice Department charges three people with spying on Twitter users for Saudi Arabia. Since they were clearly acting like spies but not stealing government secrets or company intellectual property, FARA seems to be the only law that they could be charged with violating. Nate Jones and I debate whether the Justice Department can make the charges stick.