Artificial intelligence

Our interview is with Mara Hvistendahl, investigative journalist at The Intercept and author of a new book, The Scientist and the Spy: A True Story of China, the FBI, and Industrial Espionage, as well as a deep WIRED article on the least known Chinese AI champion, iFlytek. Mara’s book raises

David Kris, Paul Rosenzweig, and I dive deep on the big tech issue of the COVID-19 contagion: Whether (but mostly how) to use mobile phone location services to fight the virus. We cover the Israeli approach, as well as a host of solutions adopted in Singapore, Taiwan, South Korea, and elsewhere. I’m a big fan of Singapore, which produced in a week an app that Nick Weaver thought would take a year.

In our interview, evelyn douek, currently at the Berkman Klein Center and an SJD candidate at Harvard, takes us deep into content moderation. Displaying a talent for complexifying an issue we all want to simplify, she explains why we can’t live with social platform censorship and why we can’t live without it. She walks us through the growth of content moderation, from spam, through child porn, and on to terrorism and “coordinated inauthentic behavior” – the identification of which, evelyn assures me, does not require an existentialist dance instructor. Instead, it’s the latest and least easily defined category of speech to be suppressed by Big Tech. It’s a mare’s nest, but I, for one, intend to aggravate our new Tech Overlords for as long as possible.

Continue Reading Episode 308: Location, location, location. And the virus.

If your podcast feed has suddenly become a steady diet of more or less the same COVID-19 stories, here’s a chance to listen to cyber experts talk about what they actually know – cyberlaw. Our interview is with Elsa Kania, adjunct senior fellow at the Center for a New American Security and one of the most prolific students of China, technology, and national security. We talk about the relative strengths and weaknesses of the American and Chinese artificial intelligence ecosystems.

Continue Reading Episode 306: The (almost) COVID-19-free episode

This episode features a lively (and – fair warning – long) interview with Daphne Keller, Director of the Program on Platform Regulation at Stanford University’s Cyber Policy Center. We explore themes from her recent paper on regulation of online speech. It turns out that more or less everyone has the ability to restrict users’ speech online, and pretty much no one has both the authority and the interest to foster free-speech values. The ironies abound: Conservatives may be discriminated against, but so are Black Lives Matter activists. In fact, it looks to me as though any group that doesn’t think it’s the victim of biased content moderation would be well advised to scream about censorship as loudly as the others, for fear of losing the victimization sweepstakes. Feeling a little like a carny at the sideshow, I serve up one solution for biased moderation after another, and Daphne methodically shoots them down. Transparency? None of the companies is willing, and the government may have a constitutional problem forcing them to disclose how they make their moderation decisions. Competition law? A long haul, and besides, most users like a moderated Internet experience. Regulation? Only if we take the First Amendment back to the heyday of broadcast regulation. As a particularly egregious example of foreign governments and platforms ganging up to censor Americans, we touch on the CJEU’s insufferable decision encouraging the export of European defamation law to the US – with an extra margin of censorship to keep the platform from any risk of liability. I offer to risk my Facebook account to see if that’s already happening.

Continue Reading Episode 302: Will the First Amendment Kill Free Speech in America?

We interview Ben Buchanan about his new book, The Hacker and the State: Cyber Attacks and the New Normal of Geopolitics. This is Ben’s second book and second interview on the podcast about international conflict and cyber weapons. It’s safe to say that America’s strategic posture hasn’t improved. We face more adversaries with more tools and a considerably greater appetite for cyber adventurism. Ben recaps some of the stories that were under-covered in the US press when they occurred. The second large attack on Ukraine’s grid, for example, was little noticed in the aftermath of the 2016 US election, but it appears more ominous after a recent analysis of the tools that were used – and, perhaps most importantly, of the tools available to the GRU but not used. Meanwhile, the US is not making much progress in cyberspace on the basic requirement of a great power: making our enemies fear us.

Continue Reading Episode 301: Ratchet to Disaster

This week’s episode includes an interview with Bruce Schneier about his recent op-ed on privacy. Bruce and I are both dubious about the current media trope that facial recognition technology was spawned by the Antichrist. He notes that what we are really worried about is a lot bigger than facial recognition and offers ways in which the law could address our deeper worry. I’m less optimistic about our ability to write or enforce laws designed to restrict use of information that gets cheaper to collect, to correlate, and to store every year. It’s a good, civilized exchange.

Continue Reading Episode 296: Is CCPA short for “Law of Unintended Consequences”?

There’s a fine line between legislation addressing deepfakes and legislation that is itself a deep fake. Nate Jones reports on the only federal legislation addressing the problem so far. I claim that it is well short of a serious regulatory effort – and pretty close to a fake law.

In contrast, India seems serious about imposing liability on companies whose unbreakable end-to-end crypto causes harm, at least to judge from the howls of the usual defenders of such crypto. David Kris explains how the law will work. I ask why Silicon Valley gets to impose the externalities of encryption-facilitated crime on society without consequence when we’d never allow tech companies to say that society should pick up the tab for their pollution because their products are so cool. In related news, the FBI may be turning the Pensacola military terrorism attack into a slow-motion replay of the San Bernardino fight with Apple, this time with more top cover.

Continue Reading Episode 295: The line between deepfake legislation and deeply fake legislation

Algorithms are at the heart of the Big Data/machine learning/AI changes that are propelling computerized decision-making. In their book, The Ethical Algorithm, Michael Kearns and Aaron Roth, two computer science professors at Penn, flag some of the social and ethical choices these changes are forcing upon us. My interview with them touches on many of the hot-button issues surrounding algorithmic decision-making. I disclose my views early: I suspect that much of the fuss over bias in machine learning is a way of smuggling racial and gender quotas and other academic social values into the algorithmic outputs. Michael and Aaron may not agree with that formulation, but the conversation provides a framework for testing it – and leaves me more skeptical about “bias hacking” of algorithmic outputs.

Continue Reading Episode 291: Ethical Algorithms with Michael Kearns and Aaron Roth

The Foreign Agents Registration Act is having a moment – in fact, its best year since 1939 – as the Justice Department charges three people with spying on Twitter users for Saudi Arabia. Since they were clearly acting like spies but not stealing government secrets or company intellectual property, FARA seems to be the only law that they could be charged with violating. Nate Jones and I debate whether the Justice Department can make the charges stick.

Continue Reading Episode 287: Plumbing the depths of artificial stupidity

Our guests this week are Paul Scharre from the Center for a New American Security and Greg Allen from the Defense Department’s newly formed Joint Artificial Intelligence Center. Paul and Greg have a lot to say about AI policy, especially with an eye toward national security and strategic competition. Greg sheds some light on DOD’s activity, and Paul helps us understand how the military and policymakers are grappling with this emerging technology. But at the end of the day, I want to know: Are we at risk of losing the AI race with China? Paul and Greg tell me that not all hope is lost – and explain how we can retain technological leadership.

Continue Reading Episode 274: Will Silicon Valley have to choose between end-to-end crypto and shutting down speech it hates?