Is the European Union (EU) about to rescue the FBI from Going Dark? Jamil Jaffer and Nate Jones tell us that a new directive aimed at preventing child sex abuse might just do the trick, a position backed by people who’ve been fighting the bureau on encryption for years.

The Biden administration is prepping to impose some of the toughest sanctions ever on Chinese camera maker Hikvision, Jordan Schneider reports. No one is defending Hikvision’s role in China’s Uyghur policy, but I’m skeptical that we should spend all that ammo on a company that probably isn’t the greatest national security threat we face. Jamil is more comfortable with the measure, and Jordan reminds me that China’s economy is shaky enough that it may not pick a fight to save Hikvision. Speaking of which, Jordan schools me on the likelihood that Xi Jinping’s hold on power will be loosened by the plight of Chinese tech platforms, harsh pandemic lockdowns, or the grim lesson provided by Putin’s ability to move without check from tactical error to strategic blunder and on to historic disaster.

Speaking of products that pose more serious national security concerns than Hikvision, Nate and I try to figure out why the effort to get Kaspersky software out of U.S. infrastructure is still stalled. I think the Commerce Department should take the fall.

In a triumph of common sense and science, the wave of dumb laws attacking face recognition may be receding as lawmakers finally notice what’s been obvious for five years: The claim that face recognition is “racist” is false. Virginia, fresh off GOP electoral gains, has revamped its law on face recognition so it more or less makes sense. In related news, I puzzle over why Clearview AI accepted a settlement of the ACLU’s lawsuit under Illinois’s biometric law.

Nate and I debate how much authority Cyber Command should have to launch actions and intrude on third country machines without going through the interagency process. A Biden White House review of that question seems to have split the difference between the Trump and Obama administrations.

Quelle surprise! Jamil concludes that the EU’s regulation of cybersecurity is an overambitious and questionable expansion of the U.S. approach. He’s more comfortable with the Defense Department’s effort to keep small businesses who take its money from decamping to China once they start to succeed. Jordan and I fear that the cure may be worse than the disease.

I get to say I told you so about the unpersuasive and cursory opinion by United States Judge Robert Pitman, striking down Texas’ social media law. The Fifth Circuit has overturned his injunction, so the bill will take effect, at least for a while. In my view some of the provisions are constitutional and others are a stretch; Judge Pitman’s refusal to do a serious severability analysis means that all of them will get a try-out over the next few weeks.

Jamil and I debate geofenced search warrants and the reasons why companies like Google, Microsoft, and Yahoo want them restricted.

In quick hits,


Download the 407th Episode (mp3).

You can subscribe to The Cyberlaw Podcast using iTunes, Google Play, Spotify, Pocket Casts, or our RSS feed. As always, The Cyberlaw Podcast is open to feedback. Be sure to engage with @stewartbaker on Twitter. Send your questions, comments, and suggestions for topics or interviewees to CyberlawPodcast@steptoe.com. Remember: If your suggested guest appears on the show, we will send you a highly coveted Cyberlaw Podcast mug!

The views expressed in this podcast are those of the speakers and do not reflect the opinions of their institutions, clients, friends, families, or pets.

Nick Weaver kicks off a wide-ranging episode by celebrating Treasury’s imposition of sanctions on a cryptocurrency mixer for facilitating the laundering of stolen cryptocurrency. David Kris calls on Justice to step up its game in the face of this competition, while Nick urges Treasury to next sanction Tornado Cash — and explains why this would incentivize better behavior more generally. Scott Shapiro weighs in to describe North Carolina’s effort to prohibit government entities from paying ransoms to ransomware gangs; he doubts it will work.

David and Scott also further our malware education by summarizing two chilling reports about successful long-term intrusion campaigns – one courtesy of Chinese state hackers and the other likely launched by Russian government agents. I can’t help wondering whether the Russian agencies haven’t prioritized cool hacks over effective ones – to Russia’s cost in the fight with Ukraine.

Nick provides a tutorial on why quantum cryptanalysis is worrying the Biden Administration and what it thinks we ought to do about it. I express some cynicism over how good U.S. physicists have gotten at selling expensive dreams to the government – and considerable relief that Chinese physicists are apparently at least as good at extracting funding from their government.

Here’s a story mainstream media is already burying because it doesn’t fit the “AI bias” narrative. It turns out that in a study by the Department of Homeland Security, most errors (75%) were introduced at the photo capture stage, not by the matching algorithms. What’s more, the bias we keep hearing about has disappeared for the best products. Error rates were reported for the best products by gender and skin color. Errors for women, for light-skinned subjects, and for dark-skinned subjects were all as low as it’s possible to be – zero. For men, the error rate was 0.8%. These tests were of authentication/identification face recognition, which is easier to do than 1:n “searches” for matching faces, but the results mean that it’s not unreasonable to expect the whole bias issue to disappear as soon as the public wises up to the ideologically driven journalism now on offer.

Nick and I spar over location data sales by software providers. I pour cold water on the notion that evil prosecutors will track women to abortion clinics in other states using their location data. Nick takes the affirmative on that topic, and we put some money on the outcome, though it may take five years for one of us to collect.

Scott unpacks the flap over the Department of Homeland Security’s (DHS) Disinformation Governance Board, headed by Cyberlaw Podcast alumna Nina Jankowicz, who revealed on TikTok that I should have asked her to sing the interview. Scott and I agree that DHS is retreating quickly from the board’s name and mission as negative reviews for the name, the leader, and the mission keep piling up.

This Week in Schadenfreude is covered by Nick, pointing out the irony of the Spanish prime minister’s phone being targeted with Pegasus spyware not long after the Spanish government was widely blamed for using Pegasus against Catalan separatists.

In quick hits,


Download the 406th Episode (mp3).


Retraction: An earlier episode of the Cyberlaw Podcast may have left the impression that I think Google hates mothers. I regret the error. It appears that, in reality, Google only hates Republican mothers who are running for office. But to all appearances, Google really, really hates them. A remarkable and apparently damning study disclosed that during the most recent federal election campaign, Google’s Gmail sent roughly two-thirds of GOP campaign emails to users’ spam folders while downgrading less than ten percent of the Dems’ messages. Jane Bambauer lays out the details, which refute most of the excuses Google might offer for the discriminatory treatment. Notably, neither Outlook nor Yahoo! mail showed a similar pattern. Tatyana thinks we should blame Google’s algorithm, not its personnel, but we’re all eager to hear Google’s explanation, whether it’s offered in the press, to the Federal Election Commission (FEC), in court, or in front of Congressional investigators after the next election.

Jordan Schneider helps us revisit China’s cyber policies after a long hiatus. Things have NOT gotten better for the Chinese government, Jordan reports. Stringent lockdowns in Shanghai are tanking the economy and producing a surprising amount of online dissent, but with Hong Kong’s death toll in mind, letting omicron spread unchecked is a scary prospect, especially for a leader who has staked his reputation on dealing with the virus better than the rest of the world. The result is hesitation over what had been a strong techlash regulatory campaign.

Tatyana Bolton pulls us back to the Russian-Ukrainian war. She notes that Russia is not used to being hacked at anything like the current scale, even if most of the online attacks are pinpricks. She also notes Microsoft’s report on Russia’s extensive use of cyberattacks in Ukraine. All that said, cyber operations remain a minor factor in the war.

Michael Ellis and I dig into the ODNI’s intelligence transparency report, which inspired several different takes over the weekend. The biggest story was that the FBI had conducted “up to” 4 million searches for U.S. person data in the pool of data collected under section 702 of the Foreign Intelligence Surveillance Act (FISA). Sharing a brief kumbaya moment with Sen. Ron Wyden, Michael finds the number “alarming or meaningless,” probably the latter. Meanwhile, FISA Classic wiretaps dropped again in the face of the coronavirus. And the FBI conducted four searches without going to the FISA court when it should have, probably by mistake.

We can’t stay away from the pileup that is Elon Musk’s Twitter bid. Jordan offers views on how much leverage China will have over Twitter by virtue of Tesla’s dependence on the Chinese market. Tatyana and I debate whether Musk should have criticized Twitter’s content moderators for their call on the Biden laptop story. Jane Bambauer questions whether Musk will do half the things that he seems to be hinting at.

I agree, if only because European law will force Twitter to treat European sensibilities as the arbiter of what can be said in the public square. Jane outlines recent developments showing, in my view, that Europe isn’t exactly running low on crazy. A new court decision opens the door to what amounts to class actions to enforce European privacy law without regard for the jurisdictional limits that have made life easier for big U.S. companies. I predict that such lawsuits will also mean trouble for big Chinese platforms.

And that’s not half of it. Europe’s Digital Services Act, now nearly locked down, is the mother lode of crazy. Jane spells out a few of the wilder provisions – only some of which have made it into legal commentary.

Orin Kerr, the normally restrained and professorial expert on cyber law, is up in arms over a recent 9th Circuit decision holding that a preservation order is not a seizure requiring a warrant. Michael, Jane, and I dig into Orin’s agita, but we have trouble sharing it.

In quick hits:


Download the 405th Episode (mp3).


I’m unable to resist pointing out the profound bias built into everything Silicon Valley does these days. Google, it turns out, is planning to tell enterprise users of its word processor that words like “motherboard” and “landlord” are insufficiently inclusive for use in polite company. We won’t actually be forbidden to use those words. Yet. Though the future has apparently already arrived in Mountain View, where at least one source says that “mainboard” is the only acceptable term for the electronics that used to honor the women who raised us. In another blow for freedom, as now defined in the Valley, Twitter will suppress all climate talk that contradicts the views of a panel of government-appointed scientist-politicos. Apparently suppressing talk that contradicted the Centers for Disease Control and Prevention’s (CDC) scientist-politicians worked so well that Twitter is doubling down under the slogan, “You’ll pry these red pencils from our cold, dead fingers, Elon!”

In other cyber news, Megan Stifel sums up the last week of cyberwar news: It was a lot like the week before that. We’re still waiting – nervously – for Russian hackers to lift their eyes from the near target in Ukraine and focus on the West. The Five Eyes security agencies are doing their best to make sure we’re ready. Everywhere except for our cloud providers, who were exempted from the definition of really critical infrastructure in the Obama administration and have successfully fought off a change for the better part of a decade. Sultan Meghji and I think the Congressional effort to recognize the criticality of securing cloud providers is a heavy lift, especially among Republicans.

Is DJI sabotaging Ukraine’s drone fleet, presumably at China’s behest? The evidence is hardly airtight, but Ukraine is understandably not taking any chances, as it moves to more expensive drones sourced from the U.S. and elsewhere. Jamil Jaffer delivers a heartfelt plea to American hobbyists to do the same.

A group of former security officials are warning that pending antitrust bills could cause national security problems by handing advantages to Chinese tech companies. POLITICO has done a hit piece on them, claiming (with evidence ranging from plausible to laughable) that they are influenced by their ties to Silicon Valley. I’m pretty cautious about Silicon Valley’s effort to hide behind the national security interests they mostly dismiss, but I end up agreeing with Jamil that the antitrust bills should be amended to allow security to moderate the trustbusters’ zeal.

Sultan and I largely agreed on the week’s stories about Artificial Intelligence (AI). We were a bit disappointed by what started as a promising War on the Rocks piece about China’s Plans for AI and Cognitive Warfare. We were fascinated by the promise of hacking AI imperceptibly by corrupting its datasets. And we were interested in the facts but put off by the dime-store Marxism in MIT Technology Review’s tale of how dataset labeling for machine learning is providing a bare living for dispossessed Venezuelans.

Has Steve Ballmer been sneaking onto Microsoft’s Redmond campus and whispering ruthless tactics into Satya Nadella’s ear? Sultan and I think that may be the most plausible explanation for Microsoft’s greedy and boneheaded demand that the federal government pay extra for a crucial security feature.

Finally, in short hits:


Download the 404th Episode (mp3).


Whatever else the pundits are saying about the use of cyberattacks in the Ukraine war, Dave Aitel notes, they all believe it confirms their past predictions about cyberwar. Not much has been surprising about the cyber weapons the parties have deployed, Scott Shapiro agrees. The Ukrainians have been doxxing Russia’s soldiers in Bucha and its spies around the world. The Russians have been attacking Ukraine’s grid. What’s surprising is that the grid attacks have not seriously degraded civilian life, and how hard the Russians have had to work to have any effect at all. Cyberwar isn’t a bust, exactly, but it is looking a little overhyped. In fact, Scott suggests, it’s looking more like a confession of weakness than of strength: “My military attack isn’t up to the job, so I’ll throw in some fancy cyberweapons to impress The Boss.”

Would it have more impact here? We can’t know until the Russians (or someone else) gives it a try. But we should certainly have a plan for responding, and Dmitri Alperovitch and Sam Charap have offered theirs: Shut down Russia’s internet for a few hours just to show we can. It’s better than no plan, but we’re not ready to say it’s the right plan, given the limited impact and the high cost in terms of exploits exposed.

Much more surprising, and therefore interesting, is the way Ukrainian mobile phone networks have become an essential part of Ukrainian defense. As discussed in a very good blog post, Ukraine has made it easy for civilians to keep using their phones without paying, no matter where they travel in the country and no matter which network they find there. At the same time, Russian soldiers are finding the network to be a dangerous honeypot. Dave and I think there are lessons there for emergency administration of phone networks in other countries.

Gus Hurwitz draws the short straw and sums up the second installment of the Elon Musk v. Twitter story. We agree that Twitter’s poison pill probably kills Musk’s chances of a successful takeover. So what else is there to talk about? In keeping with the confirmation bias story, I take a short victory lap for having predicted that Musk would try to become the Rupert Murdoch of the social oligarchs. And Gus helps us enjoy the festschrift of hypocrisy from the Usual Sources, all declaring that the preservation of democracy depends on internet censorship, administered by their friends.

Scott takes us deep on pipeline security, citing a colleague’s article for Lawfare on the topic. He thinks responsibility for pipeline security should be moved from the Transportation Security Administration (TSA) to the Federal Energy Regulatory Commission (FERC), because, well, TSA. The Biden administration is similarly inclined, but I’m not enthusiastic; TSA may not have shown much regulatory gumption until recently, but neither has FERC, and TSA can borrow all the cyber expertise it needs from its sister agency, CISA. An option that’s also open to FERC, Scott points out.

You can’t talk pipeline cyber security without talking industrial control security, so Scott and Gus unpack a recently discovered ICS malware package that is a kind of Metasploit for attacking operational tech systems. It’s got a boatload of features, but Gus is skeptical that it’s the best tool for causing major havoc in electric grids or pipelines. Also remarkable: it seems to have been disclosed before the nation state that developed it could actually use it against an adversary. Now that’s Defending Forward!

As a palate cleanser, we ask Gus to take us through the latest in EU cloud protectionism. It sounds like a measure that will hurt U.S. intelligence but do nothing for Europe’s effort to build its own cloud industry. I recount the broader story, from subpoena litigation to the CLOUD Act to this latest counter-CLOUD attack. The whole thing feels to me like Microsoft playing both sides against the middle.

Finally, Dave takes us on a tour of the many proposals being launched around the world to regulate the use of Artificial Intelligence (AI) systems. I note that Congressional Dems have their knives out for face recognition vendor ID.me. And I return briefly to the problem of biased content moderation. I look at research showing that Republican Twitter accounts were four times more likely to be suspended than Democrats after the 2020 election. But I find myself at least tentatively persuaded by further research showing that the Republican accounts were four times as likely to tweet links to sites that a balanced cross section of voters considers unreliable. Where is confirmation bias when you need it?


Download the 403rd Episode (mp3).


The theme of this episode of the Cyberlaw Podcast is, “Be careful what you wish for.” Techlash regulation is burgeoning around the world. Mark MacCarthy takes us through a week’s worth of regulatory enthusiasm. Canada is planning to force Google and Facebook to pay Canadian news media for links. It sounds simple, but arriving at the right price – and the right recipients – will require a hefty dose of discretionary government intervention. Meanwhile, South Korea’s effort to regulate Google’s Android app store policies, which also sounds simple, is quickly devolving into such detail that the government might as well call it price regulation – because that’s what it is. And, Mark notes, even China, which seemed to be moderating its hostility to tech platforms, has just announced algorithm compliance audits for Tencent and ByteDance.

Nobody is weeping for Big Tech, but anybody who thinks this kind of thing will hurt Big Tech has never studied the history of AT&T – or Rupert Murdoch. Incumbent tech companies have the resources to protect themselves from regulatory harm – and to make sure their competitors will be crushed by the burdens they bear. The one missing chapter in the mutual accommodation of Big Tech and Big Government, I argue, is a Rupert Murdoch figure – someone who will use his platform unabashedly to curry favor not with the left but with the right. It’s an unfilled niche, but a moderately conservative Big Tech company is likely to find all the close regulatory calls being made in its favor if (or, more likely, when) the GOP takes power. If you think that’s not possible, you missed the last week of tech news. Elon Musk, whose entire business empire is built on government spending, is already toying with occupying a Silicon Valley version of the Rupert Murdoch niche. His acquisition of nearly 10% of Twitter is an opening gambit that is likely to make him the man that conservatives hail as the antidote to Silicon Valley’s political monoculture. Axios’s complaint that the internet is becoming politically splintered is wildly off the mark today, but it may yet come true.

Nick Weaver brings us back to earth with a review of the FBI’s successful (for now) takedown of the Cyclops Blink botnet – a Russian cyber weapon that was disabled before it could be fired. Nick reminds us that the operation was only made possible by a change in search and seizure procedures that the Electronic Frontier Foundation (EFF) and friends condemned as outrageous just a decade ago. Last week, he reports, Western law enforcement also broke the Hydra dark market. In more good news, Nick takes us through the ways in which bitcoin’s traceability has enabled authorities to bust child sex rings around the globe.

Nick also brings us This Week in Bad News for Surveillance Software: FinFisher is bankrupt; Israeli surveillance software smuggled onto EU ministers’ phones is being investigated; and Google has banned apps that use particularly intrusive data collection tools, outed by Nick’s colleagues at the International Computer Science Institute.

Finally, Europe is building a vast network to do face recognition across the continent. I celebrate the likely defeat of ideologues who’ve been trying to toxify face recognition for years. And I note that one of my last campaigns at the Department of Homeland Security (DHS) was a series of international agreements that lock European law enforcement into sharing such data with the United States. Defending those agreements, of course, should be a high priority for the State Department’s on-again-off-again new cyber bureau.


Download the 402nd Episode (mp3).


Spurred by a Cyberspace Solarium op-ed, Nate Jones gives an overview of cybersecurity worries in the maritime sector, where there is plenty to worry about. I critique the U.S. government’s December 2020 National Maritime Cybersecurity Strategy, a 36-page tome that, when the intro and summary and appendices and blank pages are subtracted, offers only eight pages of substance. Luckily, the Atlantic Council has filled the void with its own report on the topic.

Of course, the maritime sector isn’t the only one we should be concerned about. Sultan Meghji points to the deeply troubling state of industrial control security, as illustrated by a “10 out of 10” vulnerability recently identified in a Rockwell Automation ICS.

Still, sometimes software rot serves a good purpose. Maury Shenk tells us about decay in Russia’s SORM – a site-blocking system that may be buckling under the weight of the Ukraine invasion. Talking about SORM allows me to trash a nothingburger story perpetrated by three New York Times reporters who ought to know better. Adam Satariano, Paul Mozur and Aaron Krolik should be ashamed of themselves for writing a long story suggesting that Nokia did something wrong by selling Russian telecom gear that enables wiretaps. Since the same wiretap features are required by Western governments as a matter of law, Nokia could hardly do anything else. SORM and its abuses were all carried out by Russian companies. I suspect that, after wading through a boatload of leaked documents, these three (three!) reporters just couldn’t admit there was no there, there.

Nate and I note the emergence of a new set of secondary sanctions targets as Treasury begins sanctioning companies that it concludes are part of a sanctions evasion network. We also puzzle over the surprising pushback on proposals to impose sanctions on a major Russian software firm. If the WSJ is correct, and the reason is fear of cyberattacks if the Russian firm is sanctioned, isn’t that a reason to sanction it out of Western networks?

Sultan and Maury remind us that regulating cryptocurrency is wildly popular with some, including Sen. Elizabeth Warren and the EU Parliament. Sultan remains skeptical that sweeping regulation is in the cards. He is much more bullish on Apple’s ability to upend the entire fintech field by plunging into financial services with enthusiasm. I point out that it’s almost impossible for a financial services company to maintain a standoffish relationship with government, so Apple may have to change the tune it’s been playing in the U.S. for the last decade.

Maury and I explore fears that the EU’s Digital Markets Act (DMA) will break WhatsApp encryption, while Nate and I plumb some of the complexities of a story Brian Krebs broke about hackers exploiting the system by which online services provide subscriber information to law enforcement in an emergency.

Speaking of Krebs, we dig into Ubiquiti’s defamation suit against him. The gist of the complaint is that Krebs relied on a “whistleblower” who turned out to be the perp, and that Krebs didn’t quickly correct his scoop when that became apparent. My sympathies are with Krebs on this one, at least until Ubiquiti fills in a serious gap in its complaint – the lack of any allegation that the company told Krebs that he’d been misled and asked for a retraction. Without that, it’s hard to say that Krebs was negligent (let alone malicious) in reporting allegations by an apparently well-informed insider.

Maury brings us up to speed on the (still half-formed) U.K. online harms bill and explains why the U.K. government was willing to let the subsidiary of a Chinese company buy the U.K.’s biggest chip foundry. Sultan finds several insights in an excellent CNN story about the Great Conti Leak.

And, finally, I express my personal qualms about the indictment (for disclosing classified information) of Mark Unkenholz, a highly competent man whom I know from my time in government. To my mind, the prosecutors are going to have to establish that Unkenholz was doing something different from the kind of disclosures that are an essential part of working with tech companies that have no security clearances but plenty of tools needed by the intelligence community. This is going to be a story to watch.


Download the 401st Episode (mp3).


With the U.S. and Europe united in opposing Russia’s attack on Ukraine, a few tough transatlantic disputes are being swept away – or at least under the rug. Most prominently, the data protection crisis touched off by Schrems 2 has been resolved in principle by a new framework agreement between the U.S. and the EU. Michael Ellis and Paul Rosenzweig trade insights on the deal and its prospects before the European Court of Justice. The most controversial aspect of the agreement is the lack of any change in U.S. legislation. That’s simple vote-counting if you’re in Washington, but the Court of Justice of the European Union (CJEU) clearly expected that it was dictating legislation for the U.S. Congress to adopt, so Europe’s acquiescence may simply kick the can down the road a bit. The lack of legislation will be felt in particular, Michael and Paul aver, when it comes to providing remedies to European citizens who feel their rights have been trampled. Instead of going to court, they’ll be going to an administrative body with executive branch guarantees of independence and impartiality. We congratulate several old friends of the podcast who patched this solution together.

The Russian invasion of Ukraine, meanwhile, continues to throw off new tech stories. Nick Weaver updates us on the single most likely example of Russia using its cyber weapons effectively for military purposes – the bricking of Ukraine’s (and a bunch of other European) Viasat terminals. Alex Stamos and I talk about whether the social media companies recently evicted from Russia, especially Instagram, should be induced or required to provide information about their former subscribers’ interests to allow microtargeting of news to break Putin’s information management barriers; along the way, we examine why tech’s response to Chinese aggression has been less vigorous. Speaking of microtargeting, Paul gives kudos to the FBI for its microtargeted “talk to us” ads, visible only to Russian speakers within 100 yards of the Russian embassy in Washington. Finally, Nick and Michael mull the significance of Israel’s determination not to sell sophisticated cell phone surveillance malware to Ukraine.

Returning to Europe-U.S. tension, Alex and I unpack the European Digital Markets Act, which regulates a handful of U.S. companies because they are “digital gatekeepers.” I think it’s a plausible response to network-effect monopolization, ruined by anti-Americanism and the persistent illusion that the EU can regulate its way to a viable tech industry. Alex has a similar take, noting that WhatsApp’s adoption of end-to-end encryption was a big privacy victory – one the Digital Markets Act will undo by trying to force standardized interoperable messaging on gatekeepers.

Nick walks us through the surprising achievements of the gang of juvenile delinquents known as Lapsus$. Their breach of Okta is the occasion for speculation about how lawyers skew cyber incident response in directions that turn out to be very bad for the breach victim. Alex vividly captures the lawyerly dynamics that hamper effective response. While we’re talking ransomware, Michael cites a detailed report on corporate responses to REvil breaches, authored by the minority staff of the Senate Homeland Security Committee. Neither the FBI nor CISA comes out of it looking good. But the bureau comes in for more criticism, which may help explain why no one paid much attention when the FBI demanded changes to the cyber incident reporting bill.

Finally, Nick and Michael debate whether the musician and Elon Musk sweetheart Grimes could be prosecuted for computer crimes after confessing to having DDoSed an online publication over an embarrassing photo of her. Just to be on the safe side, we conclude, maybe she shouldn’t go back to Canada. And Paul and I praise a brilliant WIRED op-ed proposing that Putin’s Soviet empire nostalgia deserves a wakeup call; the authors (Rosenzweig and Baker, as it happens) suggest that the least ICANN can do is kill off the Soviet Union’s out-of-date .su country code.


Download the 400th Episode (mp3).


On March 24, Utah Governor Spencer Cox signed into law the Utah Consumer Privacy Act, which gives state residents the right to know what personal information businesses collect about them, to require businesses to delete their personal information, and to opt out of the sale of their data or its use in targeted advertising. Utah joins California, Virginia, and Colorado in the growing club of states with similar consumer privacy laws. The law follows the general contours of its statutory progenitor, the California Consumer Privacy Act (CCPA), but in many ways is less burdensome to business. The law takes effect December 31, 2023.

The Utah law applies to for-profit companies that do business in Utah or target products or services at residents of the state, have annual revenue of $25,000,000 or more, and either: a) control or process personal data of at least 100,000 Utah residents in a calendar year or b) derive over 50% of their gross revenue from the sale of personal data and control or process personal data of at least 25,000 Utah residents. There are numerous exceptions to the law’s applicability, including for entities and information regulated by HIPAA and the Gramm-Leach-Bliley Act.
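The applicability test reduces to a simple predicate. Here is a minimal sketch in Python – the function and parameter names are my own, only the thresholds come from the statute, and the statutory exemptions (HIPAA, Gramm-Leach-Bliley, and others) are omitted:

```python
def utah_ucpa_applies(annual_revenue: float,
                      utah_residents_processed: int,
                      pct_gross_revenue_from_data_sales: float) -> bool:
    """Rough sketch of the Utah Consumer Privacy Act applicability test.

    Assumes the business is for-profit and does business in Utah or
    targets products or services at Utah residents; exemptions ignored.
    """
    # Threshold requirement: $25,000,000 or more in annual revenue.
    if annual_revenue < 25_000_000:
        return False
    # Prong (a): personal data of 100,000+ Utah residents in a calendar year.
    prong_a = utah_residents_processed >= 100_000
    # Prong (b): over 50% of gross revenue from selling personal data,
    # plus personal data of 25,000+ Utah residents.
    prong_b = (pct_gross_revenue_from_data_sales > 50
               and utah_residents_processed >= 25_000)
    return prong_a or prong_b
```

Note that both prongs sit behind the revenue threshold, so a small-revenue data broker escapes the law entirely even if it processes data on hundreds of thousands of Utahns.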

In broad strokes, the Utah law gives consumers (defined as residents of Utah “acting in an individual or household context” and not “an employment or commercial context”) the rights to:

  • Confirm whether a controller is processing the consumer’s personal data.
  • Access that personal data.
  • Delete personal data that the consumer provided to the controller.
  • Obtain a copy of personal data that the consumer previously provided to the controller, in a format that is portable, readily useable, and transferable to another controller.
  • Opt out of the sale of the consumer’s personal data or its processing for targeted advertising.
  • Opt out of the processing of “sensitive data” collected from the consumer.

“Sensitive data” is defined as:

  • personal data that reveals an individual’s racial or ethnic origin, religious beliefs, sexual orientation, or citizenship or immigration status;
  • information regarding an individual’s medical history, mental or physical health condition, or medical treatment or diagnosis by a health care professional;
  • genetic personal data or biometric data, if the processing is for the purpose of identifying a specific individual; or
  • specific geolocation data.

Sensitive data does not include data that reveals racial or ethnic origin if the personal data is processed by a video communication service, or data processed by a licensed health care provider.

The law allows businesses to prescribe the means by which consumer requests are made, but requires businesses to take action on a request, and inform the consumer of that action, within 45 days of the request (extendable by one additional 45-day period). Controllers need not comply with a request if it appears to be fraudulent or if the controller cannot authenticate it using commercially reasonable means. A controller may not charge a fee for responding to a request unless the request is the consumer’s second or subsequent request in a 12-month period. In addition, a controller can refuse to act on a request, or can charge a reasonable fee to cover the administrative costs of complying, if the request is “excessive, repetitive, technically infeasible, or manifestly unfounded,” “the controller reasonably believes the primary purpose in submitting the request was something other than exercising a right,” or “the request, individually or as part of an organized effort, harasses, disrupts, or imposes undue burden on the resources of the controller’s business.” However, a controller who relies on one of these grounds for rejecting a request or for charging a fee bears the burden of demonstrating that one of these grounds applies.
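The timing and fee rules above can be sketched as a pair of helpers – a hypothetical illustration, not compliance advice; only the 45-day windows and the second-request fee trigger come from the statute, and the other statutory fee grounds (excessive, repetitive, or harassing requests) are omitted:

```python
from datetime import date, timedelta

RESPONSE_WINDOW = timedelta(days=45)

def response_deadline(request_date: date, extended: bool = False) -> date:
    """Deadline to act on a consumer request: 45 days from receipt,
    extendable by one additional 45-day period."""
    deadline = request_date + RESPONSE_WINDOW
    if extended:
        deadline += RESPONSE_WINDOW
    return deadline

def may_charge_fee(requests_in_last_12_months: int) -> bool:
    """A controller may not charge for responding unless the request is
    the consumer's second or subsequent request in a 12-month period."""
    return requests_in_last_12_months >= 2
```

So a request received January 1, 2024 must be answered by February 15, or by March 31 if the controller invokes the extension.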

Controllers are also required to provide consumers with “a reasonably accessible and clear privacy notice” that includes: “(i) the categories of personal data processed by the controller; (ii) the purposes for which the categories of personal data are processed; (iii) how consumers may exercise a right; (iv) the categories of personal data that the controller shares with third parties, if any; and (v) the categories of third parties, if any, with whom the controller shares personal data.” A controller must also provide clear notice to consumers if it processes sensitive data. If a controller sells personal data or engages in targeted advertising, it must “clearly and conspicuously disclose to the consumer the manner in which the consumer may exercise the right to opt out” of such sales or advertising.

Notably, some of the key terms in the Utah law are defined more narrowly than in the CCPA, potentially lessening some of the burden on businesses. “Sale” or “sell” means “the exchange of personal data for monetary consideration by a controller to a third party.” “Targeted advertising” means “displaying an advertisement to a consumer where the advertisement is selected based on personal data obtained from the consumer’s activities over time and across nonaffiliated websites or online applications to predict the consumer’s preferences or interests,” but does not include advertising “based on a consumer’s activities within a controller’s website or online application or any affiliated website or online application,” among other things. “Third party” means “a person other than the consumer, controller, or processor and other than an affiliate or contractor of the controller or the processor.”

The law prohibits controllers from discriminating against consumers for exercising a right by denying a good or service to the consumer, charging a different price or rate, or providing the consumer a different level of quality of a good or service. However, controllers may offer a different price, rate, level, quality or selection if the consumer has opted out of targeted advertising or if the offer is related to the consumer’s voluntary participation in a bona fide loyalty, rewards, or similar program. A controller also is not required to provide a product, service, or functionality to a consumer if the consumer does not provide his or her personal data or allow its processing, but that data is reasonably necessary for the controller to provide the product, service, or functionality.

In addition to providing state residents with the rights and disclosures described above, the Utah law requires controllers to “establish, implement, and maintain reasonable administrative, technical, and physical data security practices designed to…protect the confidentiality and integrity of personal data…and reduce reasonably foreseeable risks of harm to consumers relating to the processing of personal data.”

A special reminder that we will be doing episode 400 live on video and with audience participation on March 28, 2022 at noon Eastern daylight time. So, mark your calendar and when the time comes, use this link* to join the audience:

https://riverside.fm/studio/the-cyberlaw-podcast-400

See you there!

*Please note that using this link on a mobile phone will prompt you to download the Riverside app.


  • There’s nothing like a serious shooting war to bring on paranoia and mistrust, and the Russian invasion of Ukraine is generating mistrust on all sides.
  • Everyone expected a much more damaging cyberattack from the Russians, and no one knows why it hasn’t happened yet. Dave Aitel walks us through some possibilities. Cyberattacks take planning, and Russia’s planners may have believed they wouldn’t need to use large-scale cyberattacks—apart from what appears to be a pretty impressive bricking of Viasat terminals used extensively by Ukrainian forces. Now that the Russians could use some cyber weapons in Ukraine, the pace of the war may be making it hard to build them. None of that is much comfort to Western countries that have imposed sanctions, since their infrastructure makes a nice fat sitting-duck target, and it may draw fire soon if American intelligence warnings prove true.
  • Meanwhile, Matthew Heiman reports, the effort to shore up defenses is leading to a cavalcade of paranoia. Has the UK defense ministry banned the use of WhatsApp due to fears that it’s been compromised by Russia? Maybe. But WhatsApp has long had known security limitations that might justify downgrading its use on the battlefield. Speaking of ambiguity and mistrust, Telegram use is booming in Russia, Dave says, either because the Russians know how to control it or because they can’t. Take your pick.
  • Speaking of mistrust, the German security agency has suddenly discovered that it can’t trust Kaspersky products. Good luck finding them, Dave offers, since many have been white-labeled into other companies’ software. He has limited sympathy for an agency that resolutely ignored U.S. warnings about Kaspersky for years.
  • Even in the absence of a government with an interest in subverting software, the war is producing products that can’t be trusted. The maintainer of one popular open-source tool turned it into a data wiper for anyone whose computer looks Belarusian or Russian. What could possibly go wrong with that plan?
  • Meanwhile, people who’ve advocated tougher cybersecurity regulation (including me) are doing a victory lap in the press about how it will bolster our defenses. It’ll help, I argue, but only some, and at the cost of new failures. The best example is TSA’s effort to regulate pipeline security, which has struggled to avoid unintended consequences while being critiqued by an industry that has been hostile to the whole effort from the start.
  • The most interesting impact of the war is in China. Jordan Schneider explores how China and Chinese companies are responding to sanctions on Russia. Jordan thinks that Chinese companies will follow their economic interests and adhere to sanctions – at least where it’s clear they’re being watched – despite online hostility to sanctions among Chinese digerati.
  • Matthew and I think more attention needs to be paid to Chinese government efforts to police and intimidate ethnic Chinese, including Chinese Americans, in the United States. The Justice Department for one is paying attention; it has arrested several alleged Chinese government agents engaged in such efforts.
  • Jordan unpacks China’s new guidance on AI algorithms. I offer grudging respect to the breadth and value of the topics covered by China’s AI regulatory endeavors.
  • Dave and I are disappointed by a surprise package in the FY 22 omnibus appropriations act. Buried on page 2334 is an entire smorgasbord of regulation for intelligence agency employees who go looking for jobs after leaving the intelligence community. This version is better than the original draft, but mainly for the intelligence agencies; intelligence professionals seem to have been left out in the cold when revisions were proposed.
  • Matthew does an update on the peanut butter sandwich spies who tried to sell nuclear sub secrets to a foreign power that the Justice Department did not name at the time of their arrest. Now that country has been revealed. It’s Brazil, apparently chosen because the spies couldn’t bring themselves to help an actual enemy of their country.
  • And finally, I float my own proposal for the nerdiest possible sanctions on Putin. He’s a big fan of the old Soviet empire, so it would be fitting to finally wipe out the last traces of the Soviet Union, which have lingered for thirty years too long in the Internet domain system. Check WIRED magazine for my upcoming op-ed on the topic.


Download the 399th Episode (mp3).
