AI-splaining

The war that began with the Russian invasion of Ukraine grinds on. Cybersecurity experts have spent much of 2022 trying to draw lessons about cyberwar strategies from the conflict. Dmitri Alperovitch takes us through the latest lessons, cautioning that all of it could look different in a few months, as both sides adapt to the other's actions.

David Kris joins Dmitri to evaluate a Microsoft report hinting at how China may be abusing its edict that software vulnerabilities must be reported first to the Chinese government. The temptation to turn such reports into 0-day exploits is strong, and Microsoft notes with suspicion a recent rise in Chinese 0-day exploits. Dmitri worried about just such a development while serving on the Cyber Safety Review Board, but he is not yet convinced that we have the evidence to make a case against the Chinese mandatory disclosure law.

Sultan Meghji keeps us in Redmond, digging through a deep Protocol story on how Microsoft has helped build Artificial Intelligence (AI) capacity in China. The amount of money invested, and the deep bench of AI researchers from China, raise real questions about how the United States can decouple from China – and whether China will eventually decide to do the decoupling.

I express skepticism about the White House's latest initiative on ransomware, a 30+ nation summit that produced a modest set of concrete agreements. But Sultan and Dmitri have been on the receiving end of Deputy National Security Adviser Anne Neuberger's forceful personality, and they think we will see results. We'd better. Banks report that ransomware payments doubled last year, to $1.2 billion.

David introduces the high-stakes struggle over when cyberattacks can be excluded from insurance coverage as acts of war. A recent settlement between Mondelez and Zurich has left the law in limbo.

Sultan tells me why AI is so bad at explaining the results it reaches. He sees light at the end of the tunnel. I see more stealthy imposition of woke values. But we find common ground in trashing the Facial Recognition Act, a bill from lefty Democrats that throws together every bad idea for regulating facial recognition ever put forward and adds a few more. A red wave election will be worth it just to make sure this bill stays dead.

Finally, Sultan reviews the National Security Agency's report on supply chain security. And I introduce the elephant in the room, or at least the mastodon: Elon Musk's takeover at Twitter and the reaction to it. I downplay the probability of CFIUS reviewing the deal. And I mock the Elon-haters who fear that Musk's scrimping on content moderation will turn Twitter into a hellhole that includes *gasp!* Republican speech. Turns out that they are fleeing Twitter for Mastodon, which pretty much invented scrimping on content moderation.

Download the 429th Episode (mp3)

You can subscribe to The Cyberlaw Podcast using iTunes, Google Play, Spotify, Pocket Casts, or our RSS feed. As always, The Cyberlaw Podcast is open to feedback. Be sure to engage with @stewartbaker on Twitter. Send your questions, comments, and suggestions for topics or interviewees to CyberlawPodcast@steptoe.com. Remember: If your suggested guest appears on the show, we will send you a highly coveted Cyberlaw Podcast mug!

The views expressed in this podcast are those of the speakers and do not reflect the opinions of their institutions, clients, friends, families, or pets.
