EUthanizing AI

Maury Shenk opens this episode by exploring three efforts to overcome notable gaps in the performance of large language models. OpenAI has developed a tool meant to address the models’ lack of explainability: it uses, naturally, another large language model to identify what makes individual neurons fire the way they do. Maury is skeptical that this is a path forward, but it’s nice to see someone trying. A second effort, Anthropic’s explicit “constitution” of rules for its models, is more familiar and perhaps more likely to succeed. A third is the use of “open source” principles to overcome the massive cost of developing and training new models; that has proved a surprisingly successful fast-follower strategy thanks to a few publicly available models and datasets. The question is whether those resources will stay available as competition heats up.
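
For the curious, the mechanics behind the OpenAI tool amount to an “explain, then score” loop: an explainer model writes a hypothesis about what a neuron detects, a simulator model predicts the neuron’s activations from that hypothesis alone, and the correlation between predicted and actual activations scores the explanation. Here is a minimal Python sketch of that loop, not OpenAI’s actual tooling; `call_llm` is a hypothetical stand-in for any chat-completion client.

```python
from statistics import correlation  # requires Python 3.10+

def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for a chat-completion API call."""
    raise NotImplementedError

def explain_neuron(snippets: list[tuple[str, float]]) -> str:
    """Ask the explainer model what a neuron detects, given
    (text, activation) pairs from its top-activating contexts."""
    examples = "\n".join(f"{act:.2f}\t{text}" for text, act in snippets)
    return call_llm(
        "These snippets and activation strengths come from one neuron in a "
        "language model. In a short phrase, what does the neuron detect?\n"
        + examples
    )

def score_explanation(explanation: str,
                      held_out: list[tuple[str, float]]) -> float:
    """Have a simulator model predict activations from the explanation
    alone, then correlate predictions with the real activations."""
    predicted = [
        float(call_llm(
            f"A neuron is described as: {explanation}\n"
            f"On a 0-10 scale, predict its activation on: {text}\n"
            "Answer with a number only."
        ))
        for text, _ in held_out
    ]
    actual = [act for _, act in held_out]
    # A high correlation means the explanation predicts behavior well.
    return correlation(predicted, actual)
```

The scoring step is what keeps the approach honest: an explanation counts only insofar as it predicts the neuron’s behavior on held-out text, which is also why skepticism like Maury’s is fair.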

The European Union has to hope that open source will succeed, because the entire continent is a desert when it comes to institutions able to make the big investments that look necessary to compete in the field. Despite having no AI companies to speak of (or maybe because of that), the EU is moving forward with its AI Act, an attempt to do for AI what the EU did for privacy with GDPR. Maury and I doubt the AI Act will have the same impact, at least outside Europe. That is partly because Europe doesn’t have the same jurisdictional hooks in AI as in data protection: the Act essentially regulates what AI can be sold inside the EU, and companies are likely to be quite willing to develop their products for the rest of the world and bolt on European use restrictions as an afterthought. In addition, the AI Act, which started life as a coherent if aggressive policy on high-risk models, has collapsed into a welter of half-thought-out improvisations in response to the unanticipated success of ChatGPT.

Anne-Gabrielle Haie is friendlier to the EU’s data protection policies, and she takes us through a group of legal rulings that will shape liability for data protection violations. She also notes the potentially protectionist impact of a recent EU proposal that would bar U.S. companies from offering secure cloud computing in Europe unless they partner with a European cloud provider.

Paul Rosenzweig introduces us to one of the U.S. government’s most impressive technical achievements in cyberdefense – tracking down, reverse engineering, and then killing Snake, possibly Russia’s best hacking tool.

Paul and I chew over China’s most recent self-inflicted wound in attracting global investment – the raid on Capvision. I agree that it’s going to discourage investors who need information before they part with their cash. But I also offer a lukewarm justification for China’s fear that Capvision’s business model encourages leaks.

Maury reviews Chinese tech giant Baidu’s ChatGPT-like search add-on. I wonder whether we can ever trust any such models for search, given their love affair with plausible falsehoods.

Paul reviews the technology that will be needed to meet what looks like a national trend toward requiring age verification for social media.

Maury reviews the ruling upholding the lawfulness of the UK’s interception of Encrochat users’ communications. And Paul describes the latest crimeware for phones, this time centered in Italy.

Finally, in quick hits:

I note that both the director and the career deputy director are likely to leave NSA in the next several months.
And Maury and I both enthuse over Google’s new “passkey” technology.

Download the 457th Episode (mp3)

You can subscribe to The Cyberlaw Podcast using iTunes, Google Play, Spotify, Pocket Casts, or our RSS feed. As always, The Cyberlaw Podcast is open to feedback. Be sure to engage with @stewartbaker on Twitter. Send your questions, comments, and suggestions for topics or interviewees to CyberlawPodcast@gmail.com. Remember: If your suggested guest appears on the show, we will send you a highly coveted Cyberlaw Podcast mug! The views expressed in this podcast are those of the speakers and do not reflect the opinions of their institutions, clients, friends, families, or pets.
