This episode of the Cyberlaw Podcast opens with a look at some genuinely weird AI behavior, first by the Bing AI chatbot – dark fantasies, professions of love, and lies on top of lies – and then by Google’s AI search bot. Chinny Sharma and Nick Weaver explain how we ended up with AI that is better at BS’ing than at accurately conveying facts. This leads me to propose a scheme to ensure that China’s autocracy never gets its AI capabilities off the ground.
One thing that AI is creepily good at is faking people’s voices. I try out ElevenLabs’ technology in the first advertisement ever to run on the Cyberlaw Podcast.
The upcoming fight over renewing section 702 of FISA has focused Congressional attention on FBI searches of 702 data, Jim Dempsey reports. That leads us to the latest compliance assessment of how agencies are handling 702 data. Chinny wonders whether the only way to save 702 will be to cut off the FBI’s access – at great cost to our unified approach to terrorism intelligence, I point out. I also complain that the compliance data is older than dirt. Jim and I come together around the need to provide more safeguards against political bias in the intelligence community.
Nick brings us up to date on cyber issues in Ukraine, as summarized in a good Google report. He puzzles over Starlink’s effort to keep providing service to Ukraine without assisting offensive military operations.
Chinny does a victory lap over reports that the national cyber strategy will recommend imposing liability on the companies that distribute tech products – a recommendation she made in a paper released last year. I wonder why Google thinks this is good for Google.
Nick introduces us to modern reputation management. It involves a lot of fake news and bogus legal complaints. The censor’s favorite tools are the Digital Millennium Copyright Act (DMCA) and the privacy laws of the European Union (EU) and California. What is remarkable to my mind is how little a business taking so much legal risk charges its customers.
Jim and Chinny cover the charm offensive being waged in Washington by TikTok’s CEO and the broader debate over China’s access to the personal data of Americans, including health data. Jim cites a recent Duke study, which I complain is not clear about when the data being sold is individual and when it is aggregated. Nick reminds us all that aggregate data is often easy to individualize.
Finally, we make quick work of a few more stories:
This week’s oral argument in Gonzalez v. Google is a big deal, but we will cover it in detail next week, once we have the benefit of the argument.
If you want to know why conservatives think the whole “disinformation” scare is a scam to suppress conservative speech, look no further than the scandal over the State Department’s funding of a non-governmental organization devoted to cutting off ad revenue for “risky” purveyors of “disinformation” – a category into which it puts Reason (presumably including the Volokh Conspiracy), Real Clear Politics, the N.Y. Post, and the Washington Examiner – all outlets that can look like disinformation only to the most biased judge. The National Endowment for Democracy has already cut off funding to the NGO that dreamed this up, but Microsoft’s ad agency still seems to be dancing to the censor’s tune.
EU lawmakers are refusing to endorse the latest EU-U.S. data deal. But it’s all virtue signaling.
Leaving Twitter over Elon Musk’s ownership turns out to be about as popular as leaving the U.S. over Trump’s presidency.
Chris Inglis has finished his tour of duty as national cyber director.
And the Federal Trade Commission’s humiliating failure to block Meta’s acquisition of Within is now complete. Meta closed the deal last week.
You can subscribe to The Cyberlaw Podcast using iTunes, Google Play, Spotify, Pocket Casts, or our RSS feed. As always, The Cyberlaw Podcast is open to feedback. Be sure to engage with @stewartbaker on Twitter. Send your questions, comments, and suggestions for topics or interviewees to CyberlawPodcast@steptoe.com. The views expressed in this podcast are those of the speakers and do not reflect the opinions of their institutions, clients, friends, families, or pets.