We begin this episode with Paul Rosenzweig describing major progress in teaching AI models to do text-to-speech conversions. Amazon flagged its new model as having “emergent” capabilities in handling what had been serious problems – things like speaking with emotion, or conveying foreign phrases. The key is the size of the training set, but Amazon was able to spot the point at which more data led to unexpected skills. This leads Paul and me to speculate that training AI models to perform certain tasks eventually leads the model to learn “generalization” of its skills. If so, the more we train AI on a variety of tasks – chat, text to speech, text to video, and the like – the better AI will get at learning new tasks, as generalization becomes part of its core skill set. We’re lawyers holding forth on the frontiers of technology, so take it with a grain of salt.
Cristin Flynn Goodwin and Paul Stephan join Paul Rosenzweig to provide an update on Volt Typhoon, the Chinese APT that is littering Western networks with the equivalent of logical land mines. Actually, it’s not so much an update on Volt Typhoon, which seems to be aggressively pursuing its strategy, as on the hyperventilating Western reaction to Volt Typhoon. There’s no doubt that China is playing with fire, and that the United States and other cyber powers should be liberally sowing similar weapons in Chinese networks. Unfortunately, for all the heavy breathing, the public measures adopted by the West do not seem likely to defeat or deter China’s strategy.
The group is not impressed by the New York Times’ claim that China is pursuing a dangerous electoral influence campaign on U.S. social media platforms. The Russians do it better, Paul Stephan says, and even they don’t do it well, I argue.
Paul Rosenzweig reviews the House China Committee report alleging a link between U.S. venture capital firms and Chinese human rights abuses. We agree that Silicon Valley VCs have paid too little attention to how their investments could undermine the system on which their billions rest, a state of affairs not likely to last much longer. Meanwhile, Paul Stephan and Cristin bring us up to date on U.S. efforts to disrupt Chinese and Russian hacking operations.
We will be eagerly awaiting resolution of the European fight over Facebook’s subscription fee and the implementation by websites of “Pay or Consent” privacy terms. I predict that Eurocrats’ hypocrisy will be tested by the effort to reconcile rulings for elite European media sites, which have already embraced “Pay or Consent,” with a nearly foregone ruling against Facebook. Paul Rosenzweig is confident that European hypocrisy is up to the task.
Cristin and I explore the latest White House enthusiasm for software security liability. Paul Stephan explains the flap over a UN cybercrime treaty, which is and should be stalled in Turtle Bay for the next decade or more.
Cristin also covers a detailed new Google TAG report on commercial spyware.
And in quick hits:

- House Republicans tried and failed to find common ground on renewal of FISA Section 702
- I recommend Goody-2, the ‘World’s Most Responsible’ AI Chatbot
- Dechert has settled a wealthy businessman’s lawsuit claiming that the law firm hacked his computer network
- Imran Khan is using AI to make impressively realistic speeches about his performance in Pakistani elections
- The Kids Online Safety Act secured sixty votes in the U.S. Senate, but whether the House will act on the bill remains to be seen
You can subscribe to The Cyberlaw Podcast using iTunes, Google Play, Spotify, Pocket Casts, or our RSS feed. As always, The Cyberlaw Podcast is open to feedback. Be sure to engage with @stewartbaker on Twitter. Send your questions, comments, and suggestions for topics or interviewees to CyberlawPodcast@gmail.com. Remember: If your suggested guest appears on the show, we will send you a highly coveted Cyberlaw Podcast mug!
The views expressed in this podcast are those of the speakers and do not reflect the opinions of their institutions, clients, friends, families, or pets.
The post Are AI Models Learning to Generalize? appeared first on Reason.com.