Elon Musk, Andrew Yang, and Steve Wozniak Propose an A.I. ‘Pause.’ It’s a Bad Idea and Won’t Work Anyway.

“AI systems with human-competitive intelligence can pose profound risks to society and humanity,” asserts an open letter signed by Twitter’s Elon Musk, universal basic income advocate Andrew Yang, Apple co-founder Steve Wozniak, DeepMind researcher Victoria Krakovna, Machine Intelligence Research Institute co-founder Brian Atkins, and hundreds of other tech luminaries. The letter calls “on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4.” If “all key actors” will not voluntarily go along with a “public and verifiable” pause, the letter’s signatories argue that “governments should step in and institute a moratorium.”

The signatories further demand that “powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable.” This amounts to a requirement for nearly perfect foresight before allowing the development of artificial intelligence (A.I.) systems to go forward.

Human beings are really, really terrible at foresight—especially apocalyptic foresight. Hundreds of millions of people did not die from famine in the 1970s; 75 percent of all living animal species did not go extinct before the year 2000; and the "war, starvation, economic recession, possibly even the extinction of homo sapiens" predicted to accompany a 2006 peak in global petroleum production never came to pass, because the peak itself never happened.

Nonapocalyptic technological predictions do not fare much better. Moon colonies were not established during the 1970s. Nuclear power, unfortunately, does not generate most of the world’s electricity. The advent of microelectronics did not result in rising unemployment. Some 10 million driverless cars are not now on our roads. As OpenAI (the company that developed GPT-4) CEO Sam Altman argues, “The optimal decisions [about how to proceed] will depend on the path the technology takes, and like any new field, most expert predictions have been wrong so far.”

Still, some of the signatories are serious people, and the outputs of generative A.I. and large language models like ChatGPT and GPT-4 can be amazing—e.g., doing better on the bar exam than 90 percent of human test takers. They can also be confounding.

Some segments of the transhumanist community have long been greatly worried about an artificial superintelligence getting out of our control. However, as capable (and quirky) as it is, GPT-4 is not that. And yet, a team of researchers at Microsoft (which invested $10 billion in OpenAI) tested GPT-4 and reported in a preprint, "The central claim of our work is that GPT-4 attains a form of general intelligence, indeed showing sparks of artificial general intelligence."

As it happens, OpenAI is also concerned about the dangers of A.I. development—however, the company wants to proceed cautiously rather than pause. “We want to successfully navigate massive risks. In confronting these risks, we acknowledge that what seems right in theory often plays out more strangely than expected in practice,” wrote Altman in an OpenAI statement about planning for the advent of artificial general intelligence. “We believe we have to continuously learn and adapt by deploying less powerful versions of the technology in order to minimize ‘one shot to get it right’ scenarios.”

In other words, OpenAI is properly pursuing the usual human path for gaining new knowledge and developing new technologies—that is, learning from trial and error, not “one shot to get it right” through the exercise of preternatural foresight. Altman is right when he points out that “democratized access will also lead to more and better research, decentralized power, more benefits, and a broader set of people contributing new ideas.”

A moratorium imposed by U.S. and European governments, as called for in the open letter, would certainly delay access to the possibly quite substantial benefits of new A.I. systems while offering only doubtful gains in A.I. safety. In addition, it seems unlikely that the Chinese government and A.I. developers in that country would agree to the proposed moratorium anyway. Surely, the safe development of powerful A.I. systems is more likely to occur in American and European laboratories than in those overseen by authoritarian regimes.
