“Prediction: 2023 will make 2022 look like a sleepy year for AI advancement & adoption,” Greg Brockman, president and co-founder of OpenAI, tweeted on December 31. That’s a bold claim given what happened when Brockman’s artificial intelligence (A.I.) company allowed the public to preview ChatGPT and DALL-E 2. The former is its generative pre-trained transformer (GPT), a large language model optimized through deep learning to simulate human writing. The latter is its text-to-image A.I. art deep learning model that generates digital images from natural language descriptions.
ChatGPT can create everything from novel dad jokes to fairly well-written computer code. At my prompting, it wrote a serviceable sonnet describing Gilgamesh’s failed quest for immortality.
Within five days of ChatGPT’s public launch, 1 million people had signed up to give it assignments. By comparison, it took Instagram two and a half months to reach 1 million users, while Facebook needed 10 months, Twitter needed two years, and Netflix needed 41 months. ChatGPT’s servers are now regularly at capacity, and there is a waiting list to interact with the model.
ChatGPT was trained on around half a trillion words of text scraped from the internet and a selection of books. The model boasts 175 billion parameters—the adjustable values in a language model that are tuned during training so that it makes more accurate predictions about the appropriate responses to conversations and queries.
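The idea of a "parameter" can be illustrated with a toy sketch (entirely my own, unrelated to OpenAI's actual code): a parameter is just a number that a training loop nudges, step by step, to shrink prediction error. ChatGPT tunes 175 billion of them; the example below tunes one.

```python
# A single-parameter model y_pred = w * x, fit by gradient descent.
# "w" plays the role of one of ChatGPT's 175 billion parameters.

def train(data, steps=1000, lr=0.01):
    w = 0.0  # the lone parameter, initialized arbitrarily
    for _ in range(steps):
        # gradient of mean squared error with respect to w
        grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
        w -= lr * grad  # nudge the parameter to reduce the error
    return w

# Training examples generated by the hidden rule y = 3x;
# training should drive the parameter toward 3.
data = [(1.0, 3.0), (2.0, 6.0), (3.0, 9.0)]
w = train(data)
print(round(w, 2))  # converges near 3.0
```

A language model does the same thing at vastly greater scale: billions of such values, adjusted over trillions of words, until the model's next-word predictions match its training text.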
High school and college teachers are worried that students will use ChatGPT to write essays, and journalists are concerned that it can produce news articles. Some companies, meanwhile, see an opportunity to ramp up their productivity without adding personnel. Since November, the technology news site CNET has used ChatGPT to produce nearly 100 articles. The outlet says human editors check to ensure the articles are “accurate, credible, and helpful.” But after publication, outside journalists gleefully spotted several elementary robot reporting errors that CNET’s human editors had missed. Corrections followed.
In January, New York City schools blocked access to ChatGPT on school-owned networks and devices. The January 12 issue of Nature reported that scientific reviewers were fooled about one-third of the time by fake biomedical article abstracts that ChatGPT generated. Somewhat ironically, the prestigious International Conference on Machine Learning banned authors from using A.I. tools like ChatGPT to write scientific papers.
A more sinister prospect is that large language models like ChatGPT will enable the automation of effective propaganda and the spread of disinformation. They are, after all, cheap, fast, and human-sounding.
As amazing and amusing as ChatGPT is, it is by no means flawless. “ChatGPT sometimes writes plausible-sounding but incorrect or nonsensical answers,” OpenAI acknowledges.
When prompted, ChatGPT makes text predictions to produce plausible responses, but the machine sometimes “hallucinates” factually wrong answers. In one case, a user asked ChatGPT what mammal lays the largest eggs. It responded that elephants did, adding that some elephant eggs reach nine inches in length and weigh up to five pounds.
Skeptics will argue that large language models like ChatGPT are simply traversing the Gartner hype cycle. Developed by the information technology consultancy Gartner, the hype cycle is a graphical representation of the stages a technology passes through from conception to maturity and widespread adoption: an innovation trigger, the peak of inflated expectations, the trough of disillusionment, the slope of enlightenment, and finally the plateau of productivity. On that view, ChatGPT’s innovation trigger has propelled us to the peak of inflated expectations, and the trough of disillusionment lies ahead.
But large language models are not going away, and they will get better and better. Even before ChatGPT was released, A.I. watchers were speculating about the impending arrival of OpenAI’s GPT-4. Initial rumors suggested that GPT-4 would feature 100 trillion parameters, more than 500 times ChatGPT’s 175 billion. In an interview last year, however, OpenAI CEO Sam Altman said GPT-4 won’t be much bigger than ChatGPT.
When the cybersecurity content marketing firm HackerContent asked ChatGPT to guess how many parameters GPT-4 will have, it gave a different answer. “It’s hard to make an accurate guess without more information about the design and architecture of ChatGPT-4,” ChatGPT said, “but it is likely to have several hundred billion parameters or even more, as machine learning models tend to increase in size and complexity with each iteration.” While that sounds reasonable, ChatGPT may once again be hallucinating a plausible answer.
“There will be scary moments as we move towards [artificial general intelligence] systems, and significant disruptions,” Altman tweeted in December, “but the upsides can be so amazing that it’s well worth overcoming the great challenges to get there.” I, for one, welcome our new chatbot overlords.
The post Welcoming Our New Chatbot Overlords appeared first on Reason.com.