Don’t be sucked in by AI’s head-spinning hype cycles

The last year was a roller coaster ride in the AI world, and no doubt many people are dizzied by the number of advances and reversals, the constant hype and equally constant fearmongering. But let’s take a step back: AI is a powerful and promising new technology, but the conversation isn’t always genuine, and it’s generating more heat than light.

AI is interesting to everyone from PhDs to grade school kids for good reason. Not every new technology both makes us question the fundamental natures of human intelligence and creativity, and lets us generate an infinite variety of dinosaurs battling with lasers.

This broad appeal means the debate over what AI is, isn’t, might or mustn’t be has spread from trade conferences like NeurIPS to specialist publications like this one, to the front page of impulse-purchase news mags at the grocery store. The threat and/or promise of AI (in a general, unspecified sense; that lack of specificity is part of the problem) has become a household topic seemingly overnight.

On the one hand, it must be validating for researchers and engineers who have toiled in relative obscurity for decades on what they feel is an important technology to see it so widely considered and remarked upon. But like the neuroscientist whose paper results in a headline like “Scientists have located the exact center of love,” or the physicist whose ironically named “god particle” leads to a theological debate, it surely must also be frustrating to have one’s work bounced around among the hoi polloi (that is, unscrupulous pundits, not innocent lay persons) like a beach ball.

“AI can now…” is a very dangerous way to start any sentence (though I’m sure I’ve done it myself) because it’s very difficult to say for certain what AI is really doing. It certainly can outplay any human at chess or Go, and it can predict the structure of protein chains; it can answer any question confidently (if not correctly) and it can do a remarkably good imitation of any artist, living or dead.

But it is difficult to tease out which of these things is important, and to whom, and which will be remembered as briefly diverting parlor tricks in 5 or 10 years, like so many innovations we have been told are going to change the world. The capabilities of AI are widely misunderstood because they have been actively misrepresented, both by those who want to sell it or drive investment in it, and by those who fear or underestimate it.

It’s obvious there’s a lot of potential in something like ChatGPT, but those building products with it would like nothing better than for you, potentially a customer or at least someone who will encounter it, to think that it is more powerful and less error-prone than it is. Billions are being spent to ensure that AI is at the core of all manner of services — and not necessarily to make them better, but to automate them the way so much has been automated with mixed results.

Not to use the scary “they,” but they — meaning companies like Microsoft and Google that have an enormous financial interest in the success of AI in their core businesses (having invested so much in it) — are not interested in changing the world for the better, but in making more money. They’re businesses, and AI is a product they are selling or hoping to sell — that’s no slander against them, just something to keep in mind when they make their claims.

On the other hand you have people who fear, for good reason, that their role will be eliminated not due to actual obsolescence but because some credulous manager swallowed the “AI revolution” hook, line, and sinker. People are not reading ChatGPT scripts and thinking, “oh no, this software does what I do.” They are thinking, “this software appears to do what I do, to people who don’t understand either.”

That’s very dangerous when your work is systemically misunderstood or undervalued, as a great deal is. But it’s a problem with management styles, not AI per se. Fortunately we have bold experiments like CNET’s attempt to automate financial advice columns: the graves of such ill-advised efforts will serve as gruesome trail markers to those thinking of making the same mistakes in the future.

But it is equally dangerous to dismiss AI as a toy, or to say it will never do such and such simply because it can’t now, or because one has seen an example of it failing. It’s the same mistake that the other side makes, but inverted: proponents see a good example and say, “this shows it’s over for concept artists”; opponents see a bad example (or perhaps the same one!) and say, “this shows it can never replace concept artists.”

Both build their houses upon shifting sands. But both click, and eyeballs are of course the fundamental currency of the online world.

And so you have these dueling takes that attract attention not for being thoughtful but for being reactive and extreme — which should surprise no one, since as we have all learned over the last decade, conflict drives engagement. What feels like a cycle of hype and disappointment is just fluctuating visibility in an ongoing and not very helpful argument over whether AI is fundamentally this or that. It has the feel of people in the ’50s arguing over whether we will colonize Mars or Venus first.

The reality is that a lot of those concept artists, not to mention novelists, musicians, tax preparers, lawyers, and every other profession that sees AI encroachment in one way or another, are actually excited and interested. They know their work well enough to understand that even a really good imitation of what they do is fundamentally different from actually doing it.

Advances in AI are happening more slowly than you think, not because there aren’t breakthroughs but because those breakthroughs are the result of years and years of work that isn’t as photogenic or shareable as stylized avatars. The biggest thing in the last decade was “Attention Is All You Need,” but we didn’t see that on the cover of Time. It’s certainly notable that as of this month or that, AI is good enough to do certain things, but don’t think of it as AI “crossing a line” so much as AI moving further down a long, long gradient that even its most gifted practitioners can’t see more than a few months down.

All of this is just to say: don’t get caught up in either the hype or the doomsaying. What AI can or can’t do is an open question, and if anyone says they know, check whether they’re trying to sell you something. What people may choose to do with the AI we already have, though — that’s something we can and should talk about more. I can live with a model that can ape my writing style — I’m just aping a dozen other writers anyway. But I would prefer not to work at a company that algorithmically determines pay or who gets laid off, because I wouldn’t trust those who put that system in place. As usual, the tech isn’t the threat — it’s the people using it.
