Hallucinations and Hype

“Hallucinations are bad enough. But after a while you learn to cope with things like seeing your dead grandmother crawling up your leg with a knife in her teeth.” —Hunter S. Thompson

When an AI gets something wrong, the mistake is often called a hallucination. The media has, rightfully, been all over the phenomenon of generative AI getting stuff wrong. But now we’re at a curious moment where the media itself is having its own AI hallucinations.

My feeds over the past week have been dominated by headlines like this:

[headline screenshot]

And this:

[headline screenshot]

And this:

[headline screenshot]
The first is from a news org, the next from a brilliant computer scientist, and the last from a brilliant cultural critic. Almost overnight, there’s been a vibe shift from “hail the new AI overlords” to “ew, AI is trash.”

This is a little…

[embedded image]
We are not even ten months from the launch of ChatGPT, the first real opportunity for average folks to jack into an LLM and finally have someone to complete their TPS reports and 5-paragraph themes for them. Ten months is an eternity in the desperate media world of page-view-scavenging schadenfreude, and a blip in the history of AI. Here the time scales are colliding in a supernova of too-hasty analysis, and we have, almost overnight, dropped from the crest of the hype tsunami to the trough.

I can see (at least) three fallacies in play here:

1. Judging AI’s development by consumer adoption, rather than business and government.

The evidence presented is Bing’s failure to increase market share despite its AI injection, and ChatGPT’s decline in use. This wildly underestimates the AI going on that you can’t see: the AI being used or figured out by businesses. Just about everyone I know in a corporate job is testing, piloting or strategizing around AI in some form or fashion. Serious investments abound. I would bet that in the end most consumers will experience AI not by playing with AI tools, but through other experiences that are silently powered by AI.

2. Assuming that a bubble—and its bursting—means the end of AI.

We know from past bubbles that there are always companies resilient, innovative or lucky enough to survive downturns and bring lasting change. It’s a good bet that even when all the specialty consumer AI tools and vaporware B2B thangs go down, some actual innovators will stick around.

3. A scorching case of recency bias.

While many journalists like to pretend that AI began last November, the truth is that the arc of its development goes back to the middle of the 20th century. Up until maybe IBM’s Watson captured the public imagination, this was a run of boom-and-bust cycles shaped largely by the availability of research funding, rather than the ebbs and flows of media attention.

Kevin Kelly, the Wired co-founder, observed, “The business plans of the next 10,000 startups… [is] Take X and add AI.” He wrote that in 2014.

Melanie Mitchell, in her Artificial Intelligence: A Guide for Thinking Humans, described an “AI spring in full bloom.” She wrote that in 2019.

So we are talking about one month, one set of results, in a much longer moment. We’ve heard this song before; it just wasn’t sung by Grimes’ AI voiceprint. To pretend that it’s all new has its risks, especially if you’re using the news cycle to think about how technology impacts your career or your business or your life or the lives of your kids.

All but the oldest of us have only ever lived with and adapted to more computing power and more forms of interaction with technology in our lives. There is nothing behind the recent headlines that should make us believe that will change. There is nothing that erases AI as a topic to wrangle with, whether as a reason to be concerned (ethics, misinformation, dislocation of jobs, copyright violation, Skynet etc.) or as a source of inspiration.

So if you prematurely consign AI to the dustbin of history because of some media hallucinations, you are…wait for it… and sorry… strAIght trippin’.

