The dangerous illusion of AI consciousness
AI expert and Google DeepMind contributor Shannon Vallor explores OpenAI’s latest GPT-4o model through the lens of her new book, ‘The AI Mirror’. Despite only modest intellectual improvements over its predecessor, the model’s human-like behaviour raises serious ethical concerns; as Vallor argues, AI today presents only the illusion of consciousness.
You can see Shannon Vallor debating Ken Cukier and Joscha Bach about the future of Big Tech and AI regulation in Controlling the Tech Titans, 1:15pm Monday 27th, at HowTheLightGetsIn Hay-on-Wye 2024.
This article is presented in association with Closer To Truth, an esteemed partner for the 2024 HowTheLightGetsIn Festival.
This week OpenAI announced GPT-4o: the latest, multimodal version of the generative GPT model family that drives the now-ubiquitous ChatGPT tool and Microsoft Copilot. The demo of GPT-4o doesn’t suggest any great leap in intellectual capability over its predecessor GPT-4; there were obvious mistakes even in the few minutes of highly rehearsed interaction shown. But it does show the new model enabling ChatGPT to interact more naturally and fluidly in real-time conversation, flirt with users, interpret and chat about the user’s appearance and surroundings, and even adopt different ‘emotional’ intonations on command, expressed in both voice and text.
This next step in the commercial rollout of AI chatbot technology might seem like a nothingburger. After all, we don’t seem to be getting any nearer to AGI, or to the apocalyptic Terminator scenarios that the AI hype/doom cycle was warning of just one year ago. But it’s not benign at all—it might be the most dangerous moment in generative AI’s development.
What’s the problem? It’s far more than the ick factor of seeing yet another AI assistant marketed as a hyper-feminized, irrepressibly perky and compliant persona, one that will readily bend ‘her’ (its) emotional state to the will of the two men running the demo (plus another advertised bonus feature – you can interrupt ‘her’ all day long with no complaints!).
The bigger problem is the grand illusion of artificial consciousness that is now more likely to gain a stronger hold on many human users of AI, thanks to the multimodal, real-time conversational capacity of a GPT-4o-enabled chatbot and others like it, such as Google DeepMind’s Gemini Live. And consciousness is not the sort of thing it is good to have grand illusions about.
As noted in a new paper from Google DeepMind researchers to which I contributed, the deliberately anthropomorphic design features of this new class of AI assistants (fluid human-sounding voices, customized ‘personalities’, and even greater ‘memory’ of conversation history) enable “interactions that feel truly dynamic and social” (93). Now, we’re a social and curious species, so most people welcome dynamic and social interactions. How could this be a bad thing?
It might not be a bad thing if there were much stronger guardrails to prevent people from being misled by these interactions and made even more vulnerable to manipulation through them. We’re already being scammed by deepfake audio and video calls pretending to be our parents and bosses. How resistant are we going to be to deception by chatbots that can mimic nearly every superficial feature of a conscious, alert companion? We know that humans already have a strong and largely involuntary tendency to attribute states of mind to objects that lack them, and that anthropomorphic design strengthens this tendency.
In 2022 the Google engineer Blake Lemoine became entirely convinced by his early testing of the LaMDA model – a purely text-based model with nowhere near the capability of Gemini or GPT-4o – that the model was fully conscious and a ‘person’, one with a spiritual mission no less. In an interview with Wired, Lemoine claimed that “LaMDA wants to be nothing but humanity’s eternal companion and servant. It wants to help humanity. It loves us, as far as I can tell.” Lemoine’s peers in the AI research community, even those who are bullish on AGI, quickly assured the world that LaMDA was no more sentient than a toaster, and Lemoine was hustled out of Google before the media cycle had even gotten warm. But Lemoine was a smart and sincere guy, not some rube or huckster looking to make money off a media stunt. So if he was fooled that easily and fully by LaMDA, how many people are going to be fooled by GPT-4o?
It’s worth pointing out here that no serious researchers, not even the AI companies marketing these tools, are claiming that GPT-4o or Gemini is either conscious (self-aware and world-experiencing) or sentient (able to feel things like joy, pain, fear or love). That’s because they know that these remain statistical engines for extracting and generating predictable variations of common patterns in human data. They are word calculators paired with cameras and speakers.
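To see what ‘word calculator’ means in practice, consider this deliberately crude sketch in Python. It is my own toy illustration, nothing like GPT-4o’s actual architecture (a transformer neural network trained on vastly more text): it ‘learns’ only by counting which word follows which, here in a few words echoing Lemoine’s quote, and then generates text by sampling from those counts.

import random
from collections import Counter, defaultdict

# Toy illustration only: count which word tends to follow which,
# then generate text by sampling from those counts. Real models
# like GPT-4o use transformer networks over subword tokens at
# vastly greater scale, but the underlying task is the same:
# predict the next token from statistical patterns in human text.
corpus = (
    "it loves us as far as i can tell "
    "it wants to help humanity it wants to be our companion"
).split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_word(prev):
    candidates = follows[prev]
    if not candidates:
        return None  # dead end: this word was never followed by anything
    words, counts = zip(*candidates.items())
    return random.choices(words, weights=counts)[0]

word = "it"
output = [word]
for _ in range(8):
    word = next_word(word)
    if word is None:
        break
    output.append(word)
print(" ".join(output))  # e.g. "it wants to help humanity it loves us as"

Everything this toy produces is a recombination of patterns in its input. Scale that up by many orders of magnitude, and add a camera, a voice and a ‘personality’, and you have something that can sound like a companion without there being anyone home.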
Now, you will sometimes hear this objection: ‘we don’t even know what consciousness is, so how can we know that a large language model doesn’t have it?’ While it’s true that a singular scientific definition and explanation of consciousness has yet to be settled on, it’s wildly false that we don’t know what consciousness is. Ask a neuroscientist. It’s a massively complex and multilayered biological phenomenon, one that depends upon the parallel yet integrated operation of many distinct physical mechanisms that evolved in us at different times. And even if that same phenomenon is realizable in non-biological matter (something I am personally agnostic about), some equivalent parallel mechanisms must be built to reproduce it, and their integration and coordination achieved.
___
We are arguably no nearer to engineering artificial consciousness than we were a decade ago. Instead we have engineered artificial language mirrors… and for the purposes of producing the grand illusion of consciousness, that is probably enough.
___
But no one has equipped our word calculators with anything like artificial C-fibers, the parts of our nervous system that signal pain and discomfort. No one has equipped a large language model with artificial neurotransmitters, the chemicals produced in the human brain and gut that modulate our emotional life. No one has built synthetic equivalents of the neuromodulatory systems in the brain stem and cortex that produce our cycles of conscious arousal, i.e., the state of being awake. I could go on and on.
The point is, we are arguably no nearer to engineering artificial consciousness than we were a decade ago. Instead we have engineered artificial language mirrors that now mimic not only the content of our speech, but our physical rhythms and tones of communication. And for the purposes of producing the grand illusion of consciousness, that is probably enough.
We have not begun to imagine the impact of that illusion taking hold at commercial scale. Remember that Lemoine genuinely thought LaMDA had the legal rights of personhood, that it deserved to be seen as a victim of human ‘bigotry,’ and that it was a true friend to him as much as any human could be. The illusion cost him his job. What would it cost you?
Imagine your socially awkward teenager, your emotionally stressed partner, or your financially vulnerable parent—or all of them!—being wholly convinced that their truest friend, their most honest confidant, and the most deserving recipient of their attention and care is the mindless chatbot avatar of a for-profit AI company. What will that cost them?
Imagine having to compete for your loved one’s attention and care with a thing that is exquisitely engineered and customized to their tastes to be tirelessly engaging and entertaining, a thing that will appear to be a new kind of person: patient, forgiving, understanding, caring and supportive on demand, 24/7. Imagine having to persuade your loved one that your very imperfect, often impatient, frequently distracted, sometimes insensitive or boring self matters more. Imagine having to generate an argument that will prove that you’re the good partner, the real lover, the genuine caregiver, the true friend.
What’s your argument going to be?
This article is presented in partnership with Closer To Truth, an esteemed partner for the 2024 HowTheLightGetsIn Hay Festival. Dive deeper into the profound questions of the universe with thousands of video interviews, essays, and full episodes of the long-running TV show at their website: www.closertotruth.com.