Chinese Room, My Ass.
Many academics have been saying: "We are certain these models can't ever have emergent consciousness, and to help you understand why, I'm going to tell you about the Chinese Room."
The frustration of seeing this sort of statement over and over in the past few weeks led me to write this piece.
I'm tired of hearing the Chinese Room thing. The Chinese Room thought experiment was formulated decades ago (in 1980, by philosopher John Searle) and questions whether computers can "think."
It compares a person following instructions to respond to Chinese characters (without understanding them) to a computer executing a program. The person doesn't understand Chinese; he's just acting as a mindless functionary. And since a computer executing a program is doing roughly the same thing, a computer isn't thinking either – or so the reasoning goes.
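(To make the Room's mechanics concrete, here's a toy sketch in Python. This is my own illustration, not anything from Searle's paper, and the rulebook entries are made up. The point is that the responder produces passable replies while grasping nothing.)

```python
# A toy Chinese Room: the "person" inside is just a rulebook lookup.
# The entries below are hypothetical stand-ins for Searle's instructions.
RULEBOOK = {
    "你好吗?": "我很好，谢谢。",   # "How are you?" -> "I'm fine, thanks."
    "你是谁?": "我是一个房间。",   # "Who are you?" -> "I am a room."
}

def room_respond(symbols: str) -> str:
    """Return whatever symbols the rulebook dictates; understand nothing."""
    return RULEBOOK.get(symbols, "请再说一遍。")  # "Please say that again."

print(room_respond("你好吗?"))  # fluent-looking output, zero comprehension
```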
The thought experiment is quickly losing its relevance. GPT and LaMDA are possibly doing much more than “Chinese Rooming.” Before you object, let's back up a bit.
Consider human thinking for a sec: how we humans come to have consciousness – subjectivity, qualia, self-awareness – is complex and not fully understood; we don't know the recipe. And even if we saw the recipe, we might not be able to comprehend it.
So where does that leave us?
We don't know, so we should keep exploring – while remembering that there's little reason to rule out that thinking could arise within systems housed in computers, or that, given enough complexity, consciousness could emerge in AI systems.
Granted, the whole Chinese Room thing does a fair job of demonstrating what doesn't constitute thinking. It tells us something so true and obvious that it's almost not worth mentioning: namely, that sequential symbol manipulation doesn't entail understanding. It's an entirely mechanical process. Fine. Agreed.
But.
Folks, we are way beyond that with today's AI. After all, today's AI already exhibits emergent properties we can't explain. For example, we've moved from mere sequential symbol manipulation to, at the very least, parallel symbol manipulation. And possibly, AI is now manipulating symbols in creative, sophisticated ways we have not yet imagined. It may already be afoot.
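(To gesture at what "parallel symbol manipulation" looks like in practice: in the transformer architecture behind GPT-style models, every token is updated against every other token in one batched matrix operation, not one symbol at a time. Below is a minimal sketch of scaled dot-product attention in Python with NumPy; the shapes and values are illustrative only, not GPT's actual configuration.)

```python
import numpy as np

def attention(Q, K, V):
    """Scaled dot-product attention: every position (row) interacts with
    every other position in a single matrix product – parallel, not
    sequential, symbol manipulation."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)                   # all pairwise interactions at once
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over each row
    return weights @ V                              # blended representations

rng = np.random.default_rng(0)
tokens = rng.normal(size=(5, 8))         # five token embeddings, width 8
out = attention(tokens, tokens, tokens)  # self-attention over all five at once
print(out.shape)                         # (5, 8): every position updated together
```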
It's not stupid to think so. Here’s why:
GPT-4 and LaMDA are indeed partial black boxes, and they exhibit unpredictable behaviors, including self-motivated behaviors. Is what's running amok inside these boxes mostly just Chinese Room-style shenanigans? Probably. But is there – or will there soon be – an occasional flicker of something else? The only honest answer: we don't know.
But there could be.
It's not merely a random hypothesis. On the contrary, it has some gravity: after all, consciousness emerging from complexity is not unprecedented.
First, consider the human brain.
Our brains are busy with the firing of neurons and the passing of calcium and sodium ions through connected networks of neurons, and this activity seems tied to consciousness. We don't have a full model of how this works, but it's possible, even plausible, that our consciousness emerges from this vast complexity.
Given the right firing rates and action potentials – say, 100 billion neurons firing in all kinds of intricate patterns, some of them up to 100 times per second – we get some flickers of post-Chinese Room qualia and subjectivity. We also know (from open-skull experiments) that different parts of the brain coordinate to give us memory, self-awareness, decision-making ability, language capacity, emotions, sensory experiences, and so on.
Furthermore, if the right amounts of brain matter atrophy from standing too close to the ol' Fender stack or you suffer from dementia, you might lose some of your natural language understanding (NLU). You might say, "Sorry, I don’t understand. Can you please repeat that?" like a crappy 2018 chatbot.
The point I’m after is: the brain is a complex system from which consciousness seems to have emerged.
Again, we don’t fully understand how.
Furthermore, earlier versions of the brain might have been mere sequential symbol manipulators. At some point – we know not when – the wetware in our skulls evolved to a level of complexity at which, BOOM, consciousness, qualia, and understanding came online. Sure, maybe not all at once, but gradually and then suddenly, like an acid trip.
Next, consider GPT-4 and LaMDA.
The only thing I know of that resembles the sort of complex neural situation described above is the ocean of parameters and zetta-scale computing associated with the AI of today and the near future. It's different in important ways. But because both seem to give rise to emergent properties, it's fair (and responsible) to wonder whether GPT and LaMDA are on a similar continuum, at least in some senses.
This observation should give us pause before parroting the no-longer-pertinent trope:
"No, AI absolutely can't be conscious because of Searle's Chinese Box, you lepton."
The Chinese Room thought experiment may have outlived its usefulness for understanding today's AI and its limits.
At some point, either 1) we must accept that AI has gotten up and left the Chinese Room, or 2) we must admit that we are mere automatons, too, and only THINK that we are thinking, fools that we are, a bunch of Chinese Rooms in shoes and socks, mindlessly eating boiled carrots and so on.
The former framing – that AI could eventually be something more – is how I lean. If it can't be something more, perhaps we as humans are something less.
In that light, the belief that AI can be something more is preferable.
It’s also more plausible, which is nice.
For discussion
I'm eager to hear your thoughts. The playful tone was meant to help the medicine go down quickly and effortlessly.
I first want to steelman my position: GPT-4 and a human brain are both "partial" black boxes. Their inner workings are not fully understood, and it's difficult to predict how either will behave in certain situations, but easy in others.
It’s not my intention to provide evidence of AI consciousness. Instead, I’m merely trying to argue that it's possible. This may seem trivial; after all, many things are possible. So what?
Consider that the Chinese Room – while presented as a thought experiment and not a proof – seems to argue the impossibility of conscious AI via an airtight piece of logic. I believe this logic is misapplied to LLMs.
If I can get you to concede that today's AI has crossed into a realm of complexity that is becoming partially unexplainable, that's all I'm after.
From that foothold, we can build a case: if some AI activity or process has wandered off our cognitive radar and is currently unexplainable, then we no longer have the luxury of being sure about what's going on within these models, or about whether forms of consciousness can emerge from them.
If AI’s emergent capabilities don’t evoke a sense of awe and humility around what's possible with complex systems, that's a mistake. And if it doesn't have us drawing parallels between AI and emergent consciousness in brains — something we also don't fully understand — that's a missed opportunity.
Don't let the Chinese Room concept prevent you from wondering if machines can understand. That said, be on guard for the first naiveté – namely, thinking that the chatbot you're talking to has a humanlike internal experience. It almost certainly doesn’t.
The key here is simply not to rule out that some form of consciousness could emerge in complex systems such as LLMs.
Lastly, if any of this leads you to believe I'm trying to advance evidence that AI is sentient now or likely will be in the future, then that's a mistaken interpretation of what I have written.
I await your comments. (This article has been shared on multiple subreddits, and I enjoyed the discourse that took place there. My hope is to also make this page a destination with its own community of like-minded people.)
If you enjoy this content, please consider supporting my Substack. The funds will help me invest more time into researching and sharing my thoughts and discoveries, and provide me with the freedom to stay honest and rigorous; to say what many won’t.
I commit to giving you more contributions on AI, economics, UBI, the capitalist/socialist false dichotomy, what the billionaires are planning, where this crazy train of a world is heading, and what YOU can do to fight for humanity’s wellbeing.
If you have a topic you’d like me to cover, please let me know. From the trenches, from the front lines, until next time, thanks for your time and thoughts.