Brains Are Nothing Like Computers
Consciousness, computationalism, and Penrose-Lucas: Why I'm so skeptical.
Prologue
Consciousness is our most immediate experience and the only one with a confounding inside/outside duality. We see the consciousness of others from the outside but our own from the inside. That there ‘is something it is like’ to be human, to experience ourselves from the inside, turns out to be a deep — and possibly ineffable — mystery. One challenging enough to be called “the hard problem.”
On my WordPress blog, over a span of about a dozen years, I wrote my fair share of posts pondering the possibilities.1 My skepticism about computationalism was a central topic through many of those years, though I didn’t start off skeptical. As a lifelong science fiction fan, I assumed robots would happen eventually, with all that entails about machine consciousness. I loved the Isaac Asimov robot novels. I figured Commander Data would come along sooner or later.
I’m not sure exactly when I read The Emperor’s New Mind (1989), by Roger Penrose, but from where I was living then I can place it as the mid-1990s. I can say it took me over a year to absorb — Penrose isn’t an easy read.2 When I started my blog in 2011, his message still hadn’t fully hit me. I still assumed robots.
I think it was more in online discussions than in my own posts that my opinion began to shift. I have an unfortunate tendency to be both pedantic and contrary, so when others become fulsome about the infamous singularity3 or downloading minds to computers, it gives me a bad case of the “yeah, buts.” I want to tap the brakes a little when people think it’s all downhill.
Over time my own objections connected with Penrose’s as well as with some perceived technological limits. I found my skepticism growing. While current Large Language Models (such as GPT-4) are giving me some food for thought, I remain metaphorically from Missouri.4
Discussing and writing about consciousness has this in common with quantum mechanics5: both are deep, fundamental, unsolved mysteries. Both stoke our imaginations, spawning myriad ideas about what it all means. Everyone has a theory, and there is rarely concordance.
One moves on, as one does, and writing about (and mostly friendly wrestling over) consciousness and computationalism receded in the rearview mirror.
Now I find myself on Substack and enjoying some great posts about theories of consciousness by Suzi Travis on When Life Gives You a Brain. A clearly written and accessible account from a professional — highly recommended for those interested in the topic. More to the point of this post, I’ve read many other interesting posts from bloggers writing along similar lines.
When I found myself typing similar comments on different blogs, I thought maybe I should write a position paper I could just point to. Bonus: I feel more comfortable being long-winded (er, detailed) here than in someone’s comment section.
Computationalism
The obvious starting point is defining what I mean by consciousness and, more importantly here, computationalism. For the former, at least for now, I’ll go with a practical everyday definition — it’s what most people think it is.6 That leaves computationalism, and now it gets interesting.
I define weak computationalism as the view that some digitally computed implementation could convincingly demonstrate consciousness. My initial yardstick is a Rich Turing Test (RTT) — the same as a Turing Test, but involving prolonged and detailed conversations over an extended period. If I can converse with a machine for a month and remain convinced “someone is home,” then I’m not sure I care whether it’s “truly” conscious (whatever that means); I’ve found a friend.7
I have some reserved sympathy for weak computationalism. There are several “flavors” of weak computationalism, at least one of which I have a harder time being skeptical of (but do, in fact, remain so). I’ll explore these below.
I define strong computationalism to be the assertion that the brain is a computer (in the Church-Turing sense), which means there exists some algorithm capable of running a human mind. Concomitant with this is the ability to run that algorithm on machines other than brains. Specifically: minds can run on ordinary computers.
I have no sympathy for this view, only objections, some of which seem strong enough to falsify the idea, at least for me. For now, suffice to say I agree with Penrose. I don’t believe the brain/mind system is algorithmic.
Getting back to weak computationalism (just computationalism from now on8), there are at least three kinds:
Emulation
Simulation
Replication
Emulation tries to capture the function of the brain without regard to its physical or logical structure. It’s supported by a functionalist view of mind (which is similar to, but not quite the same as, a strong computationalist view).
I’m not a functionalist and never saw much hope for emulation. The idea comes from an earlier era of AI. That said, the Large Language Models (LLMs) of today are a different form of emulation I need a fourth category for. The roots go back to the McCulloch-Pitts artificial neuron of 1943 (and the Perceptron that followed in the late 1950s). LLMs use software to emulate a (very simple but huge) abstract model that resembles aspects of the neural network of a brain. It’s a functional approach, but at the level of neurons.
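For the code-minded, here is a toy sketch (in Python, with made-up weights and inputs) of the abstract unit these networks scale up: a weighted sum pushed through a nonlinearity. An LLM is, very roughly, billions of these wired together and tuned by training.

```python
import math

def artificial_neuron(inputs, weights, bias):
    """One unit of the abstract model: a weighted sum plus a nonlinearity."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))   # sigmoid squashes the result into (0, 1)

# Toy example with invented numbers -- purely illustrative.
print(artificial_neuron([0.5, 0.1, 0.9], [0.8, -0.4, 0.3], bias=0.1))
```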
Simulation tries to capture the physical brain the same way we digitally simulate a heart or kidney. Simulation doesn't care about a putative mind algorithm — the compute is just about the physics (essentially a very detailed form of finite element analysis).
I can't account for why simulation wouldn't work other than the observation that simulated earthquakes don't knock down buildings. I think it will ultimately depend on whether consciousness is in the output or in the process. If it's in the output, simulation might work.
I use the analogy of laser light. It emerges from certain materials in a certain configuration. We can simulate how laser light emerges very precisely, but our simulation cannot emit actual light; only the physical materials can. If consciousness is in the emerging light, in the physical process itself, then there seems little hope for computational simulations.
An important question is how granular the simulation needs to be. Cellular, for sure. Molecular? Atomic? Quantum? At what point can we ignore the lower levels?
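To give a feel for what even the coarsest, cellular-level answer involves, here is a toy sketch of “just computing the physics” for a single neuron: a leaky integrate-and-fire model with invented constants and simple Euler integration. A serious simulation would need something far more detailed than this for each of roughly 86 billion neurons and their synapses.

```python
# Toy physics-level simulation: one leaky integrate-and-fire neuron.
# All constants are illustrative, not biologically tuned.
dt, tau = 0.1, 10.0                               # ms per step, membrane time constant (ms)
v_rest, v_thresh, v_reset = -65.0, -50.0, -65.0   # millivolts
drive = 20.0                                      # constant input, as a steady-state push (mV)
v = v_rest

for step in range(1000):                          # about 100 ms of simulated time
    v += ((v_rest - v) + drive) / tau * dt        # Euler step of the membrane equation
    if v >= v_thresh:                             # threshold crossed: the neuron spikes
        print(f"spike at t = {step * dt:.1f} ms")
        v = v_reset                               # and resets
```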
But such a simulation might be of a comatose mind, or a mind filled with digital static, or an insane mind, or any of a number of possibilities. There is a bullseye on the target, a working mind, but there are far more ways to miss it, and when has software ever worked right, anyway?
Another objection is that simulations (and LLM training) are fiendishly power hungry (in the electrical sense; we hope not in the social sense). Not just power hungry but incredibly big in terms of data. Simulating the physics of a system requires a lot of computation.9
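A very rough back-of-envelope estimate shows why. The neuron and synapse counts below are ballpark figures commonly cited; the update rate and per-synapse cost are pure assumptions for illustration.

```python
neurons = 8.6e10              # roughly 86 billion neurons (commonly cited estimate)
synapses_per_neuron = 7e3     # order-of-magnitude figure, often quoted in the thousands
synapses = neurons * synapses_per_neuron          # on the order of 10^14 connections

updates_per_second = 1e3      # assume a ~1 kHz update rate (an assumption, not a fact)
flops_per_update = 10         # assume ~10 floating-point operations per synapse update

total = synapses * updates_per_second * flops_per_update
print(f"~{total:.0e} FLOP/s for even a coarse synapse-level model")
# Prints ~6e+18 FLOP/s -- exascale territory, before modeling any finer detail.
```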
Replication tries to capture the structure of the brain on the assumption function follows form. Isaac Asimov’s robots, as well as Commander Data, are both of this type. Rather than ordinary computers, positronic brains (to use Asimov’s term) look like human brains structurally (but are made from inorganic materials). In this case, not only is the hardware special, but it’s almost certainly analog rather than digital.
I think replication has the best chance of working. It seems reasonable that something in the shape and function of a brain, regardless of composition, would act like a brain. Call it functionalism on a fine-grained level.10 The alternative is that biological composition matters, a position I find hard to defend.
Something that might be important for true machine consciousness is fuzzy thinking and forgetfulness. These are self-evident in our experience, but what if they’re instrumental to consciousness? What if sleep or dreaming are important? Some believe an actual body, with all that entails, is necessary. If consciousness evolved to let us navigate the physical world, how important might that physical world be as a foundation for consciousness?
Currently, we’re a long way from any of the above methods working, and it’s impossible to predict future developments. LLMs have made great strides, enough to dent my skepticism a little, but I think they’re still not close to being AGI.
In closing, some words about two common thought experiments associated with consciousness that are often mentioned in blog posts and discussions:
Searle’s Room
In 1980, John Searle gave us a classic conundrum for computationalism in his paper, “Minds, Brains, and Programs” — which introduced “the Chinese Room.” His argument wasn’t entirely new; it just became a meme. The idea has antecedents in older arguments, such as Leibniz’s Mill. (See also the SEP entry.)
His argument involves a clerk in a very large file room. The clerk’s job is to receive request messages (on one of those old-fashioned air tube things), process the request, and send a reply on its way (again, through a tube).
Standard bureaucratic task, but there’s a twist: Everything is in Chinese, and the clerk doesn’t know the language. This is where the giant file room comes in. The clerk, who is very fast, is able to match (“index”) the symbols on the request to a record in the files. That record contains the reply. All the clerk does is look up the record, copy the reply, and send it out.
That’s from the inside. From the outside, it’s a different story.
The requests the clerk handles come from Chinese-speaking people making enquiries of the Giant File Room (GFR). As far as they can tell, a person is replying to them: the Grand Friendly Responder (also GFR).
The arrangement is meant to highlight how the clerk has no understanding of the request or the reply. The operation is purely mechanical: a given set of symbols indexes a record somewhere in the file system. No understanding required. This is how computers work.
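The mechanical character of the room is easy to caricature in code. In this hypothetical sketch, the “files” are just a table keyed by the incoming symbols and the clerk is a lookup; the request/reply pairs are invented, and nothing in it understands anything.

```python
# The Giant File Room as a lookup table. The clerk never interprets the symbols;
# it only matches them against the files and copies out the stored reply.
files = {
    "你好吗？": "我很好，谢谢。",          # "How are you?" -> "I'm fine, thanks."
    "今天天气怎么样？": "今天天气很好。",   # "How's the weather?" -> "Nice today."
}

def clerk(request):
    record = files.get(request)   # pure indexing -- no understanding required
    return record if record is not None else "……"   # no matching record on file

print(clerk("你好吗？"))
```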
By extension, it suggests the human mind is nothing but a lookup system. That all our responses are just indexed by perceived requests. We are as mechanistic, as robotic, as the Chinese Room.
For details, see the original paper, various online sources, or this post I wrote in 2019.
I agree with the systems approach. The understanding the room has lies with the designer of the room — whoever created the index and replies.11
I think the analogy breaks when it comes to math questions, which are infinite. Consider indexing all possible addition questions for which the answer is 42. Then for all other numbers. Repeat for subtraction, multiplication, and so on. There is a countably infinite number of possible questions and answers, far more than any finite file room can hold.
There is no provision for the room to learn or do calculations, so I don’t think it’s a good analogy for human brains.
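That gap between filing answers and computing them shows up in a hypothetical contrast: a finite table only covers the questions someone filed in advance, while a trivial rule covers every case.

```python
# A finite file of addition questions prepared in advance...
filed_answers = {"2+2": "4", "40+2": "42", "41+1": "42"}
print(filed_answers.get("17+25"))     # None -- nobody filed this one

# ...versus actually doing the calculation, which handles any such question.
def add(question):
    a, b = question.split("+")
    return str(int(a) + int(b))

print(add("17+25"))                   # "42"
```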
Mary’s Room
In 1982, Frank Cameron Jackson gave us another popular conundrum in his paper “Epiphenomenal Qualia” — which introduced us to the horrific “Mary’s Room”.
In this horror show, Mary has been imprisoned all her life in a grayscale dungeon, and her jailers have taken great measures to ensure that Mary never sees color, especially red.12
Mary survives by learning everything there is to know about color, especially the mechanisms of human sight. Mary knows everything there is to know about color; she fully understands how the human brain responds to and processes it. But she has only ever seen shades of gray.
For details, see the paper, many online sources, or this blog post I wrote in 2019.
The question is whether Mary gains new knowledge when she finally escapes and experiences the world of color for the first time. Some argue that, because she knows everything there is to know about color, she does not gain new knowledge.
I disagree. It seems clear to me (and I’m not alone in this) that “knowledge about” is not the same as “knowledge of”. The former is objective, the latter is subjective. Mary gains the subjective experience of color, something the experiment explicitly denies her until she escapes.
In the future, I’ll pick up with Penrose’s ideas, what Stuart Hameroff contributed, and where Lucas comes in.
But that’s all for now. Until next time…13
Notes

Posts about consciousness on Logos con carne at WordPress.
But he’s still one of my favorite scientists and science authors. His books are worth the effort if you really want a deep dive into the subject. Which, me, hell yeah!
The thing about singularities is that they aren’t real. They aren’t physical. They’re places where the math breaks down.
Another source of many books and long online discussions.
Excluding neuroscientists and philosophers, who turn it into a whole thing, and people who think it’s all tiny marbles running in tiny channels.
Sadly, I know humans who don’t pass the RTT.
I’ll use strong computationalism to mean that kind.
This, incidentally, might turn out to be what falsifies Nick Bostrom’s idea that we’re overwhelmingly likely to be living in a simulation.
I call it structuralism — the structure of the brain is what matters.
I find that a lot of analogies break when you try to consider how they came to exist.
So, they can never let her see blood, which… seems a challenge.
What can I say, footnotes are like crack to me. Terry Pratchett is one of my Gods.
Comments

Great post! I'm not entirely sure I see the difference between structuralism and functionalism, though. Is it just that structuralism is essentially functionalism minus computationalism?
"Something that might be important for true machine consciousness is fuzzy thinking and forgetfulness. These are self-evident in our experience, but what if they’re instrumental to consciousness? What if sleep or dreaming are important? Some believe an actual body, with all that entails, is necessary. If consciousness evolved to let us navigate the physical world, how important might that physical world be as a foundation for consciousness?"
These are great questions. I can't see how we can simply wave away embodiment or interaction with the world—and other consciousnesses.
My main problem: If you're a functionalist and you create AI that meets your standards, you'd say you created consciousness...but others would disagree. There's no way to know which theory is correct; you would just be feeding your own assumptions that consciousness can be reduced to function. What bothers me most is not that I disagree with those assumptions (and I do), but when those assumptions aren't made explicit. I can't stand this sneaky trick when people start out by defining consciousness as phenomenal experience from the inside, a la Nagel's "what it's like", but then go on to address consciousness purely from the outside, without any sort of acknowledgement of the category error.
"Some argue that, because she knows everything there is to know about color, she does not gain new knowledge."
Thanks for bringing this up. Those who say that 'everything there is to know' must include phenomenal experience are agreeing with us that phenomenal experience contributes to knowledge. I have no problem with that!
I'm not sure what the wording is in the original thought experiment, but clearly whoever said "Mary knows everything there is to know about color" really meant "Mary knows everything SCIENCE tells us about color". The thought experiment is set up to exclude the phenomenal experience of color, leaving only what we know about color from the objective point of view—that's the point. Those who deny that Mary gains knowledge upon seeing the color believe the phenomenal experience of color in general gives us nothing whatsoever that can be called knowledge. (Otherwise they need to be prepared to make clear why Mary, in this particular instance, is being deceived about her experience. Which would be bizarre.)
And yet, without the phenomenal experience of color in general, there would be no scientific theory of color. We wouldn't know about color at all! From the scientific point of view, color is not an inherent property of matter, but exists only in perception. (In philosophy, this is what's called a 'secondary property'.) Color is a secondary property, which means it doesn't exist independent of minds. If you deny that perception or the experience of color contributes anything whatsoever to knowledge, you also deny the possibility of a scientific knowledge of color. See how that snake eats its own tail?
Wonderful post!
I'm honoured to play a small role in getting you back to writing and thinking more about consciousness. I really enjoyed this post -- so, I say -- write more!
I love reading about how people's views on consciousness morph over time -- you've gone from assuming robots (i.e. computationalism) to your current skepticism about them. But then you mention that LLMs have 'dented your skepticism a little'. I wonder why this is?
You mentioned a few things you think might be important for consciousness -- fuzzy thinking and forgetfulness, sleep and dreaming, embodiment. Sleep, dreaming, and embodiment don't seem to apply to LLMs (not unless we stretch those definitions beyond recognition), so is it "fuzzy thinking" that is putting that dent in your scepticism?