Brains Are Nothing Like Computers
Consciousness, computationalism, and Penrose-Lucas: Why I'm so skeptical.
Prologue
Consciousness is our most immediate experience and the only one with a confounding inside/outside duality. We see the consciousness of others from the outside but our own from the inside. That there ‘is something it is like’ to be human, to experience ourselves from the inside, turns out to be a deep — and possibly ineffable — mystery. One challenging enough to be called “the hard problem.”
On my WordPress blog, over a span of about a dozen years, I wrote a fair share of posts pondering the possibilities.1 My skepticism about computationalism was a central topic through many of those years, though I didn’t start off skeptical. As a lifelong science fiction fan, I assumed robots would happen eventually, with all that entails about machine consciousness. I loved the Isaac Asimov robot novels. I figured Commander Data would come along eventually.
I’m not sure exactly when I read The Emperor’s New Mind (1989), by Roger Penrose, but from where I was living at the time I can place it in the mid-1990s. It took me over a year to absorb — Penrose isn’t an easy read.2 When I started my blog in 2011, his message still hadn’t fully hit me. I still assumed robots.
I think it was more in online discussions than in my own posts that my opinion began to shift. I have an unfortunate tendency to be both pedantic and contrary, so when others become fulsome about the infamous singularity3 or downloading minds to computers, it gives me a bad case of the “yeah, buts.” I want to tap the brakes a little when people think it’s all downhill.
Over time my own objections connected with Penrose’s as well as with some perceived technological limits. I found my skepticism growing. While current Large Language Models (such as GPT-4) are giving me some food for thought, I remain metaphorically from Missouri.4
Discussing and writing about consciousness has this in common with quantum mechanics5: both are deep, fundamental, unsolved mysteries. Both stoke our imaginations into myriad diverse ideas about what it all means. Everyone has a theory, and there is rarely concordance.
One moves on, as one does, and writing about (and mostly friendly wrestling over) consciousness and computationalism receded in the rearview mirror.
Now I find myself on Substack and enjoying some great posts about theories of consciousness by Suzi Travis on When Life Gives You a Brain. A clearly written and accessible account from a professional — highly recommended for those interested in the topic. More to the point of this post, I’ve read many other interesting posts from bloggers writing along similar lines.
When I found myself typing similar comments on different blogs, I thought maybe I should write a position paper I could just point to. Bonus: I feel more comfortable being long-winded (sorry, detailed) here than in someone’s comment section.
Computationalism
The obvious starting point is defining what I mean by consciousness and, more importantly here, computationalism. For the former, at least for now, I’ll go with a practical everyday definition — it’s what most people think it is.6 That leaves computationalism, and now it gets interesting.
I define weak computationalism as the claim that some digitally computed implementation could convincingly demonstrate consciousness. My initial yardstick is a Rich Turing Test (RTT) — same as a Turing Test but involving prolonged and detailed conversations over a period of time. If I can converse with a machine for a month and remain convinced “someone is home,” then I’m not sure I care whether it’s “truly” conscious (whatever that means); I’ve found a friend.7
I have some reserved sympathy for weak computationalism. There are several “flavors” of weak computationalism, at least one of which I have a harder time being skeptical of (but do, in fact, remain so). I’ll explore these below.
I define strong computationalism to be the assertion that the brain is a computer (in the Church-Turing sense), which means there exists some algorithm capable of running a human mind. Concomitant with this is the ability to run that algorithm on machines other than brains. Specifically: minds can run on ordinary computers.
I have no sympathy for this view, only objections, some of which seem strong enough to falsify the idea, at least for me. For now, suffice to say I agree with Penrose. I don’t believe the brain/mind system is algorithmic.
Getting back to weak computationalism (just computationalism from now on8) there are at least three kinds:
Emulation
Simulation
Replication
Emulation tries to capture the function of the brain without regard to its physical or logical structure. It’s supported by a functionalist view of mind (which is similar to, but not quite the same as, a strong computationalist view).
I’m not a functionalist and never saw much hope for emulation. The idea comes from an earlier era of AI. That said, the Large Language Models (LLMs) of today are a different form of emulation that really needs a fourth category. Their roots go back to the McCulloch-Pitts artificial neuron of 1943 and the Perceptron of the late 1950s. LLMs use software to emulate a (very simple but huge) abstract model that resembles aspects of the neural network of a brain. It’s a functional approach, but at the level of neurons.
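To give a sense of how simple the abstract model is, here’s a minimal sketch (mine, in Python, with made-up numbers) of the kind of artificial neuron such networks are built from: a weighted sum of inputs pushed through a nonlinearity. Modern LLMs arrange billions of units like this (plus a lot of extra machinery), but this is the core idea.

```python
import math

def artificial_neuron(inputs, weights, bias):
    """One abstract 'neuron': a weighted sum of inputs plus a bias,
    squashed through a nonlinearity (here, the logistic sigmoid)."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))  # output between 0 and 1

# Made-up example values: three inputs, three weights, one bias.
print(artificial_neuron([0.5, 0.1, 0.9], [0.8, -0.3, 0.4], bias=0.1))
```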
Simulation tries to capture the physical brain the same way we digitally simulate a heart or kidney. Simulation doesn't care about a putative mind algorithm — the compute is just about the physics (essentially a very detailed form of finite element analysis).
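To make “the compute is just about the physics” concrete, here is a toy sketch (my own illustration, not anyone’s actual brain simulator) of how such simulations typically work: write down a physical model, in this case a leaky integrate-and-fire neuron membrane with made-up constants, and step its differential equation forward in small time slices. A serious simulation would do something like this for every compartment of every cell.

```python
def simulate_lif(input_current=1.5, dt=0.1, steps=200,
                 tau=10.0, v_rest=0.0, v_threshold=1.0):
    """Toy leaky integrate-and-fire neuron; all constants are illustrative."""
    v = v_rest
    spike_times = []
    for step in range(steps):
        # Euler step of the membrane equation: dV/dt = (-(V - V_rest) + I) / tau
        v += dt * (-(v - v_rest) + input_current) / tau
        if v >= v_threshold:               # threshold reached: the neuron "fires"
            spike_times.append(step * dt)  # record when
            v = v_rest                     # and reset the membrane
    return spike_times

print(simulate_lif())  # a handful of regularly spaced spike times
```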
I can't say why simulation wouldn't work, other than the observation that simulated earthquakes don't knock down buildings. I think it will ultimately depend on whether consciousness is in the output or in the process. If it's in the output, simulation might work.
I use the analogy of laser light. It emerges from certain materials in a certain configuration. We can simulate how laser light emerges very precisely, but our simulation cannot emit actual light; only certain physical materials can. If consciousness is like the emitted light, something in the physical process itself, then there seems little hope for computational simulations.
An important question is how granular the simulation needs to be. Cellular, for sure. Molecular? Atomic? Quantum? At what point can we ignore the lower levels?
But such a simulation might be of a comatose mind, or a mind filled with digital static, or an insane mind, or any number of other possibilities. The target has a bullseye, a working mind, but there are far more ways to miss it than to hit it. And when has software ever worked right, anyway?
Another objection is that simulations (and LLM training) are fiendishly power hungry (in the electrical sense; we hope not in the social sense). Not just power hungry but incredibly big in terms of data. Simulating the physics of a system requires a lot of computation.9
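A rough back-of-envelope calculation (my own numbers, order-of-magnitude only) shows why. The human brain has roughly 86 billion neurons and something like 100 trillion synapses. Storing just one 32-bit number per synapse, with nothing about geometry, chemistry, or dynamics, already runs to hundreds of terabytes:

```python
# Order-of-magnitude only; the real counts are debated.
synapses = 100e12      # ~100 trillion synapses (estimates vary widely)
bytes_per_synapse = 4  # a single 32-bit value each -- wildly optimistic

storage_bytes = synapses * bytes_per_synapse
print(f"{storage_bytes / 1e12:.0f} TB for one number per synapse")
# -> 400 TB, before simulating any chemistry, geometry, or time evolution
```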
Replication tries to capture the structure of the brain on the assumption that function follows form. Isaac Asimov’s robots and Commander Data are both of this type. Rather than ordinary computers, positronic brains (to use Asimov’s term) look like human brains structurally (but are made from inorganic materials). In this case, not only is the hardware special, but it’s almost certainly analog rather than digital.
I think replication has the best chance of working. It seems reasonable that something in the shape and function of a brain, regardless of composition, would act like a brain. Call it functionalism on a fine-grained level.10 The alternative is that biological composition matters, a position I find hard to defend.
Something that might be important for true machine consciousness is fuzzy thinking and forgetfulness. These are self-evident in our experience, but what if they’re instrumental to consciousness? What if sleep or dreaming are important? Some believe an actual body, with all that entails, is necessary. If consciousness evolved to let us navigate the physical world, how important might that physical world be as a foundation for consciousness?
Currently, we’re a long way from any of the above methods working, and it’s impossible to predict future developments. LLMs have made great strides, enough to dent my skepticism a little, but I think they’re still not close to being AGI.
In closing, some words about two common thought experiments associated with consciousness that are often mentioned in blog posts and discussions:
Searle’s Room
In 1980, John Searle gave us a classic conundrum for computationalism in his paper, “Minds, Brains, and Programs” — which introduced “the Chinese Room.” His argument wasn’t entirely new; it has antecedents in older arguments, such as Leibniz’s Mill. It just became a meme. (See also the SEP entry.)
His argument involves a clerk in a very large file room. The clerk’s job involves receiving request messages (on one of those old-fashioned air tube things), processing the request, and sending a reply on its way (again, through a tube).
Standard bureaucratic task, but there’s a twist: Everything is in Chinese, and the clerk doesn’t know the language. This is where the giant file room comes in. The clerk, who is very fast, is able to match (“index”) the symbols on the request to a record in the files. That record contains the reply. All the clerk does is look up the record, copy the reply, and send it out.
That’s from the inside. From the outside, it’s a different story.
The requests the clerk handles come from Chinese-speaking people making enquiries of the Giant File Room (GFR). As far as they can tell, a person is replying to them: the Grand Friendly Responder (also GFR).
The arrangement is meant to highlight how the clerk has no understanding of the request or the reply. The operation is purely mechanical: a given set of symbols indexes a record somewhere in the file system. No understanding required. This is how computers work.
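In software terms, the room is just a lookup table. A toy sketch (mine, with placeholder entries) makes the mechanical nature obvious: symbols go in, a canned record comes out, and nothing anywhere needs to understand either one.

```python
# A toy 'Chinese Room': pure lookup, zero understanding.
# These two entries stand in for Searle's (impossibly large) filing system.
reply_book = {
    "你好吗？": "我很好，谢谢。",        # "How are you?" -> "Fine, thanks."
    "今天天气如何？": "今天天气很好。",  # "How's the weather?" -> "It's nice today."
}

def clerk(request: str) -> str:
    # The clerk matches the symbols to a record and copies out its reply.
    return reply_book.get(request, "对不起，我不明白。")  # "Sorry, I don't understand."

print(clerk("你好吗？"))
```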
By extension, it suggests the human mind is nothing but a lookup system, that all our responses are just indexed by perceived requests. We are as mechanistic, as robotic, as the Chinese Room.
For details, see the original paper, various online sources, or this post I wrote in 2019.
I agree with the systems approach. The understanding the room has lies with the designer of the room — whoever created the index and replies.11
I think the analogy breaks down when it comes to math questions, of which there are infinitely many. Consider indexing all possible addition questions for which the answer is 42. Then do the same for every other number. Repeat for subtraction, multiplication, and so on. The possible questions and answers are endless (a countable infinity, but an infinity nonetheless); no finite file room could hold them.
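Even a tiny, finite slice of that space sinks a lookup table. A quick sanity check (my own arithmetic): restricting the room to addition questions “a + b = ?” where each number has at most ten digits already requires 10^20 records, and that’s before subtraction, multiplication, or anything more interesting.

```python
# Addition questions "a + b = ?" with a and b each up to ten digits:
pairs = (10.0 ** 10) ** 2
print(f"{pairs:.0e} possible questions")  # 1e+20 entries

# At a (generous) 100 bytes per filed record, the room would need:
print(f"{pairs * 100 / 1e21:.0f} zettabytes of files")  # ~10 ZB
```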
There is no provision for the room to learn or do calculations, so I don’t think it’s a good analogy for human brains.
Mary’s Room
In 1982, Frank Cameron Jackson gave us another popular conundrum in his paper “Epiphenomenal Qualia” — which introduced us to the horrific “Mary’s Room”.
In this horror show, Mary has been imprisoned all her life in a grayscale dungeon, and her jailers have taken great measures to ensure that Mary never sees color, especially red.12
Mary survives by learning everything there is to know about color, especially the mechanism of human sight. She fully understands how the human brain responds to and processes color. But she has only ever seen shades of gray.
For details, see the paper, many online sources, or this blog post I wrote in 2019.
The question is whether Mary gains new knowledge when she finally escapes and experiences the world of color for the first time. Some argue that, because she knows everything there is to know about color, she does not gain new knowledge.
I disagree. It seems clear to me (and I’m not alone in this) that “knowledge about” is not the same as “knowledge of”. The former is objective, the latter is subjective. Mary gains the subjective experience of color, something the experiment explicitly denies her until she escapes.
In the future, I’ll pick up with Penrose’s ideas, what Stuart Hameroff contributed, and where Lucas comes in.
But that’s all for now. Until next time…13
Posts about consciousness on Logos con carne at WordPress.
But he’s still one of my favorite scientists and science authors. His books are worth the effort if you really want a deep dive into the subject. Which, me, hell yeah!
The thing about singularities is that they aren’t real. They aren’t physical. They’re places where the math breaks down.
Another source of many books and long online discussions.
Excluding neuroscientists and philosophers, who turn it into a whole thing, and people who think it’s all tiny marbles running in tiny channels.
Sadly, I know humans who don’t pass the RTT.
I’ll use strong computationalism to mean that kind.
This, incidentally, might turn out to be what falsifies Nick Bostrom’s idea that we’re overwhelmingly likely to be living in a simulation.
I call it structuralism — the structure of the brain is what matters.
I find that a lot of analogies break when you try to consider how they came to exist.
So, they can never let her see blood, which… seems a challenge.
What can I say, footnotes are like crack to me. Terry Pratchett is one of my Gods.