Digital Simulation
If we simulate a human brain accurately enough, does consciousness emerge?
This is the fourth post in a series exploring my skepticism about computationalism. Earlier posts are Brains Are Nothing Like Computers, Brains Are Not Algorithmic, and Digital Emulation.
Last time I looked at computer emulation — trying to duplicate consciousness at the functional level of mental states. This time I look at computer simulation — trying to duplicate consciousness at the functional level of brain biochemistry and physics.
Simulation studies a system by designing a numerical model that represents the physical system, plus a suite of computer functions to operate on that model. These comprise, respectively, the data and the code of computer software. The software design is arbitrary but constrained by the need for valid output.
Famously, “garbage in, garbage out”,1 but the corollary is ‘good data in, good data out’ (assuming the software isn’t buggy). The inputs in this case are the model and the data we provide as “real world” sense information. The output we expect, once the computer functions act on those inputs, is data indicating a conscious mind.
Let’s drill into exactly what that means. For a moment, assume we have a good model of a brain and the computer functions to bring that “brain” to life — to animate the physics and biology well enough to simulate a living brain. We also need to provide input numbers representing sight, sound, smell, taste, and touch data. There are other, more subtle senses — proprioception, for instance. Some believe consciousness depends on being physically embodied, so input data representing that may be required.2
A Virtual Reality
All this sense data assumes a virtual reality — a separate model — to act as its source. The sight data, for instance, arises from the virtual body (or at least virtual eyes) looking at objects in the virtual reality model. Likewise, all other sense data. Even the sense of being embodied is of being embodied in some 3D physical space.
Virtual reality models today are already close to what’s needed here. A big remaining challenge is converting information from these models into sense data for a (virtual) brain. We have some understanding of how eyes, ears, and other sensory organs generate nerve impulses, but we’re far from the complete dictionary that would let us generate high-resolution inputs for all senses. Part of the problem is scope. We need a unified set of signals, one for every neural path to the brain. That’s a lot of signals, and they all need to act in sync.
Keep in mind that in all cases we’re talking about numbers. In the famous “brain in a jar” scenario, we would need to generate unified, physical nerve impulses for each nerve. That task involves electrical signals and a physical transducer to turn them into chemical, ion-based nerve signals. In a computer simulation, we need a numerical model of all this: data structures and numbers to represent both the real-world data and the nerve signals.
This also applies to the outputs. A “brain in a jar” generates physical nerve signals that a transducer would convert to electrical signals for the system simulating the brain’s environment. That system would need to convert those to actions in that virtual environment. Simply shifting the virtual eyes implies needing to shift what those eyes are “seeing”, let alone if the brain commands the body to move. In that case, much more needs to happen than merely shifting the view.
Again, in a brain simulation, the outputs are just numbers that represent physical things. Muscle movements, mainly. And the presumption — if at least weak computationalism works — is that those numbers would represent a conscious being moving around in their virtual environment. Or at least something indistinguishable from one. An entity capable of passing a Rich Turing Test.3
So, we need numerical models to represent:
A brain at a useful level of detail.
A virtual reality for that brain to exist in.
Possibly a virtual body for the brain to live in.
Input sensory data from the environment to the brain.
Output muscle data from the brain to the environment.
All at an appropriate level of detail. And don’t forget we need the computer functions to animate them.
A Virtual Brain
With that, let’s turn back to the central players here, the numerical brain model and the computer code to animate it. We don’t need to think about implementing consciousness, as such. We just need an accurate model of the brain. By simulating the brain’s function at a sufficiently low physical level, the hope is that consciousness emerges and is reflected in the output numbers.
A brain simulation like this is similar to a simulation of a heart or lung.4 It’s just a matter of replicating the physical biology accurately enough. Brains are a bit more complicated because they have so many important inputs and outputs in addition to the blood and other biological requirements of any organ.
They’re also more complicated in the resolution needed to faithfully simulate brain function. In a heart or lung simulation, it would probably cost nothing important to simulate the organ at the scale of clumps of cells — perhaps on the order of dozens to hundreds of cells. The organ’s function at “clump” resolution is likely indistinguishable from its function at the cellular level.5
This doesn’t seem likely for the brain. I expect that not only does every brain cell matter, but every synapse on every brain cell matters. An adult brain has roughly 500,000,000,000,000 of them.6 We should pause here to appreciate how large a brain model needs to be.
Brain Model Size
How many bits does it take to model a synapse? I recall an article in which a neuroscientist said the synapse is the most complicated biological machine we know of. There are many different neurotransmitter chemicals and even more receptors for them. But we may be able to save space by using tables to hold common data, which means individual synapses can use short indexes into those tables.7
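A toy sketch of that table scheme (all names and field choices are hypothetical illustrations, not a real synapse model): shared properties live in one table, and each synapse packs a short index to its table entry alongside its own state in a single 64-bit record.

```python
# Toy sketch of table-based compression for synapse properties.
# The table entries and bit layout are illustrative assumptions.

# A shared table of synapse "profiles" — each entry might describe a
# combination of neurotransmitter, receptor type, and response curve.
# With at most 256 entries, an index into this table fits in 8 bits.
profile_table = [
    {"transmitter": "glutamate", "receptor": "AMPA", "weight_scale": 1.0},
    {"transmitter": "GABA", "receptor": "GABA-A", "weight_scale": -1.0},
    # ... up to 256 shared profiles
]

def synapse_record(profile_index: int, strength: int) -> int:
    """Pack a synapse into 64 bits: an 8-bit profile index in the top
    byte, with per-synapse state (here, a strength value) below it."""
    assert 0 <= profile_index < 256
    return (profile_index << 56) | (strength & ((1 << 56) - 1))

def unpack(record: int) -> tuple[int, int]:
    """Recover the profile index and per-synapse state."""
    return record >> 56, record & ((1 << 56) - 1)

rec = synapse_record(1, 12345)
```

The point of the scheme is that a 1000-bit description shared by millions of synapses is stored once, while each synapse pays only the 8-bit index.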
This may be an underestimate — I’m pretty sure it’s not an overestimate — but let’s imagine it takes 64 bits to model a synapse at the accuracy we require. Then, assuming an adult brain with half a quadrillion synapses:

64 bits × 500,000,000,000,000 synapses = 8 bytes × 5×10¹⁴ = 4×10¹⁵ bytes

That should take your breath away just a little. Our model of just the synapses requires 4 petabytes of data (four quadrillion bytes). One-terabyte drives have become fairly common. Four petabytes amounts to four thousand one-terabyte drives.
Those synapses are all interconnected — the brain’s connectome — and the short version is that the connection map doubles or triples the size of the model. We can define a one-way map where each synapse “points” back to the neuron whose axon connects to it. There are roughly 100 billion neurons, so the “pointer” must be capable of holding any number from 1 to 100 billion. That requires 37 bits.8 That’s too big for a 32-bit number and a bit small for the next logical size, 64 bits. If memory were a big concern, we might use 40 bits, but if speed were more important, the native 64-bit size makes more sense.
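The pointer-width arithmetic is easy to check — this is just the footnote’s log₂ computation:

```python
import math

NEURONS = 100_000_000_000  # roughly 10^11 neurons in an adult brain

# Bits needed for a pointer that can address any one neuron.
bits_needed = math.ceil(math.log2(NEURONS))
print(bits_needed)  # 37 — too big for 32 bits, well under 64
```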
So, a one-way connection map requires a 64-bit number for every synapse, hence the doubling. If we want a two-way connection map (which has a lot of advantages), then we triple the size. A two-way map lets us easily get from neuron to synapse through its axon, or from the synapse back to the neuron. We’re already in seriously large memory territory regardless, so let’s go two-way.
Now our static memory map, our connectome, is a whopping 12 petabytes.
Note that we have not yet modeled the neuron, which integrates all its synapses into a decision to fire or not fire. It turns out to be a drop in the bucket. Assume we can model a neuron with 64 bits (again using tables for common properties). With 100 billion neurons, that’s only 800 gigabytes — not even a terabyte, and a small fraction of the synapse and connection model.
So, our model weighs in at 12,000,800,000,000,000 bytes.9
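The running totals above can be tallied in a few lines — a back-of-the-envelope check, not a serious memory plan:

```python
SYNAPSES = 500_000_000_000_000  # half a quadrillion synapses
NEURONS  = 100_000_000_000      # 100 billion neurons
RECORD   = 8                    # 64 bits = 8 bytes per record

synapse_data = SYNAPSES * RECORD      # 4 petabytes of synapse state
two_way_map  = 2 * SYNAPSES * RECORD  # forward + backward pointers: 8 more
neuron_data  = NEURONS * RECORD       # 800 gigabytes of neuron state

total = synapse_data + two_way_map + neuron_data
print(f"{total:,} bytes")  # 12,000,800,000,000,000 bytes
```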
Now we need computer code to operate on the model.
Brain Model Function
In some regards, the functionality we need is simple. We need to duplicate the physical behavior of synapses and neurons. The rules of physics apply throughout the brain, so we only need a single set of rules to apply throughout our model.
We might also edge away from pure physics and try to simulate synapse and neuron behavior more in terms of “black boxes” with known outputs given certain inputs. It’s not clear how detailed our simulation needs to be to capture consciousness. It does seem we need at least the synapse level, but what about other aspects of the brain?
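As an illustration of the “black box” approach, here is a minimal leaky integrate-and-fire neuron — a standard simplification from computational neuroscience, offered only as an example of trading biochemistry for input/output behavior, not as a claim about what level of detail consciousness requires. All parameter values are illustrative.

```python
# Minimal leaky integrate-and-fire neuron: a "black box" mapping
# synaptic inputs to fire/no-fire decisions without any biochemistry.

def step(potential: float, inputs: list[float],
         leak: float = 0.9, threshold: float = 1.0) -> tuple[float, bool]:
    """Advance the neuron one time step.

    The membrane potential decays toward zero (leak), accumulates the
    synaptic inputs, and fires (resetting to zero) when it crosses
    the threshold.
    """
    potential = potential * leak + sum(inputs)
    if potential >= threshold:
        return 0.0, True   # fire and reset
    return potential, False

# Drive the neuron with a constant trickle of input until it fires.
v, fired, steps = 0.0, False, 0
while not fired:
    v, fired = step(v, [0.2])
    steps += 1
```

A simulation at this level replaces half a quadrillion chemical machines with half a quadrillion arithmetic updates per time step — cheaper, but with no guarantee the discarded detail was dispensable.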
For instance, glial cells apparently affect mentation, and so does the myelin sheath that surrounds axons.10 What other details might be crucial for consciousness to emerge? Does the EMF environment of the brain play a role? Does the packing of neurons — and hence their local effects on each other — matter?
If consciousness depends on a specific set of circumstances — perhaps a Yin-Yang balance between order and chaos — then simulations may need to capture very low levels of reality. Perhaps as low as the quantum level.
If so, then conventional computers may never be sufficient, either at the practical level — due to the impossibly large models needed to simulate quantum mechanics — or perhaps even at the theoretical level. Lucas and Penrose might be right even when it comes to numerical simulation of physics.
Is There Dualism Here?
In the sense we visited last time with emulation, no. We’re not attempting to simulate higher brain function, so Gödel doesn’t apply in that sense. A numerical simulation is an abstraction by nature, so the duality that still exists between computer hardware and whatever that hardware is implementing doesn’t represent the disconnect it does with emulation.
The Bottom Line
What still applies are the issues of getting a simulation not just correct but bug-free. That is, the design has to be correct, and it must be implemented without errors. As anyone who uses complex software knows, both of those are challenges we’ve thus far been unable to meet.11
Assuming we can accomplish a sufficiently detailed simulation, I see several possible outcomes. We might end up with a comatose brain — the meat lives, but has no mind. Perhaps synaptic activity results in pure static, with no thread of awareness. Perhaps we get decodable outputs, but they reveal an incoherent mind, or an insane one filled with hallucinations. Perhaps the simulation seems to work at first but rapidly degrades as errors due to chaos accumulate. Maybe we get a somewhat coherent mind, but an infant one, or a very stupid one.
Or maybe we do manage to hit the bullseye and get a functioning mind. But I think it’s safe to say the compute resources required will be substantial. Unless vast advances are made in computing, computed minds will likely be rare and expensive.
On the other hand, there is the potential of scanning a living brain and simulating that scan, which would allow migration of humans into the virtual realm. But the technology and hardware — if it’s possible at all — are far in the future.
Until next time…
Which is true of all logic and math.
In fact, if embodiment is crucial to consciousness, brain simulations may require simulating other body parts.
A Turing Test over a long span of time. A month, at least.
Depending on the organ, there might be points requiring higher resolution, but I think clumping at the right level wouldn’t matter much in most organ simulations.
500 million-million, or 500 trillion.
For example, if some synapse property takes 1000 bits to describe accurately but is shared among many synapses, each synapse can have a short index (say 8 bits) into that table.
Because log₂ 10¹¹ = 36.5412…
Keep in mind this is a rough and minimal estimate.
Which isn’t formed until a person’s mid-20s (hence the drinking age restrictions), and people literally aren’t in their right minds until then.
Under some views, it is impossible to meet.