It is sometimes asserted — and in A Computational Foundation for the Study of Cognition (1993), David Chalmers argues in detail — that there is an identity between the physical states of a computer simulating a mind and the physical states of a brain (which is presumed to have a mind). This identity of physical states, the argument goes, means the computer is as conscious as we are.
Chalmers refers to a “topological invariance” between a series of recorded computer states and a series of recorded brain states at a resolution sufficient to capture all significant details. Essentially, it is a recording of human consciousness with a resolution fine enough that playing back these brain states also plays back the recorded consciousness.1
Already I have some issues. The idea that brains have distinct recordable states needs some examination. This is central to Chalmers’s thesis, but I’m not sure the idea survives a close look.
Brain States
The assumption is that the brain takes recognizable steps that potentially can be recorded as snapshots. A further assumption is that a sequential series of these snapshots captures the stream of consciousness the same way single frames in a movie capture motion.2
This requires an identifiable specific brain state Sₙ₊₁ that follows from (is fully determined by) a previous specific state Sₙ.
An important consideration is whether each brain state Sₙ must contain all the information necessary to compute state Sₙ₊₁. This already puts us in deep water. A full-fidelity recording of the brain states underlying consciousness need not contain the information necessary to predict the next state, just as a single movie frame does not predict the next frame. The predictability lies in the physics of the scene being filmed.3
The brain itself — through its physics — determines each successive state. The premise here is only that we can identify and record each state in sufficient detail to fully capture consciousness. Here we are not concerned with trying to predict or generate successive frames. We are not trying to compute consciousness.
The first problem is identifying specific brain states. The brain is both asynchronous and analog. Consider trying to record hundreds of billions of fallen raindrops, each following — and changing — the shape of the landscape. Every molecule of water follows a continuous path as it continuously interacts with nearby molecules of water, land, and air. In theory, the system does have moment-to-moment states, but picking them out requires slicing this continuous behavior into digital snapshots like frames in a movie.
Reality is fast, so we need lots of snapshots every second (a high “frame rate”), and each snapshot must capture every aspect of the water running over the landscape (a high resolution). So, lots of snapshots with lots of data.4
Further, each snapshot captures a freeze frame of an analog system, so if we want a numeric representation (which we do), we also have the issue of converting analog reality into numbers.5
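For concreteness, here is a minimal Python sketch of what that analog-to-digital conversion involves: each continuous value gets rounded to the nearest of a finite number of discrete levels, and whatever detail falls between levels is lost. The 8-bit depth and the [-1, 1] range are arbitrary assumptions for illustration.

```python
import math

# Minimal sketch of analog-to-digital quantization: a continuous value
# is snapped to the nearest of 2**bits discrete levels spanning [-1, 1].

def quantize(value, bits=8):
    levels = 2 ** bits
    step = 2 / levels                      # spacing between adjacent levels
    return round(value / step) * step      # snap to the nearest level

analog = math.sin(0.123456789)             # some continuous "analog" reading
digital = quantize(analog)
error = abs(analog - digital)              # the detail lost between levels
print(f"analog={analog:.9f} digital={digital:.9f} error={error:.2e}")
```

The quantization error is bounded by half a step, so more bits per sample buys a finer approximation — but the representation is always an approximation.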
In fact, the assumption that identifiable snapshots exist essentially asserts computationalism, so the argument seems to beg the question.6
But let’s assume it’s possible to record a list of brain states. This involves a massive amount of data. A human brain has roughly 500 trillion synapses. Capturing only the synapse state almost certainly is not a fine enough resolution, but let’s take it as a crude minimal estimate. We need to capture at least 500 trillion individual states (raindrops).
Our framerate has to be much higher than the 24 frames per second of movies. Synapses can operate on time scales down to about 100 nanoseconds, which corresponds to a frequency of 10,000,000 Hz. The Nyquist sampling theorem tells us we need a snapshot rate of at least twice that: 20,000,000 frames per second. As with the resolution, this is a crude estimate.
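The sampling-rate arithmetic is easy to check. A short Python sketch, using the 100-nanosecond synapse timescale assumed above:

```python
# Crude Nyquist estimate for sampling brain activity, based on the
# 100-nanosecond synapse timescale assumed in the text.
synapse_timescale_s = 100e-9                  # 100 ns, fastest synaptic events
max_frequency_hz = 1 / synapse_timescale_s    # 10,000,000 Hz (10 MHz)
nyquist_rate_hz = 2 * max_frequency_hz        # sample at >= 2x the max frequency

print(f"Max signal frequency: {max_frequency_hz:,.0f} Hz")
print(f"Required sample rate: {nyquist_rate_hz:,.0f} frames/sec")
```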
A sufficient recording almost certainly needs a higher resolution (if not also a higher frame rate). Let’s go with 500 trillion synapses recorded 20 million times per second. That’s:
5×10¹⁴ synapses × 2×10⁷ frames per second = 10²², or ten sextillion numbers per second. Assume each number requires 64 bits (8 bytes), and we’re talking 80 zettabytes per second. Per second. Ten minutes is 600 seconds, so ten minutes of consciousness requires 48 yottabytes of data.7
The data recording and playback rate is necessarily 80 zettabytes per second, which requires a data rate of 6.4×10²³ bits per second. A signal clocked that fast puts data transmission in the unimaginably hard gamma-ray range. Pulling off real-time recording or playback would be a technological marvel, to say the least.
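All the storage figures above follow from simple arithmetic. A Python sanity check, using the 500-trillion-synapse and 20-million-frame assumptions from the text:

```python
# Sanity-check the brain-recording storage estimates from the text.
synapses = 500e12                  # 500 trillion recorded values per frame
frame_rate = 20e6                  # 20 million frames per second (Nyquist estimate)
bytes_per_value = 8                # one 64-bit number per synapse

numbers_per_sec = synapses * frame_rate            # 1e22: ten sextillion
bytes_per_sec = numbers_per_sec * bytes_per_value  # 8e22: 80 zettabytes
bits_per_sec = bytes_per_sec * 8                   # 6.4e23 bits per second
ten_minutes = bytes_per_sec * 600                  # 4.8e25: 48 yottabytes

print(f"{numbers_per_sec:.1e} numbers/sec")
print(f"{bytes_per_sec / 1e21:.0f} zettabytes/sec")
print(f"{ten_minutes / 1e24:.0f} yottabytes per ten minutes")
```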
But it’s not impossible in principle, so let’s run with it and imagine such a recording is possible. Indeed, part of what Chalmers proposes only depends on the existence of identifiable specific states that could potentially be recorded.8
Computer States
Unlike brains, computers are synchronous and discrete. Because of this, recording computer states is trivial and is routinely done for debugging.
Another striking difference is the static nature of the overall system state compared to brains. At any given instant, the system can only access one memory location. All other locations remain static, so there is no need to record them in each frame. Given a starting state, we can record only the changes the CPU makes.
This makes the data resolution much smaller. We capture only the address, data, and control lines between the CPU and the memory (and other peripherals). So, capturing a mere 8,000 bits — one kilobyte — seems more than adequate for each frame of the system state.
This does mean the recording must begin with an initial state for memory (and peripherals). The captured CPU signals then reference this image. Playback updates the image based on the recorded signals. The alternative is recording the complete system state 10 billion times per second, which generates — to no real purpose — considerably more data (but still not as much as brains do).
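The initial-image-plus-changes scheme is essentially delta recording. A toy Python sketch (the data layout and function names are mine, purely illustrative):

```python
# Toy sketch of delta recording: store an initial memory image once,
# then log only the (address, value) writes; playback replays the log.
# The format and names here are illustrative, not any real trace format.

def record(initial_memory, writes):
    """A 'recording' is the initial image plus the ordered write log."""
    return {"initial": list(initial_memory), "log": list(writes)}

def playback(recording):
    """Reconstruct the final memory state by replaying the write log."""
    memory = list(recording["initial"])
    for address, value in recording["log"]:
        memory[address] = value          # one location changes per step
    return memory

# Example: an 8-location memory and three recorded writes.
tape = record([0] * 8, [(3, 42), (5, 7), (3, 99)])
final = playback(tape)
print(final)
```

Note that the later write to address 3 overrides the earlier one during playback, exactly as it did during recording — only the changes need be stored.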
We save on frame size, but computers require a rather higher framerate. Modern CPUs have clock speeds up to billions of ticks per second. The clock speed gives us the framerate needed to capture CPU activity. Let’s assume a framerate of 10 billion snapshots per second. Recording a CPU “thinking” for ten minutes requires:
10¹⁰ frames per second × 1,000 bytes per frame × 600 seconds = 6×10¹⁵ bytes, or six petabytes. That is roughly ten orders of magnitude less than the 48 yottabytes of a consciousness recording, which indicates how much more computing is required to emulate or simulate the human mind. But if computationalism is true, then the Church-Turing thesis asserts the difference is just a matter of time or space. One computer can calculate for a long time, or many can operate in parallel to accomplish the same thing in less time.9
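Checking the arithmetic with the one-kilobyte frame and ten-billion-frames-per-second figures above:

```python
# Compare the computer-recording estimate to the brain-recording one.
frame_bytes = 1_000                # 8,000 bits per frame (see above)
frame_rate = 10e9                  # 10 billion snapshots per second
seconds = 600                      # ten minutes

computer_total = frame_bytes * frame_rate * seconds   # 6e15 bytes: petabytes
brain_total = 4.8e25                                  # 48 yottabytes (from above)

print(f"Computer: {computer_total / 1e15:.0f} PB")
print(f"Brain/computer ratio: {brain_total / computer_total:.1e}")
```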
Regardless of the details, we can easily imagine a recording of the series of states that fully capture a computer computing.
Brain States to Computer States
Chalmers proposes an identity between brain states and computer states. This requires a map from brain states to computer states. If such a map exists, a computer can do a “playback” of the mapped version. Chalmers believes this causes the computer to experience the recorded consciousness.
Note a key point: There is no claim to generate consciousness through computation, only that it occurs in virtue of playing back recorded brain states that comprised a stream of consciousness.
Even so, as with assuming identifiable brain states, assuming they can be mapped to computer states likewise begs the question of computationalism.
Note also that we don’t care what the brain states mean — we’re not trying to analyze consciousness. The premise is only that the recorded brain states are sufficient to account for all aspects of consciousness. If we can successfully map them to computer states, then (if Chalmers is right), the computer must experience that same consciousness.
I stressed above the analog nature of the brain, and vast data requirements aside, it’s not impossible to conceive of an analog recording of brain activity at a level sufficient to capture all objective aspects of consciousness. This recording would seem to be unique to the source brain, though, and it would equally seem that playing it back requires the same brain. (Because no two brains are identical.) So, if technology can overcome some serious obstacles10, we might someday be able to re-experience our own memories in Real-Life Detail™.
But mapping a massively parallel digitized dataset (500 trillion numbers per sample) to specific computer states is a trick. The obvious map is 500 trillion memory locations, each containing the state. For each sample (movie frame), the computer updates all 500 trillion locations with new numbers.
Another map involves 500 trillion CPUs, each sequentially loading the numbers for a specific channel. The difference is whether the computerized brain states are resident in memory or need to be brought into the CPU, a question that I think illustrates the absurdity of the concept. In what sense do 500 trillion memory locations or 500 trillion CPU registers equate to 500 trillion physical synapses?
I can’t help but think, in no sense whatsoever. CPU architecture is vastly different from brain architecture. [See Brains are Nothing Like Computers.]
Playback versus Computation
Let’s grant enough computationalism for identifiable brain states we can record and map to computer states. The problem still is that playing back a series of previously computed states is not at all the same as doing the computation to generate them.
Playing a movie of physical work doesn’t repeat the physical work.
It is computationally expensive to compute a Mandelbrot deep zoom image. Depending on the computer and the zoom, it can take hours or days of computing time. On the other hand, once generated, it is computationally almost cost-free to load and display that image.
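The asymmetry shows up plainly in code. In this minimal sketch, computing each point costs up to a thousand iterations of the escape-time loop, while “playing back” the precomputed result is just copying stored numbers:

```python
# Minimal sketch of compute-vs-playback cost for the Mandelbrot set.

def mandelbrot_iterations(c, max_iter=1000):
    """Count iterations until |z| escapes 2 -- this is the expensive part."""
    z = 0
    for n in range(max_iter):
        z = z * z + c
        if abs(z) > 2:
            return n
    return max_iter

# "Computing" a tiny image: every point costs up to max_iter iterations.
points = [complex(x / 10, y / 10) for x in range(-20, 5) for y in range(-12, 13)]
image = [mandelbrot_iterations(c) for c in points]   # expensive

# "Playback": displaying the precomputed result is just reading it back.
displayed = list(image)                              # nearly free
```

A point inside the set (such as c = 0) burns through all thousand iterations; the playback step does no arithmetic at all, yet yields the identical image.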
We can view this in terms of entropy. Computing an image (which has very low entropy) requires a corresponding increase in environmental entropy — the heat emitted by the computer. But loading and displaying the image results in only a small increase in entropy because the work done by the computer is minor.
So, even granting some form of computationalism, even granting some topological map between brain states and computer states, I do not think playing back recorded computerized brain states works as Chalmers suggests.

I have a further objection to the notion that computer playback of recorded brain states replicates the topology of brain states. The short version is that computers go through intermediate states even just “playing back” data.
For example, CPUs alternate between first fetching instructions and then fetching (or writing) data. And they update memory locations sequentially, so at what point (and how) does the computer declare “this state corresponds to brain state Sₙ” before moving through the many intermediate computer states to brain state Sₙ₊₁?
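A toy illustration of the point: if a “frame” is updated one memory cell at a time, the machine passes through intermediate states that correspond to no recorded brain state. The four-cell frames here are, of course, absurdly simplified stand-ins:

```python
# Toy illustration: updating a "brain state" frame one memory cell at a
# time passes through intermediate states matching no recorded frame.

frames = [
    [1, 1, 1, 1],    # recorded brain state S_n
    [2, 2, 2, 2],    # recorded brain state S_n+1
]

memory = list(frames[0])
intermediate_states = []
for i, value in enumerate(frames[1]):
    memory[i] = value                    # sequential update, one cell per step
    intermediate_states.append(list(memory))

# Only the final update yields S_n+1; every state before it is neither frame.
neither = [s for s in intermediate_states if s not in frames]
print(neither)
```

With four cells there are three such in-between states; with 500 trillion locations per frame, nearly every instant of the playback is a state that was never recorded.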
There is also the fact that digital data is just ones and zeros with no semantics beyond that. Digital data has no higher meaning in itself, only in reference to how it is externally interpreted. At the digital level, there is little difference in the computer between running a conscious mind and running Tetris.
I suspect Chalmers doesn’t understand exactly how computers work. Most people don’t know exactly how computers work. Unless one at least knows them down to the assembly code level, one probably does not.
My bottom line and best guess: Running previously recorded consciousness on a computer (if that were even possible) is not at all likely to engender consciousness in the computer. Any more than loading a Mandelbrot image is the same as calculating it.
Until next time…
For now, assume the playback occurs in the same brain that was the source of the recording. The same brain but experiencing the earlier recorded states.
For early science fiction examples, see Brainstorm (1983) and Strange Days (1995). Both involve recording someone’s stream of consciousness for someone else to experience through playback. This presumes person A’s brain states are meaningful to person B (and C, D, E, …), which is a big assumption.
This post takes scientific physicalism as a metaphysical axiom.
Integrating the previous frames — watching the movie, so to speak — could allow us to guess at the physics of the original scene and thus potentially predict the next frame. But unseen influences from off-camera would severely undermine that ability.
See Digital Emulation and Digital Simulation.
See Digital vs Analog.
My futile prayer: Please stop saying “begs the question” when you mean “raises the question”. They are two very different things, and some people will raise their eyebrows at you.
Think of it as 48 trillion one-terabyte drives.
But as I said above, this essentially asserts computationalism.
In fact, our brain generates such large states because of its massive parallelism.
The greatest of which might be getting your brain to re-experience past states without damage to its current state.
I’m also reminded of Brian Greene’s discussion of spacetime slicing in “The Fabric of the Cosmos” (and I’m sure many other books of his). Essentially, Brian asks that we picture all of 4D spacetime as a loaf of bread that can be sliced into 3D chunks. Special Relativity tells us that there is more than one way to slice this loaf, and depending on how we slice it, it will seem to tell different stories with respect to the order of events. While this doesn’t change the overall structure of spacetime, it does mean that things will appear to have a different ordering if we look at them from different perspectives.
I was drawn to this analogy in your discussion on the difficulty of creating distinct states in time with an asynchronous analog system like the brain. I believe the nature of these states doesn’t just relate to the frequency at which we are taking the snapshots but also the perspective from which we are doing it.
Taking the analogy into a hypothetical apparatus we can think of some scanning device that, let’s say, using an electromagnetic field measures in some sense the structural state of the brain at given moments via pulses it emits.
If this analogy is more than just a pretty metaphor, the states we measure would not look exactly the same depending on how we position the device and surely other particular implementation details. We might seem to see the order of firing taking place in the brain change depending on how we record events. This doesn’t change the overall fact that, given the arbitrarily “perfect” precision that we would supposedly need, in any case we will capture all of the relevant information regarding the brain’s activity.
This seems to imply that the supposed subjective experience of these sequential brain states could have various interpretations depending on the exact method we use to read the data which to me seems to be another logical contradiction leading us to refute the idea that conscious experience can be captured in frames of brain states. But maybe I am stretching the metaphor too much.
Excellent article Wyrd.
I had to read the thing twice lol
Appreciate the raindrop analogy. Just as we can't truly capture the continuous, dynamic nature of rainfall by taking discrete snapshots, we can't reduce consciousness to a series of recorded states. It's not just a technical limitation I feel - it's a fundamental misunderstanding of what consciousness is.
What I found most interesting is the distinction between playback and computation. The Mandelbrot set for example makes this clear - displaying a pre-computed image is different from doing the actual computation. Similarly, playing back recorded brain states (even if we could somehow capture them) wouldn't recreate the actual experience of consciousness.
I think this gets at a deeper issue in AI and consciousness debates: we often mistake simulation for replication. Just because something can mimic the outputs of consciousness doesn't mean it has consciousness, just as a video of rain isn't actually making it rain. Just some thoughts.