With a name like computationalism, it’s not surprising that a cornerstone of the discussion involves exactly what it means to “compute” something. Looking at it from a strictly functional view, we might say a computation is a way to get an answer to a question.1
But there are many ways to get answers. Given a tape measure, the question “What size is that wall panel?” seems more a measurement than a computation. On the other hand, having measured the wall as 12 feet by 10 feet, and the plank as 6 feet by 1 foot, the question “How many planks do we need to cover the wall?” clearly requires a calculation.2
Here’s another example: An eight-slice pizza with one slice missing is readily observed as seven-eighths of a pizza. On the other hand, figuring out the decimal expansion of 7/8 — 0.875 — requires a calculation (or good memory).3
Hence today’s assertion:
Calculation (computation) and evaluation are not the same.
A computation requires an algorithm. I introduced algorithms in an earlier post. Briefly, an algorithm is a list of steps designed to process data. The list of steps is an important aspect of the definition. As a first-cut rule of thumb, calculations involve steps (and an algorithm) while evaluation is immediate and holistic.
I believe the difference between computation and evaluation is significant to computationalism because of my second assertion:
Brains work through evaluation, not computation.
So, a significant difference between computation and evaluation may affect our ability to implement a mind using computation. This is why I believe replication, as opposed to simulation or emulation, has the best chance of success. Replication uses evaluation rather than computation, just as the brain does. [Here’s an explanation of terms.]
I’ve already explored emulation and simulation in some detail.4 I’ll explore replication down the road, but here I want to dig a bit more into the difference between computation and evaluation.
Above I mentioned fractions, specifically 1/8 (a slice of pizza) and 7/8 (the rest of the pizza). Fractions are ratios, and we immediately recognize the proportions here. We’d find it even easier to recognize 1/4 or 1/2. The immediate holistic nature is perhaps more apparent if we disguise the implicit division and write them as an eighth, a fourth, a half. The notion of a piece of something is fundamental and physical.
It’s less apparent with fractions such as 739/1203, but we are still talking about a holistic piece of some real or abstract single object. In contrast, the decimal expansion of 739/1203, which is 0.6142975…5, requires a series of steps determined by an algorithm. In this case, the division algorithm must generate 200 decimal digits to find where the digits begin to repeat.6 Generating each digit requires multiple steps, so the algorithm takes many hundreds of steps.
The key point is that while p/q represents an immediate and physical ratio — we can grab one-eighth of a pizza — we need a computation to realize that our piece is 0.125 pizzas.7
As a relevant aside, a naive division algorithm — one that doesn’t recognize when non-zero digits begin to repeat — can potentially run forever generating digits. This makes it a simple example of the Turing Halting problem.8
The naive version, the long division we learned in grade school, stops when the remainder reaches zero, so the algorithm halts on fractions such as 1/8 = 0.125000… but runs forever on ones such as 1/3 = 0.333333… The Halting problem tells us that no “oracle” algorithm exists that can tell us in advance, for every possible program, whether it halts or runs forever.
Penrose’s argument is that our intuition gives insights that transcend algorithmic approaches. In this trivial example, an algorithm can spot the repeating pattern (because the repeating blocks are finite)9 but, as the Turing Halting problem proves, this cannot be true in all cases.
Whether a computation halts or not is mainly of theoretical interest. Algorithm designers employ practical methods to avoid infinite loops. For instance, a simple solution for the division algorithm is to stop generating digits after some chosen maximum.
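As a concrete illustration, here’s a minimal Python sketch of the digit-by-digit division just described. (The function name and interface are my own, not from the post.) It remembers each remainder so it can spot a repeating cycle, and it honors a digit ceiling of the kind just mentioned:

```python
def decimal_digits(p, q, max_digits=1000):
    """Decimal expansion of p/q (0 < p < q) by grade-school long division.

    Returns (non_repeating, repeating) digit strings. Repetition is
    detected by remembering remainders: they can only range over
    1..q-1, so a repeated remainder means the digits have begun to cycle.
    """
    digits = []
    seen = {}                      # remainder -> index where it first appeared
    r = p % q
    while r != 0 and r not in seen and len(digits) < max_digits:
        seen[r] = len(digits)
        r *= 10
        digits.append(str(r // q))
        r %= q
    if r == 0:                     # remainder hit zero: the naive algorithm halts
        return "".join(digits), ""
    if r in seen:                  # a remainder recurred: cycle found
        start = seen[r]
        return "".join(digits[:start]), "".join(digits[start:])
    return "".join(digits), "?"    # hit the digit ceiling; still undecided

print(decimal_digits(1, 8))               # ('125', ''): halts on its own
print(decimal_digits(1, 3))               # ('', '3'): cycle spotted
print(len(decimal_digits(739, 1203)[1]))  # 200, the cycle length from above
```

Note that the cycle detection works because the remainders are drawn from a finite set, exactly the loophole Penrose’s point allows for in this trivial case.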
Getting back to the notion of evaluation (vs computation), consider the orbits of the planets, their moons, and all the known asteroids in our Solar system.
Each object finds a natural orbit based on its mass, instantaneous velocity, and ever-changing distance from other objects. This is evaluation, the interaction of physical objects according to physical laws. Metaphorically, planets seek their orbits like water seeks the lowest level.
The equations involved are known and not terribly complicated, but computing the long-term precise orbits of even three bodies (let alone thousands) is effectively impossible.10 This is because of an interaction between numerical precision and mathematical chaos.
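As a toy illustration (not a serious orbit integrator; the setup and names are mine), here is the classic “Pythagorean” three-body configuration run twice, with the second copy nudged by one part in a billion. The crude fixed-step integrator is far too simple for real orbit work, which is rather the point: the two nearly identical runs soon stop agreeing with each other, let alone with reality.

```python
import copy
import math

def accelerations(pos, masses, G=1.0):
    """Newtonian gravitational acceleration on each body (2-D, toy units)."""
    acc = [[0.0, 0.0] for _ in pos]
    for i in range(len(pos)):
        for j in range(len(pos)):
            if i == j:
                continue
            dx = pos[j][0] - pos[i][0]
            dy = pos[j][1] - pos[i][1]
            r3 = (dx * dx + dy * dy) ** 1.5
            acc[i][0] += G * masses[j] * dx / r3
            acc[i][1] += G * masses[j] * dy / r3
    return acc

def advance(pos, vel, masses, dt):
    """One Euler-Cromer step: crude, but crudeness is not the point here."""
    acc = accelerations(pos, masses)
    for i in range(len(pos)):
        vel[i][0] += acc[i][0] * dt
        vel[i][1] += acc[i][1] * dt
        pos[i][0] += vel[i][0] * dt
        pos[i][1] += vel[i][1] * dt

# The Pythagorean three-body problem: masses 3, 4, 5 released at rest.
# A classic chaotic configuration with repeated close encounters.
masses = [3.0, 4.0, 5.0]
pos_a = [[1.0, 3.0], [-2.0, -1.0], [1.0, -1.0]]
vel_a = [[0.0, 0.0], [0.0, 0.0], [0.0, 0.0]]
pos_b, vel_b = copy.deepcopy(pos_a), copy.deepcopy(vel_a)
pos_b[0][0] += 1e-9   # a one-part-in-a-billion difference in one coordinate

dt = 0.001
for n in range(1, 30001):
    advance(pos_a, vel_a, masses, dt)
    advance(pos_b, vel_b, masses, dt)
    if n % 5000 == 0:
        sep = math.dist(pos_a[0], pos_b[0])
        print(f"t = {n * dt:4.1f}: runs differ by {sep:.3e}")
```

The printed separations grow by many orders of magnitude as the close encounters amplify that initial billionth; the exact values depend on the integrator and step size, which only underscores how finicky such computations are.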
In order to compute with numbers, the numbers must be finite; they must have a fixed precision. In most common computers, precision is limited to about 16 significant digits. You can put lots of zeros in front of or behind them, but only up to 16 significant digits. (And, no, you can’t stick any number of zeros between significant digits; zeros between significant digits are themselves significant.)
But as we saw above, fractions can require hundreds of digits. The physical ratio 739/1203 cannot be accurately represented in conventional computing.
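A quick illustration in Python (standard library only; the variable names are mine): the Fraction type can hold the ratio 739/1203 exactly, the way the physical ratio “holds” itself, while the usual 64-bit float cannot.

```python
from fractions import Fraction

exact = Fraction(739, 1203)        # the ratio itself, held exactly (like the pizza slice)
approx = 739 / 1203                # the nearest 64-bit float: ~16 significant digits

print(approx)                      # prints roughly 0.6142975893599335, then stops
print(Fraction(approx) == exact)   # False: the float is a different (nearby) number
print(exact - Fraction(approx))    # the representation error, expressed exactly
```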
Mathematical chaos is about how tiny differences in starting conditions cause wide divergence as a nonlinear system evolves. Planetary orbits — and in fact most real-world dynamical systems — are nonlinear, so mathematical chaos affects them. Note that chaos affects both physical systems and computational ones.
Note also that chaotic physical systems are fully determined. That is, their future behavior is fully predictable in principle. The planets of the Solar system have found their natural orbits for billions of years and will do so for billions more. But we can only compute those orbits for hundreds of years, maybe thousands for larger bodies. (My astronomy program, Redshift, has a time range of ±3000 years.)
To understand why, consider the ratio 739/1203 and a conventional computation with that ratio as a starting condition. My Windows calculator apparently uses quadruple-precision (128-bit) numbers, which offer 33 digits of precision. Serious accuracy, but still not the 200 digits needed to express the fraction.
Our imaginary calculation then proceeds from 0.61429758935993349958437240232751 (and nothing more) and, depending on the equations used, may diverge quickly from the physical system. Or the divergence may stay close to zero for a long time before growing, sometimes suddenly. Regardless, a computation with necessarily finite numbers will always diverge from the physical system it models.
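The logistic map isn’t mentioned in the post, but it’s a standard stand-in for a chaotic nonlinear system and makes this divergence easy to see. Two runs that start one quadrillionth apart — about the last digit a 64-bit float carries — soon disagree completely:

```python
# The logistic map x -> r*x*(1 - x) with r = 4 is fully chaotic.
# Start two runs differing only in the 16th decimal place and
# watch the agreement evaporate.
r = 4.0
x = 0.2
y = 0.2 + 1e-15

for step in range(1, 61):
    x = r * x * (1 - x)
    y = r * y * (1 - y)
    if step % 10 == 0:
        print(f"step {step:2d}: |x - y| = {abs(x - y):.3e}")
```

The difference roughly doubles each step, so by around step 50 the two runs are no more alike than two random numbers.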
Physical systems integrate moment-to-moment influences from all other parts of the system. They are constantly self-adjusting. An orbit is the result of all the instantaneous forces acting on the orbiting body. Computing all of these is extraordinarily difficult even in principle and likely impossible in practice. NASA applies course corrections to its spacecraft because perfect prediction is impossible.
Nature itself, in evaluating forces, acts as a giant analog parallel computer. Each particle simultaneously participates in analyzing the forces on it and reacting, so in some sense the amount of “computation” involved is massive. Simulating that requires implementing each particle, and the forces on it, in a machine much larger than those particles. So there is a massive expansion of computing hardware relative to the natural systems it models.
I’ll end with a visually stunning example of both chaos and the difference between computation and evaluation. The Mandelbrot set is an infinitely complex, strictly computational, abstract mathematical object. It cannot be produced through evaluation.
The M-set is the set of points defined by the following simple computation:

zₙ₊₁ = zₙ² + c
We perform this iteration for all coordinates of interest (the c in the equation above). The only other variable is z (zed), which we initialize to zero. We square z and add c to produce a new z. We plug the new z back into the equation and keep doing this until one of two things happens: the magnitude of z goes above 2.0 (it “escapes”), or it stays below that and we tire of calculating.
If the magnitude of z goes above 2.0, the point is not in the M-set, and we’re done. If a point never escapes, that point is in the M-set, but the gotcha is that we can never be sure a point never escapes. That makes calculating the Mandelbrot set a Halting problem.
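Here’s a sketch of the procedure just described (escape_count is my name for it; this style of code is usually called an escape-time algorithm):

```python
def escape_count(c, max_iter):
    """Iterate z -> z*z + c from z = 0. Return the step at which |z|
    exceeds 2.0 (the point escapes), or None if we hit the ceiling --
    in which case all we can say is that it hasn't escaped *yet*."""
    z = 0
    for n in range(1, max_iter + 1):
        z = z * z + c
        if abs(z) > 2.0:
            return n
    return None

print(escape_count(1 + 0j, 100))          # 3: escapes quickly, not in the set
print(escape_count(-1 + 0j, 100))         # None: -1 cycles 0, -1, 0, ... forever
print(escape_count(0.2501 + 0j, 100))     # None at this ceiling...
print(escape_count(0.2501 + 0j, 10_000))  # ...but it does escape eventually
```

The last two lines show the gotcha directly: the verdict on a point near the boundary can flip depending on how long we’re willing to iterate.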
What makes the Mandelbrot set so visually (and mathematically) interesting is that the closer we get to the boundary between inside and outside, the longer it takes to find out whether a point escapes. Deep zooms into the boundary regions can require billions of iterations per point.

The resolution (accuracy) of the Mandelbrot image is determined solely by how many iterations the calculation performs before giving up and deciding the point must be in the M-set. Figure 2 shows how stopping after four iterations results in a blob (left). Increasing to eight produces only a vague outline. Only at sixteen does the real silhouette start to emerge. Usually, one starts at several hundred iterations and increases the ceiling while zooming in. The deep-zoom images in Figure 1 had maximums in the high millions.
But because of the Halting problem with points along the boundary, the Mandelbrot set can never be definitively computed even though it is well-defined. We can only ever have an approximation of it based on the iteration ceiling. Further, it cannot be evaluated, because each iteration requires the result of the previous one.
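To watch the boundary resist us, we can reuse the escape_count sketch from above and walk a point toward c = −0.75 (the pinch between the main cardioid and the period-2 bulb). Each step closer takes roughly ten times as many iterations before escape; curiously, the counts approximate π/ε, a well-known bit of Mandelbrot lore.

```python
# Approach the boundary point c = -0.75 from above; escape takes ever longer.
for eps in (1e-1, 1e-2, 1e-3, 1e-4):
    n = escape_count(complex(-0.75, eps), max_iter=1_000_000)
    print(f"eps = {eps:g}: escaped after {n} iterations")
```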

Lastly, chaos shows up in deep zooms near the boundary. Tiny variations between coordinates produce different outcomes from the computation. Figure 3 shows a tiny area around “2 o’clock” on the main cardioid. Adjacent pixels may have coordinates that are identical for hundreds or more decimal digits before differing, yet the calculation outcomes differ significantly. Hence the seeming “static” of pixels in Figure 3.
I’ll note that some see evaluation as computation. The line can be fuzzy, and it’s not exactly wrong to do so. I think it conflates two distinct processes, though, and that distinction may matter with regard to computing consciousness.
Next time I’ll dig a bit more into what computing is along with the notion that “everything is a computation”.
Until then…
1. For example, “What do you get if you multiply six by nine?”
2. Wall: 12 × 10 = 120 square feet. Plank: 6 × 1 = 6 square feet. 120 ÷ 6 = 20 planks.
3. I thought mine was good enough to write it from memory, but I typed 0.825. Always check!
4. See Digital Emulation and Digital Simulation.
5. 0.61429758935993349958437240232751454696591853699085619285120532003325020781379883624272651704073150457190357439733998337489609310058187863674147963424771404821280133000831255195344970906068162926018287… These 200 digits repeat infinitely.
6. All fractions have an infinite repeating-digit pattern. Even 1/2 = 0.500000000…
7. Technically, 0.125000000…
8. Which is related to Gödel’s Incompleteness theorems. Both are part of the Lucas-Penrose argument that mind is not computational. See Brains are Not Algorithmic.
9. At the cost of a fair bit of extra computation to remember and compare previously generated digits.
10. This is known as the three-body problem.
"Calculation (computation) and evaluation are not the same."
Haha.. you won't have to convince me. I'm pretty sure my mind is involved in a great deal of the latter and not much of the former. So I fully admit to skipping over all the numbers.
The other day I was trying to get Claude AI to argue with me in a meaningful way, but either it pandered to me "Thank you for pointing that out, you are correct...blah blah blah" or when I pointed out that it was pandering, it feigned indignation, "I am not! I was merely pointing out that...." Finally it just gave up and admitted it was pandering. The obvious thing that was missing from the discussion was the usual nonsense that real people say when they're digging in their heels. I find it hard to believe that emotions, especially subtle ones such as 'digging in one's heels,' could be computationally replicated. It's not just emotion, it's a repressed emotion, it's subtle and tricky to analyze in ourselves, so how on earth do we think we can turn it into an algorithm? And it's not like we all behave in the same way.
Not to diminish what's going on with AI these days, which is truly impressive. But still, I'm nowhere near worried about it becoming conscious.