
Hey Wyrd. Can you expound a little bit on the difference between emulation and simulation? You say that simulation is just about “capturing” something important about the original, absent any need for computational/functional replication. But in what sense does a simulation capture or represent something else, if not in the functional sense? It seems like all of the examples you use, like laser light being modeled on a diagram or computer (or just some fancy mechanical simulation), would just be examples of “capturing” in the algorithmic sense. That is, there is some algorithm (e.g. behavior modeled on electromagnetic wave equations) that we think our simulation roughly implements, and that real light implements as well.

I can’t think of a particular example of simulation which doesn’t count as functional replication (aka emulation). Can you? I would be really interested to hear of an example. Thanks!

author

Hey Alex. A key metric differentiating emulation from simulation is the graining of the system. Emulation is coarse-grained, simulation is fine-grained. Emulation seeks to capture function at a much higher level -- a "functionally equivalent" level -- without regard to the actual physics or fine structure of what it emulates. Simulation seeks to capture the physics of what comprises the system and expects the high-level emergent properties of the system to arise from the low-level physics. Per your final question, no, I can't. All three categories capture function at some level. It's the level that distinguishes them. That help any?

With regard to lasers, a similar distinction can be made. A simulation might involve computations at the atomic or quantum level, whereas an emulation might depend more on those equations. In either case, though, we're in abstract numerical territory. We've made some map from reality to input numbers and from output numbers back to reality. There is a Digital Divide with both simulation and emulation, something I'll be writing about down the road.

Replications, on the other hand, are likely to be more physically analogous to what they model and on the reality side of the Divide.


Thanks! Would you agree then that the question of computationalism is orthogonal to the question of which level of scale/complexity accurately captures consciousness? If consciousness was computable, that still wouldn’t tell us which level of complexity (e.g. emulation/simulation) was needed to appropriately replicate it. And if it wasn’t computable, that still doesn’t tell us at what level the non-computable physical process needed for consciousness exists. Though granted we might say that this provides some evidence for the quantum-level/low-level replication approach, since presumably some kind of quantum phenomenon would be needed to explain non-computability, although that may not necessarily be the case.

On a related note, I agree with you that the Chinese room is a poor model of the mind, but I would also add that it’s a poor criticism of computationalism. At best, it shows that the brain and conscious activity are not a lookup table, not that they are non-computational. It might be that consciousness simply requires the kind of computational processes which aren’t implemented by lookup tables. Even if brains and lookup tables share the same functional outputs at some level of scale, they nonetheless might be implementing different algorithms, with particular algorithms forming the key ingredients of consciousness.

Thanks for the response!

author

They're definitely distinct questions. I'd have to think about whether I'd go so far as orthogonal, but off the top of my head, yeah, sure. And as you say, physical reality does give us one end of the spectrum. Arguably it's an end point for computationalism being true, as well: it suggests that a simulation at the quantum level would almost have to work. An unimaginably large quantum computation, on the order of 10^20 or greater.

You raise the same point Plasma Bloggin' does elsethread. Lookup tables == computation. Consider multiplication tables. Any math function can be replaced by a lookup table (I've done it to escape having to calculate sine functions -- you only have to cover a 90° segment). And any computation is just math (very simple math at the CPU level, and CPUs actually do a lot of work using lookup tables). So, if consciousness cannot -- in principle -- be implemented with a lookup table, then it cannot be computational (as we define discrete symbol computation).
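The sine trick looks roughly like this (a sketch from memory, not the actual code I used; the one-degree table resolution is just for illustration):

```python
import math

# Store sine only for 0..90 degrees; the other three quadrants
# come from symmetry, so the table covers a quarter of the circle.
TABLE = [math.sin(math.radians(d)) for d in range(91)]

def sin_lookup(deg):
    deg = deg % 360
    if deg <= 90:
        return TABLE[round(deg)]
    if deg <= 180:
        return TABLE[round(180 - deg)]
    if deg <= 270:
        return -TABLE[round(deg - 180)]
    return -TABLE[round(360 - deg)]

print(sin_lookup(30), sin_lookup(150), sin_lookup(210), sin_lookup(330))
# 0.5, 0.5, -0.5, -0.5 (to the table's one-degree resolution)
```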

In his book, Penrose seeks to demonstrate that our consciousness is not algorithmic. Equivalently: is not a lookup table, is not a lambda calculus, is not computable. Four ways of saying the same thing.


Hey Wryd, quick reply because I have to go to bed. But I wasn’t saying that look-up tables aren’t computational, only that they may not be doing the same kinds of computation as brains or neural networks. They might be implementing different algorithms to solve the same function.

For example, I can solve (3 + 7) by breaking it down into smaller sums, or by multiplying and then dividing and so forth. Maybe the giant lookup table which exactly replicates your behavior (assuming computationalism is true) might still not replicate the algorithms which drive your behavior. And if the special sauce of consciousness lies in implementing certain algorithms, it might still be that lookup-tables are not conscious, even if they are computational systems, and even if computationalism is true. Let me know if Penrose addresses this, or if you think I’m mistaken.

Haven’t read Penrose’s book, but looks like it’s next on my reading list!

author

The term "algorithm" can be tricky. To someone like me with a strong CS background, it has a specific meaning that, through the Church-Turing thesis, equates algorithm, Turing Machine, and lookup table (and lambda calculus). But there is a more casual definition of "algorithm" that is not restricted to the discrete symbol processing CS meaning but which means *any* process.

Under the CS definition of algorithms, I think it's hard to justify that two algorithms with identical outputs, but completely different approaches, would have different results with regard to consciousness. Under computationalism, it's hard to see why different algorithms would matter. Computation is about outputs, not process. Church-Turing again.
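To make that concrete, here's a toy example (mine, not anything from the post): three completely different procedures -- a loop, a closed-form formula, and a precomputed lookup table -- that implement the identical input/output mapping. From the outside, nothing distinguishes them.

```python
# Three very different "algorithms" for the same function: triangular numbers.
def triangular_loop(n):          # repeated addition
    total = 0
    for k in range(1, n + 1):
        total += k
    return total

def triangular_formula(n):       # closed-form arithmetic
    return n * (n + 1) // 2

TRIANGULAR_TABLE = {n: triangular_formula(n) for n in range(1000)}  # pure lookup

# Identical outputs across the (finite) domain covered by the table.
assert all(triangular_loop(n) == triangular_formula(n) == TRIANGULAR_TABLE[n]
           for n in range(1000))
```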

But under the more casual processes definition, the importance of the nature of the process is kind of what I'm arguing for in the post. In this case, the vast difference between physical analog processes and digital numeric ones seems very significant.


Wyrd*

Sorry, it’s late here!

author

De nada. But did you know Substack lets you edit your comments? Click the three dots in the lower right and you should get a menu with an edit option. Kind of a nice feature.


I disagree with what you say about simulations. The reason that a simulation of laser light doesn't produce a real laser isn't because there's some *process* that's missing. It's because a specific physical substance (light) is part of what a laser is. No simulation can replicate that aspect of something because they only replicate the abstract structure, not the specific substances of which something is made. Similarly, the reason a simulated earthquake doesn't knock down a building is because you need more than just a process with a certain structure to knock down a building - you need a large amount of physical force, which just isn't what a simulation does.

So the objection that a simulated brain wouldn't be conscious because a simulated laser doesn't produce a real one, or a simulated earthquake doesn't really knock down buildings, is a non-sequitur. Everyone agrees that a simulation of something doesn't share all the properties of the non-simulated version. In particular, it doesn't share properties related to the physical substance the thing is made of or the way it interacts with external objects. But by the same token, we all agree that simulations do share some similarities with the thing being simulated - that's the whole point of a simulation. So the question is whether consciousness is the type of property that would be shared by a simulation. Computationalists answer yes, and simply pointing to examples of properties that aren't shared by simulations can't refute them.

So is consciousness the type of thing that would be shared by a simulation? We both agree that consciousness isn't about the specific physical substances that make up the brain - as you said, that view is very hard to defend. It also seems that we both agree that consciousness isn't just about how you interact with external objects. So neither class of dissimilarity between simulation and reality that your examples fall into actually applies to consciousness. Could there be some other class of dissimilarity that includes consciousness? I don't think so, because I think consciousness is one of the things that definitely is the same between a simulation and reality. After all, what simulations do have in common with the things they simulate are structural characteristics. The parts of a simulation interact in the same way as the parts of the real thing (if they didn't, the simulation would be inaccurate). But I think that consciousness is in the interaction between neurons. I can't imagine any plausible physicalist account on which this isn't true, and I don't buy non-physicalist accounts in part because it seems so obvious to me that consciousness must be in these interactions. But if these interactions are what consciousness is, as they appear to be, then a simulation is conscious too, since the simulated neurons interact in the exact same way. The same neural processes occur in the simulation as do in a flesh-and-blood brain. Hence, a brain simulation is conscious.

author

I just finished a two-week project, a software simulation of a 32-bit CPU — I'm very aware of the contrasts and comparisons between digital simulations and analog reality. A key point is that digital sims are, on the surface, much larger and more complicated than what they model. That's because a sim needs to capture the low-level physics of the object to some useful resolution. Much depends on how fine the resolution must be to capture reality faithfully.

With regards to a brain sim, neuron resolution seems the starting point. We know the glial cells and myelin sheath are critical. Synapses, obviously. Do we need to go as low as the molecular chemistry? If so, the sim is gonna be huge. If quantum-level resolution is necessary, it may be that only reality itself is capable of the necessary computation.

I think you might have misunderstood my point about laser light. Clearly laser light emerges only from certain physical materials and processes, and no simulation can have those physical properties or processes. My question is whether consciousness lies in the physical process, in the laser light, so to speak. If it does, then it's not clear to me to what extent simulations can capture it.

As someone who has written many software simulations, I don't see them as sharing structure with what they simulate. They embody a numerical abstraction of that structure but that is only apparent in the programmer's mind. Viewed objectively, a sim is just a lot of random-ish numbers with no obvious connection to what it simulates.

In a paper, David Chalmers claimed a "topological invariance" in the execution structure of a computer, the steps it takes doing the simulation, but I think that misunderstands how computers really work. There is a whole thing with intermediate states that I think falsifies it.


I don't know how much resolution a simulation would need to perfectly simulate consciousness. I think a computer simulation can be conscious in theory, but that doesn't necessarily mean it would be practical to make one. Even if quantum-level resolution is required, which I'm pretty skeptical of, it could still be done in principle, though maybe not in practice without a quantum computer.

> My question is whether consciousness lies in the physical process, in the laser light, so to speak.

Computationalism says that consciousness lies in the physical process, and that anything performing that process would be conscious. So it doesn't lie in the specific physical substrate. I think this verdict is right - it seems absurd that consciousness would depend on the type of material in that way, and there are evolutionary arguments against it (If consciousness just depends on the process, it's clear why it evolved, but if it depends on the materials that perform the process, it's a total coincidence that the materials in our brain happen to be just the right ones).

It seems like you mean something else by, "consciousness lies in the physical process here," something that would be linked to the specific physical substance. But I think consciousness clearly does not lie in that.

> As someone who has written many software simulations, I don't see them as sharing structure with what they simulate. They embody a numerical abstraction of that structure but that is only apparent in the programmer's mind.

I think, "They embody a numerical abstraction of that structure," is just another way of saying that they have that structure. And the structure is physically real - it might not be *apparent* to someone who doesn't know what the computer is doing, but clearly it's there, or else the simulation wouldn't work.

> Viewed objectively, a sim is just a lot of random-ish numbers with no obvious connection to what it simulates.

"Random-ish" maybe, but that "-ish" is doing a lot of legwork. If the simulation was actually just random numbers, it would be incapable of simulating anything interesting. The connection to what they simulate might not be obvious, but clearly there is a connection - the programmer programmed one in.


Also, I think there is a deeper flaw in the Chinese Room beyond the fact that the lookup table isn't infinite. I don't think you need an infinite lookup table to pass the RTT, since humans can't do arbitrarily long math problems either. I think the Chinese Room successfully demonstrates that the RTT isn't a sufficient condition for consciousness. The room would pass the RTT without being conscious or having anyone in it who performs the conscious process that the room is emulating.

The problem is that that's *all* it demonstrates. I don't understand why Searle thinks it works as an argument against computationalism because it's clearly invalid when used that way. Computationalism doesn't imply that anything that appeared from the outside (i.e., based on external behavior) to be conscious would actually be conscious. It has to be the right way on the inside too. The Chinese Room doesn't perform the same computation as a conscious being does when responding to queries. Our brain isn't just a giant lookup table, after all. So computationalism actually implies that the Chinese Room *isn't* conscious, just as Searle's intuition suggests. The only theory of mind that the Chinese Room is actually a valid argument against is behaviorism, which basically everybody already agrees is implausible.

author

> "Computationalism says that consciousness lies in the physical process, and that anything performing that process would be conscious. So it doesn't lie in the specific physical substrate."

Consider what that *process* actually is -- a massively parallel analog system where some 500 trillion synapses are each their own complicated chemical analog system. (I read a neuroscientist say synapses are the most complicated biological machine we know.) Neurons do have an "on" and "off" state (which I think confuses people into thinking neurons are binary), but the on state has pulses, and their frequency and duty cycle carry analog information. Recent experiments suggest even the rise and fall times of the on/off changes carry analog information the brain uses.

And I agree completely that, as you say, anything that performs *that* *process* is likely conscious regardless of the composition of its materials. To me, though, that's replication, Positronic brains, not computationalism.

Computationalism holds that *simulating* that analog process numerically should result in outputs indistinguishable from analogous outputs in a human. For example, if connected to voice gear, it could generate signals that made speech. And that may be correct, but I have some skepticism.

> It seems like you mean something else by, "consciousness lies in the physical process here," something that would be linked to the specific physical substance.

Not a specific substance, but a specific *structure* and *functionality* of substance, regardless of materials. For example, many things, from gas to solid, can be made to lase. It's not about material, as such, but about how that material functions. Any material with the right function can lase. Those without it cannot.

> If the simulation was actually just random numbers, it would be incapable of simulating anything interesting.

By "random-ish" I was being careful about the mathematical definition of random. For all intents and purposes, they're random, especially in the binary form inside the computer. There would be certain repeating patterns just from the nature of binary data, but they would have no connection to what the data represented.

We might have an opportunity for an interesting test if you're up for it. I'll present a data structure for some simulation, and you try to figure out what it could be. I actually have no clue how that would turn out. (Or what model to pick.)


> Consider what that *process* actually is -- a massively parallel analog system where some 500 trillion synapses are each their own complicated chemical analog system.

I don't really see why that matters unless the analogue information is so essential to the process that you couldn't even have the same process without it, and I really doubt this. In fact, I don't see how this could even be possible. You can approximate any analogue information arbitrarily closely with digital information, and there has to be some limit to the resolution of the analogue information in the brain anyway - at some level, it can't matter to its functioning because it would be washed out by noise.
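To put a number on that (a toy sketch; the value and bit depths are arbitrary): quantizing an analogue value at increasing resolution halves the worst-case error with every extra bit, so the digital approximation can be pushed below whatever noise floor the real system has.

```python
# Quantize an arbitrary "analogue" value in [0, 1) at increasing bit depths.
value = 0.7390851332151607
for bits in (4, 8, 16, 24):
    levels = 2 ** bits
    quantized = round(value * levels) / levels
    print(f"{bits:2d} bits: error = {abs(value - quantized):.3e}")
```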

> Computationalism holds that *simulating* that analog process numerically should result in outputs indistinguishable from analogous outputs in a human. For example, if connected to voice gear, it could generate signals that made speech. And that may be correct, but I have some skepticism.

This is guaranteed to be true, since the laws of physics are computable.

> Not a specific substance, but a specific *structure* and *functionality* of substance, regardless of materials. For example, many things, from gas to solid, can be made to lase. It's not about material, as such, but about how that material functions.

If you define "laser" to refer just to structure and functionality rather than specific substance, it's no longer so implausible to say that a computer simulation in some sense lases, though. If defined broadly enough, you could say that some pattern of information in the computer is lasing.

> For all intents and purposes, they're random

Certainly not. If they were really random, they wouldn't simulate anything. But they do simulate something. That's a pretty big intent and purpose for which they are not random, and it's the only one relevant here.

> We might have an opportunity for an interesting test if you're up for it. I'll present a data structure for some simulation, and you try to figure out what it could be. I actually have no clue how that would turn out. (Or what model to pick.)

This would be a terrible test. My position is that patterns exist that are related to the structure of the thing being simulated. This in no way implies that they're easy for a human who doesn't know what's being simulated to identify.

author

> I don't really see why that matters unless the analogue information is so essential to the process that you couldn't even have the same process without it,

Yes, exactly. As you say we can only *approximate* an analog process. Computation requires numbers. Numbers require rounding off reality. Chaos theory tells us we're immediately screwed when we do that. Worse, in many systems, the higher the precision, the faster chaos sets in. The Mandelbrot is an excellent illustration of this near its boundaries. Numbers that match to 20 decimal digits before they vary can return entirely different values. (I'll try to find a rendering and post it on Notes.)
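The same sensitivity is easy to demonstrate with any chaotic iteration. Here's a minimal stand-in (the logistic map rather than the Mandelbrot iteration itself, but it's the same phenomenon): two starting values that agree to twelve decimal places part company after a few dozen steps.

```python
# Two trajectories of the chaotic logistic map x -> 4x(1 - x),
# starting 1e-12 apart, are completely decorrelated by n ~ 50.
x, y = 0.3, 0.3 + 1e-12
for n in range(1, 61):
    x, y = 4 * x * (1 - x), 4 * y * (1 - y)
    if n % 10 == 0:
        print(f"n = {n:2d}   |x - y| = {abs(x - y):.2e}")
```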

> If defined broadly enough, you could say that some pattern of information in the computer is lasing.

No. I'm sorry, but there's just no way that's true. I have 47 years of computer science background and an even longer interest in basic physics. Please trust me on this. Lasing has a very specific definition. Remember it stands for Light Amplification by Stimulated Emission of Radiation. There is also such a thing as a maser - Microwave etc. These devices "mase" using a similar physical process but at much lower frequencies.

> Certainly not. If they were really random, they wouldn't simulate anything. But they do simulate something.

Yes, when combined with code written to process them. They're not random in the sense they could be just any numbers. They're definitely specific numbers. They're random in that any numerical analysis would turn up no obvious patterns. Watching the CPU execute the sim would be essentially indistinguishable from watching it simulate Tetris (all computer games are sims).

They're not random in that we have in mind a specific map from the numbers to reality. In the context of that map, we consider them meaningful. But that map is entirely arbitrary -- it depends on how we design it -- and the numbers in the computer are even more arbitrary and abstract and subject to our design. So my point is there is a Digital Divide between reality and a numerical sim, and while sims generally give adequate approximate results, my skepticism lies in that maybe adequate approximations aren't adequate enough for consciousness.
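Here's a tiny illustration of what I mean about the map being arbitrary (the value is made up for the example): the same four bytes are a sensible float under one interpretation and an unrelated-looking integer under another. Nothing in the bytes themselves says which map applies.

```python
import struct

raw = struct.pack(">f", 98.6)           # four bytes written under a "float" map
as_float = struct.unpack(">f", raw)[0]  # ~98.6 when read back under the same map
as_int = struct.unpack(">I", raw)[0]    # a large, meaningless-looking integer under an "int" map
print(raw.hex(), as_float, as_int)
```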


> Yes, exactly. As you say we can only *approximate* an analog process. Computation requires numbers. Numbers require rounding off reality. Chaos theory tells us we're immediately screwed when we do that.

Remember that this is an "in principle" argument. It's not impossible to get enough precision to have a good approximation even in a chaotic system - it just takes an impractical amount of computing power. And I'm not aware of any evidence that there's enough chaos in the brain to prevent a simulation at the neural level from being a very good approximation of its behavior anyway. If the brain really did depend to an extreme degree on tiny sub-neural fluctuations, it wouldn't be very good at serving its purpose.

> The Mandelbrot is an excellent illustration of this near its boundaries.

I already know about the Mandelbrot set, but I don't see how it's particularly relevant here. Real objects aren't like the Mandelbrot set because they can only have complexity down to a certain scale.

> Lasing has a very specific definition. Remember it stands for Light Amplification by Stimulated Emission of Radiation.

Yes, but in your previous reply, you rejected that definition of laser, since you pointed out that things other than light can lase. My point here is not that the information in a computer is a real, physical laser. It's just that if you go far enough to define something purely in terms of its abstract structure, a pattern of information can end up meeting the definition because it shares the abstract structure.

> They're random in that any numerical analysis would turn up no obvious patterns.

Yes, of course they are random-ish in this sense. But this sense is not enough to get you to the claim that there is no structure in a computer simulation that mimics the structure of the thing being simulated. It doesn't matter whether the pattern is obvious or whether it shows up in standard techniques. After all, standard techniques are designed to detect totally different patterns, like, "The sequences of 5 consecutive bits aren't uniformly distributed across all 32 possibilities," or, "The sequence of numbers are the values of a simple mathematical function."

> They're not random in that we have in mind a specific map from the numbers to reality. In the context of that map, we consider them meaningful. But that map is entirely arbitrary -- it depends on how we design it -- and the numbers in the computer are even more arbitrary and abstract and subject to our design.

Of course there are some arbitrary decisions you have to make when designing the simulation. But, according to the view that a simulated brain would be conscious, all possible simulations would be conscious, regardless of which arbitrary decisions were made. The fact that there are arbitrary aspects of the simulation would only be an objection if those arbitrary choices actually mattered to whether the brain was conscious.

> So my point is there is a Digital Divide between reality and a numerical sim, and while sims generally give adequate approximate results, my skepticism lies in that maybe adequate approximate aren't adequate enough for consciousness.

If there were a way to design an absolutely perfect simulation of the brain, do you think that would be conscious? If not, do you think there's a hard divide between almost-perfect and absolutely perfect that makes the difference?

author

> "It's not impossible to get enough precision to have a good approximation even in a chaotic system."

In a *dynamic* system, yes, it is. That was the point of showing you the Mandelbrot. It's not a matter of the precision of one number. Those renderings involved a precision of over 21 digits. The problem is the *evolution* of the system -- the next number and the one after that and so on. In a dynamic system, chaos corrupts that evolution, sometimes very quickly. (Are you familiar with the Lorenz Butterfly? Another demonstration of the same thing.)
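For what it's worth, here's a minimal sketch of the Lorenz divergence (crude Euler stepping, standard parameters; the step size and perturbation are just illustrative). Two trajectories that start one part in a billion apart end up in completely different places within a few dozen time units.

```python
# Lorenz system, sigma=10, rho=28, beta=8/3, naive Euler steps.
def step(x, y, z, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    return (x + sigma * (y - x) * dt,
            y + (x * (rho - z) - y) * dt,
            z + (x * y - beta * z) * dt)

a = (1.0, 1.0, 1.0)
b = (1.0, 1.0, 1.0 + 1e-9)        # differs in the ninth decimal place
for i in range(1, 3001):          # roughly 30 time units at dt = 0.01
    a, b = step(*a), step(*b)
    if i % 1000 == 0:
        sep = sum((p - q) ** 2 for p, q in zip(a, b)) ** 0.5
        print(f"t = {i * 0.01:4.0f}   separation = {sep:.2e}")
```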

> "It's just that if you go far enough to define something purely in terms of its abstract structure, a pattern of information can end up meeting the definition because it shares the abstract structure."

No, dude, I'm sorry, that just doesn't follow. Sharing the *abstraction* of lasing is no closer to lasing than an abstraction of water is wet.

Your next couple responses suggest you don't have much background in programming or computer science. It would take too long in these comments to get into it, but posts I'll be writing in the future will be covering this. Alternately, I've been writing about this stuff for over a decade on my old blog. If you're interested:

https://logosconcarne.com/tag/computationalism/

> "If there were a way to design an absolutely perfect simulation of the brain, do you think that would be conscious?"

I answered this in the post. In part, I wrote:

"But such a simulation might be of a comatose mind, or a mind filled with digital static, or an insane mind, or any of a number of possibilities. There is a bullseye to the target, a working mind, but there are more ways to miss, and when has software ever worked right, anyway?"

Bottom line, as I said repeatedly, I honestly don't know. But I'm a bit skeptical.

author

I'm dubious the Chinese Room could pass a Turing Test, let alone a Rich one. The first wave of AI, expert systems, quickly discovered how difficult it is to be exhaustive with answers. Even a thought experiment version seems limited to a kind of robotic information desk. A person can easily answer queries about raves and rants that seem hard to encode into mechanical queries.

I think Searle's notion here can be read to suggest *we're* just lookup devices, our responses all canned. Or it can be read to suggest computers -- mere lookup devices -- can't be conscious. I lean towards the latter reading. Exactly as you say, *we're* not lookup devices, so the Room just isn't a good analogy other than, perhaps, to show how we can be fooled by a system into thinking someone's there.


When I say the Chinese Room RTT, I mean it purely in thought experiment land. Nothing like that could ever be built in real life because the lookup table required to find a realistic answer to any human query would have to be combinatorially huge. In thought-experiment land, you could make one by having every possible conversation with a real person that fits into a human life span and recording all the responses. Of course, the fact that this is clearly not how we learned to converse and that this doesn't match the structure of our brain is in my mind what makes the Chinese Room thought experiment so weak, since it would only refute a view that says structure is irrelevant. I don't think it matters whether we interpret it as, "Searle is claiming that humans are just look up tables," or, "Searle is claiming that computers are just lookup tables," because both claims are false.

author

Computers *are* lookup tables -- or can be successfully implemented as such. The question is whether consciousness can be implemented that way. If not, then humans don't really enter the picture at all. But if a lookup system can at least *appear* conscious, then it raises questions about human consciousness.

Searle's actual formulation of a file system is a bit like a Turing Machine compared to a laptop. LLMs are showing us a different view of lookup systems that encode vast amounts of information ("the whole internet") in a kind of holographic way in a very high dimensional space. High as in hundreds or thousands of dimensions. It's almost analog in function, which gives it the ability to superimpose data on itself. They're a whole new approach to computationalism, and I haven't made up my mind about them.


Computers can implement lookup tables, but a lookup table isn't a computer in any meaningful sense. It doesn't do any computation.

I'm not sure what most computationalist philosophers think of LLMs. I'm skeptical that they could ever be conscious because they acquire their outputs in such a different way from how humans do it. I think the Chinese Room is a good argument for why LLMs shouldn't be assumed to be conscious even if they pass the Turing test.

author

> Computers can implement lookup tables, but a lookup table isn't a computer in any meaningful sense. It doesn't do any computation.

Remember addition and multiplication tables from grade school? Just lookup tables. The computer only needs a table for one digit and for what to do with carry. It just applies that to all digits. Any math function can be implemented by a lookup table.

Look at it this way. Any computation is determined, right? Absent noise or deliberate randomness, the same computation with the same input always returns the same output. That's a lookup table. This equivalence is fundamental in computer science.
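Sketched out, it looks something like this (a toy decimal version rather than binary, but the mechanism is the same): one small table gives the result digit, another gives the carry, and the same two tables get reused for every column.

```python
# Build the one-digit tables once; the per-column "computation" is then pure lookup.
SUM_DIGIT = {(a, b, c): (a + b + c) % 10  for a in range(10) for b in range(10) for c in (0, 1)}
CARRY     = {(a, b, c): (a + b + c) // 10 for a in range(10) for b in range(10) for c in (0, 1)}

def add_by_table(x, y):
    xs = [int(d) for d in str(x)[::-1]]   # least-significant digit first
    ys = [int(d) for d in str(y)[::-1]]
    carry, digits = 0, []
    for i in range(max(len(xs), len(ys))):
        a = xs[i] if i < len(xs) else 0
        b = ys[i] if i < len(ys) else 0
        digits.append(SUM_DIGIT[(a, b, carry)])
        carry = CARRY[(a, b, carry)]
    if carry:
        digits.append(carry)
    return int("".join(str(d) for d in reversed(digits)))

print(add_by_table(4821, 379))   # 5200, without ever "adding" more than one digit at a time
```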

As for LLMs. 🤷🏼‍♂️


Wonderful post!

I'm honoured to play a small role in getting you back to writing and thinking more about consciousness. I really enjoyed this post -- so, I say -- write more!

I love reading about how people's views on consciousness morph over time -- you've gone from assuming robots (i.e. computationalism) to your current skepticism about them. But then you mention that LLMs have 'dented your skepticism a little'. I wonder why this is?

You mentioned a few things you think might be important for consciousness -- fuzzy thinking and forgetfulness, sleep and dreaming, embodiment. Sleep, dreaming, and embodiment don't seem to apply to LLMs (not unless we stretch those definitions beyond recognition), so is it "fuzzy thinking" that is putting that dent in your scepticism?

author

Thank you! It's actually a little intimidating writing amongst so many professionals. The honor flows both ways and then some.

LLMs give me pause because it's the first time we're seeing the effect of scale, both in the compute model and in the data. Not close to being on par with the scale of the human brain, but more a step in that direction than we've seen. I think combining LLMs with neuromorphic chips could take us even closer. Such systems may offer a strong test of computationalism.

That said, LLMs remain essentially search engines, and perhaps this is common coin, but it only struck me the other day that they're essentially Searle's Chinese Room embodied. And I suspect no more conscious. That still lies with the makers. So, I remain skeptical (but, damn, those LLMs are a bit eerie).

Also, as you say, no fuzzy thinking in LLMs. Presumably, identical inputs result in identical outputs (unless there's some randomization function). My thoughts on fuzzy thinking come from other sources (admittedly, some science fictional). It's just the thought that *our* thinking is fuzzy, and sleep is instrumental for learning, so maybe those are important for consciousness as we recognize it. (I find it interesting that people are generally bad at math and wonder how that might connect with consciousness.)


This angle might both address the point of comparison here between LLMs and Searle's room, and present a first approximation defense that the biological substrate may prohibit true equivalence to support a functionalist replication on different substrates.

The angle here is to explore the idea that sensitivity is non-native to unliving substrates and that specificity is non-native to biological substrates. However, processes of sensitization combined with desensitization could, in theory, approach a kind of equivalence principle with specificity, effectively being sensitive to what something is while being insensitive to what it isn't. Conversely, LLMs presume specification, and upon failure engage inspecificity, which can functionally mimic sensitivity under the right conditions, to "dissolve" that specificity.

LLMs would be more than Searle's room in this telling since there is an element of self-controlled adjustment with a dash of functional stochasticity.

A living substrate, on the other hand, may require a different kind of functional noise. I would posit that what we experience as "dissonance" is a far more general magnitude of anticipated need for change, and the "target" for the offloading (think cathartic expressions, actions, social blame etc) is dialectically selected rather than specifically or stochastically. This would be akin to how metabolism breaks down physical complexes to release/store "caloric energy" as a kind of general/potential energy, except in the inverse. Here, dissonance would be akin to accumulating "Rounding errors" relative to body budgeting (Lisa Feldman Barrett), and not earmarked with some specific chain of causes or some set of known effects needed (unlike what we describe as "pressure").

author

>>> "...sensitivity is non-native to unliving substrates and that specificity is non-native to biological substrates."

Not sure I'm clear what you mean. I know of inorganic materials more sensitive than organic ones, and biology can be specific, so I think I'm not understanding how you mean sensitivity and specificity.

You raise an interesting point about LLMs being more than Searle's Room. Their answers are predictions, potentially wrong to the point of being hallucinations -- utter fantasies. The GFR always provides a correct answer. (Or none at all, its fallback for an unknown query being "Sorry! I don't know.") On some level, that almost makes LLMs lesser.

Dissonance is another interesting point. Tina just finished a series of posts about her husband's book, "Truth & Generosity", which considers those terms in terms of how we understand language (and more). In accord with both of you, I visualize dissonance as an irritating difference between our mind's phenomena-based model of reality and some perception we can't fit to it closely enough. Humor often comes from a dissonance that amuses us.

Not sure if this was responsive to your comment, though...


On the specificity/sensitivity front, it might be wise for me to first ask, how familiar are you with the two terms in medical testing, bayesian inference, and especially the false positive paradox? I will also note that one of the common uses of "sensitivity" is "sensitive to initial conditions," which is somewhat antithetical to what sensitivity means in other contexts. LLMs are biased toward the initial conditions meaning, which annoys me.

LLM hallucinations are one of the reasons why the framing of specificity and sensitivity matters if we want to properly detect and correct such cases. For example, is it possible for an LLM to "hallucinate correctly," and would we have any means of detecting it? It sounds silly at first, but so too does a binary test with "99% accuracy" telling you that you have a 10% chance of having a disease.
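Here's the arithmetic behind that 10% figure, assuming (just for illustration) a prevalence of one in a thousand:

```python
prevalence  = 0.001   # 1 in 1,000 people actually have the condition
sensitivity = 0.99    # P(test positive | have it)
specificity = 0.99    # P(test negative | don't have it)

true_pos  = prevalence * sensitivity               # people correctly flagged
false_pos = (1 - prevalence) * (1 - specificity)   # healthy people wrongly flagged
posterior = true_pos / (true_pos + false_pos)
print(f"P(condition | positive test) = {posterior:.1%}")   # roughly 9%
```

Almost all of the positives come from the 1% error rate applied to the overwhelmingly larger healthy population, which swamps the true positives.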

As regards dissonance, I will have to check out Tina's posts! Your visualization of dissonance appears to me similar to Festinger's "cognitive dissonance," though it sounds more general given your example of humor. I take it to be even more general, where the dissonance generated by the "grinding of the cognitive gears" is not limited to solutions which directly address the source of that grinding at all.

It's sort of like how most people seem like "impatient drivers" even if they seem like patient people otherwise. If you have an impatient boss waiting for you at work, and you have an impatient boss waiting for you at home, the person on the road who cuts you off is now the straw that broke the camel's back. "It's people like you...!!!" Then of course we learn our lesson and drive even more carefully.

Nah, we stew impatiently and cut someone else off in traffic because by god this world full of impatient people owes you. I think it was Brené Brown who described blame as "the discharge of discomfort." Notably, this is treating it like it is zero sum. One of Barbara Tversky's Laws of Cognition is that "when thought overflows the mind, the mind puts it out into the world." I would extend that to dissonance also.

I think of humor as being one of many sources of dissonance released cathartically because it is shared, and rather than merely dissipating, it forms a kind of shared sentiment, which you might put in terms of "consonance" if you were feeling cheeky.

author

Ah, perhaps that explains my confusion. Biology and statistics, especially the latter, aren't anywhere near my wheelhouse. As you say, "sensitivity" has broad application, from sensors to emotions.

What do you mean by "hallucinate correctly"? That seems an oxymoron to me.

I agree with what you say about dissonance. I would add, with regard to humor, that a sense of joy is part of it, too. I've long thought the answer to the question, "Why do whales breach?" is simply for the sheer joy of it. Wouldn't an intelligent creature confined to the water find it *fun* to leap momentarily into an alien world? Fond teasing and pranks are apparently not restricted to humanity!


Nice piece. Thank you. Agree on Suzi Travis - she’s a great educator imo.


Great post! I'm not entirely sure I see the difference between structuralism and functionalism, though. Is it just that structuralism is essentially functionalism minus computationalism?

"Something that might be important for true machine consciousness is fuzzy thinking and forgetfulness. These are self-evident in our experience, but what if they’re instrumental to consciousness? What if sleep or dreaming are important? Some believe an actual body, with all that entails, is necessary. If consciousness evolved to let us navigate the physical world, how important might that physical world be as a foundation for consciousness?"

These are great questions. I can't see how we can simply wave away embodiment or interaction with the world—and other consciousnesses.

My main problem: If you're a functionalist and you create AI that meets your standards, you'd say you created consciousness...but others would disagree. There's no way to know which theory is correct; you would just be feeding your own assumptions that consciousness can be reduced to function. What bothers me most is not that I disagree with those assumptions (and I do), but when those assumptions aren't made explicit. I can't stand this sneaky trick when people start out by defining consciousness as phenomenal experience from the inside, a la Nagel's "what it's like", but then go on to address consciousness purely from the outside, without any sort of acknowledgement of the category error.

"Some argue that, because she knows everything there is to know about color, she does not gain new knowledge."

Thanks for bringing this up. Those who say that 'everything there is to know' must include phenomenal experience are agreeing with us that phenomenal experience contributes to knowledge. I have no problem with that!

I'm not sure what the wording is in the original thought experiment, but clearly whoever said "Mary knows everything there is to know about color" really meant "Mary knows everything SCIENCE tells us about color". The thought experiment is set up to exclude the phenomenal experience of color, leaving only what we know about color from the objective point of view—that's the point. Those who deny that Mary gains knowledge upon seeing the color believe the phenomenal experience of color in general gives us nothing whatsoever that can be called knowledge. (Otherwise they need to be prepared to make clear why Mary, in this particular instance, is being deceived about her experience. Which would be bizarre.)

And yet, without the phenomenal experience of color in general, there would be no scientific theory of color. We wouldn't know about color at all! From the scientific point of view, color is not an inherent property of matter, but exists only in perception. (In philosophy, this is what's called a 'secondary property': it doesn't exist independent of minds.) If you deny that perception or the experience of color contributes anything whatsoever to knowledge, you also deny the possibility of a scientific knowledge of color. See how that snake eats its own tail?

author

>> "Is it just that structuralism is essentially functionalism minus computationalism?"

A fine way to put it. Structuralism requires physical implementation. It can't be computed.

I'm sure it's no surprise that I agree with your analysis there. I think that category error is fairly common. As you point out, it often happens with regard to poor Mary and her prison. To me the two categories of knowledge seem so obvious that it's head-turning that others disagree. In the post I linked to, I mentioned my own "Mary's room" in skydiving. I knew a fair bit about it objectively, but the subjective experience was a radically different realm. And impossible to communicate to others. If you know, you know.

FWIW, I quibble a bit that color is not an inherent property. How we experience color isn't inherent, but the electromagnetic radiation reflected and/or emitted from an object is. (Gold is a fun one, because its slightly unusual quantum properties are why it's the shiny "gold" color that it is.)


Good! Now I think I finally have an idea of what structuralism is. Thanks!

On skydiving: "I knew a fair bit about it objectively, but the subjective experience was a radically different realm." Ha! I bet it was. I would probably have a panic attack before I even got on the plane. If someone said to me, "Tina, you have a choice between learning about skydiving objectively or subjectively," my answer would be immediate: "Objective please."

As for color, what I meant was, if no one ever experienced color, how would we know about it in terms of electromagnetic radiation or light reflected from an object? Maybe it's conceivable we'd find out, but hard to imagine how.

author

Ha! Well, you know how *I* define structuralism, anyway. 😁 (I thought I made up the term, but I think I did run into it in the literature at some point. 🙄)

I was told that, for every ten people who decide, "I'm definitely gonna go skydiving!", on average, only one does. I saw groups show up at the drop zone for a group jump only to have most of them chicken out in the face of reality. Some have even changed their mind in the air and stayed in the plane. All entirely understandable and no one looks at them funny. It's a pulse-pounding thing, but life-changing. As they say, when you exit that plane, you are dead, and it's up to you to save your life (it's not quite that extreme, but close). The large difference between knowledge about and knowledge of makes the comparison to Mary's Room entirely apt.

I think we'd definitely find out once we invented instruments. Similar to how we now know some animals use ultrasound or ultraviolet. The EMF emitted from an object is a physical aspect of reality, so we'd definitely discover it once our technology was capable of exploring that sector.


Chalmers talks about structuralism a lot in his book Reality+, but I found his book VERY confusing.

I'm not surprised people chicken out of jumping out a plane! I wouldn't even think of doing it. I'm not too keen on climbing beyond the second rung of a ladder, much less free falling from the sky. Oh hail no.

"I think we'd definitely find out once we invented instruments."

But why would we think to invent instruments to detect color if we didn't even know color existed? That's sort of like saying, "I'm going to discover teeny tiny living breathing unicorns once I make this instrument." (Of course the whole set up is pretty ridiculous since it's hard to imagine we could have visual experiences without any sort of color.)

author

I haven't read Reality+ but my library has it. (It's the only Chalmers book they have.) It's currently checked out, but I put it on hold. I like Chalmers but think he can sometimes get lost down his own rabbit holes. I wrote a three-part post examining, and rejecting, his "topological invariance" idea about computationalism.

Heh. One of the things I liked about skydiving was the way big brave guys were unashamedly chicken about it. OTOH, *I'm* that way about the idea of being on a motorcycle. I think *that's* crazy.

You are correct that we wouldn't invent instruments to explicitly look for teeny light unicorns if light was utterly unknown to us. But the EMF emitted by all objects is a physical reality, and the instruments we would build to explore physical reality would inevitably stumble upon it. Not a matter of making a machine to see if unicorns exist but a machine to see *what* exists in realms beyond our physical perceptions. The contents of a drop of pond water were a complete surprise to van Leeuwenhoek. We had no idea those unicorns (micro-organisms) existed.


I’m not sure you’ll like the Chalmers book if you like his academic papers. He’s much breezier in the book. And it’s written in a choose your adventure sort of way. I did like the last few chapters.

No motorcycles huh? I get that. I do have a motorcycle license though, believe it or not, but that was back when I had a motor scooter.

On color, I think you’re probably right that we’d find out something about its existence by accident, but would it be color that we found out about? See what I mean?
