Great post! Looks like we’re jumping the gun a bit on simulated consciousness.
Thank you! Yes, I think we're a long way off from being able to accomplish a truly accurate simulation.
And it gives one a sense of what would be required for us all to be living in a virtual simulation. The compute resources are staggering and, I think, likely out of reach of even an advanced civilization (although it's foolish to predict future advances).
Bottom line, however, is that all of this is still just data. We, our bodies and brains, are nothing more than chemical data processors. Nothing mystical or mysterious about that.
Exactly *why* (and how) chemical data processors have self-awareness is a huge mystery, what David Chalmers termed "the hard problem".
Indeed, the world can be viewed as data, but until we reach the quantum level, it's mostly analog data whereas inside a computer it's all digital data. There are significant differences between them, something I plan to write about in the next post in this series, so stay tuned.
I submit that the Hard Problem is humans elevating their data processing above its actual operating capacity. Gather enough data with enough processing with enough sensory input with enough feedback with enough memory and the thing that encompasses all this enough will declare "I'm conscious!" Humans are not special, only complex.
I think we could be called "special" in how highly complex we are. We're the only thing the universe creates that asks questions about the universe. If one accepts the premise that intelligent life such as us requires at least six events with 1:10,000 odds, then the odds of intelligent life are 1:1,000,000,000,000,000,000,000,000. So, "special" in being rare and complex, but not magical or mystical.
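To spell out the arithmetic behind that figure (taking the six-events-at-1:10,000 premise as given, which is an assumption, not an established number):

```python
# Back-of-the-envelope check: six independent events, each with 1-in-10,000 odds.
per_event = 1 / 10_000
combined = per_event ** 6
print(combined)  # 1e-24, i.e. 1 in 1,000,000,000,000,000,000,000,000
```

So the quoted 1:10^24 is just 1:10,000 compounded six times, assuming the events are independent.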
I agree that self-awareness is very probably what the kind of information processing brains do feels like from the inside. It's the Hard Problem because we have no physics that accounts for it. There are things arguably more complex than human brains, yet these things show no signs of self-awareness. So, something about brains is if not special at least different from anything we know.
If special = rare, then yes. I believe we conflate our introspection and sense of otherness with being special rather than admit we're merely complex.
Speaking only for myself, "complex" doesn't even begin to explain "self-aware". Complexity certainly seems required, but it doesn't seem sufficient in itself.
My point precisely.
And yet, humanity produces instances all along the spectrum from the pathologically evil to the breathlessly altruistic. Wouldn't biology's predisposition towards self-preservation and preservation of the species naturally constrain these extremes into some sort of uniformity? (If this is too far off-topic, just say so.) -jgp
I don't know if it's off-topic or not because I'm not clear on exactly what you're responding to: our rarity, our complexity, the Hard Problem, or something else.
With regard to uniformity, a notable thing about human brains is that they've allowed us to transcend biology and evolution. One consequence of that is nonproductive behavioral traits. Our minds allowed us to become the top dogs and exist in almost every ecological niche, but it also brought a host of extreme behaviors. Everything has a cost.
I re-read your post and cannot identify a specific passage that relates to my comment (it was late in the day). In fact, I think I had wandered into the nature vs nurture discussion which has no bearing on the simulation of a basic brain. I suppose, should someone be successful in producing and operating a functioning brain simulation, one could then run trials of experience or even brain configurations to determine under what conditions certain behaviors arise. Thanks, -jgp
Isn't it generally accepted that we humans use only a small percentage of our brains? And any artificial intelligence starts small, focused on a limited set of tasks, or else emulates a simple organism. These factors would chip the number of circuits needed down to more manageable levels.
If you mean the thing about humans using only 10% of their brain, that's a myth. All animals use all parts of their brain pretty much all the time. We have created subsets of brains for simple tasks. A thermostat is a very simple version, LLMs are a much more involved version, yet those are still only fairly crude subsets of human brains.
The ultimate goal is AGI -- Artificial General Intelligence. What we have. The ability to solve novel problems and come up with new ideas. The only example of general intelligence we have is us, so it's possible AGI requires something similar to us. Hard to say given how much we don't yet know.
I have a question:
Am I right in thinking that the signals that pass through a synapse into the waiting dendrites are an on-off kind of deal rather than a continuous signal? Similarly, is the signal that makes it to the axon from the cell body on-off?
Yes. I think one reason people (mistakenly, IMO) conflate neurons with logic gates is that both have "on" and "off" states. Neurons are "firing" or not firing. When they're firing, the signal they produce is a series of pulses (the timing of which may contain analog information). These travel down the axon to the pre-synaptic terminal, where they trigger the release of neurotransmitters that cross the synaptic cleft to the post-synaptic receptors. Enough neurotransmitters in a group of receptors generate a signal within the neuron, which integrates the signals from all the synapses on its dendrites in deciding whether to fire. Some of those synapse signals can be inhibitory -- if they "win", they suppress the neuron from firing.
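To make that integrate-and-threshold idea concrete, here's a minimal toy sketch (invented names and numbers, not a model of real neuron physiology): it sums weighted excitatory and inhibitory synaptic inputs and "fires" only if the total crosses a threshold.

```python
def neuron_fires(synapse_inputs, weights, threshold=1.0):
    """Toy integrate-and-threshold neuron (illustration only).

    synapse_inputs: 0/1 activity at each synapse.
    weights: signed strengths -- positive = excitatory, negative = inhibitory.
    Returns True if the summed input reaches the firing threshold.
    """
    total = sum(x * w for x, w in zip(synapse_inputs, weights))
    return total >= threshold

# Three excitatory synapses outvote a weak inhibitory one here...
print(neuron_fires([1, 1, 1, 1], [0.5, 0.4, 0.4, -0.2]))  # True
# ...but a strong inhibitory input suppresses firing:
print(neuron_fires([1, 1, 1, 1], [0.5, 0.4, 0.4, -1.5]))  # False
```

Real neurons are far messier (pulse timing, neurotransmitter dynamics, and so on), but this is the cartoon version people have in mind when they compare neurons to logic gates.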
Thank you! Someone needs to figure out what that analog information in the pulses is all about!
Yep. All part of the study of “consciousness correlates”.
I would echo Dave below that the neurons involved in the senses, perception, and consciousness are a tiny fraction of the brain's roughly 86 billion neurons.
Also (random thought with no prior expertise), I wonder if there is an analogy with cell development, where some 20,000 genes result in an incredible array of heart cells, muscles, and neurons, as well as toes, eyelashes, and intestines. Maybe the brain doesn't need all those neurons for thinking.
I'd echo my response that the brain seems to be a holistic instrument, and we don't know what's required for consciousness and what's not. On the one hand, people have survived massive brain injury. On the other, small amounts of certain chemicals completely disrupt consciousness.
Consider that the brain takes 20% of the body's energy to maintain. That's a huge share for one organ. Why would evolution bother with such a wasteful organ if it didn't have demonstrated survival value? Certainly, some fraction of the brain is devoted to autonomic functions, but our big brains are largely devoted to thinking. (Our brains are so big they make live birth slightly problematic for humans.)
I suspect that, for consciousness, you have to solve for simulation and emulation simultaneously, which is unlikely to be within our wheelhouse because we are still limited in social bandwidth to mirroring (across numerous physical levels), synchronization, and then fitting a larger theory of minds, our own and others', into a limited working-memory pipeline.
The limited working memory bandwidth may have been evolution's sneaky solution to getting around the black box. One of Barbara Tversky's laws of cognition was "when thought overflows the mind, the mind puts it out into the world." This means that keeping certain capacities limited forces "communication" even if it's an outflow of anger and violence. The theory of mind we build may require watching others interact with the world and with one another with the "overwhelm" being the richest source of information from which to infer. Tiny humans do have a way of testing those limits.
I don't understand what "solving for simulation and emulation simultaneously" means, so I'm not sure this is responsive to your point, but humanity -- some time ago I suspect -- has vastly more information than any one mind can hold. Much of what we do these days requires a team of experts in their respective fields. Think about all the minds that collaborate in designing and building a skyscraper or large bridge. The same is true of making movies or launching rockets into space.
Almost like ants. We're tiny in body and mind, but we can group together to accomplish far more than any individual could.
Couldn't agree more. Moreover, it helps me clarify what I mean. Try to formalize "where" in humanity all this information is "stored." Certainly a great deal is out there in the world in material we can reference, and that is a product, according to Tversky, of the same "overflow" of the mind. For example, the inability to distinguish larger numbers might partly explain why someone was motivated to use rudimentary tallying to "store" what the mind could not.
But equally important is the puzzle of how the use of tallying would replicate. Without some theory of mind, how would a second person infer the purpose or utility of tallying? It's unlikely to be genetic, and notice how being worse at counting might be a key ingredient relative to genetics -- essentially, having the same need and converging on a solution.
One system to evolve would be the saliency system: following gazes and noticing patterns, such as the tallying person pointing, marking, pointing, marking. Notice how this blurs into a chicken-and-egg problem between saliency and theory of mind. If our bodies were predisposed to mimicking, we might learn associatively through actions of which we understand nothing. Or we could have learned via "mirroring" (rudimentary emulation), inferring purpose when witnessing repetition.
Jump to modern day: if you are passing an aisle in a store and some gift idea "leaps out at you" when you see an item ("my ex would have loved that." *sob*), are you constructing this from some set of individual, associative facts, or were you emulating their perspective in some way you were not immediately aware of? I would say the latter: episodic memory is a more middle-out source of emulation than bottom-up simulation plus constructive association.
Which is why it's such a cool and weird feeling for a smell to "transport you" in both time and place, like a "side-loaded" frame of reference that even kicks out many salient sensory features of the real environment you occupy.
So the idea of "solving for both" here means a lack of clear hierarchy in many cases, allowing both internal competition and cooperation for immediate representation. Hence why things get weird when systems disagree. For example, "the room is spinning" can be caused by motion sickness, where visuospatial inputs disagree; by vertigo, where a tiny calcification in your ear botches your equilibrioception directly; or by drinking too much or eating something that makes you sick, such that your body budget gets confused about which sensory processes and integrations are worth funding. I'm sure that last one is metaphorically insufficient, but it seems better than a shrug.
Part of your comment involves a topic I may write about in the Math Musings newsletter: the inevitable discovery of counting, and hence math. As you say, it begins with the need to tally things. How many ships or sheep do you have?
Most civilizations initially use one-to-one representation: a bag of pebbles or knots in a string. *This* many ships or sheep. These usually come before notation, but eventually civilizations invent markings or symbols. Language comes before tallying; in small hunter-gatherer groups, there isn't much to count but much to talk about, so number words may be limited to "one", "some", "many", and "many-many" (essentially infinite). It's interesting that humans can instantly recognize groups of two, three, four, five, even six, but around seven or so, we have to stop and count.
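To make the one-to-one idea concrete, here's a tiny illustrative sketch (the pebble-and-sheep framing is invented for the example): with one pebble per sheep, two herds can be compared by pairing pebbles off, without ever naming a number.

```python
def tally(herd):
    # One pebble per sheep -- pure one-to-one representation, no numerals.
    return ["pebble" for _ in herd]

def same_size(tally_a, tally_b):
    # Pair pebbles off one-for-one; if either pile runs out early, the herds differ.
    a, b = list(tally_a), list(tally_b)
    while a and b:
        a.pop()
        b.pop()
    return not a and not b

spring_herd = ["sheep"] * 7
autumn_herd = ["sheep"] * 6
print(same_size(tally(spring_herd), tally(autumn_herd)))  # False -- a sheep is missing
```

Symbols and numerals come later; the pebbles already do the bookkeeping.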
Gift ideas. Hmmm. Thinking of the ones that leapt out at me, it was a recognition of a match between the item and the person. "Oh, X would really like that!"
Yes, smell is supposed to be highly evocative of memories, but I can't say I've ever experienced that call of the past due to a smell. Might be because I have almost zero feel for nostalgia. I'm more, "Let the past die. Kill it if you have to." 😆
Still not sure I understand what you mean about "solving for both". Simulation and emulation, as defined in the post, are labels for the ends of a spectrum of ways to generate a conscious mind numerically (that is, in a computer). They speak to how fine-grained the numeric model is. Simulation is a fine-grained physics model: cellular level at its coarsest, quantum level at its most fine-grained. Emulation is more black-box -- it considers larger parts of the brain as functional units not necessarily tied to how actual brains work, just providing the same outputs for given inputs. Not sure if that helps or muddies...
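To illustrate the two ends of that spectrum, here's a purely schematic sketch (the function names and granularity are my own invention, not anything from the post's models): simulation steps low-level state forward under the physics and hopes higher-level behavior emerges, while emulation treats a whole region as a black box that just maps inputs to outputs.

```python
def simulate_step(cell_states, physics_update):
    """Simulation end: advance every low-level unit (cells, molecules, ...)
    by the underlying physics and let higher-level behavior emerge on its own."""
    return [physics_update(state) for state in cell_states]

def emulate_region(inputs, black_box):
    """Emulation end: treat a whole brain region as a functional unit and
    reproduce only its input/output behavior, however it works inside."""
    return black_box(inputs)

# Trivial stand-ins, purely for illustration:
print(simulate_step([0.10, 0.20], lambda s: s * 0.99))   # fine-grained update
print(emulate_region([0.10, 0.20], lambda xs: sum(xs)))  # black-box mapping
```

The open question in the thread is whether anything along that spectrum, at any granularity, actually produces consciousness.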
If you've the time and interest, I highly recommend checking out Barbara Tversky's book "Mind in Motion." It deals with the origins of many representational systems, including tallying and maps. Or if you want a quick and dirty version, I recommend the talk she gave at Stanford that can be found on YouTube.
Relative to how you are defining simulation and emulation, I would say there is no combination from which we should expect consciousness to arise, short of a fuller replication of the physical substrate along with meaningfully convergent information being processed, which may or may not be feasibly reducible to a quantized arrangement.
One of the problems here is that any concept of simulation or emulation is based on the ways we think: what gets distilled as salient and what gets trivialized as noise. "It's like this mental thing, but minus that mental thing" is essentially the basis of our understanding of how computers work; then we use that understanding as a metaphor for how the mind might work, despite having no firm basis for our conceptual separations.
I do not mean to say that I can prove it is a false dichotomy, but I can point to several meaningfully complicating factors. For example, if we think that scientists are the ones best suited to weigh the evidence of what is physically real, and if we gain no more fidelity about their "weighting" process by measuring their brains, then scientists who declare measurement to be the foundation of science are caught somewhere between consistency and completeness. The sufficiency conditions of a biological substrate and the necessary conditions of physical reality have no guaranteed overlap (though we have many reasons to think there would be one), and their sum never reaches exactness. When we claim it does, that's our sense of sufficiency talking.
It seems easier to rule out emulation because, as you suggest, we don't understand consciousness well enough to emulate it. For the most part, such attempts come from the first era of AI, the attempts to build expert systems. That approach didn't work because it's difficult -- possibly impossible -- to extract a lifetime of expertise and experience from one human, let alone many.
What confounds me a little is that LLMs are the biggest advance ever towards AI, and their architecture is a rough numerical emulation of a neural net. LLMs are pure math, so they are necessarily limited in the Gödelian sense. If the Lucas-Penrose argument is correct, then LLMs will prove to be limited, incapable of consciousness, but we're still waiting for the plateau to happen (there are some indications the next generation or so will hit a limit). But for now, emulation doesn't seem sufficient.
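For what it's worth, the "software version of a crude neuron" underneath all of this is roughly the following, a generic artificial neuron rather than any particular LLM's internals: a weighted sum of inputs passed through a nonlinearity, stacked by the billions.

```python
import math

def artificial_neuron(inputs, weights, bias):
    """Crude software 'neuron': weighted sum plus bias, squashed through
    a nonlinearity. Modern networks are (very roughly) enormous stacks
    of units like this -- pure arithmetic all the way down."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))  # sigmoid activation

print(artificial_neuron([0.2, 0.7, 0.1], [0.9, -0.4, 0.3], bias=0.05))
```

Compare it with the biological sketch earlier in the thread: the integrate-and-threshold shape survives, but the pulse timing, the chemistry, and everything analog is gone.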
Simulation replicates the physics and hopes the higher-level functions emerge. Exactly as they do in the physical world. This post touches on how simulation might turn out to require quantum level replication. We know the brain is affected by blood chemistry and other low-level processes, so simulation would seem to need to replicate physics at least at a chemical or molecular level. Formidable. Perhaps beyond practical reach.
But I have a tough time saying exactly why a sufficiently detailed simulation wouldn't work. I floated some possible failure modes in the post, but I haven't found anything that definitely rules it out in principle. I remain skeptical of numeric methods, even so.
I haven't mentioned it since the first post, but I think replication stands the best chance. Replication of the brain's physical structure and function in physical, analog form. Same brain, different materials.
I have a vague recollection of someone (Guy Kawasaki?) getting hacked and embarking on a successful mission that ultimately nabbed the culprit. Someone wisecracked that all things considered, it would have been easier to escape had the perp simply burgled the hard drive. Strikes me as similar here. Blank slate synthetics still have to be trained up to knit the PFC and then that fully functional mind needs more training to become specialized.
Sensory inputs solve only part of the job of getting rid of mind-body duality. There also need to be all the positive and negative feedback loops. Plus, the human mind appears to rely heavily on emotion as an engine to produce solutions to limited rationality, imposing rules of choice on what would otherwise be a thought process unable to arrive at conclusions.
I would expect, assuming we get there at all, we'd start by scanning an existing mind and recreating the brain model by analyzing that scan. So, the PFC would be wired up, and the brain would have a lifetime of experience and acquired skills. Trying to synthetically grow a model from scratch would, I think, be *much* harder.
Simulations involve low-level physics and the assumption that high-level behavior, the mental feedback loops and emotions, emerges in consequence. What I call emulation (in contrast to simulation) is more concerned with the high-level behaviors. I wrote about emulation in the previous post in the series: https://logosconcarne.substack.com/p/digital-emulation
I do think emulation is harder and less likely to work than simulation. But LLMs are a type of emulation, software versions of crude neurons, and they've provided the greatest advances in AI so far. I think they'll plateau far short of AGI, but I may end up eating my words. What they've accomplished is impressive.