4 Comments
Tina Lee Forsee:

"Calculation (computation) and evaluation are not the same."

Haha... you won't have to convince me. I'm pretty sure my mind is involved in a great deal of the latter and not much of the former. So I fully admit to skipping over all the numbers.

The other day I was trying to get Claude AI to argue with me in a meaningful way, but it either pandered to me ("Thank you for pointing that out, you are correct... blah blah blah") or, when I pointed out that it was pandering, feigned indignation ("I am not! I was merely pointing out that..."). Finally it just gave up and admitted it was pandering. The obvious thing missing from the discussion was the usual nonsense real people say when they're digging in their heels. I find it hard to believe that emotions, especially subtle ones such as 'digging in one's heels,' could be computationally replicated. It's not just emotion, it's a repressed emotion; it's subtle and tricky to analyze in ourselves, so how on earth do we think we can turn it into an algorithm? And it's not as though we all behave in the same way.

Not to diminish what's going on with AI these days, which is truly impressive. But still, I'm nowhere near worried about it becoming conscious.

Wyrd Smythe:

No, neither am I with the current technology, and I don't think we even know yet whether the LLM architecture is the right way to go. Nor is it clear that a *simulation* of a neural network works the same as a physical one. (A key topic of these posts.)

We recently discussed scale, and I think it pops up again here. Despite the billions of parameters used in LLMs, they're still far below the scale of the brain, with its 500 trillion synapses. New phenomena (e.g. the "wetness" of water) can emerge at large scale, so it's hard to judge what truly large-scale LLMs might do. OTOH, there are some signs that LLMs are maxing out and may have a ceiling.

The thing about LLMs is that they're a close analogue to Searle's Chinese Room. The data storage and indexing methods are different, but an LLM is essentially a stochastic search engine. Their architecture makes them rough analogues of brain neural nets, so LLMs are (in my taxonomy) emulations, which I've thought might have the lowest chance of producing consciousness compared to simulation or replication.
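
(If it helps, here's a minimal, made-up sketch of what I mean by "stochastic": the model assigns probabilities to candidate next tokens and samples one. The vocabulary, probabilities, and function names below are invented for illustration, not taken from any real model.)

```python
import random

# Made-up next-token probabilities for some context; a real model's
# distribution covers its whole vocabulary (tens of thousands of tokens).
next_token_probs = {
    "the": 0.42,
    "a": 0.23,
    "consciousness": 0.05,
    "water": 0.02,
}

def sample_next_token(probs, temperature=1.0):
    """Sample one token; lower temperature sharpens the distribution,
    higher temperature flattens it (more varied, less repeatable)."""
    tokens = list(probs)
    weights = [p ** (1.0 / temperature) for p in probs.values()]
    return random.choices(tokens, weights=weights)[0]

# Same prompt, potentially different output each time -- that's the
# "stochastic" part of "stochastic search engine".
print(sample_next_token(next_token_probs, temperature=0.8))
```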

Re your experience with Claude, as I understand it, current LLMs have a lot of support software designed to prevent the LLM from giving out problematic information, or from acting naughty or mean. As people find new ways to get around the protections, new protections get added. It's an ongoing arms race that may also put a ceiling on LLMs. So, who knows where it all ends.

It is interesting that LLMs turn out to be bad at math. They're good at answering test questions with answers found in the training data but pretty awful at original (and simple) math questions. It's interesting because humans are also notoriously bad at math. (In part because it is a challenging skill to acquire.) I wonder what, if anything, that implies for computationalism.
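
(To make the contrast concrete, here's a toy, purely illustrative comparison between recalling an answer that happened to be in the training data and actually computing one. Real LLMs don't store literal Q&A pairs like this; the dictionary and functions are invented, just the failure mode in miniature.)

```python
# "recall" stands in for answers effectively present in the training data;
# "compute" is what a calculator does. All data here is made up.
memorized_answers = {
    "What is 12 * 12?": "144",
    "What is 7 + 5?": "12",
}

def recall(question):
    """Answer only if this exact question was 'seen' before; otherwise
    produce a confident-sounding guess, which is roughly the failure mode."""
    return memorized_answers.get(question, "confident-sounding guess")

def compute(a, b):
    """Direct calculation; whether the inputs are novel is irrelevant."""
    return a * b

print(recall("What is 12 * 12?"))    # "144" -- looks like competence
print(recall("What is 317 * 42?"))   # "confident-sounding guess"
print(compute(317, 42))              # 13314 -- trivial for direct computation
```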

Tina Lee Forsee:

Again, I think you'd like Erik's book, though he is quite critical of the neuroscience research into consciousness. Here's his recent post on the topic of the AI plateau, which I suspect you'll appreciate better than I:

https://www.theintrinsicperspective.com/p/ai-progress-has-plateaued-at-gpt?r=schg4&utm_campaign=post&utm_medium=web

Wyrd Smythe:

Yep, exactly. (And another case where scale comes into play.)
