Two days ago I posted about the need for a Gestalt Switch: From Artificial Intelligence to Artificial Minds. I argued that intelligence is best conceived as a performance measure while a mind “is something that is implemented in some kind of device, to use a fairly generic term.” I certainly believe just that much. But that alone doesn’t justify speaking of these artificial devices, these computer systems, as minds, does it?
I believe it does, provided these devices meet the definition I have given for a mind. That’s what this post is about.
* * * * *
I have always maintained that we will never “build a machine to equal the human brain,” as I recounted in a recent post (Apr. 5, 2022). I still believe that. I have never, or at least not that I can remember, believed that human minds are the only minds that exist. Why can’t we say that animals have minds? I see no reason why not, though I understand that others do. The same goes for consciousness. Are animals conscious? Yes.
Let’s look at the definition of mind I gave in that post:
A MIND is a relational network of logic gates over the attractor landscape of a partitioned neural network. A partitioned network is one loosely divided into regions where the interaction within a region is (much) stronger than the interactions between regions. Each region compresses and categorizes its inputs, with each category having its own basin of attraction, and sends the results to other regions. Each region will have many basins of attraction. The relational network specifies relations between basins in different regions.
There is nothing in there that wouldn’t apply to many, most, all(?), animals. It’s possible that the brain of C. elegans, with its 302 neurons, doesn’t meet that definition. There might well be some minimum number of neurons required to cross the threshold to mind. I’m not prepared to speculate about what that number would be, or in what architecture.
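To make the definition concrete, here is a minimal toy sketch of its parts in Python. Everything here is illustrative, not an implementation of any real neural dynamics: regions, basin centers, and the nearest-center `settle` rule are my own simplifying assumptions, standing in for genuine attractor dynamics, and the “logic gates” of the relational network are reduced to AND-predicates over (region, basin) pairs.

```python
import math

class Region:
    """A region of a partitioned network: it compresses and categorizes
    its inputs, with one basin of attraction per category."""
    def __init__(self, name, basin_centers):
        self.name = name
        self.basin_centers = basin_centers  # one representative point per basin

    def settle(self, stimulus):
        """Map a stimulus to the index of the nearest basin center --
        a crude stand-in for relaxation into an attractor."""
        return min(range(len(self.basin_centers)),
                   key=lambda i: math.dist(stimulus, self.basin_centers[i]))

class Mind:
    """A relational network over the basins of a partitioned network.
    Here a 'logic gate' is just an AND over (region, basin) pairs."""
    def __init__(self, regions):
        self.regions = {r.name: r for r in regions}
        self.relations = []  # list of (relation_name, [(region, basin), ...])

    def relate(self, name, *pairs):
        """Declare a relation between basins in different regions."""
        self.relations.append((name, list(pairs)))

    def active_relations(self, stimuli):
        """Settle each region into a basin, then return the relations
        whose required basins are all currently active."""
        state = {rn: self.regions[rn].settle(s) for rn, s in stimuli.items()}
        return [name for name, pairs in self.relations
                if all(state.get(rn) == b for rn, b in pairs)]
```

For example, a “vision” region with two basins and a “lexicon” region with two basins, linked by one relation:

```python
vision = Region("vision", [(0.0, 0.0), (1.0, 1.0)])  # two visual categories
lexicon = Region("lexicon", [(0.0,), (1.0,)])        # two word-forms
m = Mind([vision, lexicon])
m.relate("names", ("vision", 0), ("lexicon", 0))  # word 0 names percept 0
print(m.active_relations({"vision": (0.1, -0.2), "lexicon": (0.2,)}))
# → ['names']
```

The point of the sketch is only the shape of the claim: a mind, on this definition, is not the regions themselves but the relational layer specified over their basins.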
Here's a passage from a note I sent to an old friend and colleague the other day:
The complexity literature is vast and I certainly don’t know it well, but in what I’ve seen, I’ve never seen anyone talking about a system consisting of multiple interconnected attractor landscapes. As far as I know, Walter Freeman never did so, and he spent a career investigating the complex dynamics of the nervous system. I’m talking about I don’t know how many landscapes – 150 to 1000, who knows – each with tens or hundreds of thousands of attractor basins. Each thing you can identify visually has its own basin. Each word has a half dozen basins associated with it. And so forth. It’s crazy, but then how could an account of the brain not be crazy?
That’s what the numbers game is about. But if a machine, a computer, can meet the terms of the definition, then, yes, it has a mind.
I don’t know whether any of the current deep learning systems meet the terms of the definition. They are very different from organic wetware in many respects. The one that comes most readily to mind is that they require two phases of operation: learning and inference. The system that does the learning has to be much larger than the one that does the inference. I don’t quite know how to handle that, but perhaps what matters for the definition is the inference system.
I can easily imagine that the current inference engines do not meet the terms of the definition. If so, however, that is as likely to be a matter of architecture as of size. But then it may also be the case that this two-phase existence is an automatic fail. But it’s not obvious to me that the two-phase existence is inherent in digital technology. It is recognized as a problem and work is being done to move beyond it.
So, for the moment my position is that, yes, artificial minds are possible, but it is an open question whether or not any current systems meet the terms of the definition.
What about consciousness? I see no reason to deny consciousness to animals. Am I willing to extend it to machines that meet the terms of the definition I’ve offered for MIND? That is a very interesting question, one that I haven’t thought about until just now. But I do not see that coming up with an answer is an urgent matter. It can wait.