
Tuesday, May 3, 2022

Gestalt Switch: From Artificial Intelligence to Artificial Minds

For many, both pro AI and con, the difference between the two terms – artificial intelligence and artificial minds – is of relatively little consequence, a matter of emphasis, perhaps. I take a different view. I believe that we are ready to switch from thinking about the duck of AI to thinking about the rabbit of artificial minds.

Intelligence as a Measure of Performance

Intelligence is best conceived as a measure of performance. In the case of IQ it is the performance of a human mind. But one can also measure the performance of artificial devices of all kinds. For example, acceleration from 0 to 60 mph is a standard way of measuring the performance of automobiles. We also measure the performance of computational systems of all kinds. Both artificial intelligence and computational linguistics have developed many measures of performance.
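To underline that point, a performance measure is just a number computed over a system’s behavior on some task; it says nothing about how the system is built. Here is a sketch in its most banal form, with invented items rather than any real benchmark:

```python
# The most banal possible performance measure: accuracy on a labeled benchmark.
# The example items are invented placeholders, not a real dataset.
def accuracy(predictions: list[str], gold: list[str]) -> float:
    """Fraction of benchmark items the system gets right."""
    correct = sum(p == g for p, g in zip(predictions, gold))
    return correct / len(gold)

# e.g. accuracy(["cat", "dog", "dog"], ["cat", "dog", "cat"]) == 2 / 3
```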

Talk of building an AI device is thus a category error, and talk of human-level AI, or AGI, simply compounds the error. We can imagine constructing many different kinds of devices intended to perform tasks heretofore performed only by the human mind. Indeed, we have done so, and have, for the most part, done it in the name of artificial intelligence. But we have yet to formulate a coherent account of just what this intelligence-thing is, of how it is constructed and operates, much less a human-level intelligence-thing.

It’s time we give it up. Let’s figure out what a mind is and then figure out how to build one of those.

For many, both pro AI and con, I suppose that suggestion is rather like leaping from the frying pan into the fire. But, as you might expect, I take a different view: I offer an explicit definition of mind and proceed from there. Am I crazy? No, but I am a fearless speculator.

Minds, Both Natural and Artificial, a Speculative Definition

An artificial mind is something that is implemented in some kind of device, to use a fairly generic term. The artificial mind emerges from the operation of the device. Can we build such a device?

I’m not sure. I don’t know whether any of the devices we’ve built in the name of artificial intelligence qualify as minds in the sense I am about to propose, but some of the more recent machine learning devices might. I’ve certainly been thinking a lot about them recently, e.g. GPT-3: Waterloo or Rubicon? Here be Dragons, Version 4, Working Paper, April 26, 2022, https://www.academia.edu/43787279/GPT_3_Waterloo_or_Rubicon_Here_be_Dragons_Version_4.

I propose to define mind as follows:

A MIND is a relational network of logic gates over the attractor landscape of a partitioned neural network. A partitioned network is one loosely divided into regions where the interaction within a region is (much) stronger than the interactions between regions. Each region compresses and categorizes its inputs, with each category having its own basin of attraction, and sends the results to other regions. Each region will have many basins of attraction. The relational network specifies relations between basins in different regions.

Yes, I know, there’s a fair amount of quasi-technical jargon there. I don’t intend to unpack it here and now. I’m working on that off-line. A bit later I will, however, say a few words about where that came from.
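In the meantime, here is the barest structural sketch of the definition, offered only to fix the shape of the thing. It is Python, but it is not an implementation of anything; the class names (Basin, Region, Gate, Mind) are placeholders of my own, and nothing about actual neural dynamics appears in it.

```python
# A minimal structural sketch of the definition above, not an implementation.
# All names here are illustrative placeholders. Requires Python 3.9+.
from dataclasses import dataclass, field
from typing import Literal

@dataclass(frozen=True)
class Basin:
    """One basin of attraction: a category a region can settle into."""
    region: str   # the region this basin belongs to
    label: str    # the category it stands for, e.g. "red" or "banana odor"

@dataclass
class Region:
    """A loosely bounded patch of the network with its own attractor landscape."""
    name: str
    basins: set[str] = field(default_factory=set)   # many basins per region

@dataclass
class Gate:
    """A node in the relational network: a logical relation over basins
    drawn from different regions."""
    kind: Literal["AND", "OR"]
    inputs: list[Basin]

@dataclass
class Mind:
    """The relational network of gates over the regions' attractor landscapes."""
    regions: dict[str, Region]
    gates: list[Gate]

# e.g., a "color" region with basins {"red", "green"} and a "shape" region with
# basins {"round", "square"} might feed an AND gate standing for "red and round".
```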

Right now I offer some CAVEATS:

The underlying neural network, whether real or artificial, must meet certain minimal conditions. Those conditions, I assume, would have to do with:

  1. the SIZE of the network,
  2. its internal DIFFERENTIATION, and
  3. the nature of its ACCESS TO THE WORLD external to it.

I assume that a proper theory would address each of those issues, and others as well. Such a theory would of course be subject to empirical verification. I also assume that, to some extent, empirical investigation would precede such a theory and thus would contribute to its development.

Some further definitions:

A NATURAL MIND is one where the substrate is the nervous system of a living animal.

An ARTIFICIAL MIND is one where the substrate is inanimate matter engineered by humans to be a mind.

A SOCIAL MIND is capable of fluid interaction with natural or artificial minds, or both. That interaction could be mediated by language, but perhaps by music as well, or other means.

For the moment I take those as self-evident, but also as provisional, as is the definition of mind.

My point is that these definitions specify DEVICES of some kind. They tell us something about how such devices are constructed.
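If you like, those definitions can be read as rough device specifications. Here is a throwaway sketch along those lines; the enum values and fields are my own invention, for illustration only, and carry no commitments beyond the definitions above.

```python
# A throwaway rendering of the three definitions as a device specification.
# The values and fields are mine, purely for illustration.
from dataclasses import dataclass
from enum import Enum, auto

class Substrate(Enum):
    LIVING_NERVOUS_SYSTEM = auto()   # a natural mind
    ENGINEERED_MATTER = auto()       # an artificial mind

@dataclass
class MindSpec:
    substrate: Substrate
    social: bool                              # fluid interaction with other minds?
    media: tuple[str, ...] = ("language",)    # e.g. ("language", "music")
```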

Where’d This Come From?

I have been working on such things for a long time, my whole career. This is not the place to recount that story. You can find the most recent version in this post, On the Differences between Artificial and Natural Minds: Another version of my intellectual biography, and specific comments on recent weeks in this post, What I've been up to in the last two weeks, across the Continental Divide and on to the Pacific [how the mind works]. What I would like to do here is outline the intellectual genealogy behind the terms in my basic definition of mind.

Here’s the first sentence, which contains the basic definition – the other sentences comment on it: A MIND is a relational network of logic gates over the attractor landscape of a partitioned neural network. I first learned about relational networks from David Hays in the linguistics department at SUNY Buffalo, where I went to graduate school. They were in common use in the cognitive sciences for characterizing mental structures and can be seen as deriving, at least in part, from associationist psychology.

Such networks commonly used the nodes of the network to represent conceptual objects and the arcs or edges of the network to represent links between those objects. Sydney Lamb developed networks in which the nodes were logical operators and the content of the network was carried on the arcs. Lamb was specifically inspired by the nervous system and that’s why I chose his notation, though I interpret it differently than he did.

That brings us to the second clause of the definition: ... over the attractor landscape of a partitioned neural network. The notion of an attractor landscape comes from complexity theory of various kinds, which I picked up in various places over the years. The idea of a partitioned neural network is simply my way of talking about the fact that the neocortex is divided into various regions that seem to be functionally distinct, though the borders between these regions are not sharp. Let us assume that interaction within a region is (much) stronger than the interactions between regions. If that weren’t the case, it’s hard to see the point of differentiating between the regions.
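As a toy illustration of what I mean by partitioned, here is the kind of check one could run on an artificial network; partition_ratio is a name I am making up, and real cortical connectivity is obviously not a single weight matrix.

```python
# Toy check of the "partitioned" assumption on an artificial network:
# within-region coupling should be (much) stronger than between-region coupling.
import numpy as np

def partition_ratio(weights: np.ndarray, regions: np.ndarray) -> float:
    """Mean |within-region weight| divided by mean |between-region weight|.
    A ratio well above 1 is what 'loosely partitioned' means here."""
    n = len(weights)
    off_diagonal = ~np.eye(n, dtype=bool)                        # ignore self-connections
    same = (regions[:, None] == regions[None, :]) & off_diagonal
    different = regions[:, None] != regions[None, :]
    within = np.abs(weights[same]).mean()
    between = np.abs(weights[different]).mean()
    return within / between
```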

That is, the neocortex is partitioned into functionally specialized regions: Each region will have many basins of attraction. The notion of basins of attraction comes from complexity theory. I was particularly influenced, however, by the work of the late Walter Freeman on the complex dynamics of the cortex. He did a lot of work on the olfactory cortex (mostly of rats and rabbits, I believe), where he showed that each odor corresponds to a specific basin of attraction. When a new odor is learned, a new basin is added to the landscape, which is reconfigured as a result.
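Freeman’s cortical dynamics are far richer than anything I can gesture at in a few lines, but a standard Hopfield-style associative memory makes a minimal toy picture of that last point: every stored pattern contributes to the same weight matrix, so adding a new “odor” adds a basin and reshapes the whole landscape.

```python
# A Hopfield-style toy, not Freeman's model: storing a new +/-1 pattern alters
# every weight, so the entire attractor landscape is reconfigured.
import numpy as np

def store(patterns: np.ndarray) -> np.ndarray:
    """Hebbian weights for a stack of +/-1 patterns, shape (n_patterns, n_units)."""
    n_units = patterns.shape[1]
    w = patterns.T @ patterns / n_units
    np.fill_diagonal(w, 0.0)   # no self-connections
    return w

def settle(w: np.ndarray, state: np.ndarray, steps: int = 20) -> np.ndarray:
    """Let a noisy state fall into the nearest basin of attraction."""
    s = state.copy()
    for _ in range(steps):
        s = np.sign(w @ s)
        s[s == 0] = 1
    return s
```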

And that brings us back to those logic gates. When a rat recognizes an odor, its olfactory cortex enters the basin of attraction associated with that odor. It can enter only one basin at a time. The basins thus stand in a competitive relationship with one another: logical OR. Now, think about some cortical patch that receives inputs from, say, three other cortical regions. Its attractor landscape thus reflects the interaction between outputs from those regions. That corresponds to logical AND. Those are the two basic building blocks of Lamb’s network: The relational network specifies relations between basins in different regions.
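Here is a toy sketch of how those two gate types might be read off attractor landscapes, under the assumptions just stated; the function names are mine, and none of this is Lamb’s notation.

```python
# OR within a region, AND across regions, in the most schematic form possible.

def settle_region(evidence: dict[str, float]) -> str:
    """OR over a region's basins: the basins compete and only one wins,
    so the region occupies a single basin at a time."""
    return max(evidence, key=evidence.get)

def enters_basin(upstream: dict[str, str], required: dict[str, str]) -> bool:
    """AND across regions: a downstream basin is entered only when every
    upstream region has settled into the basin this one requires."""
    return all(upstream.get(region) == basin for region, basin in required.items())

# e.g. settle_region({"banana": 0.7, "apple": 0.2}) -> "banana"
#      enters_basin({"color": "red", "shape": "round"},
#                   {"color": "red", "shape": "round"}) -> True
```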

There’s one final bit: Each region compresses and categorizes its inputs, with each category having its own basin of attraction, and sends the results to other regions. Back in the 1970s, when I was working with him, David Hays wrote a book, Cognitive Structures (1981). He talked of the parameters of perception as mediating between sensorimotor activity and cognition proper. Though he didn’t put it in these terms, those parameters are an aspect of the compression and categorizing process. More recently, Peter Gärdenfors has argued for conceptual spaces in Conceptual Spaces (2000) and The Geometry of Meaning (2014). Each conceptual space characterizes objects along several dimensions. Those dimensions also characterize the compression and categorizing process.
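As a toy picture of compressing and categorizing, borrowing only the flavor of Gärdenfors’s conceptual spaces: project a raw input onto a few quality dimensions, then assign it to the category whose prototype is nearest. The projection and the prototypes here are placeholders of mine, not anything from his books or from Hays.

```python
# Compress: map a high-dimensional input into a low-dimensional conceptual space.
# Categorize: each category (basin) claims the region nearest its prototype.
import numpy as np

def compress(x: np.ndarray, projection: np.ndarray) -> np.ndarray:
    """Project a raw input vector onto a few quality dimensions."""
    return projection @ x

def categorize(point: np.ndarray, prototypes: dict[str, np.ndarray]) -> str:
    """Return the category whose prototype is closest to the compressed point."""
    return min(prototypes, key=lambda name: np.linalg.norm(point - prototypes[name]))
```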

That’s it. Well, the core of it anyhow.

Do I Believe It?

Not at this time, no. But I don’t disbelieve it either. It’s too soon. I hold the idea in epistemic suspension, if you will.

Rather, I offer it as speculation to guide further investigation. The scientist can take these speculations as pointers for experiment and observation. The engineer can use them as guides for the design and fabrication of devices.

This domain is a complicated one. The scientist needs to think like an engineer, to reverse engineer the mind and brain, as a way of coming up with predictions for experimental verification. And the engineer needs to conjure up their inner scientist in order to investigate and observe the operations of the devices they construct.

I offer these ideas, this speculative engineering, in the hope and belief that they will prove useful in these endeavors.
