Monday, May 30, 2022

Eureka! Have I Found It? How to Model the Mind, that Is. [Symbols and Nets]

Since roughly the last week in April, when I applied for an Emergent Ventures grant (which was quickly, but politely, turned down), I have been working hard on revising and updating work on a system of notation which I sketched out in 2003 and posted to the web in 2010 and 2011. I am referring to what I then called an Attractor Network, but now call a Relational Network over Attractors (RNA), because I found out that neuroscientists already talk about attractor networks, which are not the same as what I've got in mind. The neuroscientists are referring to a network of neurons whose dynamics tend toward an attractor. I am referring to a network that specifies relationships between a very large number of attractors (hence, it is constructed over them).

Anyhow, by the time Emergent Ventures had turned me down, I was committed to the project, which has gone well so far. I had no particular expectations, just a general direction. I’ve been looking, and I’ve found some interesting things, encouraging things. Or, if you will, I’ve been puttering around, assembling bits and pieces here and there, and an interesting structure has begun to emerge.

Lamb Notation

The idea has been to develop a new notation for representing semantic structures in network form. Actually, the notation is not new; it was developed by Sydney Lamb in the 1960s to model the structures of a stratificational grammar. I've been adapting it to model semantics.

I am doing that by assuming that the cerebral cortex is loosely divided into functionally distinct regions which I call neurofunctional areas (NFAs). The activity of these NFAs is to be modeled by complex dynamics (Walter Freeman), and a low-dimensional projection of each NFA's phase space can be modeled as a conceptual space (Peter Gärdenfors). Each NFA is thus characterized by an attractor landscape.
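To make that concrete in a rough-and-ready way, here is a toy sketch in Python (my own simplification, with invented basin names, not Freeman's or Gärdenfors's mathematics): a two-dimensional conceptual space for a hypothetical perceptual NFA, where each basin is summarized by a prototype point and an input state settles into whichever basin's prototype is nearest. The real dynamics are far richer than this, but it conveys the flavor of an attractor landscape over a low-dimensional space.

```python
import math

# Toy 2-D conceptual space for a hypothetical perceptual NFA. Each
# attractor basin is summarized by a prototype point; an input state
# "settles" into the basin whose prototype lies nearest. This is a
# drastic simplification; all names are invented for illustration.

BASINS = {
    "dog-percept": (0.8, 0.2),
    "cat-percept": (0.2, 0.8),
    "bird-percept": (0.1, 0.1),
}

def settle(state):
    """Return the basin into whose region of the space the state falls."""
    return min(BASINS, key=lambda basin: math.dist(state, BASINS[basin]))

print(settle((0.7, 0.3)))  # -> dog-percept
```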

The RNA (relational net over attractors) is a network where the nodes are logical operators (AND, OR) and the edges are basins of attraction in the NFA attractor landscapes. This is not the place to explain what that actually means, but I can give you a taste by showing you three pictures.
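Before the pictures, here is how I would express that definition as a data structure (a minimal sketch; the class and field names are placeholders of my own, not part of Lamb's notation): the nodes are AND/OR operators, and each edge carries a basin belonging to some NFA's attractor landscape.

```python
from dataclasses import dataclass, field
from typing import List

# Minimal data-structure sketch of an RNA fragment: nodes are logical
# operators, and the "content" (attractor basins belonging to particular
# NFAs) rides on the edges between them. Names are illustrative only.

@dataclass(frozen=True)
class Basin:
    nfa: str    # which neurofunctional area this basin belongs to
    name: str   # a label for the basin, e.g. a percept or a word form

@dataclass
class OpNode:
    op: str                                             # "AND" or "OR"
    inputs: List[Basin] = field(default_factory=list)   # incoming edges
    outputs: List[Basin] = field(default_factory=list)  # outgoing edges

# A tiny fragment: an AND node binding a perceptual basin to a lexical one.
dog_percept = Basin(nfa="vision", name="dog")
dog_word = Basin(nfa="phonology", name="/dog/")
bind_dog = OpNode(op="AND", inputs=[dog_percept, dog_word])
print(bind_dog)
```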

This is a simple semantic structure expressed in a “classical” notation from the 1970s:

It depicts the fact that both beagles and collies are varieties (VAR) of dog. The light gray nodes at the bottom are perceptual schemas, while the dark gray nodes at the right are lexemes. The white nodes are cognitive.

Here’s a fragment of one of Lamb’s networks:

The triangular nodes are AND while the brackets (both pointing up and down) are OR. The content is carried on the edges.

This RNA network takes the information expressed in the semantic network and expresses it using AND and OR nodes.

I am not even going to attempt to explain just how that works. Suffice it to say that it seems a bit more visually complicated than the old notation and thus harder to read. It also expresses more information. Those AND and OR nodes specify processing, while no processing is specified in the classical diagram.
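Here's a crude way to see what "specifying processing" amounts to (a toy evaluation rule of my own, not a transcription of the diagram): an AND node fires only when all of its input basins are active, while an OR node fires when at least one of them is.

```python
# Toy evaluation of AND/OR nodes over active basins. The labels echo the
# beagle/collie example above; the structure is illustrative, not a
# transcription of the actual RNA diagram.

def node_fires(op, input_basins, active):
    """An AND node needs all inputs active; an OR node needs at least one."""
    hits = [basin in active for basin in input_basins]
    return all(hits) if op == "AND" else any(hits)

active = {"dog-percept", "beagle-percept"}  # basins currently active

# OR over varieties of dog; AND binding a percept to its lexeme.
print(node_fires("OR", ["beagle-percept", "collie-percept"], active))  # True
print(node_fires("AND", ["beagle-percept", "/beagle/"], active))       # False: the lexeme isn't active
```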

I am finding it more demanding to work with. In part that is because I haven't drawn nearly so many RNA diagrams, perhaps 100 or so as compared with thousands of the older kind. But also, in drawing RNAs I have to imagine these structures being somehow laid out on a sheet of cortex, which is tricky. It would be even trickier if I were working with data about the regional functional anatomy of the cortex at my elbow, trying to figure out just where each NFA sits on the cortical sheet. Eventually, that will have to be done, but right now I'm satisfied just to draw some diagrams.

Crazy and Not So Crazy

The fact that I intend these diagrams as a very abstract sketch of functional cortical anatomy means that they have fairly direct empirical implications that the old diagrams never had. Of course, we were always committed to the view that we were figuring out how the human mind worked, and so eventually someone would have to figure out where and how those structures were implemented in the brain. Well, now is eventually, and these new diagrams are a tool for figuring out the where and how.

And that, I suppose, is a crazy assertion. Everyone who knows anything knows that the brain is fiercely complicated and we're never going to figure it out in a million years, but anyhow we have to waste a billion euros building a damned brain model that tells us a bit more than diddly squat, but not a whole hell of a lot more. But then what I'm doing costs nothing more than my time. Excuse the rant.

As I said, it’s crazy of me to propose a way of thinking about how high-level cognitive processes are organized in the brain. But I’m only proposing, and I’m doing it by offering a conceptual tool, a notation, that helps us think about the problem in a new way. I don’t expect that the constructions I propose are correct. I ask only that they are coherent enough to lead us to better ones.

There’s one further thing and this is not so crazy: This notation, in conjunction with 1) my assertation that it is about complex cortical dynamics, and 2) and Lev Vygotsky’s account of language development, gives us a new way of thinking about a debate that is currently blazing away in a small region of the internet: How do we model the mind, neural vectors, symbols, or both? If both, how? I am opting for both and making a fairly specific proposal about how the human brain does it. The question then becomes: What will it take to craft an artificial device that does it? If my proposal ends up taking 14K or 15K words and maybe 30 diagrams, well it deals with a very a complicated problem.

Here is the draft introduction, Symbols, holograms, and diagrams, to the working paper. With that, I’ll leave you with a brief sketch of my proposal.

The Model in 14 Propositions

1. I assume that the cortex is organized into NeuroFunctional Areas (NFAs), each of which has its own characteristic pattern of inputs and outputs. As far as I can tell, these NFAs are not sharply distinct from one another. The boundaries can be revised – think of cerebral plasticity.

2. I assume that the operations of each NFA are those of complex dynamics. (I have been influenced by Walter Freeman in this.)

3. A low dimensional projection of each NFA phase space can be modeled by a conceptual space as outlined by Peter Gärdenfors.

4. Each NFA has its own attractor landscape. A primary NFA is one driven primarily by subcortical inputs. Then we have secondary and tertiary NFAs, which involve a mixture of cortical and subcortical inputs. (I’m thinking of the standard notions of primary, secondary, and tertiary cortex.)

5. Interaction between NFAs is defined by a Relational Network over Attractors (RNA), which is a relational network defined over basins in multiple linked attractor landscapes.

6. The RNA network employs a notation developed by Sydney Lamb in which the nodes are logical operators, AND & OR, while ‘content’ of the network is carried on the arcs. [REF/LINK to his paper.]

7. Each arc corresponds to a basin of attraction in some attractor landscape.

8. The output of a source NFA is ‘governed’ by an OR relationship (actually exclusive OR, XOR) over its basins. Only one basin can be active at a time. [Provision needs to be made for the situation in which no basin is entered.]

9. Inputs to a basin in a target NFA are regulated by an AND relationship over outputs from source NFAs. (Propositions 8 and 9 are illustrated in the toy sketch following this list.)

10. Symbolic computation arises with the advent of language. It adds new primary attractor landscapes (phonetics & phonology, morphology?) and extends the existing RNA. Thus the overall RNA is roughly divided into a general network and a linguistic network.

11. Word forms (signifiers) exist as basins in the linguistic network. A word form whose meaning is given by physical phenomena is coupled with an attractor basin (signified) in the general network. This linkage yields a symbol (or sign). Word forms are said to index the general RNA.

12. Not all word forms are defined in that way. Some are defined by cognitive metaphor (Lakoff and Johnson). Others are defined by metalingual definition (David Hays). I assume there are other forms of definition as well (see e.g. Benzon and Hays 1990). It is not clear to me how we are to handle these forms.

13. Words can be said to index the general RNA (Benzon & Hays 1988).

14. The common-sense concept of thinking refers to the process by which one uses indices to move through the general RNA to 1) add new attractors to some landscape, and 2) construct new patterns over attractors, new or existing.
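As promised above, here is a toy sketch of the gating in propositions 8 and 9 (under my own simplifying assumptions; the NFA and basin names are invented): each source NFA contributes at most one active basin, and a basin in a target NFA becomes active only when all of the source basins it requires are active.

```python
# Toy gating per propositions 8 and 9: each source NFA reports at most
# one active basin (the XOR constraint, with None when no basin is
# entered), and a target basin activates only when every source basin
# it requires is active (the AND constraint). All names are placeholders.

source_state = {              # at most one active basin per source NFA
    "vision": "dog-percept",
    "audition": None,         # no basin entered (the provision in item 8)
    "phonology": "/dog/",
}

target_requirements = {       # target basin -> required (NFA, basin) pairs
    "dog-concept": {("vision", "dog-percept"), ("phonology", "/dog/")},
    "bark-concept": {("vision", "dog-percept"), ("audition", "bark-sound")},
}

def active_targets(state, requirements):
    """Return target basins whose AND condition over source outputs is met."""
    available = {(nfa, basin) for nfa, basin in state.items() if basin is not None}
    return [target for target, required in requirements.items() if required <= available]

print(active_targets(source_state, target_requirements))  # -> ['dog-concept']
```

This is only a caricature of what the attractor dynamics are supposed to do, but it shows how the XOR and AND relationships in the notation constrain which basins can be jointly active.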
