Thursday, May 26, 2022

Symbols, holograms, and diagrams

I am currently revising and updating some work I did over a decade ago. The new document is tentatively titled: Relational Nets Over Attractors, A Primer: Part 1, Basics. This is a draft of the introduction.

Introduction: Symbols, holograms, and diagrams

As I have indicated in the Preface, this primer is about a notational convention. I adopted the convention to solve a problem. This introduction is about the problem I am trying to solve.

I could say that I’m trying to understand how the brain works. That is true, but it is too broad. It would be better to say that I am trying to understand how a mind is implemented in the brain. That word, “implemented,” is carefully chosen. Though the word is common enough, I take it from computing, where one talks of implementing a program in a particular high-level programming language. One may also talk of implementing a high-level language in the low-level language for a particular processor. Higher levels are implemented in lower levels.

While some would talk of the mind as emerging from the operations of the brain, I prefer to look at it from the other direction: the mind is implemented in the brain. Beyond that, I think it is best to see how this problem developed in my early intellectual life.

I studied ‘classical’ symbolic semantics with the late David Hays in the Department of Linguistics at Buffalo back in the mid-1970s. One day we were discussing a diagram that looked something like Figure 1:

Figure 1: Birds have parts.

It is a simple diagram, asserting that the typical bird consists of various parts (CMP = component), in this case, head, body, left wing, right wing, and tail. If you wish, you can imagine other components as well: two legs, and maybe a neck, a beak, and so forth. We were discussing the problem presented by having to account for a bird’s feathers:

Figure 2: Feathers.

Figure 2 shows a number of feathers for the left wing. Surely the left wing has more than eight feathers, no? How many? What about the feathers for the right wing, for the body, the head, the tail, the legs? Moreover, feathers are not primitive parts; they have shafts to which barbs are attached. Do we have to represent all those as well?

Perhaps you are thinking, that can’t possibly be right. We don’t think about all those hundreds if not thousands of parts for each and every bird. No, we don’t. But the logic inherent in this kind of symbolic representation says that we have to get the parts list right.
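
To see how quickly that bookkeeping gets out of hand, here is a minimal sketch, in Python, of the kind of network Figures 1 and 2 depict, using only CMP arcs. The node names follow the figures; the feather and barb counts are made-up placeholders, not facts about birds.

```python
# A minimal relational net: nodes plus labeled arcs (here only CMP, "component of").
# The structure mirrors Figures 1 and 2; the counts are illustrative guesses,
# not ornithological fact.

from collections import defaultdict

arcs = defaultdict(list)          # whole node -> list of (arc label, part node)

def cmp_arc(whole, part):
    """Assert that `part` is a component of `whole`."""
    arcs[whole].append(("CMP", part))

# Figure 1: the typical bird and its gross parts.
for part in ["head", "body", "left-wing", "right-wing", "tail"]:
    cmp_arc("bird", part)

# Figure 2: now try to be literal about feathers and their parts.
N_FEATHERS_PER_WING = 20          # made-up number; real wings have more
N_BARBS_PER_FEATHER = 100         # also made up

for wing in ["left-wing", "right-wing"]:
    for f in range(N_FEATHERS_PER_WING):
        feather = f"{wing}-feather-{f}"
        cmp_arc(wing, feather)
        for b in range(N_BARBS_PER_FEATHER):
            cmp_arc(feather, f"{feather}-barb-{b}")

print(sum(len(parts) for parts in arcs.values()))   # 4045 arcs for one schematic bird
```

Each additional level of literal detail multiplies the arc count, which is exactly the burden Figures 3 and 4 are meant to lift.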

I had an idea. At that time we had been studying a book by William Powers, Behavior: The Control of Perception. He argued that the mind/brain employed a fundamentally analog, rather than digital, representation of the world. I suggested something like this:

Figure 3: Bird in perception and cognition.

Figure 3 shows a cognitive system where the typical bird is represented by a node, just as in Figures 1 and 2. That node is linked to a perceptual system that is, following Powers, analog in nature. The perceptual system contains a sensorimotor schema, analog in character, which is connected to the cognitive node by a representation (REP) arc.

If you wish, we can then add some further structure to the cognitive depiction along with some other adjustments, as we see in Figure 4:

Figure 4: Parts of a bird in cognition and perception.

In cognition we see the same structure we had in Figure 1. Each node in that structure is connected to the appropriate part of the sensorimotor schema by a representation arc. We can think of the cognitive structure as digital and symbolic in character, whereas the perceptual schema has a quasi-analog character, which we’ll get to shortly. The bird itself is in the external world.

What of all the feathers, and their parts? you ask. They’re in the perceptual representation; all you have to do is look closely.

Well, that’s not quite correct. All the parts are there in the physical bird, and we are free to examine it at whatever level of detail we choose. Hunters, taxidermists, butchers, naturalists, artists, and illustrators will choose a relatively high level of detail. The rest of us can be satisfied with a crude representation.

And thus we had proposed a solution to what would become known as the symbol grounding problem, though I do not believe the term was known to us at the time. The cognitive system is digital and symbolic in character and is linked to a perceptual system that is analog in character. Given that, the cognitive system need not be burdened with accounting for all the detail inherent in the world. The perceptual system can handle much of it. But it need not handle all of it, only enough to distinguish between one object and another. If we only need to tell the difference between birds and mammals, that’s not much detail at all. If we need to distinguish between one kind of bird and another, between robins and starlings, eagles and owls, and so forth, then more detail is required. Most of the differentiating detail will be in the respective sensorimotor schemas; only some of it need be represented in cognition. Unless, of course, you are one of those people with a particularly strong interest in birds. Then you will develop rich sensorimotor schemas and reconstruct them in cognition at a high level of detail. While you may well count every feather in a wing or a tail, you are unlikely to count every barb in every feather. But you will know they are there and have the capacity to count them if it becomes necessary.
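
To fix ideas, here is a crude sketch of that division of labor: a small symbolic net on the cognitive side, linked by REP arcs to a dense array standing in for an analog sensorimotor schema. The array, the slice boundaries, and the function are illustrative assumptions of mine, not part of the original proposal.

```python
import numpy as np

# Perceptual side: a quasi-analog sensorimotor schema, crudely stood in for
# by a dense array (think of a coarse image of a bird).
bird_schema = np.random.rand(64, 64)      # placeholder for learned analog content

# Cognitive side: a few discrete nodes with labeled arcs, as in Figure 1.
cognition = {
    "bird": {"CMP": ["head", "body", "left-wing", "right-wing", "tail"]},
}

# REP arcs ground cognitive nodes in regions of the perceptual schema.
# The regions here are arbitrary slices, chosen only to show the shape of the linkage.
rep = {
    "bird":      bird_schema,
    "head":      bird_schema[0:16, 24:40],
    "left-wing": bird_schema[16:48, 0:24],
}

def inspect(node):
    """Return the analog patch a cognitive node is grounded in, if any."""
    return rep.get(node)

print(cognition["bird"]["CMP"])            # the symbolic side stays this small
print(inspect("left-wing").shape)          # (32, 24): detail lives on the analog side
```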

Hays went on to develop a model of cognition – Cognitive Structures (1981) – built on this basic idea: Cognition is grounded in an analog perceptual (and motor) system that is in direct contact with the world. And some years after that he and I became curious about the brain and wrote a paper outlining that curiosity, “Principles and Development of Natural Intelligence” (1988). We suggested five principles. We called the fourth one the figural principle and introduced the work of the mathematician Miriam Yevick in the course of explicating it (pp. XX-XX):

The figural principle concerns the relationship between Gestalt or analogue processes in neural schemas and propositional or digital processes. In our view, both are necessary; the figural principle concerns the relationship between the two types of process. The best way to begin is to consider Miriam Yevick's work (1975, 1978) on the relationship between ‘descriptive and holistic’ (analogue) and ‘recursive and ostensive’ (digital) processes in representation.

The critical relationship is that between the complexity of the object and the complexity of the representation needed to ensure specific identification. If the object is simple, e.g. a square, a circle, a cross, a simple propositional schema will yield a sharp identification, while a relatively complex Gestalt schema will be required for an equivalently good identification (see Fig. 5). Conversely, if the object is complex, e.g. a Chinese ideogram, a face, a relatively simple Gestalt (Yevick used Fourier transforms) will yield a sharp identification, while an equivalently precise propositional schema will be more complex than the object it represents. Finally, we have those objects which fall in the middle region of Figure 5, objects that have no particularly simple description by either Gestalt or propositional methods and instead require an interweaving of both. That interweaving is the figural principle.

Figure 5: Yevick's law. The curves indicate the level of representational complexity required for a good identification.
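
Before moving on, the crossing curves of Figure 5 can be caricatured numerically. The two cost functions below are invented purely for illustration (Yevick's actual analysis worked with Fourier transforms and formal descriptions); the only features that matter are that the propositional cost rises with object complexity, the Gestalt cost falls, and there is a middle band where neither is cheap.

```python
# Toy caricature of Figure 5. The functional forms and the threshold are
# arbitrary assumptions, chosen only so that propositional cost rises and
# Gestalt cost falls with object complexity, as the quoted passage describes.

def propositional_cost(complexity):
    return 2.0 * complexity              # squares and crosses: cheap; ideograms: dear

def gestalt_cost(complexity):
    return 200.0 / complexity            # squares and crosses: dear; ideograms: cheap

THRESHOLD = 15.0                         # "cheap enough" cutoff, also arbitrary

figural_region = [
    c for c in range(1, 51)
    if propositional_cost(c) > THRESHOLD and gestalt_cost(c) > THRESHOLD
]
print(figural_region)   # [8, 9, 10, 11, 12, 13]: the band where neither description is cheap
```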

We then went on to explicate that figural principle in some detail.

But we need not enter into that here. I introduced it only as a way of bringing in Yevick’s distinction between two types of identification, one ‘descriptive and holistic’ (analogue) and the other ‘recursive and ostensive’ (digital). Yevick wrote about visual identification. In the annoying, if not flat-out hubristic, way of theoreticians, Hays and I generalized her distinction to every modality. I will continue with that generalization in this paper, where I will refer to the one process as symbolic and to the other in various ways; connectionist will do as perhaps the most general term.

And that brings me to a controversy currently afoot in the world of artificial intelligence. To be sure, that is not my primary concern, which is and remains the human mind and nervous system, but it is very much on my mind. As Geoffrey Hinton, a pioneer in connectionist models, declared in an interview with Karen Hao, “I do believe deep learning is going to be able to do everything” (MIT Technology Review, 11.3.2020). And deep learning operates in the connectionist world of artificial neural networks.

It is my belief that the highest-level processes of human intelligence are best conceived in symbolic terms, but that the basic processes in the brain are not symbolic in character. They are based on the “big vectors of neural activity” that Hinton talks about. My objective in this paper is to present a way of thinking about how those neural vectors can serve as the basis of symbolic structures and processes. Turned in the other direction: How do we implement symbolic processes on a connectionist foundation?

That is my subject in this paper. Consider this tripartite distinction made by Peter Gärdenfors:

Symbolic models: Based on a given set of predicates with known denotation. Representations based on logical and syntactic operations. [...]
Conceptual spaces: Based on a set of quality dimensions. Representations based on topological and geometrical notions. […]
Connectionist models: Based on (uninterpreted) inputs from receptors. Distributed representations by dynamic connection weights. [...]

Let us think of a connectionist model as mediating between perception and the external world (in Figure 4 above). It performs a process of data compression. But there is also a categorization aspect to that process. That is a function of conceptual spaces, which are central to Gärdenfors’ thinking. They mediate the relationship between perception and cognition.
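
Read as a pipeline, the three levels can be caricatured in a few lines: a high-dimensional receptor vector is compressed to a point in a low-dimensional conceptual space, and a symbol is assigned according to the region in which that point falls. Everything specific below (the projection matrix, the two quality dimensions, the bird prototypes) is my own illustrative assumption, not Gärdenfors’ formulation.

```python
import numpy as np

# Toy pipeline: connectionist input -> conceptual space -> symbol.
# The projection matrix and the prototype locations are invented for illustration.

rng = np.random.default_rng(0)

receptor_activity = rng.random(1000)        # a "big vector" of uninterpreted input

# Connectionist stage as data compression: project the 1000-D activity onto two
# assumed quality dimensions (say, size and hue) of a conceptual space.
projection = rng.standard_normal((2, 1000)) / np.sqrt(1000)
point = projection @ receptor_activity      # a location in the conceptual space

# Symbolic stage: each symbol names a region of the space, here defined by a
# prototype point; categorization is nearest-prototype.
prototypes = {
    "robin":    np.array([0.2, 0.8]),
    "starling": np.array([0.3, 0.1]),
    "eagle":    np.array([0.9, 0.5]),
}
symbol = min(prototypes, key=lambda s: np.linalg.norm(point - prototypes[s]))
print(point, symbol)                        # the symbol this input is grounded to
```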

I will have relatively little to say about connectionist models. I have been strongly influenced by the ideas about complex neurodynamics developed by the late Walter Freeman, and I will assume his approach, or something similar, is reasonable. He investigated how medium-scale patches of tissue in the olfactory cortex reacted to odorants. Thus I assume that I am dealing with mesoscale patches of cortical tissue, which I will call neurofunctional areas (NFAs).

I will also assume that each NFA corresponds to one of Peter Gärdenfors’ conceptual spaces. If you will, the geometry of each conceptual space is a low-dimensional projection of the high-dimensional space of the connectionist dynamics. For the purposes of this paper I am willing to take Gärdenfors’ work on those spaces at face value.
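
As for what a low-dimensional projection might look like concretely, one standard way of extracting such a geometry from high-dimensional activity is principal component analysis via the SVD. The sketch below assumes simulated activity for a single hypothetical NFA; nothing in it comes from Freeman or Gärdenfors beyond the general picture.

```python
import numpy as np

# Simulated activity for one hypothetical NFA: many units, many observations.
rng = np.random.default_rng(1)
n_units, n_samples = 500, 200
activity = rng.standard_normal((n_samples, n_units))    # placeholder dynamics

# Principal components: one standard way to pull a low-dimensional geometry out
# of high-dimensional activity. Keeping two dimensions is an arbitrary choice,
# standing in for the quality dimensions of a single conceptual space.
centered = activity - activity.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
conceptual_coords = centered @ vt[:2].T                  # each sample as a 2-D point

print(conceptual_coords.shape)                           # (200, 2)
```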

Given those assumptions, I am proposing a notational convention that will allow us to see how symbolic computation can be implemented in cortical tissue. While I am proposing this convention, it is not a convention I have invented. Rather, I have adapted it from the work of Sydney Lamb, a linguist of David Hays’s generation and a friend of his.
