Sunday, April 10, 2022

The machine in my mind: Lessons learned

Several days ago I recounted how my thinking on the distinction between minds and machines had evolved: The machine in my mind, my mind on the machine: Will we ever build a machine to equal the human brain? This post is a conclusion to that one. My point was simply that I have always believed that we cannot make an artificial intelligence as powerful as a real human brain. Some other things are worth noting.

You can’t predict the course of intellectual development

This is well-known. But it’s one thing to know it from studying intellectual history. Knowing it from having lived it is different.

Not being able to predict means you can’t project the future evolution of current lines of research. It also means that you can’t predict what new possibilities will appear out of nowhere. Back in the 1970s, when I was predicting the development of a computer system capable of reading a Shakespeare play, I was not predicting that one day I would be thinking seriously about human origins and offering a concrete hypothesis. But that’s what I did in my book about music, Beethoven’s Anvil (2001). Nor, for that matter, did I imagine that I would one day be thinking about the metaphysical structure of the world as a high-dimensional space constructed by an AI engine.

The space gets larger

As we come to know more, the space gets larger. As a crude analogy, here’s something I stuck into my post on the meaning of “understand”:

Let's assume for a minute that we're going to rate understanding on a scale, say, from 1 to 10. GPT-3 rates, say, 3. Along comes Pathways and it's clearly better than GPT-3. Where does it go? 4? 5? 6?

No.

Given that considerable distance remains between Pathways and humans, I'd say that a 1-10 scale is insufficient. Let's make it 1-100. GPT-3 goes in at 23 and Pathways at, say, 38.

That is to say, each time one of these remarkable results comes in, I think it enlarges our sense of the measure space. Maybe it even forces us to start adding dimensions. It just makes the world larger.
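To make the arithmetic of that analogy explicit (a toy sketch; the ratings are the made-up numbers from above, not measurements of anything), a mechanical rescaling would simply multiply, but re-judging the systems against the enlarged scale places them differently:

```python
# Toy illustration: enlarging the measurement scale is not a linear rescale.
# The scores are the illustrative numbers from the passage above, not data.

def linear_rescale(score, old_max=10, new_max=100):
    """What a purely mechanical rescaling would do."""
    return score * new_max / old_max

# On the 1-10 scale, GPT-3 was placed at 3; a linear rescale says 30/100.
print(linear_rescale(3))  # 30.0

# But re-judging against the enlarged scale gives different placements:
# the measure space itself has grown.
rejudged = {"GPT-3": 23, "Pathways": 38}
print(rejudged)
```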

The space of possible models for human intelligence, not to mention models for artificial intelligence, is now much larger than it was in the 1970s. Will it keep getting larger and larger as our knowledge grows, or will the time come when we have bounded the space? How will we know?

The very idea of such a space, of course, implies computational understanding. It certainly isn’t a physical space. What kind of space is it? Conceptual? Imaginary? The very fact that we can conceive of such a space, imagine it, depends on computation. Computer programs can search spaces, at least in AI. But I don’t think the idea is due to AI. When I took a course in computer programming during my undergraduate years at Johns Hopkins, we wrote a program to search for values on a hidden surface. [For that matter, we also wrote a program to play tic-tac-toe.]
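As an aside, here is a minimal sketch of that kind of exercise in present-day Python (the particular surface and the simple hill-climbing strategy are illustrative assumptions, not the original assignment):

```python
import random

# A "hidden" surface: the search routine can query it point by point
# but has no global view of its shape.
def hidden_surface(x, y):
    return -(x - 2.0) ** 2 - (y + 1.0) ** 2  # single peak at (2, -1)

def hill_climb(f, x, y, step=0.5, iterations=200):
    """Stochastic hill climbing: propose a nearby point and
    keep it whenever the surface value improves."""
    best = f(x, y)
    for _ in range(iterations):
        nx = x + random.uniform(-step, step)
        ny = y + random.uniform(-step, step)
        value = f(nx, ny)
        if value > best:
            x, y, best = nx, ny, value
    return x, y, best

x, y, peak = hill_climb(hidden_surface, x=0.0, y=0.0)
print(f"peak found near ({x:.2f}, {y:.2f}), value {peak:.3f}")
```

The program gropes its way across a surface it cannot see, which is the germ of the idea of searching a space.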

My basic conceptual ontology remains

As far as I can tell, despite the collapse of my dream for that Shakespeare-reading computer system, my basic underlying conceptual ontology remains the same. That’s why I continue to believe that we will not be able to fully simulate a human brain in an artificial system.

We don’t have a language, a conceptual system, in which to describe that ontology, though we worked on it in Hays’s research group – see the concept of assignment in this working paper, Ontology in Cognition: The Assignment Relation and the Great Chain of Being, this technical report for the Center for Integrated Manufacturing at RPI, Ontology in Knowledge Representation for CIM, and this encyclopedia article, Ontology of Common Sense. I don’t know just when and how my ontology developed, but I’d point to my undergraduate years at Johns Hopkins:

It was while working on a master’s thesis about “Kubla Khan” that all those things came together. That’s the underlying ontology that was in place when I met David Hays at Buffalo in the spring of 1974. I tell that story in an article I published originally in 1975 and have since updated and revised, Touchstones.

* * * * *

What do I mean by “basic underlying conceptual ontology?” Think of a set of building blocks, Lego pieces, Erector set components, or, for that matter, the various components that go into the construction of, say, actual buildings. There is a finite set of distinct types of objects in each of these collections. That set of types is your ontology. This set of types places constraints on what you can build. But what you can actually build depends on your imagination and determination, plus, of course, having enough tokens of each type to complete the job.

My set of conceptual Lego pieces was complete by the time I completed my master’s thesis on “Kubla Khan” in 1972. It was rich enough that I was able to learn Hays’s computational semantics and, on that basis, imagine Prospero, the system that could “read” Shakespeare. When the possibility of actually constructing Prospero disappeared, the set of conceptual Lego pieces – my conceptual ontology – remained unchanged. But my sense of what one can build with those pieces changed.

When I began (email) conversations with Walter Freeman about the complex dynamics of the nervous system, I was able to do so with that set of conceptual Lego pieces (ontology) – though, keep in mind, I don’t command the underlying mathematics and so have to work by analogy and metaphor. That same conceptual ontology has allowed me to conceive of attractor nets, networks of logical operators over attractors in various attractor landscapes, where each landscape corresponds to a neurofunctional area in the brain. When I began thinking seriously about deep learning and artificial neural nets, I did so in terms of those ontological primitives. They allowed me to see both that GPT-3 represents a conceptual advance and that such technology is not sufficient in itself.
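To give that notion a bit of concreteness, here is a toy sketch (my own illustrative construction; it is not Freeman’s mathematics, and the class and operator names are assumptions made for the example): each landscape is reduced to a set of named basins, and a logical operator is evaluated over which basins the landscapes currently occupy.

```python
# Toy sketch of an "attractor net": each landscape holds named attractor
# basins; a state settles into one basin per landscape; logical operators
# are defined over which basins are occupied across landscapes.

class Landscape:
    """A neurofunctional area, abstracted to a set of named basins."""
    def __init__(self, name, basins):
        self.name = name
        self.basins = set(basins)
        self.current = None  # the basin the state has settled into

    def settle(self, basin):
        assert basin in self.basins, f"{basin} is not a basin of {self.name}"
        self.current = basin

def AND(*conditions):
    """A logical operator over attractor occupancy."""
    return all(conditions)

# Two landscapes standing in for neurofunctional areas (purely illustrative).
auditory = Landscape("auditory", {"tone_A", "tone_B"})
motor = Landscape("motor", {"reach", "rest"})

auditory.settle("tone_A")
motor.settle("reach")

# A composite node that is active only when both areas occupy these basins.
print(AND(auditory.current == "tone_A", motor.current == "reach"))  # True
```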

Remember, finally, that that conceptual ontology took shape through investigating the form and meaning of “Kubla Khan.” That ontology was ‘designed,’ if you will, to encompass a rich example of verbal artistry. It ranges over neurons, logical operators, poems, and more. 

What has happened over the course of my career is that my sense of what can be built within this ontology has changed. Yes, I have had to drop Prospero and things ‘like’ it from the list, but I have added things to the list as well, such as the origins of human thought and attractor nets. On the whole, my sense is that the space of possible constructs has grown larger and more various.
