Saturday, December 28, 2019

Computation, Mind, and the World [bounding AI]

Some thoughts on the above topics. Think of it as an exercise in conceptual factoring. What’s being factored? The “space” of AI.

* * * * *


A week ago I did another post in my continuing effort to understand the limits of AI, AI at its best, pratfalls and all [the common sense problem is the resistance that the world presents to us]. That post ended like this:

It's as though Go and chess embody the abstract mental powers we bring to bear on the world (Chomskyian generativity? Cartesian rationality?) while the common sense problem, in effect, represents the resistance that the world presents to us. It is the world exerting its existence by daring us: "parse this, and this, and this, and...!"

What I’m suspecting is that the right mathematician should be able somehow to put a boundary around this whole domain. But how? I’m certainly not the right mathematician; I’m not any kind of mathematician at all. But, hey! I’ve posed the question. I might as well ramble on about it.

Let us, for the moment, restrict ourselves to the realm of “common sense”: no specialized knowledge, just stuff that everyone knows, more or less.

What we need is an abstract mathematical characterization of the world, the whole damn thing. Sounds crazy, no? Yes, definitely. But I’m not after what some physicists call a Theory of Everything. What I’m after isn’t physics at all. Forget physics, forget the ‘deep’ world. I’m interested in the surface, where we live, with our sensorimotor apparatus. Call it the phenomenal world. I’m interested in a mathematical characterization of THAT. What does the phenomenal world have to be like in order to be intelligible?

A completely chaotic world would not be intelligible, for there would be no patterns to grasp. And it would be very difficult to make your way in a “smooth” world, one where any given thing is so much like any number of other things that discriminating between any two of them is difficult, though not impossible.

We live in a “lumpy” world. What do I mean by lumpy? Consider visual appearance. Cats resemble one another to a high degree, more than any of them resembles a dog; dogs in turn resemble one another as well, though my impression is that there is greater variety in the appearance of dogs than of cats (and I’m not thinking only of domestic cats, but of the wild ones too). Similarly with snakes. Now, is there anything that resembles a snake as much as it resembles a cat? No? The “form space” between cats and snakes is pretty empty, as is the form space between dogs and snakes. That’s what I mean by lumpy. In a smooth world the space between cats, dogs, and snakes would be populated, so that dividing the space into distinctly different kinds of creatures would be all but arbitrary.
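
Here’s one way to make that concrete. What follows is a minimal sketch of my own devising, not anything from the literature: treat appearances as points in a feature space and compare a clustered (“lumpy”) sample against a uniform (“smooth”) one. The dimensions, cluster counts, and the gap statistic are all illustrative assumptions.

```python
# Illustrative sketch (my own construction): model "form space" as points
# in R^d. A lumpy world = tight, well-separated clusters (cats, dogs,
# snakes); a smooth world = the same number of points spread uniformly.
import numpy as np

rng = np.random.default_rng(0)
d, n_per = 10, 100

# Lumpy world: three well-separated Gaussian clusters.
centers = rng.normal(scale=10.0, size=(3, d))
lumpy = np.vstack([c + rng.normal(scale=0.5, size=(n_per, d)) for c in centers])

# Smooth world: points spread uniformly over the same region.
smooth = rng.uniform(-10.0, 10.0, size=(3 * n_per, d))

def gap_ratio(points):
    """Mean nearest-neighbor distance divided by mean pairwise distance.
    Well below 1 = lumpy (big empty gaps between kinds); closer to 1 =
    smooth (kinds shade into one another)."""
    dists = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    np.fill_diagonal(dists, np.inf)
    nearest = dists.min(axis=1)
    np.fill_diagonal(dists, np.nan)
    return nearest.mean() / np.nanmean(dists)

print("lumpy: ", gap_ratio(lumpy))   # small: distinct kinds, empty form space
print("smooth:", gap_ratio(smooth))  # larger: discrimination is harder
```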

Of course the lumpiness of the world isn’t confined to objects. Things move, grow, act, sense, and so forth. All of these are aspects of lumpiness as well.

Given that we live in a lumpy world, what’s the structure of the lumpiness? That’s what I want to know. That’s one thing.

* * * * *

Let’s confine ourselves to language systems that learn from large bodies of text. What kind of argument would I like to see made about such systems?

Those texts, of course, were generated by humans using their full mental faculties, including their sensorimotor systems, to learn about the world, which we have posited as being, in some sense, lumpy. The texts themselves are a “flattened” or “smoothed” (in the sense I’ve indicated in the previous section) representation of the world. As such, those texts have already “squeezed out” a lot of that lumpy structure. We can deal with the resulting compressed representations because we always have recourse to the world itself and so can, in effect, expand them. All those piles of text have no access to the “pointers” we use to guide the expansion. That is, the various representations we produce about the world – which, after all, are the data deep learning feeds on – do not in fact capture our knowledge of the world, no matter how many of those representations are consumed.

What deep learning systems gain by using ever larger bodies of text is more and more resolution in their recovery of structure from the text, which is itself a smoothed representation. But they can never recover or reconstruct the information that was lost in the smoothing process. THAT’s the limitation of these new techniques.
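
A toy demonstration of that limitation, under assumptions entirely my own (the “world” is a numeric signal, the “text” is a block average of it): once the rendering map is many-to-one, two distinct worlds can yield the very same text, and no amount of computation on the text alone can tell them apart.

```python
# Toy illustration (my construction, not the post's): a many-to-one
# "smoothing" map discards information that no downstream learner can
# recover, because distinct worlds collapse to identical texts.
import numpy as np

def smooth(world, width=8):
    """Lossy 'verbal' rendering: replace each block of `width` samples
    with its mean, keeping the broad shape, discarding fine detail."""
    return world.reshape(-1, width).mean(axis=1)

base = np.sin(np.linspace(0, 4 * np.pi, 256))
world_a = base.copy()
world_b = base.copy()
# Add a fine-grained, zero-sum "lump" confined to one averaging block:
world_b[104:112] += 0.5 * np.array([1, -1, 1, -1, 1, -1, 1, -1])

text_a, text_b = smooth(world_a), smooth(world_b)
print("worlds differ:  ", not np.allclose(world_a, world_b))  # True
print("texts identical:", np.allclose(text_a, text_b))        # True
```

However many copies of text_b a system consumes, the wiggle in world_b is gone; more text buys resolution on what survived the averaging, not on what didn’t.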

And THAT’s what I want this hypothetical mathematician to begin investigating. We need 1) some account of the lumpy world, 2) some account of what happens in a smoothed (verbal) expression of that world, so that 3) we can argue that deep learning can never recover that lost information.
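
As a guess at the shape such an argument might take, and only a guess: information theory’s data-processing inequality already has roughly this form. Write W for the lumpy world, T = g(W) for its smoothed verbal rendering, and L for anything a learner computes from the text alone:

```latex
% W -> T -> L is a Markov chain: L depends on W only through the text T.
\[
  W \;\longrightarrow\; T = g(W) \;\longrightarrow\; L
  \qquad\Longrightarrow\qquad
  I(W; L) \;\le\; I(W; T) \;=\; H(W) - H(W \mid T)
\]
```

If g is many-to-one, then H(W | T) > 0: that residual uncertainty about the world is simply absent from the text, and no learner that sees only T can recover it, however large the corpus. Whether this is the right formalization of “lumpy” and “smoothed” is precisely what the mathematician would have to work out.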

* * * * *

Note that I don’t really think that symbolic systems can produce a full expression of the world any more than natural language itself can. But the humans coding symbolic systems can take advantage of their knowledge of the world to introduce things into the system that aren’t available to deep learning systems.

What things? And how can we model that? How much common sense can human modelers introduce into the system through symbolic means?
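
As a toy sketch of what that injection might look like (the taxonomy and properties below are my own illustrative inventions, not anyone’s actual knowledge base): the modeler’s knowledge of the world’s lumps goes in directly, as assertions the system can chain over, rather than being squeezed back out of text statistics.

```python
# Toy sketch (all names and facts are illustrative assumptions): a human
# modeler hand-codes knowledge of the world's "lumps" as symbolic
# assertions, available to the system without any learning from text.
ISA = {  # hand-coded taxonomy: which lump belongs to which larger lump
    "cat": "mammal", "dog": "mammal",
    "snake": "reptile",
    "mammal": "animal", "reptile": "animal",
}
PROPERTIES = {  # hand-coded common sense attached to each lump
    "mammal": {"has_fur", "warm_blooded"},
    "reptile": {"cold_blooded"},
    "animal": {"moves", "senses"},
}

def properties_of(kind):
    """Collect properties by walking up the ISA hierarchy."""
    props = set()
    while kind is not None:
        props |= PROPERTIES.get(kind, set())
        kind = ISA.get(kind)  # None once we pass the top of the hierarchy
    return props

print(properties_of("cat"))    # fur, warm-blooded, moves, senses
print(properties_of("snake"))  # cold-blooded, moves, senses
```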

* * * * *

What’s critical in dealing with the “lumpiness” of the world is interacting with the world through a sensorimotor system. We’re going to need robots, not just isolated AI ‘thinking machines’.

* * * * *

Let’s return to where we began. I said:
Forget physics, forget the ‘deep’ world. I’m interested in the surface, where we live, with our sensorimotor apparatus. Call it the phenomenal world. I’m interested in a mathematical characterization of THAT.
What’s the relationship between a robust mathematical characterization of sensorimotor perception, action, and cognition and that mathematical characterization of the phenomenal world?

And, of course, the common sense world is not all we live in, not at all. We have worlds of specialized knowledge and, in particular, of abstract knowledge. What of them? Hays and I have argued (references below) that, to date, four major techniques for constructing abstract knowledge have emerged: metaphor, metalingual definition, algorithm, and control. I note that some, though certainly not all, of these abstract worlds already have rich mathematical characterizations.

William Benzon and David Hays, The Evolution of Cognition, Journal of Social and Biological Structures, Vol. 13, No. 4 (1990), 297-320, https://www.academia.edu/243486/The_Evolution_of_Cognition.

We've also done a little work on how metaphor advances thought into new regions: William Benzon and David Hays, Metaphor, Recognition, and Neural Process, The American Journal of Semiotics, Vol. 5, No. 1 (1987), 59-80, https://www.academia.edu/238608/Metaphor_Recognition_and_Neural_Process.

While I’m at it, Hays and I also took a run at the brain. We reviewed a wide range of work in perceptual and cognitive psychology, neuroscience, developmental psychology, comparative psychology, and neuroanatomy. This is what we came up with: William Benzon and David Hays, Principles and Development of Natural Intelligence, Journal of Social and Biological Structures, Vol. 11, No. 8 (July 1988), 293-322, https://www.academia.edu/235116/Principles_and_Development_of_Natural_Intelligence.

* * * * *

Where are we? I began by suggesting that “the right mathematician should be able somehow to put a boundary around this whole domain.” In particular, that mathematician would provide a mathematical characterization of the “lumpiness” of the phenomenal world. Of course I don’t know whether or not we’re there yet. It was just a suggestion.

I ended by pointing out that there is more to the human world than the phenomenal world of common sense perception, action, and cognition. We’ve also got the worlds created through various techniques of abstraction, and some of those already have mathematical characterizations. Would a mathematical characterization of the phenomenal world then, in effect, close the space?

Who knows if that’s even a meaningful question? To say it is meaningful would be to imply that we know how to go about answering it. Do we?

If the answer to that is “yes, but it’ll take centuries”, then, no, we haven’t a clue.

Is anyone ready to hazard: yes, in a decade or three we’ll be there?

* * * * *

Michael Jordan has written, “Artificial Intelligence — The Revolution Hasn’t Happened Yet”, Medium, 4/19/2018. For the revolution to happen, we need a more differentiated sense of the application domain: What techniques work for what applications and why? A framework like the one I've attempted to sketch out above would be useful there, wouldn't it?
