
Tuesday, July 5, 2022

Why Are Symbols So Useful to Us? [Relational Nets]

I’ve been participating in the discussion of Yann LeCun’s recent position paper, A Path Towards Autonomous Machine Intelligence. My first comment was a long one: Why are symbols important? Because they index cognitive space.

My opening paragraph:

I want to address the issue that you raise at the very end of your paper: Do We Need Symbols for Reasoning? I think we do. Why? 1) Symbols form an index over cognitive space that, 2) facilitates flexible (aka ‘random’) access to that space during complex reasoning.

My final paragraph is addressed to that second issue:

I really should say something about how symbols facilitate flexible access to cognition, but well, that’s tricky. Let me offer up a fake example that points in the direction I’m thinking. Imagine that you’ve arrived at a local maximum in your progression toward some goal but you’ve not yet reached the goal. How do you get unstuck? The problem is, of course, well known and extensively studied. Imagine that your local maximum has a name1, and that name1 is close to name2 of some other location in the space you are searching. That other location may or may not get you closer to the goal; you won’t know until you try. But it is easy to get to name2 and then see where that puts you in the search space. If you’re not better off, well, go back to name1 and try name3. And so forth. Symbol space indexes cognitive space and provides you with an ordering over cognitive space that is different from and somewhat independent of the gradients within cognitive space. It’s another way to move around. More than that, however, it provides you with ways of constructing abstract concepts, and that’s a vast, but poorly studied subject [1].

I really need to elaborate on that. Two discussions are needed: 1) one elaborating on the hill-climbing problem I mention, and 2) the other taking up syntax.

On the first: in an unindexed neural net all inference must proceed locally. In a space with billions and billions of dimensions, locality is obviously a very tricky matter; a local move on one dimension can easily put you in touch with locations on other dimensions that had been quite distant from your starting point. Still, an index constructed within the space gives you a set of vantage points that stand outside the gradient structure of the network.
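To make that concrete, here is a minimal toy sketch in Python. Everything in it is invented for the sake of illustration, including the bumpy one-dimensional “cognitive space,” the named locations, and the rule of trying names in alphabetical order; none of it comes from LeCun’s paper. The point is only that the ordering over the names is independent of the gradients of the landscape being climbed, so a jump by name can land you somewhere greedy climbing would never reach.

```python
import math

# Toy 1-D "cognitive space": a bumpy objective with several local maxima.
# (Invented function, purely for illustration.)
def score(x):
    return math.sin(x) + 0.6 * math.sin(3.1 * x) - 0.01 * (x - 5) ** 2

# A small symbolic index: names attached to particular locations in the space.
# Order among the names (here, alphabetical) has nothing to do with nearness
# among the locations they point to.
index = {"alder": 1.2, "birch": 7.9, "cedar": 4.4, "dogwood": 10.3, "elm": 2.6}

def hill_climb(x, step=0.05, tries=400):
    """Greedy local search: keep stepping only while the score improves."""
    for _ in range(tries):
        best = max((x, x + step, x - step), key=score)
        if best == x:          # stuck at a local maximum
            break
        x = best
    return x

def climb_with_index(start_name):
    """Climb locally; when stuck, jump to the next name in the index and retry."""
    names = sorted(index)                     # an ordering over symbols, not over the space
    best_x = hill_climb(index[start_name])
    for name in names[names.index(start_name) + 1:]:
        candidate = hill_climb(index[name])   # try the neighboring *name*
        if score(candidate) > score(best_x):  # keep it only if it leaves you better off
            best_x = candidate
    return best_x

print("local climb only:", round(score(hill_climb(index["alder"])), 3))
print("climb plus index jumps:", round(score(climb_with_index("alder")), 3))
```

The jump from “alder” to “birch” is motivated by nothing in the landscape itself; it is motion in symbol space that happens to drop you somewhere new in the search space, from which local climbing can resume.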

Syntax is one mechanism you have for moving around in index space. That’s what the syntax discussion needs to be about: how syntactic motion in index space can make it easier to move outside the local gradients in semantic space. But not here and now.

Nor is syntax the only mechanism available to you. You could move through a simple alphabetized list of word forms. Such a path would be arbitrary with respect to the gradients in semantic space, which is to say, it takes you outside semantic space.
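Here is a toy illustration of that arbitrariness, with every number invented: give a handful of words made-up two-dimensional “semantic” positions and walk the alphabetized list. Adjacent entries in the list are often semantically distant, while semantically close words sit far apart in the list.

```python
import math

# Invented 2-D "semantic" positions; real embeddings would be learned, not hand-set.
embeddings = {
    "cat":     (0.90, 0.10),
    "catalog": (0.10, 0.80),
    "dog":     (0.85, 0.15),
    "dogma":   (0.15, 0.90),
    "kitten":  (0.88, 0.12),
}

def semantic_distance(a, b):
    return math.dist(embeddings[a], embeddings[b])

words = sorted(embeddings)   # the alphabetized list: an ordering external to semantic space
for w1, w2 in zip(words, words[1:]):
    print(f"{w1:8s} -> {w2:8s}  semantic distance {semantic_distance(w1, w2):.2f}")

# Walking the list steps from "cat" straight to "catalog" (semantically distant),
# while "kitten" (semantically near "cat") sits at the far end of the list.
```

The alphabetical path is one more ordering over cognitive space that is independent of its gradients; rhyme, which groups word forms by their endings, would be yet another.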

What other mechanisms are there? How does rhyme in poetry figure into this?

More later.

[1] For some thoughts on various mechanisms for constructing abstract concepts, see William Benzon and David Hays, “The Evolution of Cognition,” Journal of Social and Biological Structures, 13(4): 297-320, 1990, https://www.academia.edu/243486/The_Evolution_of_Cognition
