Yann LeCun recently posted a major position paper that has been receiving quite a bit of discussion:
Yann LeCun, A Path Towards Autonomous Machine Intelligence, Version 0.9.2, 2022-06-27, https://openreview.net/forum?id=BZ5a1r-kVsf
I posted the following remarks to the LeCun discussion on 3 July 2022.
* * * * *
I want to address the issue you raise at the very end of your paper: Do We Need Symbols for Reasoning? I think we do. Why? 1) Symbols form an index over cognitive space that 2) facilitates flexible (aka ‘random’) access to that space during complex reasoning.
Let me quote a passage from the paper you recently published with Jacob Browning:
For the empiricist tradition, symbols and symbolic reasoning is a useful invention for communication purposes, which arose from general learning abilities and our complex social world. This treats the internal calculations and inner monologue — the symbolic stuff happening in our heads — as derived from the external practices of mathematics and language use.[1]
I agree with the second sentence. Symbols are not primitive to the nervous system; they are derived, initially from linguistic communication and then, as culture evolves, from mathematics as well.
The first sentence is true, but not entirely adequate for understanding language (where I consider arithmetic, for example, to be a very specialized form of language). Back in the 1930s the Russian psychologist Lev Vygotsky gave an account of language acquisition that moves through three phases: 1) adults (and others) use language to direct the very young child’s attention and actions, 2) gradually the child learns to use speech to direct their own attention and actions, and finally 3) the process becomes completely internalized, e.g. inner monologues. I spell this out in more detail in a wide-ranging working paper I’ve recently posted to the web [2].
Now, what is the nature of cognitive space? That’s a complicated question, but much of it is defined directly over physical objects, events, and processes, and that, I believe, is differentiable in the way you desire. Here the geometric semantics developed by Peter Gärdenfors [3] may prove useful in seeing how cognition is linked to symbols, and I make use of it in my working paper.
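To make the geometric picture concrete, here is a minimal sketch, in Python, of the kind of structure Gärdenfors has in mind: concepts as regions of a metric quality space, approximated here by prototype points, with categorization by nearest prototype, which carves the space into convex (Voronoi) regions under a Euclidean metric. The dimensions and prototype values are invented for illustration; nothing here is taken from [3].

```python
# Sketch of a Gardenfors-style conceptual space: concepts are regions of a
# metric quality space, approximated by prototype points; categorization
# assigns a stimulus to the nearest prototype. The dimensions and values
# below are illustrative assumptions, not taken from the cited books.

import math

# Quality dimensions: (hue, size, sweetness), each scaled to [0, 1].
PROTOTYPES = {
    "apple": (0.8, 0.3, 0.7),
    "lemon": (0.2, 0.2, 0.1),
    "melon": (0.4, 0.9, 0.6),
}

def distance(p, q):
    """Euclidean distance between two points in the quality space."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

def categorize(stimulus):
    """Label a point in the quality space with the nearest prototype."""
    return min(PROTOTYPES, key=lambda name: distance(stimulus, PROTOTYPES[name]))

print(categorize((0.75, 0.35, 0.65)))  # -> 'apple'
```

The point of the sketch is only that categories defined this way live in a continuous, differentiable space, while the labels attached to the prototypes are discrete symbols.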
Still, let me mention one complication. Here’s an example that was much discussed in the Old Symbolic Days: What’s a chair? Chairs are obviously physical objects, but when you consider the range of objects that are recruited to serve as chairs, it becomes difficult to imagine a single physical description, even a fairly abstract one, that characterizes all of them. Perhaps chairs are best characterized by their function, that is, by the role they play in a simple action. The concept of “poison” presents a similar problem. There’s no doubt that poisons are physical substances, but they don’t have a common physical appearance. Nor, for that matter, do fruits and vegetables. Fruits and vegetables play certain roles in cuisines, and poisons are most generally known by their effects. And so forth. It’s a complicated problem, but a secondary one at the moment.
Will your proposed H-JEPA architecture support such symbols? I find the following passage suggestive [p. 7]:
The world model may predict natural evolutions of the world, or may predict future world states resulting from a sequence of actions proposed by the actor module. The world model may predict multiple plausible world states, parameterized by latent variables that represent the uncertainty about the world state. The world model is a kind of “simulator” of the relevant aspects of world. What aspects of the world state is relevant depends on the task at hand.
That sounds a bit like a natural language parser, where partial parses are developed and maintained until enough information has come in to decide among them. I tentatively conclude that, yes, your architecture can accommodate symbols, though you will have to deal with the discrete nature of the symbols themselves.
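To illustrate the parallel, here is a toy sketch, mine and not anything from the paper, of carrying several candidate interpretations forward and committing only once the evidence decides among them. The two readings of “bank”, the cue words, and the scoring are all made up.

```python
# Toy illustration of maintaining multiple candidate interpretations
# (partial parses, or plausible latent world states) until the input
# disambiguates. The readings, cue words, and scoring are invented.

def interpret(words):
    # Competing latent hypotheses about what "bank" refers to.
    evidence = {"finance": 0, "river": 0}
    cues = {
        "finance": {"loan", "deposit", "teller"},
        "river":   {"fish", "shore", "current"},
    }
    for w in words:
        for reading in evidence:
            if w in cues[reading]:
                evidence[reading] += 1   # evidence accumulates word by word
    # Both readings are maintained throughout; we commit only at the end.
    return max(evidence, key=evidence.get)

print(interpret("the bank approved the loan".split()))  # -> 'finance'
```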
I really should say something about how symbols facilitate flexible access to cognition, but, well, that’s tricky. Let me offer up a fake example that points in the direction I’m thinking. Imagine that you’ve arrived at a local maximum in your progression toward some goal, but you’ve not yet reached the goal. How do you get unstuck? The problem is, of course, well known and extensively studied. Imagine that your local maximum has a name1, and that name1 is close to the name2 of some other location in the space you are searching. That other location may or may not get you closer to the goal; you won’t know until you try. But it is easy to get to name2 and then see where that puts you in the search space. If you’re not better off, well, go back to name1 and try name3. And so forth. Symbol space indexes cognitive space and provides you with an ordering over cognitive space that is different from, and somewhat independent of, the gradients within cognitive space. It’s another way to move around. More than that, however, it provides you with ways of constructing abstract concepts, and that’s a vast, but poorly studied, subject [4].
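Here is a toy sketch of that move: greedy hill-climbing in a one-dimensional “cognitive space” gets stuck on a local peak, and the search escapes by jumping to named locations in a symbol index whose ordering has nothing to do with the local gradients. The value function, the names, and their locations are all invented for illustration.

```python
# Toy sketch: hill-climbing in "cognitive space" stalls at a local maximum;
# jumping to neighboring entries in "symbol space" (an index whose ordering
# is independent of the gradients) lets the search reach a better peak.
# The value function, names, and locations are invented for illustration.

def value(x):
    """Objective with a local maximum near x=2 and a higher one near x=7."""
    return -(x - 2) ** 2 if x < 4.5 else 10 - (x - 7) ** 2

def hill_climb(x, step=0.5):
    """Greedy local search: move while a neighboring point scores higher."""
    while True:
        best = max([x - step, x, x + step], key=value)
        if value(best) <= value(x):
            return x
        x = best

# Symbol space: named locations whose (alphabetical) neighborhood structure
# is unrelated to the gradients of value().
SYMBOLS = {"name1": 2.0, "name2": 4.0, "name3": 6.5}

def search(start="name1"):
    best_x = hill_climb(SYMBOLS[start])          # stalls at the local peak
    for name in sorted(SYMBOLS):                 # try neighboring names instead
        candidate = hill_climb(SYMBOLS[name])
        if value(candidate) > value(best_x):
            best_x = candidate                   # the jump paid off; keep it
    return best_x

print(round(search(), 1))  # reaches the peak near 7, not the local one near 2
```

The gradient alone leaves the search at the lower peak; the jump through the symbol index is what gets it unstuck, which is all the fake example is meant to show.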
[1] Yann LeCun and Jacob Browning, What AI Can Tell Us About Intelligence, Noema, June 16, 2022, https://www.noemamag.com/what-ai-can-tell-us-about-intelligence/
[2] William Benzon, Relational Nets Over Attractors, A Primer: Part 1, Design for a Mind, June 20, 2022, https://ssrn.com/abstract=4141479
[3] Peter Gärdenfors, Conceptual Spaces, MIT Press, 2000; The Geometry of Meaning, MIT Press, 2014. For a quick introduction see Peter Gärdenfors, An Epigenetic Approach to Semantic Categories, IEEE Transactions on Cognitive and Developmental Systems, 12(2), June 2020, 139–147. DOI: 10.1109/TCDS.2018.2833387
[4] For some thoughts on various mechanisms for constructing abstract concepts, see William Benzon and David Hays, The Evolution of Cognition, Journal of Social and Biological Structures, 13(4): 297-320, 1990, https://doi.org/10.1016/0140-1750(90)90490-W