Friday, March 8, 2024

AI, Chess, and Language 2: Further remarks [SPSH]

Yesterday I reflected on the computational approximation of chess play and the computational approximation of linguistic activity: AI, Chess, and Language 1: Two VERY Different Beasts. Today I want to reflect on the actual physical situation, considering it in relation to Saty Chary’s Structured Physical System Hypothesis (SPSH), which stands in contrast to Newell and Simon’s 1976 Physical Symbol System Hypothesis (PSSH). The latter states: “A physical symbol system has the necessary and sufficient means for general intelligent action.” In contrast, the SPSH posits an underlying analog substrate rather than the digital one posited by Newell and Simon. The analog substrate we’re talking about is, of course, the human brain embodied in a human body.

The PSSH implies:

  • that the brain is a physical symbol system, all the way down, and
  • that, because computers are such systems as well, they can adequately simulate/emulate the perceptual and cognitive activities of the human brain.

The SPSH implies:

  • that the brain is a structured physical system, not a physical symbol system, and
  • that digital computers can only approximate the brain’s perceptual and cognitive activities.

Given that human brains can deal with language, they must in some sense be physical symbol systems. But they are not symbol systems all the way down. At the basic level, the brain is just a structured physical system, most likely one exhibiting complex dynamics.[1] My previous chess and language post was about the computational approximation of those two activities, pointing out that the requirements of the systems doing the approximating must be quite different. In this post I am interested in what’s really going on, physically. This will be mostly tautological in character. I just want to make things explicit.

In the case of chess, the board and associated pieces are the physical limit of the chess world. There is nothing more beyond that. Of course, a board and pieces can be physically realized in many different ways, but each realization is a complete and sufficient basis for playing chess. The relationship between the chess world and the geometric footprint required for a computational simulation of it is thus simple and transparent, so much so that a very simple symbolic notation provides an adequate basis for any computer chess engine.
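To make that concreteness vivid, here is a minimal sketch in Python (my own illustration, not taken from any particular chess engine) that expands the board field of a standard FEN string into an 8x8 grid. The FEN notation and its expansion rule are standard; the helper function is just illustrative. The point is how little state a complete chess position requires.

    # A minimal sketch of how little state a chess position requires.
    # The board field of a FEN (Forsyth-Edwards Notation) string is expanded
    # into an 8x8 grid of characters; uppercase = White, lowercase = Black.

    START_FEN = "rnbqkbnr/pppppppp/8/8/8/8/PPPPPPPP/RNBQKBNR w KQkq - 0 1"

    def board_from_fen(fen):
        rows = fen.split()[0].split("/")       # first field is the piece placement
        board = []
        for row in rows:
            squares = []
            for ch in row:
                if ch.isdigit():
                    squares.extend("." * int(ch))  # digits encode runs of empty squares
                else:
                    squares.append(ch)
            board.append(squares)
        return board

    for rank in board_from_fen(START_FEN):
        print(" ".join(rank))

One short line of text, plus a handful of side conditions (whose move it is, castling rights, and so on), captures everything there is to the chess world.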

It is quite otherwise in the case of the real world and the operations of the brain in that world. We start with the real world as given to the senses. That is the basis for the primary geometric footprint of any computer system. In particular, that is what defines the possibilities for adhesion in a perceptual-cognitive system.[2] The relationship between the geometric footprint of a computer system and the world is not very well-defined; it is fuzzy and complex.

Through the abstractive capacities of the cognitive system, features of the physical world can be and are redefined, and new entities can be introduced into cognition.[3] As examples of the first, consider salt and sodium chloride. The first is an entity given to the senses, while the second is defined within the conceptual system of 19th-century chemistry. Similarly, where the senses see two entities, the Morning Star and the Evening Star, astronomers see only one entity, the planet Venus.[4] As an example of the second, think of charity, understood as doing something nice for someone without thought of reward. That is the mechanism of metalingual definition as discussed in [3] and in this post, Does ChatGPT know what a tragedy is?

Contemporary large language models (LLMs), such as the one at the core of ChatGPT, do not have direct access to the physical world. They must approximate human cognitive capacities through the relationality implicit in existing written texts. It is a matter of some dispute whether or not this relationality, if sampled sufficiently, is an adequate basis for a computer system to achieve AGI, artificial general intelligence. I do not think it is.
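As a toy illustration of what “relationality implicit in written texts” means at its crudest, the Python sketch below (my own example, and nothing like the scale or sophistication of a transformer model) builds word co-occurrence counts from a four-sentence corpus. Words acquire similar profiles purely from the company they keep; no adhesion to the world is involved.

    # Toy illustration: relationality extracted from text alone.
    # Words get a profile from their neighbors; no contact with the
    # physical world is involved. A crude stand-in for what LLMs do
    # at vastly greater scale.

    from collections import Counter, defaultdict

    corpus = [
        "the knight moved across the board",
        "the bishop moved across the board",
        "the star rose over the horizon",
        "the planet rose over the horizon",
    ]

    window = 2
    cooc = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.split()
        for i, w in enumerate(words):
            for j in range(max(0, i - window), min(len(words), i + window + 1)):
                if i != j:
                    cooc[w][words[j]] += 1

    # "knight" and "bishop" end up with matching neighbor profiles, as do
    # "star" and "planet" -- similarity inferred from text, not from the world.
    print(cooc["knight"])
    print(cooc["bishop"])

Whether relationality of this general kind, sampled at the scale of the whole web, suffices for general intelligence is exactly the point in dispute.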

Beyond this, we do not know what kind of computational system will be required for a “complete and adequate” simulation of human cognitive capacities. The relationship between the structured physical system that is the brain and the physical world is vast, complex, and ill-defined. It should be obvious from this brief discussion, however, that a computer system that is adequate for chess will not, on the face of it, be adequate for all of human cognition.

* * * * *

[1] I have a working paper where I sketch out a scheme whereby the brain, as a complex dynamical system, can implement language, a symbolic system: Relational Nets Over Attractors, A Primer: Part 1, Design for a Mind, Working Paper, June 20, 2022, 73 pp., https://www.academia.edu/81911617/Relational_Nets_Over_Attractors_A_Primer_Part_1_Design_for_a_Mind

[2] Here I am referring to the three-part scheme for meaning that I have outlined in various places:

  • meaning consists of intention plus semanticity, where intention inheres in the relationship between two speakers, and
  • semanticity consists of adhesion plus relationality, where adhesion connects perception and cognition to the external world and relationality is about the relationships among elements in a perceptual-cognitive system. See e.g. this post: Semanticity: adhesion and relationality.

[3] See, e.g. William Benzon and David Hays, The Evolution of Cognition, Journal of Social and Biological Structures. 13(4): 297-320, 1990, https://www.academia.edu/243486/The_Evolution_of_Cognition

[4] These examples are discussed in William Benzon, Ontology of Common Sense, Hans Burkhardt and Barry Smith, eds. Handbook of Metaphysics and Ontology, Muenchen: Philosophia Verlag GmbH, 1991, pp. 159-161, https://www.academia.edu/28723042/Ontology_of_Common_Sense

* * * * *

Bonus: Consider this clip from Yann LeCun's recent conversation with Lex Fridman:

LeCun's point is simple: the amount of visual (and only visual) information that a four-year-old has taken in far exceeds the amount of information our largest LLMs have been trained on.
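A rough back-of-the-envelope in the spirit of that argument, with figures that are my own illustrative assumptions rather than LeCun's exact numbers from the clip:

    # Back-of-the-envelope comparison; all figures below are assumed for
    # illustration and may differ from the numbers LeCun uses.

    seconds_awake = 4 * 365 * 12 * 3600    # ~4 years at ~12 waking hours/day
    optic_bandwidth = 1e7                  # assume ~10 MB/s across both optic nerves
    visual_bytes = seconds_awake * optic_bandwidth

    llm_tokens = 1e13                      # assume ~10 trillion training tokens
    bytes_per_token = 2                    # rough average for subword tokens
    llm_bytes = llm_tokens * bytes_per_token

    print(f"child's visual intake: ~{visual_bytes:.1e} bytes")
    print(f"LLM training text:     ~{llm_bytes:.1e} bytes")
    print(f"ratio:                 ~{visual_bytes / llm_bytes:.0f}x")

Even with conservative assumptions, the child's visual intake comes out an order of magnitude or more beyond the text an LLM is trained on, and that is vision alone.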
