Kevin J. Mitchell, Why free will is required for true artificial intelligence, excerpted from FREE AGENTS: How Evolution Gave Us Free Will, Princeton University Press, 2023:
Understanding causality can’t come from passive observation, because the relevant counterfactuals often do not arise. If X is followed by Y, no matter how regularly, the only way to really know that it is a causal relation is to intervene in the system: to prevent X and see if Y still happens. The hypothesis has to be tested. Causal knowledge thus comes from causal intervention in the world. What we see as intelligent behavior is the payoff for that hard work.
The implication is that artificial general intelligence will not arise in systems that only passively receive data. They need to be able to act back on the world and see how those data change in response. Such systems may thus have to be embodied in some way: either as physical robots or as software entities that can act in simulated environments.
Artificial general intelligence may have to be earned through the exercise of agency.
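To make Mitchell's observation/intervention distinction concrete, consider a toy simulation (my construction, not his): a hidden common cause Z drives both X and Y, so an observer who only watches sees X and Y occur together every time, exactly as if X caused Y. Only by intervening, clamping X off and checking whether Y still happens, can the two hypotheses be told apart.

```python
import random

def world(intervene_x=None):
    """One trial of a toy world where a hidden cause Z drives both X and Y."""
    z = random.random() < 0.5                       # hidden common cause
    x = z if intervene_x is None else intervene_x   # intervention overrides X
    y = z                                           # Y depends on Z only, never on X
    return x, y

random.seed(0)

# Passive observation: X and Y co-occur perfectly, as if X caused Y.
obs = [world() for _ in range(10000)]
p_y_given_x = sum(y for x, y in obs if x) / sum(x for x, _ in obs)
print(f"Observed P(Y|X)    = {p_y_given_x:.2f}")   # ~1.00

# Intervention: clamp X to False and check whether Y still happens.
exp = [world(intervene_x=False) for _ in range(10000)]
p_y_do_not_x = sum(y for _, y in exp) / len(exp)
print(f"P(Y | do(X=False)) = {p_y_do_not_x:.2f}")  # ~0.50: Y persists without X
```

The observed conditional probability and the interventional one disagree, and no amount of further passive data would reveal that. You have to act on the system.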
Note that machine learning involves a kind of micro-level agency. In the case of transformers, for example, the training engine has to guess/predict the next word, and the model-in-progress is modified according to whether or not the guess was correct. Rodney Brooks' pioneering robot, Genghis, was built on a similar principle: “Brooks wanted to solve the problem of how to make robots intelligent and suggested that it is possible to create robots that displayed intelligence by using a ‘subsumption architecture,’ which is a type of reactive robotic architecture where a robot can react to the world around it.” Genghis, however, functioned in a very limited environment.
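Since that quoted description of subsumption architecture is rather compressed, here's a minimal sketch of the idea (a toy of my own, not Brooks' code): behaviors are stacked in layers, each reacting directly to what is sensed, with higher-priority layers subsuming, that is, overriding, lower ones whenever their triggering condition holds.

```python
# Toy subsumption-style controller (my illustration, not Brooks' code):
# layered reflexes, each reacting directly to the sensed world, with
# higher layers overriding ("subsuming") lower ones when they apply.
LAYERS = [  # highest priority first
    ("avoid",  lambda s: s.get("obstacle", False), "turn away"),
    ("wander", lambda s: True,                     "walk forward"),
]

def act(sensors):
    for name, condition, action in LAYERS:
        if condition(sensors):
            return f"{name}: {action}"

print(act({"obstacle": False}))  # wander: walk forward
print(act({"obstacle": True}))   # avoid: turn away
```

The guess-and-correct loop in transformer training can be caricatured the same way. Here is a bigram "language model" of my own devising, nothing like a production transformer in scale or mechanism, but it shows the loop: guess the next word, check the guess, adjust the model toward the word that actually occurred.

```python
from collections import defaultdict

# A bigram count table stands in for a transformer's billions of weights.
counts = defaultdict(lambda: defaultdict(int))

def guess(prev):
    """Predict the continuation of `prev` seen most often so far."""
    seen = counts[prev]
    return max(seen, key=seen.get) if seen else None

def train(text, epochs=3):
    words = text.split()
    for epoch in range(epochs):
        hits = 0
        for prev, nxt in zip(words, words[1:]):
            hits += guess(prev) == nxt   # check the guess...
            counts[prev][nxt] += 1       # ...then adjust toward the actual word
        print(f"epoch {epoch}: {hits}/{len(words) - 1} guesses correct")

train("the cat sat on the mat and the cat slept")
print("after 'the':", guess("the"))   # 'cat', the majority continuation
```

Real training replaces the count table with gradient updates to network weights, but the agency, such as it is, lives in that loop, and the loop runs in the training engine, not in the finished model.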
As for LLMs, the agency is in the engine that creates them, not in the model itself. A necessary but not sufficient condition for endowing the model with agency is making a model that can itself learn, that is, one that can change its own structure. At the moment that is not possible. Once a model has been trained, it cannot be altered. You can't add new information to the model or, for that matter, delete information from it.
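To continue the toy example above (again, my illustration, not a claim about any particular system's internals): what gets deployed is, in effect, a frozen lookup structure. Inference reads it; nothing writes to it.

```python
from types import MappingProxyType

# The trained toy model from above, "deployed": the table is frozen, so
# generation can read the weights but nothing can write to them.
trained = {"the": {"cat": 3, "mat": 1}, "cat": {"sat": 2, "slept": 1},
           "sat": {"on": 1}, "on": {"the": 1}}
frozen = {prev: MappingProxyType(nxt) for prev, nxt in trained.items()}

def generate(start, steps=5):
    out = [start]
    for _ in range(steps):
        seen = frozen.get(out[-1])
        if not seen:
            break
        out.append(max(seen, key=seen.get))  # read-only lookup, no learning
    return " ".join(out)

print(generate("the"))
# frozen["the"]["dog"] = 1  # TypeError: the deployed model can't absorb
#                           # new information, or shed old information
```

A model with even minimal agency would have to be able to rewrite that structure in the light of what happens to it while it runs, which is precisely what current architectures rule out.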
* * * * *
On the fixed nature of LLMs see my post, Physical constraints on computing, process and memory, Part 1 [LeCun], July 24, 2023.
More generally, see my post, Minds are built from the inside [evolution, development], December 25, 2021, and, for neuroscientists, Consciousness, reorganization and polyviscosity, Part 4: Glia, August 20, 2022.
While we're at it, check out my various posts on the perceptual control theory of William Powers, in particular, Behavior: The Control of Perception – Bill Powers rediscovered, again!, January 10, 2020, and A phylogenetic approach to the neural basis of behavior, May 4, 2023.