Two papers
Kotseruba, I., Tsotsos, J.K. 40 years of cognitive architectures: core cognitive abilities and practical applications. Artif Intell Rev 53, 17–94 (2020). https://doi.org/10.1007/s10462-018-9646-y
Abstract: In this paper we present a broad overview of the last 40 years of research on cognitive architectures. To date, the number of existing architectures has reached several hundred, but most of the existing surveys do not reflect this growth and instead focus on a handful of well-established architectures. In this survey we aim to provide a more inclusive and high-level overview of the research on cognitive architectures. Our final set of 84 architectures includes 49 that are still actively developed, and borrow from a diverse set of disciplines, spanning areas from psychoanalysis to neuroscience. To keep the length of this paper within reasonable limits we discuss only the core cognitive abilities, such as perception, attention mechanisms, action selection, memory, learning, reasoning and metareasoning. In order to assess the breadth of practical applications of cognitive architectures we present information on over 900 practical projects implemented using the cognitive architectures in our list. We use various visualization techniques to highlight the overall trends in the development of the field. In addition to summarizing the current state-of-the-art in the cognitive architecture research, this survey describes a variety of methods and ideas that have been tried and their relative success in modeling human cognitive abilities, as well as which aspects of cognitive behavior need more research with respect to their mechanistic counterparts and thus can further inform how cognitive science might progress.
Laird, J. E., Lebiere, C., & Rosenbloom, P. S. (2017). A Standard Model of the Mind: Toward a Common Computational Framework across Artificial Intelligence, Cognitive Science, Neuroscience, and Robotics. AI Magazine, 38(4), 13-26. https://doi.org/10.1609/aimag.v38i4.2744
A standard model captures a community consensus over a coherent region of science, serving as a cumulative reference point for the field that can provide guidance for both research and applications, while also focusing efforts to extend or revise it. Here we propose developing such a model for humanlike minds, computational entities whose structures and processes are substantially similar to those found in human cognition. Our hypothesis is that cognitive architectures provide the appropriate computational abstraction for defining a standard model, although the standard model is not itself such an architecture. The proposed standard model began as an initial consensus at the 2013 AAAI Fall Symposium on Integrated Cognition, but is extended here through a synthesis across three existing cognitive architectures: ACT-R, Sigma, and Soar. The resulting standard model spans key aspects of structure and processing, memory and content, learning, and perception and motor, and highlights loci of architectural agreement as well as disagreement with the consensus while identifying potential areas of remaining incompleteness. The hope is that this work will provide an important step toward engaging the broader community in further development of the standard model of the mind.
Toward a Common Model
I found out about those articles in a recent article published by Rosenbloom, Lebiere, and Laird (the authors of the second article), Cross-pollination among neuroscience, psychology and AI research yields a foundational understanding of thinking, The Conversation, July 25, 2022. I skimmed until I came to these paragraphs:
This Common Model of Cognition divides humanlike thought into multiple modules, with a short-term memory module at the center of the model. The other modules – perception, action, skills and knowledge – interact through it.
Learning, rather than occurring intentionally, happens automatically as a side effect of processing. In other words, you don’t decide what is stored in long-term memory. Instead, the architecture determines what is learned based on whatever you do think about. This can yield learning of new facts you are exposed to or new skills that you attempt. It can also yield refinements to existing facts and skills.
The modules themselves operate in parallel; for example, allowing you to remember something while listening and looking around your environment. Each module’s computations are massively parallel, meaning many small computational steps happening at the same time. For example, in retrieving a relevant fact from a vast trove of prior experiences, the long-term memory module can determine the relevance of all known facts simultaneously, in a single step.
It’s that middle paragraph that caught my attention. Why? Because it is and isn’t true. Sure, a lot of learning is a side effect, as they say. Speaking is perhaps the classic example. But it is also the case that we do devote enormous effort to deliberate learning. That’s what happens in school. Just why they gloss over it is a mystery. However...
Automatic vs. deliberate learning
This speaks to the issues I raised in my recent post, Physical constraints on computing, process and memory, Part 1 [LeCun]. There I was concerned with the distinction Jerry Fodor and Zenon Pylyshyn drew between “classical” theories of cognition, where there is an explicit distinction between memory and program, and connectionist accounts, where memory and program are interwoven in a single structure. Classical systems can easily acquire new knowledge by adding more memory; the structure of the program is unaffected. Connectionist systems are not like that.
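To make that contrast concrete for myself, here is a minimal sketch in Python, my own toy and not anything from Fodor, Pylyshyn, or the papers above. The “classical” side is just a dictionary plus a fixed retrieval procedure: adding a fact never touches the code. The “connectionist” side is a tiny Hebbian auto-associator in which every stored pattern is folded into one shared weight matrix, so storing something new shifts the recall of everything already there.

    import numpy as np

    # "Classical": the program and the memory are distinct things.
    facts = {}                          # passive memory store

    def add_fact(key, value):
        facts[key] = value              # new knowledge = new entry; the procedure is untouched

    def recall_fact(key):
        return facts.get(key)           # fixed retrieval procedure

    # "Connectionist": one weight matrix holds all of the memories at once.
    N = 16
    W = np.zeros((N, N))

    def store_pattern(p):
        """Store a +/-1 pattern by folding it into the shared weights (Hebbian outer product)."""
        global W
        W += np.outer(p, p)
        np.fill_diagonal(W, 0.0)

    def recall_pattern(cue):
        """Retrieval is a computation over the weights (one update step), not a table lookup."""
        return np.sign(W @ cue)

    # Calling store_pattern() on something new changes what recall_pattern()
    # returns for old cues too: program and memory are interwoven.

That last property is exactly what makes “just add more memory” a non-option on the connectionist side.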
To a first approximation the human nervous system seems to be a connectionist system. Each neuron seems to be both an active unit and a memory unit. There is no obvious division between a central processor, where all the programming resides, and a passive memory store. And yet, we learn, all the time we learn. How is that possible?
In that post I cited research by Walter Freeman on the sense of smell. It seems that when a new odorant is learned, the entire ‘landscape’ of odorant memory is changed. That is, not only is a new item added to the landscape, but the response patterns of existing items are changed as well. That’s what we would expect in a connectionist model. Just how the brain does this is obscure, though I offered an off-the-cuff speculation.
Anyhow, let’s say that what Freeman was observing was the automatic memory that happens in the course of ordinary processing. Let us say that automatic memory is consonant with those ordinary processes. Deliberate memory is necessary to learn things that are dissonant with those processes. Let’s leave those two terms, consonant and dissonant, undefined beyond their contrastive use. We – me or someone else – can worry about a more thorough characterization later.
Deliberate learning: arithmetic, the method of loci
As an example of deliberate learning, consider arithmetic. It begins with learning the meaning of number names by enumerating collections of objects and then by learning the tables for addition, subtraction, multiplication, and division. This process requires considerable drill. Let’s hypothesize that that is necessary to overcome the inertia, the viscosity – to use a term I introduced in that earlier post – of the automatic process.
As a result of this drill, a foundation is laid on which one can then learn how to do more complex calculations. Considerable drill is required to become fluent in that process. But we’ve got three kinds of drill going on.
1. Meaning of number words: this is an episodic procedure that establishes the meaning of a small number of words. To determine whether any of the words applies to a collection of objects, execute the procedure.
2. Learning the arithmetic tables: this is straight memorization of system items, each having the form: numeral, operation, numeral, equals, numeral.
3. Learning multiple-digit calculation: this is an episodic-level set of procedures in which one calls up items from the arithmetic tables and applies them in succession to pairs and n-tuples of multiple-digit numbers.
The episodic procedures, 1 and 3, are dissonant with respect to ordinary episodic processes, such as moving about the physical world, while the system procedure, 2, is dissonant with respect to the ordinary processes of learning the meanings of words.
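To keep those three layers straight, here is a toy rendering in Python, my own framing and only loosely analogous to what a child actually does: a counting procedure that gives number words their meaning, a rote-memorized single-digit table, and a multi-digit routine that does nothing but call up table entries in order and manage the carries.

    from itertools import zip_longest

    # 1. Meaning of number words: an enumeration procedure, not a stored fact.
    NUMBER_WORDS = ["zero", "one", "two", "three", "four",
                    "five", "six", "seven", "eight", "nine"]

    def count_collection(collection):
        """Enumerate the items; the word reached at the end names the collection's size."""
        n = 0
        for _ in collection:
            n += 1
        return NUMBER_WORDS[n]          # small collections only, like early counting drill

    # 2. The arithmetic tables: rote-memorized items, (numeral, numeral) -> numeral.
    ADDITION_TABLE = {(a, b): a + b for a in range(10) for b in range(10)}

    # 3. Multi-digit calculation: a procedure that applies table items in succession.
    def add_multidigit(x_digits, y_digits):
        """Add two numbers given as digit lists (least significant digit first),
        composing single-digit table lookups with a running carry."""
        result, carry = [], 0
        for a, b in zip_longest(x_digits, y_digits, fillvalue=0):
            total = ADDITION_TABLE[(a, b)] + carry
            result.append(total % 10)
            carry = total // 10
        if carry:
            result.append(carry)
        return result

    # add_multidigit([7, 2], [5, 3]) -> [2, 6], i.e. 27 + 35 = 62

The division of labor is the point: 2 is pure memorization, while 1 and 3 are procedures, and 3 leans on 2 at every step.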
As another example, consider the method of loci, sometimes known as the memory palace. Here’s the account I gave in my working paper on Visual Thinking:
The locus classicus for any discussion of visual thinking is the method of loci, a technique for aiding memory invented by Greek rhetoricians and which, over a course of centuries, served as the starting point for a great deal of speculation and practical elaboration — an intellectual tradition which has been admirably examined by Frances Yates. The idea is simple. Choose some fairly elaborate building, a temple was usually suggested, and walk through it several times along a set path, memorizing what you see at various fixed points on the path. These points are the loci which are the key to the method. Once you have this path firmly in mind so that you can call it up at will, you are ready to use it as a memory aid. If, for example, you want to deliver a speech from memory, you conduct an imaginary walk through your temple. At the first locus you create a vivid image which is related to the first point in your speech and then you “store” that image at the locus. You repeat the process for each successive point in the speech until all of the points have been stored away in the loci on the path through the temple. Then, when you give your speech you simply start off on the imaginary path, retrieving your ideas from each locus in turn. The technique could also be used for memorizing a speech word-for-word. In this case, instead of storing ideas at loci, one stored individual words.
The process starts with choosing a suitable building and memorizing it. That’s deliberate learning. Think of it as analogous to the three kinds of drill involved in learning arithmetic calculation.
Actually using the memorized building for a specific task is deliberate learning as well. Here the deliberation is confined to associating the items to be learned with positions in the palace. One is learning a collection of system links. The idea that one is to use vivid images no doubt reflects the inherent nature of the nervous system; it is an exhortation to consonance.
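The bookkeeping in the method is simple enough to write down. Here is a small sketch in Python, my gloss and nothing more, with the walked path as an ordered list of loci and storage as an association from each locus to an item. What the sketch cannot capture is the vividness of the images, which is where the exhortation to consonance comes in.

    class MemoryPalace:
        def __init__(self, loci):
            # Memorizing the building: a fixed, ordered path of loci, learned once.
            self.loci = list(loci)
            self.images = {}            # locus -> stored item

        def store(self, items):
            """Associate each item, in order, with the next locus on the path."""
            if len(items) > len(self.loci):
                raise ValueError("more items than loci on the path")
            self.images = dict(zip(self.loci, items))

        def recall(self):
            """Walk the path again, retrieving whatever was stored at each locus."""
            return [self.images[locus] for locus in self.loci if locus in self.images]

    # Using the palace for a particular speech:
    palace = MemoryPalace(["entrance", "altar", "east column", "west column"])
    palace.store(["opening anecdote", "first argument", "objection", "peroration"])
    print(palace.recall())              # walk the path; the points come back in order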
More later.