There’s a big conversation going on in the AI world these days about appropriate architectures: Can new-style machine-learning/neural-net methods take us all the way to the Holy Land or do we need to incorporate old-style structured symbolic models? My sense is that most current practitioners lean toward the former position; I favor the latter. (As for the Holy Land, it’s a mirage.)
In the first section I present an important, if somewhat neglected, paper from 1975 and what David Hays and I made of it. Then I offer some tweets culled from the current stream. I conclude with some crazy stuff, my notes on attractor networks and the fluid mind. That crazy stuff is about blending the two kinds of logic, intermixing structured symbolic systems with data-driven machine/deep learning systems. Something like that.
Yevick’s Law: Two Kinds of Logic
As Louis Armstrong used to say, it’s one of those old time good ones:
Yevick, Miriam Lipschutz (1975) Holographic or Fourier logic. Pattern Recognition 7: 197-213.
https://doi.org/10.1016/0031-3203(75)90005-9
Abstract: A tentative model of a system whose objects are patterns on transparencies and whose primitive operations are those of holography is presented. A formalism is developed in which a variety of operations is expressed in terms of two primitives: recording the hologram and filtering. Some elements of a holographic algebra of sets are given. Some distinctive concepts of a holographic logic are examined, such as holographic identity, equality, containment and “association”. It is argued that a logic in which objects are defined by their “associations” is more akin to visual apprehension than description in terms of sequential strings of symbols.
Yes, it was published in 1975, which is ancient times in the world of artificial intelligence. It was inspired by a body of theorizing and evidence – promulgated by Karl Pribram, among others – that neocortical processing was based on holographic principles rather than those of propositional/symbolic logic. It seems to me that what Yevick called holographic logic is similar in spirit, and in some respects even in its mathematics, to current work on neural networks, while, in contrast, ordinary logic is, as the abstract has it, "description in terms of sequential strings of symbols."
A decade later David Hays and I called on that paper in a highly speculative synthesis of a variety of work in cognitive, neural, perceptual, and comparative psychology with a computational orientation:
William Benzon and David Hays, Principles and Development of Natural Intelligence, Journal of Social and Biological Structures, Vol. 11, No. 8, July 1988, 293-322.
https://www.academia.edu/235116/Principles_and_Development_of_Natural_Intelligence
We sketched out five principles. The fourth principle, which we called the figural principle, was based on Yevick’s work. Here’s how we opened the discussion:
The figural principle concerns the relationship between Gestalt or analogue process in neural schemas and propositional or digital processes. In our view, both are necessary; the figural principle concerns the relationship between the two types of process. The best way to begin is to consider Miriam Yevick's work (1975, 1978) on the relationship between 'descriptive and holistic' (analogue) and 'recursive and ostensive' (digital) processes in representation.
The critical relationship is that between the complexity of the object and the complexity of the representation needed to ensure specific identification. If the object is simple, e.g. a square, a circle, a cross, a simple propositional schema will yield a sharp identification, while a relatively complex Gestalt schema will be required for an equivalently good identification (see Fig. 10). Conversely, if the object is complex, e.g. a Chinese ideogram, a face, a relatively simple Gestalt (Yevick used Fourier transforms) will yield a sharp identification, while an equivalently precise propositional schema will be more complex than the object it represents. Finally, we have those objects which fall in the middle region of Figure 10, objects that have no particularly simple description by either Gestalt or propositional methods and instead require an interweaving of both. That interweaving is the figural principle.

[Fig. 10. Yevick's law. The curves indicate the level of representational complexity required for a good identification.]

Definition. The figural mechanism brings environments of moderate complexity within the limits of computability by putting a propositional assemblage of local narrow band-width Gestalts into a framework provided by global wide band-width analysis to achieve cross-validation of the two analyses.
At various points later in the essay we speak of the propositional reconstruction of Gestalt processes. In our view the mind evolves through an interweaving of these two kinds of logic. Both are necessary.
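To make Yevick's contrast concrete, here is a minimal sketch in Python. The stand-ins are my own: a couple of symbolic predicates serve as the "propositional" identifier, and FFT cross-correlation (Yevick worked with Fourier transforms) serves as the "Gestalt" identifier. The shapes, function names, and the random pattern standing in for an ideogram are illustrative assumptions, not anything from Yevick's formalism or from our paper.

```python
# Toy rendering of Yevick's contrast (my stand-ins, not her formalism):
# - "propositional" identification: a handful of symbolic predicates
# - "Gestalt" identification: whole-pattern matching via FFT cross-correlation
import numpy as np

def make_square(n=32, side=16):
    # a simple object: a filled square
    img = np.zeros((n, n))
    a = (n - side) // 2
    img[a:a+side, a:a+side] = 1.0
    return img

def make_ideogram(n=32, seed=0):
    # a complex object: a dense random pattern stands in for an ideogram
    rng = np.random.default_rng(seed)
    return (rng.random((n, n)) > 0.5).astype(float)

def propositional_id_square(img):
    # two cheap predicates suffice for a square: equal sides,
    # and the bounding box coincides with the filled support
    ys, xs = np.nonzero(img)
    h, w = np.ptp(ys) + 1, np.ptp(xs) + 1
    box = img[ys.min():ys.max()+1, xs.min():xs.max()+1]
    return bool(h == w and np.all(box == 1.0))

def gestalt_id(img, template):
    # normalized peak of circular cross-correlation: one global, holistic test
    f = np.fft.fft2(img) * np.conj(np.fft.fft2(template))
    corr = np.fft.ifft2(f).real
    return corr.max() / np.sqrt((img**2).sum() * (template**2).sum())

square, ideogram = make_square(), make_ideogram()
print(propositional_id_square(square))                   # True: a few predicates, sharp
print(round(gestalt_id(ideogram, make_ideogram()), 3))   # 1.0: one global filter, sharp
# A propositional description of the ideogram would need on the order of
# n*n clauses, i.e. more complex than the object itself: Yevick's crossover.
```

The point of the toy is just the asymmetry: the square falls out of a few predicates but would need a full-resolution template to match holistically with equal precision, while the ideogram matches sharply against one global filter but would explode any predicate list.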
Subsequently we used this line of thought in a paper about metaphor:

William Benzon and David Hays, Metaphor, Recognition, and Neural Process, The American Journal of Semiotics, Vol. 5, No. 1 (1987), 59-80.
https://www.academia.edu/238608/Metaphor_Recognition_and_Neural_Process
Karl Pribram's concept of neural holography suggests a neurological basis for metaphor: the brain creates a new concept by the metaphoric process of using one concept as a filter — better, as an extractor — for another. For example, the concept “Achilles” is “filtered” through the concept “lion” to foreground the pattern of fighting fury the two hold in common. In this model the linguistic capacity of the left cortical hemisphere is augmented by the capacity of the right hemisphere for analysis of images. Left-hemisphere syntax holds the tenor and vehicle in place while a right-hemisphere imaging process extracts the metaphor ground. Metaphors can be concatenated one after the other so that the ground of one metaphor can enter into another one as tenor or vehicle. Thus conceived, metaphor is a mechanism through which thought can be extended into new conceptual territory.
Caveat: Students of cognitive linguistics should think of this as an account of blending rather than (cognitive) metaphor.
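For what it's worth, here is a toy rendering of the filter/extractor idea. Treating concepts as feature-weight dictionaries and the "filter" as a product over shared features is my illustrative stand-in for the holographic operation; the features and weights are made up for the example.

```python
# Toy sketch of "filtering" one concept through another (my stand-in for
# the holographic filter: a product of weights over shared features).
achilles = {"warrior": 0.9, "fury": 0.8, "greek": 1.0, "mortal": 0.7}
lion     = {"fury": 0.9, "predator": 1.0, "tawny": 0.6, "four_legged": 1.0}

def filter_through(tenor, vehicle):
    # the ground of the metaphor: features the two concepts share,
    # weighted by how strongly each concept holds them
    return {f: tenor[f] * vehicle[f] for f in tenor.keys() & vehicle.keys()}

ground = filter_through(achilles, lion)
print(ground)  # {'fury': 0.72}: fighting fury foregrounded, the rest filtered out

# Concatenation: the ground can re-enter a new metaphor as tenor or vehicle
storm = {"fury": 1.0, "wind": 0.9}
print(filter_through(ground, storm))  # {'fury': 0.72}
```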
Some current work, from the Twitterverse
"Graph Neural Networks Meet Neural-Symbolic Computing: A Survey and Perspective": updated, #IJCAI2020 https://t.co/FiRnLSbfXl w/@AvilaGarcez @marceloprates_ @vardi @Melleo54Sis Avelar, drawing inspiration from many: @GaryMarcus @kahneman_daniel E. Davis et al #neuralsymbolic #AI pic.twitter.com/gXiKBR55dV— Luis Lamb (@luislamb) June 1, 2020
GPT-3: Language Models are Few-Shot Learners, by @notTomBrown et al. “We train GPT-3, an autoregressive language model with 175 billion parameters, 10x more than any previous non-sparse language model, and test its performance in the few-shot setting.” https://t.co/qhcHPSH22I — hardmaru (@hardmaru) May 29, 2020
Exactly what you get when you look for your keys where the streetlight is. https://t.co/PkGOtWsPyu— Gary Marcus (@GaryMarcus) May 30, 2020
Attractor nets
This work is very sketchy and unpolished and I’m just a little embarrassed to present it in public. I do so because I’ve taken it pretty much as far as I can. Perhaps others will find some value in it.
Attractor Nets, Series I: Notes Toward a New Theory of Mind, Logic, and Dynamics in Relational Networks, Working Paper, 52 pp.
https://www.academia.edu/9012847/Attractor_Nets_Series_I_Notes_Toward_a_New_Theory_of_Mind_Logic_and_Dynamics_in_Relational_Networks
Abstract: These notes explore the use of Sydney Lamb’s relational network notion for linguistics to represent the logical structure of a complex collection of attractor landscapes (as in Walter Freeman’s account of neuro-dynamics). Given a sufficiently large system, such as a vertebrate nervous system, one might want to think of the attractor net as itself being a dynamical system, one at a higher order than that of the dynamical systems realized at the neuronal level. A mind is a fluid attractor net of fractional dimensionality over a neural net whose behavior displays complex dynamics in a state space of unbounded dimensionality. The attractor-net moves from one discrete state (frame) to another while the underlying neural net moves continuously through its state space.
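As a rough illustration of that last sentence, here is a toy sketch: a continuous Hopfield-style relaxation stands in for the underlying neural net, and the discrete "frame" is read off as the index of the nearest stored pattern. The patterns, weights, and update rule are generic textbook choices, not the notation developed in the working paper.

```python
# Minimal two-level sketch: continuous neural dynamics below, discrete
# frames above (my stand-ins, not the attractor-net notation itself).
import numpy as np

patterns = np.array([[1, -1, 1, -1, 1, -1],
                     [1,  1, 1, -1, -1, -1]], dtype=float)
W = patterns.T @ patterns / patterns.shape[1]   # Hebbian weight matrix
np.fill_diagonal(W, 0.0)

def frame(x):
    # discrete state: which attractor's basin does x currently sit in?
    return int(np.argmax(patterns @ np.sign(x)))

x = np.array([0.2, -0.1, 0.3, 0.1, 0.4, -0.2])  # mixed initial state
for step in range(30):
    x = x + 0.2 * (-x + W @ np.tanh(2.0 * x))   # continuous relaxation
    if step % 10 == 0:
        print(step, frame(x), np.round(x, 2))
# the trajectory is continuous; the frame label changes, if at all, discretely
```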
Attractor Nets 2011: Diagrams for a New Theory of Mind, Working Paper.
https://www.academia.edu/9012847/Attractor_Nets_Series_I_Notes_Toward_a_New_Theory_of_Mind_Logic_and_Dynamics_in_Relational_Networks
Introduction: This is a series of diagrams based on the informal ideas presented in Attractor Nets, Series I: Notes Toward a New Theory of Mind, Logic and Dynamics in Relational Networks, which explains the notational conventions and discusses the constructions. These diagrams should be used in conjunction with that document, which contains and discusses many of them. In particular, the diagrams in the first three sections are without annotation, but they are explained in the Attractor Nets paper.
The rest of the diagrams are annotated, but depend on ideas developed in the attractor nets paper.
The discussions of Variety and Fragments of Language compare the current notation, based on the work of Sydney Lamb, with a more conventional notation. In Lamb’s notation, nodes are logical operators (AND, OR), while in the more conventional notation nodes are concepts. The Lamb-based notation is more complex, but also fuller.
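A crude way to see the difference is to write both down as data structures. In the sketch below the fragment, the node labels, and the satisfaction test are all my own illustrative choices; Lamb's notation is a diagrammatic formalism, and nothing here should be taken as a rendering of it.

```python
# Two notations for one toy fragment, roughly "a dog is an animal that barks".

# Conventional semantic network: nodes are concepts, edges are relations
concept_net = {
    ("dog", "ISA"):  "animal",
    ("dog", "DOES"): "bark",
}

# Lamb-style relational network: nodes are logical operators; "dog" exists
# only as a pattern of connectivity among AND/OR nodes
lamb_net = {
    "n1": ("AND", ["animal", "bark"]),   # the dog-like pattern
    "n2": ("OR",  ["n1", "n3"]),         # alternative realizations
    "n3": ("AND", ["animal", "meow"]),   # a cat-like sibling pattern
}

def satisfied(node, active, net):
    # does this operator node fire, given a set of active features?
    op, inputs = net[node]
    test = all if op == "AND" else any
    return test((i in active) or (i in net and satisfied(i, active, net))
                for i in inputs)

print(satisfied("n2", {"animal", "bark"}, lamb_net))  # True, via n1
print(satisfied("n2", {"animal", "meow"}, lamb_net))  # True, via n3
```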
From Associative Nets to the Fluid Mind, Working Paper.
https://www.academia.edu/9508938/From_Associative_Nets_to_the_Fluid_Mind
Abstract: We can think of the mind as a network that’s fluid on several scales of viscosity. Some things change very slowly, on a scale of months to years. Other things change rapidly, in milliseconds or seconds. And other processes are in between. The microscale dynamic properties of the mind at any time are context dependent. Under some conditions it will function as a highly structured cognitive network; the details of the network will of course depend on the exact conditions, both internal (including chemical) and external (what’s the “load” on the mind?). Under other conditions the mind will function more like a loose associative net. These notes explore these notions in a very informal way.
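One informal way to picture the structured-versus-loose contrast is a single weighted association net traversed under different thresholds: a high threshold admits only strong, structured links, while a low threshold lets weak associations through as well. The links, weights, and threshold values below are illustrative assumptions, nothing more.

```python
# Toy sketch of context-dependent "viscosity": one weighted association
# net, traversed under two regimes (threshold values are illustrative).
links = {
    ("storm", "rain"):     0.9, ("storm", "fury"):     0.4,
    ("rain",  "umbrella"): 0.8, ("fury",  "achilles"): 0.3,
}

def spread(seed, threshold, links):
    # spreading activation: follow only links at or above the threshold
    active, frontier = {seed}, [seed]
    while frontier:
        node = frontier.pop()
        for (a, b), w in links.items():
            if a == node and w >= threshold and b not in active:
                active.add(b)
                frontier.append(b)
    return active

print(spread("storm", 0.7, links))  # {'storm', 'rain', 'umbrella'}: structured
print(spread("storm", 0.2, links))  # everything reachable: loose associative net
```

Nothing in the sketch changes but one parameter, which is the point: the same net behaves like a structured cognitive network under one condition and like a loose associative net under another.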