Here are the two most recent entries from my intellectual diary, which consists mostly of short notations. The second entry is unusually long.
* * * * *
Emergent Ventures Grant
4.28.22
Applied in April for support to work over my attractor-net stuff: run it through the brain paper, on to “Kubla Khan,” and out into a differentiation between natural and artificial minds. The application was turned down on April 25, 2022. By that time I’d started thinking my way back into the old work on Attractor Nets.
3rd Version of “The Theory”
4.30.22
By which I mean the theory I’d been working on with Dave Hays. The first version was based on Mechanisms of Language. That was in place when I first met Hays in the Spring of 1974. Hays wrote Cognitive Structures in the Spring of 1975. That marks the second version of the theory, where Hays grounded cognition in the servomechanical model developed by William Powers (Behavior: The Control of Perception). My 1978 dissertation, “Cognitive Science and Literary Theory,” advanced that a notch, mainly with the addition of what I call The TV Tube model. Then Hays and I wrote and published “Principles and Development of Natural Intelligence” (1988).
That marks the beginning of the third phase of the theory – though we’d published on metaphor the year before. We’d completed the “brain paper” in ’85, I believe; the review process took three years. The brain paper retained the four degrees from the stage 2 theory – sensorimotor, systemic, episodic, gnomonic – and added a fifth at the bottom, modal. That was based on McCulloch’s model of the reticular activating system (RAS). We stuck Pribram’s holographic ‘model’ in at the second degree, sensorimotor, and the Powers stack at the third, systemic. But we didn’t actually know how to construct cognitive models (comparable to those of stages 1 & 2) in those terms.
I began that work in 2003 when I had the idea of taking Sydney Lamb’s relational notation and using it to express logical relations among the basins of attraction in patches of cortical tissue, such as Walter Freeman found in his work. That went well for two or three months, until I decided that things were beginning to seem arbitrary and unmotivated. So I stopped working. But by that time I had a pile of very interesting diagrams and some provocative prose. I created two documents, a text document (MS Word) and a diagrams document (PowerPoint), and sent those around to various people. I also came up with the idea of an open-ended natural language front-end for end-user software. I sent that around as well. Sydney Lamb thought it was a good idea. A decade later I put those three documents on the web on my Academia page.
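In retrospect, the core of that idea is simple enough to sketch in code. Here’s a toy version, purely my own illustration and not notation from the 2003 documents: basins of attraction as the elementary units, with Lamb-style AND and OR nodes expressing logical relations over them. All the names and the example are made up.

    from dataclasses import dataclass

    @dataclass
    class Basin:
        """A basin of attraction in a patch of cortical tissue (Freeman)."""
        patch: str       # which cortical patch the attractor lives in
        label: str       # informal name for the attractor state
        active: bool = False

    @dataclass
    class AndNode:
        """Lamb-style AND: satisfied only when every input basin is active."""
        inputs: list

        def satisfied(self) -> bool:
            return all(b.active for b in self.inputs)

    @dataclass
    class OrNode:
        """Lamb-style OR: satisfied when any input basin is active."""
        inputs: list

        def satisfied(self) -> bool:
            return any(b.active for b in self.inputs)

    # Toy usage: an olfactory attractor and a visual attractor jointly
    # satisfy an AND relation.
    rose_odor = Basin(patch="olfactory", label="rose")
    red_blob = Basin(patch="visual", label="red-blob")
    rose_percept = AndNode(inputs=[rose_odor, red_blob])

    rose_odor.active = True
    red_blob.active = True
    print(rose_percept.satisfied())  # True: both basins have settled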
And that was that, until a week or so ago. That’s when I decided it was time to get back into the fray. By then I’d been thinking seriously about work in machine learning, mostly in connection with cognitive criticism. But I’d also been thinking about NLP, especially the machine translation work. Along came GPT-3 and I got serious. I wrote a working paper, “GPT-3: Waterloo or Rubicon? Here be Dragons,” in 2020. I made real progress on that and began to get a sense that what’s going on in those engines is intelligible. The work of Peter Gärdenfors was important.
The upshot: By the time I’d submitted the proposal to Emergent Ventures I’d begun to think my way back into it. When I got turned down, I couldn’t stop. Yesterday I figured it out:
           | physical substrate           | data reduction                | concepts
Hays       | Powers stack (analog servos) | parameters of perception      | cognition
Gärdenfors | subsymbolic neural net       | conceptual spaces, dimensions | symbolic
What does that mean, figured it out? It means I finally made it across the continent and am viewing the Pacific Ocean. I’ve put a boundary around the territory. Most of the territory has yet to be explored, much less settled and domesticated.
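To make the middle column of that table, data reduction, a bit more concrete, here’s a toy sketch of the Gärdenfors row: a subsymbolic state (a high-dimensional activation vector) is reduced to a point in a low-dimensional conceptual space, which is then read symbolically by nearest prototype. Everything in it – the projection, the prototypes, the labels – is invented for illustration.

    import math
    import random

    random.seed(0)

    # Subsymbolic: a 100-dimensional activation vector standing in for
    # the state of some neural net.
    activation = [random.random() for _ in range(100)]

    # Data reduction: project onto two quality dimensions of a conceptual
    # space. An arbitrary linear projection stands in for whatever
    # reduction the system actually performs.
    proj = [[random.gauss(0, 1) for _ in range(100)] for _ in range(2)]
    point = [sum(w * a for w, a in zip(row, activation)) for row in proj]

    # Symbolic: label the point by its nearest prototype. Nearest-prototype
    # classification carves the space into convex (Voronoi) regions, which
    # is Gärdenfors' picture of what a concept is.
    prototypes = {"A": (-3.0, 0.0), "B": (3.0, 0.0), "C": (0.0, 4.0)}

    def nearest(pt, protos):
        return min(protos, key=lambda k: math.dist(pt, protos[k]))

    print(point, "->", nearest(point, prototypes))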
These diagrams are nice as well. The first relates to the work of Gärdenfors. Each rectangle is a domain, in his terminology. Conceptual spaces (again, his terminology) exist in different domains.
The second relates to both Benzon-Hays and Gärdenfors: the rectangles are from Gärdenfors; the network structure is from Benzon-Hays.
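Here’s a rough sketch, again purely illustrative, of what that second diagram depicts on my reading: Gärdenfors domains as bundles of quality dimensions, with a Benzon-Hays style relational network running over points located in those domains. The domains, dimensions, and relations below are all made up.

    # Rectangles: domains, each a bundle of quality dimensions.
    domains = {
        "color": ("hue", "saturation", "brightness"),
        "taste": ("sweet", "sour", "salty", "bitter"),
        "space": ("x", "y", "z"),
    }

    # Nodes: points located within a single domain.
    nodes = {
        "ripe-red": ("color", (0.98, 0.80, 0.55)),
        "sweet":    ("taste", (0.90, 0.10, 0.02, 0.05)),
        "here":     ("space", (0.0, 0.0, 0.0)),
    }

    # Network: labeled relations among the nodes, cutting across domains.
    # This is the part the rectangles alone don't capture.
    relations = [
        ("ripe-red", "co-occurs-with", "sweet"),
        ("ripe-red", "located-at", "here"),
    ]

    for head, rel, tail in relations:
        print(f"{head} ({nodes[head][0]}) --{rel}--> {tail} ({nodes[tail][0]})")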