Pages in this blog

Saturday, August 9, 2025

From Mirror Recognition to Low-Bandwidth Memory, A Working Paper

New working paper. Title above, links, abstract, contents, and introduction below:

Academia.edu: https://www.academia.edu/143347171/From_Mirror_Recognition_to_Low_Bandwidth_Memory_A_Working_Paper
SSRN: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5385194
ResearchGate: https://www.researchgate.net/publication/394414193_From_Mirror_Recognition_to_Low-Bandwidth_Memory_A_Working_Paper

Abstract: We start with a developmental and cognitive analysis of mirror recognition, highlighting its dependence, not on self-awareness per se, but on episodic-level intersensory coordination—a capacity that enables spatially dislocated, temporally synchronized associations across sensory modalities. In a layered control architecture of hyperorders (sensorimotor, systemic, episodic, gnomonic), such recognition can arise without invoking a representational “self.” We extended this framework to the role of the default mode network (DMN), which is orthogonal to the hyperorders—a low-bandwidth, drifting subsystem that provides broad, non-task-specific access to memory and perception. This led to an inquiry into associative memory systems, where we confronted the challenge of searching without specific content-based probes. To address this, we proposed the design of an “associative drift engine”: a cognitive module capable of variable-bandwidth access, modulating the precision, noise, and scope of its memory probes. This system mirrors the DMN’s exploratory function and suggests a foundational mechanism for spontaneous recollection, creative association, and cognitive play—essential features of both natural and artificial minds.
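To make the idea of variable-bandwidth memory probes concrete, here is a minimal sketch of what an "associative drift engine" might look like, assuming memories are stored as labeled vectors. The class and method names (DriftEngine, probe, drift) and the noise/scope parameters are my own illustration; the paper does not specify a concrete algorithm.

```python
import math
import random

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

class DriftEngine:
    """Hypothetical sketch of a DMN-like associative memory probe."""

    def __init__(self, memory, seed=0):
        self.memory = memory          # list of (label, vector) pairs
        self.rng = random.Random(seed)

    def probe(self, cue, noise=0.5, scope=3):
        """Blur the cue by `noise`, return the `scope` nearest memories.

        High noise + wide scope  -> diffuse, drifting, DMN-like access.
        Low  noise + narrow scope -> precise, task-focused recall.
        """
        blurred = tuple(x + self.rng.gauss(0.0, noise) for x in cue)
        ranked = sorted(self.memory,
                        key=lambda item: cosine(item[1], blurred),
                        reverse=True)
        return ranked[:scope]

    def drift(self, cue, steps=4, noise=0.5):
        """Free-associate: each retrieved item becomes the next probe."""
        trail = []
        for _ in range(steps):
            label, vec = self.probe(cue, noise=noise, scope=1)[0]
            trail.append(label)
            cue = vec
        return trail
```

Turning `noise` and `scope` up or down is one plausible reading of "variable-bandwidth access": the same store serves both targeted recall and spontaneous, wandering recollection.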

Background Notes
1. Self and Mirror Recognition
2. ChatGPT’s assessment of the account of mirror recognition
3. Default Mode Network
4. Toward an Associative Drift Engine
Summary of the discussion

Background Notes

The first major section of this document consists of pages 74 to 84 of my 1978 dissertation, Cognitive Science and Literary Theory, Department of English, State University of New York at Buffalo. Yes, I was in the English Department, and the dissertation uses examples from literature: the Oedipus story, the evolution of narrative form, and Shakespeare’s Sonnet 129. But I was also working closely with David Hays in the Linguistics Department. He was a first-generation researcher in machine translation, a field that transformed itself into computational linguistics in the mid-1960s.

During the period when I was in his research group – 1974 to 1978, when I finished my degree – we were sketching schemes for how to ground a symbolic cognitive system in the operations of a sensorimotor system organized as a stack of control systems in a scheme suggested by William Powers, Behavior: The Control of Perception (1973). Thus, when I talk about the sensorimotor hyperorders in the dissertation excerpt, I’m talking about a system modeled on Powers. By contrast, the systemic, episodic, and gnomonic hyperorders are symbolic systems, all directly linked to the sensorimotor system.
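The Powers-style arrangement can be sketched in a few lines, assuming each level is a simple proportional controller. The class name, gains, and cascade function are illustrative inventions of mine, not anything from the dissertation or from Powers directly; the point is only the characteristic wiring, in which each level's output becomes the reference signal for the level below.

```python
class ControlLevel:
    """One level in a Powers-style stack of control systems."""

    def __init__(self, gain):
        self.gain = gain

    def step(self, reference, perception):
        # Output is proportional to the error between what this level
        # wants to perceive and what it currently perceives.
        return self.gain * (reference - perception)

def run_stack(levels, top_reference, perceptions):
    """Cascade: each level's output is the next level's reference.

    `levels` runs from highest to lowest; `perceptions` gives each
    level's current perceptual signal. The last output would drive
    the effectors.
    """
    ref = top_reference
    outputs = []
    for level, perceived in zip(levels, perceptions):
        out = level.step(ref, perceived)
        outputs.append(out)
        ref = out               # reference for the level below
    return outputs
```

In Powers' scheme no level commands behavior directly; each controls its own perception by adjusting what the level beneath it tries to perceive, which is what the sensorimotor hyperorders in the dissertation excerpt are modeled on.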

One other thing I want to emphasize: by this time I had come to understand that the physical structure of the nervous system, both its layout in the brain and its relationship with the external world, carried information that did not have to be explicitly represented inside the system itself. I’ve used yellow highlighting to emphasize those sections. The account I offer of mirror recognition depends on this.

About this document

This document contains four things:

1. A passage from my 1978 dissertation in which I discuss mirror recognition,
2. ChatGPT’s (current) assessment of that passage,
3. A discussion of the Default Mode Network (DMN) in the brain, and
4. Some speculation from ChatGPT on how to construct, in effect, a DMN for an artificial associative memory, something it calls “an associative drift engine.”

2 comments:

  1. Congrats Bill.
    I must say the "1978 dissertation in which I discuss mirror recognition" needed more recognition!
    What was the reception back then? Any cites or .... recognition?
    ###

    "Could the episodic hyperorder be linked to predictive coding?
    "Your model’s structure invites comparison to modern computational neuroscience, especially frameworks that posit hierarchies of prediction and feedback (Friston, Clark).
    ...
    "modern neurocomputational language (e.g., Bayesian inference, predictive coding),
    • insights from developmental robotics,

    "Let the DMN drift."
    Let the DMN attend to attention? Friston, on my reading, notes attention, suppression of self during mirroring, and causation running both ways as the mechanism. Poorly worded!

    Ymmv, yet all are mentioned in...
    "Action understanding and active inference"
    Karl Friston · Jérémie Mattout · James Kilner
    Received: 5 August 2010 / Accepted: 31 January 2011 / Published online: 17 February 2011

    Abstract
    "We have suggested that the mirror-neuron system might be usefully understood as implementing Bayes-optimal perception of actions emitted by oneself or others. To substantiate this claim, we present neuronal simulations that show the same representations can prescribe motor behavior and encode motor intentions during action–observation.

    These simulations are based on the free-energy formulation of active inference, which is formally related to predictive coding. In this scheme, (generalised) states of the world are represented as trajectories. When these states include motor trajectories they implicitly entail intentions (future motor states). Optimizing the representation of these intentions enables predictive coding in a prospective sense. Crucially, the same generative models used to make predictions can be deployed to predict the actions of self or others by simply changing the bias or precision (i.e. attention) afforded to proprioceptive signals. We illustrate these points using simulations of handwriting to illustrate neuronally plausible generation and recognition of itinerant (wandering) motor trajectories. We then use the same simulations to produce synthetic electrophysiological responses to violations of intentional expectations. Our results affirm that a Bayes-optimal approach provides a principled framework, which accommodates current thinking about the mirror-neuron system. Furthermore, it endorses the general formulation of action as active inference."

    Keywords Action–observation · Mirror-neuron system · Inference · Precision · Free-energy · Perception · Generative models · Predictive coding

    Biol Cybern (2011) 104:137–160
    DOI 10.1007/s00422-011-0424-z

    Any comment on Friston et al.'s paper?

    Best, SD.

  2. OT
    Bill, you now have, imo, a baseline with Claude for judging other models' output. Reading "How I code with AI on a budget/free" below ... "helps jump back and forth from all the different AI chat tabs" ... got me thinking of you pasting prompts from one model to another. The utility "aicodeprep-gui" may help with handling multiple models concurrently, if you were to choose to do it that way. Serial seems old hat now.

    I'd appreciate to see the Benzon Benchmark re other ai models.

    "How I code with AI on a budget/free" (wuu73.org)
    ...
    "Every month so many new models come out. My new fav is GLM-4.5... Kimi K2 is also good, and Qwen3-Coder 480b, or 2507 instruct.. very good as well. All of those work really well in any agentic environment/in agent tools.

    "I made a context helper app ( https://wuu73.org/aicp ) which is linked to from there which helps jump back and forth from all the different AI chat tabs i have open (which is almost always totally free, and I get the best output from those) to my IDE. The app tries to remove all friction, and annoyances, when you are working with the native web chat interfaces for all the AIs. 
    ...
    https://news.ycombinator.com/item?id=44850913


    "My Browser Setup: The Free AI Buffet
    First things first, I have a browser open loaded with tabs pointing to the free tiers of powerful AI models. Why stick to one when you can get multiple perspectives for free? My typical lineup includes
    ...
    https://wuu73.org/blog/aiguide1.html

    aicodeprep-gui
    The fast context maker
    Take control of your AI's context to get radically better bug-fixing and planning assistance. When you don't tell the AIs about MCP servers and tools, you'll be surprised at what they are capable of.
    https://wuu73.org/aicp/
