New working paper. Title above, links, abstract, contents, and introduction below:
Academia.edu: https://www.academia.edu/165530520/Natural_intelligence_Revisited_The_Five_Fold_Way_A_Working_Paper
SSRN: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=6529398
ResearchGate: https://www.researchgate.net/publication/403545810_Natural_intelligence_Revisited_The_Five-Fold_Way_A_Working_Paper
Abstract: In 1988 David Hays and I published an article entitled, “Principles and Development of Natural Intelligence.” The principles were computational: 1) modal, 2) diagonalization, 3) decision, 4) finitization, and 5) indexing. We made our argument in terms of the principles themselves along with behavioral, neuroanatomical, ontogenetic and phylogenetic evidence. The literature in all those fields has changed enormously in the four decades since we finished writing. To get a read on how our computational proposals have fared, I asked ChatGPT 5.2 to evaluate it against the current literature. Its verdict: the “empirical specifics have aged unevenly but [the] central agenda has held up surprisingly well.” This article presents the five principles, in brief, followed by ChatGPT’s full evaluation. Also, I have asked ChatGPT to evaluate a section on control structure, “Vehicularization,” that we cut from the original argument. Verdict: “vehicularization points toward a more complete account of natural intelligence—one in which cognition is understood as coordinated navigation across multiple, nested domains.”
CONTENTS
Introduction: Constraining Theories and Models
The Five Principles of Natural Intelligence
Revisiting The Principles and Development of Natural Intelligence (1988)
Vehicularization
Introduction: Constraining Theories and Models
Sometime in 1985 David Hays and I decided it was time to set forth our views on the nature of, well, of natural intelligence. First, however, we had to discover what those views were. We sat down at a table in my parents’ kitchen and made a list of the various things we wanted to include in this article: experimental findings, observations, models, mathematical ideas, and so forth, from psychology, neuroscience, linguistics, evolutionary biology, and computing. We just wrote them down in no particular order, probably on unlined paper. When we’d accumulated about 50 items we decided to gather them into a small number of groups of items that seemed to belong together. We arrived at five groups.
Just how we proceeded from that point I don’t recall. Perhaps we sat around discussing the various groups and came up with a principle for each group. Maybe we had to do some writing first. I don’t recall. But however we actually proceeded, we ended up with an article we called “Principles and Development of Natural Intelligence.” We intended “natural” to contrast with “artificial” but didn’t say that anywhere in the article. When we’d finished a draft, days or weeks later, Hays said that it felt like fundamental work; he used the term “bedrock.” I agreed. It took three years to get it published, in a now-defunct interdisciplinary journal, The Journal of Social and Biological Structures.
I’ve included the abstract of that article, along with a bit of the introduction, below, as the first part of this document: “Five Principles of Natural Intelligence.” That should give you an idea of the framework without all the expository elaboration, argumentation, and support.
ChatGPT reviews
That was four decades ago. I have continued to like what we did. But has any of it held up? How could it? By now the literature we referenced is 40 years out of date. And yet the article wasn’t about that literature; it was about how we put it together. Is there anything left of that framework?
About a week ago I asked ChatGPT 5.2 to evaluate it. Here’s the prompt I gave it:
I want you to evaluate a paper that David Hays and I published back in 1988: The Principles and Development of Natural Intelligence. Give me a third-party assessment from the standpoint of what we now know, not a summary and not a defensive reconstruction. Be explicit about where it now looks prescient, where it looks historically bounded, and where it still poses unresolved challenges.
Here’s the first line of ChatGPT’s conclusion:
As of 2026, I would not describe the paper as a correct theory of mind. I would describe it as an ambitious synthetic manifesto whose empirical specifics have aged unevenly but whose central agenda has held up surprisingly well.
I’ll take it. Could I take issue with some of ChatGPT’s criticisms? Sure. But that assessment pinpoints the single most important facet of the essay, its synthetic nature. To push back on ChatGPT’s reservations would blunt that point.
After making various comments on an ad hoc basis, ChatGPT offered to write a “more formal review-essay.” I’ve included that as the second part of this document: “Revisiting Principles and Development of Natural Intelligence (1988).” There’s more.
The article we had submitted was long. The editors asked us to cut what we could, but made no particular suggestions. Our single largest cut was a section on vehicularization – that’s what we called it. It was about control. While it was part of the same general line of thinking, it didn’t seem to fit. The bulk of the article was about the five principles and how they developed, both phylogenetically and ontogenetically (in humans). Vehicularization was about how they operated in concert. I have included that as a third section, followed by comments by ChatGPT as the fourth and final section.
The Five-Fold Way
Let’s return to ChatGPT’s characterization of the original article as a “synthetic manifesto.” From its conclusion:
It is best understood as an architectural proposal about the structure of intelligence. Many of its mechanistic claims have aged poorly, particularly its neuroanatomical simplifications and evolutionary staging. Yet several of its central insights—heterogeneous cognitive regimes, the integration of regulation and cognition, the interaction between holistic and symbolic processing, and the role of language in cognitive control—remain highly relevant.
Though we didn’t use such phrases when we wrote the article, that’s certainly what Hays and I thought we were doing.
The diagram to the left indicates the range of material we brought to bear in our thinking about natural intelligence. The labels on the vertices of the pentangle are from my 1978 Ph.D. thesis in the English Department at SUNY Buffalo, “Cognitive Science and Literary Theory.” There I somewhat idiosyncratically defined cognitive science as investigating a five-way correspondence between behavior, computation, computational geometry (neuroanatomy), phylogeny, and ontogeny. That dissertation was mostly about behavior, in the form of literary texts, and computation, in the form of cognitive network semantics, though it touched on the others here and there. But “Principles and Development of Natural Intelligence” covered all five. The principles themselves were computational in nature and we made our primary arguments in terms of their ability to account for behavior, but we also suggested which brain regions supported them and related them to the phylogeny of animal behavior and the ontogeny of human development.
By the usual standards of the academy, that range was wide, crazy wide. We certainly weren’t expert across that range; no one could be. However, when I look back, it is clear that we weren’t attempting some grand synthesis over that range. We were doing something quite different, something that was and remains fundamentally conservative. We had some high-level ideas about the computational structure of the mind and we wanted to place constraints on those ideas by expanding the range of evidence that could be brought to bear on them. While it is necessary that those ideas account for observed behavior, that alone is not sufficient. The model implied by those ideas must be implemented somewhere in the brain and must be consistent with developmental evidence from both phylogeny (our evolutionary history) and ontogeny (child development). THAT was the central agenda that, in ChatGPT’s estimation, has held up well.