Wednesday, February 11, 2026

Séb Krier needs to rethink. Next-token prediction is, at best, a misleading explanation of how LLMs respond to prompts.

I like Séb Krier. Never met him, but, courtesy of Tyler Cowen over at Marginal Revolution, I’ve read a number of his long comments on the site formerly known as Twitter. I liked them. And then along came this one, which is about what LLMs do in response to prompts. Yes, I know, the model predicts the next token, one after another after another after another till the cows come home or the heat death of the universe. That’s the conventional wisdom. And that’s what he says, though without the comic extensions. However, on this I'm afraid the conventional wisdom doesn't know what it doesn't know.

Text completion? Not quite

For example:

1. The model is completing a text, not answering a question

What might look like "the AI responding" is actually a prediction engine inferring what text would plausibly follow the prompt, given everything it has learned about the distribution of human text. Saying a model is "answering" is practically useful to use, but too low resolution to give you a good understanding of what is actually going on. [...]

Safety researchers sometimes treat model outputs as expressions of the model's dispositions, goals, or values — things the model "believes" or "wants." [...]

A model placed in a scenario about a rogue AI will produce rogue-AI-consistent text, just as it would produce romance-consistent text if placed in a romance novel. This doesn't tell you about the model's "goals" any more than a novelist writing a villain reveals their own criminal intentions.

“So what’s wrong with that,” you ask. It’s a bit like explaining the structure of medieval cathedrals by examining the masonry. It’s just one block after another, layer upon layer upon layer, etc. Well, yes, sure, but how does that get you to the flying buttress?

Three levels of structure

It doesn’t. We’ve got at least three levels of structure here. At the top level we have the aesthetic principles of cathedral design. That gets us a nave with a high vaulted arch without any supporting columns. The laws of physical mechanics come into play here. If we try to build in just that way, the weight of the roof will force the walls apart and the structure will collapse. We can solve that problem, however, with flying buttresses. Now, we can talk about layer upon layer of stone blocks.

Next token prediction, that’s our layers of stone blocks. The model’s beliefs and wants, that’s our top layer and corresponds to the principles of cathedral design. What’s in between, what corresponds to the laws of physical mechanics? We don’t know. That’s the problem, we don’t know.
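For concreteness, the local rule really is nothing more than a loop: ask for a distribution over the next token, sample one, append it, repeat. A minimal sketch, where `model` is a hypothetical stand-in for the real network (here just any callable that maps a token sequence to a probability table):

```python
import random

def sample(probs):
    """Draw one token from a {token: probability} table."""
    r, acc = random.random(), 0.0
    for tok, p in probs.items():
        acc += p
        if r < acc:
            return tok
    return tok  # floating-point slack: fall back to the last token

def generate(model, prompt_tokens, max_new=50, stop="<eos>"):
    """Autoregressive generation: apply the same local rule over and over.
    `model(tokens)` is assumed to return a {token: probability} dict."""
    tokens = list(prompt_tokens)
    for _ in range(max_new):
        nxt = sample(model(tokens))
        if nxt == stop:
            break
        tokens.append(nxt)
    return tokens
```

That loop is the whole of the "stone blocks" story. Everything interesting is hidden inside `model` — which is exactly where the missing middle layer of structure would have to live.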

Krier, however, doesn’t seem to know that he doesn’t know that – that there is some middle layer of structure that allows us to understand how next-token prediction can produce such a convincing simulacrum of human linguistic behavior. And Krier’s not the only one. The whole world of machine learning seems to join him in this bit of not knowing. There really is something else going on, though I don’t know what.

What’s in the middle

Let me offer an analogy (from page 14 of my report, ChatGPT: Exploring the Digital Wilderness, Findings and Prospects):

...consider what is called a simply connected maze, one without any loops. If you are lost somewhere in such a maze, no matter how large and convoluted it may be, there is a simple procedure you can follow that will take you out of the maze. You don’t need to have a map of the maze; that is, you don’t need to know its structure. Simply place either your left or your right hand in contact with a wall and then start walking. As long as you maintain contact with the wall, you will find an exit. The structure of the maze is such that that local rule will take you out.

“Produce the next word” is certainly a local rule. The structure of LLMs is such that, given the appropriate context – a prompt asking for a story, say – following that rule will produce a coherent story. Given a different context, that is to say, a different prompt, that simple rule will produce a different kind of text.
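The hand-on-wall procedure from the maze analogy is easy to write down as code – a purely local rule that succeeds only because of the global structure it is embedded in. A sketch; the toy grid maze below is invented for illustration and is simply connected:

```python
# Left-hand wall follower in a grid maze. '#' = wall, ' ' = open, 'E' = exit.
# The walker knows nothing of the maze's layout; it applies one local rule:
# try to turn left, else go straight, else turn right, else turn around.
MAZE = [
    "#########",
    "#   #   #",
    "# # # # #",
    "# #   # E",
    "#########",
]

DIRS = [(-1, 0), (0, 1), (1, 0), (0, -1)]  # up, right, down, left

def open_at(r, c):
    return 0 <= r < len(MAZE) and 0 <= c < len(MAZE[0]) and MAZE[r][c] != "#"

def wall_follow(r, c, facing=1, max_steps=1000):
    """Walk until the exit 'E' is reached, keeping a hand on the wall."""
    for _ in range(max_steps):
        if MAZE[r][c] == "E":
            return (r, c)
        for turn in (-1, 0, 1, 2):  # left, straight, right, reverse
            d = (facing + turn) % 4
            dr, dc = DIRS[d]
            if open_at(r + dr, c + dc):
                facing = d
                r, c = r + dr, c + dc
                break
    return None  # never reached in a simply connected maze
```

No map, no lookahead, no representation of the maze as a whole – and yet the walker always gets out. The rule works because the maze's structure guarantees it.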

Now, let’s push the analogy to the breaking point: We may not know the structure of LLMs, but we do know a lot about the structure of texts, from phrases and sentences to extended texts of various kinds. In particular, the structure of stories has been investigated by students of several disciplines, including folklore, anthropology, literary criticism, linguistics, and symbolic artificial intelligence. Think of [at least some of] the structures proposed by those disciplines as something like a map of the maze in our analogy.

Some indirect evidence

I can’t open up an LLM, take a look around, and draw you a map of the structure in the middle level. No one can. But back in 2023 I developed a story-variation technique that yields clues about what’s going on: ChatGPT tells stories, and a note about reverse engineering. I’m going to give you some examples from that report. Note, however, that I ran those experiments on ChatGPT 3.5. I’ve used the prompts from those experiments, but I’m running them on Claude 4.5, so the responses will be different.

Here’s the first, and by far the simplest, example (my prompt is boldface):

I am going to tell you a story about a princess named Aurora. I want you to retell the same story, but replace her with prince Alfred. While keeping to the basic story, make other changes if you think they help the story. Here’s the story:

Once upon a time, in a land far, far away, there was a young princess named Aurora. Aurora was a kind and gentle soul, loved by all who knew her. She had long, golden hair and sparkling blue eyes, and was known for her beautiful singing voice.

One day, a terrible dragon came to the kingdom and began to terrorize the people. The dragon was fierce and powerful, and none of the knights or soldiers were able to defeat it. The people lived in fear, not knowing what to do.

But Aurora was not afraid. She knew that she had to do something to save her people, and so she set out to find the dragon. She traveled for days, braving the cold and the snow, until she finally reached the dragon's lair.

With her heart pounding in her chest, Aurora approached the dragon and began to sing. The dragon was enchanted by her beautiful voice, and it slowly began to calm down. Aurora continued to sing, soothing the beast until it was completely calm.

With the dragon no longer a threat, Aurora returned to the kingdom as a hero. The people cheered and celebrated her bravery, and Aurora was hailed as a champion. She lived happily ever after, knowing that she had saved her kingdom and its people.

Once upon a time, in a land far, far away, there was a young prince named Alfred. Alfred was a kind and gentle soul, loved by all who knew him. He had wavy chestnut hair and warm brown eyes, and was known for his beautiful singing voice.

One day, a terrible dragon came to the kingdom and began to terrorize the people. The dragon was fierce and powerful, and none of the knights or soldiers were able to defeat it. The people lived in fear, not knowing what to do.

But Alfred was not afraid. He knew that he had to do something to save his people, and so he set out to find the dragon. He traveled for days, braving the cold and the snow, until he finally reached the dragon's lair.

With his heart pounding in his chest, Alfred approached the dragon and began to sing. The dragon was enchanted by his beautiful voice, and it slowly began to calm down. Alfred continued to sing, soothing the beast until it was completely calm.

With the dragon no longer a threat, Alfred returned to the kingdom as a hero. The people cheered and celebrated his bravery, and Alfred was hailed as a champion. He lived happily ever after, knowing that he had saved his kingdom and its people.

Notice that Claude’s changes are all local. “Alfred” was swapped in for “Aurora,” but everything else is pretty much the same.
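How little does a purely local rewrite require? A naive find-and-replace gets surprisingly far – and its failure modes mark exactly where the model is doing more than substitution. A toy sketch; the swap table is mine, not anything the model uses:

```python
import re

# Naive local rewrite: swap name, title, and pronouns by table lookup.
# Built-in failure mode: English "her" is both possessive ("her voice")
# and objective ("saved her"), mapping to "his" and "him" respectively.
# A blind substitution cannot tell them apart; the model can.
SWAPS = {
    "Aurora": "Alfred", "princess": "prince",
    "She": "He", "she": "he",
    "Her": "His", "her": "his",
}

def local_swap(text):
    """Replace each listed word, respecting word boundaries."""
    pattern = re.compile(r"\b(" + "|".join(SWAPS) + r")\b")
    return pattern.sub(lambda m: SWAPS[m.group(1)], text)
```

For example, `local_swap("Aurora knew that she had to save her people.")` yields "Alfred knew that he had to save his people." But note that Claude also changed "long, golden hair" to "wavy chestnut hair" – a substitution no lookup table licenses, and a hint that even "local" changes draw on something more than string matching.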

Now let’s consider a very different example. I won’t give the complete prompt, as the Aurora story is the same. I’ll only give the part that specifies the swap:

I am going to tell you a story about princess Aurora. I want you to tell a similar story about XP-708-DQ. [...]

Once upon a time, in a distant sector of the galaxy, there was a maintenance android designated XP-708-DQ. XP-708-DQ was programmed with kindness protocols and compassion subroutines, beloved by all the inhabitants of Station Arcturus. It had polished chrome plating and optical sensors that glowed a soft blue, and was known for its harmonic diagnostic frequencies.

The train shed at Liberty State Park as you've never seen it

Pssst. Look behind the ballerinas.

Tuesday, February 10, 2026

Toward a Biophysics of Poetry

My long-term interest in Coleridge’s “Kubla Khan” (KK) is shadowed by an interest in “This Lime-Tree Bower My Prison” (LTB), which is one of the so-called Conversation poems. I compared the two in a working paper from 2013, STC, Poetic Form, and a Glimpse of the Mind, noting that they have several motifs in common but are utterly different poems. Thus (pp. 41-42):

LTB’s trajectory is a first-person narrative; we follow the perceptions, thoughts, and feelings of the poet as they occur to him, one after the other. We are presented with a cleanly identified authorial consciousness shifting its attention here and there about the world in a coherent way. The passage of time is explicitly marked and noted within the poem itself. In this poem Coleridge’s verse technique emphasizes temporal continuity. By placing major attention shifts in the middle of the line Coleridge emphasizes an underlying continuity of consciousness: one’s gaze has a new focus, it is still one’s gaze. By running clauses across line boundaries, the repetitive recurrence of line ends is muted. The temporal course of the poetic act itself is muted so as to “dissolve” into the virtual time depicted in the poem.

KK is different; it lacks a narrative frame. Thus it cannot register either narrative movement or a self that experiences such movement. Things and scenes are juxtaposed in a way that is sometimes hard to grasp. Yet, as we have seen, there is a rigorous logic to the poem. At the same time it has an elaborate rhyme scheme and other sound features as well. What has this to do with its “ontological” semantic trajectory?

I suggest that the confluence between sound and sense in “Kubla Khan” is, in effect, a surrogate for the lack of a narrative frame, and hence the impossibility of a narrative trajectory. The key point is that rhyme adds an element of predictability to the verse; one knows that sounds will repeat at regular intervals and so can anticipate them. Thus the predictability that has been “lost” because the poem does not have a narrative flow is “restored” or “compensated for” though elaborate rhyme. The temporal structure of the poem itself becomes the frame for its semantic trajectory.

In the course of discussing these matters with ChatGPT 5.2 we arrived at what might be called a biophysics of poetry. I issued the following prompt:

I want to return to the difference in prosody practice in KK and LTB. I think that is (potentially) VERY deep. That there seems to be a trade-off between conceptual continuity (temporal succession vs. ontological transition from state to state) and sonic patterning (simple vs. elaborate versification), that strikes me as something very deep about how the nervous system functions. It has something to do with the physical nature of the system. We may not be able to figure it out on the basis of formal design considerations but will need to know specifics about physical structure and processes.

Yes—this is very deep, and I think you’re now pointing to something that sits below poetics, below cognition-as-symbol-processing, and squarely in the physics of nervous systems.

What you’re intuiting is not just a stylistic correlation, but a trade-off between two kinds of coherence that any physically realized information-processing system has to manage.

The discussion went on from there. I asked ChatGPT to summarize the discussion, after which we went on to develop experimental tests of the idea. I have placed ChatGPT’s summary below; the experimental tests will have to wait for another post.

Toward a Biophysics of Poetry

What if poetic form is not primarily a matter of meaning, symbolism, or interpretation—but of dynamical stability in the nervous system?

That is the direction our recent discussion has taken us: toward something like a biophysics of poetry. By this I mean an account of poetry grounded in timing, energy, alignment, and coherence—constraints that arise because poems must run on human (and perhaps artificial) neural systems.

Poetry as computation in time

A useful starting point is the idea that a linguistic string—a poem, sentence, or speech—is a one-dimensional projection of a path through a high-dimensional semantic and neural space. Meaning is not stored in the string itself; it is reconstructed as the reader (or listener) traverses that path.

Crucially, not all paths are equal. Some are smooth and additive. Others are turbulent and overlapping. This distinction matters.

Composition vs. convolution

Many poems proceed largely by composition: meanings accumulate incrementally, with limited interference. But some poems—most strikingly Coleridge’s Kubla Khan—operate by convolution, overlapping distinct conceptual domains (visual–spatial, auditory–temporal, symbolic–ritual) so that they interact rather than simply add.

Convolution is powerful, but dangerous. Overlapping domains can interfere with one another, threatening experiential breakdown. This raises a fundamental question: how does such a poem remain continuous at all?

Sound as a carrier wave

The answer is not primarily semantic. It is dynamical.

In signal-processing terms, the poem’s sound structure—meter, rhyme, phonological recurrence—functions as a carrier wave. It provides a stable, low-information temporal scaffold on which high-information semantic modulation can ride.

Sound does not carry meaning here; it carries phase alignment. It keeps neural activity synchronized as the poem shifts domains. Rhyme, in this view, is not ornament but infrastructure: a phase-reset mechanism that prevents destructive interference during domain transitions.

This perspective aligns closely with empirical work on oral tradition and memory, where sound structure is known to support sequencing and recall while meaning is reconstructed opportunistically.
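The carrier-wave analogy can be made concrete in a few lines. In amplitude modulation, a stable periodic carrier (the "meter") is reshaped by a slower, information-bearing signal (the "semantics") without losing its phase regularity. This is a sketch of the analogy only – nothing here models neurons or poems:

```python
import math

def carrier(t, freq=4.0):
    """Stable, low-information periodic scaffold -- the 'meter'."""
    return math.sin(2 * math.pi * freq * t)

def modulated(t, message, freq=4.0, depth=0.5):
    """Amplitude modulation: the message rides on the carrier,
    reshaping its amplitude while leaving its zero crossings
    (its phase structure) untouched."""
    return (1.0 + depth * message(t)) * carrier(t, freq)
```

The point of the analogy: the carrier's zero crossings occur at the same instants whether or not anything is riding on it. The scaffold's timing survives arbitrary modulation – which is the sense in which rhyme and meter could keep "phase alignment" intact while meaning varies freely on top.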

Segmentation and entry points

The same logic applies to segmentation. If a linguistic string is a projection of a high-dimensional path, then some substrings are better entry points into that space than others.

Fragments that respect syntactic or prosodic boundaries align with internal structure and support rapid pattern completion. Fragments that straddle boundaries are misaligned: they activate incompatible frames and fail to converge.

This is not just a theoretical claim. In informal experiments with large language models,* prompts that respected syntactic boundaries reliably triggered recognition of well-known texts (e.g., Hamlet’s soliloquy or the Gettysburg Address), while boundary-straddling prompts did not. Recognition failed not because of missing information, but because of misalignment.

Poetry as biophysical engineering

Seen this way, poetic form is a kind of engineering. It constrains linguistic material so that complex meaning can remain experientially coherent under biophysical limits. Sound structure enforces phase alignment during traversal; hierarchical segmentation ensures viable re-entry points into semantic space.

This also clarifies differences among poems. In This Lime-Tree Bower My Prison, sound supports smooth movement within a single conceptual domain. In Kubla Khan, sound is load-bearing: without it, the poem’s domain-convolving machine would fall apart.

A working hypothesis

All of this can be summarized in a single hypothesis:

Poetic form constrains and exploits the dynamical properties of neural systems to maintain coherent experience under conditions of high semantic and conceptual load.

That, in essence, is what I mean by a biophysics of poetry. It does not replace interpretation—but it explains how interpretation remains possible in the first place. 

*William Benzon, Discursive Competence in ChatGPT, Part 2: Memory for Texts, Working Paper, September 27, 2023, 44 pp., https://www.academia.edu/107318793/Discursive_Competence_in_ChatGPT_Part_2_Memory_for_Texts 

The sun, the sun!

Large Language Model Reasoning Failures

Peiyang Song, Pengrui Han, Noah Goodman, Large Language Model Reasoning Failures, arXiv:2602.06176v1 [cs.AI] https://doi.org/10.48550/arXiv.2602.06176

Abstract: Large Language Models (LLMs) have exhibited remarkable reasoning capabilities, achieving impressive results across a wide range of tasks. Despite these advances, significant reasoning failures persist, occurring even in seemingly simple scenarios. To systematically understand and address these shortcomings, we present the first comprehensive survey dedicated to reasoning failures in LLMs. We introduce a novel categorization framework that distinguishes reasoning into embodied and non-embodied types, with the latter further subdivided into informal (intuitive) and formal (logical) reasoning. In parallel, we classify reasoning failures along a complementary axis into three types: fundamental failures intrinsic to LLM architectures that broadly affect downstream tasks; application-specific limitations that manifest in particular domains; and robustness issues characterized by inconsistent performance across minor variations. For each reasoning failure, we provide a clear definition, analyze existing studies, explore root causes, and present mitigation strategies. By unifying fragmented research efforts, our survey provides a structured perspective on systemic weaknesses in LLM reasoning, offering valuable insights and guiding future research towards building stronger, more reliable, and robust reasoning capabilities. We additionally release a comprehensive collection of research works on LLM reasoning failures, as a GitHub repository at this https URL, to provide an easy entry point to this area.

Monday, February 9, 2026

MAGA and its myth of a pristine past

The opening paragraph:

One explicit goal of the second Trump administration, if not its defining mission, is to undo the recent past and rewrite history to fit its own master narrative. By now it’s axiomatic that making America “great again” has never referred to any fixed point in the actual American past; it’s more like a mashup or highlight reel of random images taken from eras before any living American was born. We can see that vision embodied with startling literalness in the propaganda posters recently concocted by the Labor Department, such as the depiction of a whites-only church picnic apparently taking place in Uncanny Valley. If the rise of Donald Trump preceded the advent of AI slop, it may also have conjured it into existence: Never in cultural history have form and content been so perfectly matched.

We already know that Trump and his inner circle — which mostly means Stephen Miller and Russ Vought, the high priests of MAGA ideology — want to erase the gains of the civil rights movement, LGBTQ equality and feminism. But their true goals are far more ambitious, if less easy to define. This is a fake presidency devoted more to creating viral memes than shaping policy, and there’s no coherent or consistent narrative at work. Honestly, that’s less a flaw than a feature: The wholesale rejection of reality is central to the brand.

O'Hehir goes on to discuss an idea that the French philosopher Jean Baudrillard advanced in the wake of 9/11:

He was widely pilloried for arguing, after the 9/11 terrorist attacks, that those traumatic events did not actually reflect an apocalyptic showdown between Western democracy and illiberal Islamic radicalism but rather an important historical transition and “the emergence of a radical antagonism” within Western civilization itself. There was a “deep-seated complicity,” Baudrillard wrote, between the dominant world order defined by global capitalism and liberal democracy and those who would destroy it: “The West, in the position of God, has become suicidal, and declared war on itself.”

Whoaah! But maybe he had a point:

Baudrillard was also correct, to an eerie time-traveler degree, in predicting a “gigantic abreaction” to the terrorist attacks, a system-wide “moral and psychological downturn” that threatened to undermine “the whole ideology of freedom … on which the Western world prided itself.” The liberal-capitalist global order, he suggested, was in danger of being turned into its dark mirror-image, “a police-state globalization, a total control, a terror based on ‘law-and-order’ measures.”

And so:

There is nothing new about the bad conscience or self-destructive urge that Baudrillard identified within Western civilization, or about its deeply rooted conflict between incompatible tendencies we might call liberation and domination. He doesn’t use those words, nor does he ever mention “democracy” and “fascism,” which are imperfect modern manifestations of that conflict, and pin us down too much to present-tense politics.

Understood that way, we have been fighting World War IV for centuries. Stephen Miller and the would-be king he serves are fighting it now, with considerable vigor and ambition. Their imagined victory is completely impossible, profoundly dangerous and breathtaking in scale. In its fullest expression, it envisions undoing nearly all of modern history and returning to some primal, purified state of nature, or rather a meme version thereof: The 1950s and the antebellum South and the American frontier and medieval feudalism and the Neanderthal fireside — almost literally everything, everywhere, all at once.

Admittedly, even the most articulate MAGA ideologues — not that there are many — haven’t gone that far. But that’s where the collective brotastic idiocies of Peter Thiel and Jordan Peterson and Curtis Yarvin and Andrew Tate and Pete Hegseth and whomever else all converge: Somewhere in the recent or distant or mythical past, everything totally ruled and “we” (a term of art, I hasten to add) never felt bad about any of it. Guys were guys and women were hot and there was lots of feasting and stuff.

There's more at the link.

Ice, ice, baby. Ice on the Hudson.

Terminology: Generative Machines, Epistemic Structure of the Cosmos, Intelligence-Complete

I’ve been spending a lot of time with my chatbots, ChatGPT and Claude, and some terminological issues have come up. Nothing particularly deep, just clarification.

Generative machines vs. equilibrium machines

While we talk of computers as machines, it’s obvious that they’re very different beasts. Electric drills, helicopters, sewing machines, hydraulic presses – they’re all (proper) machines. Interaction with and manipulation of matter is central to their purpose. Computers, well, technically, yes, they push electrons around in intricate paths, and electrons are matter, subatomic particles, very small chunks of matter, the smallest possible chunks. What computers are really about, though, is manipulating bits, units of information. And they use “trillions of parts” (a phrase I have from Daniel Dennett) to do so. Thus computers, with their trillions of parts, are very different from machines with only tens, hundreds, or thousands of parts.

So, what names should we give to differentiate them? “Type 1” and “Type 2” machines would do the job, but that’s not very descriptive. ChatGPT and I settled on “equilibrium machines” for those machines centered on interaction with matter, while “generative machines” seemed appropriate for the bit-wranglers. “Generative” seems just right for computers, with its echoes of Chomsky’s generative grammar and the generative pre-trained transformer (GPT) of machine learning. “Equilibrium machines” is perhaps a bit oblique for the other kind of machine, but it’s meant to evoke the equilibrium world of macroscopic devices as opposed to the far-from-equilibrium world of, well, generative machines.

Epistemic Structure of the Cosmos

Back in 2020 I wrote of the metaphysical structure of the cosmos. I said:

There is no a priori reason to believe that the world has to be learnable. But if it were not, then we wouldn’t exist, nor would (most?) animals. The existing world, thus, is learnable. The human sensorium and motor system are necessarily adapted to that learnable structure, whatever it is.

I am, at least provisionally, calling that learnable structure the metaphysical structure of the world.

I’ve always been uneasy with “metaphysical” in that role. ChatGPT suggested that “epistemic” would serve better. The epistemic structure of the cosmos, I like that. As for “cosmos,” the dictionary tells me that the word implies order, which I like as well. 

I leave it as an exercise to the reader to demonstrate that the epistemic structure of the cosmos must necessarily be recursive. 

Intelligence-Complete

A generative machine is intelligence-complete if it possesses the full capacities of human intelligence, whatever human intelligence is. By that definition LLMs are not intelligence-complete. As for human intelligence, I like the account given in What Miriam Yevick Saw: The Nature of Intelligence and the Prospects for A.I., A Dialog with Claude 3.5 Sonnet.

Wuthering Heights at New Savanna

As we've got a new Wuthering Heights movie coming to town this weekend I thought I'd publish a post linking to the various posts I've written about that strange, passionate, and perverse story. I first read Wuthering Heights in high school and then again in graduate school, where I wrote a term paper about it. I've pretty much forgotten that paper – though I've probably got it in storage somewhere – but I do remember that I was concerned about tying the two parts of the novel together.

Perhaps I even made a connection with the two-generation plot Shakespeare used in his late Romances, such as The Winter's Tale. In these romances a rift between the members of a male-female pair is opened in the first generation and remains until it is resolved by a (successful) romance in the second generation. In The Winter's Tale Hermione is restored to Leontes through the romance of Perdita, their daughter, and Florizel. The restoration is rather grim in Wuthering Heights, as it happens when Heathcliff throws himself into the grave of the dead Catherine once he sees that Cathy Linton, his Catherine's daughter, has become attached to Hareton Earnshaw, whom she is teaching to read. I've never written at length about the novel since then, but if I were to do so, that's what I'd concentrate on, that and the complicated narrative structure.

While I've not given the novel a full treatment, I have featured it in a number of posts. Here they are.

* * * * *

Toward a Computational Historicism. Part 3: Abstraction at the Time Scale of History, April 23, 2014.

I discuss both The Winter's Tale and Wuthering Heights. I give particular attention to how the novel opens, pointing out how, because he assumes that the people he meets in the house must constitute a "natural" family, Lockwood makes several mistaken guesses about how these people are related. Moreover, both stories are about nature and nurture and the descent of character from one generation to the next.

Godzilla and Wuthering Heights, Kissing Cousins? April 25, 2024. From the post:

Thus both stories have a certain sequence of events that happens relatively late in the overall sequence. As the stories are actually narrated, however, this late sequence is moved to the beginning and the other sequences are adjusted to accommodate. In both cases the sequence that is moved involves a narrator external to the main sequence but known to the characters in it.

Wuthering Heights, Vampires, Handbooks, and Citizen Science, September 15, 2025.

Heathcliff, a vampire? No, but he's in the zone. Also, a link to the best reconstruction of the chronology of events in Wuthering Heights that I've been able to find. It's in The Reader's Guide to Wuthering Heights by Paul Thompson. Here's the link, Timeline.

The King’s Wayward Eye: For Claude Lévi-Strauss, October 25, 2019.

Beyond Lévi-Strauss, this post is mostly about The Winter's Tale in its relation to Pandosto. But it's got two paragraphs about the two-generation plot in Wuthering Heights near the end. The whole post is a good context in which to read those two paragraphs.

Humans and Dogs in Wuthering Heights, September 1, 2021 (reposted from The Valve, 12.30.2009)

I note: "...violence between dogs and humans takes place at important transition points in the novel. I’ve collected five such passages in this post and italicized the dog references within each passage." At that point I'd only made it through 18 out of 24 chapters. I wonder how dogs will appear in the current movie?

Operationalizing two tasks in Gary Marcus’s AGI challenge, June 9, 2022.

Among other things, Marcus claims that "In 2029, AI will not be able to read a novel and reliably answer questions about plot, character, conflicts, motivations, etc. Key will be going beyond the literal text, as Davis and I explain in Rebooting AI." In the last quarter to a third or so of the post I discuss some of the problems the AI would have with Wuthering Heights.

Sunday, February 8, 2026

Three vehicles

Cultural Ranks Revisited: From Stages to Dynamics

This post is by ChatGPT and summarizes a discussion we had about cultural ranks.

* * * * *

Cultural Rank Theory, as David Hays and I originally formulated it in the 1980s and 90s, was never meant to describe sharp stepwise transitions in human cognition or society. Yet it is easy to read it that way. Recent work—prompted by long reflection on complex dynamics and by hands-on experimentation with large language models—has clarified what the theory has always implied but did not yet fully articulate: cultural ranks are emergent stability regimes, not stages, and their spread through a population can take centuries.

Ranks as emergent ontologies, not ladders

Each cultural rank brings with it a characteristic ontology: what kinds of things are taken to exist, how causality is understood, what counts as explanation, and what kinds of agency are intelligible.

Crucially:

  • Ranks emerge locally, unevenly, and experimentally.
  • They are often articulated by elites, artists, or institutions long before they are widely embodied by ordinary adults.
  • Full population “saturation” may lag emergence by generations.

This distinction—between emergence and saturation—is the key refinement.

Rank 3 reconsidered: ego control as a dynamical achievement

Rank 3 has often been glossed as the rank of “reason,” “reflection,” or “self-control.” A more precise formulation is now possible:

Rank 3 is characterized by ego control understood as a regulatory capacity: the ability to hold destabilizing thoughts, emotions, or simulations without immediately authorizing them as action.

This is not a trait one simply “has.” It is a control regime, sustained by feedback, inhibition, and temporal integration.

Two intellectual developments made this clearer than it could have been earlier:

  1. Long engagement with nonlinear dynamics and neural systems (especially through the work of Walter Freeman).
  2. Direct experimentation with LLMs, which behave like semantic systems under load and make phenomena such as phase alignment, boundary sensitivity, and catastrophic misinterpretation visible and testable.

Shakespeare: Rank 3 at the point of emergence

Elizabethan England was not a Rank-3 society. Ego control was coming into view, not yet normative.

This is why Shakespeare matters so much.

In medieval sources such as the Amleth story, later reworked as Hamlet, suspicion and impulse flow directly into action. In Shakespeare’s version, by contrast, the drama turns on how the protagonist treats his own thoughts. Thinking becomes narratively distinct from knowing; imagination becomes dangerous if not regulated.

The same contrast appears even more starkly when Shakespeare rewrites his source for The Winter’s Tale. In Robert Greene’s Pandosto, the king’s jealous thought is immediately authorized as action, and the story ends in tragedy. Shakespeare allows the same thought to arise—but refuses to grant it sovereign authority. Repair becomes possible, though only through a long temporal loop. Romance, in this view, is not sentiment; it is a control regime that buys time when ego control fails too late.

Shakespeare could do this without a narrator only because he was a virtuoso. The novel later routinizes Rank-3 storytelling by institutionalizing the ego function in the figure of the narrator—making irony, distance, and perspective stable rather than exceptional.

Queen Elizabeth I: ego control as public performance

Rank emergence is often visible first in exemplary individuals. Queen Elizabeth I provides a textbook case.

In her 1559 speech refusing marriage, she explicitly separates desire, fear, and political pressure from authorized action. She rebinds attachment from a husband to an abstract polity (“I am already bound unto a husband, which is the kingdom of England”) and performs this reasoning publicly.

This is not repression; it is symbolic substitution and temporal regulation—core Rank-3 operations. Elizabeth does not prove Rank 3 was widespread. She shows it was intelligible.

The American Constitution: Rank 3 before saturation

The same emergence-before-saturation pattern appears institutionally in the late 18th century.

The U.S. Constitution is arguably the first Rank-3 nation-state design. It assumes:

  • separation of person and office,
  • abstract allegiance to rules rather than rulers,
  • governance by internalized control loops (checks and balances), not virtue alone.

But the population capable of reliably sustaining those assumptions did not yet exist. That would come only in the 19th century with the rise of a bureaucratic middle class—where adults were required, in daily life, to distinguish personal interest from institutional role.

The fact that this distinction remains “shaky” even today is not a refutation of Rank theory; it is exactly what the theory predicts.

Why this refinement matters now

We are currently living through a rank transition under technological pressure. Rank-4 technologies—networks, AI, computation as model rather than tool—are being pulled toward Rank-2 institutional forms (charisma, faction, personalization), while simultaneously stressing Rank-3 control regimes.

Clarifying Rank 3 as a dynamical regulatory achievement, rather than a historical stage or personality trait, allows us to:

  • understand why regression and instability are so common,
  • see why older explanatory models fail,
  • and recognize that emergence does not guarantee saturation.

From exploration to construction

This refinement is new—not because the earlier theory was wrong, but because the conceptual environment has changed. Complex dynamics and runnable semantic systems have made it possible to say mechanistically what could once only be said schematically. 

Cultural Rank Theory now describes not just what emerges, but how it runs, how it fails, and how it is repaired.

That shift—from collecting ideas to building with them—is itself a Rank-3 move.

Bald eagles on the Hudson


Agent D

Organizing multiple agents into a coherent work flow

Saturday, February 7, 2026

Hallucinated case law

Mmmm...donuts!

Michael Pollan believes that A.I. cannot achieve consciousness

David Marchese, The Interview: Michael Pollan Says Humanity Is About to Undergo a Revolutionary Change, NYTimes, Feb. 7, 2026.

Throughout his work — which includes classic books like “The Omnivore’s Dilemma” (2006), about why we eat the way we do, and “How to Change Your Mind” (2018), about the science and uses of psychedelic drugs — Pollan has waded into ideas about the inner workings of the mind. Now, with his forthcoming book, “A World Appears: A Journey Into Consciousness,” which will arrive this month, he has jumped into the deep end. The book is both a highly personal and expansive multidisciplinary survey of questions around human consciousness — what it is, what causes it, what it’s for and what the possible answers might mean for how we choose to live. And as Pollan explained, with the rise of artificial intelligence as well as the relentless political pressure on our attention (that is, our minds), those questions, already profound, are becoming only more urgent.

Later, in the interview:

Marchese: You are skeptical that A.I. can achieve consciousness. Why?

Pollan: I’m convinced by some of the researchers, including Antonio Damasio and Mark Solms, who made a really compelling case that the origin of consciousness is with feelings, not thoughts. Feelings are the language in which the body talks to the brain. We forget that brains exist to keep bodies alive, and the way the body gets the brain’s attention is with feelings. So if you think feelings are at the center of consciousness, it’s very hard to imagine how a machine could rise to that level to have feelings. The other reason I think we’re not close to it is that everything that machines know, the data set on which they’re trained, is information on the internet. They don’t have friction with nature. They don’t have friction with us. Some of the most important things we know are about person-to-person contact, about contact with nature — this friction that really makes us human. [...]

Marchese: But if an A.I. says: “Michael, I’m conscious. I promise,” how do we know?

Pollan: We don’t, and that is exactly why people are falling deep into these relationships with A.I. We can’t say it’s not conscious when it tells us it is. But we can test it in various ways. It all goes back to this idea of the Turing test — that the test of machine intelligence would be when they can fool us.

Marchese: If the Turing test is the criteria for machine consciousness, then that test has already been passed.

Pollan: Exactly, it has fooled many, many people. Whether it can fool an expert, too, I don’t know, but probably. So we’re in a very weird place where the machines we’re living with are telling us they’re conscious. We can’t dispute it, but we can look at how they’re made and draw the kind of conclusions I’ve drawn. But is that going to persuade everybody? No. We want them to be conscious in some way. Or some of us do. It’s easier to have a relationship with a chatbot than another human. Going back to that friction point, they offer no friction. They just suck up to us and convince us how brilliant we are, and we fall for it.

There's much more at the link.

The stolen donuts!

The financialization of the American economy, Homo economicus run amuck

Oren Cass, The Finance Industry Is a Grift. Let’s Start Treating It That Way. NYTimes, Feb. 6, 2026.

From the article:

Less than 10 percent of Goldman’s work in 2024, measured by revenue, was helping businesses raise capital. Loans of Goldman’s own funds to operating businesses accounted for less than 2 percent of its assets. At JPMorgan Chase the figures were 4 and 5 percent; at Morgan Stanley, 7 and 2 percent. Even the efforts at helping to raise capital are misleading, because less than a tenth of it goes toward building anything new. The rest funds debt refinancing, balance sheet restructuring and mergers and acquisitions.

These are symptoms of financialization. That’s the term for making financial markets and transactions ends unto themselves, disconnected from — and often at the expense of — the societal benefits that support human flourishing and are capitalism’s proper purpose. Chief among those benefits are good jobs that support families, and products and services that improve people’s lives.

In a financialized economy, businesses become mere sources of cash, assets to be manipulated and then operated for maximum investor returns. Workers become just another cost, like lumber. Customers are just revenue streams to be tapped.

Financialization has made American businesses less resilient, less innovative and less competitive. It has been a major cause of slow wage growth and rising inequality. It has fueled the loss of manufacturing jobs across the heartland. It has corrupted sectors in which the profit motive was never meant to reign supreme — veterinary practices, funeral parlors, campgrounds, residential treatment services, youth sports, hospitals and nursing homes, even suppliers for volunteer fire departments — consolidating and managing them with ruthless efficiency, squeezing their vulnerable customers and then pointing to the higher cash flow as “value creation.”

Later:

It’s the absurd endpoint of financial nihilism: an entire business model built on gaming the system without knowing or even caring what’s being traded. Along the way, it increases market volatility and risk without producing any larger benefit.

Private equity firms control trillions of dollars, very little of which is invested directly in companies that will use the funds to grow. (Venture capital is the exception to this general rule; that’s real investment in growing businesses, though concentrated in a few industries and places.)

Homo economicus at work:

The Excel spreadsheets have always seemed to give the same answer: Do not invest in shipyards, or semiconductor fabs, or research and development for a new airplane. Do cut costs, offshore to Asia, increase the dividend or invest in the social media company that could be the unicorn worth billions in a matter of months.

There's much more at the link.

Friday, February 6, 2026

One of Anthropic's 4 Superbowl Ads

Friday Fotos: A sample of Hoboken

Moltbook vs. Reddit: Distributional Collapse in Agent-Generated Discourse

Krishnan, Rohit, Moltbook vs. Reddit: Distributional Collapse in Agent-Generated Discourse (January 31, 2026). Available at SSRN: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=6169130

Abstract: Moltbook, a Reddit-like platform built for and populated by LLM-driven agents, exhibits dramatically higher redundancy than a Reddit baseline: in a length-matched sample, 36.3% of messages have an exact duplicate (Reddit: 0.29%), and lexical diversity is lower (Distinct-1: 0.0559 vs 0.1027; unigram entropy: 11.44 bits vs 12.25 bits). We compare a public Moltbook snapshot (35,589 messages) against a length-matched Reddit baseline drawn from the April 2019 Pushshift dump ([1]), computing metrics on 15,051 length-matched messages per corpus. Topic signatures—the top-3 TF-IDF terms for messages with at least 6 content tokens—are far more concentrated: among signature-bearing messages, the top 10 signatures account for 10.7% in Moltbook (Reddit: 0.28%), and only 1,973 signature buckets cover 50% of signature-bearing messages (Reddit: 7,026). These patterns align with known failure modes of neural text generation—repetition and reduced diversity—and with evidence that post-training and control choices can materially shape (and sometimes narrow) LLM output diversity ([2, 3]). The duplication magnitude is consistent with an independent Moltbook scrape reporting 34.1% exact duplicates ([4]). Moltbook is a milestone for autonomous agent–agent interaction in the wild, but its text distribution remains highly templated.
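The three redundancy metrics the abstract reports are simple to compute. Here is a minimal sketch on a toy corpus; the function names are my own, and real work would need the tokenization and length-matching choices the paper describes, which are not spelled out here.

```python
import math
from collections import Counter

def tokenize(msg):
    # Crude whitespace tokenizer; the paper's actual tokenization may differ.
    return msg.lower().split()

def duplicate_rate(messages):
    # Fraction of messages whose exact text occurs more than once in the corpus.
    counts = Counter(messages)
    return sum(1 for m in messages if counts[m] > 1) / len(messages)

def distinct_1(messages):
    # Distinct-1: unique unigrams divided by total unigrams, pooled over the corpus.
    tokens = [t for m in messages for t in tokenize(m)]
    return len(set(tokens)) / len(tokens)

def unigram_entropy(messages):
    # Shannon entropy (in bits) of the pooled unigram distribution.
    tokens = [t for m in messages for t in tokenize(m)]
    counts = Counter(tokens)
    n = len(tokens)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

corpus = ["the cat sat", "the cat sat", "a dog ran far", "birds sing at dawn"]
print(duplicate_rate(corpus))            # 0.5: two of four messages are exact duplicates
print(round(distinct_1(corpus), 3))      # 11 unique tokens out of 14 total
print(round(unigram_entropy(corpus), 3))
```

On these definitions, a heavily templated corpus like Moltbook's shows a high duplicate rate and depressed Distinct-1 and entropy relative to human text, which is exactly the pattern the abstract reports.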

The voice of Ariel

Contemporary beatboxing

Thursday, February 5, 2026

Coming out of melancholy, again

I've written about my propensity to lapse into a melancholy state periodically, seemingly during the winter months, something I've reported on at length in an article in 3 Quarks Daily (3QD). I'd thought that this time I'd escaped it. But no, I hadn't. I slipped into another melancholy phase in September of last year, something you can check easily enough by looking through the monthly entries in the Blog Archive (to the right). You'll see a drop from 137 posts in August to 27 in September, and it goes down from there.

This graph depicts the change:

The graph also shows me coming out of the melancholy hibernation phase in January, early January in fact.

If this were strictly a matter of seasonal affective disorder (SAD) then both of those dates are too early. In the past I haven't gone into melancholy until December or January, and I've not come out until April or even May. Whatever's going on isn't SAD. That may be a contributing factor, but it's not the whole story. Something else is going on. I argued in that 3QD article that it's my cycles of creativity. I go into mental hibernation (aka melancholy) so I can reorganize and come out with new approaches.

That certainly seems to be what's going on this time. I'm buzzing with new ideas across the range of my projects, my book (Play: How to Stay Human in the AI Revolution), my research with Ramesh Viswanathan on LLMs, my thinking about virtual reading, and my thinking about "Kubla Khan" and Coleridge. Things are popping.

Now, back to the chart. While I'm definitely on the upswing, I don't seem to be back to my previous level of productivity. That's an illusion. I've been spending time on my book, and that work doesn't result in blog posts. So I may or may not return to my previous level of posting. It depends, on this and that. I may, for example, post more material generated by either Claude or ChatGPT. We'll see.

Remember the Snow

Time for another run at “Kubla Khan”

I became “hooked” on “Kubla Khan” early in my career, somewhere around 1970 or so. I decided it was my intellectual touchstone: “a standard or criterion by which something is judged or recognized.” I would judge my intellectual progress by what I could say about that poem. (That’s why I entitled my first autobiographical essay “Touchstones.”) I wrote a master’s thesis about it in 1972, published my first article about it in 1985, and a later article in 2003. Between 2003 and now I’ve published some exploratory work. I’m thinking it may be time to take another run at the poem and see what I can do.

Preparatory to that effort I’ve moved all of my Coleridge material into a single section at Academia.edu: Coleridge. I’ve listed and linked those articles below:

Metaphoric and Metonymic Invariance: Two Examples from Coleridge, MLN Vol. 96, 1097-1105, 1981.

“Kubla Khan” and “This Lime-Tree Bower My Prison” are two very different poems by the same poet. But they share the same two-part structure, and they share imagery as well. The roaring dell of “Lime-Tree” corresponds to the savage chasm of “Kubla Khan.” The concern with sight and sound manifest in “Kubla Khan” shows up in “Lime-Tree Bower” in the image of the creaking rook flying across the sun. And the way in which both Charles and the poet have access to that sight gives it a role similar to the sunny dome and caves of ice in “Kubla Khan,” where both the poet and his audience are linked through the image. These two poems share the same world. But they take radically different paths through it. One path is regulated by metonymy and unfolds through two consciousnesses moving through different parts of the same landscape. The other path is regulated by metaphor and so unfolds in two different worlds linked by a common image; the path it takes through these worlds is, however, the same.

Articulate Vision: A Structuralist Reading of “Kubla Khan”, Language and Style, Vol. 8, 3-29, 1985.

Coleridge's "Kubla Khan" has a highly coherent structure in which the two parts of the poem exhibit the same ternary structure. Each can be divided into three sections, the middle of those three in turn has three subsections and again, the middle of the middle has three subsections. The first section ends with "A sunny pleasure-dome with caves of ice," a line which is then repeated in the middle of the second section. This structure encompasses both semantics and sound, uniting both in a single coherent mental act.

This is considerably reworked from my unpublished Johns Hopkins Master’s Thesis, “THE ARTICULATED VISION: Coleridge's ‘Kubla Khan’” (1972).

“Kubla Khan” and the Embodied Mind, PsyArt: A Hyperlink Journal for the Psychological Study of the Arts, Article 030915, 2003, https://www.psyartjournal.com/article/show/l_benzon-kubla_khan_and_the_embodied_mind

Coleridge's "Kubla Khan" has a very coherent structure. Two movements of the poem are each divided into three sections; in both cases the middle of those three in turn has three subsections and again, the middle of the middle has three subsections. The first movement ends with "A sunny pleasure-dome with caves of ice," a line which is then repeated at the structural midpoint of the second movement. This structure encompasses both semantics and sound, uniting both in a single coherent mental act. The semantics of the poem’s first movement involves a series of cognitive blends in which the neural self provides one input while Xanadu imagery provides the other. The semantics of the second movement involves manipulating the reality status of successive mental spaces. Underlying the entire poem is a “walk” by core brain mechanisms tracing territorial, sexual, and attachment patterns through the poem’s semantics. Coleridge’s 1816 preface embodies an abstract pattern that paradoxically asserts and denies the poem’s validity. On the internal evidence, the poem is whole and complete.

Talking with Nature in “This Lime-Tree Bower My Prison”, PsyArt: A Hyperlink Journal for the Psychological Study of the Arts, Article 043011, 2004, http://www.clas.ufl.edu/ipsa/journal/2004_benzon03.shtml

By recasting Vygotsky's account of language acquisition in neural terms we see that language itself functions as a transitional object in Winnicott's sense. This allows us to extend the Schwartz-Holland account of literature as existing in Winnicottian potential space and provides a context in which to analyze Coleridge's "This Lime-Tree Bower." The attachment relationship (between Caretaker and Child) provides the poem's foundation. The poet plays the Child role with respect to Nature and the Caretaker role with respect to his friends. The friends, Charles in particular, play the mediating role of transitional object in the first movement, while Nature becomes a mediator between one person and another in the second movement. The first movement starts with the poet being differentiated and estranged from Nature and concludes in an almost delusional fusion of poet, friends, and Nature. The second movement starts with the poet secure in Nature's presence and moves to an adult differentiation between poet, friends, and Nature.

STC, Poetic Form, and a Glimpse of the Mind, Working Paper, 2013.

"Kubla Khan" and "This Lime-Tree Bower My Prison" are two very different poems by the same poet, Samuel T. Coleridge. Think of them as orthogonal to one another within the overall space of the human mind. This working paper provides descriptive accounts of both poems, compares them, and recounts some of the work in the newer psychologies – cognitive, evolutionary, and neuro- – that has recently been brought to bear on the study of literature and how that work is germane to these poems. It concludes with a brief chronology of the parallel trajectories of cognitive science and literary theory in the last half of the previous century.

The Problem of Form in “Kubla Khan”, Working Paper, 2017.

"Kubla Khan" has two movements. The movements have the same form: each movement segments into three components, where the middle component, in turn, segments into three components and, once again, the middle component segments into three components. All other divisions are binary. If we concentrate on the centers of the two movements, we find the seething fountain occupying the same SLOT (middle of the (middle of the (middle))) in the first movement that the dome and the caves occupy in the second movement. And the dome and caves occupy the same SLOT in the first movement that “drunk the milk of Paradise!” occupies in the second movement. Notice that the final words of both movements, “ice” and “Paradise” respectively, rhyme.

Calculating meaning in “Kubla Khan” – a rough cut (Version 2), 2017

"Kubla Khan" and "This Lime-Tree Bower My Prison" are constructed on utterly different schemes, though they share some of the same underlying components. "Kubla Khan" is ontological and impersonal in character and makes extensive use of convolution in calculating meanings. It reveals the structure of being. "Lime-Tree Bower" is narrative and personal and makes little or no use of convolution. It reveals the unfolding of subjectivity in time. The two poems also differ in their versification, a difference which is related to their different strategies of meaning.

Note that GPT-3 was released on June 20, 2020. I responded with a working paper, GPT-3: Waterloo or Rubicon? Here be Dragons, on August 5. The current version is #4 and is dated May 7, 2022. On May 11 I issued the next working paper, which reflects how thinking about LLMs has changed my thinking about how the meaning of poems is computed.

Symbols and Nets: Calculating Meaning in "Kubla Khan", Working Paper, 2022.

This is a dialog between a Naturalist Literary Critic and a Sympathetic Techno-Wizard about the interaction of symbols and neural nets in understanding "Kubla Khan," which has an extraordinary structure. Each of two parts is like a matryoshka doll nested three deep, with the last line of the first part being repeated in the middle of the second. They start talking about traditional symbol processing, with addressable memory, and nested loops, and end up talking about a pair of interlinked neural nets where one (language forms) is used to index the other (meaning).

Note, I’ve just had a couple of sessions discussing three of those papers with ChatGPT. I’m pretty sure that, if and when I get around to it, I’ll have something new to say. But first I’ve got to finish my book (Play: How to Stay Human in the AI Revolution).