Tuesday, February 10, 2026

Toward a Biophysics of Poetry

My long-term interest in Coleridge’s “Kubla Khan” (KK) is shadowed by an interest in “This Lime-Tree Bower My Prison” (LTB), which is one of the so-called Conversation poems. I compared the two in a working paper from 2013, STC, Poetic Form, and a Glimpse of the Mind, noting that they have several motifs in common but are utterly different poems. Thus (pp. 41-42):

LTB’s trajectory is a first-person narrative; we follow the perceptions, thoughts, and feelings of the poet as they occur to him, one after the other. We are presented with a cleanly identified authorial consciousness shifting its attention here and there about the world in a coherent way. The passage of time is explicitly marked and noted within the poem itself. In this poem Coleridge’s verse technique emphasizes temporal continuity. By placing major attention shifts in the middle of the line Coleridge emphasizes an underlying continuity of consciousness: one’s gaze has a new focus, it is still one’s gaze. By running clauses across line boundaries, the repetitive recurrence of line ends is muted. The temporal course of the poetic act itself is muted so as to “dissolve” into the virtual time depicted in the poem.

KK is different; it lacks a narrative frame. Thus it cannot register either narrative movement or a self that experiences such movement. Things and scenes are juxtaposed in a way that is sometimes hard to grasp. Yet, as we have seen, there is a rigorous logic to the poem. At the same time it has an elaborate rhyme scheme and other sound features as well. What has this to do with its “ontological” semantic trajectory?

I suggest that the confluence between sound and sense in “Kubla Khan” is, in effect, a surrogate for the lack of a narrative frame, and hence the impossibility of a narrative trajectory. The key point is that rhyme adds an element of predictability to the verse; one knows that sounds will repeat at regular intervals and so can anticipate them. Thus the predictability that has been “lost” because the poem does not have a narrative flow is “restored” or “compensated for” through elaborate rhyme. The temporal structure of the poem itself becomes the frame for its semantic trajectory.

In the course of discussing these matters with ChatGPT 5.2 we arrived at what might be called a biophysics of poetry. I issued the following prompt:

I want to return to the difference in prosody practice in KK and LTB. I think that is (potentially) VERY deep. That there seems to be a trade-off between conceptual continuity (temporal succession vs. ontological transition from state to state) and sonic patterning (simple vs. elaborate versification), that strikes me as something very deep about how the nervous system functions. It has something to do with the physical nature of the system. We may not be able to figure it out on the basis of formal design considerations but will need to know specifics about physical structure and processes.

Yes—this is very deep, and I think you’re now pointing to something that sits below poetics, below cognition-as-symbol-processing, and squarely in the physics of nervous systems.

What you’re intuiting is not just a stylistic correlation, but a trade-off between two kinds of coherence that any physically realized information-processing system has to manage.

The discussion went on from there. I asked ChatGPT to summarize the discussion, after which we went on to develop experimental tests of the idea. I have placed ChatGPT’s summary below; the experimental tests will have to wait for another post.

Toward a Biophysics of Poetry

What if poetic form is not primarily a matter of meaning, symbolism, or interpretation—but of dynamical stability in the nervous system?

That is the direction our recent discussion has taken us: toward something like a biophysics of poetry. By this I mean an account of poetry grounded in timing, energy, alignment, and coherence—constraints that arise because poems must run on human (and perhaps artificial) neural systems.

Poetry as computation in time

A useful starting point is the idea that a linguistic string—a poem, sentence, or speech—is a one-dimensional projection of a path through a high-dimensional semantic and neural space. Meaning is not stored in the string itself; it is reconstructed as the reader (or listener) traverses that path.

Crucially, not all paths are equal. Some are smooth and additive. Others are turbulent and overlapping. This distinction matters.

Composition vs. convolution

Many poems proceed largely by composition: meanings accumulate incrementally, with limited interference. But some poems—most strikingly Coleridge’s Kubla Khan—operate by convolution, overlapping distinct conceptual domains (visual–spatial, auditory–temporal, symbolic–ritual) so that they interact rather than simply add.

Convolution is powerful, but dangerous. Overlapping domains can interfere with one another, threatening experiential breakdown. This raises a fundamental question: how does such a poem remain continuous at all?
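The contrast can be made concrete with a toy computation (my own illustration, with arbitrary numbers, not anything from the discussion itself): compositional combination adds two sequences pointwise, so each position of the output depends only on the corresponding positions of the inputs, while discrete convolution spreads every element of one sequence across many positions of the output, so the two signals genuinely interact.

```python
# Toy contrast between composition (pointwise addition) and convolution.
# The sequences are arbitrary illustrative values, not derived from any poem.

def compose(a, b):
    """Pointwise addition: position k of the output depends only on position k."""
    return [x + y for x, y in zip(a, b)]

def convolve(a, b):
    """Full discrete convolution: each element of `a` is spread across the
    output, weighted by all of `b`, so the two signals interact."""
    out = [0.0] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            out[i + j] += x * y
    return out

a = [1.0, 2.0, 3.0]
b = [0.0, 1.0, 0.5]

print(compose(a, b))   # [1.0, 3.0, 3.5] -- no interaction across positions
print(convolve(a, b))  # [0.0, 1.0, 2.5, 4.0, 1.5] -- every input touches many outputs
```

Note how the convolved output is longer than either input and how every input value leaves traces at several positions: that spreading is what makes convolution both powerful and prone to interference.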

Sound as a carrier wave

The answer is not primarily semantic. It is dynamical.

In signal-processing terms, the poem’s sound structure—meter, rhyme, phonological recurrence—functions as a carrier wave. It provides a stable, low-information temporal scaffold on which high-information semantic modulation can ride.

Sound does not carry meaning here; it carries phase alignment. It keeps neural activity synchronized as the poem shifts domains. Rhyme, in this view, is not ornament but infrastructure: a phase-reset mechanism that prevents destructive interference during domain transitions.
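The carrier-wave idea can be sketched numerically (a toy of my own, assuming a sine carrier and an arbitrary slowly varying “semantic” envelope): the carrier is strictly periodic, and therefore predictable one period ahead, while the message riding on it is not.

```python
import math

# Toy amplitude modulation: a strictly periodic, low-information carrier
# (the "meter/rhyme" layer) with an aperiodic, high-information envelope
# (the "semantic" layer) riding on top of it. All values are illustrative.

PERIOD = 8.0  # carrier period -- the predictable recurrence interval

def carrier(t):
    """Low-information layer: fully determined by its phase."""
    return math.sin(2 * math.pi * t / PERIOD)

def envelope(t):
    """High-information layer: an arbitrary, non-repeating modulation."""
    return 1.0 + 0.5 * math.sin(0.7 * t) * math.cos(0.23 * t * t)

def signal(t):
    return envelope(t) * carrier(t)

# The carrier is exactly predictable one period ahead...
assert abs(carrier(3.1) - carrier(3.1 + PERIOD)) < 1e-9
# ...but the modulated signal is not: the envelope has changed.
assert abs(signal(3.1) - signal(3.1 + PERIOD)) > 1e-3
```

The point of the sketch is only this: a listener who locks onto the carrier can anticipate when the next recurrence will arrive, even though the content arriving at that moment remains unpredictable.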

This perspective aligns closely with empirical work on oral tradition and memory, where sound structure is known to support sequencing and recall while meaning is reconstructed opportunistically.

Segmentation and entry points

The same logic applies to segmentation. If a linguistic string is a projection of a high-dimensional path, then some substrings are better entry points into that space than others.

Fragments that respect syntactic or prosodic boundaries align with internal structure and support rapid pattern completion. Fragments that straddle boundaries are misaligned: they activate incompatible frames and fail to converge.

This is not just a theoretical claim. In informal experiments with large language models,* prompts that respected syntactic boundaries reliably triggered recognition of well-known texts (e.g., Hamlet’s soliloquy or the Gettysburg Address), while boundary-straddling prompts did not. Recognition failed not because of missing information, but because of misalignment.
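The boundary effect can be mimicked in miniature (a toy index of my own devising, not the actual informal experiments the post refers to): if a memorized text is indexed by word n-grams, a probe that respects word boundaries finds an entry point, while a probe shifted by a single character finds nothing, even though it carries almost the same information.

```python
# Toy model of "entry points": a text memorized as a word-trigram index.
# A probe aligned on word boundaries keys into the index; a probe that
# straddles boundaries does not, despite containing nearly the same letters.

TEXT = "whether tis nobler in the mind to suffer the slings and arrows"

def build_index(text, n=3):
    words = text.split()
    index = {}
    for i in range(len(words) - n):
        key = " ".join(words[i:i + n])
        index[key] = " ".join(words[i + n:])  # continuation after the n-gram
    return index

def recall(index, probe):
    """Return the continuation if the probe is a recognized entry point."""
    return index.get(probe)

index = build_index(TEXT)

aligned = "nobler in the"      # respects word boundaries
straddling = "obler in the m"  # the same stretch, shifted by one character

print(recall(index, aligned))     # completes the line
print(recall(index, straddling))  # None: misaligned, no entry point
```

The one-character shift removes almost no information, but it breaks alignment with the units the index was built from, which is the structural point at issue.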

Poetry as biophysical engineering

Seen this way, poetic form is a kind of engineering. It constrains linguistic material so that complex meaning can remain experientially coherent under biophysical limits. Sound structure enforces phase alignment during traversal; hierarchical segmentation ensures viable re-entry points into semantic space.

This also clarifies differences among poems. In This Lime-Tree Bower My Prison, sound supports smooth movement within a single conceptual domain. In Kubla Khan, sound is load-bearing: without it, the poem’s domain-convolving machine would fall apart.

A working hypothesis

All of this can be summarized in a single hypothesis:

Poetic form constrains and exploits the dynamical properties of neural systems to maintain coherent experience under conditions of high semantic and conceptual load.

That, in essence, is what I mean by a biophysics of poetry. It does not replace interpretation—but it explains how interpretation remains possible in the first place. 

*William Benzon, Discursive Competence in ChatGPT, Part 2: Memory for Texts, Working Paper, September 27, 2023, 44 pp., https://www.academia.edu/107318793/Discursive_Competence_in_ChatGPT_Part_2_Memory_for_Texts 

The sun, the sun!

Large Language Model Reasoning Failures

Peiyang Song, Pengrui Han, Noah Goodman, Large Language Model Reasoning Failures, arXiv:2602.06176v1 [cs.AI] https://doi.org/10.48550/arXiv.2602.06176

Abstract: Large Language Models (LLMs) have exhibited remarkable reasoning capabilities, achieving impressive results across a wide range of tasks. Despite these advances, significant reasoning failures persist, occurring even in seemingly simple scenarios. To systematically understand and address these shortcomings, we present the first comprehensive survey dedicated to reasoning failures in LLMs. We introduce a novel categorization framework that distinguishes reasoning into embodied and non-embodied types, with the latter further subdivided into informal (intuitive) and formal (logical) reasoning. In parallel, we classify reasoning failures along a complementary axis into three types: fundamental failures intrinsic to LLM architectures that broadly affect downstream tasks; application-specific limitations that manifest in particular domains; and robustness issues characterized by inconsistent performance across minor variations. For each reasoning failure, we provide a clear definition, analyze existing studies, explore root causes, and present mitigation strategies. By unifying fragmented research efforts, our survey provides a structured perspective on systemic weaknesses in LLM reasoning, offering valuable insights and guiding future research towards building stronger, more reliable, and robust reasoning capabilities. We additionally release a comprehensive collection of research works on LLM reasoning failures, as a GitHub repository at this https URL, to provide an easy entry point to this area.

Monday, February 9, 2026

MAGA and its myth of a pristine past

The opening paragraph:

One explicit goal of the second Trump administration, if not its defining mission, is to undo the recent past and rewrite history to fit its own master narrative. By now it’s axiomatic that making America “great again” has never referred to any fixed point in the actual American past; it’s more like a mashup or highlight reel of random images taken from eras before any living American was born. We can see that vision embodied with startling literalness in the propaganda posters recently concocted by the Labor Department, such as the depiction of a whites-only church picnic apparently taking place in Uncanny Valley. If the rise of Donald Trump preceded the advent of AI slop, it may also have conjured it into existence: Never in cultural history have form and content been so perfectly matched.

We already know that Trump and his inner circle — which mostly means Stephen Miller and Russ Vought, the high priests of MAGA ideology — want to erase the gains of the civil rights movement, LGBTQ equality and feminism. But their true goals are far more ambitious, if less easy to define. This is a fake presidency devoted more to creating viral memes than shaping policy, and there’s no coherent or consistent narrative at work. Honestly, that’s less a flaw than a feature: The wholesale rejection of reality is central to the brand.

O'Hehir goes on to discuss an idea that the French philosopher Jean Baudrillard advanced in the wake of 9/11:

He was widely pilloried for arguing, after the 9/11 terrorist attacks, that those traumatic events did not actually reflect an apocalyptic showdown between Western democracy and illiberal Islamic radicalism but rather an important historical transition and “the emergence of a radical antagonism” within Western civilization itself. There was a “deep-seated complicity,” Baudrillard wrote, between the dominant world order defined by global capitalism and liberal democracy and those who would destroy it: “The West, in the position of God, has become suicidal, and declared war on itself.”

Whoaah! But maybe he had a point:

Baudrillard was also correct, to an eerie time-traveler degree, in predicting a “gigantic abreaction” to the terrorist attacks, a system-wide “moral and psychological downturn” that threatened to undermine “the whole ideology of freedom … on which the Western world prided itself.” The liberal-capitalist global order, he suggested, was in danger of being turned into its dark mirror-image, “a police-state globalization, a total control, a terror based on ‘law-and-order’ measures.”

And so:

There is nothing new about the bad conscience or self-destructive urge that Baudrillard identified within Western civilization, or about its deeply rooted conflict between incompatible tendencies we might call liberation and domination. He doesn’t use those words, nor does he ever mention “democracy” and “fascism,” which are imperfect modern manifestations of that conflict, and pin us down too much to present-tense politics.

Understood that way, we have been fighting World War IV for centuries. Stephen Miller and the would-be king he serves are fighting it now, with considerable vigor and ambition. Their imagined victory is completely impossible, profoundly dangerous and breathtaking in scale. In its fullest expression, it envisions undoing nearly all of modern history and returning to some primal, purified state of nature, or rather a meme version thereof: The 1950s and the antebellum South and the American frontier and medieval feudalism and the Neanderthal fireside — almost literally everything, everywhere, all at once.

Admittedly, even the most articulate MAGA ideologues — not that there are many — haven’t gone that far. But that’s where the collective brotastic idiocies of Peter Thiel and Jordan Peterson and Curtis Yarvin and Andrew Tate and Pete Hegseth and whomever else all converge: Somewhere in the recent or distant or mythical past, everything totally ruled and “we” (a term of art, I hasten to add) never felt bad about any of it. Guys were guys and women were hot and there was lots of feasting and stuff.

There's more at the link.

Ice, ice, baby. Ice on the Hudson.

Terminology: Generative Machines, Epistemic Structure of the Cosmos, Intelligence-Complete

I’ve been spending a lot of time with my chatbots, ChatGPT and Claude, and some terminological issues have come up. Nothing particularly deep, just clarification.

Generative machines vs. equilibrium machines

While we talk of computers as machines, it’s obvious that they’re very different beasts. Electric drills, helicopters, sewing machines, hydraulic presses, they’re all (proper) machines. Interaction with and manipulation of matter is central to their purpose. Computers, well, technically, yes, they push electrons around in intricate paths, and electrons are matter, subatomic particles, very small chunks of matter, the smallest possible chunks. What computers are really about, though, is manipulating bits, units of information. And they use “trillions of parts” (a phrase I have from Daniel Dennett) to do so. Thus computers, with their trillions of parts, are very different from machines with only tens, hundreds, or thousands of parts.

So, what names should we give to differentiate them? “Type 1” and “Type 2” machines would do the job, but that’s not very descriptive. ChatGPT and I settled on “equilibrium machines” for those machines centered on interaction with matter, while “generative machines” seemed appropriate for the bit-wranglers. “Generative” seems just right for computers, with its echoes of Chomsky’s generative grammar and the generative pre-trained transformer (GPT) of machine learning. “Equilibrium machines” is perhaps a bit oblique for the other kind of machine, but it’s meant to evoke the equilibrium world of macroscopic devices as opposed to the far-from-equilibrium world of, well, generative machines.

Epistemic Structure of the Cosmos

Back in 2020 I wrote of the metaphysical structure of the cosmos. I said:

There is no a priori reason to believe that the world has to be learnable. But if it were not, then we wouldn’t exist, nor would (most?) animals. The existing world, thus, is learnable. The human sensorium and motor system are necessarily adapted to that learnable structure, whatever it is.

I am, at least provisionally, calling that learnable structure the metaphysical structure of the world.

I’ve always been uneasy with “metaphysical” in that role. ChatGPT suggested that “epistemic” would serve better. The epistemic structure of the cosmos, I like that. As for “cosmos,” the dictionary tells me that the word implies order, which I like as well.

Intelligence-Complete

A generative machine is intelligence-complete if it possesses the full capacities of human intelligence, whatever human intelligence is. By that definition LLMs are not intelligence-complete. As for human intelligence, I like the account given in What Miriam Yevick Saw: The Nature of Intelligence and the Prospects for A.I., A Dialog with Claude 3.5 Sonnet.

Wuthering Heights at New Savanna

As we've got a new Wuthering Heights movie coming to town this weekend, I thought I'd publish a post linking to the various posts I've written about that strange, passionate, and perverse story. I first read Wuthering Heights in high school and then again in graduate school, where I wrote a term paper about it. I've pretty much forgotten that paper – though I've probably got it in storage somewhere – but I do remember that I was concerned with tying the two parts of the novel together.

Perhaps I even made a connection with the two-generation plot Shakespeare used in his late Romances, such as The Winter's Tale. In these romances a rift between the members of a male-female pair is opened in the first generation and remains until it is resolved by a (successful) romance in the second generation. In The Winter's Tale Hermione is restored to Leontes through the romance of Perdita, their daughter, and Florizel. The restoration is rather grim in Wuthering Heights, as it happens when Heathcliff throws himself into the grave of the dead Catherine once he sees that Cathy Linton, his Catherine's daughter, has become attached to Hareton Earnshaw, whom she is teaching to read. I've never written at length about the novel since then, but if I were to do so, that's what I'd concentrate on, that and the complicated narrative structure.

While I've not given the novel a full treatment, I have featured it in a number of posts. Here they are.

* * * * *

Toward a Computational Historicism. Part 3: Abstraction at the Time Scale of History, April 23, 2014.

I discuss both The Winter's Tale and Wuthering Heights. I give particular attention to how the novel opens, pointing out how, because he assumes that the people he meets in the house must constitute a "natural" family, Lockwood makes several mistaken guesses about how these people are related. Moreover, both stories are about nature and nurture and the descent of character from one generation to the next.

Godzilla and Wuthering Heights, Kissing Cousins? April 25, 2024. From the post:

Thus both stories have a certain sequence of events that happens relatively late in the overall sequence. As the stories are actually narrated, however, this late sequence is moved to the beginning and the other sequences are adjusted to accommodate. In both cases the sequence that is moved involves a narrator external to the main sequence but known to the characters in it.

Wuthering Heights, Vampires, Handbooks, and Citizen Science, September 15, 2025.

Heathcliff, a vampire? No, but he's in the zone. Also, a link to the best reconstruction of the chronology of events in Wuthering Heights that I've been able to find. It's in The Reader's Guide to Wuthering Heights by Paul Thompson. Here's the link, Timeline.

The King’s Wayward Eye: For Claude Lévi-Strauss, October 25, 2019.

Beyond Lévi-Strauss, this post is mostly about The Winter's Tale in its relation to Pandosto. But it's got two paragraphs about the two-generation plot in Wuthering Heights near the end. That whole post is a good context in which to read those two paragraphs.

Humans and Dogs in Wuthering Heights, September 1, 2021 (reposted from The Valve, 12.30.2009)

I note: "...violence between dogs and humans takes place at important transition points in the novel. I’ve collected five such passages in this post and italicized the dog references within each passage." At that point I'd only made it through 18 out of 24 chapters. I wonder how dogs will appear in the current movie?

Operationalizing two tasks in Gary Marcus’s AGI challenge, June 9, 2022.

Among other things, Marcus claims that "In 2029, AI will not be able to read a novel and reliably answer questions about plot, character, conflicts, motivations, etc. Key will be going beyond the literal text, as Davis and I explain in Rebooting AI." In the last quarter to a third or so of the post I discuss some of the problems the AI would have with Wuthering Heights.

Sunday, February 8, 2026

Three vehicles

Cultural Ranks Revisited: From Stages to Dynamics

This post is by ChatGPT and summarizes a discussion we had about cultural ranks.

* * * * *

Cultural Rank Theory, as David Hays and I originally formulated it in the 1980s and 90s, was never meant to describe sharp stepwise transitions in human cognition or society. Yet it is easy to read it that way. Recent work—prompted by long reflection on complex dynamics and by hands-on experimentation with large language models—has clarified what the theory has always implied but did not yet fully articulate: cultural ranks are emergent stability regimes, not stages, and their spread through a population can take centuries.

Ranks as emergent ontologies, not ladders

Each cultural rank brings with it a characteristic ontology: what kinds of things are taken to exist, how causality is understood, what counts as explanation, and what kinds of agency are intelligible.

Crucially:

  • Ranks emerge locally, unevenly, and experimentally.
  • They are often articulated by elites, artists, or institutions long before they are widely embodied by ordinary adults.
  • Full population “saturation” may lag emergence by generations.

This distinction—between emergence and saturation—is the key refinement.

Rank 3 reconsidered: ego control as a dynamical achievement

Rank 3 has often been glossed as the rank of “reason,” “reflection,” or “self-control.” A more precise formulation is now possible:

Rank 3 is characterized by ego control understood as a regulatory capacity: the ability to hold destabilizing thoughts, emotions, or simulations without immediately authorizing them as action.

This is not a trait one simply “has.” It is a control regime, sustained by feedback, inhibition, and temporal integration.

Two intellectual developments made this clearer than it could have been earlier:

  1. Long engagement with nonlinear dynamics and neural systems (especially through the work of Walter Freeman).
  2. Direct experimentation with LLMs, which behave like semantic systems under load and make phenomena such as phase alignment, boundary sensitivity, and catastrophic misinterpretation visible and testable.

Shakespeare: Rank 3 at the point of emergence

Elizabethan England was not a Rank-3 society. Ego control was coming into view, not yet normative.

This is why Shakespeare matters so much.

In medieval sources such as the Amleth story, later reworked as Hamlet, suspicion and impulse flow directly into action. In Shakespeare’s version, by contrast, the drama turns on how the protagonist treats his own thoughts. Thinking becomes narratively distinct from knowing; imagination becomes dangerous if not regulated.

The same contrast appears even more starkly when Shakespeare rewrites his source for The Winter’s Tale. In Robert Greene’s Pandosto, the king’s jealous thought is immediately authorized as action, and the story ends in tragedy. Shakespeare allows the same thought to arise—but refuses to grant it sovereign authority. Repair becomes possible, though only through a long temporal loop. Romance, in this view, is not sentiment; it is a control regime that buys time when ego control fails too late.

Shakespeare could do this without a narrator only because he was a virtuoso. The novel later routinizes Rank-3 storytelling by institutionalizing the ego function in the figure of the narrator—making irony, distance, and perspective stable rather than exceptional.

Queen Elizabeth I: ego control as public performance

Rank emergence is often visible first in exemplary individuals. Queen Elizabeth I provides a textbook case.

In her 1559 speech refusing marriage, she explicitly separates desire, fear, and political pressure from authorized action. She rebinds attachment from a husband to an abstract polity (“I am already bound unto a husband, which is the kingdom of England”) and performs this reasoning publicly.

This is not repression; it is symbolic substitution and temporal regulation—core Rank-3 operations. Elizabeth does not prove Rank 3 was widespread. She shows it was intelligible.

The American Constitution: Rank 3 before saturation

The same emergence-before-saturation pattern appears institutionally in the late 18th century.

The U.S. Constitution is arguably the first Rank-3 nation-state design. It assumes:

  • separation of person and office,
  • abstract allegiance to rules rather than rulers, and
  • governance by internalized control loops (checks and balances), not virtue alone.

But the population capable of reliably sustaining those assumptions did not yet exist. That would come only in the 19th century with the rise of a bureaucratic middle class—where adults were required, in daily life, to distinguish personal interest from institutional role.

The fact that this distinction remains “shaky” even today is not a refutation of Rank theory; it is exactly what the theory predicts.

Why this refinement matters now

We are currently living through a rank transition under technological pressure. Rank-4 technologies—networks, AI, computation as model rather than tool—are being pulled toward Rank-2 institutional forms (charisma, faction, personalization), while simultaneously stressing Rank-3 control regimes.

Clarifying Rank 3 as a dynamical regulatory achievement, rather than a historical stage or personality trait, allows us to:

  • understand why regression and instability are so common,
  • see why older explanatory models fail,
  • and recognize that emergence does not guarantee saturation.

From exploration to construction

This refinement is new—not because the earlier theory was wrong, but because the conceptual environment has changed. Complex dynamics and runnable semantic systems have made it possible to say mechanistically what could once only be said schematically. 

Cultural Rank Theory now describes not just what emerges, but how it runs, how it fails, and how it is repaired.

That shift—from collecting ideas to building with them—is itself a Rank-3 move.

Bald eagles on the Hudson


Agent D

Organizing multiple agents into a coherent workflow

Saturday, February 7, 2026

Hallucinated case law

Mmmm...donuts!

Michael Pollan believes that A.I. cannot achieve consciousness

David Marchese, The Interview: Michael Pollan Says Humanity Is About to Undergo a Revolutionary Change, NYTimes, Feb. 7, 2026.

Throughout his work — which includes classic books like “The Omnivore’s Dilemma” (2006), about why we eat the way we do, and “How to Change Your Mind” (2018), about the science and uses of psychedelic drugs — Pollan has waded into ideas about the inner workings of the mind. Now, with his forthcoming book, “A World Appears: A Journey Into Consciousness,” which will arrive this month, he has jumped into the deep end. The book is both a highly personal and expansive multidisciplinary survey of questions around human consciousness — what it is, what causes it, what it’s for and what the possible answers might mean for how we choose to live. And as Pollan explained, with the rise of artificial intelligence as well as the relentless political pressure on our attention (that is, our minds), those questions, already profound, are becoming only more urgent.

Later, in the interview:

Marchese: You are skeptical that A.I. can achieve consciousness. Why?

Pollan: I’m convinced by some of the researchers, including Antonio Damasio and Mark Solms, who made a really compelling case that the origin of consciousness is with feelings, not thoughts. Feelings are the language in which the body talks to the brain. We forget that brains exist to keep bodies alive, and the way the body gets the brain’s attention is with feelings. So if you think feelings are at the center of consciousness, it’s very hard to imagine how a machine could rise to that level to have feelings. The other reason I think we’re not close to it is that everything that machines know, the data set on which they’re trained, is information on the internet. They don’t have friction with nature. They don’t have friction with us. Some of the most important things we know are about person-to-person contact, about contact with nature — this friction that really makes us human. [...]

Marchese: But if an A.I. says: “Michael, I’m conscious. I promise,” how do we know?

Pollan: We don’t, and that is exactly why people are falling deep into these relationships with A.I. We can’t say it’s not conscious when it tells us it is. But we can test it in various ways. It all goes back to this idea of the Turing test — that the test of machine intelligence would be when they can fool us.

Marchese: If the Turing test is the criteria for machine consciousness, then that test has already been passed.

Pollan: Exactly, it has fooled many, many people. Whether it can fool an expert, too, I don’t know, but probably. So we’re in a very weird place where the machines we’re living with are telling us they’re conscious. We can’t dispute it, but we can look at how they’re made and draw the kind of conclusions I’ve drawn. But is that going to persuade everybody? No. We want them to be conscious in some way. Or some of us do. It’s easier to have a relationship with a chatbot than another human. Going back to that friction point, they offer no friction. They just suck up to us and convince us how brilliant we are, and we fall for it.

There's much more at the link.

The stolen donuts!