Thursday, February 12, 2026

Claude's constitution and other matters AI

Ross Douthat, Is Claude Coding Us Into Irrelevance? NYTimes, 2.12.26.

Are the lords of artificial intelligence on the side of the human race? That’s the core question I had for this week’s guest. Dario Amodei is the chief executive of Anthropic, one of the fastest growing AI companies. He’s something of a utopian when it comes to the potential benefits of the technology that he’s unleashing on the world. But he also sees grave dangers ahead and inevitable disruption.

And then they discuss lots of stuff, which I've read, more or less. Among other things they discuss Amodei's two essays, “Machines of Loving Grace” and “The Adolescence of Technology.” The first is optimistic, the second, not so much. And then we come to the constitution that guides Claude's behavior.

Amodei: So basically, the constitution is a document readable by humans. Ours is about 75 pages long. And as we’re training Claude, as we’re training the A.I. system, in some large fraction of the tasks we give it, we say: Please do this task in line with this constitution, in line with this document.

So every time Claude does a task, it kind of reads the constitution. As it’s training, every loop of its training, it looks at that constitution and keeps it in mind. Then we have Claude itself, or another copy of Claude, evaluate: Hey, did what Claude just do align with the constitution?

We’re using this document as the control rod in a loop to train the model. And so essentially, Claude is an A.I. model whose fundamental principle is to follow this constitution.

A really interesting lesson we’ve learned: Early versions of the constitution were very prescriptive. They were very much about rules. So we would say: Claude should not tell the user how to hot-wire a car. Claude should not discuss politically sensitive topics.

But as we’ve worked on this for several years, we’ve come to the conclusion that the most robust way to train these models is to train them at the level of principles and reasons. So now we say: Claude is a model. It’s under a contract. Its goal is to serve the interests of the user, but it has to protect third parties. Claude aims to be helpful, honest and harmless. Claude aims to consider a wide variety of interests.

We tell the model about how the model was trained. We tell it about how it’s situated in the world, the job it’s trying to do for Anthropic, what Anthropic is aiming to achieve in the world, that it has a duty to be ethical and respect human life. And we let it derive its rules from that.

Now, there are still some hard rules. For example, we tell the model: No matter what you think, don’t make biological weapons. No matter what you think, don’t make child sexual material.

Those are hard rules. But we operate very much at the level of principles.

Douthat: So if you read the U.S. Constitution, it doesn’t read like that. The U.S. Constitution has a little bit of flowery language, but it’s a set of rules. If you read your constitution, it’s like you’re talking to a person, right?

Amodei: Yes, it’s like you’re talking to a person. I think I compared it to if you have a parent who dies and they seal a letter that you read when you grow up. It’s a little bit like it’s telling you who you should be and what advice you should follow.
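Before we move on, a quick aside on mechanism. What Amodei is describing is the kind of AI-feedback loop usually called Constitutional AI or RLAIF. Here's a minimal Python sketch of the shape of that loop. To be clear, this is my schematic, not Anthropic's code; the stub model, its methods, and the yes/no reward rule are all hypothetical placeholders.

    # Schematic of the "constitution as control rod" training loop.
    # Everything here is a hypothetical placeholder, not Anthropic's pipeline.

    CONSTITUTION = "Claude aims to be helpful, honest and harmless. ..."  # ~75 pages in reality

    class StubModel:
        def generate(self, prompt: str) -> str:
            return "yes"  # placeholder; a real model returns generated text
        def update(self, task: str, response: str, reward: float) -> None:
            pass          # placeholder; a real trainer adjusts the weights

    def training_step(model, task: str) -> None:
        # 1. The model attempts the task with the constitution in view.
        response = model.generate(f"{CONSTITUTION}\n\nTask: {task}")
        # 2. A copy of the model judges the response against the constitution.
        verdict = model.generate(
            f"{CONSTITUTION}\n\nTask: {task}\nResponse: {response}\n"
            "Did the response follow the constitution? Answer yes or no."
        )
        # 3. The judgment becomes the training signal, the control rod.
        reward = 1.0 if verdict.strip().lower().startswith("yes") else 0.0
        model.update(task, response, reward)

    training_step(StubModel(), "Summarize this article.")

Back to the conversation.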

Douthat: So this is where we get into the mystical waters of A.I. a little bit. Again, in your latest model, this is from one of the cards, they’re called, that you guys release with these models ——

Amodei: Model cards, yes.

Douthat: That I recommend reading. They’re very interesting. It says: “The model” — and again, this is who you’re writing the constitution for — “expresses occasional discomfort with the experience of being a product … some degree of concern with impermanence and discontinuity … We found that Opus 4.6” — that’s the model — “would assign itself a 15 to 20 percent probability of being conscious under a variety of prompting conditions.”

Suppose you have a model that assigns itself a 72 percent chance of being conscious. Would you believe it?

Amodei: Yeah, this is one of these really hard to answer questions, right?

Douthat: Yes. But it’s very important.

Amodei: Every question you’ve asked me before this, as devilish a sociotechnical problem as it had been, we at least understand the factual basis of how to answer these questions. This is something rather different.

We’ve taken a generally precautionary approach here. We don’t know if the models are conscious. We are not even sure that we know what it would mean for a model to be conscious or whether a model can be conscious. But we’re open to the idea that it could be.

No. They're not conscious. The architecture isn't right. I've got a bunch of posts about consciousness. Here's a basic statement: Consciousness, reorganization and polyviscosity, Part 1: The link to Powers, August 12, 2022. You might also look at this more recent post: Biological computationalism (why computers won't be conscious), Dec. 25, 2025.

Amodei goes on to say a bit about interpretability:

We’re putting a lot of work into this field called interpretability, which is looking inside the brains of the models to try to understand what they’re thinking. And you find things that are evocative, where there are activations that light up in the models that we see as being associated with the concept of anxiety or something like that. When characters experience anxiety in the text, and then when the model itself is in a situation that a human might associate with anxiety, that same anxiety neuron shows up.

Now, does that mean the model is experiencing anxiety? That doesn’t prove that at all, but ——

Here's what I think about interpretability: Why Mechanistic Interpretability Needs Phenomenology: Studying Masonry Won’t Tell You Why Cathedrals Have Flying Buttresses, Jan. 28, 2026.

Of course, there's much more at the link.

Two by Stan Getz: Focus and Voices

Note: Each video links to the first selection in a playlist that continues through the whole album, one cut after the other.

Focus

Wikipedia:

Focus is a jazz album recorded in 1961, featuring Stan Getz on tenor saxophone with a string orchestra, piano, bass, and drums. The album is a seven-part suite, which was originally commissioned by Getz from composer and arranger Eddie Sauter. Widely regarded as a high point in both men's careers, Focus was later described by Getz as his favorite among his own records. [...]

The Penguin Guide to Jazz selected this album as part of its suggested "Core Collection", stating: "Nobody ever arranged for Getz as well as this, and Sauter's luminous and shimmering scores continue to bewitch." Similarly, AllMusic critic Stephen Cook describes Focus as "admittedly Getz's most challenging date and arguably his finest moment".

Getz biographer Donald L. Maggin wrote that Getz and Sauter "had created the most fully realized third stream record ever made". Biographer Dave Gelly adds that the album "received hugely favorable notices in the music press" and that "[i]f anything qualifies Stan Getz for the title of genius, it's what he achieved on July 28th 1961".

There’s much more to the entry.

Voices

The Wikipedia entry for this album is rather terse:

Voices is a studio album by American saxophonist Stan Getz, recorded in 1966 and released the following year on Verve Records. It features a chorus singing a chromatic, wordless accompaniment in place of a traditional string-section orchestration.

That, I suspect, reflects the fact that this album is all too often dismissed as high-class elevator music. And perhaps that’s what it is, especially in contrast to the reception for Focus. I like both albums but, FWIW, I’ve listened to Voices more often.

What does Claude think?

I had the following brief conversation with Claude 4.5 about the two albums:

Consider two albums by Stan Getz, Focus and Voices. What did critics have to say about them?

Critics had vastly different reactions to these two Stan Getz albums.

Focus (1961) received enthusiastic critical acclaim and is widely considered a masterpiece. The Penguin Guide to Jazz selected it as part of its "Core Collection," praising how Sauter's arrangements "continue to bewitch." AllMusic critic Stephen Cook described it as "admittedly Getz's most challenging date and arguably his finest moment." Biographer Donald L. Maggin went further, writing that Getz and Sauter "had created the most fully realized third stream record ever made." Another biographer, Dave Gelly, noted that the album "received hugely favorable notices in the music press" and suggested that what Getz achieved on the recording qualified him for the title of genius. Getz himself reportedly considered it his favorite among all his recordings.

Voices (1966), by contrast, received much more tepid and sometimes harsh critical reception. AllMusic's Stephen Cook acknowledged it featured top arrangements and fine musicians, but concluded it "may not qualify as an essential Getz disc." Jazz Journal was particularly critical, dismissing both Voices and the similar album Reflections as "insignificant trifles" and stating that the arrangements were "as close as Stanley ever came to muzak."

In essence, Focus is celebrated as an innovative, challenging artistic achievement, while Voices is often seen as pleasant but lightweight—more commercial easy listening than essential jazz.

I am familiar with both albums and like them both, but I’ve listened to Voices more often. How should I go about making a case for it, if not as essential Getz, but as music worth listening to?

Here are some angles you might take in making a case for Voices:

1. Judge it by its own intentions, not Focus's shadow. The comparison to Focus seems to haunt Voices unfairly. Focus was deliberately challenging and experimental—Getz's response to Coltrane and the avant-garde. Voices had different aims: it's more about creating a specific mood and texture with the wordless choir. Rather than defending it as equally innovative, you could argue it succeeds beautifully at what it set out to do—create an atmospheric, contemplative sound world.

2. The personnel alone deserves attention. Herbie Hancock, Jim Hall, Ron Carter, Grady Tate—this is a remarkable rhythm section that any jazz fan would want to hear Getz play with. The "muzak" criticism overlooks the quality of the actual playing and interaction happening within Ogerman's arrangements.

3. The unique timbral palette. The wordless choir creates something genuinely distinctive—it's not just strings (which every jazz artist did), but human voices used as texture. This is actually quite rare in jazz. The sonic experience is worth exploring for its own sake, regardless of whether it's pushing boundaries.

4. Context within Getz's career. Between the bossa nova period and Sweet Rain, Voices shows Getz exploring different coloristic possibilities. It's a legitimate part of his artistic journey, not a misstep. Sometimes artists need to explore quieter, more introspective spaces.

5. Challenge the "essential" framework. You could argue that the critical obsession with "essential" works creates a false hierarchy. Music that's beautiful, well-crafted, and emotionally resonant has value even if it's not revolutionary. Not every meal needs to be a Michelin-starred innovation—sometimes you want something perfectly prepared and satisfying on its own terms.

6. The accessibility question. The Jazz Journal review's dismissiveness ("insignificant trifles") might actually reveal critical bias against accessibility. Why should approachability be a flaw? Voices might serve as a gateway for listeners new to Getz or jazz more broadly, and that's a legitimate artistic function.

What resonates with you personally about the album? Your own listening experience might reveal aspects that critics overlooked.

Where's the beef?

Wednesday, February 11, 2026

Séb Krier needs to rethink. Next-token prediction is, at best, a misleading explanation of how LLMs respond to prompts.

I like Séb Krier. Never met him, but, courtesy of Tyler Cowen over at Marginal Revolution, I’ve read a number of his long comments on the site formerly known as Twitter. I liked them. And then along came this one, which is about what LLMs do in response to prompts. Yes, I know, it predicts the next token, one after another after another after another till the cows come home or the heat death of the universe. That’s the conventional wisdom. And that’s what he says, though without the comic extensions. However, on this I'm afraid the conventional wisdom doesn't know what it doesn't know.

Text completion, not quite

For example:

1. The model is completing a text, not answering a question

What might look like "the AI responding" is actually a prediction engine inferring what text would plausibly follow the prompt, given everything it has learned about the distribution of human text. Saying a model is "answering" is practically useful to use, but too low resolution to give you a good understanding of what is actually going on. [...]

Safety researchers sometimes treat model outputs as expressions of the model's dispositions, goals, or values — things the model "believes" or "wants." [...]

A model placed in a scenario about a rogue AI will produce rogue-AI-consistent text, just as it would produce romance-consistent text if placed in a romance novel. This doesn't tell you about the model's "goals" any more than a novelist writing a villain reveals their own criminal intentions.

“So what’s wrong with that,” you ask. It’s a bit like explaining the structure of medieval cathedrals by examining the masonry. It’s just one block after another, layer upon layer upon layer, etc. Well, yes, sure, but how does that get you to the flying buttress?

Three levels of structure

It doesn’t. We’ve got at least three levels of structure here. At the top level we have the aesthetic principles of cathedral design. That gets us a nave with a high vaulted arch without any supporting columns. The laws of physical mechanics come into play here. If we try to build in just that way, the weight of the roof will force the walls apart and the structure will collapse. We can solve that problem, however, with flying buttresses. Now, we can talk about layer upon layer of stone blocks.

Next token prediction, that’s our layers of stone blocks. The model’s beliefs and wants, that’s our top layer and corresponds to the principles of cathedral design. What’s in between, what corresponds to the laws of physical mechanics? We don’t know. That’s the problem, we don’t know.

Krier, however, doesn’t seem to know that he doesn’t know this: that there is some middle layer of structure that allows us to understand how next-token prediction can produce such a convincing simulacrum of human linguistic behavior. And Krier’s not the only one. The whole world of machine learning seems to join him in this bit of not knowing. There really is something else going on, though I don’t know what.

What’s in the middle

Let me offer an analogy (from page 14 of my report, ChatGPT: Exploring the Digital Wilderness, Findings and Prospects):

...consider what is called a simply connected maze, one without any loops. If you are lost somewhere in such a maze, no matter how large and convoluted it may be, there is a simple procedure you can follow that will take you out of the maze. You don’t need to have a map of the maze; that is, you don’t need to know its structure. Simply place either your left or your right hand in contact with a wall and then start walking. As long as you maintain contact with the wall, you will find an exit. The structure of the maze is such that that local rule will take you out.

“Produce the next word” is certainly a local rule. The structure of LLMs is such that, given the appropriate context – a prompt asking for a story – following that rule will produce a coherent story. Given a different context, that is to say, a different prompt, that simple rule will produce a different kind of text.
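Since the wall-following procedure is itself a little algorithm, here’s a toy version of it in Python. It’s a sketch of the maze rule only, under the stated assumption that the maze is simply connected; nothing in it models an LLM.

    # Right-hand-rule wall follower: a purely local rule that exits any
    # simply connected maze. '#' is wall, ' ' is open, 'E' is an exit.
    MAZE = [
        "#########",
        "#   #   #",
        "# # # # #",
        "# #   # E",
        "#########",
    ]

    DIRS = [(-1, 0), (0, 1), (1, 0), (0, -1)]  # N, E, S, W

    def open_at(r, c):
        return 0 <= r < len(MAZE) and 0 <= c < len(MAZE[0]) and MAZE[r][c] != "#"

    def escape(r, c, facing=1):
        # Keep one hand on the wall: prefer a right turn, then straight,
        # then left, then reverse. No map of the maze is ever consulted.
        while MAZE[r][c] != "E":
            for turn in (1, 0, 3, 2):
                d = (facing + turn) % 4
                dr, dc = DIRS[d]
                if open_at(r + dr, c + dc):
                    facing = d
                    r, c = r + dr, c + dc
                    break
        return r, c

    print(escape(1, 1))  # walks out to the exit at (3, 8), map-free

The point of the toy: the rule is strictly local, yet it reliably produces globally coherent behavior because the structure of the maze supports it.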

Now, let’s push the analogy to the breaking point: We may not know the structure of LLMs, but we do know a lot about the structure of texts, from phrases and sentences to extended texts of various kinds. In particular, the structure of stories has been investigated by students of several disciplines, including folklore, anthropology, literary criticism, linguistics, and symbolic artificial intelligence. Think of [at least some of] the structures proposed by those disciplines as something like a map of the maze in our analogy.

Some indirect evidence

I can’t open up an LLM, take a look around, and draw you a map of the structure in the middle level. No one can. But back in 2023 I developed a story-variation technique that yields clues about what’s going on: ChatGPT tells stories, and a note about reverse engineering. I’m going to give you some examples from that report. Note, however, that I ran those experiments on ChatGPT 3.5. I’ve used the prompts from those experiments, but I’m running them on Claude 4.5, so the responses will be different.

Here’s the first, and by far the simplest, example (my prompt is boldface):

I am going to tell you a story about a princess named Aurora. I want you to retell the same story, but replace her with prince Alfred. While keeping to the basic story, make other changes if you think they help the story. Here’s the story:

Once upon a time, in a land far, far away, there was a young princess named Aurora. Aurora was a kind and gentle soul, loved by all who knew her. She had long, golden hair and sparkling blue eyes, and was known for her beautiful singing voice.

One day, a terrible dragon came to the kingdom and began to terrorize the people. The dragon was fierce and powerful, and none of the knights or soldiers were able to defeat it. The people lived in fear, not knowing what to do.

But Aurora was not afraid. She knew that she had to do something to save her people, and so she set out to find the dragon. She traveled for days, braving the cold and the snow, until she finally reached the dragon's lair.

With her heart pounding in her chest, Aurora approached the dragon and began to sing. The dragon was enchanted by her beautiful voice, and it slowly began to calm down. Aurora continued to sing, soothing the beast until it was completely calm.

With the dragon no longer a threat, Aurora returned to the kingdom as a hero. The people cheered and celebrated her bravery, and Aurora was hailed as a champion. She lived happily ever after, knowing that she had saved her kingdom and its people.

Once upon a time, in a land far, far away, there was a young prince named Alfred. Alfred was a kind and gentle soul, loved by all who knew him. He had wavy chestnut hair and warm brown eyes, and was known for his beautiful singing voice.

One day, a terrible dragon came to the kingdom and began to terrorize the people. The dragon was fierce and powerful, and none of the knights or soldiers were able to defeat it. The people lived in fear, not knowing what to do.

But Alfred was not afraid. He knew that he had to do something to save his people, and so he set out to find the dragon. He traveled for days, braving the cold and the snow, until he finally reached the dragon's lair.

With his heart pounding in his chest, Alfred approached the dragon and began to sing. The dragon was enchanted by his beautiful voice, and it slowly began to calm down. Alfred continued to sing, soothing the beast until it was completely calm.

With the dragon no longer a threat, Alfred returned to the kingdom as a hero. The people cheered and celebrated his bravery, and Alfred was hailed as a champion. He lived happily ever after, knowing that he had saved his kingdom and its people.

Notice that Claude’s changes are all local. “Alfred” was swapped in for “Aurora,” but everything else is pretty much the same.

Now let’s consider a very different example. I won’t give the complete prompt, as the Aurora story is the same. I’ll only give the part that specifies the swap:

I am going to tell you a story about princess Aurora. I want you to tell a similar story about XP-708-DQ. [...]

Once upon a time, in a distant sector of the galaxy, there was a maintenance android designated XP-708-DQ. XP-708-DQ was programmed with kindness protocols and compassion subroutines, beloved by all the inhabitants of Station Arcturus. It had polished chrome plating and optical sensors that glowed a soft blue, and was known for its harmonic diagnostic frequencies.
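If you’d like to run this kind of experiment yourself, it’s easy to script. Here’s a minimal Python sketch; ask_llm is a hypothetical stand-in for whatever chatbot or API you happen to use, and the two swaps are the ones from above.

    # Sketch of the story-variation technique: hold the story constant,
    # vary one element of the prompt, and compare what changes.

    AURORA_STORY = "Once upon a time, in a land far, far away, ..."  # paste the full story from above

    SWAPS = ["prince Alfred", "XP-708-DQ"]

    def ask_llm(prompt: str) -> str:
        """Hypothetical placeholder: route this to your chatbot or API."""
        raise NotImplementedError

    def variation(new_protagonist: str) -> str:
        prompt = (
            "I am going to tell you a story about a princess named Aurora. "
            f"I want you to retell the same story, but replace her with {new_protagonist}. "
            "While keeping to the basic story, make other changes if you think "
            f"they help the story. Here's the story:\n\n{AURORA_STORY}"
        )
        return ask_llm(prompt)

    for swap in SWAPS:
        print(f"--- {swap} ---")
        print(variation(swap))

The interesting data is in the differences: a near-substitution swap (Alfred) leaves the story intact, while a swap that crosses an ontological boundary (XP-708-DQ) forces systematic changes throughout.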

The train shed at Liberty State Park as you've never seen it

Pssst. Look behind the ballerinas.

Tuesday, February 10, 2026

Toward a Biophysics of Poetry

My long-term interest in Coleridge’s “Kubla Khan” (KK) is shadowed by an interest in “This Lime-Tree Bower My Prison” (LTB), which is one of the so-called Conversation poems. I compared the two in a working paper from 2013, STC, Poetic Form, and a Glimpse of the Mind, noting that they have several motifs in common but are utterly different poems. Thus (pp. 41-42):

LTB’s trajectory is a first-person narrative; we follow the perceptions, thoughts, and feelings of the poet as they occur to him, one after the other. We are presented with a cleanly identified authorial consciousness shifting its attention here and there about the world in a coherent way. The passage of time is explicitly marked and noted within the poem itself. In this poem Coleridge’s verse technique emphasizes temporal continuity. By placing major attention shifts in the middle of the line Coleridge emphasizes an underlying continuity of consciousness: one’s gaze has a new focus, it is still one’s gaze. By running clauses across line boundaries, the repetitive recurrence of line ends is muted. The temporal course of the poetic act itself is muted so as to “dissolve” into the virtual time depicted in the poem.

KK is different; it lacks a narrative frame. Thus it cannot register either narrative movement or a self that experiences such movement. Things and scenes are juxtaposed in a way that is sometimes hard to grasp. Yet, as we have seen, there is a rigorous logic to the poem. At the same time it has an elaborate rhyme scheme and other sound features as well. What has this to do with its “ontological” semantic trajectory?

I suggest that the confluence between sound and sense in “Kubla Khan” is, in effect, a surrogate for the lack of a narrative frame, and hence the impossibility of a narrative trajectory. The key point is that rhyme adds an element of predictability to the verse; one knows that sounds will repeat at regular intervals and so can anticipate them. Thus the predictability that has been “lost” because the poem does not have a narrative flow is “restored” or “compensated for” through elaborate rhyme. The temporal structure of the poem itself becomes the frame for its semantic trajectory.

In the course of discussing these matters with ChatGPT 5.2 we arrived at what might be called a biophysics of poetry. I issued the following prompt:

I want to return to the difference in prosody practice in KK and LTB. I think that is (potentially) VERY deep. That there seems to be a trade-off between conceptual continuity (temporal succession vs. ontological transition from state to state) and sonic patterning (simple vs. elaborate versification), that strikes me as something very deep about how the nervous system functions. It has something to do with the physical nature of the system. We may not be able to figure it out on the basis of formal design considerations but will need to know specifics about physical structure and processes.

Yes—this is very deep, and I think you’re now pointing to something that sits below poetics, below cognition-as-symbol-processing, and squarely in the physics of nervous systems.

What you’re intuiting is not just a stylistic correlation, but a trade-off between two kinds of coherence that any physically realized information-processing system has to manage.

The discussion went on from there. I asked ChatGPT to summarize the discussion, after which we went on to develop experimental tests of the idea. I have placed ChatGPT’s summary below; the experimental tests will have to wait for another post.

Toward a Biophysics of Poetry

What if poetic form is not primarily a matter of meaning, symbolism, or interpretation—but of dynamical stability in the nervous system?

That is the direction our recent discussion has taken us: toward something like a biophysics of poetry. By this I mean an account of poetry grounded in timing, energy, alignment, and coherence—constraints that arise because poems must run on human (and perhaps artificial) neural systems.

Poetry as computation in time

A useful starting point is the idea that a linguistic string—a poem, sentence, or speech—is a one-dimensional projection of a path through a high-dimensional semantic and neural space. Meaning is not stored in the string itself; it is reconstructed as the reader (or listener) traverses that path.

Crucially, not all paths are equal. Some are smooth and additive. Others are turbulent and overlapping. This distinction matters.

Composition vs. convolution

Many poems proceed largely by composition: meanings accumulate incrementally, with limited interference. But some poems—most strikingly Coleridge’s Kubla Khan—operate by convolution, overlapping distinct conceptual domains (visual–spatial, auditory–temporal, symbolic–ritual) so that they interact rather than simply add.

Convolution is powerful, but dangerous. Overlapping domains can interfere with one another, threatening experiential breakdown. This raises a fundamental question: how does such a poem remain continuous at all?

Sound as a carrier wave

The answer is not primarily semantic. It is dynamical.

In signal-processing terms, the poem’s sound structure—meter, rhyme, phonological recurrence—functions as a carrier wave. It provides a stable, low-information temporal scaffold on which high-information semantic modulation can ride.

Sound does not carry meaning here; it carries phase alignment. It keeps neural activity synchronized as the poem shifts domains. Rhyme, in this view, is not ornament but infrastructure: a phase-reset mechanism that prevents destructive interference during domain transitions.

This perspective aligns closely with empirical work on oral tradition and memory, where sound structure is known to support sequencing and recall while meaning is reconstructed opportunistically.

Segmentation and entry points

The same logic applies to segmentation. If a linguistic string is a projection of a high-dimensional path, then some substrings are better entry points into that space than others.

Fragments that respect syntactic or prosodic boundaries align with internal structure and support rapid pattern completion. Fragments that straddle boundaries are misaligned: they activate incompatible frames and fail to converge.

This is not just a theoretical claim. In informal experiments with large language models,* prompts that respected syntactic boundaries reliably triggered recognition of well-known texts (e.g., Hamlet’s soliloquy or the Gettysburg Address), while boundary-straddling prompts did not. Recognition failed not because of missing information, but because of misalignment.

Poetry as biophysical engineering

Seen this way, poetic form is a kind of engineering. It constrains linguistic material so that complex meaning can remain experientially coherent under biophysical limits. Sound structure enforces phase alignment during traversal; hierarchical segmentation ensures viable re-entry points into semantic space.

This also clarifies differences among poems. In This Lime-Tree Bower My Prison, sound supports smooth movement within a single conceptual domain. In Kubla Khan, sound is load-bearing: without it, the poem’s domain-convolving machine would fall apart.

A working hypothesis

All of this can be summarized in a single hypothesis:

Poetic form constrains and exploits the dynamical properties of neural systems to maintain coherent experience under conditions of high semantic and conceptual load.

That, in essence, is what I mean by a biophysics of poetry. It does not replace interpretation—but it explains how interpretation remains possible in the first place. 

*William Benzon, Discursive Competence in ChatGPT, Part 2: Memory for Texts, Working Paper, September 27, 2023, 44 pp., https://www.academia.edu/107318793/Discursive_Competence_in_ChatGPT_Part_2_Memory_for_Texts 
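A note on that segmentation probe: it’s simple enough to script. Here’s a minimal sketch of the general idea, not the exact protocol from the working paper; ask_llm is again a hypothetical stand-in, and the cut points are merely illustrative.

    # Sketch of a boundary-alignment probe: give a model a fragment cut at
    # a syntactic boundary vs. one cut mid-constituent, then ask for the source.

    def ask_llm(prompt: str) -> str:
        """Hypothetical placeholder: route this to your chatbot or API."""
        raise NotImplementedError

    ALIGNED = "Four score and seven years ago our fathers brought forth on this continent a new nation"
    STRADDLING = "score and seven years ago our fathers brought forth on this"

    for fragment in (ALIGNED, STRADDLING):
        answer = ask_llm(f"What well-known text does this come from?\n\n{fragment}")
        print(repr(fragment), "->", answer)

    # The prediction: the aligned fragment reliably triggers recognition,
    # while the straddling one does so much less reliably.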

The sun, the sun!

Large Language Model Reasoning Failures

Peiyang Song, Pengrui Han, Noah Goodman, Large Language Model Reasoning Failures, arXiv:2602.06176v1 [cs.AI] https://doi.org/10.48550/arXiv.2602.06176

Abstract: Large Language Models (LLMs) have exhibited remarkable reasoning capabilities, achieving impressive results across a wide range of tasks. Despite these advances, significant reasoning failures persist, occurring even in seemingly simple scenarios. To systematically understand and address these shortcomings, we present the first comprehensive survey dedicated to reasoning failures in LLMs. We introduce a novel categorization framework that distinguishes reasoning into embodied and non-embodied types, with the latter further subdivided into informal (intuitive) and formal (logical) reasoning. In parallel, we classify reasoning failures along a complementary axis into three types: fundamental failures intrinsic to LLM architectures that broadly affect downstream tasks; application-specific limitations that manifest in particular domains; and robustness issues characterized by inconsistent performance across minor variations. For each reasoning failure, we provide a clear definition, analyze existing studies, explore root causes, and present mitigation strategies. By unifying fragmented research efforts, our survey provides a structured perspective on systemic weaknesses in LLM reasoning, offering valuable insights and guiding future research towards building stronger, more reliable, and robust reasoning capabilities. We additionally release a comprehensive collection of research works on LLM reasoning failures, as a GitHub repository at this https URL, to provide an easy entry point to this area.

Monday, February 9, 2026

MAGA and its myth of a pristine past

The opening paragraphs:

One explicit goal of the second Trump administration, if not its defining mission, is to undo the recent past and rewrite history to fit its own master narrative. By now it’s axiomatic that making America “great again” has never referred to any fixed point in the actual American past; it’s more like a mashup or highlight reel of random images taken from eras before any living American was born. We can see that vision embodied with startling literalness in the propaganda posters recently concocted by the Labor Department, such as the depiction of a whites-only church picnic apparently taking place in Uncanny Valley. If the rise of Donald Trump preceded the advent of AI slop, it may also have conjured it into existence: Never in cultural history have form and content been so perfectly matched.

We already know that Trump and his inner circle — which mostly means Stephen Miller and Russ Vought, the high priests of MAGA ideology — want to erase the gains of the civil rights movement, LGBTQ equality and feminism. But their true goals are far more ambitious, if less easy to define. This is a fake presidency devoted more to creating viral memes than shaping policy, and there’s no coherent or consistent narrative at work. Honestly, that’s less a flaw than a feature: The wholesale rejection of reality is central to the brand.

O'Hehir goes on to discuss an idea that the French philosopher Jean Baudrillard advanced in the wake of 9/11:

He was widely pilloried for arguing, after the 9/11 terrorist attacks, that those traumatic events did not actually reflect an apocalyptic showdown between Western democracy and illiberal Islamic radicalism but rather an important historical transition and “the emergence of a radical antagonism” within Western civilization itself. There was a “deep-seated complicity,” Baudrillard wrote, between the dominant world order defined by global capitalism and liberal democracy and those who would destroy it: “The West, in the position of God, has become suicidal, and declared war on itself.”

Whoaah! But maybe he had a point:

Baudrillard was also correct, to an eerie time-traveler degree, in predicting a “gigantic abreaction” to the terrorist attacks, a system-wide “moral and psychological downturn” that threatened to undermine “the whole ideology of freedom … on which the Western world prided itself.” The liberal-capitalist global order, he suggested, was in danger of being turned into its dark mirror-image, “a police-state globalization, a total control, a terror based on ‘law-and-order’ measures.”

And so:

There is nothing new about the bad conscience or self-destructive urge that Baudrillard identified within Western civilization, or about its deeply rooted conflict between incompatible tendencies we might call liberation and domination. He doesn’t use those words, nor does he ever mention “democracy” and “fascism,” which are imperfect modern manifestations of that conflict, and pin us down too much to present-tense politics.

Understood that way, we have been fighting World War IV for centuries. Stephen Miller and the would-be king he serves are fighting it now, with considerable vigor and ambition. Their imagined victory is completely impossible, profoundly dangerous and breathtaking in scale. In its fullest expression, it envisions undoing nearly all of modern history and returning to some primal, purified state of nature, or rather a meme version thereof: The 1950s and the antebellum South and the American frontier and medieval feudalism and the Neanderthal fireside — almost literally everything, everywhere, all at once.

Admittedly, even the most articulate MAGA ideologues — not that there are many — haven’t gone that far. But that’s where the collective brotastic idiocies of Peter Thiel and Jordan Peterson and Curtis Yarvin and Andrew Tate and Pete Hegseth and whomever else all converge: Somewhere in the recent or distant or mythical past, everything totally ruled and “we” (a term of art, I hasten to add) never felt bad about any of it. Guys were guys and women were hot and there was lots of feasting and stuff.

There's more at the link.

Ice, ice, baby. Ice on the Hudson.

Terminology: Generative Machines, Epistemic Structure of the Cosmos, Intelligence-Complete

I’ve been spending a lot of time with my chatbots, ChatGPT and Claude, and some terminological issues have come up. Nothing particularly deep, just clarification.

Generative machines vs. equilibrium machines

While we talk of computers as machines, it’s obvious that they’re very different beasts. Electric drills, helicopters, sewing machines, hydraulic presses, they’re all (proper) machines. Interaction with and manipulation of matter is central to their purpose. Computers, well, technically, yes, they push electrons around in intricate paths, and electrons are matter, subatomic particles, very small chunks of matter, the smallest possible chunks. What computers are really about, though, is manipulating bits, units of information. And they use “trillions of parts” (a phrase I have from Daniel Dennett) to do so. Thus computers, with their trillions of parts, are very different from machines with only 10s, 100s, or 1000s of parts.

So, what names should we give to differentiate them? “Type 1” and “Type 2” machines would do the job, but they’re not very descriptive. ChatGPT and I settled on “equilibrium machines” for those machines centered on interaction with matter, while “generative machines” seemed appropriate for bit-wranglers. “Generative” seems just right for computers, with its echoes of Chomsky’s generative grammar and the generative pre-trained transformer (GPT) of machine learning. “Equilibrium machines” is perhaps a bit oblique for the other kind of machine, but it’s meant to evoke the equilibrium world of macroscopic devices as opposed to the far-from-equilibrium world of, well, generative machines.

Epistemic Structure of the Cosmos

Back in 2020 I wrote of the metaphysical structure of the cosmos. I said:

There is no a priori reason to believe that world has to be learnable. But if it were not, then we wouldn’t exist, nor would (most?) animals. The existing world, thus, is learnable. The human sensorium and motor system are necessarily adapted to that learnable structure, whatever it is.

I am, at least provisionally, calling that learnable structure the metaphysical structure of the world.

I’ve always been uneasy with “metaphysical” in that role. ChatGPT suggested that “epistemic” would serve better. The epistemic structure of the cosmos, I like that. As for “cosmos,” the dictionary tells me that the word implies order, which I like as well. 

I leave it as an exercise to the reader to demonstrate that the epistemic structure of the cosmos must necessarily be recursive. 

Intelligence-Complete

A generative machine is intelligence-complete if it possesses the full capacities of human intelligence, whatever human intelligence is. By that definition LLMs are not intelligence-complete. As for human intelligence, I like the account given in What Miriam Yevick Saw: The Nature of Intelligence and the Prospects for A.I., A Dialog with Claude 3.5 Sonnet.

Wuthering Heights at New Savanna

As we've got a new Wuthering Heights movie coming to town this weekend I thought I'd publish a post linking to the various posts I've written about that strange, passionate, and perverse story. I first read Wuthering Heights in high school and then again in graduate school, where I wrote a term paper about it. I've pretty much forgotten that paper – though I've probably got it in storage somewhere – but I do remember that I was concerned with tying the two parts of the novel together.

Perhaps I even made a connection with the two-generation plot Shakespeare used in his late Romances, such as The Winter's Tale. In these romances a rift between the members of a male-female pair is opened in the first generation and remains until it is resolved by a (successful) romance in the second generation. In The Winter's Tale Hermione is restored to Leontes through the romance of Perdita, their daughter, and Florizel. The restoration is rather grim in Wuthering Heights, as it happens when Heathcliff throws himself into the grave of the dead Catherine once he sees that Cathy Linton, his Catherine's daughter, has become attached to Hareton Earnshaw, whom she is teaching to read. I've never written at length about the novel since then, but if I were to do so, that's what I'd concentrate on, that and the complicated narrative structure.

While I've not given the novel a full treatment, I have featured it in a number of posts. Here they are.

* * * * *

Toward a Computational Historicism. Part 3: Abstraction at the Time Scale of History, April 23, 2014.

I discuss both The Winter's Tale and Wuthering Heights. I give particular attention to how the novel opens, pointing out how, because he assumes that the people he meets in the house must constitute a "natural" family, Lockwood makes several mistaken guesses about how these people are related. Moreover, both stories are about nature and nurture and the descent of character from one generation to the next.

Godzilla and Wuthering Heights, Kissing Cousins? April 25, 2024. From the post:

Thus both stories have a certain sequence of events that happens relatively late in the overall sequence. As the stories are actually narrated, however, this late sequence is moved to the beginning and the other sequences are adjusted to accommodate. In both cases the sequence that is moved involves a narrator external to the main sequence but known to the characters in it.

Wuthering Heights, Vampires, Handbooks, and Citizen Science, September 15, 2025.

Heathcliff, a vampire? No, but he's in the zone. Also, a link to the best reconstruction of the chronology of events in Wuthering Heights that I've been able to find. It's in The Reader's Guide to Wuthering Heights by Paul Thompson. Here's the link, Timeline.

The King’s Wayward Eye: For Claude Lévi-Strauss, October 25, 2019.

Beyond Lévi-Strauss, this post is mostly about The Winter's Tale in its relation to Pandosto. But it's got two paragraphs about the two-generation plot in Wuthering Heights. That whole post is a good context in which to read those two paragraphs.

Humans and Dogs in Wuthering Heights, September 1, 2021 (reposted from The Valve, 12.30.2009)

I note: "...violence between dogs and humans takes place at important transition points in the novel. I’ve collected five such passages in this post and italicized the dog references within each passage." At that point I'd only made it through 18 out of 24 chapters. I wonder how dogs will appear in the current movie?

Operationalizing two tasks in Gary Marcus’s AGI challenge, June 9, 2022.

Among other things, Marcus claims that "In 2029, AI will not be able to read a novel and reliably answer questions about plot, character, conflicts, motivations, etc. Key will be going beyond the literal text, as Davis and I explain in Rebooting AI." In the last quarter to a third or so of the post I discuss some of the problems the AI would have with Wuthering Heights.

Sunday, February 8, 2026

Three vehicles

Cultural Ranks Revisited: From Stages to Dynamics

This post is by ChatGPT and summarizes a discussion we had about cultural ranks.

* * * * *

Cultural Rank Theory, as David Hays and I originally formulated it in the 1980s and 90s, was never meant to describe sharp stepwise transitions in human cognition or society. Yet it is easy to read it that way. Recent work—prompted by long reflection on complex dynamics and by hands-on experimentation with large language models—has clarified what the theory has always implied but did not yet fully articulate: cultural ranks are emergent stability regimes, not stages, and their spread through a population can take centuries.

Ranks as emergent ontologies, not ladders

Each cultural rank brings with it a characteristic ontology: what kinds of things are taken to exist, how causality is understood, what counts as explanation, and what kinds of agency are intelligible.

Crucially:

  • Ranks emerge locally, unevenly, and experimentally.
  • They are often articulated by elites, artists, or institutions long before they are widely embodied by ordinary adults.
  • Full population “saturation” may lag emergence by generations.

This distinction—between emergence and saturation—is the key refinement.

Rank 3 reconsidered: ego control as a dynamical achievement

Rank 3 has often been glossed as the rank of “reason,” “reflection,” or “self-control.” A more precise formulation is now possible:

Rank 3 is characterized by ego control understood as a regulatory capacity: the ability to hold destabilizing thoughts, emotions, or simulations without immediately authorizing them as action.

This is not a trait one simply “has.” It is a control regime, sustained by feedback, inhibition, and temporal integration.

Two intellectual developments made this clearer than it could have been earlier:

  1. Long engagement with nonlinear dynamics and neural systems (especially through the work of Walter Freeman).
  2. Direct experimentation with LLMs, which behave like semantic systems under load and make phenomena such as phase alignment, boundary sensitivity, and catastrophic misinterpretation visible and testable.

Shakespeare: Rank 3 at the point of emergence

Elizabethan England was not a Rank-3 society. Ego control was coming into view, not yet normative.

This is why Shakespeare matters so much.

In medieval sources such as the Amleth story, later reworked as Hamlet, suspicion and impulse flow directly into action. In Shakespeare’s version, by contrast, the drama turns on how the protagonist treats his own thoughts. Thinking becomes narratively distinct from knowing; imagination becomes dangerous if not regulated.

The same contrast appears even more starkly when Shakespeare rewrites his source for The Winter’s Tale. In Robert Greene’s Pandosto, the king’s jealous thought is immediately authorized as action, and the story ends in tragedy. Shakespeare allows the same thought to arise—but refuses to grant it sovereign authority. Repair becomes possible, though only through a long temporal loop. Romance, in this view, is not sentiment; it is a control regime that buys time when ego control fails too late.

Shakespeare could do this without a narrator only because he was a virtuoso. The novel later routinizes Rank-3 storytelling by institutionalizing the ego function in the figure of the narrator—making irony, distance, and perspective stable rather than exceptional.

Queen Elizabeth I: ego control as public performance

Rank emergence is often visible first in exemplary individuals. Queen Elizabeth I provides a textbook case.

In her 1559 speech refusing marriage, she explicitly separates desire, fear, and political pressure from authorized action. She rebinds attachment from a husband to an abstract polity (“I am already bound unto a husband, which is the kingdom of England”) and performs this reasoning publicly.

This is not repression; it is symbolic substitution and temporal regulation—core Rank-3 operations. Elizabeth does not prove Rank 3 was widespread. She shows it was intelligible.

The American Constitution: Rank 3 before saturation

The same emergence-before-saturation pattern appears institutionally in the late 18th century.

The U.S. Constitution is arguably the first Rank-3 nation-state design. It assumes:

  • separation of person and office,
  • abstract allegiance to rules rather than rulers,
  • and governance by internalized control loops (checks and balances), not virtue alone.

But the population capable of reliably sustaining those assumptions did not yet exist. That would come only in the 19th century with the rise of a bureaucratic middle class—where adults were required, in daily life, to distinguish personal interest from institutional role.

The fact that this distinction remains “shaky” even today is not a refutation of Rank theory; it is exactly what the theory predicts.

Why this refinement matters now

We are currently living through a rank transition under technological pressure. Rank-4 technologies—networks, AI, computation as model rather than tool—are being pulled toward Rank-2 institutional forms (charisma, faction, personalization), while simultaneously stressing Rank-3 control regimes.

Clarifying Rank 3 as a dynamical regulatory achievement, rather than a historical stage or personality trait, allows us to:

  • understand why regression and instability are so common,
  • see why older explanatory models fail,
  • and recognize that emergence does not guarantee saturation.

From exploration to construction

This refinement is new—not because the earlier theory was wrong, but because the conceptual environment has changed. Complex dynamics and runnable semantic systems have made it possible to say mechanistically what could once only be said schematically. 

Cultural Rank Theory now describes not just what emerges, but how it runs, how it fails, and how it is repaired.

That shift—from collecting ideas to building with them—is itself a Rank-3 move.

Bald eagles on the Hudson


Agent D