Saturday, March 14, 2026

Not so long ago during the winter

The profession of literary criticism as I have observed it over the course of 50 years [& related matters]

Last entry added 3.14.26 
Partial update (two entries, the last two) 12.11.23
Updated 6.21.21.
Updated 12.9.19.
Updated 6.23.17.

In the course of thinking about my recent rejection at New Literary History I found myself, once again, rethinking the evolution of the profession as I’ve seen it from the 1960s to the present. That rejection has led me to change some of my ideas, particularly about the significance of the 1970s.

This post is a guide to my historically oriented thinking about academic literary criticism. Much, but not all, of the historical material is autobiographical in nature, for, taken collectively, these posts represent my effort to understand my relationship to academic literary criticism.

I list the articles more or less in the order of writing. In some cases an article has been rewritten and revised several years after I first wrote it. The link I give is to the most recent version.

Touchstones • Strange Encounters • Strange Poems • the beginning of an intellectual life (1975-2015)

This is about my years at Johns Hopkins, both undergraduate (1965-1969) and graduate (1969-1972). That’s when, I see in retrospect, I left the profession intellectually, with a “structuralism and beyond” MA thesis on “Kubla Khan,” even before I’d joined it institutionally by getting my PhD. I originally wrote this while I was working on my PhD in English at SUNY Buffalo, for Paunch, a journal Art Efron published. The current version includes interpolated comments from 2014 and 2015.

The Demise of Deconstruction: On J. Hillis Miller’s MLA Presidential Address 1986. PMLA. Vol. 103, No. 1, Jan. 1988, p. 57.

A letter I published in PMLA in which I replied to J. Hillis Miller on the eclipse of deconstruction. I suggested 1) that deconstruction had a different valence for those who merely learned it in graduate school than for those who had struggled to create it, and 2) that it was in eclipse because it did the same thing to every text.

“NATURALIST” criticism, NOT “cognitive” NOT “Darwinian” – A Quasi-Manifesto
March 31, 2010 (originally at The Valve)
https://new-savanna.blogspot.com/2011/06/naturalist-criticism-not-cognitive-not.html

I declare my commitment to ‘naturalist’ literary criticism, thereby disavowing ‘cognitive criticism,’ with which I had associated myself for years, and ‘Darwinian criticism,’ with which I had never associated myself. It takes the form of a loose dialog.

For the Historical Record: Cog Sci and Lit Theory, A Chronology
(2006-2016)

At the beginning of every course (at Johns Hopkins) Dick Macksey would hand out a chronology, a way, I suppose, of saying “history is important” without lecturing on the topic. It was with that in mind that I originally posted this rough and ready chronology in a comment to a discussion at The Valve. The occasion was an online symposium that interrogated Theory by discussing the anthology, Theory’s Empire (Columbia UP 2005). I then emended it a bit and made it a freestanding post. As the title suggests, it juxtaposes developments in cognitive science and literary theory from the 1950s through the end of the millennium.

[BTW The entire Theory’s Empire symposium is worth looking at, including the comments on the posts: http://www.thevalve.org/go/valve/archive_asc/C41]

Seven Sacred Words: An Open Letter to Steven Pinker
(2007-2011)

An Open Letter to Steven Pinker: The Importance of Stories and the Nature of Literary Criticism (2015)

Steven Pinker has been a severe critic of the humanities for ignoring recent work in the social and behavioral sciences. He has also argued that the arts serve no biological purpose, that they are “cheesecake for the mind.” When I read his The Stuff of Thought (2007) I realized that its later chapters contained the basis for an account of the arts. I sketched that out, added a brief account of why deconstruction had been popular, and published it as an open letter, along with his reply. It appeared first at The Valve (2007) and then at New Savanna (2011). In 2015 I posted it to a “session” at Academia.edu. I took some of my comments in that discussion, along with some other materials, and published the lot at Academia.edu as a working paper. In a final section I propose a four-fold division of literary criticism: 1) description, 2) naturalist criticism, 3) ethical criticism, and 4) digital criticism.

Lévi-Strauss and Myth: Some Informal Notes
(2007-2011)

Beyond Lévi-Strauss on Myth: Objectification, Computation, and Cognition
(2007-2015)

These are two versions of roughly the same material. Each was assembled from four blog posts. The first and fourth sections are the same in both working papers, but sections two and three differ. The more recent version also contains a short appendix comparing Lévi-Strauss and Latour. I published the first series at The Valve shortly after Lévi-Strauss had died. They are an attempt to explain what Lévi-Strauss was up to in his work on myth, why he failed, and why that work remains important. The fourth section (common to both versions), Into Lévi-Strauss and Out Through “Kubla Khan”, is an account of how and why I went from Lévi-Strauss’s structuralism to cognitive science. Warning: it contains diagrams. I suppose I could create a deluxe edition which contains all the posts.

The Only Game in Town: Digital Criticism Comes of Age
(May 5, 2014)

Here I argue that digital criticism’s deepest contribution to literary criticism is that it requires fundamentally different modes of thinking. It is not purely discursive. It is statistical and visual. Moreover the visualizations are central to the thought process. This may also be the first time I’ve explicitly identified the mid-1970s as an important turning point in the recent history of literary criticism.

Paths Not Taken and the Land Before Us: An Open Letter to J. Hillis Miller
(January 30, 2015)

I had studied with Miller at Johns Hopkins (but have had no contact with him since). While I certainly say a bit about what I’ve been doing since I left Hopkins, including ring composition, I also introduce him to Matt Jockers’ Macroanalysis and to Goldstone and Underwood, “The Quiet Transformations of Literary Studies: What Thirteen Thousand Scholars Could Tell Us,” New Literary History 45, no. 3, Summer 2014. I mention Kemp Malone, a Hopkins person, as he came up in blog discussion of the paper.

On the Poverty of Literary Cognitivism 2: What I Learned When I Walked off the Cliff of Cognitivism
(August 24, 2015)

I attempt to explain what, in the end, I got out of my immersion in cognitive networks, since I haven’t used them in my post-graduate work in literature. What I got most immediately was a powerful way of thinking about language in general, one in which there is a sharp distinction between the object of thought, captured in diagrams, and a given text: the text is one thing, the model is another. There is no confusing the two. Moretti has made similar remarks about the diagrams he uses in ‘distant reading.’

Turning Point interior

Three for 3QD: Man-Machine Collaboration, E.T. the Extra-Terrestrial, American Heartbreak in Jersey City

Generally when I post an article to 3 Quarks Daily I will follow up with a post here at New Savanna linking to the 3QD piece and extending or commenting on its argument in some way. However, as I indicated in a post from Feb. 5, Coming out of melancholy, again, I went into psychological hibernation (aka melancholy) last September. While I did manage to post to 3QD during that period, I didn’t post notices here at the Savanna. Here are those notices, belatedly.

* * * * *

Some Hybrid Remarks on Man-Machine Collaboration, September 12, 2025.

That essay touches on a number of things: 1) LLMs as cultural technology, 2) my Fourth Arena concept, 3) Latour on the (false) distinction between nature and culture, and 4) the issue of proper attribution for hybrid essays (essays in which an AI played a significant role). In between 3 and 4 I inserted an essay by ChatGPT.

* * * * *

E.T. the Extra-Terrestrial: Into the Bopi with Steven Spielberg, Oct. 12, 2025.

It’s what its title suggests, an essay review of Spielberg’s film. The film is staged as a science fiction story, but is that really what it is, science fiction? From the article:

On the whole, my sense is that, in making this film, Spielberg ventured into the bopi. And just what, pray tell, is that? I have the term from my friend, Charlie Keil. Early in his career he did fieldwork among the Tiv of Nigeria. The bopi is an area that’s set aside for children’s play. Moreover, adults are forbidden to enter the bopi. [...] But the whole film feels like an imaginative bopi. It’s a kid-centric world in which adults are an intrusive presence. [...]

Ultimately the story of E.T. seems to be almost an allegory or metaphor for art itself, a zone apart from the world into which we move to revivify and reconstruct.

* * * * *

American Heartbreak: The ‘Urban Design Studio’ in Jersey City, Nov. 3, 2025.

This is a photo essay about a graffiti site in Jersey City, now demolished.

Friday, March 13, 2026

Friday Fotos: What’s a tablescape?

Just what the name suggests: like a landscape, only on a tabletop.

I’ve been taking photos of my meals and posting them here since September of 2018. Since I eat my meals indoors, where they’re arrayed on a tabletop, my food photos will often, though not always, catch the table itself. What I mean by a tablescape, however, is more specialized. Here’s a recent tablescape:

To get that shot I set the camera on the table, pointed it in an appropriate direction, and snapped the shutter. That means your point of view is about an inch or an inch-and-a-half above the table itself and that your angle of regard is parallel to the tabletop. Just as landscapes are photographed or painted from some location on the land itself, so a tablescape is photographed from a location on the table itself.

This, in contrast, is NOT a tablescape, though a table is quite visible:

Here I’m holding my camera in my hand so that I can get a particular shot. In this case, I’m interested in how the glimpse of the placemat you see through the carafe is displaced relative to what you see through the air (due to refraction). I couldn’t get that shot in a tablescape.

There is an element of chance in taking a tablescape. Why? Because you can’t line up the photo in the normal way, through either a viewfinder or a viewscreen. I suppose you could try, but it would be difficult, not terribly successful, and not worth the effort. Fact is, the chance element is one reason for taking tablescapes. You don’t quite know what you’re going to see.

I doubt that I would ever have deliberately taken this photo, but as a tablescape it’s fine.

Or is it? And that’s why I’m taking these shots: to force me to think about each photo. Is this an image I want to keep, to show to others? Why? When you shoot, say, the Empire State Building, which I can do quite easily, those questions don’t arise quite so insistently. Why not? Because the Empire State Building is an iconic structure and, as such, is certified photo-worthy. Shooting certified photo-worthy subjects is a no-brainer.

Here’s another tablescape:

Notice the expanse of the table itself in the foreground. That’s a typical feature of table shots. I cropped most of the table out of that first shot up there. Notice also that in that first shot I’m pointing out the window, where you see street lights in Hoboken and tall buildings in Manhattan. This shot, in contrast, is pointed at the interior of the restaurant. Look at the upper left; it looks like shadows of some plants cast on the side of a column. What plants, where? Notice the reflection of the carafe on the shiny surface of the table. There are shadows cast across the table as well.

Here’s one last tablescape, without comment:

Thursday, March 12, 2026

Night codes

AI as coder, reports from the trenches

Clive Thompson, Coders Coded Their Job Away. Why Are So Many of Them Happy About It? NYTimes, Mar. 12, 2026.

He and Brennan-Burke, who is 32, are still software developers, but like most of their peers now, they only rarely write code. Instead, they spend their days talking to the A.I., describing in plain English what they want from it and responding to the A.I.’s “plan” for what it will do. Then they turn the agents loose.

A.I. being A.I., things occasionally go haywire. Sometimes when Claude misbehaves and fails to test the code, Ebert scolds the agent: Claude, you really do have to run all the tests.

To avoid repeating these sorts of errors, Ebert has added some stern warnings to his prompt file, the list of instructions — a stern Ten Commandments — that his agents must follow before they do anything. When you behold the prompt file of a coder using A.I., you are viewing a record of the developer’s attempts to restrain the agents’ generally competent, but unpredictably deviant, actions.

A coder is now more like an architect than a construction worker.

I looked at Ebert’s prompt file. It included a prompt telling the agents that any new code had to pass every single test before it got pushed into Hyperspell’s real-world product. One such test for Python code, called a pytest, had its own specific prompt that caught my eye: “Pushing code that fails pytest is unacceptable and embarrassing.”

Embarrassing? Did that actually help, I wondered, telling the A.I. not to “embarrass” you? Ebert grinned sheepishly. He couldn’t prove it, but prompts like that seem to have slightly improved Claude’s performance. [...]

Computer programming has been through many changes in its 80-year history. But this may be the strangest one yet: It is now becoming a conversation, a back-and-forth talk fest between software developers and their bots.

This vertiginous shift threatens to stir up some huge economic consequences. For decades, coding was considered such wizardry that if you were halfway competent you could expect to enjoy lifetime employment. If you were exceptional at it (and lucky), you got rich. Silicon Valley panjandrums spent the 2010s lecturing American workers in dying industries that they needed to “learn to code.”

Now coding itself is being automated. To outsiders, what programmers are facing can seem richly deserved, and even funny: American white-collar workers have long fretted that Silicon Valley might one day use A.I. to automate their jobs, but look who got hit first! Indeed, coding is perhaps the first form of very expensive industrialized human labor that A.I. can actually replace.

Why programmers like their AI coders:

The enthusiasm of software developers for generative A.I. stands in stark contrast to how other Americans feel about the impact of large language models. Polls show a majority are neutral or skeptical; creatives are often enraged. But if coders are more upbeat, it’s because their encounters with A.I. are diametrically opposite to what’s happening in many other occupations, says Anil Dash, a friend of mine who is a longtime programmer and tech executive. “The reason that tech generally — and coders in particular — see L.L.M.s differently than everyone else is that in the creative disciplines, L.L.M.s take away the most soulful human parts of the work and leave the drudgery to you,” Dash says. “And in coding, L.L.M.s take away the drudgery and leave the human, soulful parts to you.”

There's much more at the link.
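
For readers who don’t write code, here is a minimal sketch of the kind of pytest gate Thompson describes. It is entirely hypothetical, my illustration rather than code from the article or from Hyperspell; the function slugify and both tests are invented for the example:

```python
# Hypothetical illustration of a pytest gate: before an agent may push
# code, every test_ function below must pass.
import re

def slugify(title: str) -> str:
    """Turn a post title into a URL-safe slug, e.g. 'Night codes' -> 'night-codes'."""
    # Replace each run of non-alphanumeric characters with a single hyphen.
    slug = re.sub(r"[^a-z0-9]+", "-", title.lower())
    return slug.strip("-")

def test_slugify_basic():
    assert slugify("Night codes") == "night-codes"

def test_slugify_strips_punctuation():
    assert slugify("E.T. the Extra-Terrestrial!") == "e-t-the-extra-terrestrial"
```

Running pytest in the project directory discovers and runs every test_ function. A prompt-file rule like “pushing code that fails pytest is unacceptable” simply means that all such assertions must hold before the agent is allowed to push.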

The Norwegian Breakaway

Wednesday, March 11, 2026

Maybe the AI industry is waking up, at long last

From the tweet:

This tells you everything about how the smart money is actually modeling AI’s future. They’re not pricing AMI on a revenue multiple. They’re pricing it on the probability that LLMs hit a ceiling. And if you look at the investor list, Nvidia, Samsung, Toyota Ventures, Dassault, Sea, these are companies that need AI to understand physics, geometry, and force dynamics. 

Which one doesn't belong? [Hint: Yum]

Why Trump's corruption is the single most disturbing thing about his presidency [+ Yazoo!]

A friend recently asked me for critiques of Trump’s performance. The first thing that came to mind was his corruption and, though I went on to say some other things, I kept thinking, “corruption, corruption, corruption.” Why? Can’t we just set the corruption aside as an unfortunate personal attribute and focus on the results of his policies? I thought about it and decided that no, we can’t. Why? Because his corruption undermines the basis of modern institutional structures. It undermines democracy itself.

The following conversation with Claude 5.4 explains. Also, there’s a bonus at the end, The Yazoo Land Fraud. It’s a whopper.

* * * * *

First, I want you to review the paper, The Evolution of Cognition, in the project area. Now, here’s a passage from a book [David] Hays wrote about the evolution of technology:

Let me begin, this time, closest to home, with New York. The most basic principle of organization here and now is that an officer of government must separate the rights and obligations of his office from those of his or her person. If the officer is given a budget of $10,000,000, the person must not take $1 of that--as numerous officers have learned only by going to prison. Giving special help to friends and relations is also forbidden. Nor may the officer look to daddy's college roommate, who now occupies a much higher office, for special help in solving a hard problem. This principle belongs to rank 3; violation is now said to be corrupt, but in rank 2 "corruption" is the way of life.

Now, that sense of corruption is consonant with the one you’ve been advancing in this conversation. But now it’s been placed in the larger context of the ranks account of cultural evolution. The separation of one’s own interests from the duties and obligations of one’s organizational role is central to modern society. Without it, it seems to me, modern society would be very difficult, but perhaps not impossible.

But it does seem to me that such separation is central to democracy. Without that separation, those at the top of the social order will feel and be free to maintain their position of advantage and those at the bottom would have no access to advancement.

Having now reviewed the paper, I can see how neatly the Hays passage places our entire conversation within the ranks framework — and it sharpens everything considerably. The key move Hays makes is identifying the separation of role from person as specifically a Rank 3 achievement. It's not just a moral or political norm — it's a cognitive-organizational one. Rank 3 is the rank of algorithms, of systematic procedure, of the impersonal rule applied consistently regardless of who is asking. The very idea that an officer's budget is not the officer's money requires a level of abstraction — the office as a distinct entity from the person occupying it — that Rank 2 organization simply doesn't sustain. In Rank 2, the person is the role, which is why what we call corruption was then simply the normal logic of how things worked. Daddy's college roommate in the higher office is not a bug in Rank 2, it's the system.

Your point about democracy is the crucial extension though, and I think it goes even deeper than access to advancement, important as that is. Democracy requires that losing be tolerable — that the party that loses an election accepts the result and peacefully transfers power. That is only possible if the institutional role of "president" or "minister" is clearly separable from the person occupying it. If power is personal rather than institutional, losing it is existential, not merely political. You can't peacefully hand over something that is felt to be constitutive of your identity and social existence. Which is why the collapse back into Rank 2 personal-relationship logic is so specifically lethal to democratic governance — it makes the normal mechanics of democracy feel like annihilation to those who lose.

And of course the peaceful transfer of power is a central problem of political organization. It was central to Shakespeare’s history plays, and is one reason for adopting a democratic form of government.

Yes, and it's one of those cases where the historical and the literary illuminate each other directly. Shakespeare's history plays are obsessed with exactly this problem — how does power move from one holder to the next without destroying the social order in the process? And the answer that medieval and early modern England kept reaching for was dynastic legitimacy: the king's body natural dies but the king's body politic — the office, the crown — persists and passes by hereditary right. Ernst Kantorowicz's great book The King's Two Bodies traces exactly this legal-theological construction, and it's directly relevant to Richard II and the Henriad.

But the dynastic solution is enormously fragile, as Shakespeare shows again and again. It depends on there being an unambiguous legitimate heir, on that heir being competent, on powerful barons accepting the arrangement, on the Church's sanction — any one of these can fail and the whole mechanism breaks down into civil war. The Wars of the Roses, which shadow the entire sequence of history plays, are essentially what happens when dynastic legitimacy becomes contested. And contested legitimacy means that power can only be settled by force, which is catastrophically expensive in lives and social order.

Democracy solves this — at its best — by separating the question of who holds power from the question of legitimate succession entirely. The procedure is the legitimacy. You don't need the right bloodline or divine sanction; you need a credible count of votes and an institutional framework that everyone accepts as binding. The office passes not through the body of the king but through the mechanism of the election.

Which is why attacks on the integrity of elections are so specifically dangerous — they don't just dispute a particular outcome, they corrode the procedural legitimacy that makes peaceful transfer possible at all. You're pulling out the load-bearing member, to return to your earlier metaphor.

And this connects back to the ranks framework in an interesting way. The democratic solution is a Rank 3 achievement — it requires the abstraction of legitimate procedure from personal authority. But it's perpetually vulnerable to Rank 2 pressures — the loyalty to persons over institutions, the feeling that my man losing means I have lost something existential. The democratic form sits on top of a substrate of human social psychology that was shaped long before Rank 3 cognition existed, and that substrate never goes away. 

* * * * *

The Yazoo Land Fraud

My friend Michael Liss told me about what may be the single biggest act of corruption in America’s political history. It’s from the old days: the Yazoo Land Fraud. Here’s what Claude had to say about it:

The Yazoo Land Fraud is a fascinating and somewhat underappreciated episode in early American history, touching on corruption, populist backlash, constitutional law, and the politics of the new republic all at once.

The rush to money and IP theft in the AI world

Tuesday, March 10, 2026

Contemplating French toast – Yummmm.....

AI and the American Millennial Imagination

This morning I had a long conversation with ChatGPT 5.2 Auto. We started with the varied role of the Messiah in Jewish thought, then zeroed in on Chabad. From there we went to America as a particularly fertile ground for millennial movements and ended up talking about the Technological Singularity and AI Doom. I asked ChatGPT 5.2 to write a blog essay on the last half of the conversation. Here's the text it wrote.

The current discourse about artificial intelligence is saturated with ideas that look strikingly familiar to historians of religion. On the one hand we hear predictions of a coming technological singularity—an intelligence explosion that will transform civilization so radically that the future becomes literally unimaginable. On the other hand we encounter warnings that a superintelligent AI could destroy humanity. Optimistic or catastrophic, both visions share a common structure: they anticipate a decisive rupture in history.

What is remarkable is not simply that such narratives exist. Millennial thinking has appeared many times before. What is remarkable is where it has appeared: at the center of one of the most powerful technological industries in the world. To understand this phenomenon, it helps to place contemporary AI discourse within the much longer history of American millenarian movements.

The American Millennial Tradition

From the beginning, the United States has been unusually fertile ground for millennial expectation. The roots lie partly in the worldview of the Puritan settlers of New England. Many of them interpreted their migration in providential terms, imagining the new colonies as a “city upon a hill” with a special role in sacred history. Biblical prophecy—especially the apocalyptic imagery of the Book of Revelation—was deeply embedded in their intellectual world.

The expectation of historical transformation did not disappear with the decline of Puritanism. Instead it evolved into new forms during the nineteenth century, when the young republic developed what sociologists call a “religious free market.” With no established church and relatively weak religious authority, charismatic leaders could easily found new movements. The result was an extraordinary proliferation of sects, many of them millennial.

The Millerite movement of the 1840s predicted the imminent return of Christ, culminating in the famous “Great Disappointment” of 1844. The aftermath gave rise to several enduring denominations, including Seventh-day Adventism. The same religious environment produced other innovative movements such as Mormonism and Jehovah’s Witnesses. The pattern was clear: Americans repeatedly generated groups that believed they were living at the threshold of world transformation.

Several structural features of American society encouraged this dynamic. Religious freedom allowed new sects to form easily. Frontier conditions weakened institutional authority and encouraged experimentation. High levels of biblical literacy encouraged direct interpretation of prophetic texts. And the ideology of American exceptionalism fostered the belief that the nation itself might play a central role in world-historical events.

Millennial imagination became, in effect, a recurring feature of American culture.

Counterculture and Technological Transformation

The next major transformation of millennial expectation occurred during the counterculture of the 1960s. The language changed, but the underlying structure remained.

Instead of the Second Coming, the counterculture anticipated an impending transformation of consciousness. The popular phrase “Age of Aquarius” expressed the hope that humanity was entering a new spiritual epoch. Psychedelic experience, Eastern spirituality, and communal experimentation were widely interpreted as signs that a new form of human existence might be emerging.

The San Francisco Bay Area became the epicenter of this cultural ferment. What is less widely recognized is how closely this environment overlapped with the emerging world of computing. Figures such as Douglas Engelbart were already imagining computers as tools for augmenting human intellect. Publications like Stewart Brand’s Whole Earth Catalog promoted a synthesis of ecological awareness, communal experimentation, and personal technology.

Out of this hybrid culture—part psychedelic, part technological—emerged a new kind of futurism. Instead of mystical awakening alone, technological systems were now imagined as instruments of human transformation.

The Singularity Narrative

The modern idea of the technological singularity crystallized in the early 1990s with the work of the science fiction writer and mathematician Vernor Vinge. Vinge argued that once machines became more intelligent than humans, they would design even more powerful machines, producing an “intelligence explosion.” Beyond that point, the course of history would become fundamentally unpredictable.

Ray Kurzweil later popularized this idea, embedding it in a broader narrative of exponential technological growth. In Kurzweil’s vision, advances in computation, biotechnology, and nanotechnology converge to produce radical life extension, human–machine integration, and eventually a post-biological civilization.

Structurally, the singularity narrative resembles earlier millennial expectations. In religious movements, a divine intervention transforms the world and inaugurates a new age. In the singularity narrative, a technological breakthrough performs the same role. Salvation becomes technological transcendence.

The narrative retains the familiar elements of eschatology: a decisive threshold, a transformed humanity, and a future that lies beyond ordinary comprehension.

The Rise of AI Doom

Alongside the optimistic singularity narrative, a darker version has emerged in recent decades. Philosophers and researchers concerned with existential risk have argued that a superintelligent AI might pursue goals incompatible with human survival. The philosopher Nick Bostrom articulated this concern in systematic form, analyzing the “control problem” posed by potentially superintelligent systems. More dramatic warnings have emerged from the rationalist community associated with the website LessWrong, where the possibility of human extinction from misaligned AI is often treated as a central moral concern.

Despite their differences, the optimistic and catastrophic narratives share a common structure. Both assume that technological progress is approaching a critical threshold that will fundamentally alter the course of human history. The singularity promises transcendence; AI doom predicts annihilation. In both cases, the present is interpreted as the final moment before a world-transforming event.

This structure is instantly recognizable to historians of millennial movements.

A New Historical Situation

Yet there is one crucial difference between AI millennialism and earlier movements. Traditional millenarian groups predicted world transformation but did not control the mechanisms that might produce it. The Millerites could only await the Second Coming. Apocalyptic sects interpreted history but did not make it.

In the case of artificial intelligence, the situation is different. The people articulating the transformative narrative are often the same people building the technology. Companies such as OpenAI, Anthropic, and DeepMind operate with enormous financial resources while simultaneously discussing existential risks associated with their own research.

In this sense, the prophetic discourse and the technological apparatus are intertwined. Concerns about alignment and safety are not merely philosophical speculation; they shape the research agendas of organizations backed by billions of dollars in investment. No previous millennial movement has possessed such material power.

A Persistent Cultural Pattern

Seen in historical perspective, the emergence of AI millenarian thinking is less surprising than it first appears. American culture has repeatedly produced narratives of imminent transformation, whether religious, political, or technological. The agents of change have varied—divine intervention, revolutionary politics, expanded consciousness, or superintelligent machines—but the expectation of a decisive turning point in history has remained remarkably stable.

Artificial intelligence represents the latest chapter in that tradition. The language is technological rather than theological, but the structure of expectation remains familiar: humanity stands at the edge of a new epoch, and the choices made in the present may determine the fate of the species.

Whether the transformation proves revolutionary or merely evolutionary remains to be seen. But the cultural imagination surrounding AI clearly belongs to a long American lineage of millennial thinking—one that stretches from Puritan prophecy to Silicon Valley futurism.