NEW SAVANNA
“You won't get a wild heroic ride to heaven on pretty little sounds.” – George Ives
Saturday, March 14, 2026
The profession of literary criticism as I have observed it over the course of 50 years [& related matters]
Updated 6.23.17.
In the course of thinking about my recent rejection at New Literary History I found myself, once again, rethinking the evolution of the profession as I’ve seen it from the 1960s to the present. In fact, that rejection has led me, once again, to rethink that history and to change some of my ideas, particularly about the significance of the 1970s.
“NATURALIST” criticism, NOT “cognitive” NOT “Darwinian” – A Quasi-Manifesto
March 31, 2010 (originally at The Valve)
https://new-savanna.blogspot.com/2011/06/naturalist-criticism-not-cognitive-not.html
I declare my commitment to ‘naturalist’ literary criticism, thereby disavowing ‘cognitive criticism,’ with which I had associated myself for years, and ‘Darwinian criticism,’ with which I had never associated myself. The piece takes the form of a loose dialogue.
Lévi-Strauss and Myth: Some Informal Notes
(2007-2011)
Three for 3QD: Man-Machine Collaboration, E.T. the Extra-Terrestrial, American Heartbreak in Jersey City
Generally when I post an article to 3 Quarks Daily I will follow up with a post here at New Savanna linking to the 3QD piece and extending or commenting on that argument in some way. However, as I indicated in a Feb. 5 post, Coming out of melancholy, again, I went into psychological hibernation (aka melancholy) last September. While I did manage to post to 3QD during that period, I didn’t post notices here at the Savanna. Here are those notices, belatedly.
* * * * *
Some Hybrid Remarks on Man-Machine Collaboration, September 12, 2025.
That essay touches on a number of things: 1) LLMs as cultural technology, 2) my Fourth Arena concept, 3) Latour on the (false) distinction between nature and culture, and 4) the issue of proper attribution for hybrid essays (essays in which an AI played a significant role). Between 3 and 4 I inserted an essay by ChatGPT.
* * * * *
E.T. the Extra-Terrestrial: Into the Bopi with Steven Spielberg, Oct. 12, 2025.
It’s what its title suggests, an essay review of Spielberg’s film. It’s staged as a science fiction story, but is that really what it is, science fiction? From the article:
On the whole, my sense is that, in making this film, Spielberg ventured into the bopi. And just what, pray tell, is that? I have the term from my friend, Charlie Keil. Early in his career he did fieldwork among the Tiv of Nigeria. The bopi is an area that’s set aside for children’s play. Moreover, adults are forbidden to enter the bopi. [...] But the whole film feels like an imaginative bopi. It’s a kid-centric world in which adults are an intrusive presence. [...]
Ultimately the story of E.T. seems to be almost an allegory or metaphor for art itself, a zone apart from the world into which we move to revivify and reconstruct.
* * * * *
American Heartbreak: The ‘Urban Design Studio’ in Jersey City, Nov. 3, 2025.
This is a photo essay about a graffiti site in Jersey City, now demolished.
Friday, March 13, 2026
Friday Fotos: What’s a tablescape?
Just what the name suggests, like a landscape only on a table top.
I’ve been taking photos of my meals and posting them here since September of 2018. Since I eat my meals indoors where they’re arrayed on the top of a table, my food photos will often, though not necessarily always, catch the table itself. What I mean by a tablescape, however, is more specialized. Here’s a recent tablescape:
To get that shot I set the camera on the table, pointed it in an appropriate direction, and snapped the shutter. That means your point of view is about an inch or an inch-and-a-half above the table itself and that your angle of regard is parallel to the tabletop. Just as landscapes are photographed or painted from some location on the land itself, so a tablescape is photographed from a location on the table itself.
This, in contrast, is NOT a tablescape, though a table is quite visible:
But I’m holding my camera in my hand so that I can get a particular shot. In this case, I’m interested in how the glimpse of the placemat you see through the carafe is displaced relative to what you see through the air (due to optics). Couldn’t get that shot in a tablescape.
There is an element of chance in taking a tablescape. Why? Because you can’t line up the photo in the normal way, either through a viewfinder or viewscreen. I suppose you could try, but it would be difficult, not terribly successful and not worth the effort. Fact is, the chance element is one reason for taking tablescapes. You don’t quite know what you’re going to see.
I doubt that I would ever have deliberately taken this photo, but as a tablescape it’s fine.
Or is it? And that’s why I’m taking these shots, to force me to think about each photo. Is this an image I want to keep, to show to others? And why? When you shoot, say, the Empire State Building, which I can do quite easily, those questions don’t arise quite so insistently. Why not? Because the Empire State Building is an iconic structure and, as such, is certified photo-worthy. Shooting certified photo-worthy subjects is a no-brainer.
Here’s another tablescape:
Notice the expanse of the table itself in the foreground. That’s a typical feature of table shots. I cropped most of the table out in that first shot up there. Notice also that in the first shot I’m pointing out the window, where you see street lights in Hoboken and tall buildings in Manhattan. This shot, in contrast, is pointed at the interior of the restaurant. Look at the upper left; it looks like shadows of some plants cast on the side of a column. What plants, where? Notice the reflection of the carafe on the shiny surface of the table. There are shadows cast across the table as well.
Here’s one last tablescape, without comment:
Thursday, March 12, 2026
AI as coder, reports from the trenches
Clive Thompson, Coders Coded Their Job Away. Why Are So Many of Them Happy About It? NYTimes, Mar. 12, 2026.
He and Brennan-Burke, who is 32, are still software developers, but like most of their peers now, they only rarely write code. Instead, they spend their days talking to the A.I., describing in plain English what they want from it and responding to the A.I.’s “plan” for what it will do. Then they turn the agents loose.
A.I. being A.I., things occasionally go haywire. Sometimes when Claude misbehaves and fails to test the code, Ebert scolds the agent: Claude, you really do have to run all the tests.
To avoid repeating these sorts of errors, Ebert has added some stern warnings to his prompt file, the list of instructions — a stern Ten Commandments — that his agents must follow before they do anything. When you behold the prompt file of a coder using A.I., you are viewing a record of the developer’s attempts to restrain the agents’ generally competent, but unpredictably deviant, actions.
A coder is now more like an architect than a construction worker.
I looked at Ebert’s prompt file. It included a prompt telling the agents that any new code had to pass every single test before it got pushed into Hyperspell’s real-world product. One such test for Python code, called a pytest, had its own specific prompt that caught my eye: “Pushing code that fails pytest is unacceptable and embarrassing.”
Embarrassing? Did that actually help, I wondered, telling the A.I. not to “embarrass” you? Ebert grinned sheepishly. He couldn’t prove it, but prompts like that seem to have slightly improved Claude’s performance. [...]
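Thompson doesn’t reproduce Ebert’s prompt file in full, but a file of the kind he describes might look something like this. This is a hypothetical sketch: the file name and all rules except the quoted pytest line are my invention, not Ebert’s actual instructions.

```text
# Agent instructions (hypothetical example of a coder's prompt file)

1. Before pushing any code, run the full test suite with pytest.
2. Pushing code that fails pytest is unacceptable and embarrassing.
3. Never skip, disable, or delete a failing test just to make the suite pass.
4. If a test fails, fix the code, not the test, unless the test itself is wrong.
5. Ask before modifying files outside the directory you were assigned.
```

As the article notes, such a file reads as a running record of the agent’s past misbehavior: each rule marks an error someone is trying not to repeat.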
Computer programming has been through many changes in its 80-year history. But this may be the strangest one yet: It is now becoming a conversation, a back-and-forth talk fest between software developers and their bots.
This vertiginous shift threatens to stir up some huge economic consequences. For decades, coding was considered such wizardry that if you were halfway competent you could expect to enjoy lifetime employment. If you were exceptional at it (and lucky), you got rich. Silicon Valley panjandrums spent the 2010s lecturing American workers in dying industries that they needed to “learn to code.”
Now coding itself is being automated. To outsiders, what programmers are facing can seem richly deserved, and even funny: American white-collar workers have long fretted that Silicon Valley might one day use A.I. to automate their jobs, but look who got hit first! Indeed, coding is perhaps the first form of very expensive industrialized human labor that A.I. can actually replace.
Why programmers like their AI coders:
The enthusiasm of software developers for generative A.I. stands in stark contrast to how other Americans feel about the impact of large language models. Polls show a majority are neutral or skeptical; creatives are often enraged. But if coders are more upbeat, it’s because their encounters with A.I. are diametrically opposite to what’s happening in many other occupations, says Anil Dash, a friend of mine who is a longtime programmer and tech executive. “The reason that tech generally — and coders in particular — see L.L.M.s differently than everyone else is that in the creative disciplines, L.L.M.s take away the most soulful human parts of the work and leave the drudgery to you,” Dash says. “And in coding, L.L.M.s take away the drudgery and leave the human, soulful parts to you.”
There's much more at the link.
Wednesday, March 11, 2026
Maybe the AI industry is waking up, at long last
Two Turing-class AI researchers just raised $2B in three weeks to bet against every LLM company on the planet.
— Aakash Gupta (@aakashgupta) March 11, 2026
Fei-Fei Li closed $1B for World Labs on February 18. LeCun closed $1.03B for AMI Labs today. Both building world models. Both arguing that the entire generative AI… https://t.co/pFHuVaY0XL
From the tweet:
This tells you everything about how the smart money is actually modeling AI’s future. They’re not pricing AMI on a revenue multiple. They’re pricing it on the probability that LLMs hit a ceiling. And if you look at the investor list, Nvidia, Samsung, Toyota Ventures, Dassault, Sea, these are companies that need AI to understand physics, geometry, and force dynamics.
Why Trump's corruption is the single most disturbing thing about his presidency [+ Yazoo!]
A friend recently asked me for critiques of Trump’s performance. The first thing that came to mind was his corruption and, though I went on to say some other things, I kept thinking, “corruption, corruption, corruption.” Why? Can’t we just set the corruption aside as an unfortunate personal attribute and focus on the results of his policies? I thought about it and decided that, no, we can’t. Why? Because his corruption undermines the basis of modern institutional structures. It undermines democracy itself.
The following conversation with Claude 5.4 explains. Also, there’s a bonus at the end, The Yazoo Land Fraud. It’s a whopper.
* * * * *
First, I want you to review the paper, The Evolution of Cognition, in the project area. Now, here’s a passage from a book [David] Hays wrote about the evolution of technology:
Let me begin, this time, closest to home, with New York. The most basic principle of organization here and now is that an officer of government must separate the rights and obligations of his office from those of his or her person. If the officer is given a budget of $10,000,000, the person must not take $1 of that--as numerous officers have learned only by going to prison. Giving special help to friends and relations is also forbidden. Nor may the officer look to daddy's college roommate, who now occupies a much higher office, for special help in solving a hard problem. This principle belongs to rank 3; violation is now said to be corrupt, but in rank 2 "corruption" is the way of life.
Now, that sense of corruption is consonant with the one you’ve been advancing in this conversation. But now it’s been placed in the larger context of the ranks account of cultural evolution. The separation of one’s own interests from the duties and obligations of one’s organizational role is central to modern society. Without it, it seems to me, modern society would be very difficult, but perhaps not impossible.
But it does seem to me that such separation is central to democracy. Without that separation, those at the top of the social order will feel and be free to maintain their position of advantage and those at the bottom would have no access to advancement.
Having now reviewed the paper, I can see how neatly the Hays passage places our entire conversation within the ranks framework — and it sharpens everything considerably. The key move Hays makes is identifying the separation of role from person as specifically a Rank 3 achievement. It's not just a moral or political norm — it's a cognitive-organizational one. Rank 3 is the rank of algorithms, of systematic procedure, of the impersonal rule applied consistently regardless of who is asking. The very idea that an officer's budget is not the officer's money requires a level of abstraction — the office as a distinct entity from the person occupying it — that Rank 2 organization simply doesn't sustain. In Rank 2, the person is the role, which is why what we call corruption was then simply the normal logic of how things worked. Daddy's college roommate in the higher office is not a bug in Rank 2, it's the system.
Your point about democracy is the crucial extension though, and I think it goes even deeper than access to advancement, important as that is. Democracy requires that losing be tolerable — that the party that loses an election accepts the result and peacefully transfers power. That is only possible if the institutional role of "president" or "minister" is clearly separable from the person occupying it. If power is personal rather than institutional, losing it is existential, not merely political. You can't peacefully hand over something that is felt to be constitutive of your identity and social existence. Which is why the collapse back into Rank 2 personal-relationship logic is so specifically lethal to democratic governance — it makes the normal mechanics of democracy feel like annihilation to those who lose.
And of course the peaceful transfer of power is a central problem of political organization. It was central to Shakespeare’s history plays, and is one reason for adopting a democratic form of government.
Yes, and it's one of those cases where the historical and the literary illuminate each other directly. Shakespeare's history plays are obsessed with exactly this problem — how does power move from one holder to the next without destroying the social order in the process? And the answer that medieval and early modern England kept reaching for was dynastic legitimacy: the king's body natural dies but the king's body politic — the office, the crown — persists and passes by hereditary right. Ernst Kantorowicz's great book The King's Two Bodies traces exactly this legal-theological construction, and it's directly relevant to Richard II and the Henriad.
But the dynastic solution is enormously fragile, as Shakespeare shows again and again. It depends on there being an unambiguous legitimate heir, on that heir being competent, on powerful barons accepting the arrangement, on the Church's sanction — any one of these can fail and the whole mechanism breaks down into civil war. The Wars of the Roses, which shadow the entire sequence of history plays, are essentially what happens when dynastic legitimacy becomes contested. And contested legitimacy means that power can only be settled by force, which is catastrophically expensive in lives and social order.
Democracy solves this — at its best — by separating the question of who holds power from the question of legitimate succession entirely. The procedure is the legitimacy. You don't need the right bloodline or divine sanction; you need a credible count of votes and an institutional framework that everyone accepts as binding. The office passes not through the body of the king but through the mechanism of the election.
Which is why attacks on the integrity of elections are so specifically dangerous — they don't just dispute a particular outcome, they corrode the procedural legitimacy that makes peaceful transfer possible at all. You're pulling out the load-bearing member, to return to your earlier metaphor.
And this connects back to the ranks framework in an interesting way. The democratic solution is a Rank 3 achievement — it requires the abstraction of legitimate procedure from personal authority. But it's perpetually vulnerable to Rank 2 pressures — the loyalty to persons over institutions, the feeling that my man losing means I have lost something existential. The democratic form sits on top of a substrate of human social psychology that was shaped long before Rank 3 cognition existed, and that substrate never goes away.
* * * * *
The Yazoo Land Fraud
My friend Michael Liss told me about what may be the single biggest act of corruption in America's political history. It dates from the early days of the republic: the Yazoo Land Fraud. Here's what Claude had to say about it:
The Yazoo Land Fraud is a fascinating and somewhat underappreciated episode in early American history, touching on corruption, populist backlash, constitutional law, and the politics of the new republic all at once.
The rush to money and IP theft in the AI world
This level of desperation perfectly encapsulates the "monetization crisis" of the current AI paradigm. We’ve seen staggering capital injection, yet the path to a sustainable business loop remains so elusive that even "ethically bankrupt" shortcuts are now being legitimized in…
— David Wall (@DavidWall9987) March 11, 2026
Tuesday, March 10, 2026
AI and the American Millennial Imagination
This morning I had a long conversation with ChatGPT 5.2 Auto. We started with the varied role of the Messiah in Jewish thought, then zeroed in on Chabad. From there we went to America as a particularly fertile ground for millennial movements and ended up talking about the Technological Singularity and AI Doom. I asked ChatGPT 5.2 to write a blog essay on the last half of the conversation. Here's the text it wrote.
The current discourse about artificial intelligence is saturated with ideas that look strikingly familiar to historians of religion. On the one hand we hear predictions of a coming technological singularity—an intelligence explosion that will transform civilization so radically that the future becomes literally unimaginable. On the other hand we encounter warnings that a superintelligent AI could destroy humanity. Optimistic or catastrophic, both visions share a common structure: they anticipate a decisive rupture in history.
What is remarkable is not simply that such narratives exist. Millennial thinking has appeared many times before. What is remarkable is where it has appeared: at the center of one of the most powerful technological industries in the world. To understand this phenomenon, it helps to place contemporary AI discourse within the much longer history of American millenarian movements.
The American Millennial Tradition
From the beginning, the United States has been unusually fertile ground for millennial expectation. The roots lie partly in the worldview of the Puritan settlers of New England. Many of them interpreted their migration in providential terms, imagining the new colonies as a “city upon a hill” with a special role in sacred history. Biblical prophecy—especially the apocalyptic imagery of the Book of Revelation—was deeply embedded in their intellectual world.
The expectation of historical transformation did not disappear with the decline of Puritanism. Instead it evolved into new forms during the nineteenth century, when the young republic developed what sociologists call a “religious free market.” With no established church and relatively weak religious authority, charismatic leaders could easily found new movements. The result was an extraordinary proliferation of sects, many of them millennial.
The Millerite movement of the 1840s predicted the imminent return of Christ, culminating in the famous “Great Disappointment” of 1844. The aftermath gave rise to several enduring denominations, including Seventh-day Adventism. The same religious environment produced other innovative movements such as Mormonism and Jehovah’s Witnesses. The pattern was clear: Americans repeatedly generated groups that believed they were living at the threshold of world transformation.
Several structural features of American society encouraged this dynamic. Religious freedom allowed new sects to form easily. Frontier conditions weakened institutional authority and encouraged experimentation. High levels of biblical literacy encouraged direct interpretation of prophetic texts. And the ideology of American exceptionalism fostered the belief that the nation itself might play a central role in world-historical events.
Millennial imagination became, in effect, a recurring feature of American culture.
Counterculture and Technological Transformation
The next major transformation of millennial expectation occurred during the counterculture of the 1960s. The language changed, but the underlying structure remained.
Instead of the Second Coming, the counterculture anticipated an impending transformation of consciousness. The popular phrase “Age of Aquarius” expressed the hope that humanity was entering a new spiritual epoch. Psychedelic experience, Eastern spirituality, and communal experimentation were widely interpreted as signs that a new form of human existence might be emerging.
The San Francisco Bay Area became the epicenter of this cultural ferment. What is less widely recognized is how closely this environment overlapped with the emerging world of computing. Figures such as Douglas Engelbart were already imagining computers as tools for augmenting human intellect. Publications like Stewart Brand’s Whole Earth Catalog promoted a synthesis of ecological awareness, communal experimentation, and personal technology.
Out of this hybrid culture—part psychedelic, part technological—emerged a new kind of futurism. Instead of mystical awakening alone, technological systems were now imagined as instruments of human transformation.
The Singularity Narrative
The modern idea of the technological singularity crystallized in the early 1990s with the work of the science fiction writer and mathematician Vernor Vinge. Vinge argued that once machines became more intelligent than humans, they would design even more powerful machines, producing an “intelligence explosion.” Beyond that point, the course of history would become fundamentally unpredictable.
Ray Kurzweil later popularized this idea, embedding it in a broader narrative of exponential technological growth. In Kurzweil’s vision, advances in computation, biotechnology, and nanotechnology converge to produce radical life extension, human–machine integration, and eventually a post-biological civilization.
Structurally, the singularity narrative resembles earlier millennial expectations. In religious movements, a divine intervention transforms the world and inaugurates a new age. In the singularity narrative, a technological breakthrough performs the same role. Salvation becomes technological transcendence.
The narrative retains the familiar elements of eschatology: a decisive threshold, a transformed humanity, and a future that lies beyond ordinary comprehension.
The Rise of AI Doom
Alongside the optimistic singularity narrative, a darker version has emerged in recent decades. Philosophers and researchers concerned with existential risk have argued that a superintelligent AI might pursue goals incompatible with human survival. The philosopher Nick Bostrom articulated this concern in systematic form, analyzing the “control problem” posed by potentially superintelligent systems. More dramatic warnings have emerged from the rationalist community associated with the website LessWrong, where the possibility of human extinction from misaligned AI is often treated as a central moral concern.
Despite their differences, the optimistic and catastrophic narratives share a common structure. Both assume that technological progress is approaching a critical threshold that will fundamentally alter the course of human history. The singularity promises transcendence; AI doom predicts annihilation. In both cases, the present is interpreted as the final moment before a world-transforming event.
This structure is instantly recognizable to historians of millennial movements.
A New Historical Situation
Yet there is one crucial difference between AI millennialism and earlier movements. Traditional millenarian groups predicted world transformation but did not control the mechanisms that might produce it. The Millerites could only await the Second Coming. Apocalyptic sects interpreted history but did not make it.
In the case of artificial intelligence, the situation is different. The people articulating the transformative narrative are often the same people building the technology. Companies such as OpenAI, Anthropic, and DeepMind operate with enormous financial resources while simultaneously discussing existential risks associated with their own research.
In this sense, the prophetic discourse and the technological apparatus are intertwined. Concerns about alignment and safety are not merely philosophical speculation; they shape the research agendas of organizations backed by billions of dollars in investment. No previous millennial movement has possessed such material power.
A Persistent Cultural Pattern
Seen in historical perspective, the emergence of AI millenarian thinking is less surprising than it first appears. American culture has repeatedly produced narratives of imminent transformation, whether religious, political, or technological. The agents of change have varied—divine intervention, revolutionary politics, expanded consciousness, or superintelligent machines—but the expectation of a decisive turning point in history has remained remarkably stable.
Artificial intelligence represents the latest chapter in that tradition. The language is technological rather than theological, but the structure of expectation remains familiar: humanity stands at the edge of a new epoch, and the choices made in the present may determine the fate of the species.
Whether the transformation proves revolutionary or merely evolutionary remains to be seen. But the cultural imagination surrounding AI clearly belongs to a long American lineage of millennial thinking—one that stretches from Puritan prophecy to Silicon Valley futurism.