Thursday, March 5, 2026

The Discipline of Literary Criticism: A Quixotic Essay about Thinkers, Methods and Authority

New working paper. Title above, links, abstract, contents, and introduction below.

Academia.edu: https://www.academia.edu/164963582/The_Discipline_of_Literary_Criticism_A_Quixotic_Essay_about_Thinkers_Methods_and_Authority_A_Working_Paper 

SSRN: https://papers.ssrn.com/abstract=6352618 

ResearchGate: https://www.researchgate.net/publication/401591370_The_Discipline_of_Literary_Criticism_A_Quixotic_Essay_about_Thinkers_Methods_and_Authority

Abstract: This working paper examines the intellectual status of literary criticism as an academic discipline in the United States. Beginning from a playful prompt inspired by Tyler Cowen’s book GOAT: Who Is the Greatest Economist of All Time and Why Does It Matter?, the essay initially sets out to identify the “greatest” literary critics. Very quickly, however, the exercise reveals a deeper problem: unlike economics, the population of figures who count as literary critics is difficult to define, and the criteria by which they might be evaluated are far from clear. The project therefore shifts from ranking critics to examining the boundaries, origins, and intellectual ambitions of the discipline itself.

The essay traces the emergence of contemporary academic literary criticism to the mid-twentieth century, using Cleanth Brooks and Robert Penn Warren’s Understanding Poetry as a marker of the New Criticism’s institutional consolidation within American universities. From there it examines the crisis that arose in the 1960s when disagreements about interpretation raised doubts about whether literary criticism could claim the status of cumulative knowledge. The 1966 Johns Hopkins structuralism conference serves as a pivotal moment, bringing figures such as Claude Lévi-Strauss and Jacques Derrida into the orbit of literary studies and helping to catalyze the rise of “Theory”—a broad set of interpretive approaches drawing on philosophy, psychoanalysis, Marxism, feminism, and other disciplines.

Through discussions of figures such as Coleridge, Frye, Girard, Derrida, and Harold Bloom, the essay explores competing conceptions of literary criticism: as pedagogy, as cultural guardianship, as theoretical inquiry, and as personal commentary on great works. Bloom’s eventual retreat from academic criticism toward a more public and personal mode of literary judgment is treated as emblematic of the discipline’s ongoing uncertainty about its intellectual foundations.

Contents

Introduction: The Formation of an Academic Discipline 3
1. The search for GOAT Literary Critics 6
2. A discipline is founded: Brooks & Warren, Northrop Frye, and S. T. Coleridge 23
3. Structuralism and its aftermath: Girard, Lévi-Strauss, and Derrida 32
Character in the Age of Adam Smith [GOAT economist], an interlude 56
4. What’s up with the Bard? [Bloom, Cowen, and Girard] 59
Harold Bloom and Hillis Miller on the Demise of Literary Studies 71
5. Harold Bloom, the one and only 73
Appendix 1: The Chatbots Comment on this Essay 94
Appendix 2: Commentary on the Profession 98
Appendix 3: Open Letters about Literary Criticism 100
Appendix 4: Naturalist Literary Criticism 101

Introduction: The Formation of an Academic Discipline

I started this project on a whim, as I often do. Tyler Cowen had just announced his book, GOAT: Who Is the Greatest Economist of All Time and Why Does It Matter?, and I thought: Why don’t I do the greatest literary critics? And that’s what I set out to do.

But I had no plan, just a vague intention. When I began this project I had no idea, for example, that I would end it with a discussion of Harold Bloom, giving him more attention than any other critic, or that I would deposit a longish piece about Susan Sontag in the middle of my Bloom discussion. No, I didn’t plan that; I hadn’t even anticipated discussing Bloom at all.

Once I got started, however, the whole thing evolved more or less organically into something of an opportunistic hodge-podge of various kinds of intellectual materials: prose that I’ve written (the biggest single chunk), lists from Wikipedia, queries to ChatGPT, charts from Google Ngrams, and topic model charts. Why don’t we agree that its form is an exercise in avant-garde criticism intended to mime the jagged and fuzzy state of the discipline?

This diagram depicts the argument that has emerged during this exercise. Read it as moving from the past, at the left, on through the present to the future, at the right:

I locate the beginning of the contemporary academic discipline of literary criticism in the mid-20th century pedagogical anthology Understanding Poetry, edited by Cleanth Brooks and Robert Penn Warren, which I discuss in some detail later in this essay. Commentary on literature shades off into the past, back through Samuel Johnson in the modern world and through Aristotle in the ancient world, and both classical and vernacular literature were studied in the nineteenth-century German universities that provided the model on which American universities were established. But the contemporary academic study of literature is based on interpretive methods and ideas that crystallized in the middle of the 20th century. Brooks & Warren are a convenient marker of that activity. It was known as the New Criticism, a term still in use for a certain body of work.

By the 1960s, however, that interpretive activity had become problematic. Some critics became bothered by the fact that different critics arrived at different interpretive conclusions about the same texts. “How,” they came to wonder, “can we count this as knowledge if we can’t agree on meanings?” And so some scholars at Johns Hopkins invited a group of Continental thinkers, mostly French but not entirely, to a symposium in the fall of 1966. The symposium was organized around structuralism, an interdisciplinary movement in the human sciences that emerged in Europe at the middle of the century. A French anthropologist, Claude Lévi-Strauss, was the nominal head of the movement. He was invited to the symposium but couldn’t make it. As it turned out, however, the star of the conference was a young philosopher, Jacques Derrida, who skewered Lévi-Strauss with his contribution, thus sowing the first seeds of poststructuralism. Note that neither of them was a literary critic, but they have had as much influence on literary criticism as, if not more than, anyone who was primarily a literary critic.

And so I have indicated them at the middle of the diagram, one pointing toward the future and the other toward the past. The direction of those arrows reflects my judgment, but it should by no means be taken as reflective of the discipline. The discipline would come to reject Lévi-Strauss, but an increasingly large portion of it would come to at least accept, if not embrace, the insights of Derrida. I have a great deal to say about that later in the essay. And I want to say a bit about my own position in this – I was a student at Johns Hopkins when the French landed – just a bit later in this introduction.

Derrida’s method, deconstruction (a word he coined), opened the floodgates to a variety of interpretive methods that came to be collectively known as “Theory,” with a capital “T.” Theory is the application of some approach to the study of the human mind and/or society as a vehicle for interpreting literary (and other) texts. Psychoanalysis and Marxism were the first through the door, followed by feminism, African-American studies, and so on and so forth.

Understandably, many (older) critics resisted these new dispensations, none more forcefully than Harold Bloom. While he spent a few years trying to go along with the program, during the 1980s he broke ranks and not only abandoned poststructuralism but pretty much abandoned academic literary criticism in favor of addressing himself to the general educated public through edited collections and a variety of books, including one on American religion (which I’ve read), and big fat books on The Western Canon and Shakespeare. I’ve given over the last 20 pages of this essay (excluding the appendices) to Bloom, with a diversion into Susan Sontag, though it becomes 35 pages if you include the immediately preceding remarks on Shakespeare’s position in the canon, a reasonable inclusion given that Bloom is the Bardolator in Chief.

That accounts for the position of Bloom on the chart, right of center at the apex of a triangle trailing off into the past. He abandoned the modes of thought ushered in by Brooks & Warren and retreated into a more personalistic mode of criticism, one that allowed him to luxuriate in his own opinions as amplified through his tremendous, but ultimately narrow, erudition. Bloom became an empire unto himself.

As for me, as I said, I was a student at Johns Hopkins when the structuralism conference took place. I didn’t attend it, didn’t even know it was happening, but I was introduced to structuralism and semiotics by Dr. Richard Macksey, a book-collecting polymath who did much of the organizational groundwork for the conference. Without going into detail, I decided that the natural progression from Lévi-Strauss was into cognitive science and computational semantics, which I pursued with David Hays in linguistics while getting a Ph.D. in English at The State University of New York at Buffalo. And that effectively took me well beyond the disciplinary boundaries of literary criticism.

While I have continued my interest in literature and have written both practical commentary and theoretical and methodological studies, I have also pursued other intellectual interests – cultural evolution, cognitive science, music, film, graffiti, this and that. The upshot is that I am not as widely and deeply read in literary criticism as I would have been had I decided to mind my Ps and Qs for the last 40 years.

That’s an obvious disqualification for writing a longish essay intended to do for literary critics what Tyler Cowen did for economists: identify the GOATs (Greatest of All Time). But, as you will quickly see, that’s not what I ended up doing. Rather, I used that objective as a vehicle for examining the origins and boundaries of the academic discipline of literary criticism, which I have depicted in that diagram. And that, I would argue, is a task for which my outsider status suits me well. I can see what’s going on in a way that those in the middle of it cannot.

You be the judge.

Have at it.

* * * * *

Note: I’ve included appendices listing various articles I’ve written about the profession. The last one is about the opportunities opened up by computing, both as a conceptual model and a practical tool.

Sun (dirty) snow Hoboken

Market Volatility in Asia Swings on Energy and AI

Meaghan Tobin, What the Extraordinary Market Volatility in Asia Says About Energy and A.I. NYTimes, Mar. 5, 2026.

Stocks across most of Asia rallied on Thursday, a day after tumbling over fears around the region’s heavy reliance on imported oil and gas.

The turnaround illustrates the hair-trigger reactions of investors around the world who are trying to assess the immediate and possible long-term effects of the strikes on Iran by the United States and Israel and the repercussions around the Persian Gulf, where much of the world’s oil and gas is produced. [...]

Over the past year, intense optimism about artificial intelligence has led investors to pour money into tech stocks in Taiwan and South Korea. The two places make most of the equipment like computer chips and servers that power the world’s A.I. systems. They also depend on imports for virtually all of their energy.

The stock market seesaw served as a reminder not only of the central role that these two East Asian democracies play in the global economy, but how bullish investors remain about A.I.

There's more at the link.

A Lesson from the 60s: “For What It’s Worth”

Wednesday, March 4, 2026

TO WAR! Part 1: War and America's National Psyche

Once again I'm bumping this to the top of the queue. And for obvious reasons. Once again a foolhardy President has taken the country into a war with no obvious end in sight, just a squandering of human life and productive resources.

* * * * * 

As part of a special WAR EDITION of New Savanna I'm reproducing a set of notes I wrote up during the 2000 Presidential Election. I first published them on NS back in November of 2011 and I'm republishing them now in recognition of yet another turn in the long-spinning wheel of American mythology. Yet another bump to the top, as I'm thinking of this stuff, this time in connection with the relationship between war and nationalism.

Everything is connected to everything else and the causal forces meeting in the historical present stretch back into the past without end. Figuring out where to start is not easy. My sense is that we need to focus our attention on the dissolution of the Soviet Empire in the late 1980s. That left the nation without a national scapegoat, thus radically altering the nation’s psycho-cultural landscape. We no longer had Ronald Reagan’s Evil Empire to kick around.

As some of you may know, my thinking on these matters has been strongly influenced by an essay Talcott Parsons published in 1947 on “Certain Primary Sources of Aggression in the Social Structure of the Western World”. Parsons argued that Western child-rearing practices generate a great deal of insecurity and anxiety at the core of personality structure. This creates an adult who has a great deal of trouble dealing with aggression and is prone to scapegoating. Inevitably, there are lots of aggressive impulses which cannot be followed out. They must be repressed. Ethnic scapegoating is one way to relieve the pressure of this repressed aggression. That, Parsons argued, is why the Western world is flush with nationalistic and ethnic antipathy. I suspect, in fact, that this dynamic is inherent in nationalism as a psycho-cultural phenomenon.

For the most part I have used Parsons, and others as well, in arguing about the nature of racism in the USA. While Africans were brought to this country for economic reasons, it seems to me that during, say, the 19th century African Americans increasingly assumed a dual psychological role in the white psyche. On the one hand, they were a source of entertainment. On the other, they were convenient scapegoats, as became evident with the lynchings that emerged during Reconstruction and continued well into the last century. That is to say, African America served as a geographically internal target for the ethnic and nationalist antipathy Parsons discussed.

Thus we have the thesis in Klinkner and Smith, The Unsteady March (U. Chicago, 1999). They argue that African Americans have been able to move forward on civil rights only during periods when the nation faced an external threat - the Revolutionary War, the Civil War, and the major wars of the first half of the 20th century. When the external danger had subsided, gains were lost. From my point of view, they’re arguing that, when external danger looms large and demands attention, the citizenry can focus aggression there and so ease up on the internal colony. Beyond this, of course, it becomes necessary to recruit from the colony to fight the external enemy, both physically and propagandistically - be kind to your black citizens when you fight the Nazis, etc.
  
Vietnam was the last major war of the Cold War period. As it receded into the past, a political backlash set in and affirmative action came under attack. That’s the situation we faced when the Soviet Empire collapsed. With the major external threat suddenly gone, there was a crisis of aggression - I’m reminded of the phrase “conservation of aggression” coined by Robert Wright. The fall of the Evil Empire deprived a great many people of an object for aggressive impulses. What, then, happened to that aggression?

It got directed elsewhere. My sense is that the political rhetoric on a number of issues heated up in the wake of the fall: gun control, abortion, the arts, gays, affirmative action, violence in the media. A number of these issues come under the rubric of the so-called “culture wars”. Each of these issues was already on the political agenda, and had been there for some time.

Sexy music had been inspiring pulpit denunciations and legislative action since the early decades of the 20th century. Movies have been problematic since the beginning, and the NAACP put itself on the political map by organizing protests against “Birth of a Nation.” But, it seems to me, the scope of politicized cultural contest broadened. [If I’m correct, then this could be verified empirically by doing content analysis of periodicals, looking at opinion polls, crime rates, etc.]

Perhaps the most interesting redirection, however, was into the so-called War on Drugs. Political concern about drug use is not, of course, new. It goes back to Prohibition - which was, of course, intimately linked with that objectionably sexy music - and got redirected by and in reaction to the counter-cultural 60s and 70s. However, it is my impression that the current effort ramped up in the wake of the Soviet collapse.

This war on drugs has had substantial material consequences: increased law enforcement and court activity, a considerable increase in the prison population and, of course, in the prison industry. Our prisons now have a relatively large population of non-violent offenders who are disproportionately black, taken off the voting rolls as felons, and available for labor in various prison-based enterprises. I do not know whether or not the increase in the economic “weight” of the prison sector is roughly equal to the losses suffered by the defense sector. I would, of course, like to know.

Regardless of how those numbers work out, my basic point is simply that the end of the Cold War changed the psycho-cultural system in a major way. Psycho-cultural aggression had to be redirected and much of it was redirected at targets within the country, rather than externally. That redirection is the central political phenomenon of the 90s and is responsible for much of the ugliness and programmatic futility of current politics.

Addendum 2011: Obviously, this internal redirection of aggression continues. It may well have gotten worse – I wonder what ‘sentiment analysis’ would turn up if applied to documents written over the past decade? The problem facing us, of course, is not that of finding suitable targets for this displaced aggression. No, we have to restructure our way of living so as to reduce our need to have such targets. That’s a tough one, and one that will take a generation or three.

Addendum 2014: Obviously the Arab world is a perfect target for this repressed aggression. We've got a long-standing and well-crafted Orientalist mythology of the exotic and crazy Arab Other. I fear we're going to be banging on this nail for a long time.
 
Note 2026: The second post in this series has a clip from the Marx Brothers movie, Duck Soup, that can be taken as a comment on how Trump arrived at the decision to attack Iran: TO WAR! Part 2: A Marx Brothers Analysis of America's War Craziness.  

Readership for tech media is WAY DOWN

The river: across, up, and back

The Chinese are optimistic about AI, no Doomers

Vivian Wang, Where are China’s A.I. Doomers? NYTimes, Mar. 4, 2026.

People in China are among the most excited in the world about A.I., according to a KPMG survey of 47 countries last year. While 69 percent of people in China said the technology’s benefits outweighed its risks, only 35 percent of Americans agreed. Other polls have shown similar disparities.

The question is, why?

The answer may be related to how the technology has been deployed in each country, as well as how the government and industry leaders have talked about it.

I don’t find this at all surprising. AI Doom is a projective fantasy, as I argued a couple of years ago in 3 Quarks Daily. Continuing on:

... Chinese tech companies have focused intensely on real-world applications for A.I. By contrast, many leading American tech companies have been focused on more abstract goals, like developing the most cutting-edge model, or achieving artificial general intelligence.

In addition, most of China’s leading A.I. models are free to use, unlike in the United States, where users have to pay for chatbots like ChatGPT to access all their features. (In fact, Chinese companies have been giving away money and luxury cars to entice people to download their apps.)

As a result, Chinese consumers are feeling the benefits of A.I., said Bai Guo, a professor who studies the digital economy at China Europe International Business School in Shanghai.

“A lot of things can already be helped by A.I., and people find that interesting, that’s useful, and so there are quite a lot of positive and active feelings toward it,” Professor Bai said. Potential dangers, such as unemployment or increased inequality, still feel remote.

The Chinese government has emphasized practical use: “Officials say that A.I. could help solve China’s thorniest problems, such as inequalities in health care, or an aging work force.” And so:

In August, the government laid out a plan, called A.I.+, for A.I. to penetrate more than 70 percent of Chinese society by 2027, and 90 percent by 2030. The plan said A.I. will “promote a revolutionary leap in productive ability” and “create higher-quality, beautiful lives.”

Because Chinese officials are promoting A.I. as an economic engine, they may also be silencing those who are more pessimistic about it. Crashes involving autonomous driving have attracted widespread attention online, only for posts to be censored. State media outlets have compared concerns about job loss for taxi drivers to the Luddite movement.

However:

Users have also raised concerns about how easily the government’s restrictions can be bypassed. A Chinese feminist group recently highlighted tutorials for making sexually explicit deepfakes that circulate openly on Chinese social media. Attempts to report the images were unsuccessful, the group said.

The Chinese government has also begun more directly addressing the technology’s potential for disrupting jobs, mental health or the Communist Party’s grip on power. [...]

For all of its potential, China must not let A.I. “spiral out of control,” Mr. Xi warned during a recent meeting of the leaders of the Communist Party.

There’s more at the link.

Tuesday, March 3, 2026

OJ and waffle

The Pentagon's position on Anthropic is legally hopeless

Michael Endrias and Alan Z. Rozenshtein have a substantial article about the Anthropic mess: Pentagon’s Anthropic Designation Won’t Survive First Contact with Legal System, Lawfare, 1,2,26.

From their introduction:

From the government's perspective, Claude does pose some concerning vendor reliability issues. But the specific actions Hegseth and Trump took have serious legal problems. The designation exceeds what the statute authorizes. The required findings don't hold up. And Hegseth's own public statements may have doomed the government's litigation posture before it even begins.

After considerable reasoning:  

Step back and consider what these positions amount to together. The government is arguing that Claude is so vital to military operations that it cannot tolerate any contractual restrictions on it—while simultaneously claiming that Claude poses such a grave supply chain risk that the entire federal government must stop using it, every defense contractor must sever commercial ties with its maker, and the company should be cut off from the cloud infrastructure it needs to survive. It’s like the joke from “Annie Hall”: The food is terrible and the portions are too small.

That might be funny as a bit of Borscht Belt humor. It is less amusing as a description of the United States government's strategy toward one of the companies leading America's effort to develop what may be the most important technology of the century. What Hegseth is actually describing is not a supply chain risk determination but something closer to the beginning of a partial nationalization of the AI industry: Seize the technology and, if you can’t, destroy the company to ensure that no future AI developer dares negotiate terms the Pentagon dislikes.

Arbitrary and capricious review requires, at minimum, logical coherence. The government cannot credibly maintain that a vendor is indispensable, that its continued integration poses no immediate danger, that its technology is reliable enough for active combat operations in Iran, and that it is nonetheless so dangerous it must be severed from the entire federal procurement ecosystem—all in the same week. Even a court inclined to defer on national security matters will notice that these propositions cannot all be true at once. [...]

The most obvious: if the Pentagon finds Anthropic's usage restrictions unacceptable, it can simply decline to renew the contract and move to a competitor. That is a routine procurement decision, available to any buyer who dislikes a vendor's terms. It requires no supply chain designation, no secondary boycott, and no government-wide ban. The fact that the government reached past this straightforward option for the most extreme tool in the procurement arsenal—one designed for foreign adversaries infiltrating the supply chain—is itself evidence that the designation is doing something other than managing supply chain risk. [...]

The legal problems are so glaring, in fact, that a cynical possibility suggests itself: The administration knows this won't survive judicial review and is doing it anyway, so that when they inevitably lose, they can still claim to have gone hard against Anthropic. This is designation as political theater: a show of force that was never meant to stick.

But there is another possibility. The administration may genuinely believe that a Truth Social post and a procurement statute designed for state-influenced Russian and Chinese tech companies can destroy an American AI lab over a contract dispute. If so, they are in for a rude awakening. The statute wasn't built for this, the facts don't support it, and the courts will say so.

H/t Tyler Cowen.

The Shipyard

David Gallagher, The Exact Shade of Gray; THE SHIPYARD. By Juan Carlos Onetti. The New York Times, June 16, 1968.

An unexpected consequence of Doomer propaganda: The Pentagon wants to control the Doomsday Device

Casey Mock, Pete Hegseth Got His Happy Meal, Tomorrow's Mess, March 2, 2026.

Concerning the current dust-up between the Pentagon and Anthropic:

Something like this was always going to happen. Not because of Hegseth specifically, not because of this administration, but because of the narrative the AI safety community — the world that produced Anthropic, and whose language Anthropic still speaks even while disavowing its label — has been pushing for at least the last three years.

 [Oh, much longer than that, much longer. – BB]

Imagine a six-year-old whose entire media diet includes a steady stream of McDonald’s commercials, a Happy Meal ad at every break, focused on whatever toy is the latest to be included along with the McNuggets. Now put that child in a car that drives past a McDonald’s. What happens?

The Rationalist and Effective Altruist communities — the intellectual cultures that gave us Anthropic, influence many of their employees, and which still shape how Dario Amodei talks about his company and his technology — have spent the better part of a decade insisting, with increasing urgency, that artificial intelligence is the most consequential technology in human history. Maybe it’s civilization-ending; maybe it’s civilization-saving. Either way, it’s the hinge on which everything henceforth turns.

With policymakers and the media largely having accepted the premise, thus surrendered was the argument for treating AI like a normal technology subject to normal governance. Policies being pushed by Effective Altruist groups, like 2024’s SB1047 in California — deprioritize harms happening today for theoretical existential ones in the future; despite the fact that today’s harms that could be existential for the folks experiencing them. These groups incessantly made the case that whoever controls this technology controls the future, and so the hypothetical future needs to be prioritized now. In a Washington now run by people who tend to impulsiveness and contemptuousness of institutional constraint — well, it’s easy to see where this was headed. Hegseth saw the ads for the toy, and so now he wanted his Happy Meal. [...]

Yet the prognostications of the doomer community have been, nearly without exception, wrong — not in small ways, but in the foundational sense that the imagined trajectory keeps failing to materialize. [...]

Thus, this news reveals the rationalists’ under-examined blind spot: they cannot model the messy Pete Hegseths of the world, even as their claims whet Hegseth’s appetite. The rationalist view of the world assumes, at some level, that the relevant actors are optimizing for well-understood, predictable variables and a clear understanding of what best serves their self-interest. What it cannot account for is bad faith, impulsiveness, ideological motivation untethered from evidence, random instances of force majeure, and personal whims and petty rivalries. And so while the doomer community spent years warning about uncontrollable AI systems that do things their creators didn’t intend, they apparently did not consider what would happen when the humans currently running the United States government got access to technology they’d been told was the hinge of history.

H/t Gary Marcus.

I've published an article about Doomers in 3 Quarks Daily: On the Cult of AI Doom, September 12, 2026.

Tree with pink blossoms [Japan]

Monday, March 2, 2026

Fun in the snow

Pulling things together about creativity & LLMs

Consider these recent posts about AI:

Two Ways to Use AI: Homo Economicus vs. Homo Ludens (2.27.26) 

Chatbots have increased my sense of intellectual agency such that being an intellectual “outsider” becomes a superpower. (2.17.26) 

Three mathematicians are not impressed with the ability of AI to do professional math (2.16.26)

What do they have to do with one another? They are about creativity, interdisciplinary work and, ultimately, about the division of labor between humans and AI, currently represented by LLM-based chatbots. The NYTimes article about mathematicians makes the point that the creative work in mathematics involves creating frameworks in which to present and solve problems. AIs cannot do that currently. They’re better suited to working on well-defined problems.

That’s certainly consistent with observations I’ve been making for a while. All those benchmarks involve well-defined problems. The real problem, in contrast, is to frame such a problem in the first place. I’ve got a working paper where I present three case studies from my own work, cases where I started out with nothing in particular in mind and ended up doing a bit of focused research: Serendipity in the Wild: Three Cases, With remarks on what computers can't do.

The post about my “superpower” is based on the fact that the LLMs are trained on materials published to the web and so “gravitate” toward those ideas. But what if you take up a stance outside the current intellectual ecosystem, while nonetheless being conversant with it and able to ground your work in it, at least partially? That’s been my situation since my early work on “Kubla Khan.” In 1978 I filed a doctoral dissertation entitled “Cognitive Science and Literary Theory.” As far as I know, I was pretty much alone in working that territory. Here’s how I characterized cognitive science (somewhat idiosyncratically):

The basic problem of cognitive science is establishing a five way correspondence between the following:

  1. Brain Geometry: Neuroanatomy is the study of the geometry of the brain. Comparative neuroanatomy is the study of the correspondence between brains of different species.
  2. Computation: Different types of computers can perform different classes of computations and the nature of the computations depends on the geometry of the computer.
  3. Behavior: Much of psychology is the study of the behavior of organisms. The behavior an organism exhibits is determined by the class of computations which its brain can perform which in turn depends on the geometry of the brain.
  4. Phylogeny: Animals at different evolutionary grades have different brain geometries. The brain geometry must be capable of performing the class of computations necessary for survival in the animal’s particular ecological niche. But what is the relationship between moving to a new niche and the emergence of a new brain geometry?
  5. Ontogeny: As a child matures different brain structures develop and permit new classes of computation sustaining new types of behavior. But how does phylogeny exploit differential maturation rates to create a new class of computer?

Those are five distinct intellectual domains. My dissertation was strongest on behavior (literary texts) and computation (cognitive network semantics), but I touched on neuroanatomy and phylogeny (there’s a section on mirror recognition in humans and apes). Offhand I don’t recall anything about childhood development.

A decade later, however, David Hays and I published “The Principles and Development of Natural Intelligence” (1988). The principles themselves were computational. We identified neural structures associated with each, gave examples of behaviors enabled by each principle, and placed them in both phylogenetic and ontogenetic contexts. That’s a large part of the framework I’ve been working with my entire career – there’s also culture and cultural evolution. Am I an expert across that entire domain? Of course not. But I do have a high degree of expertise in some areas, particularly literary and textual analysis and semantic structures, and I’ve read in the technical literatures across that whole range. I’ve got a “feel” for the material, enough so that I can prompt chatbots (Claude and ChatGPT) across the whole range and follow it when it fills in details that I don’t myself command.

That most recent post (2.27.26) contrasts the way I use chatbots (Homo Ludens) versus more conventional usage (Homo Economicus). You might also look at the still more recent post, Why Gemini 3.1 is so good [long chains of reasoning, across disciplinary boundaries]. That title tells half the story. Somewhere in the video Jones makes the point that, while Gemini 3.1 Pro can construct long chains of reasoning that humans cannot, often crossing disciplinary boundaries, humans can verify those chains. That’s my situation. I can check the reasoning for ontological consistency. That’s not the same as truth, but it’s a prerequisite for it.

Ellie Pavlick, (How) Does AI Think?

c. 40:16 “At various points I’ve like argued really what we’re seeing here is a neural implementation of what is latently a symbolic system like our symbolic AI systems of yore.”

A bit later Pavlick will back off from that statement. However, her first example is arithmetic. Concerning arithmetic, note that it is NOT “native” to the human mind. Preliterate cultures may not even have open-ended counting systems, and they don’t do numerical calculations. Moreover, while children pick up language readily without specific instruction, arithmetic requires focused instruction, and fluency requires hours of drill over several years. Careful reasoning is like that as well. Much of formal education is about learning how to reason in various domains.

Keep in mind, “symbolic AI systems” covers a LOT of ground. The expert systems, built on production rules, are perhaps the most visible type of symbolic system. But I think that cognitive nets are a better bet for the latent structure of neural nets. That’s what I argued in ChatGPT: Exploring the Digital Wilderness, Findings and Prospects.

Trump Ban Sends Claude to #1

AI Secret, Mar. 2, 2026.

👀 What’s happening: After Anthropic’s talks with the US Department of Defense collapsed over military AI limits, the White House moved to ban Claude from federal use and labeled it a supply chain threat. Within 24 hours of being publicly targeted, Claude shot from outside the top 100 to number one on the US and Canada App Store free charts, overtaking ChatGPT and Gemini.

🌍 How this hits reality: A federal ban was supposed to isolate a vendor. Instead, it converted policy punishment into consumer demand. SensorTower data shows a direct ranking spike tied to the announcement. Social feeds filled with subscription cancellations and data export tutorials. Billions in defense linked compute shifted toward OpenAI, but retail distribution shifted the other way. Politics instantly rewired both infrastructure allocation and user flows.

🛎️ Key takeaway: State pressure can redirect contracts overnight, but it can also manufacture market momentum. In AI, regulatory confrontation now doubles as distribution strategy, whether intentional or not.

Sunday, March 1, 2026

Something interesting to look at on Sunday morning

Words, code, guardrails & weasels: OpenAI, Anthropic, and the Pentagon

I've copied the entire “tweet” below in case you don't want to click. But you might want to glance through the thread. This is the “tweet” where Gimus says his badge stopped working.

* * * * *

I work in government affairs at OpenAI.

My job is federal partnerships. When an agency wants our models, I make sure the paperwork is beautiful. Paperwork is my love language. On my desk I have a framed quote that says "Policy Is Just Code That Runs on People." I bought the frame at Target. It was in the Live Laugh Love section. I did not see the irony at the time. I still don't.

We had a good week.

On Monday, we closed a $110 billion funding round. One hundred and ten billion dollars. Amazon put in fifty. Nvidia put in thirty. Valuation: $730 billion. The largest private fundraise in the history of anyone raising anything. There was a company-wide Slack message about it. The message used the word "transformative" twice and the word "safety" once. The word "safety" was in the last sentence, after the link to the new branded hoodie pre-order. The hoodies are nice. They're the soft kind.

On Tuesday, we fired a research scientist for insider trading on Polymarket.

Why Gemini 3.1 is so good [long chains of reasoning, across disciplinary boundaries]

YouTube:

What's really happening when Google ships the smartest AI model on the planet, prices it at a seventh of the competition, and doesn't care if you keep using Claude or ChatGPT? The common story is that this is another benchmark race—but the reality is more interesting when the company generating $100 billion in annual free cash flow is playing a fundamentally different game. In this video, I share the inside scoop on why Gemini 3.1 Pro reveals more about problem types than model rankings:

  • Why Google's vertical stack from TPU silicon to Nobel Prize research is an impregnable fortress
  • How Deep Think solved 18 previously unsolved problems across math, physics, and economics
  • What separates reasoning problems from effort, coordination, ambiguity, and emotional intelligence problems
  • Where the question "which AI should I use" becomes the wrong question entirely

For knowledge workers watching the model landscape differentiate, the margin between routing models well and using one model for everything is widening every single month.

Chapters

00:00 Google Shipped the Smartest Model and Doesn't Care If You Use It
03:15 Arc AGI 2: The Largest Single-Generation Reasoning Gain Ever
05:30 What Google Optimized For vs Anthropic and OpenAI
07:10 Demis Hassabis: Solve Intelligence, Then Solve Everything Else
09:45 Google's Vertical Stack: From Transistor Design to Protein Folding
13:20 Why Google Can Afford to Lose the Model Race
15:00 What Gemini 3.1 Pro Is and Isn't
17:30 Naked Reasoner vs Equipped Reasoner vs Specialist Coder
19:45 Deep Think: Disproving Conjectures and Catching Peer Review Errors
23:10 Hard Is Not One Thing: Six Types of Difficult Problems
28:40 Which Problems Does Pure Reasoning Actually Help?
32:15 What This Means for Your Work Tomorrow
35:50 Google's Quiet Game: Building the Thing Underneath the Thing

These two short passages give you a flavor:

14:03: “The model crossed disciplinary boundaries that human specialists very rarely cross because the model doesn't see disciplinary boundaries and that is one of the strengths of an AI model.”

15:10: Gemini is good for certain kinds of problems: “And they share specific characteristics. The inputs are well-defined like a protein sequence. The problem can be stated extremely precisely. And the solution requires a long and sustained chain of logical deduction that a human mind can verify but often cannot generate without years of specialized training.”

If you don’t want to watch the whole thing, start with “What Gemini 3.1 Pro Is and Isn’t,” @ 15:00.

Carving at the joints: Plato, Zhuangzi, Guo Xiang

First, a prompt I gave Claude 5.4. Then Claude’s reply.

* * * * *

There’s a cliché about carving Nature at its joints.

There’s one version from Plato’s Phaedrus. Socrates has likened a well-formed speech to an animal with its various appropriately arranged parts and is now examining two different speeches on love (265e-266a):

... we are not to attempt to hack off parts like a clumsy butcher, but to take example from our two recent speeches. The single general form which they postulated was irrationality; next on the analogy of a single natural body with its pairs of like-named members, right arm or leg, as we say, and left, they conceived of madness as a single objective form existing in human beings. Wherefore the first speech divided off a part on the left, and continued to make divisions ...

There’s a different version from Guo Xiang’s redaction of Zhuangzi (from Wikipedia):

Cook Ting was cutting up an ox for Lord Wen-hui. At every touch of his hand, every heave of his shoulder, every move of his feet, every thrust of his knee, zip! zoop! He slithered the knife along with a zing, and all was in perfect rhythm, as though he were performing the dance of the Mulberry Grove or keeping time to the Ching-shou Music.

“Ah, this is marvelous!” said Lord Wen-hui. “Imagine skill reaching such heights!”

Cook Ting laid down his knife and replied, “What I care about is the Way, which goes beyond skill. When I first began cutting up oxen, all I could see was the ox itself. After three years I no longer saw the whole ox. And now I go at it by spirit and don't look with my eyes. Perception and understanding have come to a stop and spirit moves where it wants. I go along with the natural makeup, strike in the big hollows, guide the knife through the big openings, and follow things as they are. So I never touch the smallest ligament or tendon, much less a main joint.”

Plato dates: 428-348 BC. Guo Xiang dates: 252-312 AD.

Is there any chance that this idea made its way from Ancient Greece to Ancient China? Or is it likely that we have a case of independent invention? After all, butchering an animal is a common enough task, and the difference between an unskilled butcher and a highly skilled one must have been apparent to many people.

The question of transmission versus independent invention is genuinely interesting here, and the honest answer is that scholars are quite uncertain, though the weight of evidence favors independent invention.

The chronological gap is suggestive but not decisive. The Zhuangzi text itself predates Guo Xiang — Zhuangzi lived roughly 369-286 BC, making him a near-contemporary of Plato, not a distant inheritor. Guo Xiang's role was that of editor and commentator, so the Cook Ding story almost certainly originates in the 4th-3rd century BC, not the 3rd-4th century AD. This matters a great deal, because it means we're not looking at a case where ideas could have filtered eastward via the Silk Road over centuries — both texts are roughly contemporaneous.

There was contact between the Greek and Chinese worlds in this era, but it was extremely attenuated. Alexander's campaigns reached Bactria and the Punjab, and there are documented cases of idea-transmission in specific domains (certain astronomical and mathematical concepts, some artistic motifs). But the philosophical literature shows no clear fingerprints of cross-cultural borrowing in this period. The intermediary links needed to carry a specific Platonic metaphor from Athens to the Warring States courts of China in the 4th-3rd century BC are simply not there.

The more compelling case for independent invention rests on a few observations. First, as you note, skilled butchery was universally observable. Any culture that slaughtered animals for food — which is to say, every culture — could notice that an expert carver seems to find the natural seams while a novice hacks. The perceptual phenomenon is available everywhere. Second, and more importantly, the two passages are actually doing quite different philosophical work with the same raw material. Plato's Socrates is making an epistemological and rhetorical point about how to carve up concepts — how a good speech should divide a subject at its natural joints rather than arbitrarily. The emphasis is on correct classification, on getting the logical structure of reality right. Cook Ding's lesson is about something closer to wu wei — effortless action, yielding to the natural order, the dissolution of the self-conscious subject into the flow of the Way. The knife goes where the ox already opens; the cook stops trying. These are recognizably different philosophical preoccupations, and each fits neatly into its own tradition without requiring outside influence.

So what you have is a case where a shared human experience — watching an expert butcher work — gets recruited into philosophical argument independently in two places, but gets recruited to illuminate quite different problems. That's actually philosophically interesting in its own right: it suggests the analogy has a kind of natural aptness that makes it available to very different thinkers facing very different questions. The butcher doesn't just carve the ox at its joints; he also, apparently, carves the history of philosophy at a joint.

Saturday, February 28, 2026

Elysian Park, Hoboken

Computation, Chess, and Language in Artificial Intelligence

New working paper. Title above, links, abstract, contents and introduction below:

Academia.edu: https://www.academia.edu/164885566/Computation_Chess_and_Language_in_Artificial_Intelligence
SSRN: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=6319062
ResearchGate: https://www.researchgate.net/publication/401355671_Computation_Chess_and_Language_in_Artificial_Intelligence

Abstract: This paper reexamines the foundations of artificial intelligence by contrasting chess and natural language as paradigmatic domains. Chess, long treated as a benchmark for intelligence, is finite, rule-governed, and geometrically well-defined. It lends itself naturally to symbolic search and evaluation. Natural language, by contrast, operates in an unbounded and geometrically complex reality. Its rules are open-ended, its objectives diffuse, and its domain inseparable from embodied experience. With chess as its premier case – McCarthy: “the Drosophila of AI,” – AI has been guided by a deeper assumption: that the first principles of intelligence reduce to the first principles of computation. Drawing on Miriam Yevick’s distinction between symbolic and neural computational regimes, I propose that intelligence must be understood as operating in a geometrically complex world under finite resource constraints. Embodiment is therefore a formal condition of intelligence, not an incidental feature. Recognizing the structural difference between bounded games and open-ended cognition clarifies both the historical trajectory of AI and the conceptual limits of current systems.

Contents

Introduction: Chess, Language, and Intelligence 3 
Chess and Language as Paradigmatic Cases for Artificial Intelligence 5 
Three Principles of Intelligence (That Aren't Principles of Computation) 12 
Chronology of Chess, Language, and AI 15

Introduction: Chess, Language, and Intelligence

Chess has been a central concern of AI from the beginning. AI researchers didn’t become interested in natural language until the 1970s. Before that computational research on natural language was the domain of computational linguistics (CL), which started with machine translation (of texts from one natural language to another) as its primary problem. Thus we have two different disciplines, AI and CL.

In a sense, AI was fundamentally a philosophical exercise. It was an attempt to demonstrate, in effect, that we could understand the human mind in terms of computation. But rather than advance its philosophical objective through argument, it chose computational demonstration as its mode of expression. Chess became a central concern for two reasons: 1) On the one hand, it was widely regarded as exhibiting the pinnacle of human reasoning ability. If we could create a computer program to play a championship game of chess, we could create a computer program capable of any cognitive or even perceptual task humans can do. 2) But also, the nature of chess made it well-suited for computational investigation.

The article that opens this working paper – Chess and Language as Paradigmatic Cases for Artificial Intelligence – concentrates on this and then goes on to make the point that language is utterly unlike chess in this respect. The chess domain is bounded and well-defined. Natural language is not; it is ill-defined and unbounded.

That’s as far as I got in the article, but I had been aiming for an argument that AI is still, in effect, mesmerized by the chess paradigm. I didn’t make it that far because language is so obviously different from chess that it is difficult to see how anyone would have made that mistake.

What I have come to realize, only after I’d finished the article, is that it isn’t so much chess that has mesmerized AI. Rather, it is computation itself. AI has been implicitly assuming that the First Principles of intelligence reduce to the First Principles of Computing. The first principles of computing can be found in the work of Alan Turing (the abstract idea of computing) and others.

The first principles of intelligence are more stringent. As Claude put it in a recent dialog:

First principle of intelligence: Must operate in unbounded, geometrically complex physical reality with finite resources.

Those two qualifications, an unbounded, geometrically complex reality, and finite computational resources, change the nature of the problem considerably. I note, in passing, that this allows us to assign formal significance to the concept of embodiment, for it is embodiment that commits intelligence to operating with finite resources in a geometrically complex universe.

Miriam Yevick’s 1975 paper, “Holographic or Fourier Logic,” is the crucial document, but it’s been forgotten. Using identification in the visual domain as her case, she showed that, where we are dealing with geometrically simple objects, sequential symbolic processing is the most efficient computational regime. But when we are dealing with geometrically complex objects, neural net processing is the most efficient computational regime. AI started out with symbolic processing in the 1950s and arrived at neural nets in the 2010s. But it hasn’t explicitly recognized that one must fit the mode of processing to the nature of the world. In that (perhaps a bit peculiar) sense, the researchers in the currently-dominant paradigm don’t know what they’re doing.
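Yevick’s contrast can be made concrete with a toy sketch of my own. This is purely illustrative: the predicates, the template matcher, and the threshold are my inventions, not anything from her paper. A geometrically simple object has a short symbolic description that can be checked step by step; a geometrically complex pattern is handled more naturally as a whole, here crudely approximated by correlating the entire input against a stored template:

```python
import math
import random

# Regime 1: symbolic. A geometrically simple object (a square) is
# captured by a few predicates, checked one after another.
def is_square(sides, angles):
    return (len(sides) == 4
            and len(set(sides)) == 1
            and all(a == 90 for a in angles))

# Regime 2: holistic. A geometrically complex pattern has no short
# symbolic description; recognize it by correlating the whole input
# against a stored template (a crude stand-in for holographic matching).
def template_match(image, template, threshold=0.95):
    ma = sum(image) / len(image)
    mb = sum(template) / len(template)
    num = sum((x - ma) * (y - mb) for x, y in zip(image, template))
    den = (math.sqrt(sum((x - ma) ** 2 for x in image))
           * math.sqrt(sum((y - mb) ** 2 for y in template)))
    return num / den >= threshold

print(is_square([3, 3, 3, 3], [90, 90, 90, 90]))  # True

random.seed(0)
blob = [random.random() for _ in range(256)]  # an arbitrary complex pattern
print(template_match(blob, blob))  # True: a pattern matches its own template
```

The only point of the sketch is the asymmetry: the square needs three cheap tests, while the blob is handled as one whole pattern. As geometric complexity grows, the symbolic description explodes while the holistic comparison stays the same size, which is the efficiency trade-off Yevick formalized.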

The second article in this working paper, Three Principles of Intelligence (That Aren't Principles of Computation), discusses this in more detail. I had it generated by Claude 4.5 after a long series of dialogs over several days.

The last article is a chronology of events in the history of chess and language in AI.

Friday, February 27, 2026

Are You Going With Me? – Pat Metheny

Friday Fotos: An urban canyon in Hoboken, NJ

Two Ways to Use AI: Homo Economicus vs. Homo Ludens

The academy has a problem, and it's been getting worse for over a century.[1]

We organize knowledge into disciplines—history, psychology, neuroscience, linguistics, economics—each with its own journals, conferences, and vocabulary. This structure, inherited from 19th-century German universities, serves one purpose brilliantly: it lets specialists gather details efficiently within well-defined boundaries.

But knowledge doesn't respect boundaries. The most important questions—How does the mind work? What makes us creative? Why do societies change?—require insights from multiple disciplines. The pattern you need to see often spans several "bins" of specialized knowledge.

Here's the paradox: we've been talking about interdisciplinary work for decades. Universities have interdisciplinary centers everywhere. Yet the actual structure of academic life—hiring, promotion, publication, funding—still runs on disciplinary rails laid down 150 years ago.

Now we have large language models. And we face a choice about how to use them.

The Economicus Approach

One path is to use LLMs to amplify and accelerate current arrangements. Let's call this the Homo economicus approach—the economic human, focused on optimizing production.

In this mode, LLMs become tools for:

  • Writing literature reviews faster
  • Reviewing papers for journals more efficiently
  • Generating incremental research at scale
  • Producing more publications per year
  • Staying safely within disciplinary boundaries

This sounds productive. More papers, faster reviews, greater output. But it doubles down on exactly what's broken. We already produce too many narrow specialist papers that too few people read. Using AI to produce more of them faster just amplifies the dysfunction.

The economicus approach treats knowledge production like manufacturing: maximize output, minimize cost, optimize existing processes. Stay in your lane. Don't take risks. Generate the next incremental advance.

The Ludens Alternative

There's another path. Call it Homo ludens—the playing human, focused on exploration and discovery.

In this mode, LLMs become tools for genuine cross-disciplinary integration. Not producing papers, but discovering connections. Not automating existing processes, but enabling new formations.

Here's what this looks like in practice:

Strategic Search Across Disciplines

Say you're investigating how language develops in children. Traditional approach: read the developmental psychology literature, maybe venture into linguistics if you're bold.

Ludens approach with LLMs: "Find work from any field that addresses the relationship between motor development and symbolic capacity."

The LLM doesn't care about departmental boundaries. It surfaces relevant work from neuroscience, evolutionary biology, comparative psychology, and anthropology—connections that specialists, confined to their silos, would miss.

Constraint Satisfaction Across Domains

Rigorous integration requires checking whether your ideas satisfy constraints from multiple fields simultaneously. Is your model of language acquisition consistent with what we know about brain development? Does it align with evolutionary timescales? Does it match observed behavior?

An LLM can rapidly check these cross-domain constraints: "Does this cognitive science claim contradict findings in neurobiology? What about developmental timelines?" It doesn't replace judgment, but it surfaces contradictions and connections that would take months of reading to discover.
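Here is a minimal sketch of that kind of check. Everything in it is hypothetical: the claim format, the two constraint predicates, and the thresholds are invented for illustration. In practice the constraints would come from prompting an LLM against each field’s literature, not from hand-written functions:

```python
# A claim about language acquisition, reduced to a toy record.
claim = {"capacity": "recursive syntax", "onset_age_years": 1.5}

def neuro_constraint(c):
    # Hypothetical constraint: very early onset of syntax is implausible
    # given what is assumed here about brain maturation timelines.
    return c["onset_age_years"] >= 2.0 or "syntax" not in c["capacity"]

def behavior_constraint(c):
    # Hypothetical constraint: observed milestones place the relevant
    # behavior no earlier than about age one.
    return c["onset_age_years"] >= 1.0

# Check the one claim against constraints drawn from different fields.
constraints = {
    "neurobiology": neuro_constraint,
    "developmental psychology": behavior_constraint,
}

violations = [field for field, check in constraints.items()
              if not check(claim)]
print(violations)  # prints ['neurobiology']
```

The structure, not the content, is the point: a single claim is run past predicates that encode different disciplines’ constraints, and any field that flags a conflict is surfaced for human judgment.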

Pattern Discovery in Unexpected Places

The most valuable insights often come from recognizing that two fields are studying the same phenomenon with different vocabularies. LLMs excel at this kind of pattern matching across terminological boundaries.

"What work in any discipline addresses hierarchical control systems switching between modes?" The answer might come from neuroscience (neural modulation), robotics (control architectures), or organizational psychology (decision-making frameworks). These aren't citations to pad your bibliography—they're genuinely different perspectives on the same deep problem.

The Center-Out Method

Start with a specific case—a text, an event, a phenomenon—and radiate outward to topics it touches. An LLM can help map these connections systematically: given this particular case study, what frameworks from different disciplines illuminate different aspects of it? [2]

This mirrors how actual insight works: you're wrestling with something specific, and you need whatever intellectual tools help, regardless of which department developed them.

Why This Matters

The difference isn't just practical—it's philosophical.

Economicus treats LLMs as labor-saving devices. Do what we already do, but faster and cheaper. This keeps us trapped in the existing system, just at higher speed.

Ludens treats LLMs as exploration tools. Find patterns we couldn't see before. Make connections that disciplinary blinders obscured. Enable the integrative work that institutions make nearly impossible.

The economicus approach optimizes local maxima—you get better and better at what you're already doing. The ludens approach helps you find new maxima you didn't know existed.

The Play Element

There's a deeper reason the ludens approach matters: genuine discovery requires play.

Not play as opposed to serious work, but play in the sense of free exploration before commitment. Trying unusual combinations. Following tangential connections. Seeing what emerges without knowing in advance what you're looking for.

This is how children learn, how scientists make breakthroughs, how jazz musicians create. You need freedom to explore widely before you settle on what's worth pursuing seriously.

The economicus approach eliminates this exploratory freedom in the name of efficiency. It optimizes production, but production of what? More of what we already have.

The ludens approach embraces exploration. You're not trying to write the next incremental paper. You're trying to discover what you don't yet know you're looking for.

The Current Moment

Right now, institutions are moving toward the economicus approach. Using LLMs to review more papers, generate more text, process more grant applications. It's understandable—they're under pressure to handle increasing volume.

But this is a catastrophic missed opportunity.

LLMs are genuinely good at working across disciplinary boundaries. They don't have careers to protect or departments to represent. They can pattern-match across the entire literature without caring which journal it appeared in. They're natural tools for the kind of integrative work that the current system makes nearly impossible.

Using them instead to accelerate existing processes is like using the internet purely to send faxes faster.