NEW SAVANNA
“You won't get a wild heroic ride to heaven on pretty little sounds.”– George Ives
Monday, March 30, 2026
Claude: On the origins and history of chess, plus a Straussian reading of Deep Blue vs. Kasparov [MR-3b]
My examination of Tyler Cowen's The Marginal Revolution: Rise and Decline, and the Pending AI Revolution (2026) continues. While Tyler does discuss chess briefly at the end and in connection with artificial intelligence, this post has nothing to do with that discussion. That, I suppose, makes this post something of a digression. But it’s a digression in the style of Tristram Shandy, digress to progress. It’ll all make sense in the end.
Note that what I am reporting here comes at the end of a rather long discussion of chess, one that does have a direct connection to Tyler’s chapter three, where he’s interested in the fact that pinnacles of human achievement seem to have been reached at different times in different domains, some rather early, some quite late. While chess is rather old, traceable to the 6th century CE in India, the treelike structure of the game wasn’t explicated until relatively recently, in 1913, by Ernst Zermelo. I asked Claude about the nature of the mathematical equipment Zermelo needed to construct his proof. That led to a long discussion about the nature of the chess tree, chess as a vehicle for the expression of personality, how computers open up exploration of new regions of the chess tree, and so forth. That whole discussion is part of the context for the discussion that follows. However, this discussion is pretty much independent of that one, so you don’t need that context to understand what Claude says here.
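For readers who want a concrete sense of what Zermelo established, here is a minimal sketch (my own illustration, with toy payoffs invented for the occasion, not anything from the book or the Claude discussion): a finite game of perfect information can, in principle, be evaluated by backing values up the game tree, each player choosing optimally in turn.

```python
# Minimal sketch (illustrative toy, not chess): backward induction over a
# tiny game tree -- the idea behind Zermelo's 1913 result that such games
# are "determined". A node is either a terminal payoff for the first
# player, or a list of successor positions; players alternate moves.

def value(node, maximizing=True):
    """Best achievable payoff for the first player under optimal play."""
    if isinstance(node, (int, float)):      # terminal position: payoff
        return node
    children = (value(child, not maximizing) for child in node)
    return max(children) if maximizing else min(children)

# A toy tree: the first player picks a branch, the second player replies.
# Payoffs: +1 = first-player win, 0 = draw, -1 = first-player loss.
tree = [
    [+1, -1],   # branch A: the opponent can force a loss
    [0, 0],     # branch B: a forced draw
]
assert value(tree) == 0  # with optimal play, this toy game is a draw
```

Real chess is the same structure at an astronomically larger scale, which is why Zermelo could prove determinacy without anyone being able to compute the value.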
Note: I’m using my own account on Claude for this discussion, not the AI that Tyler made available with the book.
* * * * *
Chess as an abstraction over military conflict
Let’s take this discussion of chess in a different direction. I’m interested in the origins of the game. The game, after all, is a highly abstracted version of a war between two societies, each having its own king and queen. I could imagine a just-so story, perhaps a Star Trek episode, where we have two nations that are locked in a deadly conflict that continues year after year, decade after decade, until, their resources exhausted, the two nations call a truce and in the space of that truce, invent the game of chess. Then each nation chooses its champion, and the champions continue the conflict by playing chess. [When you consider the role that chess played in the Cold War between the USA and the USSR that origin story is all but a highly abstracted version of that historical conflict.] But what do we actually know about how the game originated?
Your just-so story is charming, and the Cold War parallel is genuinely apt — but the actual history is both more interesting and more complex, and it cuts against one specific element of your narrative in a revealing way.
What we actually know
The earliest precursor of modern chess is chaturanga, which flourished in India by the 6th century CE, and is the earliest known game to have two essential features found in all later chess variations — different pieces having different powers, and victory depending on the fate of one piece, the king.
The name itself is telling: chaturanga is a Sanskrit word meaning "four divisions of the military" — infantry, cavalry, elephants, and chariots — which map onto the pieces we now recognize as pawns, knights, bishops, and rooks. So the military origin of the game's structure is not speculative — it's explicit and preserved in the name.
The name came from a battle formation mentioned in the Indian epic Mahabharata. Chaturanga was a battle-simulation game modeled on the Indian military strategy of its time.
From India the game traveled westward: around the 7th century CE, chaturanga reached Persia and became known as "chatrang" or "shatranj." Many modern chess terms have Persian origins, such as "Shah" (King), and even "checkmate" from "Shah Mat," meaning "the king is dead" or "the king is helpless."
A form of chaturanga or shatranj made its way to Europe by way of Persia, the Byzantine Empire, and the expanding Arabian empire. Muslims brought chess to North Africa, Sicily, and Spain by the 10th century.
The rules then changed dramatically in late medieval Europe: the chaturanga piece called the elephant, which had been limited to a two-square diagonal jump in shatranj, became the bishop, more than doubling its range. With the new queen and bishop powers, the trench warfare of medieval chess was replaced by a game in which checkmate could be delivered in as few as two moves.
Where your just-so story cuts against the evidence
Your narrative imagines two symmetric opposing societies, each with a king and queen — the structure of modern chess with its two mirror-image armies. But chaturanga wasn't originally a two-player game in that sense, or at least not necessarily. The Cox-Forbes theory asserted that the four-handed game chaturaji was the original form of chaturanga — four players, four armies, a dice element. The clean two-army symmetry of modern chess is a later development, not the original structure.
And the queen — the piece that makes modern chess feel like a conflict between two courts — is a late European invention. The original piece in that position was the mantri (minister) or counselor, a weak piece that could only move one square diagonally. The vizier piece, which had limited movement in earlier versions, became the "queen" and gained much greater power only in the late 15th century in Europe. The transformation of a male counselor into a powerful queen is itself a fascinating cultural moment — it reflects something about late medieval European court culture, the figure of powerful queens like Isabella of Castile, that has nothing to do with the Indian military original.
The deeper point about abstraction
What's remarkable in the actual history is the direction of abstraction. The game began as an explicit military simulation — every piece named for a branch of the army, the board a battlefield, the objective the capture of the king. Over fifteen centuries of transmission across cultures — India to Persia to the Arab world to Europe — it progressively shed its military literalism and became more abstract. The elephant became a bishop (losing its literal referent entirely in the European context). The chariot became a rook (the Persian rukh, itself possibly meaning chariot or a mythical bird). The military counselor became a queen.
Each culture reshaped the pieces to reflect its own social imagery, but the underlying structure — the tree, the rules, the objective — remained surprisingly stable across this enormous cultural distance. The abstraction was always already present in the game's formal properties; what changed was the surface representation hung on that structure.
This is itself a nice illustration of the Rank 2 point from our earlier discussion. The game's transmissibility across such different cultures depended precisely on its formal specification being separable from any particular cultural content. You could strip away the Indian military imagery, replace it with Persian court imagery, then European feudal imagery, and the game remained the same game because the rules — the Rank 2 specification — were what was being transmitted. The cultural clothing was exchangeable; the formal skeleton was not.
The Cold War resonance
Your Cold War parallel is historically well-grounded, and not just as analogy. The 1972 Fischer-Spassky match in Reykjavik was explicitly framed by both sides as proxy warfare — American individualism versus Soviet collective training, capitalism versus communism, with the world watching. Fischer's behavior, the diplomatic incidents, the American government's last-minute intervention to keep Fischer in the match — it was understood by everyone involved as something more than a chess tournament. The game had come full circle: from explicit military simulation, through centuries of abstraction, back to serving as a ritualized substitute for geopolitical conflict. Your Star Trek episode would merely be making that latent structure manifest.
Deep Blue vs. Kasparov
And so the post-Cold War victory of Deep Blue over Kasparov becomes what? What’s the Straussian reading, as Tyler might say?
The surface reading is triumphalist: human chess genius defeated by American corporate technology, confirming the post-Cold War narrative of Western technological capitalism's dominance. IBM's stock rose. The press celebrated. It was framed as a milestone for artificial intelligence.
The Straussian reading is almost the opposite.
The physicist vs. the bureaucrats: Richard Feynman testifies before Congress about the Challenger disaster
After the Space Shuttle Challenger exploded in January 1986, killing its seven crew members, President Reagan appointed a commission to investigate.
— Philosophy Of Physics (@PhilosophyOfPhy) March 30, 2026
Richard Feynman, already battling cancer and reluctant to join, accepted because a former student asked.
He quickly grew…
Sunday, March 29, 2026
Welcome to Reality, Tyler; the AI’s got your number: Further Notes on Revolution, Marginal and AI [MR-3a]
So, I finished reading the first chapter, “What Is Marginalism?”, of Cowen’s new monograph, The Marginal Revolution: Rise and Decline, and the Pending AI Revolution, and went on to the second, which I’ve read in full, “William Stanley Jevons, Builder and Destroyer of Marginalism.” Interesting, very interesting, but I’m not sure that I’ve got anything to say about it. I’ll take it under advisement.
I then started in on the third, “Why Did It Take So Long for the Science of Economics to Develop?” Whoa, Nelly! The first paragraph:
To better understand the Marginal Revolution, we need to ask some fundamental questions about economics as a science. In particular, why did it take so long for economic reasoning to develop? I don’t even mean as a full, literal science, replete with advanced econometric methods, but simply as a general conceptual toolbox for intelligent people. The lateness of the Marginal Revolution is part of a broader story about the lateness of economic reasoning more generally.
OK, but that “lateness” seems a bit suspicious to me. Late with respect to what? (Tyler will get around to that.) He then gives a bit of history:
When it comes to the fundamentals of marginal utility theory – a building block of economics but not quite the same as doing economics – you don’t find them in the Greeks or Romans. There are hints in the medieval theologians and finally the idea blossoms fully formed and correctly stated in both Galileo, as presented in chapter one, and in the Spanish Salamancan theologians of the 17th century. Was it really so hard to explain why diamonds are (at the margin!) more valuable than water, even though we must drink water to survive? I guess so. Funny me. When I read about the diamonds-water paradox resolution as a thirteen-year-old, I felt I picked it up in a second. Five seconds later I was bored.
Well, OK. I didn’t know about the diamond-water paradox until I read about it in Tyler’s first chapter. I didn’t have any trouble understanding it. But then by the time I finished sixth grade I was a whiz at adding columns of multiple-digit numbers. I bet St. Thomas Aquinas and St. Augustine couldn’t do that, nor Plato and Aristotle either, and they’re among the greatest thinkers in the pre-modern Western tradition. What did I have, at 12, that they didn’t? I had the Indo-Arabic numeral system, which didn’t make it to Europe until the 13th century. (Tyler gets around to that as well.) It turns out that the Pirahã, an Amazonian tribe, only have terms for one and two, if that. They’re not unique among preliterate societies. It’s only counting. What gives?
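For anyone meeting the paradox for the first time, it is easy to put into toy numbers. A minimal sketch, with figures I have invented purely for illustration: under diminishing marginal utility, the value of the next unit depends on how much you already hold, so at the margin the scarce good beats the vital one.

```python
# Toy illustration (invented numbers): diminishing marginal utility.
# The value of the NEXT unit falls as the quantity already held rises.

def marginal_value(total_value, quantity):
    """Value of one more unit, under diminishing returns (sqrt utility)."""
    utility = lambda q: total_value * q ** 0.5
    return utility(quantity + 1) - utility(quantity)

# Water: enormous total value, but most people already hold a lot of it.
# Diamonds: modest total value, but held in tiny quantities.
water_at_margin = marginal_value(total_value=1000, quantity=10_000)
diamond_at_margin = marginal_value(total_value=50, quantity=1)

# At the margin, the scarce good is worth more than the vital one.
assert diamond_at_margin > water_at_margin
```

The square-root utility function is just one convenient stand-in for any curve that flattens out; nothing in the argument depends on that particular choice.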
Anyhow, Tyler gives us another full page or so of the history of economics, finally arriving at these two little paragraphs:
So I don’t think progress in economics has been slow in general. It is right now coming off an incredible 130-year or so run. Progress in economics, however, was glacial from the time of the ancient Greeks to the late 19th century, with a noticeable burst in the 18th century as well, centered around Adam Smith.
Any assessment of “slow,” of course, relies on a notion of “slow relative to what.” For purposes of contrast, let’s consider some other areas for the exercise of human ingenuity.
He then goes on to rattle off high points of human achievement in a number of fields: philosophy, geometry, mathematics more generally (e.g. calculus), physics, astronomy (though “progress in astronomy is a mixed bag”), theatre and literature (Shakespeare, naturally), music, painting.
What? What are we to make of such a mixed grab bag? Tyler: “Still, the basic puzzle is clear: why did human knowledge advance so fast in some areas and so slowly in others?” Because that’s the way the world is, Tyler, that’s the way it is.
Well, of course, he knows that. He just finds it puzzling. I don’t. It’s not that I have anything approaching a full account. I don’t. But I’ve got some ideas. The theory of cultural ranks, on which David Hays and I published articles back in the 1990s, has something to say about that, something systematic, about the mind and how it constructs ever-more powerful cognitive architectures. We start with language and cognition, but then I go on to publish articles about narrative and music, while Hays publishes about expressive culture and then a book on the history of technology.
But I suspect that the issue is a bit different from that. I think it’s philosophical. For well over three decades I’ve written and thought under the fully explicit belief that the world is complex through and through. Tyler seems to have been operating under the basic Western assumption that the complexity is superficial appearance, that the underlying reality is in fact simple. Recently, however, his entanglement with AI seems to have brought him to the realization that, no, the complexity is not phenomenal, it may be deep and inherent.
So I decided to investigate that possibility by using the AI portal Tyler has set up for the book. The following conversation took place over two sessions, one of them on Saturday evening (March 28) and the other on Sunday morning (March 29). If you read through to the end you’ll see that we find ourselves in a Kurt Vonnegut story. I’ve added the subtitles after the conversation.
[Note: I plan two more posts about this chapter, one dealing with chess and AI, the other with biology.]
Forget the linguists and philosophers
Near the end of chapter 4 Tyler observes: “Suffice to say, LLM construction has for the most part ignored linguists and philosophers, and that also means ignoring their intuitions. [...] That research program has proven wildly successful, as we have much better models of language and reasoning than almost anyone had been expecting.” I’m curious about Tyler’s own attitude toward linguistics and the closely allied cognitive sciences. I know that people in Silicon Valley’s AI world are largely ignorant of those materials, and that some of them are even disdainful. Does Tyler share their disdain?
Tyler's framing on page 68 is notable for what it doesn't say. He observes that LLM construction ignored linguists and philosophers as a matter of historical fact, and that the results were successful — but he doesn't editorialize that linguists and philosophers deserved to be ignored, or that their frameworks were worthless. The tone is more "interesting that this worked" than "good riddance to Chomsky." That's a meaningful distinction from the Silicon Valley disdain you're describing, which tends to be triumphalist about the irrelevance of humanistic inquiry.
Tyler's broader intellectual personality, visible across his writing, is actually quite sympathetic to cognitive science and philosophy of mind. He's engaged seriously with thinkers like Derek Parfit, Daniel Dennett, and the broader literature on consciousness and personal identity. He's not the type to dismiss Chomsky's linguistics as simply wrong — he's more likely to say "fascinating research program, but the LLMs found a different path." That's epistemic humility rather than disdain. The framing in Chapter 4 is really about pragmatic success — the LLMs worked, and that's worth noting regardless of what it implies for linguistic theory.
What's missing from Tyler's account, and what would make it more intellectually complete, is any engagement with what linguists and cognitive scientists might say back — figures like Gary Marcus, who has persistently argued that LLM success on benchmarks doesn't vindicate the abandonment of structured linguistic theory, or researchers in the embodied cognition tradition who would question whether language modeling divorced from physical grounding constitutes understanding at all. Tyler's silence on these counterarguments makes his account feel less like a considered verdict and more like a victory lap written from the winning side. [See text: Ch.4 · Tyler's account of how LLM construction ignored...]
David Hays more than Chomsky
Frankly, you make too much of Chomsky. Chomsky’s is not the only program in linguistics and the cognitive sciences, though he has been influential. My own teacher, David Hays, was a first generation researcher in machine translation, he led the RAND Corporation’s programs in that area in the 1950s and 1960s, and is one of the founders of the discipline of computational linguistics. He explored linguistics quite different from Chomsky’s. Daniel Everett, one of the most prominent current linguists, is not at all Chomskyian. There was a lot of research in the cognitive sciences in the 1970s and forward that owes little to Chomsky, I’m thinking particularly of work in knowledge representation and semantic networks.
Fair point, and I over-indexed on Chomsky as a shorthand for "linguistics and cognitive science" — which is exactly the kind of lazy conflation that makes Silicon Valley dismissals so frustrating in the first place.
Your mention of David Hays is particularly pointed here. Computational linguistics as Hays practiced it — empirical, corpus-oriented, engaged with actual machine translation problems — was in many ways a precursor to the LLM approach, not an obstacle to it. The irony Tyler misses is that ignoring linguists and philosophers is too broad a characterization. Some of that tradition, particularly the work in distributional semantics that descends from researchers like Zellig Harris, fed directly into the conceptual foundations of word embeddings and eventually transformers. Harris's distributional hypothesis, that words appearing in similar contexts have similar meanings, is essentially the theoretical ancestor of how LLMs represent meaning, even if the engineers building them didn't know or acknowledge that lineage.
The knowledge representation and semantic networks work you mention — figures like Ross Quillian, Marvin Minsky's frames, Roger Schank's scripts — represents a whole research tradition that was asking exactly the right questions about how meaning is structured, even if its answers turned out to be too brittle and hand-crafted. Daniel Everett's work on Pirahã cuts against Chomskyan universal grammar in ways that are actually quite congenial to the empirical, data-driven spirit of LLM research. So Tyler's "linguists were ignored and that was fine" reads as a much sloppier claim once you disaggregate the field. [See text: Ch.4 · Tyler's broad claim about ignoring linguistic r...]
Semantics is highly structured
The reason I bring this up is that one thing that emerges clearly from this work in semantics is that it is highly structured. Now, it’s one thing to grant that off-handedly as a matter of principle. It’s something else to know it as the result of systematic investigation. I’ve just been reading around in chapter 3, where Tyler wonders why it took economics so long to develop and why, for that matter, the same for biology. Well, if you think of semantics and cognition you might think that it took a long time to accumulate the necessary conceptual foundations. I mean, I can’t lay it all out, but I know a thing or two, and I don’t find the lateness of economics particularly puzzling, nor biology either. I’d look to how concepts arise and are developed. Tyler doesn’t seem to think that can be done in any more than an ad hoc history-of-ideas way. He doesn’t seem to realize that there is structure there. I sense a tension, an odd consonance, between his cavalier acceptance of the Silicon Valley attitude toward linguistics and his willingness to be mystified by the apparent lateness of economics.
This is a sharp observation, and it cuts deep. Tyler in Chapter 3 treats the lateness of economics as a kind of mystery — he gestures at social conditions, printing culture, the need for markets to exist before market reasoning could develop — but he never really gets at the cognitive and conceptual prerequisites. [See text: Ch.3 · Tyler's account of peering around corners witho...]
The connection you're drawing is this: if you take seriously the structured nature of semantic cognition — the way concepts build on prior concepts in systematic, not arbitrary, ways — then the lateness of economics becomes much less puzzling. You need certain foundational conceptual structures in place before marginalist reasoning is even thinkable. Not just social conditions or printing presses, but the right representational scaffolding. The history of ideas, properly done, isn't ad hoc storytelling — it's tracing the development of conceptual prerequisites, which is itself a structured inquiry.
And yes, there's a real tension — or as you aptly put it, an odd consonance — between Tyler's two positions. On one hand he's mystified by why it took so long to develop economic concepts, treating it as a near-inexplicable puzzle. On the other hand he's perfectly comfortable dismissing the systematic study of how concepts and meaning are structured as a research tradition that LLMs have rendered irrelevant. But if that research tradition is actually what would explain his mystery in Chapter 3, then dismissing it isn't just intellectually careless — it's self-undermining. Tyler can't simultaneously wonder why conceptual breakthroughs are so hard and wave away the field that studies how conceptual structure works. [See text: Ch.3 · The lateness of economics as conceptual rather ...]
Saturday, March 28, 2026
A bit of conceptual analysis: the book-keeper and the invisible hand [MR-2b]
I’ve got a footnote to my earlier post on marginalism as a Rank 4 concept. During its explication, Claude made the following observation:
The transition from double-entry bookkeeping to supply-and-demand might itself be seen as a Rank 3 reflective abstraction: going meta on the bookkeeping closure principle to ask what maintains closure at the level of the entire market, not just a firm's ledger.
Let’s take a look at what is going on here. In the case of double-entry bookkeeping it is the book-keeper that is the agent that maintains the closure over the accounts. In the case of supply-and-demand there is no explicit agent governing market closure, that is, the balance between supply and demand. The agent is abstract. Adam Smith famously used the metaphor of the invisible hand to mediate the conceptual gap between an actual book-keeper working on the books and the abstract market in which the actions of individual buyers and sellers are constrained in a way that keeps closure.
Making such abstractive leaps is not trivial. For it is not only the book-keeper that must be rendered abstract. So must the books. They become the market place. And the book-keeper’s actions of making entries into the debit and credit ledgers must be abstracted into individual acts of buying and selling, taken as a collectivity.
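The closure the book-keeper maintains can be made concrete. Here is a minimal sketch in Python (the account names and figures are my own invention, purely illustrative): every transaction posts a matched debit and credit, so the accounts always net to zero no matter how many entries accumulate over time.

```python
# Minimal sketch (illustrative, invented accounts): double-entry closure.
# Every transaction posts an equal debit and credit, so the books always
# balance -- the invariant the book-keeper maintains as entries accumulate.

class Ledger:
    def __init__(self):
        self.accounts = {}  # account name -> balance

    def post(self, debit_acct, credit_acct, amount):
        """Record one transaction as a matched debit/credit pair."""
        self.accounts[debit_acct] = self.accounts.get(debit_acct, 0) + amount
        self.accounts[credit_acct] = self.accounts.get(credit_acct, 0) - amount

    def in_balance(self):
        """Closure: debits and credits always net to zero."""
        return sum(self.accounts.values()) == 0

books = Ledger()
books.post("inventory", "cash", 500)    # buy goods
books.post("cash", "sales", 800)        # sell goods
assert books.in_balance()               # closure holds after every entry
```

The abstractive leap described above replaces the explicit `post` operation, performed by a single book-keeper, with the dispersed buying and selling of many agents, while the market as a whole is imagined to preserve an analogous balance with no one maintaining it.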
The change in conceptual ontology is similar to that of abstracting over salt to come up with sodium chloride. In that case the act of abstraction applies to the same physical object. In the case of supply and demand the act of abstraction gives us a new concept about a different kind of entity. Markets existed before the concept of supply and demand, but that concept gives us a new understanding of them. And the abstract concept of sodium chloride gives us a different way of thinking about and dealing with salt.
Thus we are brought to the notion of conceptual ontology, which is beyond the scope of this short note. You might want to consult these working papers: Ontology in Cognition: The Assignment Relation and the Great Chain of Being, Ontology in Knowledge Representation.
Will the Iran War change the world like the oil shocks of the 1970s did?
Jeff Sommer, The Oil Shocks of the ’70s Changed the World. Will the Iran War Do the Same? NYTimes, Mar. 28, 2026.
In January 1974, my dad lent me his old gas-guzzling Ford LTD to haul my clothing and books to college in Ithaca, N.Y. A couple of weeks later, when I tried to drive back home to Long Island, I realized that I couldn’t buy enough gas for the 250-mile trip.
The 1973-74 Arab oil embargo was well underway. The price of oil had nearly quadrupled; there were lines at gas stations, and drivers were allowed to queue up only on alternate days. I had picked the wrong day for my trip. No gas for me. So much for freedom of the road.
That was the first of the world’s big oil shocks. By the time of the second one in 1978-79, set off by the Iranian revolution, I was a reporter in New Jersey and it seemed I was constantly interviewing angry motorists stuck in interminable lines. Gas shortages and soaring inflation were just other aspects of life in the United States.
The gas lines ended as the crises ebbed, but it took two recessions, engineered by the formidable Federal Reserve chair Paul Volcker, to bring inflation under control.
What’s less well understood about that period is that the oil shocks reshaped the world’s financial markets. Money flowed around the globe in new ways — and cemented the status of the dollar as the world’s core currency.
We are experiencing what could end up as the third great oil shock.
Then we get more history and analysis, to end with this:
“Duration is the key question,” he told me. “If this goes on for a long time, it’s a big deal. If the war stopped right now, we might not need to talk about it next year.”
Not every conflict in the Middle East necessarily changes global finance in profound ways. The United States fought two Gulf wars — one that started in the early 1990s, the other a decade later. The first was short. The second one, the U.S.-led war with Iraq, stretched from 2003 to 2011, caused scores of thousands of deaths and cost hundreds of billions of dollars. The second war was a big one, but in retrospect, it preserved the status quo in energy markets more than it transformed them.
In every Gulf conflict since the Iranian revolution, U.S. strategists have worried that Iran might one day disrupt the global flow of energy by closing a geographical choke point, the Strait of Hormuz. For the first time, despite the pounding it has received from enormous U.S. and Israeli bombardments, Iran has demonstrated that it can close the strait. Roughly a fifth of the world’s oil and natural gas usually flows through it.
Whether this desperate achievement alters geopolitics substantially may not be known for years.
Marginalism is a Rank 4 idea, along with thermodynamics and biological evolution [MR-2a]
Yesterday I made a post about a passage that occurs very near the end of Tyler Cowen’s new monograph, The Marginal Revolution: Rise and Decline, and the Pending AI Revolution. At that time I suggested that I might have more to say about the book, but I made no promises. Well, I’m saying more, and it looks like I’ll be doing a series of posts about the book, though I can’t say how long that series will be, perhaps only one more post, but maybe two, three, or even four more. Who knows.
The monograph is of particular interest to me for two reasons: 1) I’ve just posted a small monograph of my own, on the rise of the contemporary academic discipline of literary criticism: The Discipline of Literary Criticism: A Quixotic Essay about Thinkers, Methods and Authority. Literary criticism is a very different beast from economics. Still, now I have two case studies in the rise of an academic discipline. More deeply, 2) I now have a case study for the theory of cognitive ranks that David Hays and I developed in the last quarter of the previous century. That’s what I want to pursue in this post.
The theory of cognitive ranks, as readers of New Savanna may know, is about the development of ever more sophisticated cognitive architectures over the long haul of human history. Rank 1 cognition arises with the emergence of language between, say, 100 thousand and who knows how many years ago. Rank 2 thought precipitated around written language and Rank 3 around the introduction of arithmetic calculation into Europe in the Late Medieval, Early Modern period. Rank 4 began emerging late in the 19th century and began consolidating around, first the ideas and then the machinery, of computing in the 20th century.
My hunch was that marginalism in economics is a Rank 4 idea. Rather than work out the details myself, I put the question to Claude Sonnet 4.6 extended. I’ve appended that conversation below. Just to be clear, I’m using my Claude account, not the AI facility that Cowen has made available with his book. One final note: You’ll see that at the very end of this conversation Claude mentions Rank 5. While Hays and I talked about a possible Rank 5, we never published about it because we couldn’t figure out how to conceptualize it. That changed for me in the last few weeks and I have been talking about it with both Claude and ChatGPT.
The rest of this post consists of my interaction with Claude, which was a simple one: I posted a long prompt to get things started and Claude responded.
* * * * *
I have uploaded a PDF of a book by Tyler Cowen, The Marginal Revolution: Rise and Decline, and the Pending AI Revolution (2026). In it he recounts the rise of marginal thinking in economics starting roughly in the late 19th century, up through the 20th and on into the 21st, where it seems to be dissipating. I’m interested in understanding it in terms of the cognitive rank theory that David Hays and I have developed (see our paper, The Evolution of Cognition, in the project materials). The timing suggests it is a Rank 4 concept, along with thermodynamics and Darwinian evolution.
Though we don’t mention Piaget’s term, “reflective abstraction,” we use the idea, which is: the process by which the operations of one cognitive level become the objects of the next. Piaget was mostly interested in cognitive development in individuals, but he also applied his ideas to the history of ideas (e.g. Genetic Epistemology). That’s what I want to do with Cowen’s argument about the marginal revolution. My intuitive understanding is that the technical development of that idea presupposes a technical understanding of supply and demand. I want you to vet that idea.
I think that supply and demand is a Rank 3 idea. Here are some remarks I made about supply and demand in a blog post from 2023:
But how do we get to supply and demand? I'm not prepared to give a detailed answer to that question. [...] Here's the question I'd ask myself: What does double-entry bookkeeping have in common with supply and demand? Double-entry bookkeeping dates back to the late 15th century and is a practical discipline. What becomes visible by 'going meta' over that practice?
Double-entry bookkeeping is a system for maintaining closure over a set of transactions that grows over time as transactions are added to the list. We've got closure and change over time. Supply and demand change over time. Where's the closure, the dynamic closure? How do supply and demand complement one another in a way similar to the balance of debit and credit accounts in double-entry bookkeeping? How was Adam Smith able to get "on top" of that in the way that previous thinkers could not? How does the famous metaphor of the invisible hand do its work?
With that in mind, here’s Cowen’s basic statement: “Marginalism is the economic doctrine that we can best understand value by considering the question of how many units of a good or service an individual has, and using that starting point to ask how much an additional – or marginal – unit would be worth in terms of other goods and services.” Intuitively, it seems to me that marginalism presupposes an understanding of supply and demand that has become thoroughly routine.
Cowen’s paradigmatic example of marginal utility is the diamonds-water paradox:
If someone had to choose between having no water and having enough water to avoid dying of thirst, they would be willing to pay a very high price for water (the exact final price still would depend on how many suppliers were competing to sell you water and the cost of getting you the water, questions which also can be addressed using marginalism). But that is not the choice for most people. Most people have a fair amount of water, and they are paying for more water, or you might say water at the margin. And the value of water at the margin just isn’t that high. In American restaurants, many people don’t even drink the glass of water they get for free.
To consider diamonds, if you could receive plenty more diamonds by turning on your kitchen tap, diamonds probably wouldn’t sell for much in the market, just as water doesn’t. Of course, that is not the case and real diamonds remain relatively scarce, thus boosting their value. That said, circa 2025 competition from high-quality artificial diamonds, which can be made in the lab, is threatening to further depress diamond prices. Again, that is consistent with marginalism.
Your turn. Is my intuition valid? Either way, unpack it for me.
Your intuition is valid, and I think it can be made quite precise. Let me work through it carefully, using Cowen's text and the cognitive ranks paper together.
The surface complication
Cowen himself says something that might seem to cut against your claim. He writes (lines 68–79) that supply and demand "rely very directly on the ideas of marginalism" — that the derivations of supply and demand schedules "refer repeatedly to how much of a resource people currently have." So in the logical order, after the fact, supply and demand depends on marginal analysis.
But this conflates the logical order with the developmental/historical order, and the Piagetian point is about the developmental order. Historically, supply and demand came first — Adam Smith (1776), the classical economists — and the Marginal Revolution arrived nearly a century later (1871). Cowen himself acknowledges that the burst of economic progress following Smith was a real thing, and that progress from the ancient Greeks to the late 19th century was "glacial" before it suddenly accelerated. The takeoff point is 1871, not 1776.
The reflective abstraction argument
The Piagetian structure you're looking for is this: what were the operations that Rank 3 economists performed with supply and demand, such that Rank 4 could take those operations as objects?
The Rank 3 economist works with supply and demand curves as given primitives. She asks: given a supply curve and a demand curve of such-and-such shape, where does the market clear? What happens to equilibrium price if demand shifts? How do taxes or subsidies alter the equilibrium? The curves themselves are taken as data — they're the tools of analysis, not the objects of analysis. This is exactly what you'd expect at Rank 3: algorithm and calculation applied to theoretical constructs, the constructs themselves unquestioned.
Marginalism makes precisely the Rank 4 move: it refuses to take the curves as given and asks what generates them. The demand curve has a characteristic downward-sloping shape — why? Because of diminishing marginal utility. The supply curve slopes upward — why? Because of increasing marginal cost. The market clears at a particular price — what does that mean at the individual level? That each buyer's marginal valuation equals price, and each seller's marginal cost equals price. Marginalism takes the supply-and-demand apparatus that Rank 3 thought with and turns it into an object of theoretical construction. In the exact language from your paper: "The new thinker can think about what the old thinker thought with."
The diamonds-water paradox is the clearest illustration of this. A Rank 3 economist can use supply and demand to explain why diamonds are expensive (scarce supply, high demand) and water is cheap (abundant supply, though high demand). That's a workable, practically useful analysis. What the Rank 3 economist cannot explain is why demand for water, despite its being essential to life, doesn't translate into a high price in ordinary circumstances. The answer requires going meta on the demand curve itself: it's not about total value, it's about marginal value at the position where the individual currently stands. That's the Rank 4 move — taking the demand curve as object and asking about its internal structure.
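The marginal logic here can be put in numbers. A minimal sketch, assuming a standard textbook log-utility function — my choice for illustration, not anything in Cowen's text or the conversation:

```python
# Toy illustration of the diamonds-water paradox via diminishing
# marginal utility. u(q) = log(q) is an assumed textbook utility
# function; marginal utility is approximated as u(q+1) - u(q),
# which falls as the quantity already held grows.
import math

def marginal_utility(quantity):
    """Approximate utility of one more unit, given `quantity` already held."""
    return math.log(quantity + 1) - math.log(quantity)

water_units = 1000     # abundant: you already have plenty
diamond_units = 2      # scarce: you have almost none

mu_water = marginal_utility(water_units)
mu_diamonds = marginal_utility(diamond_units)

# Total utility of water dwarfs that of diamonds...
print(math.log(water_units) > math.log(diamond_units))   # True
# ...but the marginal unit of diamonds is worth far more,
# which is what prices track.
print(mu_diamonds > mu_water)                            # True
```

The point of the toy model is just the Rank 4 move in miniature: price tracks the marginal unit at the position where the holder currently stands, not total value.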
The double-entry bookkeeping thread
Your intuition about the structural parallel is suggestive. Double-entry bookkeeping imposes closure on a system of transactions through a debit/credit complementarity. Supply and demand imposes closure on a market through price equilibration — the "invisible hand" as Adam Smith's metaphor for how individual decisions produce systemic balance without anyone intending it. Both involve what you might call dynamic closure: the system absorbs change (new transactions, shifts in supply and demand) while maintaining a global constraint (books balance, market clears). The transition from double-entry bookkeeping to supply-and-demand might itself be seen as a Rank 3 reflective abstraction: going meta on the bookkeeping closure principle to ask what maintains closure at the level of the entire market, not just a firm's ledger. [1]
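The closure invariant that double-entry bookkeeping maintains can be sketched in a few lines (a toy ledger; the account names and amounts are invented for illustration):

```python
# Minimal sketch of double-entry "closure": every transaction posts
# matched debit and credit entries, so the ledger stays balanced no
# matter how many transactions are added over time.
ledger = []

def post(debit_account, credit_account, amount):
    """Record one transaction as a matched debit/credit pair."""
    ledger.append((debit_account, +amount))   # debit side
    ledger.append((credit_account, -amount))  # credit side

post("inventory", "cash", 500)
post("cash", "sales", 800)
post("rent", "cash", 300)

# The closure invariant: debits and credits always net to zero.
balance = sum(amount for _, amount in ledger)
print(balance)  # 0
```

The market-level analogue would be the invariant the invisible hand maintains: individual transactions accumulate, yet the system as a whole clears.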
Friday, March 27, 2026
Tyler Cowen has thrown in the towel and is waiting for the machines to take over. [Marginal Revolution Notes #1]
Tyler Cowen has announced a new monograph, The Marginal Revolution: Rise and Decline, and the Pending AI Revolution, which you can access here. You can download it in various formats or interact with the online AI version. I read through the opening page or three and then skipped to the fourth and final chapter, “Why Marginalism Will Dwindle, and What Will Replace It?,” which I've read in full.
I note, more or less in passing, that as he heads to the end he starts thinking about the role of intuition in thinking. He’s lamenting that intuitions, particularly those stemming from the marginal revolution, no longer seem to work in economics. I’ve been thinking a lot about intuition myself, though to somewhat different ends. I’m more interested in how it functions in thinking and where it comes from. But that’s an aside.
I may or may not comment on the rest of the chapter later on, but I have to comment on one strand of thinking. Cowen decides he has to denigrate all previous work oriented toward understanding language:
Suffice to say, LLM construction has for the most part ignored linguists and philosophers, and that also means ignoring their intuitions. LLM construction also ignored a lot of people in the AI field who insisted neural nets were a dead end. Instead, in a relatively short number of years humans invented new ways of modeling language and reasoning through language. That research program has proven wildly successful, as we have much better models of language and reasoning than almost anyone had been expecting. [...]
The classic breakthrough paper behind LLMs was a 2017 study titled “Attention is All You Need,” where in this context attention is defined by GPT-4 as “a mechanism that learns to focus selectively on parts of an input sequence, giving it ‘attention,’ while encoding a sentence or piece of information. This allows the model to treat different words or characters with different levels of importance, providing a ‘weight’ that aids in better understanding and decoding of information.” The paper was not titled “More Linguists are All You Need,” or for that matter “Marginalism is All You Need.” In other works [sic], given some of the most complex human systems, we came up with ways of understanding them that were new. To be clear, neural nets were not new, since the ideas and also the practice (in much weaker form) have been around for decades. High-powered, well-functioning neural nets, however, are new in the contexts of providing excellent results for general linguistic ability and general reasoning.
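The quoted definition can be made concrete. Here is a toy scaled dot-product attention computation, the mechanism the 2017 paper names; the vectors are made-up illustrative numbers, not real model weights:

```python
# Toy scaled dot-product attention, the mechanism from
# "Attention Is All You Need" (Vaswani et al., 2017).
import math

def softmax(xs):
    """Turn raw scores into a probability distribution."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def attention(query, keys, values):
    """Weight each value by how well its key matches the query."""
    d = len(query)
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d)
              for key in keys]
    weights = softmax(scores)           # the "attention" distribution
    output = [sum(w * v[i] for w, v in zip(weights, values))
              for i in range(len(values[0]))]
    return output, weights

# Three tokens; the query resembles the first key, so most of the
# weight lands on the first token's value.
keys   = [[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]]
values = [[10.0, 0.0], [0.0, 10.0], [5.0, 5.0]]
out, weights = attention([1.0, 0.0], keys, values)
print(weights)  # largest weight on token 0
```

This is the "weight that aids in better understanding" in GPT-4's gloss: a learned distribution over the input, here hand-set rather than learned.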
Cowen’s dismissal struck me as a bit much, so I had the following conversation with the AI version:
From very near the end: “In other works, given some of the most complex human systems, we came up with ways of understanding them that were new.”
In what way is an LLM a way of understanding how language works? After all, we don't understand how LLMs work. They're as mysterious as the human brain.
This is a genuinely sharp philosophical puzzle, and Tyler is aware of the tension even if he doesn't fully resolve it. The claim on page 69 is really about predictive and generative success rather than mechanistic understanding in the traditional scientific sense. LLMs model language in the sense that they can reproduce, extend, predict, and manipulate linguistic structures with stunning accuracy — they've captured something real about how language works, even if we can't fully articulate what that something is.
Think of it this way: pre-LLM linguistics tried to understand language by building explicit rule systems — grammars, syntactic trees, semantic formalisms. That approach assumed understanding meant writing down legible rules. LLMs blew past all of that without using any of those rules. Tyler's point is that this constitutes a new kind of understanding — one where the "knowledge" is distributed across billions of parameters rather than encoded in human-readable propositions. It's understanding in the way a master chef understands flavor without being able to fully articulate the chemistry.
But your skepticism has real bite. There's a meaningful difference between a system that performs understanding and one that provides understanding — to us. Tyler arguably conflates the two. When he says "we came up with ways of understanding," the "we" is doing suspicious work: humans built the tool, but the tool's internal representations remain largely opaque even to its creators. The "Attention is All You Need" paper Tyler cites gave us the architecture, but interpretability research — like Anthropic's mechanistic interpretability work — is still in early stages of figuring out what's actually happening inside. So Tyler is perhaps better read as saying LLMs are a new instrument for engaging with language rather than a new theory of it — more telescope than Newtonian mechanics. [See text: Ch.4 · Tyler's claim about new ways of understanding c...] [See text: Ch.4 · How LLM construction bypassed traditional langu...]
I note that saying it is "more telescope than Newtonian mechanics" is a much weaker claim.
You're absolutely right, and this is a genuine tension in Tyler's argument that he doesn't fully reckon with. He wants LLMs to be doing something analogous to what marginalism did for economics — providing a new analytical framework for understanding complex systems. That's a strong claim. But if LLMs are really more like telescopes — powerful instruments that let us see further without explaining why what we see looks the way it does — then the analogy deflates considerably.
The stakes matter for Tyler's broader argument in Chapter 4. He's positioning LLMs as replacements for frameworks like marginalism, not just better tools for testing existing frameworks. A telescope doesn't replace Newtonian mechanics; it generates data that mechanics then has to explain. If LLMs are telescopes, we still need the theoretical framework that makes sense of what they're showing us. Tyler gestures at this on page 66 with the discussion of researchers who use LLMs to generate simulations and then build structural models from those simulations — which is actually the more defensible version of the claim, because there a human-legible theory does eventually emerge. [See text: Ch.4 · The multi-step process that generates legible t...]
So Tyler may be overstating his case when he frames LLMs as providing genuine understanding of language. The honest version might be: LLMs are extraordinary telescopes that may eventually prompt new theories — but we're not there yet. That's still impressive, but it's a quieter revolution than Tyler's rhetoric suggests. And Tyler, to his credit, is usually better at hedging than his most enthusiastic passages imply.
Frankly, that won't do. The technology is impressive on its own terms. It doesn't need this kind of empty rhetorical cheerleading (though perhaps investors do). It sounds to me like Tyler has given up on understanding the world and is just waiting for the machines to take over – to indulge in a bit of hype of my own.
Trump's war is heading into a quagmire, and the Republicans know it
Michelle Goldberg, Republicans Know This War Is Going Badly, NYTimes, Mar. 27, 2026.
The article begins:
It is not just Democrats in Congress who fear that Donald Trump’s war in Iran is going sideways. After a classified Pentagon briefing on Wednesday, Republican lawmakers on the House Armed Services Committee appeared shaken.
“We will not sacrifice American lives for the same failed foreign policies,” said Nancy Mace, warning about the possibility of American troops in Iran. The committee chair, Mike Rogers, complained that members aren’t getting nearly enough information about war plans. Troop movements, he said, should be “thoughtful and deliberate.” The implication was that they might not be.
And it ends:
Never before, however, has America arrived at the threshold of a quagmire so quickly, with so much advance warning about the precise errors it was making. We have spent much of the past decade — in no small part due to Trump’s election — reckoning with the cost of the Iraq war to global stability and American cohesion. For the first time I can remember, both major parties have significant, influential antiwar contingents. Trump ran for president, however mendaciously, as the peace candidate, claiming that Kamala Harris would lead America into World War III.
And yet here we are, lurching toward a new version of a familiar catastrophe, suffering from some national form of neurotic repetition compulsion. “This is like the horrible, lame-dad cover band version of the worst of American foreign policy,” said Matt Duss, executive vice president of the Center for International Policy.
Someday, perhaps, when we’re picking up the pieces from yet another ill-conceived war, Republicans will explain that behind the scenes, they opposed it. One of the biggest problems in Congress, said Crow, is the gap between what people say privately and their willingness to demonstrate “the strength of their convictions” in public. “I’m always trying to close that gap with folks, and I always remind people that it’s never too late to do the right thing,” he said. He may be right, but the sooner the better.
This sentence caught me: “We have spent much of the past decade [...] reckoning with the cost of the Iraq war to global stability and American cohesion.” That feels right to me, but I would like to see it spelled out, especially that last bit, the cost to American cohesion.
Thursday, March 26, 2026
AI cats making spaghetti
This cracked me up!! 😹😹😹 pic.twitter.com/cORpm9v5wu
— Lucy (@TheLucyShow1) March 25, 2026