Friday, April 3, 2026

Friday Fotos: A long look back, potpourri

The hegemon no longer, the USA as one nation among others

I've long thought that the post-Soviet situation, with the USA as the world's dominant hegemon, was unstable, a mistake. For one thing, it forces the USA to devote a huge chunk of its national resources to the military-industrial complex, which amounts to burying that wealth in the sand. There are better things to do with those resources, both here and abroad. And it forces other nations into perpetual obsequiousness, which is undignified.

But how do we get out of that situation? The answer seems to be: Trump. In attempting to enforce America's will, which is to say his own will, on the world, he's forcing other nations to abandon us and stand up for themselves.

Ana Swanson, Rival Nations Seize On Choke Points to Counter Trump, NYTimes, Apr. 2, 2026.

President Trump has unapologetically wielded the power of the United States on the global stage, taking a much more belligerent approach economically and militarily to try to dictate the actions of other countries.

From high tariffs to the war with Iran, Mr. Trump has claimed that this aggressive behavior internationally has only upsides, and that past leaders were fools for refusing to tap into America’s power.

But one clear drawback of the strategy is emerging. While many countries have acceded to the president’s demands, some have found a highly effective new way to fight back. Mr. Trump’s aggression has given them the opportunity to test their control over choke points, threatening the United States and the global economy.

One such choke point is the Strait of Hormuz. Iran, which accounts for less than 1 percent of global economic output, has control over the shipping lane that transports a fifth of the world’s oil and gas. Its closure since the United States and Israel began attacking Iran at the end of February has blocked shipments of fuel, fertilizer and other goods, sending gas prices sharply higher and spreading anxiety among U.S. farmers and manufacturers.

Another experiment in retaliatory coercion began one year ago on Thursday, when Mr. Trump walked into the Rose Garden and unveiled tariffs on what he called “Liberation Day.” While many governments — even powerful economies like the European Union — complied with U.S. demands, China was a notable exception. Beijing rolled out a licensing system for exports of rare-earth minerals and magnets that has given China unparalleled control over the global manufacturing system.

Whoops! Maybe the sheer complexity of the existing world system is making Trump's hegemonic dreams utterly impossible. It's like the mythical hydra: chop one head off and three more crop up. Are we actually moving toward a really new world order?

There's more at the link.

Fanta Orange, have another sip

The Happiness Crash of 2020

Peltzman, Sam, The Happiness Crash of 2020 (March 13, 2026). Available at SSRN: https://ssrn.com/abstract=6465460

I document a sudden, sharp and historically unprecedented decline in self-reported happiness in the US population. It occurred during 2020, the year of the Covid pandemic, and mainly persists through 2024. This happiness crash spread across nearly all typical demographics and geographies. The happiest groups pre-Covid (e.g., whites, high income, well-educated and politically/ideologically right-leaning) tend to show the largest happiness reductions. The glaring exception is marital status, which has consistently been an important marker for happiness. The already wide happiness premium for marriage has, if anything, become slightly wider. With both married and unmarried reporting large declines in happiness, the country has become segregated: slightly over half (the married adults) remain happy on balance; the unmarried, nearly half, are now distinctly unhappy. I also show that across a number of aspects of personal and social capital post-Covid deterioration is the norm, including a collapse of belief in the fairness of others and of trust in the US Supreme Court.

H/t Tyler Cowen.

Thursday, April 2, 2026

How people actually use AI

The whole thread is worth reading. 

More this and that from March

AI and Human Happiness

Tyler Cowen interviews Arthur Brooks:

Arthur Brooks reckons he’s on the fourth leg of a spiral-shaped career: French horn player, economist, president of the American Enterprise Institute, and now Harvard professor and evangelist for the science of happiness. His new book, The Meaning of Your Life: Finding Purpose in an Age of Emptiness, argues that happiness isn’t a feeling but a combination of enjoyment, satisfaction, and meaning — the macronutrients of happiness, he calls them — and that most of us are gorging on the wrong ones. Tyler, naturally, wants to know: what’s the marginal value of a book on happiness, and what does spiral number five look like?

Along the way, Tyler and Arthur cover how scarcity makes savoring possible and why knowing you’ll die young sharpens the mind, what twin studies tell us about the genetics of well-being and why that’s not actually depressing, the four habits of the genuinely happy, the placebo theory of happiness books, curiosity as an evolved positive emotion, the optimal degree of self-deception, why Arthur chose Catholicism rather than Orthodoxy, what the research says about accepting death, how he became an economist via correspondence school, AI’s effect on think tanks, the future of classical music, whether Trumpism or Reaganism is the equilibrium state of American conservatism, whether his views on immigration have changed, what he and Oprah actually agree on, which president from his lifetime he most admires, Barcelona versus Madrid, what 60-year-olds are especially good at, why he’s reading Josef Pieper, how he’ll face death, and much more.

The AI and happiness segment from all that is below the asterisks. And I've added a coda about AI and asking the right question. Oh, and the sharp right-brain/left-brain stuff seems a bit old. 

* * * * *

COWEN: How will artificial intelligence influence politics and the political spectrum? Easy question, right?

BROOKS: It’s a good question. I agree. One of the questions I get mostly, as you can imagine, because of my work, is how is it going to affect happiness? I’ve been thinking about it an awful lot. We discussed a minute ago the hemispheric lateralization theory of Iain McGilchrist, that the right side of the brain is the mystery and meaning and why questions of life, and the left hemisphere of the brain largely adjudicates the what and how to and engineering and technological questions of life. The problem is that in modern society, we’ve been kicking everybody into the left hemisphere of the brain and walling off the right hemisphere because people spend all day on their screens, and the hustle and grind-engineered Silicon Valley culture that we live in has actually eliminated a lot of the mystery and meaning from life and the incentive to actually ask those questions.

That’s a lot of what my new book is about, is how to get back to the right side of your brain, as a matter of fact. Now, AI is a magnificent extension of the left hemisphere of your brain. It’s a how-to and what engine, but it’s not a why engine. Any real why question that matters, you can’t put into ChatGPT and get something meaningful to you to say, “Why am I alive? For what would I be willing to give my life?” You put that into ChatGPT, it’ll start by buttering you up and telling you what a smart question it is, and then it’ll tell you how five different people have answered that question, and you’re left completely unsatisfied as a result of that.

The answer to the basic happiness question, which is an adjunct, which is next to the political questions, I think, is that if you use it for left brain things to free up your time and then go over to the right brain side of your life with your love and your faith and your relationships and beauty and suffering, then your life’s going to get better. It’s a very real possibility, Tyler, that this is what’s going to happen in economics and politics today.

If we went back 150 years or a little bit more, people would say, “Oh, the Industrial Revolution is going to permanently ruin society because it’s urbanizing and people don’t know each other and the traditional folkways are going away.” It had some rough transitions, to be sure, by bringing in market economics and division of labor and specialization, et cetera. The end of the day was a middle class and the weekend. That’s not the fruit of labor unions. That’s the fruit of the Industrial Revolution and the amazing largesse that it created through capitalism.

COWEN: That took a long time, right? This may be 70 years at the interim. What’s our interim going to look like? Will it be more nostalgia and more small-C conservatism?

BROOKS: Yes. I think it will be. I think this will be speeded up. I think within 20 years that we will have something like the post-industrial equivalent of the fruit that was wrought by the Industrial Revolution, that that’s what we’ll see from what’s going on today.

COWEN: Do you think the classical liberal view on AI should be that we don’t much regulate it or that we regulate it like a national security object, the way we might regulate atomic bombs?

BROOKS: I don’t know. I’m wrestling with that, and I don’t know the answer. Can you give me your opinion, please? Otherwise, I’m going to have to just go to your blog and look at it and form my own opinion on the basis of that.

COWEN: I think for now, we don’t know how to regulate it, and it’s changing more quickly than Congress can act intelligently. Maybe Congress cannot, at the moment, act intelligently at all. I’ll say hold off, but leave open the option because we might need to in some important ways.

* * * * *

COWEN: How long from now do you think it will be before an AI model with good prompting will write a better policy study than, say, the 70th percentile quality study from a good think tank?

BROOKS: That’s a good question. I think AI at not-too-distant future, will do a better job at executing policy analysis, but will do a very poor job at asking the right policy questions. I think that that’s going to be the comparative advantage, is the creativity and the human impulse, the curiosity that humans actually bring to it. The reason is because all large language models are being trained to what people have already done, ideas that people have already had.

What we’re incredibly good at using the right hemisphere, the why hemisphere of our brain, is asking new why questions, which actually leads us to the creativity that we’ll ask the best policy questions going forward. That’s what we’re going to have to specialize in—you, me, and everybody else in the space.

Anaconda (2025) is absurd and hilarious [Media Notes 177]

Two scenes had me laughing as hard as I’ve laughed in a year or three.

Wikipedia:

Anaconda is a 2025 American action-adventure comedy horror film[5] that serves as a meta-reboot of Anaconda (1997) and is the seventh installment in the Anaconda film series. The film was directed by Tom Gormican, who co-wrote it with Kevin Etten, and stars Paul Rudd, Jack Black, Steve Zahn, Thandiwe Newton, Daniela Melchior and Selton Mello. The story follows a group of childhood friends trying to remake the 1997 film, only to be attacked by a giant and murderous anaconda. [...]

Childhood friends Doug McCallister, Ronald "Griff" Griffin, Kenny Trent, and Claire Simons are all dissatisfied with the way their lives have gone. When Griff, a television actor struggling to find work at Sony Pictures Studios in Culver City, reveals that he has obtained the rights to one of their favorite films, Anaconda, the group secures a loan and travels to the Amazon Rainforest in Brazil to make a low-budget indie version of the film.

The group are joined by Ana Almeida, a woman who claims to be the daughter of the boat rental operator they've been working with. While shooting, Griff accidentally kills Heitor, the semi-tame snake they were using. Griff and their snake handler Santiago Braga venture into the jungle to find a replacement, but Santiago is attacked by a massive anaconda that terrorizes the jungle.

And so forth. It goes downhill, round and round, from there.

Wednesday, April 1, 2026

AI Optimism, AI Pessimism

This is a thoughtful discussion about AI. One discussant, Mike Feinberg, is deeply skeptical. The other, Ben Wittes, is less skeptical, but still wary. Note that in the first four minutes or so Wittes delivers a monologue. I’ve listened to many such monologues, and I’ve now lost my taste for them. Still, you might want to give this one a try. It’s only four minutes.

Whale ropes to the sky [Adam Roberts]

So, after doing a bit of work this morning, I checked Facebook and saw that my buddy Adam Roberts had posted this:

Off dream: I had an unfamiliar laptop, and couldn't get the power cord to fit into its socket. I knew the device was almost out of power, and fumbled around with this wire. Eventually I gave up, opened the laptop and saw what I'd been writing: a poem. The only line of the poem I can remember from the dream is:

"The bombed city threw whale-ropes to the sky"

Then the screen went blank, and I knew I had lost whatever it was I had been writing. Particular type of anxiety dream.

I thought a bit and posted this reply:

Sounds like you're channeling STC. How's your laudanum supply holding out?

In Xanadu did Adam R
A sacred city rebuild.
Where mad cetaceans swam
Beneath the bombed walls
Tossing whale-ropes to the sky.

I thought some more, Shazaam! ChatGPT! I added this to my reply, courtesy of the good old Chatster:

When I asked Adam whether or not I could post his note along with my reply, he said:

Please do! I could say that, in the dream (so far as I remember it: the memory is already fading away) my understanding of "whale" was less cetacean and more an index of thickness: bombs resulting in fat, ascending pillars of smoke. But I accept that's not necessarily what the line itself suggests.

Some things I saw in March

Claude: On the (late) discovery of the chess tree and things one can infer around and about that tree [MR-3c]

Late in his fourth chapter, Cowen introduces chess as an example of what contemporary AI can do. As you may know, chess has been a central interest of AI, so much so that John McCarthy, the mathematician who coined the term “artificial intelligence,” wrote an article entitled “Chess as the Drosophila of AI.” You may also know that chess has been a central interest of Cowen’s. He was a chess champion in his youth and he follows the game closely.

Recognition of the chess tree comes late 

As it happens, chess presents us with one of those examples that Tyler finds so interesting in this book (The Marginal Revolution: Rise and Decline, and the Pending AI Revolution). It was during my freshman year at Johns Hopkins, I believe, that I read some Dover Publications book that was an omnibus presentation of information theory, cybernetics, and computing. I forget both the author and the exact title, but I remember two things. 1) In its presentation of computing, it talked about both analog computing and digital computing. That was common until not long after personal computers arrived; after that, articles and books about “computers for dummies” stopped talking about analog computing and concentrated on digital (a big mistake, IMO, but that’s a different story). 2) It talked about chess and made the point that, from an abstract point of view, chess is just like tic-tac-toe, an utterly trivial game. Both games are finite, and all possible plays of each can be arranged in a tree structure. But the chess tree is so large that not even the largest computer can enumerate it. I thought about it a little, got the point, and it has stuck with me ever since.

That was back in the mid-1960s. The largest computers of that time were tiny in comparison to the Brobdingnagian behemoths being used to train contemporary AIs, but even those behemoths are still too small for the chess tree.

The thing is, and here we’re getting to the Tylerian point, chess has been played by thoughtful people for centuries. Why was it only in the early 20th century that its tree-like structure was recognized? That’s so simple, and so basic.

So I put the question to Claude, who answered. And then we went on from there. I’ll present that whole dialog shortly, but I want to discuss something that I discovered while thinking about its answer.

Game theory (reading the historical record backwards)

First, I already knew more or less how it would answer; I was asking the question to get the details. Ernst Zermelo explicated the tree structure in a paper published in 1913. The Wikipedia entry, “Zermelo’s theorem (game theory),” opens like this:

In game theory, Zermelo's theorem is a theorem about finite two-person games of perfect information in which the players move alternately and in which chance does not affect the decision making process. It says that if the game cannot end in a draw, then one of the two players must have a winning strategy (i.e. can force a win).

That clearly says that Zermelo’s theorem belongs to that branch of investigation known as game theory. But, thought I to myself, wasn’t game theory invented by John von Neumann in the middle of the 20th century? So I did an Ngram search on “game theory”:

The chart doesn’t lie. “Game theory” shows up in the middle of the century, not the beginning. So I did a bit more digging and found an article, “Zermelo and the Early History of Game Theory,” that begins with this sentence: “It is generally agreed that the first formal theorem in the theory of games was proved by E. Zermelo in an article on Chess appearing in German in 1913 (Zermelo (1913)).” So, Zermelo’s theorem was retrospectively assimilated into game theory once game theory had become a recognized field of inquiry. Culture works like that.

The same thing has happened to the diamonds-water paradox that Tyler presents early in the first chapter as THE paradigmatic example of marginalist reasoning (page 4 in the PDF). Tyler presents part of that story in the book. I dug out more of the story by querying the accompanying AI. I won’t go into the details, but it boils down to this:

  1. Galileo presents it in the 17th century, but in a context where it’s part of a debate between the Ptolemaic and Copernican cosmologies.
  2. Adam Smith picks it up in The Wealth of Nations (1776), where it’s framed as “a point about the inadequacy of use-value as an explanation of price” (AI’s wording).
  3. Jevons frames it as an example of marginalism (1871).
  4. Paul Samuelson uses it in his 1948 textbook, Economics: An Introductory Analysis, which enshrines it in economic doctrine as the prime example of marginalist thinking.

And that textbook went on to become one of the most important introductory texts in economics.

And so it goes. 

Note: At times Claude refers to my "Play book." That's the book I've been working on, and with ChatGPT as well: Play: How to Stay Human in the AI Revolution.

* * * * *

I now give you my dialog with Claude. Zermelo’s theorem is the start. From there:

  • Chess as a HUMAN game;
  • Jazz improvisation, constraint, freedom, and dialog;
  • AlphaZero’s stylistic distinctiveness;
  • Machine intelligence and human intelligence;
  • Language as a domain for computation; and finally,
  • Chess as a specialized form of language.

I may return to those last three topics in a later post. 

* * * * * 

What are the prerequisites for Zermelo’s theorem?

I’ve got a question about chess, one that’s related to Chapter 3 of Tyler Cowen’s book, though he doesn’t discuss chess there. But he is concerned about why economics seems to come so late in intellectual history. So, why, given how long chess has been played, was it only in the early 20th century that the tree-structure of the game was recognized? On the one hand, since the chess tree is so large, it would be impossible to list the whole thing and determine its finite nature that way. But, abstractly considered, one might imagine getting a hint in that direction by trying to list the games. White has 20 possible moves to start (even if most of them are never played), and Black has 20 possible replies to each of those. At this point the chess tree is 400 branches wide, and it rapidly gets much worse. On second thought, no, that won’t work.

You need a method of reasoning. And that didn’t exist until… well, Zermelo’s proof dates from 1913. Just when the relevant math came into existence, I don’t know, but I doubt it was much before then. I’m sensing intuitively that we’re in the same ballpark as Cantor’s diagonal method, which dates from the late 19th century.

Your intuition about the timing is sharp, and the Cantor connection is real — but the deepest reason runs even further into the foundational transformation of mathematics than just the diagonal argument. Let me develop this.

What the proof actually requires

Zermelo's 1913 paper is titled, revealingly, "On an Application of Set Theory (Mengenlehre) to the Theory of Chess Games." The title tells you everything: this is explicitly an application of Cantor's set theory. Zermelo needed three things that didn't exist before the late 19th century:

First, the concept of a complete strategy as a mathematical object — not a heuristic or a rule of thumb but a function that specifies a move for every possible position that could arise. This is a first-class set-theoretic object: a mapping from the (enormous but finite) set of all positions to the set of available moves. You cannot even state Zermelo's theorem without this concept. Before Cantor, "function" meant something like a formula or rule that you computed — the idea of a function as an arbitrary set-theoretic object, definable over an enormous domain without being explicitly written down, was simply not available.

Second, backward induction as a proof method on a finite tree of unbounded but determinate depth. You need to show that the set of positions from which White can force a win is well-defined, even though it's a subset of an incomprehensibly large structure. The argument works by showing that this set is recursively characterizable from the terminal positions backward — but "recursively characterizable over a structure you cannot enumerate" requires treating the whole tree as a completed mathematical object, not an unfolding process.

Third, Zermelo's own axiomatization of set theory (1908) — which came five years before the chess paper. The Axiom of Choice, in particular, is needed in the general case (for games of possibly unbounded length before the 50-move rule imposes finitude), and Zermelo had just invented it.

The Cantor connection is precise

You're right that it's the same ballpark, but it's more than approximate temporal proximity. The specific intellectual move Cantor pioneered that made Zermelo possible was the treatment of infinite (or astronomically large) structures as completed wholes — actual infinities, not potential infinities. Before Cantor, the dominant view (even among mathematicians, following Aristotle) was that infinity was always a process, never an object. You could always add one more, but you couldn't speak of "the set of all natural numbers" as a single thing you could reason about.

The chess tree is finite, but it's so large (roughly 10¹²⁰ possible games) that the conceptual move required is structurally identical to Cantor's: you cannot approach it by enumeration or by any incremental process. You have to treat the entire tree as a completed object and reason about its global properties by abstract methods. Cantor's diagonal argument is the purest example of this — you prove something about an uncountable structure by an argument that never touches most of its elements. Zermelo's chess proof does the same thing: it proves that a particular position is a forced win for White (or a draw) without ever specifying which moves constitute the winning strategy.

The Rank 4 structure

The cognitive rank analysis maps cleanly onto this. Rank 3 chess thinking is exactly what the great 19th-century practitioners developed: Morphy's tactical superiority, Steinitz's positional principles, opening theory, endgame theory. These are all algorithmic in the relevant sense — they are rules, patterns, procedures for navigating the game. This is sophisticated, even brilliant, but it's thinking with chess moves and positions as primitives.

The Rank 4 move is to ask: what kind of object is a chess game? Answer: a finite tree with two alternating agents, terminal nodes with assigned outcomes, and the question of whether a particular class of strategies (those guaranteeing a given outcome regardless of opponent play) is non-empty. You've taken the activity of playing chess — the Rank 3 algorithms and heuristics — and made the activity itself an object of mathematical modeling. That's precisely your formulation: the operations of the lower level become the objects of the next.

And notice what this means for the specific threshold you identified. The reason even imagining that the game is finite and tree-structured doesn't help you until you have the right tools is that the question you need to ask — "is there a complete strategy that guarantees outcome X?" — cannot be formulated without the concept of a complete strategy as a set-theoretic object. Knowing the tree is there doesn't help if you have no language for the thing you're trying to prove exists.

Cowen's theme

This fits Cowen's chapter 3 argument exactly. His diagnosis for why economics was so slow is that economic ideas require peering around a corner — you can't reach them by brute force, and you don't know where to look until the relevant conceptual apparatus exists. The chess case is if anything cleaner, because here the brute force failure is quantifiably obvious: 10¹²⁰ is not a number you can work through. The delay isn't from lack of effort or lack of institutional support or Christian resistance — the game had been played intensely for a thousand years by people who were clearly thinking hard about it. The delay was purely conceptual: the question couldn't even be asked in a mathematically tractable form until Cantor, Zermelo, and the late-19th-century foundations revolution made the requisite objects available. That's as clean an example as one could want of Cowen's "factoring a very large number" metaphor — once you have the factors (set theory, the function concept, backward induction), the result is almost immediate.

The 1913 date isn't merely contemporaneous with the Cantor-era mathematics — Zermelo himself was one of the creators of that mathematics, applying his own toolkit almost immediately to a new domain.

Potatoes

Holy crap, Batman! Anthropic just showed its a$$ to the world.

You can’t make this stuff up. Since, however, it is April 1st, I thought maybe it was a prank, it’s just so outrageous. But no, it’s real.

Here’s how it arrived in my inbox this morning:

Claude Code Leak Jumps Agent Timeline

👀 What’s happening: Anthropic accidentally exposed Claude Code’s internal codebase through an npm source map file, leaking over 510,000 lines across roughly 1,900 files. Core agent logic, orchestration layers, and hidden features spread across GitHub within hours, turning a closed system into something anyone can study and replicate.

🌍 How this hits reality: This hands the entire agent codebase to the market. Multi-agent coordination, tool execution loops, memory systems, and background daemons are no longer guesses but working implementations. What took model labs years and massive compute budgets is now inspectable and reproducible. Expect hundreds of forks, rapid commoditization, and a surge of wrapper products built on identical foundations.

🛎️ Key takeaway: This may be the most consequential mistake in agent history. Over half a million lines of production-grade agent code just became public overnight. That compresses years of frontier learning into a starting point. Expect agent development to accelerate sharply from here.

Inquiring minds want to know: Is this serious enough to ripple on through to Anthropic’s (prospective) IPO later this year?

Tuesday, March 31, 2026

English muffin

The Paradox of Contemporary AI @3QD

I’ve got another article at 3 Quarks Daily:

The Paradox of Contemporary AI: Engineering Success and Institutional Failure

The title pretty much encapsulates my argument. Yes, LLMs have been a brilliant technical success. But they’ve also led to an intellectual monoculture that narrows the range of ideas on which to draw in moving forward. That, I argue, will make progress much more difficult. I’ve published a longer and more narrowly circumscribed version of that argument on New Savanna, The Shock and the Narrowing: How ChatGPT's Success May Have Compromised AI's Future.

Part of the institutional failure lies in the educational system. Here’s a short note I posted to Facebook that points at that:

I entered Johns Hopkins as a freshman in the fall of 1965. JHU had just gotten Ford Foundation money to establish the interdisciplinary Humanities Center. “Interdisciplinary” was the next Big Thing. In the fall of ‘66 the Humanities Center hosted the (in)famous structuralism conference. And we all know what happened after that. Theory plus Harold ‘Effin’ Bloom.

Since then I believe JHU has acquired two more interdisciplinary humanities centers, but the place is still run by the traditional disciplines. That’s happened all over the place. Interdisciplinary centers and studies programs all over, but it’s all epicycles upon epicycles circling the same old 19th century German categories.

For all that time I’ve heard that too many people are publishing the same damn articles time and again. That problem persists. & I’ve heard that we need to do more to encourage Real Innovation, we always need more of that. But the institutions remain fundamentally the same.

Is there any chance that this is ever going to change? What are the chances that AI will force/allow fundamental change? Or do we have to somehow start anew from the ground up?

AI experts don’t realize how shallow their knowledge is, and neither does anyone else.