Friday, May 1, 2026

Friday Fotos: Messages written in shells

Mum’s the word.

I see you.

One if by land, two if by sea.

Does not compute.

Just talking trash.

Thursday, April 30, 2026

Flash Mob - Performing "I Will Follow Him" & "Jupiter"

A Quick Ramble: Computational Compressibility (order in the universe), Religion & Signaling, Silicon Valley vs. Pope Leo

I’m thinking there’s a relationship between computational compressibility (as an index of order in the universe) and my current interest in religion, which is what the other two items are about.

Computational Compressibility and order in the universe

I’ve been particularly pleased by my recent working paper: On Method: Computational Compressibility in Complex Natural and Cultural Phenomena. I’m not quite sure why I find it so pleasing. That it crosses disciplines, that’s nice: weather, microbiology, chess (AI), financial economics (asset pricing), film studies (Hollywood Economics), and cultural evolution (Macroanalysis, 19th century novel). But mostly it’s the specific mode of the argument: it’s about description, about what kinds of things exist.

I began by framing the discussion in terms of Stephen Wolfram’s distinction between computational reducibility and computational irreducibility. I think we’ve got to look at that distinction in terms of Miriam Yevick’s 1975 distinction between holographic or Fourier logic and sequential logic. I think Wolfram’s notion of computational reducibility implies Yevick’s sequential logic. As far as I can tell, her notion of holographic logic doesn’t register with respect to Wolfram’s distinction. But it may be that what I’m calling computational compressibility (within the realm of irreducibility) resonates with her notion of holographic logic.

That is to say, if something is completely irreducible, that means that there’s no order there at all. It’s pure chaos, random. That’s not true for computationally compressible phenomena. The phenomenon of computational compressibility indicates order, but order of a kind that’s different from reducible order. Generative order?
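
For concreteness, here's a minimal sketch of that distinction, nothing from the working paper, just a toy I put together in Python with NumPy. Rule 90 is reducible: Pascal's triangle mod 2 gives you any cell at any time step directly, no simulation required. Rule 30, so far as anyone knows, has no such shortcut: to see step 100 you run all 100 steps. Compressibility, as I'm using the term, lives in that second regime: no exact shortcut, but perhaps an approximate, learned one.

```python
import numpy as np

def step(row, rule):
    """One synchronous update of an elementary cellular automaton."""
    left, right = np.roll(row, 1), np.roll(row, -1)
    idx = 4 * left + 2 * row + right
    table = np.array([(rule >> i) & 1 for i in range(8)], dtype=np.uint8)
    return table[idx]

def simulate(rule, width=201, steps=100):
    """Step-by-step simulation from a single live cell (no shortcut)."""
    row = np.zeros(width, dtype=np.uint8)
    row[width // 2] = 1
    history = [row]
    for _ in range(steps):
        row = step(row, rule)
        history.append(row)
    return np.array(history)

def rule90_shortcut(t, i):
    """Closed-form value of Rule 90 cell i (offset from the seed) at time t.
    Pascal's triangle mod 2: C(t, k) is odd iff k & (t - k) == 0 (Lucas)."""
    if (t + i) % 2 != 0 or abs(i) > t:
        return 0
    k = (t + i) // 2
    return 1 if (k & (t - k)) == 0 else 0

# Rule 90 is computationally reducible: the shortcut reproduces the simulation.
hist90 = simulate(90, steps=100)
t = 100
center = hist90.shape[1] // 2
assert all(hist90[t, center + i] == rule90_shortcut(t, i) for i in range(-t, t + 1))

# Rule 30 has no known comparable shortcut: to know what it looks like at
# step 100 you simulate all 100 steps. That is Wolfram's irreducibility.
hist30 = simulate(30, steps=100)
print(int(hist30[100].sum()), "live cells at step 100 of Rule 30 (obtained only by simulating)")
```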

Religion & Signaling

Glenn Loury has a recent video where he distinguishes between what we might call the propositional content of an utterance and its signal value. He explains this at some length in a recent lecture he gave at Stanford, Self-Censorship, Social Information, and the Conditions of Public Reason. In the lecture he examines three cases: race in America, academic life, and Israel and Gaza. His point is that in public discourse on these topics (and others) the signal value of what one is saying often eclipses the propositional value of one’s assertions. This often results in self-censorship, where a person withholds their (propositional) views for fear of signaling the wrong values.

Thus, in racial discourse:

A key question in this discourse is why racial inequality persists. In this domain speech is saturated with moral meanings. Claims about inequality, education, crime, family structure, or historical responsibility are rarely heard simply as attempts to explain social outcomes. They are taken also as signals of moral orientation—of sympathy or indifference, solidarity or hostility, compassion or disdain. For many audiences, answers to that question relying on structural causes are regarded more favorably than accounts emphasizing cultural factors. Speakers adapt accordingly—many by holding their tongues.

I think lots of discourse is like that, and much religious discourse in particular. The conservative Christian arguments against biological evolution never made much sense to me as propositional assertions, nor did the accompanying beliefs about the age of the earth, but as signaling, those beliefs make more sense.

I'm thinking this signaling business is about coordination, about establishing a rock-bottom level of interpersonal coordination, of common knowledge. I need to think about this some more.

Silicon Valley vs. Pope Leo

I’ve only recently learned that Pope Leo XIV has decided to make AI a central concern of his papacy. Given that I think that the common Silicon Valley view of AI is idolatrous, or at best straddles the border, I’m wondering if we’re going to see a grand debate and battle between the Pope and Silicon Valley apologists. When I put the question to Claude, here’s how it responded:

Your instinct is well-founded, and Leo XIV has already moved aggressively on this front. Let me lay out what's emerged so far.

Who he is and what he's signaling

Leo XIV — born Robert Prevost in Chicago, the first American pope — identified AI as a central challenge from his very first formal address to the cardinals after his election in May 2025. His choice of the name Leo was itself a signal: Pope Leo XIII confronted the industrial revolution with Rerum Novarum (1891), defending workers' rights and the common good. By taking that name, the new pope announced that the Church would face today's technological upheaval with similar clarity.

The specific critique — and it maps closely onto your "idolatry" framing

This is where it gets interesting for your purposes. In what has been called a programmatic document on AI for World Communications Day, Leo warned that "behind this enormous invisible force that involves us all, there are only a handful of companies — those whose founders have recently been presented as the creators of the 'Person of the Year 2025.'" He expressed "serious concern regarding the oligopolistic control of algorithmic and artificial intelligence systems capable of subtly shaping behaviors and even rewriting human history."

That last phrase — rewriting human history, including the history of the Church — is remarkable. He's naming a power claim that goes beyond economic disruption.

His core argument is that "the challenge is not technological, but anthropological. Safeguarding faces and voices ultimately means safeguarding ourselves." He described face and voice as "sacred," given by God, "the constitutive element of every genuine encounter."

At a Vatican gathering that included representatives from Google, OpenAI, Anthropic, IBM, Meta, and Palantir, he said: "The person is not a system of algorithms: he or she is a creature, relationship, mystery."

The "data is not wisdom" theme

He has repeatedly insisted that "the ability to access vast amounts of data and information should not be confused with the ability to derive meaning and value from it," and that "authentic wisdom has more to do with recognizing the true meaning of life than with the availability of data." This is a direct counter to the implicit epistemology of Silicon Valley — the notion that intelligence is fundamentally information processing, that more data equals more understanding.

Where this connects to your book

Your framing of the dominant Silicon Valley attitude as idolatrous is theologically precise in a way that Leo XIV would likely recognize. The classic definition of idolatry isn't crude statue-worship — it's the attribution of ultimate value and salvific power to something that is not God. When figures like Altman, Musk, or Kurzweil speak of AI as the path to immortality, the transcendence of human limitation, the solution to death — that's structurally idolatrous in the theological sense. It substitutes a human artifact for the divine.

Leo has the resources of the entire tradition of Catholic social teaching to bring to bear on this — not just as ethics but as a rival anthropology. The Church's claim that the person is "creature, relationship, mystery" is not compatible with the claim that the person is, at bottom, a computational process that can be replicated, extended, or superseded by a sufficiently powerful machine.

Whether this becomes a genuine face-off rather than parallel talking depends on whether Silicon Valley feels the need to respond. So far, the tech executives seem to be treating Vatican engagement as a reputational opportunity — showing up in Rome, being photographed, nodding about "human dignity" — without conceding much. The more interesting question is whether Leo pushes beyond dialogue into something more like confrontation. The Rerum Novarum precedent suggests he might: that encyclical didn't merely express concern about the industrial revolution, it made specific demands.

Horizon's Edge

The coming AI-driven workplace apocalypse [We aren't ready]

Jasmine Sun, The A.I. Fear Keeping Silicon Valley Up at Night, NYTimes, April 30, 2026. Sampled from the article:

The opening paragraph:

Most people I know in the A.I. industry think the median person is screwed, and they have no idea what to do about it. I live in San Francisco, among the young researchers earning million-dollar salaries and the start-up founders competing to build the next unicorn. While Silicon Valley has long warned about the risk of rogue A.I., it has recently woken up to a more mundane nightmare: one in which many ordinary people lose their economic leverage as their jobs are automated away.

Silicon logic:

But even those who view the idea of a permanent underclass as overblown tell me that the meme contains a kernel of truth. [...]

Most economists and A.I. experts do not expect this scenario, but the persistence of the permanent underclass idea should concern all of us. First, because it signals how much collateral damage the A.I. companies will tolerate en route to A.G.I. And second, because the production of a social underclass is a policy choice. Instead of waiting for impact, we need to think seriously — now — about how we plan to support workers through A.I. disruption.

If left to its own devices, Silicon Valley may summon a permanent underclass through its own market logic. If you believe that human-substituting A.I. is inevitable, then every company should race to be the one to build it — and claim a market valuation the size of the economy and then some.

Unimaginative techno-determinism:

Tech workers, for their part, are scrambling for lucrative A.I. jobs in hopes of securing financial freedom — even when they harbor ethical hangups. [...]

This apparent dissonance can be justified if you believe that the arc of technological progress is fixed. For instance, the founders of Mechanize, a once buzzy start-up with a mission to “enable the full automation of the economy,” argued in a blog post that “the only real choice is whether to hasten this technological revolution ourselves, or to wait for others to initiate it in our absence.”

Many A.I. employees are ultimately motivated by visions of a beautiful future: a promised land where goods are cheap, diseases are cured, and abundant machine labor liberates humans to enjoy lives of infinite leisure. But increasingly, they also worry about triggering a jobs apocalypse along the way. “There are some people who care about jobs and inequality because they really care about people. There are others who think this is going to lead to instability, insurrection and revolution, and that’s bad for business,” said a researcher who has worked at two frontier A.I. labs...

And, I would add, if and when that future arrives, we'll not be ready. Why? Because we train adults to become addicted to work mode (Homo economicus). As a result, they won't know what to do with leisure (Homo ludens).

An emerging techno-federal oligarchy (a successor to Eisenhower's "military-industrial complex"?):

At the same time as A.I. erodes ordinary workers’ leverage, it may concentrate power and wealth in large companies and the U.S. government — two entities whose interests are increasingly linked. A.I.-related investments such as software and data centers accounted for 39 percent of U.S. economic growth in the first three-quarters of 2025, per an analysis by the St. Louis Fed. That gives the federal government a vested interest in sustaining the A.I. boom. Mr. Amodei acknowledges that this concentration can lead to “the reluctance of tech companies to criticize the U.S. government, and the government’s support for extreme anti-regulatory policies on A.I.”

In March, the company started the Anthropic Institute to house its teams working on economics, societal impact and frontier safety. The institute is led by Jack Clark, the affable British journalist turned A.I. billionaire and Anthropic co-founder, who seems to be replacing Mr. Amodei on the media tour of late. When we spoke, I asked Mr. Clark if he, too, expects A.I. to create a permanent underclass.

“This is basically a societal choice,” he replied. Like Mr. Altman and Mr. Amodei, Mr. Clark sees the default path for A.I. as dire: one where we “let technology rip, and don’t think about the social effects until later.” But he also feels optimistic that sufficiently conscientious A.I. builders and policymakers can steer the ship away from the storm.

I have little faith in those (mythical) A.I. builders and policymakers. Meanwhile:

On the evening of Feb. 25, several dozen A.I. employees and civil society advocates gathered in a converted warehouse in San Francisco’s sleepy Dogpatch neighborhood to hear the Democratic pollster and strategist David Shor. The event was titled How to Prepare Our Politics for A.G.I., and doubled as a fund-raiser for a new “six-to-nine-month sprint” to rally Democratic politicians around the campaign issue of A.I. job displacement. [...]

While the American public ordinarily hesitates to support left-wing policies like a jobs guarantee or single-payer health care, A.I. seems to expand the political Overton window. “Right now, the argument is, ‘You’re all about to lose your jobs, and the choice is either you get nothing and starve, or we do something fair,’” Mr. Shor said. “People don’t want to be members of the permanent underclass.”

Not all policies are created equal, however. A universal basic income is unpopular, but a federal jobs guarantee has legs, Mr. Shor found. American voters don’t care about beating China, but they are excited about A.I. curing diseases. And, crucially, populism sells. In one of the top-performing political ads that Mr. Shor’s data firm tested, the nameless narrator declares: “We make the corporations and billionaires who profit from A.I. pay their fair share.” The ad concludes: “They work for the bots. We work for you.”

The near term:

If current trends continue, A.I. models and agents will be capable of performing a wider range of knowledge-work tasks at higher levels of complexity. At that point, A.I. shifts from automating single tasks to taking over entire roles. Hiring may slow in accounting, marketing, design, administrative work and other white-collar professions.

The work force will shift toward less automatable jobs where humans retain a comparative advantage — such as entrepreneurship, care work, the skilled trades and entertainment like sports and the performing arts. We will also see new jobs we haven’t imagined yet, in numbers we cannot predict. Many displaced workers will struggle to retrain, as they have in past automation waves. Education, health care and tax systems will require an overhaul if white-collar employment is no longer a reliable path to middle-class stability. [...]

But the debate over the most extreme scenarios conceals a more immediate threat: Even in the most limited case, A.I. will break the career ladder for millions of current and future workers, a prospect often waved away with euphemisms like “transitional friction.” The Oxford economist Carl Benedikt Frey puts it plainly: “Most economists will acknowledge that technological progress can cause some adjustment problems in the short run. What is rarely noted is that the short run can be a lifetime.”

Class solidarity?

In this sense, A.I.’s broad capabilities foster a rare class solidarity between white-collar and blue-collar workers. When 20-something software engineers in San Francisco talk about escaping the permanent underclass, I hear them projecting concerns about their own precarity: What happens if the invisible hand of the market decides that my skills are no longer valuable? Who will catch me if I fall? For once, a rarefied class of employees — those used to being the automaters, not the automated — is reckoning with their potential obsolescence.

The final paragraphs:

Society’s ability to cushion A.I.’s disruption may determine whether we get to reap its gains at all. Without a safety net and a transition plan, blunt protectionism is workers’ rational response to automation. If you hear that A.I. will entrench a permanent underclass, you’ll do anything to stop it. [...]

And what if we don’t act? [...] In March, the Palantir chief executive, Alex Karp, spoke on a panel with the Teamsters president, Sean O’Brien. “The biggest challenge to A.I. in this country is political unrest,” Mr. Karp said. “If I were sitting here in private with my peers, I’d be telling them the country could blow up politically and none of us are going to make any money when the country blows up.”

Breakfast at the counter

Wednesday, April 29, 2026

AI spending is out of control [Ahab in pursuit of Moby Dick]

Karen Weise, A.I. Spending Sets a Record, With No End in Sight, NYTimes, April 29, 2026.

For the past two years, Amazon, Google, Microsoft and Meta have repeatedly set records for how much they are spending on artificial intelligence.

On Wednesday, the four giants did it again.

In the first three months of the year, the four companies reported in their financial results, they plowed a total of $130.65 billion into capital expenditures, largely spending on data centers that power A.I. That figure — which was another record — was more than three times what the Manhattan Project cost to develop nuclear bombs and 71 percent higher than what the tech giants spent in the same quarter a year earlier.

All of the companies said they would be spending even more, totaling roughly $700 billion this year. Meta, for one, raised its spending forecast for 2026 to between $125 billion and $145 billion, up from its previous prediction of $115 billion to $135 billion. Google also boosted its projection, to at least $180 billion, and said its spending would be “significantly” higher next year.

The big four – Google, Microsoft, Amazon, and Meta – can afford it because they “continue to dominate in core businesses that spew cash, such as serving ads on YouTube or Instagram, delivering items in a few hours or tallying cells in Excel.” They've entered into circular relationships with Anthropic and OpenAI, which have, in turn, “committed to spending hundreds of billions on computing power that the tech giants provide.” Yada yada, so forth and so on, et cetera et cetera:

Some of the tech companies have justified their building binge by saying they cannot meet all the demand. But analysts said there were risks if the companies became too dependent on two young customers: OpenAI and Anthropic.

More than 40 percent of Microsoft’s $625 billion in outstanding cloud contracts, for example, come from OpenAI, the company said in January. This week, Microsoft and OpenAI announced new terms that loosened their ties.

Betting so much on OpenAI and Anthropic is a gamble. But even if the start-ups flop, the tech giants are likely to weather the losses because of their size, scale and other businesses, said Matt Stucky, who manages tech investments for Northwestern Mutual.

“The core business,” he said, “is good.”

I think they're taking the economy for a Nantucket sleigh ride.

Not that long ago...(snow)

Excerpts from Séb Krier's Omni-Thread from February

https://x.com/sebkrier/status/2018351274127962300?s=20

1. Existing models will continue improving and getting better. And they will continue to be trained while accounting for all sorts of things like cost, efficiency, steerability, personality etc. as we already see today. I think it’s more obvious than ever that there is likely no convergence to the One Big Model. [...]

4. Here, there is still a lot to work out, and I expect high complementarity with human workers for at least the next decade. Roles will evolve: as you start doing less coding, your work looks more like technical product management. [...]

5. You just keep going up layers of abstraction, and humans continue steering complex multi-agent systems, until fixed costs bite. Part of the reason why humans always stay at the top of the chain is that many decisions made are normative: about what you want to happen, where you want things to go, how you want to react to changes. This requires inherently human inputs, since there's no point in having an AI decide this alone no matter how smart without eliciting more information about what the relevant humans prefer. Put differently: the telos of the whole system is the amalgamation of what users/consumers/businesses want, and tracking whether you're actually achieving that requires human input. This is already the case today with highly complex gigantic companies that make 1000 opaque decisions a minute.

6. Remember, this doesn’t violate the basic fact that market-coordinated economic activity is downstream of consumer and business demand. Capital isn’t some sort of independent force of the universe. What is being built depends on buyers/consumers that are ultimately human, even if occasionally intermediated by agents. The "AI decides everything" frame misses something fundamental about what economic and political systems are for. But as we go through these transitions, there are also costs or externalities (both pecuniary and non-pecuniary). Some people lose jobs. New industries cause unforeseen harms. Terence Tao has a great analogy: the abundance of food solved famine, but of course also led to harms like obesity. The solution is not to slow down abundance, but to develop the right norms, technologies, and laws to curb the excesses.

7. Accounts of full disempowerment assume democracy disappears, but I don't think all roads lead to autocracy. I don’t think ‘this time it’s different’. Growth and innovation historically benefited from free trade and liberal democracy, and this will be the case here too because of its impacts on investment, human capital, institutional quality, self-correction mechanisms, and ensuing fly-wheel effects. [...]

8. As the world goes through these transitions, we will probably continue to see many commentators gloss over the vast benefits and improvements humanity will see. Progress in longevity, cured diseases, consumer welfare, massive reduction in poverty and famine, better education and so on. The arguments for market coordination over some sort of early-Soviet or Maoist collectivism apply even more in this world, not less. The world will generally become materially richer. [...]

9. If we allow sufficient deployment of technology, robots, AI and so on, while ensuring the supply of energy, housing, and other important inputs isn’t constrained to a strangling degree, then the production of many goods and services will go down in price. [...] In general I am more concerned with customer service operators in Bangalore than I am with upper middle class white-collar professions in the West. I think FDI [foreign direct investment] and aid will be critical if we want humanity to thrive.

10. But this doesn't justify regressive populist policies or a 'pause'. It's not even optimal if we were being maximally selfish, and the equivalent of saying "poverty, misery and illness should be preserved for a longer period of time, for the benefit of a particular group of workers in time." Opposing AI or technological progress is a particularly nasty version of degrowth: it kills people, it entrenches poverty, and generally locks in all sorts of tragedies for the benefit of a comfortable elite who can easily thrive with the status quo. However, this does mean ensuring the right welfare systems, democratic protections, ‘societal resilience’, public infrastructure etc is important, as many have repeatedly noted over time. Just because things net out positively doesn’t mean ignoring those who lose out in the short run is the best we can do. There’s so much work to be done still if you want to build a better world, and I think we desperately need new, better economists, scientists, sociologists, artists, and politicians more than ever. I have more faith in the zoomers than some of my peers!

[Hmmmm.... I'm not so sure of 10. Don't know what it means.-BB]

12. In the future, I expect politics and governance to be an increasingly important component of people's lives: many will care deeply about how things are organised and managed at the local or national or international level. Personally, I think it’s fine if a large fraction don’t care much about those issues most of the time, since I don’t think there’s an obligation for everyone to have an opinion on everything, and that preference will likely be easy to satisfy. [...]

13. And I do think status games will continue, albeit in a much more diverse ecosystem of sub cultures and geographies. But again: always has been. Even today plenty of people more interested in art have zero envy for techbro founder lifestyles, and conversely many engineers couldn't care less about being perceived as cultured. As people get richer, much of this will evolve too. [...]

14. Ultimately, AGI will bring about huge positive transformations for the world, many of which are hard to describe: could anyone at the dawn of the Industrial Revolution have told you about video games, eye surgery, deep sea diving, street tacos, and mRNA vaccines? [...]

15. Lastly, so much of the field uses "this time it's different" as hand-wavey justifications for flouting norms, justifying unusual political measures, ignoring fragile progress built on centuries of trial and error, and various yet-to-be seen proposals for haphazard action (made confidently despite the uncertainty that one might guess would come with handling unprecedented phenomena). I think this is misguided: AGI will be huge, and of course will affect everything around us; but in many ways it’s also not different, and as always, there's a lot to learn from History. Much still needs to be built, except that this time you will also have millions of agents by your side to make progress. 🚀

Orange juice and an omelette

Does Peter Thiel Think He's GOD? [Pope Leo XIV and AI]

YouTube:

Jon Favreau is joined by Christopher Hale to discuss Pope Leo XIV's warnings against A.I.

Is the internet slowly breaking our brains, and if so, what can we do about it? Offline with Jon Favreau is a place where you can take a break from doom-scrolling and tune in to smarter, lighter conversations about the impact of technology & the internet on our collective culture. Intimate interviews between Pod Save America host Jon Favreau and notable guests like Stephen Colbert, Hasan Piker, ContraPoints, Margaret Atwood, and Megan Rapinoe spark curiosity and introspection around the various ways our extremely online existence shapes everything from the ways we live, work, and interact with one another. Together we’ll figure out how to live happier, healthier lives, both on and offline.

At some point they mention Ross Douthat's NYTimes interview with Peter Thiel and how there's a moment where Thiel wasn't sure whether humanity was worth saving. Here's that moment:

Douthat: But the world of A.I. is clearly filled with people who, at the very least, seem to have a more utopian, transformative — whatever word you want to call it — view of the technology than you’re expressing here. And you mentioned earlier the idea that the modern world used to promise radical life extension and doesn’t anymore. It seems very clear to me that a number of people deeply involved in artificial intelligence see it as a mechanism for transhumanism — for transcendence of our mortal flesh — and either some kind of creation of a successor species or some kind of merger of mind and machine.

Do you think that’s all irrelevant fantasy? Or do you think it’s just hype? Do you think people are raising money by pretending that we’re going to build a machine god? Is it hype? Is it delusion? Is it something you worry about?

Thiel: Um, yeah.

Douthat: I think you would prefer the human race to endure, right?

Thiel: Uh ——

Douthat: You’re hesitating.

Thiel: Well, I don’t know. I would — I would ——

Douthat: This is a long hesitation!

Thiel: There’s so many questions implicit in this.

Douthat: Should the human race survive?

Thiel: Yes.

Douthat: OK.

Thiel: But I also would like us to radically solve these problems. And so it’s always, I don’t know, yeah — transhumanism. The ideal was this radical transformation where your human, natural body gets transformed into an immortal body. And there’s a critique of, let’s say, the trans people in a sexual context, or, I don’t know, a transvestite is someone who changes their clothes and cross-dresses, and a transsexual is someone where you change your, I don’t know, penis into a vagina. And we can then debate how well those surgeries work. But we want more transformation than that. The critique is not that it’s weird and unnatural, it’s: Man, it’s so pathetically little. And OK, we want more than cross-dressing or changing your sex organs. We want you to be able to change your heart and change your mind and change your whole body.

And then orthodox Christianity, by the way — the critique orthodox Christianity has of this, is these things don’t go far enough. That transhumanism is just changing your body, but you also need to transform your soul and you need to transform your whole self. And so ——

Tuesday, April 28, 2026

River and shore

On Method: Computational Compressibility in Complex Natural and Cultural Phenomena

New working paper. Title above, abstract, contents, and introduction below:

Academia.edu: https://www.academia.edu/166054951/On_Method_Computational_Compressibility_in_Complex_Natural_and_Cultural_Phenomena
SSRN: https://papers.ssrn.com/abstract=6666638
ResearchGate: https://www.researchgate.net/publication/404263330_On_Method_Computational_Compressibility_in_Complex_Natural_and_Cultural_Phenomena

Abstract: Various machine learning techniques have been used to develop models of complex systems from empirical data. Through discussions with Claude, this paper examines several examples, including: weather, protein folding, chess, language, asset pricing, ticket sales for movies, the 19th century English-language novel. These models differ from one another in various ways, but all are fundamentally descriptive in character. Explanations must necessarily reside with their respective disciplines. In some cases we already have fundamental accounts of the phenomena, while in other cases we do not. With respect to economics in particular, it is clear that such models reveal phenomena for which no explanations are currently available, presenting a challenge to economic theory.

Contents 

Part I: Computational Compressibility, Implications for Economics, Description and Explanation 5
Part II: Weather, Protein Folding, Chess, and Language 16
Part III: Interim Summary: Compressibility Without Reducibility 26
Part IV: Pricing Theory, Movies, 19th Century Novels, and Cultural Evolution 28
Part V: To Infinity and Beyond! – Hollywood Redux, Blockbusters, the Spreadsheet, Economics Going Forward 38

Introduction: Describing Computationally Compressible Systems 

This is a transcription of a dialog I had with Claude 4.6 and 4.7 on April 21 - 23, 2026. While I started it with a specific case from Chapter 4 of Tyler Cowen’s recent monograph on marginalism, now that the dialog has concluded with Chomsky’s distinction between descriptive and explanatory adequacy (Aspects of the Theory of Syntax), I realize that I’ve been thinking about the underlying issues for some time. While I read Aspects in about 1970, give or take a year, I didn’t think much about description as such until the 2000s, and then I was thinking about describing individual texts; but that’s not directly relevant to the cases in this paper. Then in the second decade of this century I began thinking about computational criticism, aka digital humanities, which typically involves some kind of statistical or machine learning investigation of a corpus of texts. In particular, I gave a great deal of attention to Macroanalysis (2013), where Matthew Jockers studied a corpus of roughly 3000 English-language novels published in the 19th century. That investigation culminated in a directed graph depicting relationships of close similarity among the novels in the corpus. I decided that that graph was fundamentally descriptive in character, depicting, in effect, the 19th century Anglophone Geist, or Spirit.

But Jockers’s graph wasn’t on my mind when I started my dialog with Claude. Rather, I was thinking about the distinction between computationally reducible and irreducible phenomena that Stephen Wolfram had introduced in his A New Kind of Science (2002). As Claude notes in its summary of the dialog, “a reducible system admits shortcuts through its dynamics; an irreducible one must be simulated step by step.” My target was a paper about asset pricing that Cowen discussed in his monograph, one that produced a model with 360,000 parameters but defied intuitive understanding.

The weather is a canonical example of a phenomenon that is computationally irreducible. Thus forecasting the weather generally involves running a simulation of the weather and stepping through it interval after interval. This requires enormous computing resources and takes time. But DeepMind has created a machine learning system, GraphCast, that abstracts over historical data in a way that allows more accurate forecasts with less compute. Thus the weather system is computationally irreducible, but it is also compressible.
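
For what it's worth, here is a toy analogue of that move. It assumes nothing about GraphCast itself; the “weather” is the Lorenz-63 system and the learned surrogate is simple analog forecasting, both of which are just stand-ins I've chosen for illustration. The shape of the idea is the point: instead of stepping the simulator interval after interval, you use the historical record to jump directly from the current state to a state some interval ahead.

```python
import numpy as np

def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One crude Euler step of the Lorenz-63 system (the toy 'weather')."""
    x, y, z = state
    return state + dt * np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

def integrate(state, n_steps):
    """The irreducible route: step the simulator interval after interval."""
    for _ in range(n_steps):
        state = lorenz_step(state)
    return state

# Build a 'historical record': pairs of (state now, state K steps later).
K = 50
s = np.array([1.0, 1.0, 1.0])
states, futures = [], []
for _ in range(10000):
    nxt = integrate(s, K)
    states.append(s)
    futures.append(nxt)
    s = nxt
states, futures = np.array(states), np.array(futures)

def analog_forecast(state, k_neighbors=5):
    """The compressed route: skip the K intermediate steps by finding similar
    past states in the record and averaging what followed them. Analog
    forecasting is a very crude stand-in for a learned surrogate; the point is
    only that the historical record lets you jump ahead without re-running
    the simulation."""
    dists = np.linalg.norm(states - state, axis=1)
    nearest = np.argsort(dists)[:k_neighbors]
    return futures[nearest].mean(axis=0)

test_state = integrate(s, 7)  # a state just past the end of the record
print("simulated K steps ahead:", integrate(test_state, K))
print("analog forecast        :", analog_forecast(test_state))
```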

I take that as my paradigm case of computational compressibility (pp. 16 ff.) and then move on to other examples: protein folding (another physical phenomenon, pp. 18 ff.), chess (human activity, pp. 22 ff.), and natural language (a different human activity, pp. 23 ff.), each of which is compressible using machine learning techniques. Each example sharpens and extends the idea of computational compressibility. At that point I asked Claude to summarize the discussion (pp. 26 ff.).

Then, and only then, do I ask Claude to consider Cowen’s problematic example, AI Pricing Theory (pp. 28 ff.). In its analysis of the paper, Claude noticed that it introduces something fundamentally new to the discussion: reflexivity. Asset pricing is done by a large group of actors over time, actors who thus influence one another’s decisions. And that, in turn, brings up Arthur De Vany’s work on Hollywood Economics (pp. 30 ff.). De Vany discovered that box-office success cannot be predicted by such analytic variables as producer, screenwriter, director, movie stars, or opening-weekend box office. Rather, the success of a film depends on a word-of-mouth cascade, which cannot be predicted. That leads me, in turn, to suggest a thought experiment involving a hypothetical system capable of abstracting over entire films and developing a high-dimensional model which could be used to predict the success of individual films.

And that, in turn, led me to the work that Matthew Jockers had done on 19th century English-language novels (pp. 33 ff.), something that had not been on my mind when I began this dialog on April 21. Jockers used machine learning, albeit nothing so elaborate and computationally expensive as using a transformer to create an LLM – his model had only roughly 600 parameters. What his model revealed, and what made it so fascinating to me, is that there is an inherent directionality to the production of novels over the course of the century. It’s not simply that later novels are systematically different from earlier ones, but that the difference has a direction in the 600-dimensional measurement space. What we’d really like to know, now, is a way to characterize that direction. The model shows us that there is a direction, but it doesn’t tell us what that direction is. Though the model is much simpler than the asset pricing model – it has three orders of magnitude fewer parameters – its significance is no more legible.
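
Here is a small sketch of what “having a direction” in a high-dimensional measurement space amounts to. The data below are synthetic stand-ins, not Jockers’s actual features; with the real corpus you would load the measured theme and style variables instead. The test itself is the interesting part: do the decade-to-decade displacements of the corpus centroid point in roughly the same direction?

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-ins: one 600-dimensional feature vector per novel plus a
# publication year. (Jockers measured themes and stylistic features; here the
# numbers are made up so the sketch is self-contained.)
n_novels, n_features = 3000, 600
years = rng.integers(1800, 1900, size=n_novels)
drift = rng.normal(size=n_features)
drift /= np.linalg.norm(drift)
# Each novel = noise plus a push along one fixed direction that grows with
# the year, i.e., a built-in directionality for the test to find.
features = rng.normal(size=(n_novels, n_features))
features += 0.5 * (years - 1850)[:, None] * drift

# Does the century have a direction? Track the corpus centroid decade by decade.
decades = (years // 10) * 10
centroids = np.array([features[decades == d].mean(axis=0) for d in np.unique(decades)])
steps = np.diff(centroids, axis=0)                      # decade-to-decade displacements
unit = steps / np.linalg.norm(steps, axis=1, keepdims=True)
pairwise = (unit @ unit.T)[np.triu_indices(len(steps), k=1)]
print(f"mean cosine between decade-to-decade steps: {pairwise.mean():.2f}")
# Well above zero: the centroid marches in a roughly consistent direction
# through the 600-dimensional space. Near zero would be an undirected random
# walk. The test says THAT there is a direction; saying what the direction
# means is the part the model doesn't give you.
```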

After that I have two discussions that are not based on existing models, but that do have implications for economists who want to study them. First, I consider the phenomenon of the blockbuster, arguing that it reveals audience preferences that had previously been unrecognized (pp. 41 ff.). Then I consider the spreadsheet (e.g. VisiCalc), which transformed the personal computer market from a small niche market into a large mainstream market (pp. 43 ff.). How do you create a model that allows you to predict markets that don’t even exist at the time you make your model? What kind of a problem is that? After that I took a brief look at Cowen’s argument in The Great Stagnation (pp. 45 ff.), where Claude remarked:

If the VisiCalc model is right, then what matters about ChatGPT and its successors is not primarily that they do existing things faster or cheaper—though they do—but whether they are constitutive technologies in the VisiCalc sense. Do they reorganize the space of possible wants, making new activities imaginable and practical that previously had no well-formed representation in anyone's preference space?

With that I brought the exploration to a halt.

I then asked Claude to summarize the entire dialog, which I’ve placed immediately following these remarks (pp. XX ff.), with a special emphasis on implications for economics (pp. 7 ff.). Then I introduce Chomsky’s distinction from the 1960s, description vs. explanation (pp. 9 ff.). Each of these cases involves a complex phenomenon that is irreducible, but can be compressed into a model that is descriptive in character. They have that in common. As for explanations, those must necessarily be specific to each phenomenon. Note that in some cases we have explanatory theories grounded in a fundamental understanding of the underlying system (weather, protein folding) while in others we do not (chess, asset pricing, cultural evolution).

Finally, I’ve added a coda from a different conversation with Claude (pp. 13 ff.), one I had with the AI that accompanied Cowen’s book. That conversation is about Hollywood Economics and Rational Ritual and argues that the factoring of intellectual space that we’ve inherited from the 19th century German university has outgrown its usefulness.

Hoboken pigeons

Monday, April 27, 2026

The Platonic Representation Hypothesis [Not surprising]

John Oliver goes off on chatbots