Tyler Cowen has announced a new monograph, The Marginal Revolution: Rise and Decline, and the Pending AI Revolution, which you can access here. You can download it in various formats or interact with the online AI version. I read through the opening page or three and then skipped to the fourth and final chapter, “Why Marginalism Will Dwindle, and What Will Replace It?” which I've read in full.
I note, more or less in passing, that as he heads toward the end he takes up the role of intuition in thought. He laments that intuition, particularly the intuitions stemming from the marginal revolution, no longer seems to work in economics. I’ve been thinking a lot about intuition myself, though to somewhat different ends: I’m more interested in how it functions in thinking and where it comes from. But that’s an aside.
I may or may not comment on the rest of the chapter later on, but I have to remark on one strand of its argument. Cowen decides he has to denigrate all previous work oriented toward understanding language:
Suffice to say, LLM construction has for the most part ignored linguists and philosophers, and that also means ignoring their intuitions. LLM construction also ignored a lot of people in the AI field who insisted neural nets were a dead end. Instead, in a relatively short number of years humans invented new ways of modeling language and reasoning through language. That research program has proven wildly successful, as we have much better models of language and reasoning than almost anyone had been expecting. [...]
The classic breakthrough paper behind LLMs was a 2017 study titled “Attention is All You Need,” where in this context attention is defined by GPT-4 as “a mechanism that learns to focus selectively on parts of an input sequence, giving it ‘attention,’ while encoding a sentence or piece of information. This allows the model to treat different words or characters with different levels of importance, providing a ‘weight’ that aids in better understanding and decoding of information.” The paper was not titled “More Linguists are All You Need,” or for that matter “Marginalism is All You Need.” In other works [sic], given some of the most complex human systems, we came up with ways of understanding them that were new. To be clear, neural nets were not new, since the ideas and also the practice (in much weaker form) have been around for decades. High-powered, well-functioning neural nets, however, are new in the contexts of providing excellent results for general linguistic ability and general reasoning.
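Before getting to my reaction, a concrete aside: the attention mechanism GPT-4 glosses above fits in a dozen lines of code. Here is a minimal sketch of scaled dot-product attention in plain Python with NumPy; the function name, variable names, and toy dimensions are mine, and it illustrates the core idea rather than reproducing the 2017 paper's actual implementation.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Minimal scaled dot-product attention, after Vaswani et al. (2017).

    Q, K, V: (seq_len, d_k) query, key, and value matrices.
    Returns the attended output plus the attention weights themselves.
    """
    d_k = Q.shape[-1]
    # Similarity of every query to every key, scaled to keep the softmax stable.
    scores = Q @ K.T / np.sqrt(d_k)
    # Row-wise softmax: how much each position "attends" to every other position.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    # Each output position is a weighted average of the value vectors.
    return weights @ V, weights

# Toy example: 4 tokens with 8-dimensional embeddings, attending to themselves.
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))                      # stand-in token embeddings
out, w = scaled_dot_product_attention(x, x, x)   # self-attention
print(w.round(2))                                # each row sums to 1
```

Each row of the weight matrix sums to 1; those rows are the “different levels of importance” in GPT-4's definition. A Transformer stacks many such attention heads with learned projections on top of this, but the core operation is just this weighted averaging.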
Cowen's dismissal struck me as a bit much, so I had the following conversation with the AI version:
From very near the end: “In other works, given some of the most complex human systems, we came up with ways of understanding them that were new.”
In what way is an LLM a way of understanding how language works? After all, we don't understand how LLMs work. They're as mysterious as the human brain.
This is a genuinely sharp philosophical puzzle, and Tyler is aware of the tension even if he doesn't fully resolve it. The claim on page 69 is really about predictive and generative success rather than mechanistic understanding in the traditional scientific sense. LLMs model language in the sense that they can reproduce, extend, predict, and manipulate linguistic structures with stunning accuracy — they've captured something real about how language works, even if we can't fully articulate what that something is.
Think of it this way: pre-LLM linguistics tried to understand language by building explicit rule systems — grammars, syntactic trees, semantic formalisms. That approach assumed understanding meant writing down legible rules. LLMs blew past all of that without using any of those rules. Tyler's point is that this constitutes a new kind of understanding — one where the "knowledge" is distributed across billions of parameters rather than encoded in human-readable propositions. It's understanding in the way a master chef understands flavor without being able to fully articulate the chemistry.
But your skepticism has real bite. There's a meaningful difference between a system that performs understanding and one that provides understanding — to us. Tyler arguably conflates the two. When he says "we came up with ways of understanding," the "we" is doing suspicious work: humans built the tool, but the tool's internal representations remain largely opaque even to its creators. The "Attention is All You Need" paper Tyler cites gave us the architecture, but interpretability research — like Anthropic's mechanistic interpretability work — is still in early stages of figuring out what's actually happening inside. So Tyler is perhaps better read as saying LLMs are a new instrument for engaging with language rather than a new theory of it — more telescope than Newtonian mechanics. [See text: Ch.4 · Tyler's claim about new ways of understanding c...] [See text: Ch.4 · How LLM construction bypassed traditional langu...]
I note that saying it is "more telescope than Newtonian mechanics" is a much weaker claim.
You're absolutely right, and this is a genuine tension in Tyler's argument that he doesn't fully reckon with. He wants LLMs to be doing something analogous to what marginalism did for economics — providing a new analytical framework for understanding complex systems. That's a strong claim. But if LLMs are really more like telescopes — powerful instruments that let us see further without explaining why what we see looks the way it does — then the analogy deflates considerably.
The stakes matter for Tyler's broader argument in Chapter 4. He's positioning LLMs as replacements for frameworks like marginalism, not just better tools for testing existing frameworks. A telescope doesn't replace Newtonian mechanics; it generates data that mechanics then has to explain. If LLMs are telescopes, we still need the theoretical framework that makes sense of what they're showing us. Tyler gestures at this on page 66 with the discussion of researchers who use LLMs to generate simulations and then build structural models from those simulations — which is actually the more defensible version of the claim, because there a human-legible theory does eventually emerge. [See text: Ch.4 · The multi-step process that generates legible t...]
So Tyler may be overstating his case when he frames LLMs as providing genuine understanding of language. The honest version might be: LLMs are extraordinary telescopes that may eventually prompt new theories — but we're not there yet. That's still impressive, but it's a quieter revolution than Tyler's rhetoric suggests. And Tyler, to his credit, is usually better at hedging than his most enthusiastic passages imply.
Frankly, that won't do. The technology is impressive on its own terms. It doesn't need this kind of empty rhetorical cheerleading (though perhaps investors do). It sounds to me like Tyler has given up on understanding the world and is just waiting for the machines to take over – to indulge in a bit of hype of my own.
That's a genuinely provocative reading, and not entirely unfair. There is a strand running through Chapter 4 where Tyler's argument quietly shifts from "here is how we will understand the world better" to "here are systems that produce excellent outputs, and that's enough." The chess example on page 69 is telling — Google DeepMind producing grandmaster-level chess without search and without chess understanding is presented as a triumph, but notice what's been abandoned: the goal of understanding why certain moves are good. We've substituted performance for comprehension, and Tyler seems genuinely comfortable with that substitution. [See text: Ch.4 · The DeepMind chess example as symptom of Tyler'...]
The financial economics example from page 63 reinforces your point. The "Charting by Machines" paper Tyler cites with apparent approval explicitly abandons economic reasoning — it's calculation plus math and computer science, with no ties to marginalism or microeconomic theory. Tyler notes this almost admiringly. But if your standard for intellectual progress is "it predicts well," you've quietly dropped the Enlightenment ambition of explaining the world in human-legible terms. [See text: Ch.4 · The financial economics paper that abandons the...]
Your "waiting for the machines to take over" framing is pointed, but I think it identifies something real. Tyler is an economist who has spent his career celebrating human curiosity and understanding — his Marginal Revolution blog is a monument to that curiosity. So there's something genuinely melancholy, or at least philosophically unresolved, about a book that ends up suggesting the most complex systems may be best navigated by tools we cannot ourselves fully understand. Whether that's wisdom or surrender is a fair question to leave open.