I’ve come up with a speculative way of applying marginalist thinking to intellectual production. I’m thinking, in particular, about how Cowen arrived at the collection of examples he used in the first chapter of his book, The Marginal Revolution: Rise and Decline, and the Pending AI Revolution. Coming up with that collection is roughly the same kind of problem as putting together a syllabus. It’s a sampling problem. We have a collection of objects: works of American literature in one case, examples of marginalist analysis in the other. You want to select a set of literary works to put on your syllabus. Cowen wants a set of examples he can use to define the space of marginalist economics.
You need a criterion for drawing your sample. You’ve got a certain thematic organization in mind, so you’re not looking for a random sample of the space; you want a sample biased toward your theme. I assume that Cowen, by contrast, wants a random sample, one that represents the space of marginalist analysis. At this point, let’s set your syllabus problem aside and continue with Cowen.
He’s trained as an economist and has read widely, including a lot about the history of economics. Thus he should have a pretty good sense of what kinds of phenomena have been successfully subjected to marginalist analysis. Regardless of the adequacy of his knowledge, he’s got what he’s got. Let’s imagine that all the cases of marginalist economic analysis exist in some high-dimensional space of ideas, a space that is, at a high level of abstraction, like an LLM. Except this space is in Cowen’s mind (which is the very high-dimensional space of his brain states).
For the sake of argument, let’s assume he wrote the book from beginning to end, in order, in a single pass spread out over however many sessions.
Cowen opens the first chapter with a short definition of marginalism followed by some discussion. Then he gives us his first example, the diamond-water paradox. He says a bit about it, though I don’t think he mentions that it became THE paradigmatic example when Samuelson put it into his 1948 textbook. I found that out by querying the AI associated with the book. Let’s assume, then, that it sits at the center of the marginalist region in that abstract space of ideas.
What’s his next example? It’s the first one in the section entitled “Intuitive Marginalism”: “Why do drivers in China sometimes intentionally kill the pedestrians they hit?” He then explains it. That strikes me as being very far from diamond-water in the marginalist space, perhaps as far as you can go in some direction. He then goes through 10 or so more examples, all of them a bit closer to that central example. But the closest he gets is an example about people stealing one of his credit card numbers and making charges to it. If a charge is small, he ignores it; if it’s somewhat large, he contests it. He’s reasoning at the margin. That’s about as close as he comes to that very concrete, almost palpable, diamond-water example.
Then he goes on with the rest of the chapter, introducing example after example. He’s got four categories (beyond tautological), but we don’t need to worry about those categories. The fact that he’s got them, however, probably simplifies the calculation he’s making each time he asks whether or not to add another example. On the one hand there’s the sample value of another example. The sample value of the first example AFTER diamond-water was very high because it was only the second example he had. Moreover, he maximized that value by choosing an example far from the paradigmatic center. As his set of examples fills out, the sample value of each new example diminishes. At some point well along in the process he’s going to be comparing the sample value of an example with its opportunity cost. Why the opportunity cost? Because it takes time and effort to introduce another example into the text. At the very least there’s the time required to keyboard it. There’s more to the book, however, than a bunch of examples. He needs those to get started. Once he’s got an adequate set of examples, he’s ready to undertake his larger argument. So every time he enters another example into the text he forgoes doing something else necessary for the book. That’s the opportunity cost of the example.
As long as the estimated opportunity cost is lower than the estimated sample value, Cowen will enter the example into the text. When the estimated opportunity cost begins rising above the estimated sample value, Cowen declares his set of examples to be sufficient, and moves on. Note that in this analysis Cowen is (intuitively) making three calculations: 1) sample value of the next example, 2) opportunity cost of the next example, and 3) the marginal value of the next example, which is the difference between 1 and 2.
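That stopping rule can be sketched as a toy calculation. To be clear, this is a minimal illustration of the logic, not anything from the book: the geometric decay of sample value, the geometric growth of opportunity cost, and all the numbers are invented for the sake of the sketch.

```python
# Toy model of the example-selection stopping rule described above.
# Sample value decays as the set of examples fills out; opportunity cost
# rises as the rest of the book competes for time. All curves and
# parameters are invented for illustration.

def sample_value(n, v0=10.0, decay=0.6):
    """Diminishing sample value of the nth example (geometric decay)."""
    return v0 * decay ** n

def opportunity_cost(n, c0=0.5, growth=1.4):
    """Rising cost of adding the nth example instead of advancing the argument."""
    return c0 * growth ** n

def marginal_value(n):
    """Calculation 3: the difference between calculations 1 and 2."""
    return sample_value(n) - opportunity_cost(n)

def examples_to_include(max_n=50):
    """Keep adding examples while the marginal value is still positive."""
    n = 0
    while n < max_n and marginal_value(n) > 0:
        n += 1
    return n

if __name__ == "__main__":
    n = examples_to_include()
    print(f"stop after {n} examples")
```

With these made-up parameters the rule stops after four examples; the point is only the shape of the decision, not the number.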
Now, just how we’d get empirical evidence for this analysis is a problem. There’s no point in asking Cowen, because no one has that kind of introspective access to their mental processes. Still, it’s a nice story.
However, some such analysis might tell you something about the relationship between the rise of professionalism in a discipline and the proliferation of LPUs (least publishable units) in its formal academic literature.
I decided to put the question to Cowen’s AI. I put the foregoing remarks into a prompt. Here’s how the AI responded:
It's a nice story, and it's doing real work — it's not just a metaphor dressed up as analysis. Let me push on it a bit.
The framework is essentially correct as a description of the implicit calculation, but there's a wrinkle worth examining. You're treating the example space as if it were fixed and known to Tyler before he begins sampling from it. But the act of writing generates examples — it doesn't just retrieve them. The diamond-water paradox probably came first not just because it's central in the abstract space, but because it's the obvious starting point, the one that any economist would reach for automatically. The Chinese pedestrian example, which you rightly identify as maximally distant from that center, probably didn't come from a deliberate search for a distant example. It likely surfaced because Tyler had encountered it as a striking case — it was already salient in his mental space before he started writing. [See text: Ch.1 · Marginalism applied to unexpected human decisio...]
This matters for your model because it means the sampling isn't random in the way you'd want for genuine representativeness. It's availability-weighted. The examples Tyler uses are the ones that were already near the surface of his mental space — memorable, counterintuitive, previously encountered in contexts that made them stick. The Chinese pedestrian example is striking precisely because it's morally jarring, which is exactly the kind of thing that gets encoded deeply in memory. So the apparent randomness of his sample — the apparent coverage of the idea space — may partly be an artifact of which examples happen to be memorable rather than which examples are genuinely representative of the full space of marginalist analysis. [See text: Ch.1 · Best marginalists deploy both intuitive and tau...]
The opportunity cost calculation you describe is real, but it probably operates differently at different stages. Early in the chapter, the marginal value of a new example is high for the reason you identify — the sample is sparse, coverage is low. But the type of opportunity cost shifts as the chapter progresses. Early on, the cost is mostly keyboarding time and pacing. Later, the cost becomes something more like reader patience and argumentative momentum. Tyler needs to get somewhere — the examples are scaffolding for the larger argument about why marginalism took so long to develop and what that tells us about scientific progress. At some point additional examples don't just have diminishing sample value; they actively impede the book's forward motion. That's a different kind of opportunity cost than the one you specified, and it probably dominates the calculation in the later stages.
What's genuinely interesting about your framework is what it implies about the structure of the resulting sample. If Tyler is making implicit marginal calculations of this kind, you'd predict the examples cluster in a particular way: one or two near the paradigmatic center, one or two at the extremes, and then a filling-in of the space that stops well before genuine saturation. [See text: Ch.1 · Marginalism's confrontational side and resistan...] Which is roughly what you observe. The credit card example you mention — the closest he gets back to the concrete center — is probably there because at that point in the chapter the reader needs grounding after a series of more exotic cases. That's an implicit calculation about reader psychology, not just idea-space coverage.
You're right that there's no clean empirical test. Tyler himself couldn't reconstruct the process accurately — the decisions were made too fast and too implicitly. But the framework does generate predictions about what you'd expect to see in the example distribution, and those predictions seem to roughly match what's actually there. That's not nothing, even if it falls well short of confirmation.