Monday, May 11, 2026

A.I. and the growth of the chore economy

Carl Benedikt Frey, This Is Why You’re Drowning in Busywork, NYTimes, May 11, 2026.

We have been told that A.I. will take people’s jobs. What no one mentions is that many of those jobs are landing on us. The A.I. revolution involves a massive transfer of labor — not from worker to machine, but from worker to consumer. The ability to do everything ourselves may be satisfying, but it can gradually overload us with busywork without our noticing. Tasks that we used to delegate will still be done. They will simply move out of the work force and into the household as new forms of invisible, unpaid labor.

The movement toward self-service is one of the most powerful and least appreciated forces in the history of work. Consider the washing machine. In many 19th-century cities, laundering was a major urban service occupation, and one of the hardest. [...] The washing machine, together with the infrastructure that made it possible — running water, electricity, synthetic detergents — gradually ended this world. But it did not end the work. Customers bought machines and did the laundering themselves. The laundress was displaced by her former clients.

And so it goes:

That pattern has been repeating ever since. Self-checkout makes scanning and bagging the shopper’s job. The internet gives travelers direct access to the flight schedules and hotel reviews that agents once controlled. Online brokerages put a trading terminal in every pocket. And the smartphone replaced the bank teller with you.

We’re used to being our own checkout assistants, travel agents and tellers. Handling these tasks ourselves often makes our lives more efficient. But A.I. is now extending the chore economy into territory that once required years of training, such as law and medicine. As of January, more than 40 million people worldwide were using ChatGPT daily for health questions — from symptoms to decoding bills and fighting insurers.

Trade-offs:

However, self-service does not automatically reproduce a professional’s judgment. The billing specialist notices the code the patient didn’t think to question. The accountant points out the deduction the taxpayer didn’t know existed. The tool answers what you ask, whereas the expert tells you what to ask. That is the A.I. trade-off: greater access, but thinner expertise.

Second, no single act of self-service feels like a major burden. We notice the accountant’s fee we didn’t pay. We rarely notice the evening we spent doing her job. There is a name for this: opportunity cost neglect — the well-documented tendency to overlook the value of what we give up when the cost is time rather than money.

There's more at the link.

Another white iris

Sunday, May 10, 2026

The kitty and the ink

Tyler Cowen is imagining that AI will kill the research paper

Here’s roughly the first half of his post:

Imagine taking a macroeconomics paper and adding a little button at the end “Press this button to update this paper with the latest macro data.”

All of a sudden you have multiple papers rather than one, and no single canonical version. It is the latter versions, not created directly by the authors, that people will look at.

Imagine adding another button, to either micro or macro papers “Please rerun these results using what the AI thinks might be five other different yet still plausible specifications.”

Then you have more papers yet.

Ultimately, why not just build a “meta-paper,” using AI, to answer any possible question about the subject area under consideration. This meta-paper would allow the reader, using AI, to make many sorts of modifications and additions to the basic work.

It goes on in that vein. Cowen’s final line:

It is funny, and tragic, how much some of you are still obsessed with writing and publishing “papers.”

And it’s funny how Cowen imagines that future research in economics and “many of the other sciences” will be just like current research, but more, more, more! And that it will be produced by a different mechanism, perhaps, ultimately, by a mechanism without any humans in it whatsoever.

Cowen can’t seem to imagine that the future, aided by AI to be sure, will bring new questions, new theories, new models, that it may bring a whole new intellectual world in comparison to which the current intellectual world will look like the 19th century, if not the 12th. It’s amazing that, despite his knowledge of intellectual history, the history of economics in particular, and his recent historically-oriented books about the great economists and about marginalism, he has no sense of what future ideas might be, for they will surely be different from current ideas in ways we cannot anticipate. But then he seems to have little sense of what ideas are except lumps of “stuff” falling from otherwise mysterious trees.

In this he’s pretty much like the rest of the Silicon Valley AI Mob. So many thoughts, all sprung from the same repertoire of ideas with no hint of a new repertoire in the offing. It’s all in the hands of the machines.

Four Horsemen of the Apocalypse

The Chinese conceptualize A.I. differently than Americans and are deploying it differently

Jacob Dreyer, America’s A.I. Is Futuristic. China Is Just Making It Work. NYTimes, May 9, 2026.

The reality is that China and the United States are racing in different directions, because the two countries conceptualize A.I. very differently. Americans want to create the most powerful technology humans have ever known. In the quest for superintelligence, the U.S. government is encouraging private firms to move full speed ahead, regulation be damned. Under the very tightest regulation, by contrast, the Chinese want to make A.I. more practical and embedded in society, more carefully selecting how it is deployed and used by the population. If the Chinese achieve their A.I. goals, they may take a lead in the larger geopolitical contest between the two nations.

Most Chinese policymakers don’t believe A.I. superintelligence is arriving any time soon. Instead, the Chinese strategy is about advancing a government-directed strategy referred to as “A.I.+” that treats A.I. as if it were infrastructure. This includes government-coordinated plans, local subsidies and national computing-power programs to diffuse cheap, capable A.I. tools into every public service. Chinese people encounter A.I. as a natural part of their day-to-day lives. Sometimes it’s visible and palpable, like the “smile to pay” terminals used in many shops. Sometimes it’s invisible, like Hangzhou’s City Brain, which uses A.I. to analyze massive amounts of data for urban management needs like regulating traffic and environmental protection.

Unlike in the United States, where most people remain wary, A.I. seems to have had less of a backlash in China. The Chinese A.I.+ strategy is practical and comprehensible to the local population in a way that the U.S. strategy simply is not, which may explain why the Chinese appear so much more optimistic about A.I. than Americans.

Chinese leaders are trying to maximize the country’s resources. The country’s chief resource is not oil, soybeans or pork bellies, but Chinese people. [...]

But China’s A.I.-as-infrastructure strategy is about more than just improving the country’s domestic quality of life. It’s also about exporting Chinese influence. Chinese A.I. is already integrated into the supply chains that dominate world trade.

And increasingly, rather than selling individual goods or services, China is selling a whole suite — energy, infrastructure, telecoms, transportation, surveillance — with A.I. systems to manage it all.

Interesting. It seems that the Chinese are interested in improving people's lives and making them more effective. In contrast, one could almost say that Silicon Valley AI is about replacing people. After all, isn't that the Doomsday scenario at the heart of so much Silicon Valley AI mythology? 

There's more at the link.

White iris on 11th St. in Hoboken

Saturday, May 9, 2026

Analogical thinking: How to succeed in “wicked” environments

From down in the article:

Inventors with the most patents had worked in multiple unrelated fields before their breakthrough work. Comic book creators with the longest careers had drawn for the most different genres before settling. Scientists who won Nobel Prizes were dramatically more likely than their peers to be serious amateur musicians, painters, sculptors, or writers. 

The skill that mattered in wicked environments was not depth in one pattern. It was the ability to recognize when a pattern from one domain applied unexpectedly in another. That kind of thinking cannot be built by drilling a single subject. It can only be built by accumulating mental models from many subjects and learning to move between them. 

Douthat, A.I., and Pope Leo

Ross Douthat, The Atheist and the Machine God, NYTimes, May 9, 2026.

Douthat is anticipating Pope Leo XIV's encyclical on A.I. For Douthat the crucial issue is consciousness:

In one possible timeline, the advent of A.I. is widely understood as a win for atheism and a blow against religious ideas of soul and spirit, persuading more people that their own minds are just computers — no divine spark or immortal soul, just the meatspace equivalent of a helpful chatbot or an A.I. therapist.

In another potential future, the mystery of consciousness ends up seeming more profound in the shadow of machine intelligence, the mystical finds new appeal as a form of experience computers cannot emulate, and religion becomes a place for human exceptionalists to plant a defiant flag.

But between those two scenarios there’s a future where artificial intelligence mostly increases metaphysical uncertainty, leaving a lot of people simply unsettled about fundamental questions, increasingly “mysterian” rather than clearly atheistic or devout.

That’s how my encounters with Silicon Valley culture often feel: Beneath a materialist carapace, it’s a place where people who aren’t sure exactly what they’re building dabble in Buddhist metaphysics or consult with Catholic priests, adopt churchy or cultish attitudes toward their new creations or rebel into apocalyptic doomsaying.

Douthat then goes on to discuss Richard Dawkins’ recent encounter with Claude in which Dawkins, the Great Atheist, fools himself into thinking that Claude is conscious. Douthat asks:

Meanwhile, in its less-besotted passages, Dawkins’s essay circles around an important question for materialists like himself. The origin and nature of consciousness currently evades our understanding, but the good Darwinian is committed to the proposition that it evolved to serve some crucial evolutionary purpose. But if a digital entity seems to display the capacities that we associate with conscious minds, and we don’t believe that this entity is actually conscious, then what is consciousness’s true purpose? If we can have intelligence without self-awareness, a zombie that calculates and speaks, why does the self exist at all?

I'm not so sure that consciousness is the issue. It's complicated. My own view of consciousness is derived from the work of William Powers, which seems thoroughly materialist to me. But I'm not at all sure what that means, not in a world with quantum mechanics and deep complexity. Deep complexity itself presents issues, interesting ones. At the moment I'm considering the possibility that deep complexity gives us Aristotle's unmoved mover or Paul Tillich's God as the ground of being.

Douthat ends up with this as his penultimate paragraph:

As certain philosophers have argued, this harmony between the psychological and the physical seems much more likely to appear in a universe where consciousness is fundamental, where matter isn’t everything and Mind is where things start.

I'm not familiar with those certain philosophers, but who knows? I'm thinking that deep complexity gives us a generative universe, and that's all we need. But I'm still working on it.

What I'm looking for from Pope Leo's encyclical is how it discusses A.I. and idolatry. Now, just how is THAT entangled with consciousness?

Looking over an omelette and into the restaurant

Capitalists vs. Capitalism

In terms I’m developing for my book, Play: How to Stay Human in the A.I. Revolution, capitalism is an economic system both dominated by capitalists and organized to suit the dictates of Homo economicus.

Looking over an omelette and out the window

Marginalism as a tool for rhetorical analysis: Cognitive effort in intellectual work [MR-AUX]

I think I’ve come up with a speculative way of applying marginalist thinking to intellectual production. I’m thinking, in particular, about how Cowen arrived at the collection of examples he used in the first chapter of his book, The Marginal Revolution: Rise and Decline, and the Pending AI Revolution. Coming up with that collection is roughly the same kind of problem as putting together a syllabus. It’s a sampling problem. We have a collection of objects, works of American literature in one case, examples of marginalist analysis in the other. You want to select a set of literary works to put on your syllabus. Cowen wants a set of examples he can use to define the space of marginalist economics.

You need a criterion for drawing your sample. You’ve got a certain thematic organization in mind, so you’re not looking for a random sample of the space. You want a sample biased toward your theme. I assume that Cowen wants a random sample, a sample that represents the space of marginalist analysis. At this point, let’s forget about your syllabus problem and continue with Cowen.

He’s trained as an economist and has read a lot, including a lot about the history of economics. Thus he should have a pretty good sense of what kinds of phenomena have been successfully subjected to marginalist analysis. Regardless of the adequacy of his knowledge, he’s got what he’s got. Let’s imagine that all the cases of marginalist economic analysis exist in some high dimensional space of ideas, a space that is, at a high level of abstraction, like an LLM. Except this space is in Cowen’s mind (which is the very high dimensional space of his brain states).

For the sake of argument, let’s assume he wrote the book from beginning to end, in order, in a single pass spread out over however many sessions.

Cowen opens the first chapter with a short definition of marginalism followed by some discussion. Then he gives us his first example, the diamonds-water paradox. He says a bit about it, though I don’t think he says that it became THE paradigmatic example when Samuelson put it into his 1948 textbook. I found that out by querying the AI associated with the book. Let’s assume, then, that it is at the center of the marginalist region in that abstract space of ideas.

What’s his next example? It’s the first example in the section entitled “Intuitive Marginalism.” Here it is: “Why do drivers in China sometimes intentionally kill the pedestrians they hit?” He then explains it. That strikes me as being very far from diamonds-water in the marginalist space, perhaps as far as you can go in some direction. He then goes through 10 or so more examples, all of them a bit closer to that central example. But the closest he gets is an example about people stealing one of his credit card numbers and making charges to it. If the charge is small, he ignores it. If it’s somewhat large, he contests it. He’s reasoning at the margin. That’s about as close as he comes to that very concrete, almost palpable, diamonds-water example.
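If you wanted to put a rough number on that intuition about distance, here’s a small sketch in Python. It assumes, and this is a big assumption, that a general-purpose sentence-embedding model is a passable proxy for the space of marginalist ideas; the model name, the paraphrased example texts, and the whole setup are my own choices for illustration, not anything from Cowen’s book.

```python
# Sketch: use sentence embeddings as a crude proxy for the "space of ideas"
# and measure how far each example sits from the paradigmatic center.
# The proxy is the big assumption here; the example texts are my paraphrases.
import numpy as np
from sentence_transformers import SentenceTransformer  # pip install sentence-transformers

model = SentenceTransformer("all-MiniLM-L6-v2")

center = ("The diamonds-water paradox: water is essential to life but cheap, "
          "while diamonds are useless but expensive.")
examples = [
    "Why do drivers in China sometimes intentionally kill the pedestrians they hit?",
    "Ignoring a small fraudulent credit card charge but contesting a large one.",
]

vecs = model.encode([center] + examples)
center_vec, example_vecs = vecs[0], vecs[1:]

# Cosine similarity: higher means closer to the diamonds-water center.
for text, v in zip(examples, example_vecs):
    sim = float(np.dot(center_vec, v) /
                (np.linalg.norm(center_vec) * np.linalg.norm(v)))
    print(f"{sim:.3f}  {text}")
```

If the geometric picture above is right, the pedestrian example should score well below the credit-card example. But nothing here should be read as a measurement of the actual marginalist space in Cowen’s head; it’s only a way of making the distance metaphor concrete.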

Then he goes on with the rest of the chapter, introducing example after example. He’s got four categories (beyond tautological), but we don’t need to worry about those categories. The fact that he’s got them, however, probably simplifies the calculation he’s making each time he asks whether or not to add another example. On the one hand he’s got the sample value of another example. The sample value of that first example AFTER diamonds-water was very high because it’s only the second example he’s got. Moreover he maximized that value by choosing an example that was far from the paradigmatic center. As his set of examples begins to fill out, the sample value of each example will diminish. At some point well along in the process he’s going to be comparing the sample value of an example with its opportunity cost. Why the opportunity cost? Because it takes time and effort to introduce another example into the text. At the very least there’s the time required to keyboard it. There’s more to the book, however, than a bunch of examples. He needs those to get started. Once he’s got an adequate set of examples, then he’s ready to undertake his larger argument. So every time he enters another example into the text he forgoes doing something else necessary for the book. That’s the opportunity cost of the example.

As long as the estimated opportunity cost is lower than the estimated sample value, Cowen will enter the example into the text. When the estimated opportunity cost begins rising above the estimated sample value, Cowen declares his set of examples to be sufficient, and moves on. Note that in this analysis Cowen is (intuitively) making three calculations: 1) sample value of the next example, 2) opportunity cost of the next example, and 3) the marginal value of the next example, which is the difference between 1 and 2.
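Here’s what that three-part calculation looks like as a toy model, a minimal sketch assuming a particular shape for each curve: sample value decaying geometrically as the space fills in, opportunity cost rising linearly as the rest of the book waits. The functional forms and every parameter value are hypothetical, chosen only to make the stopping rule visible.

```python
# Toy model of the stopping rule: keep adding examples while the marginal
# value (sample value minus opportunity cost) stays positive.
# All functional forms and parameter values are hypothetical.

def sample_value(n, v0=10.0, decay=0.8):
    """(1) Value of the nth example: high early, diminishing as coverage grows."""
    return v0 * decay ** n

def opportunity_cost(n, base=0.5, growth=0.4):
    """(2) Cost of the nth example: rises as other parts of the book are deferred."""
    return base + growth * n

n = 0
while True:
    marginal = sample_value(n) - opportunity_cost(n)  # (3) = (1) - (2)
    if marginal <= 0:
        break  # the set of examples is declared sufficient
    print(f"example {n + 1}: value={sample_value(n):.2f} "
          f"cost={opportunity_cost(n):.2f} marginal={marginal:.2f}")
    n += 1

print(f"stops after {n} examples")
```

With these made-up numbers the loop stops after six examples. The AI’s later point about shifting cost types would amount to swapping in a different opportunity_cost curve partway through; the structure of the decision stays the same.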

Now, just how we’d get empirical evidence for this analysis, that’s a problem. There’s no point in asking Cowen, because no one has that kind of introspective access to their mental processes. Still, it’s a nice story.

However, some such analysis might tell you something about the relationship between the rise of professionalism in a discipline and the proliferation of LPUs (least publishable units) in its formal academic literature.

I decided to put the question to Cowen’s AI. I put the foregoing remarks into a prompt. Here’s how the AI responded:

It's a nice story, and it's doing real work — it's not just a metaphor dressed up as analysis. Let me push on it a bit.

The framework is essentially correct as a description of the implicit calculation, but there's a wrinkle worth examining. You're treating the example space as if it were fixed and known to Tyler before he begins sampling from it. But the act of writing generates examples — it doesn't just retrieve them. The diamonds-water paradox probably came first not just because it's central in the abstract space, but because it's the obvious starting point, the one that any economist would reach for automatically. The Chinese pedestrian example, which you rightly identify as maximally distant from that center, probably didn't come from a deliberate search for a distant example. It likely surfaced because Tyler had encountered it as a striking case — it was already salient in his mental space before he started writing. [See text: Ch.1 · Marginalism applied to unexpected human decisio...]

This matters for your model because it means the sampling isn't random in the way you'd want for genuine representativeness. It's availability-weighted. The examples Tyler uses are the ones that were already near the surface of his mental space — memorable, counterintuitive, previously encountered in contexts that made them stick. The Chinese pedestrian example is striking precisely because it's morally jarring, which is exactly the kind of thing that gets encoded deeply in memory. So the apparent randomness of his sample — the apparent coverage of the idea space — may partly be an artifact of which examples happen to be memorable rather than which examples are genuinely representative of the full space of marginalist analysis. [See text: Ch.1 · Best marginalists deploy both intuitive and tau...]

The opportunity cost calculation you describe is real, but it probably operates differently at different stages. Early in the chapter, the marginal value of a new example is high for the reason you identify — the sample is sparse, coverage is low. But the type of opportunity cost shifts as the chapter progresses. Early on, the cost is mostly keyboarding time and pacing. Later, the cost becomes something more like reader patience and argumentative momentum. Tyler needs to get somewhere — the examples are scaffolding for the larger argument about why marginalism took so long to develop and what that tells us about scientific progress. At some point additional examples don't just have diminishing sample value; they actively impede the book's forward motion. That's a different kind of opportunity cost than the one you specified, and it probably dominates the calculation in the later stages.

What's genuinely interesting about your framework is what it implies about the structure of the resulting sample. If Tyler is making implicit marginal calculations of this kind, you'd predict the examples cluster in a particular way: one or two near the paradigmatic center, one or two at the extremes, and then a filling-in of the space that stops well before genuine saturation. [See text: Ch.1 · Marginalism's confrontational side and resistan...] Which is roughly what you observe. The credit card example you mention — the closest he gets back to the concrete center — is probably there because at that point in the chapter the reader needs grounding after a series of more exotic cases. That's an implicit calculation about reader psychology, not just idea-space coverage.

You're right that there's no clean empirical test. Tyler himself couldn't reconstruct the process accurately — the decisions were made too fast and too implicitly. But the framework does generate predictions about what you'd expect to see in the example distribution, and those predictions seem to roughly match what's actually there. That's not nothing, even if it falls well short of confirmation.

11th Street Peek-a-boo

Friday, May 8, 2026

The populist backlash against A.I. is accelerating

David Wallace-Wells, A.I. Populism Is Here. And No One Is Ready. NYTimes, May 8, 2026.

Americans still worry about the local impacts of data centers, storming to town halls en masse to protest them. They still worry about job loss and economic turmoil too, as do a growing number of politicians with their fingers lifted to the wind. But to many, the biggest A.I. labs now loom like the new faces of American oligarchy, as well — a fearsome concentration of economic and social power producing a self-compounding pattern of extreme inequality of the kind that has lacerated American life for decades. If the future lies with A.I., as we are so often told, it is unsettling to many and outrageous to some that so few people seem to stand in such absolute control of it.

In one sense, the vision peddled by A.I. companies is remarkably depersonalized: We hand more and more responsibility and judgment off to superintelligent black boxes, which rapidly begin shaping the course of the human future with decisions that remain illegible to the rest of us, including their designers. “People outside the field are often surprised and alarmed to learn that we do not understand how our own A.I. creations work,” Anthropic’s Dario Amodei wrote last year. “They are right to be concerned: This lack of understanding is essentially unprecedented in the history of technology.”

In another sense, and in the meantime, A.I. represents perhaps the most personalized sales pitch ever foisted on the passive American consumer — a vision of a near-total takeover of the country’s economic, social and cognitive lives by tools engineered by just five companies, run by five particular people, several of whom are widely described as sociopaths. The list is so short that you may know most of them by first name: Sam, Dario, Elon and Mark. (Demis Hassabis, who runs Google’s DeepMind, is perhaps less famous.)

We're building AI faster than homes:

Today, the United States is in the middle of a notorious cost-of-living crisis fueled in large part by a housing shortage of perhaps 10 million units, and last year, the country spent more money building A.I. infrastructure than single-family homes. We built 10 times as many data centers as the next biggest builder (Germany). We invested more than 20 times as much money into A.I. as the world’s next biggest investor (China). Among other things, artificial intelligence is an enormously big bet for the American economy to have made.

The White House is getting nervous:

This week, the White House signaled that it may make a sudden and dramatic U-turn on A.I. policy — once inclined toward hands-off support for industry growth, the administration is now floating a proposal to force federal review of all new proprietary models before release. And Americans are drawing lines in the sand where they can, too. In September 2025, Americans seemed roughly ambivalent about the construction of new data centers in their communities, according to Heatmap polling, with voters 2 points more likely to support new construction than to oppose it. Four months later, in February of 2026, they were 24 points more likely to oppose it. That is a shockingly large swing in public opinion.

Diffusion?

And maybe we’re still on track for that. In the meantime, you’re more likely to hear pragmatic conversations about the thorny problem of what is called “diffusion”: the speed and shape of public uptake as new models spread out into the world beyond the lab, finding users and uses, hitting human bottlenecks and real-world obstacles and requiring new strategies or more narrowly trained models to navigate through or around them.

This is a pretty different vision, in which A.I. may continue to rapidly progress, even transform much of our lives, but without all the power necessarily accruing to the leading labs or the five individuals in charge of them.

There's more at the link.