
Saturday, August 30, 2025

LLMs as Cultural Technologies: Four Views

Henry Farrell, Large language models are cultural technologies. What might that mean? Programmable Mutter, Aug. 18, 2025.

It’s been five months since Alison Gopnik, Cosma Shalizi, James Evans and I wrote to argue that we should not think of Large Language Models (LLMs) as “intelligent, autonomous agents” paving the way to Artificial General Intelligence (AGI), but as cultural and social technologies. In the interim, these models have certainly improved on various metrics. However, even Sam Altman has started soft-pedaling the AGI talk. I repeat. Even Sam Altman.

So what does it mean to argue that LLMs are cultural (and social) technologies? This perspective pushes Singularity thinking to one side, so that changes to human culture and society are at the center. But that, obviously, is still too broad to be particularly useful. We need more specific ways of thinking - and usefully disagreeing - about the kinds of consequences that LLMs may have.

This post is an initial attempt to describe different ways in which people might usefully think about LLMs as cultural technologies. Some obvious provisos. It identifies four different perspectives; I’m sure there are more that I don’t know of, and there will certainly be more in the future. I’m much more closely associated with one of these perspectives than the others, so discount accordingly for bias. Furthermore, I may make mistakes about what other people think, and I surely exaggerate some of the differences between perspectives. Consider this post less a definitive description of the state of debate than a one-man attempt at the kind of preliminary exchange that reveals misinterpretations and clears the air, so that proper debate can perhaps get going. Finally, I am very deliberately not enquiring into which of these approaches is right. Instead, by laying out their motivating ideas as clearly as I can, I hope to spur a different debate about when each of them is useful, and for which kinds of questions.

Gopnikism

I’m starting with this because, for obvious reasons, it’s the one I know best. The original account is this one, by Eunice Yiu, Eliza Kosoy and Alison, which seeks to bring together cognitive psychology and evolutionary theory. They suggest that LLMs face sharp limits in their ability to innovate usefully, because they lack direct contact with the real world. Hence, we should treat them not as agentic intelligences, but as “powerful new cultural technologies, analogous to earlier technologies like writing, print, libraries, internet search and even language itself.”

Behind “Gopnikism” lies the mundane observation that LLMs are powerful technologies for manipulating tokenized strings of letters. They swim in the ocean of human-produced text, rather than the world that text draws upon. Much the same is true, pari passu, for LLMs’ cousin-technologies which manipulate images, sound and video. That is why all of them are poorly suited to deal with the “inverse problem” of how to reconstruct “the structure of a novel, changing, external world from the data that we receive from that world.”
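To make the “tokenized strings” point concrete, here is a minimal, purely illustrative Python sketch (a toy word-level vocabulary of my own invention, not any real tokenizer): from the model’s side, a sentence is just a sequence of integer ids, with no handle on the cats or mats the words refer to.

    # Toy illustration only: map words to integer ids the way a (vastly
    # simplified) tokenizer might. The model operates on the ids, not the world.
    vocab = {"the": 0, "cat": 1, "sat": 2, "on": 3, "mat": 4, "<unk>": 5}

    def tokenize(sentence):
        """Words become ids; anything outside the vocabulary collapses to <unk>."""
        return [vocab.get(word, vocab["<unk>"]) for word in sentence.lower().split()]

    print(tokenize("The cat sat on the mat"))  # [0, 1, 2, 3, 0, 4]
    print(tokenize("The dog sat on the mat"))  # [0, 5, 2, 3, 0, 4] -- "dog" is just another id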

Interactionism

Interactionist accounts of LLMs start from a similar (but not identical) take on culture as a store of collective knowledge, but a different understanding of change. Gopnikism builds on ideas about how culture evolves through lossy but relatively faithful processes of transmission. Interactionism instead emphasizes how humans are likely to interpret and interact with the outputs of LLMs, given how they understand the world. Importantly for present purposes, cultural objects are more likely to persist when they somehow click with the various specialized cognitive modules through which human intelligence perceives and interprets its environment, and indeed are likely to be reshaped to bring them more into line with what those modules lead us to expect.

From this perspective, then, the cultural consequences of LLMs will depend on how human beings interpret their outputs, which in turn will be shaped by the ways in which biological brains work. The term “interactionism” stems from this approach’s broader emphasis on human group dynamics, but by a neat coincidence, their most immediate contribution to the cultural technology debate, as best I can see, rests on micro-level interactions between human beings and LLMs.

Structuralism

I’ve recently written at length about Leif Weatherby’s book, Language Machines, which argues that classical structuralist theories of language provide a powerful theory of LLMs. This articulates a third approach to LLMs as cultural technologies. In contrast to Gopnikism, it doesn’t assume that culture’s value stems from its connection to the material world, and pushes back against the notion that we ought to build a “ladder of reference” from reality on up. It also rejects interactionists’ emphasis on human cognitive mechanisms:

A theory of meaning for a language that somehow excludes cognition—or at least, what we have often taken for cognition—is required.

Further:

Cognitive approaches miss that the interesting thing about LLMs is their formal-semiotic properties independent of any “intelligence.”

Instead of the mapping between the world and learning, or between the architecture of LLMs and the architecture of human brains, it emphasizes the mappings between large-scale systems. The most important is the mapping between the system of language and the statistical systems that can capture it, but it is interested in other systems too, such as bureaucracy.

Language models capture language as a cultural system, not as intelligence. … The new AI is constituted as and conditioned by language, but not as a grammar or a set of rules. Taking in vast swaths of real language in use, these algorithms rely on language in extenso: culture, as a machine.

The idea, then, is that language is a system, the most important properties of which do not depend on its relationship either to the world that it describes or to the intentions of the humans who employ it.

Role play

Weatherby is frustrated by the dominance of cognitive science in AI discussions. The last perspective on cultural technology that I am going to talk about argues that cognitive science has much more in common with Wittgenstein and Derrida than you might think. Murray Shanahan, Kyle McDonell and Laria Reynolds’ Nature article on the relationship between LLMs and “role play” starts from the profound differences between our assumptions about human intelligence and how LLMs work. Shanahan, in subsequent work, takes this in some quite unexpected directions.

I found this article a thrilling read. Admittedly, it played to my priors. I first came across LLMs in early/mid 2020 thanks to “AI Dungeon,” an early implementation of GPT-2, which used the engine to generate an infinitely iterated role-playing game, starting in a standard fantasy or science fiction setting. AI Dungeon didn’t work very well as a game, because it kept losing track of the underlying story. I couldn’t use it to teach my students about AI as I had hoped, because of its persistent tendency to swerve into porn. But it clearly demonstrated the possibility of something important, strange and new.

There's much more at the link.

Needless to say, I am very sympathetic to this line of thinking. 

Cultural Technology, Old School (in Jersey City)
