Why then should we talk of language evolution rather than language change? And the same with culture: why evolution rather than change? The reason, I suspect, is that evolution implies memory in a way that change does not. A system that changes may or may not have memory; but a system can only evolve if it has memory. Not necessarily perfect or complete memory, but memory. Evolving systems can ‘track’ and learn from their environment in a way that merely changing ones cannot.
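The distinction can be made concrete with a toy model of my own devising (nothing here is drawn from McCarty): two processes search the same simple landscape, one that merely changes (a memoryless random walk) and one that evolves (it remembers and retains the best candidate found so far). Only the second can ‘track’ its environment.

```python
import random

random.seed(42)

def fitness(x):
    # A simple one-dimensional "environment": higher is better, peak at x = 3.
    return -(x - 3.0) ** 2

def memoryless_change(steps=1000):
    # Each step forgets the past entirely: pure change, no memory.
    x = 0.0
    for _ in range(steps):
        x = random.uniform(-10, 10)
    return x  # wherever it happened to land last

def evolving_search(steps=1000):
    # Retains (remembers) the best candidate so far: change *with* memory.
    best = 0.0
    for _ in range(steps):
        candidate = best + random.gauss(0, 1.0)  # vary the remembered state
        if fitness(candidate) > fitness(best):
            best = candidate                     # keep what worked
    return best

print(memoryless_change())  # arbitrary; no tendency toward the peak
print(evolving_search())    # near 3.0: it has tracked the environment
```

The point of the sketch is only this: both processes are driven by the same random variation, but the one with memory accumulates its history into something like adaptation, while the memoryless one merely changes.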
Though I’ve been mulling over some version of this notion for years – going back to thinking about episodic structure in Dave Hays’s working group in graduate school in the 1970s – the prompt for this most recent version is a recent (and rather strange, but interesting) article by Willard McCarty.[1] He’s interested in articulating a properly humanist way of investigating artificial intelligence. He notes (p. 154):
The best, most highly developed example for AI is the rapidly evolving AlphaGo system, designed to play the ancient board-game known as go in Japanese, weiqi in Chinese (Papineau and Black 2001; Fairbairn 2007). AlphaGo’s victories are impressive, especially given the antiquity and complexity of the game and the discipline required to master it. The significance for AI, however, is that in the 2017 (AlphaGo Zero) version, it acquired its skill by playing against itself repeatedly, starting off as tabula rasa with no historical knowledge of play beyond the basic rules (Silver, Schrittwieser, and Simonyan 2017), and that in doing so it deployed legal moves that no human player had thought to make in the approximately 2500-year history of the game. Ambitions for the latest algorithm, AlphaZero, are stronger yet: to achieve “superhuman performance in many challenging domains” (Silver, Hubert, and Schrittwieser 2017).
For the question of intelligence that I raise here, these developments are unquestionably important. What they import, however, is not so much a superhuman intelligence but a clearly different kind. However dependent on the rigidly defined rules and structure of this and other games, these developments in AI serve as an existence-proof that, in the lineage of Turing’s provocation, draws attention to possibilities of intelligence in the built world, in conversation with us.
Well, OK. One might reply with some version of “intelligence”. But what would that tell us?
Given McCarty’s insistence on the digital nature of these computing systems (pp. 152-153) I suspect that would be his answer. His opening salvo on that front (p. 152):
The machine we have is a composite of hardware and of software that progresses stepwise in layers, from the circuitry that creates and maintains the crisp binary signals so often assumed to be simply a given, to the ever friendlier, “intuitive” interface that trains as much as reflects the user’s intuition. The low-level details can be left to electrical and software engineers, but this logic survives through all the layers of abstraction and has much to do with how its resources are applied and how we are affected (Evens 2015).
Color me skeptical. A pile of bricks no more determines what one can construct from them than the digital nature of computing machinery dictates the uses to which it can be put.[2]
I’m inclined to look elsewhere, to history. When a human learns to play Go, or any other such board game, she enters into the history of game play. One must, of course, play others, and they in turn have played others: a linkage of playing pairs that extends back to the game’s origins. But AlphaZero sidestepped this history. It did not play against humans. It played against itself.
That in itself would have been to little avail if its raw computing capacity – a function of memory size and CPU cycles – hadn’t allowed it to explore a very large region of the game space in a relatively short period of time. Consequently it was able to reach regions of the space that humans had never explored, and whose lessons had therefore never been accumulated into human knowledge of gameplay. The question we must ask, of course, is how it was able to learn from that experience, to accumulate and distill that history into tactics and strategies available in current play. However that works, I believe the underlying digital nature of the computing circuitry is incidental.
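The shape of that tabula-rasa process can be sketched in miniature. This is emphatically not AlphaZero’s actual algorithm (which combines deep networks with Monte Carlo tree search); it is the same self-play idea applied to the toy game of Nim: the learner starts with nothing but the legal moves and distills its own playing history into a policy.

```python
import random

random.seed(0)

# Nim: a pile of stones; players alternate removing 1-3; taking the last stone wins.
# The agent knows only the legal moves -- no human game records, no strategy hints.

Q = {}  # (pile, move) -> estimated value for the player about to move

def legal_moves(pile):
    return [m for m in (1, 2, 3) if m <= pile]

def choose(pile, epsilon):
    # Epsilon-greedy over the agent's own accumulated experience.
    moves = legal_moves(pile)
    if random.random() < epsilon:
        return random.choice(moves)
    return max(moves, key=lambda m: Q.get((pile, m), 0.0))

def self_play_episode(start=12, epsilon=0.2, alpha=0.1):
    # The same policy plays both sides; when the game ends, each move is
    # credited from the point of view of the player who made it.
    pile, history, player = start, [], 0
    while pile > 0:
        m = choose(pile, epsilon)
        history.append((pile, m, player))
        pile -= m
        player = 1 - player
    winner = 1 - player  # the player who took the last stone
    for (p, m, who) in history:
        reward = 1.0 if who == winner else -1.0
        Q[(p, m)] = Q.get((p, m), 0.0) + alpha * (reward - Q.get((p, m), 0.0))

# Accumulate and distill history: many games against itself.
for _ in range(20000):
    self_play_episode()

# The distilled policy: optimal play leaves the opponent a multiple of 4.
print({pile: choose(pile, 0.0) for pile in (5, 6, 7)})
```

Notice that nothing in the code encodes Nim strategy; the “leave a multiple of 4” rule emerges from the accumulated record of self-play, which is the sense in which memory, not the digital substrate, does the work.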
To return to my original question, it is the ability to accumulate and distill historical experience (change) that makes a system an evolutionary one. Evolutionary systems must respond to the world, but they are not pushed around by it willy-nilly.
And that is why we must speak of cultural evolution, not merely of cultural change.
[1] Willard McCarty, “Modeling, ontology and wild thought: Toward an anthropology of the artificially intelligent,” HAU: Journal of Ethnographic Theory 9, no. 1 (Spring 2019): 147-161, https://doi.org/10.1086/703872.
[2] See the discussion of implementation in William L. Benzon, “Pursued by Knowledge in a Fecund Universe,” Journal of Social and Evolutionary Systems 20, no. 1 (1997): 93-100, https://www.academia.edu/8790205/Pursued_by_Knowledge_in_a_Fecund_Universe.