Jim Olds, The Chronology Problem, Mar. 12, 2026.
We are surprisingly bad at knowing when things began.
I’ve been thinking about this for a while, partly because I lived through several of the transitions we now misremember. In 1987, I used the Internet for early text-based email, file transfers, and reaching colleagues at other universities. In August of 1991, with a direct hit from Hurricane Bob looming, I moved all of my image data from Woods Hole to NIH in Bethesda in a matter of minutes. This was entirely unremarkable at the time. And yet when I mention it today, people often look mildly startled, as if I’ve claimed to have owned a smartphone in 1987. In their minds, the Internet began sometime around 1994 or 1995, when the Web arrived and made it visible to everyone. Before that, apparently, there was nothing.
Olds then goes on to say more about the (deep) origins of the web, artificial intelligence, climate science, and economics. Here's what he had to say about AI:
The field of artificial intelligence may be the most dramatic case study in collective chronological confusion we have. Most people who interact with today’s language models and image generators believe they are witnessing something genuinely unprecedented — a technology that sprang into being sometime around 2017. The actual history is more complicated and more interesting.
The mathematical foundations for neural networks were laid in 1943, when Warren McCulloch and Walter Pitts published a paper describing how neurons could, in principle, compute logical functions. Frank Rosenblatt simulated a working perceptron at the Cornell Aeronautical Laboratory in 1958 — a system that could learn from examples. The 1986 backpropagation paper by Rumelhart, Hinton, and Williams, which most practitioners treat as a founding document, was itself a rediscovery and refinement of ideas that had been circulating since the early 1970s. Yann LeCun was training convolutional neural networks to read handwritten digits for the U.S. Postal Service in 1989. The architecture underlying those systems is recognizably the ancestor of what powers modern computer vision.
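The core ideas in that lineage are simple enough to sketch in a few lines. Below is an illustrative toy example (not code from Rosenblatt's or McCulloch and Pitts's papers): a threshold unit of the kind McCulloch and Pitts described, combined with Rosenblatt's error-correction rule to learn the logical AND function from examples. The learning rate, epoch count, and encoding are my own assumptions for the sketch.

```python
def step(x):
    """McCulloch-Pitts-style threshold unit: fire iff weighted input >= 0."""
    return 1 if x >= 0 else 0

def train_perceptron(samples, epochs=10, lr=1.0):
    """Rosenblatt-style learning: nudge weights toward each labeled example.
    Hyperparameters here are illustrative, not historical."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            pred = step(w[0] * x1 + w[1] * x2 + b)
            err = target - pred          # +1, 0, or -1
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# The AND truth table as training data: the unit must learn to fire
# only when both inputs are on.
and_data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(and_data)
preds = [step(w[0] * x1 + w[1] * x2 + b) for (x1, x2), _ in and_data]
print(preds)  # → [0, 0, 0, 1]
```

Because AND is linearly separable, the perceptron convergence theorem guarantees this rule finds a correct threshold; the famous limitation, publicized by Minsky and Papert in 1969, is that no single such unit can learn XOR.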
None of this was secret. It was published, presented, and in some cases deployed in real systems. What happened instead was a kind of institutional forgetting, accelerated by two “AI winters” — periods when funding dried up, interest collapsed, and computer science turned its attention elsewhere. Researchers who had spent careers on neural approaches moved on or retired. Graduate students who might have built on their work were instead trained in other paradigms. When the hardware finally caught up with the ambitions of the 1980s, around 2012, the rediscovery felt like a revolution. In some ways, it was. But the conceptual foundations were not new, and the people who had laid them got less credit than they deserved, partly because so many of the field’s new practitioners didn’t know they existed.
The practical cost here is the same as elsewhere: repeated investment in problems that had already been partially solved, frameworks that were novel mainly to their authors, and a set of origin myths that flatter the present at the expense of the past. The deeper cost is that we don’t understand what was tried and discarded and why — which algorithms were abandoned for reasons of computational expense rather than theoretical inadequacy, and which might be worth revisiting now that the expense has fallen.
To Olds’s list I would add Miriam Yevick’s 1975 paper, “Holographic or Fourier logic,” published in Pattern Recognition. Unfortunately, that paper got lost because it didn’t fit into either cognitive science or artificial intelligence. What she proved was that for one class of visual objects, those with complex geometry, neural networks provided the best computational regime, while for another class, those with simple geometry, symbolic computation provided the best computational regime. That has a direct bearing on the current debate over whether new architectures involving symbolic processing are necessary.