I am in the process of preparing a working paper based on my review of Grace Lindsay’s recent book, Models of the Mind. This is draft material that will go in the working paper just before that review.
* * * * *
Reality is not perceived, it is enacted – in a universe of great, perhaps unbounded, complexity.
This started, well, I don’t know, maybe way back when I was six or seven and thought the world was actually a movie projected on a giant screen for the enjoyment of the Baby Jesus, or perhaps when I was a bit older and wondered what there was before the universe came into existence, maybe still later, when I thought that, as a poem about poetry, “Kubla Khan” held the secret to literary criticism, but really, forget all that. It’s there to be sure, but it has no direct bearing. This particular project started with a book, Grace Lindsay’s Models of the Mind, which I reviewed for 3 Quarks Daily. That review constitutes the next section of this working paper. The purpose of this section is to provide a context for that review.
I saw Lindsay’s book, and to some extent reviewed it, as a work of philosophy, though not philosophy as it exists in philosophy departments. I’m using the word in a different sense, one that I did in fact pick up from a philosopher, Peter Godfrey-Smith. In this view philosophy is a way of making sense of the world in the broadest possible conspectus. That is what philosophy was in the ancient world, but as we developed and accumulated knowledge, philosophers became specialists of various kinds, some became social and behavioral scientists, others became natural scientists, while still others practiced a humanities discipline. Philosophy itself became a humanities discipline, and, as such, became narrowly focused.
But we still need to be able to make sense of the world, all of it, in some way or another. And so in the last several decades we have seen intellectual specialists of one sort or another write books for a general audience – Richard Dawkins, E.O. Wilson, Stephen Mithen, Jared Diamond, Stephen Hawking, and Murray Gell-Mann come to mind, but there are many others (see my post on John Brockman’s Third Culture). While these books may be directed at a general audience, I suspect that they are written to serve their authors’ need to see how things fit together. That is to say, they are written out of philosophical hunger, if you will. As such, they are works of philosophy in this extended sense.
It is in that sense that Models of the Mind is a work of philosophy. I might even hazard the assertion that it betokens a new philosophy of mind, but that might confuse it with the philosophy of mind that exists in philosophy departments. If I did that I’m afraid I’d be asking the little word “new” to do an awful lot of work. Maybe Lindsay took a course in the philosophy of mind at some point, maybe she even reads around in it, but this book certainly didn’t come out of the questions raised in that discipline. Where does this book come from? Look at the subtitle, How Physics, Engineering, and Mathematics Have Shaped Our Understanding of the Brain. That’s where it comes from. That is to say, it doesn’t come from any one, two, or three, or even five or eight academic disciplines. It comes from many and none. I see this book as part of a larger intellectual development, one not well-defined (which is probably a good thing), that will replace the traditional philosophy of mind, and a few other disciplines as well, with a more adequate approach to understanding the mind and the brain.
The most fruitful conversations about mind and brain have been those between students of neural wetware, on the one hand, and software and hardware (digital and analog) on the other. In particular, it seems to me that those conversations have been far more consequential than philosophical discussions of whether or not computers can think or the brain is a computer (which I take up later in this document). It is time to liberate those conversations from constraints imposed by our Cartesian legacy. That, in effect, is what Lindsay proposes. And that is how I framed my review.
Speculative engineering and the problem of design
I coined the term “speculative engineering” in the preface to my book on music, Beethoven’s Anvil: Music in Mind and Culture. Here is what I said (p. xiii):
Engineering is about design and construction: How does the nervous system design and construct music? It is speculative because it must be. The purpose of speculation is to clarify thought. If the speculation itself is clear and well-founded, it will achieve its end even when it is wrong, and many of my speculations must surely be wrong. If I then ask you to consider them, not knowing how to separate the prescient speculations from the mistaken ones, it is because I am confident that we have the means to sort these matters out empirically. My aim is to produce ideas interesting, significant, and clear enough to justify the hard work of investigation, both through empirical studies and through computer simulation.
That book is speculative in a way that Lindsay’s is not. I aimed for an account of how music works, from the nervous system, in performance, to the social group, from human origins to the present.
Lindsay makes no pretense of presenting a theory about the mind or brain. Rather she offers a review of models, many models, that have been and are being used in studying the brain. But, as her subtitle, How Physics, Engineering, and Mathematics Have Shaped Our Understanding of the Brain, indicates, she understands that an engineering perspective is important.
As I said, engineering, unlike physics, but perhaps not so unlike mathematics, is about design and construction. If we want to understand how the mind works, we must understand it from an engineering perspective. We want to see how things function, what the parts are and how they fit together to achieve a particular end. Consider this passage where Newell and Simon talk about computer science:
Computer science is an empirical discipline. We would have called it an experimental science, but like astronomy, economics, and geology, some of its unique forms of observation and experience do not fit a narrow stereotype of the experimental method. None the less, they are experiments. Each new machine that is built is an experiment. Actually constructing the machine poses a question to nature; and we listen for the answer by observing the machine in operation and analyzing it by all analytical and measurement means available. Each new program that is built is an experiment. It poses a question to nature, and its behavior offers clues to an answer. Neither machines nor programs are black boxes; they are artifacts that have been designed, both hardware and software, and we can open them up and look inside. We can relate their structure to their behavior and draw many lessons from a single experiment.
“They are artifacts that have been designed,” that’s a crucial statement. The human brain has been designed as well, but not by engineers. Rather it was ‘designed’ and ‘constructed’ in a process of biological evolution taking place over hundreds of millions of years. But we must think like engineers if we are to understand how it works.
In my review of Models of the Mind I pick out two models that are particularly important in understanding how the mind and brain have been engineered. One is the 1943 model that McCulloch and Pitts proposed for the function of neurons: think of them as logic elements in an electronic circuit. The other is the Perceptron that Frank Rosenblatt constructed in the late 1950s.
As you may know, McCulloch and Pitts proposed that neurons were logical units in brain-based electrochemical circuits (see p. 10). Their proposal was all but ignored by neuroscientists, but it found some favor among philosophers – Daniel Dennett was quite struck by it – and researchers in the emerging study of artificial intelligence loved it, for it squared with their preconceptions about the nature of higher mental functioning. Those researchers thought of the mind as processing symbols. This resulted in a great deal of research, at least one Nobel Prize – yes, Herbert Simon’s 1978 prize was awarded in economics but, really, everyone knows he got it for his work in psychology and AI – a lot of hope for useful systems, but not much practical success. The research enterprise, now known as GOFAI (good old-fashioned artificial intelligence), collapsed in the mid-1980s.
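The McCulloch-Pitts idea can be sketched in a few lines of code: a unit fires (outputs 1) when the weighted sum of its binary inputs reaches a threshold, and with suitable weights and thresholds single units compute Boolean functions. The particular weights and thresholds below are illustrative choices of mine, not values from the 1943 paper.

```python
def mp_neuron(inputs, weights, threshold):
    """Fire (1) if the weighted sum of binary inputs reaches the threshold."""
    total = sum(w * x for w, x in zip(weights, inputs))
    return 1 if total >= threshold else 0

# Logic gates, each realized by a single threshold unit:
def AND(a, b):
    return mp_neuron([a, b], [1, 1], threshold=2)   # fires only if both inputs fire

def OR(a, b):
    return mp_neuron([a, b], [1, 1], threshold=1)   # fires if either input fires

def NOT(a):
    return mp_neuron([a], [-1], threshold=0)        # an inhibitory input
```

Since any Boolean circuit can be wired up from such gates, the model invited the thought that networks of neurons could, in principle, compute anything a logic circuit can – which is exactly what appealed to the early AI researchers.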
What happened? The systems were brittle, failing catastrophically without warning, and required many hours of meticulously crafted knowledge representation (KR), as it came to be known, knowledge generally grounded in some version of logic – many versions were tried. Unlike human or, for that matter, animal minds, GOFAI systems were explicitly designed by humans.
Rosenblatt’s legacy was different (see p. 10). Perceptrons were learning machines. Their initial promise was high, but they failed to deliver. However, Rosenblatt’s ideas were revived and supplemented in the 1980s, as GOFAI was going down, under the guise of connectionism and eventually gave rise to the very powerful systems we see today, which are based on artificial neural nets (ANN). Researchers design a learning architecture – there must be at least as many such architectures as there were schemes for KR during the GOFAI era – programmers implement it, and it computes over some database, of text, images, speech, what have you, and ‘learns’ the structure of objects in the database. Just what it learns, and how, is somewhat obscure, but it works, after a fashion, and generally much better than the GOFAI systems did.
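The contrast with McCulloch-Pitts units is that a perceptron’s weights are not fixed by a designer; they are adjusted from examples. A minimal sketch of the perceptron learning rule, trained here on the OR function (a linearly separable case, so the rule converges); the learning rate and epoch count are arbitrary choices for illustration:

```python
def predict(w, b, x):
    """Threshold unit: fire if the weighted sum plus bias is positive."""
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

def train_perceptron(data, epochs=20, lr=1.0):
    """Adjust weights whenever an example is misclassified (Rosenblatt's rule)."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, target in data:
            err = target - predict(w, b, x)   # -1, 0, or +1
            w[0] += lr * err * x[0]
            w[1] += lr * err * x[1]
            b += lr * err
    return w, b

# Training data for OR, which is linearly separable:
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w, b = train_perceptron(data)
```

No one tells the machine what the weights should be; they emerge from its encounters with the data. That, in miniature, is the shift from GOFAI’s hand-crafted knowledge to learned structure.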
The important point is that these systems teach themselves. AI researcher Yann LeCun has made some remarks that seem relevant to me. This is from a podcast quoted by Kenneth Church and Mark Liberman in a recent article, The Future of Computational Linguistics: On Beyond Alchemy:
All of AI relies on representations. The question is where do those representations come from? So, uh, the classical way to build a pattern recognition system was . . . to build what’s called a feature extractor . . . a whole lot of papers on what features you should extract if you want to recognize, uh, written digits and other features you should extract if you want to recognize like a chair from the table or something or detect...
If you can train the entire thing end to end—that means the system learns its own features. You don’t have to engineer the features anymore, you know, they just emerge from the learning process. So that, that, that’s what was really appealing to me.
That second paragraph is the important one. The system that performs the task at hand, the performing system – whether it is classifying images, translating from one language to another, playing chess, whatever – is not designed by humans. Rather it is designed by an architecture and that architecture is in turn designed by humans.
None of those architectures so far look much like the human brain. That is one thing, and very important. Just as important, however, is the fact that some of them produce very powerful systems, some of practical value. They do so because the design of the performing system has been displaced from human designers and onto a machine.
I take it, then, that the problem of design is central to speculative engineering. And it is central, not just as an abstract issue, but as an inquiry into how the design process is implemented in a physical device, whether organic or artificial. How, specifically, does a system fit itself to, and learn from, the environment in which it must function?
A timely book
Lindsay didn’t foreground the problem of design in Models of the Mind, but it is there, along with many other issues, or problematics as many humanists like to say. She lays them out for you, one after the other, chapter after chapter. I’ve been reading and thinking about mind and brain for half a century, but I’ve never seen a book quite like this.
It is a timely book, a book we need. It feels foundational. Certainly, students of neuroscience need to encounter it early in their education. But so should students of psychology, of artificial intelligence, and of the mind more generally, even literary critics. That’s where I started half a century ago, with questions about the structure of a poem, “Kubla Khan,” a structure that somehow smelled of computation. I still haven’t figured out that structure. Possibly I never will. But I am content to leave that task to others, confident that that puzzle is more likely to be resolved in the conceptual universe implied by Models of the Mind than in the universe available to me back in the Jurassic Era. I call that progress.
 That statement is taken from William Benzon and David G. Hays, A Note on Why Natural Selection Leads to Complexity, Journal of Social and Biological Structures 13: 33-40, 1990, https://www.academia.edu/8488872/A_Note_on_Why_Natural_Selection_Leads_to_Complexity.
 See my post “What is Philosophy?” New Savanna, May 17, 2013, https://new-savanna.blogspot.com/2013/05/what-is-philosophy.html.
 “Brockman’s Third Culture and the emergence of a new philosophical regime,” New Savanna, May 4, 2021, https://new-savanna.blogspot.com/2021/05/brockmans-third-culture-and-emergence.html.
 I have used the label “philosophy new” to tag relevant posts at New Savanna. This is a link to those posts, https://new-savanna.blogspot.com/search/label/philosophy%20new.
 Allen Newell and Herbert A. Simon, Computer Science as Empirical Inquiry: Symbols and Search, Communications of the ACM, 19(3), 1976, p. 114.
 See his informal remarks, “The Normal Well-Tempered Mind,” Edge, May, 2013, where he also speculates about memes coming to tame neurons that have become “a little bit feral,” https://www.edge.org/conversation/daniel_c_dennett-the-normal-well-tempered-mind.
 Kenneth Church and Mark Liberman, The Future of Computational Linguistics: On Beyond Alchemy, Front. Artif. Intell., 19 April 2021, https://doi.org/10.3389/frai.2021.625341.
 I tell that story in Touchstones • Strange Encounters • Strange Poems • the beginning of an intellectual life, November 2015, https://www.academia.edu/9814276/Touchstones_Strange_Encounters_Strange_Poems_the_beginning_of_an_intellectual_life.