A couple weeks ago Tyler Cowen’s Emergent Ventures announced an interest in funding work in artificial intelligence (AI). I decided to apply. The application was relatively short and straightforward: Tell us about yourself and tell us what you want to do. So that’s what I did. I ended up recounting my intellectual career from “Kubla Khan” to attractor nets.
So, I’ve reproduced that narrative below, except for the final paragraph where I ask for money. It joins the many pieces I’ve written about my intellectual life. I list most of them, with links, after the narrative.
* * * * *
In a recent interview with Karen Hao, Geoffrey Hinton proclaimed, “I do believe deep learning is going to be able to do everything” (MIT Technology Review, 11.3.2020). His faith is rooted in the remarkable success of deep learning in the past decade. This notion of AI omnipotence has deep cultural roots (e.g. Prospero and his magic) and is the source of both wild techno-optimism and apocalyptic fears about future relations between AI and humanity. Momentum seems to be on Hinton’s side. I believe, however, that by establishing a robust and realistic view of the actual difference between artificial and natural intelligence, we can speed progress by tamping down both the hyperbolic claims and the fears.
In the 2010s I employed a network notation developed by Sydney Lamb (computational linguistics) to sketch out how salient features in the high-dimensional geometry of complex neurodynamics could map into a classical symbolic system. (Gary Marcus argues that Old School symbolic computing is necessary to handle common sense reasoning and complex thought processes.) My hypothesis is that the highest-level processes of human intelligence are best conceived in symbolic terms and that Lamb’s notation provides a coherent way of showing how symbols can impose high-level organization on those “big vectors of neural activity” that Hinton talks about.
Here is a quick account of how I arrived at that hypothesis.
For my Master’s Thesis at Johns Hopkins in 1972 I demonstrated that Coleridge’s “Kubla Khan” was a poetic map of the mind, structured like a pair of matryoshka dolls, each nested three deep. It “smelled” of an underlying computational process, nested loops perhaps. Over a decade later I published that analysis in Language and Style (1985) – at the time perhaps the premier journal about language and literature.
In 1973 I started studying for a PhD in English at SUNY Buffalo. The department was in the forefront of postmodern theory and known for its encouragement of interdisciplinary boldness, with René Girard, Leslie Fiedler, Norman Holland and several prominent postmodern writers on the faculty. There I met David Hays in the linguistics department. He had led the RAND Corporation’s team on machine translation in the 1950s and 1960s and later coined the term “computational linguistics.” I joined his research group and used computational semantics to analyze a Shakespeare sonnet, “The Expense of Spirit.” I published that analysis in 1976 in the special 100th anniversary issue of MLN (Modern Language Notes) – an intellectual first. Much of my 1978 dissertation, “Cognitive Science and Literary Theory,” consisted of semi-technical work in knowledge representation, including the first iteration of an account of cultural evolution that Hays and I would publish in a series of essays in the 1990s.
Prior to meeting Hays I had been attracted by a 1969 Scientific American article in which Karl Pribram, a Stanford neuroscientist, argued that vision and the brain more generally operated on mathematical principles similar to those underlying optical holography, principles also used in current convolutional neural networks. Neural holography played a central role in a pair of papers Hays and I published in the 1980s, “Metaphor, Recognition, and Neural Process” (American Journal of Semiotics, 1987), and “The Principles and Development of Natural Intelligence” (Journal of Social and Biological Structures, 1988). Drawing on a mathematical formulation by Miriam Yevick, both papers developed a distinction between holographic semantics and compositional semantics (symbols) and argued that language and higher cognitive processes required interaction between the two.
I spent the summer of 1981 working on a NASA project, Computer Science: Key to a Space Program Renaissance, leading the information systems group. I left the academic world in 1985 – I’d been on the faculty of the Rensselaer Polytechnic Institute – and collaborated with Richard Friedhoff on a coffee-table book about computer graphics and image processing, Visualization: The Second Computer Revolution (Abrams 1989). During this period Hays and I began publishing our articles on cultural evolution, beginning with “The Evolution of Cognition” (Journal of Social and Biological Structures, 1990). We argued that the development of a major new conceptual instrument, such as writing across the ancient world, enabled a new cognitive architecture, and that new architecture in turn supported new modes of thought and invention. When Europe had fully absorbed positional decimal arithmetic from the Arabs, the result was a new conceptual architecture which enabled the scientific and industrial revolutions and, indirectly, the novel. The twentieth century saw the development of the computer, first conceptually, and then implemented in electronic technology at mid-century. Another new cognitive architecture emerged, along with modernism in the arts.
At the end of the 1990s I entered into extensive correspondence with Berkeley’s Walter Freeman about complex neurodynamics. That work became central to the account of music I developed in Beethoven’s Anvil: Music in Mind and Culture (Basic Books, 2001). Meanwhile literary scholars were finally discovering cognitive science. I jumped back into the fray and published several articles, including a general theoretical and methodological piece, “Literary Morphology: Nine Propositions in a Naturalist Theory of Form” (PsyArt: An Online Journal for the Psychological Study of the Arts, 2006). I argued, among other things, that literary form could be expressed computationally in the way that, say, parentheses give form to LISP expressions. My early work on “Kubla Khan” and “The Expense of Spirit” exemplifies that notion of computational form, which I also discussed in “The Evolution of Narrative and the Self” (Journal of Social and Evolutionary Systems, 1993). Over the last two decades I have described and analyzed over 30 texts and films from this perspective, though most of that work is in informal working papers posted to Academia.edu, where I rank in the 99.9th percentile of publications viewed.
I am now who knows how many miles into my 1000-mile journey. The full range of the work I’ve done over a half century, all of it with computation in mind – language, literature, music, cultural evolution – remains open for further exploration. I am now ready to make significant progress on the problem that started my journey: the form and semantic structure of “Kubla Khan.” In so doing I intend to clarify the difference between natural and artificial intelligence.
“Kubla Khan” is one of the greatest English-language poems and has left its mark deep in popular culture. It has a rich formal structure and through that draws on the full range of human mental capacities. By explicating them I will propose a minimal, but explicit, set of capabilities for a truly general intelligence and show how they work together to produce a coherent object, a poem. I expect to show – though I can’t be sure of this – that some of those capacities are beyond the range of silicon.
I undertake to do so, not to save the human from the artificial, but to liberate the artificial from our narcissistic investment in it – the tendency to project our fears of the unknown and anxieties about the future onto our digital machines. Only when we have clarified the difference between natural and artificial intelligence will we be able to assess the potential dangers posed by powerful artificial mentalities. Artificial intelligence can blossom and flourish only if it follows a logic intrinsic and appropriate to it.
As futurist Roy Amara noted: We tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run. So it is with AI. Fear of human-level AI is a short-run concern, while the transformative effects of other-than-human AI will unfold over the long run.
I don’t intend to craft code. I’m looking to define boundaries and mark trails. I have spent a career examining qualitative phenomena and characterizing them in terms that make them more accessible to investigators with technical skills I lack. I seek to provide AI with ambitious and well-articulated goals that are richer, rather than simply “bigger and still bigger.”
* * * * *
The break – How I ended up on Mars
Here’s one way I’ve come to think about my career: I set out to hitch rides from New York City to Los Angeles. I don’t get there. My hitch-hike adventure failed. But if I ended up on Mars, what kind of failure is that? Lost on Mars! Of course, it might not actually be Mars. It might be an abandoned set on a studio back lot. Ever since then I’ve been working my way back to earth.
This material is about how I ended up on Mars while on the way to LA. That is, it is about how I set out to analyze “Kubla Khan” within existing frameworks but ended up outside those frameworks.
Touchstones • Strange Encounters • Strange Poems • the beginning of an intellectual life
https://www.academia.edu/9814276/Touchstones_Strange_Encounters_Strange_Poems_the_beginning_of_an_intellectual_life
This is about my undergraduate years at Johns Hopkins and my years as a master’s student in the Humanities Center, where I wrote my thesis on “Kubla Khan.” This is how I became an independent thinker with my own intellectual agenda. Among other things, it talks about the role that some altered mental states – two having nothing to do with drugs, one an LSD trip (that wasn’t trippy in the standard sense) – played in my early intellectual development. If you read only one of these pieces, this is the one.
Into Lévi-Strauss and Out Through “Kubla Khan”
https://new-savanna.blogspot.com/2013/08/into-levi-strauss-and-out-through-kubla.html
This is a story told in diagrams, about how I went from Lévi-Strauss style structuralism to the computationally inspired semantic networks of cognitive science. Read this second.