Sunday, April 18, 2021

Horgan’s The End of Science, a reconsideration, Part 4: “Meat that thinks,” my personal quest

That’s one of the remaining mysteries Horgan names in his new preface: “How, exactly, does a chunk of meat make a mind?” That’s one I’ve been pondering in one form or another since my undergraduate years at Johns Hopkins in the middle and late 1960s and into my master’s work in the early 1970s. Literature, poetry in particular, has been at the center of my investigations, but I’ve ranged far and wide over the years.

What’s important is that arguably two of my central projects, understanding the structure (not the meaning) of Coleridge’s “Kubla Khan” and, more generally, playing a role in developing tools for the computational analysis of literary texts, have failed. But “failed” is too strong a word. When I set out on those investigations I had hopes and vague expectations about where they would lead. Didn’t get there in either case. But I did get somewhere, that I did, though just where...who knows? Something new has always turned up.

Why did those investigations fail? Cognitive limitations. Mine, certainly, but also limitations in the conceptual tools available to me. I wasn’t ready. Our intellectual culture wasn’t. Perhaps, surely, one day we will be ready. But I’m not offering any predictions.

* * * * *

I’d been a philosophy major at Johns Hopkins. But philosophy did not turn out to be the synthesizing discipline I’d been hoping for. Literature, on the other hand, was looking promising. For one thing I’d come within the ambit of Dr. Richard Macksey, a charismatic young humanist and polymath who seemed to know damned near everything, or at any rate, an impressively diverse lot of things.[1] He’d played a major role in organizing the now (in)famous structuralism conference in the fall of 1966, the one that brought Jacques Derrida, Jacques Lacan, Tzvetan Todorov, and Roland Barthes, among others, to America.[2]

While I was on campus at the time, I didn’t attend any of the sessions. As they were conducted in French, I would have been lost; I neither spoke nor read the language. But I was in that milieu at Hopkins: structuralism, semiotics, and, dare I say it, intellectual revolution. Note that while that conference is now cited as the beginning of post-structuralism and deconstruction, it wasn’t set up to be the beginning of the end of structuralism. It was organized to be the beginning, period. And that’s how I took it.

Through Macksey I became steeped in structuralism, particularly Lévi-Strauss. I learned about generative grammar from James Deese, a psycholinguist, and about Jean Piaget (who wrote a slim volume on structuralism) from Mary Ainsworth. She also introduced me to attachment theory, primate ethology, and the work of John Bowlby, who coined the term “environment of evolutionary adaptedness” (aka EEA). In my senior year I took a two-semester course in Romantic Literature from Earl Wasserman. That’s where I became enthralled by “Kubla Khan.”

I decided to write a master’s thesis on it, with Macksey as my director. I set out to do a structuralist analysis of the poem. It took a while and didn’t end up looking like any of the examples I’d been working from. It took me a while to get used to what I’d found.[3]

What I discovered is that each of the poem’s two parts was organized like one of those nested Russian dolls, like this (the orange arrows indicate the direction of reading):

[Figure: Nested journey]

I didn’t know quite what to make of it, nor did Macksey, or anyone else. But it smelled like computation, some kind of nested loop working sub rosa. I would go on to publish a slimmed-down version of that thesis in Language and Style in 1985.[4]
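To make the “nested doll” intuition a bit more concrete, here is a minimal toy sketch, purely my own illustration and not the analysis in the thesis: nested sections represented recursively, with a reading that enters each level on the way in and exits it on the way out. The section labels are hypothetical stand-ins, not terms from the poem or the thesis.

```python
# Toy illustration only: a Russian-doll structure as nested sections,
# traversed in and back out, the way a nested loop would work sub rosa.

from dataclasses import dataclass, field
from typing import List

@dataclass
class Section:
    label: str                                   # hypothetical label for a span of the poem
    inner: List["Section"] = field(default_factory=list)

def traverse(section: Section, depth: int = 0) -> None:
    """Print the path of a reading that descends into each nested section
    and then returns, like entering and leaving nested dolls."""
    print("  " * depth + f"enter {section.label}")
    for child in section.inner:
        traverse(child, depth + 1)
    print("  " * depth + f"exit  {section.label}")

# Hypothetical nesting for one of the poem's two parts.
part_one = Section("outer frame", [
    Section("middle envelope", [
        Section("innermost core"),
    ]),
])

traverse(part_one)
```

Run it and the printout goes in to the innermost core and then back out level by level, which is the shape the orange arrows in the diagram are meant to suggest.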

I took my degree and my thesis and hustled off to the State University of New York (SUNY) at Buffalo to get a doctorate. The English Department there was interdisciplinary to the core and, at the time, the best experimental program in the nation.[5] I had no idea whether or not they could support my inquiry, but what could I do? I wanted the degree.

I needn’t have worried. It turned out that there was a professor in the Linguistics Department there who could help me and the English Department was happy to have me work with him. His name was David Hays. I joined his research group even while working toward my Ph.D. in English.

Hays had been a first-generation investigator (starting in the mid-1950s) in machine translation – the use of digital computation to translate a text from one language to another – directing the RAND Corporation’s efforts. The research was funded by the Defense Department; they wanted Russian technical publications translated into English. Alas, by the mid-1960s the prospects of high-quality translation seemed remote and the government dropped its funding. The fiscally devastated field regrouped under the rubric of computational linguistics. In 1969 Hays left RAND for SUNY Buffalo, where he became the founding chairman of the newly formed Linguistics Department.

I met him in the spring of 1974 and joined his research group that fall. Hays had developed an approach to computational semantics which I learned with the intention of applying it to “Kubla Khan.” Alas, the poem resisted. But a Shakespeare sonnet, the famous “Lust in Action” (number 129) worked nicely. I published that in a special centennial issue of MLN (Modern Language Notes), the second oldest literary journal in the United States, and edited at Johns Hopkins.[6]

Here’s one of the diagrams (of eleven) from that article:

[Figure: semantic network diagram from the Sonnet 129 article, MLN 1976]

That’s a fragment of a semantic or cognitive network. The nodes represent concepts and the arrows (called “edges” or “arcs”) represent relations between them. This was a popular formalism in the cognitive sciences in the 1970s and 1980s, and it treats the mind as a collection of symbols. A text, whether written or spoken, is then analyzed as a path through such a network.
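For readers who have never seen such a network, here is a minimal sketch of the general idea, not the particular formalism Hays developed: concepts as nodes, labelled relations as directed edges, and a “text” read off as a path through the network. The concepts and relation labels below are my hypothetical stand-ins, loosely in the spirit of Sonnet 129’s vocabulary, not the content of the actual diagram.

```python
# Toy semantic network: nodes are concepts, directed labelled edges are
# relations, and a text is treated as a path through the network.

from collections import defaultdict

class SemanticNetwork:
    def __init__(self):
        # edges[head] holds a list of (relation, tail) pairs
        self.edges = defaultdict(list)

    def relate(self, head, relation, tail):
        """Add a directed, labelled edge from one concept to another."""
        self.edges[head].append((relation, tail))

    def path(self, *concepts):
        """Follow a sequence of concepts, reporting the relation used at each
        step; a toy stand-in for analyzing a text as a path through the net."""
        steps = []
        for head, tail in zip(concepts, concepts[1:]):
            relation = next((r for r, t in self.edges[head] if t == tail), None)
            steps.append((head, relation, tail))
        return steps

# Hypothetical fragment, for illustration only.
net = SemanticNetwork()
net.relate("lust", "is-a", "desire")
net.relate("desire", "leads-to", "action")
net.relate("action", "followed-by", "shame")

print(net.path("lust", "desire", "action", "shame"))
```

The point of the sketch is simply that once concepts and relations are explicit in this way, the route a reading takes through them can itself be inspected, which is the idea behind the Prospero device mentioned below.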

During that period Hays was invited to review the literature in computational linguistics for a journal then called Computers and the Humanities. Since I had been scanning the literature of the field – AI, linguistics, computational linguistics, information science, and cognitive psychology – as bibliographer for The American Journal of Computational Linguistics (which Hays edited), he asked me to draft the piece. He then made some additions and changes. The article came out at roughly the same time as my article for MLN.[7]

Citing my MLN piece, we suggested that at some time in the future it would be possible to have a machine “read” a Shakespeare play so that the investigator could then examine the paths the computer took through the network. We called this rare device Prospero. We didn’t say when one could expect to use a Prospero device – Hays believed such predictions were foolish – but in my mind I thought 20 years would be about right. That paper appeared in 1976; twenty years later would be 1996, the year The End of Science appeared. Was that marvel then available? No. Is it available now? No. Sometime in the future? I’m not planning on it.

I’m getting ahead of myself. I finished my degree, writing a dissertation that was in large part a technical exercise in cognitive science, with Sonnet 129 as my major example. I then took a faculty job at Rensselaer Polytechnic Institute in 1978. I failed to get tenure and began a catch-as-catch-can life. But I remained in touch with Hays. We collaborated until his death in 1995.

Our first project was to publish a paper entitled “Principles and Development of Natural Intelligence,” a clear shot across the bow of the Good Ship AI.[8] It was a crazy-ass paper, so crazy I probably shouldn’t mention it in polite company. We reviewed literature in a variety of fields – cognitive, perceptual, and developmental psychology, neuroscience, comparative neuroanatomy – and proposed five quasi-mathematical principles. We aligned these principles with gross features of neuroanatomy, human ontogeny, and vertebrate phylogeny. As I said, crazy.

Why’d we do it? Because we knew that those cognitive networks had to be implemented in neural tissue somehow and we knew that there was more to the mind than symbols. We wrote that paper to get the lay of those other lands. We also collaborated on a series of papers on cultural evolution. I’ll say a bit more about that work in the next section.

In the mid-1990s Horgan’s book came out, and I wrote my essay-review.[9] At the same time literary critics were finally becoming interested in cognitive science, though a brand of cognitive science that had lost touch with computation. So I began to think about literary criticism again. I presented some papers at conferences and published a small handful of papers in the new millennium, including a long paper proposing nine propositions in a theory of literary form, one of which is that literary form is computational.[10]

Meanwhile I’d been corresponding with the late Walter Freeman, a neuroscientist at Berkeley; at about the same time I had an opportunity to write a book about music. Freeman tutored me in neurodynamics. I made use – good use, I hope – of that in the book, which looked at origins, performance and perception, feeling, interpersonal coordination (there’s now a growing neuroscience literature on that), ecstatic states, society, and history.[11]

In the second decade of the new millennium, work in the so-called “digital humanities” began bubbling up through the ground. I was particularly interested in work in literary studies. Much of it used techniques from machine learning, which had largely supplanted the symbolic techniques of the 1970s and 1980s, the work I was trained in. That work examines large bodies of texts, while I was interested in detailed analysis of the structure of individual texts, but I saw no inherent conflict. I’ve spent a fair amount of time blogging about this work and think some of it is quite important.[12]

And then, just last year, GPT-3 landed. GPT-3 is a massive AI engine from OpenAI that does an eerie job of imitating human language. It had the cyber-kids breathless with anticipation: Is the era of artificial super-intelligence upon us?

I think not.

During that period I was conducting an email interview with Hollis Robbins, Dean of Arts and Humanities at Cal State at Pomona. Her new book, Forms of Contention: Influence and the African American Sonnet Tradition (UGA Press 2020), had come out and I was going to publish the interview in 3 Quarks Daily. We decided to end the interview by replaying the classic battle between John Henry and the steam hammer. GPT-3 played the role of the steam hammer and Marcus Christian, an African-American poet from the first half of the 20th century, played the role of John Henry. An online acquaintance of mine, Phil Mohan, had access to GPT-3 and prompted it to produce a poem based on the initial lines of Christian’s sonnet, “The Craftsman.” GPT-3’s output was interesting, but it wasn’t much of a poem.[13]

As for the future, I don’t know, nor does anyone else, no matter what they say to the contrary.[14] When I say “I don’t know,” I’m speaking from experience, not uttering a pro forma platitude.

Speaking of experience, you might be thinking, what of those cognitive networks of your youth, of your dreams of a Prospero machine? Those dreams fell through, no?

Yes.

It’s complicated. Remember, Hays saw the first generation of work in computational linguistics collapse in the mid-1960s. The era of symbolic computation – those cognitive networks as well as other approaches – fell through in the mid-1980s. I didn’t find that particularly surprising nor even distressing. Neither Hays nor I believed that symbolic computing was The Answer for either natural or artificial intelligence. We’d already moved on with that (crazy-ass) paper on natural intelligence.

Yes, a lot has happened since the end of World War II. Yet John von Neumann’s slim volume, The Computer and the Brain, remains worth reading despite all the advances in neuroscience and computing since it was published in 1958. Think of that classic work and others – Chomsky on syntax, McCulloch on the brain, George Miller on cognition, and so forth – as a first landing on the eastern seaboard of North America (or some other suitable place, like Mars). I’m not sure that we’ve reached the Rocky Mountains, much less crossed over to the Pacific. We’re a long way from laying track on a trans-continental railroad. By the time that happens we’ll have been through a cataclysmic insight or three.

Why am I so sure? Truth be told, I’m not sure. I don’t trust that our institutions will support the kind of exploration needed. The cataclysms are out there. I can smell ‘em, the possibility and the necessity. But the actuality?

I told you Hays didn’t believe in trying to predict when this or that ‘thing’ would be worked out or discovered. He was speaking from hard-won experience. He’d seen the pursuit of machine translation collapse in his face – and all those hopes and promises! I’d been watching when AI Winter struck in the mid-1980s and, as I’ve said already, Prospero didn’t happen and isn’t likely to happen in the foreseeable future.

Horgan grants that “our descendants will learn much more about nature,” so, by trivial implication, he grants that they will learn more, and likely much more, about how meat makes mind. But no more cataclysms in the future? How does he know?

I half suspect – no more – that there may already have been an unrecognized cataclysm or two, unrecognized because we don’t yet know enough to understand where we’ve already been. Horgan is right to call out the replication crisis in psychology and he’s right to call the cybernauts on their weirdly self-glorifying (not his term) predictions of computational superintelligence. Such things seem to be, as they say, the price of doing business. Why believe they’re all there is? They aren’t.

References

[1] “It got adults off your back” • Richard Macksey remembered (2019), 7 pp., https://www.academia.edu/40040691/_It_got_adults_off_your_back_Richard_Macksey_remembered.

[2] For a brief account see, Brett McCabe, “Structuralism’s Samson,” Johns Hopkins Magazine, Fall 2012, https://hub.jhu.edu/magazine/2012/fall/structuralisms-samson/.

[3] “Touchstones,” Paunch 42 - 43: 4 - 16, December 1975. I’ve revised and updated it as “Touchstones • Strange Encounters • Strange Poems • the beginning of an intellectual life,” November, 2015, https://www.academia.edu/9814276/Touchstones_Strange_Encounters_Strange_Poems_the_beginning_of_an_intellectual_life.

[4] William Benzon, “Articulate Vision: A Structuralist Reading of ‘Kubla Khan’,” Language and Style, Vol. 8: 3-29, 1985, https://www.academia.edu/8155602/Articulate_Vision_A_Structuralist_Reading_of_Kubla_Khan_.

[5] Bruce Jackson, “Buffalo English: Literary Glory Days at UB,” Buffalo Beat, February 26, 1999 (retrieved October 23, 2013), http://www.acsu.buffalo.edu/~bjackson/englishdept.htm.

[6] William Benzon, “Cognitive Networks and Literary Semantics,” MLN 91: 1976, 952-982, https://www.academia.edu/235111/Cognitive_Networks_and_Literary_Semantics.

[7] William Benzon and David Hays, “Computational Linguistics and the Humanist”, Computers and the Humanities, Vol. 10. 1976, pp. 265-274, https://www.academia.edu/1334653/Computational_Linguistics_and_the_Humanist.

[8] William Benzon and David Hays, “Principles and Development of Natural Intelligence,” Journal of Social and Biological Structures, Vol. 11, No. 8, July 1988, 293-322, https://www.academia.edu/235116/Principles_and_Development_of_Natural_Intelligence.

[9] William L. Benzon, “Pursued by Knowledge in a Fecund Universe,” Journal of Social and Evolutionary Systems 20(1): 93-100, 1997, https://www.academia.edu/8790205/Pursued_by_Knowledge_in_a_Fecund_Universe.

[10] William Benzon, “Literary Morphology: Nine Propositions in a Naturalist Theory of Form,” PsyArt: An Online Journal for the Psychological Study of the Arts, August 2006, Article 060608, https://www.academia.edu/235110/Literary_Morphology_Nine_Propositions_in_a_Naturalist_Theory_of_Form.

[11] William Benzon, Beethoven’s Anvil: Music in Mind and Culture, Basic Books, 2001.

[12] For example, see the following working paper, an expanded version of a review I published in 3 Quarks Daily: “From Canon/Archive to a REAL REVOLUTION in literary studies,” Working Paper, December 21, 2017, 26 pp., https://www.academia.edu/35486902/From_Canon_Archive_to_a_REAL_REVOLUTION_in_literary_studies.

[13] Here’s the interview online, “An Electric Conversation with Hollis Robbins,” 3 Quarks Daily, July 20, 2020, https://www.3quarksdaily.com/3quarksdaily/2020/07/an-electric-conversation-with-hollis-robbins-on-the-black-sonnet-tradition-progress-and-ai-with-guest-appearances-by-marcus-christian-and-gpt-3.html. There’s a downloadable version here, https://www.academia.edu/43668403/An_Electric_Conversation_with.

[14] For the record, while I think current work in deep learning and artificial neural nets is interesting and important, I don’t think it’s the last word. Count me among those who believe it will be necessary to find ways of integrating 70s and 80s era symbolic models with these newer models. See, e.g., Martin Kay, “A Life of Language,” Computational Linguistics 31(4), 425-438 (2005), http://web.stanford.edu/~mjkay/LifeOfLanguage.pdf.

Note: I am collecting the posts in this series under the label EndofScience.
