
Tuesday, March 28, 2023

Inching “Kubla Khan” and GPT into the same intellectual framework @ 3 Quarks Daily

That framework is my intellectual history:

From “Kubla Khan” through GPT and beyond
https://3quarksdaily.com/3quarksdaily/2023/03/from-kubla-khan-through-gpt-and-beyond.html

I think I should have said a bit more, but it runs over 4000 words as it is.

What did I miss?

I should probably have mentioned Karl Pribram’s 1969 article in Scientific American about neural holography. I would have read it during the same period in which I took the course on Romantic literature that opens the essay. The article grabbed me in the first place because I saw a (rough) analogy between what Pribram was proposing for the brain and what Lévi-Strauss had described in a conceptual structure he called the totemic operator, something I describe in this piece, Border Patrol: Arguments against the idea that the Mind is (somehow) computational in nature, which I oppose in the essay. That connection in turn piqued my interest in the brain.

Pribram became a central thinker for me. I devoured his Languages of the Brain when it came out in 1971. That’s where I learned about Ross Quillian’s work in computational semantics, which in turn led me more generally to semantic networks. This was during my initial work on “Kubla Khan” and my search for concepts and methods once I’d discovered the matryoshka doll embedding structures. This in turn links to the work Hays and I did in the mid-1980s on metaphor and on natural intelligence, both of which I do mention in the article.

The point, then, is that while I was trained in symbolic computing, I’ve always been aware of a fundamentally different approach to understanding mind-in-brain. Which is to say, I’ve NEVER seen an opposition between symbolic computing (GOFAI) and statistical techniques. Yes, they are different, and serve different purposes.

In that context I should also have mentioned Miriam Yevick’s work on holographic logic, which I found through Pribram. Hays and I give it a prominent place in our evolving understanding of the mind. Holography is based on convolution, and convolution became central to machine learning in the second decade of this century. But, as far as I have been able to tell, no one has read Yevick’s work. Why not? That it is from 1975 is not a good reason, any more than the 1958 publication date is a reason not to read John von Neumann’s The Computer and the Brain. Does the AI community have the collective curiosity and intellectual imagination that will be necessary to build on the extraordinary work of the last decade?
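The operation linking the two ideas is convolution itself. Here is a minimal sketch (mine, not from the essay or from Yevick) of plain discrete convolution, the same operation that underlies holographic models of memory and the convolutional layers of modern neural networks; the function name convolve1d and the toy arrays are purely illustrative:

```python
import numpy as np

def convolve1d(signal, kernel):
    """Plain discrete convolution: accumulate every product signal[i] * kernel[j]
    at output position i + j (equivalent to np.convolve in 'full' mode)."""
    n, k = len(signal), len(kernel)
    out = np.zeros(n + k - 1)
    for i in range(n):
        for j in range(k):
            out[i + j] += signal[i] * kernel[j]
    return out

x = np.array([0.0, 1.0, 2.0, 1.0, 0.0])   # a toy "pattern"
w = np.array([0.5, 1.0, 0.5])             # a toy filter
print(convolve1d(x, w))                    # same result as np.convolve(x, w)
```

In a convolutional network this is the per-channel core of each layer, with the kernel weights learned rather than fixed; in holography the analogous operation mixes a pattern with a reference wave so the whole can be reconstructed from parts.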

That’s one line of thought.

I should probably have laid more emphasis on thinking about the whole brain. Great literary and artistic works, like “Kubla Khan,” call on a wide range of mental, and hence neural, capabilities. If we want to understand how the mind works, we need to understand how those objects work in the mind. That’s a major reason I’ve thought about “Kubla Khan” over the years. It forces me to think about the whole mind, not just language, but it also provides me a vehicle for organizing that effort. I suppose I could have said a bit more about that, but that risks getting caught up in a lot of detail that, in this context, would have been distracting.

Finally, an issue: Should I have explicitly said that the article was implicitly a critique of current work in AI? In the case of GPTs we're dealing with technology that pumps out language by the bucketful but is designed and constructed by people who know relatively little about linguistics, psycholinguistics, or cognition. Lacking such knowledge, how can they possibly offer serious judgments about whether or not these engines are approaching thought? It's preposterous.

I lay some of the blame on Turing and his philosophical successors. It was one thing to propose the imitation game, as Turing called it, in a context where no one had access to a machine capable of playing it in a halfway interesting way. Once such machines existed, though, it quickly became obvious that people are willing to project humanity on the flimsiest of clues, thus trivializing the so-called Turing test. So now we’re witness to dozens of papers reporting the scores of AI engines on various tests. I suppose they have some value, but that value is not as indices of human thought.

Show me a device that is adequate to “Kubla Khan” and we’ll have something to talk about. Just what THAT adequacy means, just how it is to be DEMONSTRATED, those are serious issues. But if you think we’re going to get THERE without dealing with the range of ideas and considerations I’ve laid out in that essay, From “Kubla Khan” through GPT and beyond, then you seriously underestimate the richness of thought and subtlety of fabrication required for a deep adventure into artificial intelligence.
