Saturday, April 1, 2023

Ramble into Spring with AI doom, deep learning vs. cultural evolution, and meaning in LLMs and literary texts

Obviously I’ve been much taken with ChatGPT and the issues it raises. Here are some quick thoughts on a few of them.

Why I’m skeptical of AI Doom

This first issue is not specifically about ChatGPT. It is something I think about a lot under the heading of rogue AI, and it is on my mind as a result of the hue and cry raised in the past week over the moratorium letter and Yudkowsky’s over-the-top piece in Time.

In the first place, the whole mythology of AI existential risk tends to be unmoored from technological plausibility, relying as it does on ideas of AGI and super-intelligence, which aren’t coherent. No one has any idea how those things might work. Steven Pinker and Rodney Brooks have both made such arguments, which I’ve linked here and there at New Savanna.

Secondly, millennial cults are a dime a dozen, and AI doom, on the face of it, acts like a millennial cult. The organizing belief, after all, is about the end of the world. Moreover, the believers are somewhat insular in their use of information – here I’m thinking particularly of LessWrong. That is to say, we’ve seen this kind of behavior before; why should this iteration be any more credible than the others? Do we have good reason to believe that science fiction is a more reliable guide to the end of the world than Biblical prophecy?

AIs are strange, inanimate devices that have linguistic capabilities. Two centuries ago steam locomotives were strange in a similar way, inanimate devices capable of autonomous movement in a world where that capacity had previously been confined to animals and humans. This oddness makes it difficult to figure out how to deal with them. [David Hays and I wrote about this in The Evolution of Cognition (1990).]

Moreover, I have a somewhat different view of the future, namely, that it will involve new modes of thought. This is implied by the evolution of cognition paper. I don’t need a belief in super-intelligence in order to imagine a future that is radically different from the present.

Is the culture of deep learning a drag on cultural evolution?

I hadn’t thought of this before. It goes like this: Nothing has been so intellectually fruitful over the last three-quarters of a century as the cross-fertilization between computation, on the one hand, and the study of the mind and nervous system, on the other. To the extent that the culture of deep learning opposes – or is at best indifferent to – understanding how artificial neural network models work, it works against that intellectual cross-fertilization.

Meaning in LLMs and literary texts

Intentionalists have argued that the natural language texts produced by computers cannot have meaning because computers lack intention – a problem I’ve thought about from time to time, most recently in the post, MORE on the issue of meaning in large language models (LLMs). Back in the 1960s and 1970s literary critics faced a similar problem with respect to literary texts. The problem wasn’t whether or not such texts are meaningful – of course they are – but how that meaning is determined through the process of interpretation. Many argued that authorial intention was the touchstone of meaning and that the aim of interpretation was to recover authorial intention. The deconstructionists argued that authorial intention was, at best, inaccessible.

In a famous essay, “Form and Intent in the American New Criticism” (collected in Blindness and Insight), Paul de Man argued:

“Intent” is seen, by analogy with a physical model, as a transfer of a psychic or mental content that exists in the mind of the poet to the mind of a reader, somewhat as one would pour wine from a jar into a glass. A certain content has to be transferred elsewhere, and the energy necessary to effect the transfer has to come from an outside source called intention.

On this view, when someone reads a literary text, it is the reader’s intention, not the author’s, that animates the word forms and gives them meaning.

Can we apply similar reasoning to texts from LLMs? Is it a practical matter of how we are to interact with and use these devices? Is it an engineering question about how they work? Or is it a metaphysical question about what kind of thing they are in the universe at large, with all its various kinds of things?

Rationalization: Propositional reconstruction of gestalts

David Hays and I talked of the distinction between rationalizations and ‘true’ abstractions, but never really wrote about it. It was, however, on our minds in the metaphor paper, Metaphor, Recognition, and Neural Process (1987), and at the end of the brain paper, Principles and Development of Natural Intelligence (1988). The idea is that a ‘true’ abstraction is a gestalt over a wide range of material, while a rationalization is a propositional reconstruction of it. It is not at all clear to me that such a process is possible in LLMs, or in any other device proposed in artificial intelligence, whether current or past.

More later. Perhaps.
