
Friday, April 2, 2021

Cognitive Science, AI, and Intuition: Or, what’s a word?

Let’s start with a passage from Douglas Hofstadter, Fluid Concepts and Creative Analogies, 1996, p. 375:

...the field of cognitive science...is full of people profoundly misinterpreting each other's phrases and images, unconsciously sliding and slipping between different meanings of words, making sloppy analogies and fundamental mistakes in reasoning, drawing meaningless or incomprehensible diagrams, and so on. Yet almost everyone puts on a no-nonsense face of scientific rigor, often displaying impressive tables and graphs and so on, trying to prove how realistic and indisputable their results are. This is fine, even necessary...but the problem is that this facade is never lowered, so that one never is allowed to see the ill-founded swamps of intuition on which all this “rigor” is based.

He’s certainly right about cognitive science, which never was a coherent discipline and has yet to become one. There’s a lot of that – profoundly misinterpreting...unconsciously sliding and slipping...sloppy analogies and fundamental mistakes in reasoning, drawing meaningless or incomprehensible diagrams – going around, not just in cognitive science. There’s evolutionary psychology, for example, and digital humanities (whatever that is). It’s a common and, I suspect, inescapable state of intellectual affairs.

Intuition

I’m particularly interested in those “ill-founded swamps of intuition.” I’m not sure what he means by “ill-founded.” Intuition is inevitable and necessary, and I’m not sure what it would mean for it to be well-founded rather than ill-founded. Some intuitions will turn out to lead to dead ends; others will be fruitful. The only way to determine the value of your intuitions is to follow your nose and see where they lead.

As you may have gathered, intuition is something I think about a lot, and it has shown up in a number of posts over the years. I tend to think about it in two contexts: the description of form in literary criticism [1], and the way forward in AI [2].

Consider the second case, the current way forward in AI. These days it seems like statistical techniques of machine learning, especially those grounded in the idea of an artificial neural network, have captured all the marbles. The strong position seems to be that such techniques are all we need and ever will need in order to...[achieve whatever the goal is in this horse race]. Others disagree, claiming that we still need insights, concepts, and techniques from classical symbolic AI.

...the problem is that this facade is never lowered, so that one never is allowed to see the ill-founded swamps of intuition on which all this “rigor” is based.

What role does intuition play in this disagreement? If you have been trained in and actively worked in the symbolic approach to, say, natural language, you will have developed sophisticated intuitions about syntax and semantics, intuitions shaped by, and supporting, the explicit models you have built. But you aren’t going to have any intuitions about architectures for learning, since you haven’t worked with those systems. Conversely, if your training and work is exclusively in learning architectures, you will have sophisticated intuitions about them, but be clueless about syntax and semantics.

My guess is that most of the investigators in neural nets (and allied technologies) have little or no training in classical symbolic AI. They know about it, may have studied it in an intro class, but they’ve not built anything with it. Hence they have no (or at least very weak) intuitions about the structure of syntax and semantics. What they know is that it (seems to have) failed while neural nets are going gangbusters. Who needs it?

What if both classes of techniques are required? Where do you get those intuitions? And how do you braid them together?

The word illusion

Crudely put, words are units of language that have meaning. They have verbal form and, in written languages, graphic form. The verbal and graphic forms are one thing, the meanings another. Words are conjunctions of both.

What you are seeing when you read this is simply a string of graphic word forms. There are no meanings on the screen, in these visual objects. The meanings are all in your head. Societies go to great pains to see that their members associate the same meanings with verbal forms.

What I mean by the word illusion is simply that we implicitly and unreflectively assume that the meanings are carried with/in the verbal forms. We don’t see graphic forms as mere graphic forms, rather we see and experience them as full-blown words, meaning and syntactic affordances and all. If you have worked with symbolic AI (or computational linguistics), you have worked with words (more or less) in full. If you’ve only worked in statistical natural language processing, then you’ve only worked with word forms, not meaning structures.
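To make that contrast concrete, here is a toy sketch in Python. Everything in it – the LexicalEntry class, the feature names, the two-word lexicon – is invented for illustration; it is not anyone’s actual system. The point is only that a symbolic lexical entry pairs a word form with an explicit, hand-built meaning structure, while a purely statistical pipeline starts from nothing but the strings in the corpus.

# A toy illustration of the contrast, not anyone's actual system.
# The lexicon entries and feature names below are made up for the example.

from dataclasses import dataclass, field

@dataclass
class LexicalEntry:
    """A 'word in full': a graphic/verbal form paired with explicit meaning structure."""
    form: str                                   # the graphic word form
    pos: str                                    # syntactic category
    sense: dict = field(default_factory=dict)   # hand-built semantic features

# What a symbolic system (or a computational linguist) works with:
lexicon = {
    "dog": LexicalEntry(form="dog", pos="noun",
                        sense={"animate": True, "species": "canine"}),
    "bark": LexicalEntry(form="bark", pos="verb",
                         sense={"event": "vocalize", "agent": "canine"}),
}

# What a purely statistical pipeline sees in the corpus: word forms, nothing else.
corpus = "the dog barked at the mail carrier".split()

print(lexicon["dog"].sense)   # meaning structure is explicit, supplied by a person
print(corpus)                 # just strings; any 'meaning' must be inferred from use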

The “miracle,” of course, is that these statistical techniques nonetheless produce results that betoken the meanings apparently underlying those words. And thus it is easy to forget that there are no meanings anywhere in the corpus. Just word forms.
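Here is a minimal sketch of the textbook distributional idea, with a made-up five-sentence corpus; it is not any particular system’s pipeline. Counting which word forms co-occur with which, and comparing the resulting count vectors, yields similarity judgments that look meaning-like, even though nothing but strings and counts ever enters the computation.

# A minimal distributional sketch. The corpus is invented, and the method
# (co-occurrence counts plus cosine similarity) is the textbook idea only.

from collections import Counter, defaultdict
from math import sqrt

sentences = [
    "the dog chased the cat",
    "the puppy chased the cat",
    "the dog ate the food",
    "the puppy ate the food",
    "the car needs new tires",
]

# Count co-occurrences of word forms within each sentence.
# No meanings anywhere: only strings and counts.
cooc = defaultdict(Counter)
for s in sentences:
    tokens = s.split()
    for i, w in enumerate(tokens):
        for j, c in enumerate(tokens):
            if i != j:
                cooc[w][c] += 1

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    shared = set(a) & set(b)
    dot = sum(a[k] * b[k] for k in shared)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# 'dog' and 'puppy' end up with similar vectors because they occur in similar
# contexts; 'car' does not. The similarity betokens meaning without containing it.
print(cosine(cooc["dog"], cooc["puppy"]))  # relatively high
print(cosine(cooc["dog"], cooc["car"]))    # relatively low

In this toy corpus “dog” and “puppy” come out nearly identical because they appear in the same contexts; scale the same idea up to billions of sentences and you get the word embeddings that drive current systems.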

It is thus easy to think that these statistical techniques are all we need, that it’s all there in the corpus. We just need to extend these techniques in some way, perhaps use a bigger corpus, and of course more computing power. Always that, more compute. But I’m not so sure we can so easily ignore the fact that language is grounded in our experience of the physical world. There’s a lot to say about that, but not here and now [3].

References

[1] For example, see my post, The hermeneutic hairball: Intuition, tracking, and sniffing out patterns, October 2, 2018, https://new-savanna.blogspot.com/2018/10/the-hermeneutic-hairball-intuition.html.

[2] See my post, Computational linguistics & NLP: What’s in a corpus? – MT vs. topic analysis [#DH], September 3, 2018, https://new-savanna.blogspot.com/2018/09/computational-linguistics-nlp-whats-in.html.

[3] I’ve discussed this in my recent working paper, GPT-3: Waterloo or Rubicon? Here be Dragons, Version 2, Working Paper, August 20, 2020, 34 pp., https://www.academia.edu/43787279/GPT_3_Waterloo_or_Rubicon_Here_be_Dragons_Version_2.
