
Wednesday, December 25, 2019

What intuitions does current AI afford practitioners? [What about “close-reading” and literary critics?]

Or: How do we think?

I think a lot about the role intuition plays in our thinking. It provides a sense of the world that guides us toward problems we regard as important and solutions we regard as promising. It provides hunches. We can’t fully articulate why we want to do or investigate this or that, but, yes, that’s the way to go.

Early in my career (mid-1970s) I was immersed in computational semantics. I worked on the problem that’s come to be known as “the common sense problem” and drew hundreds of diagrams more or less like this one, some less complex, a few even more (big sheets of paper):

[Diagram omitted: one of the conceptual-structure diagrams from that work.]
That’s where my intuitions about conceptual structures in the mind are grounded, not so much in the diagrams themselves – which after all, are explicit – but in the activity of working on and with them, the problems they are embedded in, the way they point to the world.

That gave me a sense of the mind as highly structured, though a bit loose. Actually, come to think of it, it gave me a sense of the fluid mind as well, where the fluidity is the reflex of, the obverse of, the diagrams.
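For readers who have never seen the symbolic style, a toy illustration may help. This is only a sketch of the genre, not a reconstruction of any of my actual diagrams: an explicit semantic network written out as labeled nodes and relations, where every piece of structure is put there by hand.

```python
# A toy semantic network in the symbolic style: every node and every
# labeled relation is stated explicitly by the person who builds it.
# (Hypothetical example, not one of the original diagrams.)

semantic_network = {
    # (head, relation, tail) triples
    ("dog",   "is-a",       "animal"),
    ("dog",   "agent-of",   "bite"),
    ("bite",  "instrument", "teeth"),
    ("teeth", "part-of",    "dog"),
}

def related(node, relation):
    """Return everything the network explicitly links to `node` by `relation`."""
    return [tail for head, rel, tail in semantic_network
            if head == node and rel == relation]

print(related("dog", "is-a"))   # ['animal']
```

The point is simply that the structure is there on the page: you can read it, argue about it, redraw it.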

What intuitions arise from current work in AI, machine learning and neural networks? These practitioners don’t themselves build explicit models of conceptual structures. Rather, they build architectures for inducing conceptual structures from a corpus of examples (“big data”). The conceptual structures their systems build are somewhat opaque.
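By contrast, the machine-learning practitioner never writes that structure down. As a rough sketch of the spirit of the thing (assuming only numpy and a toy corpus of my own invention), a representation can be induced from co-occurrence statistics; what comes out is a matrix of numbers, and whatever conceptual structure is in there has to be probed rather than read.

```python
import numpy as np

# A toy corpus (hypothetical). The "conceptual structure" is never written down;
# it is induced from co-occurrence statistics in the text.
corpus = [
    "the dog bit the man",
    "the dog chased the cat",
    "the man fed the dog",
    "the cat ignored the man",
]

tokens = sorted({w for line in corpus for w in line.split()})
index = {w: i for i, w in enumerate(tokens)}

# Count how often words co-occur within the same sentence.
counts = np.zeros((len(tokens), len(tokens)))
for line in corpus:
    words = line.split()
    for a in words:
        for b in words:
            if a != b:
                counts[index[a], index[b]] += 1

# Compress the counts into dense vectors (a crude "embedding" via SVD).
u, s, _ = np.linalg.svd(counts)
vectors = u[:, :2] * s[:2]

# The result: each word is a point in a space with no labeled relations.
for w in ("dog", "cat", "man"):
    print(w, np.round(vectors[index[w]], 2))
```

Nothing in those arrays says "is-a" or "agent-of"; whatever structure exists is implicit in the geometry, which is, roughly, the opacity I have in mind.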

I would think that intuitions about the mind arising from this work would be rather different from intuitions grounded in explicit accounts of the mind’s conceptual structures, no? Better than, or not so good as? I don’t know. But different, and that’s what interests me.

What about the intuitions of a neuroscientist? Of course, it matters just what kind of neuroscientist, but I’m thinking particularly of Walter Freeman, who modeled the complex dynamics of masses of neurons as they encounter the world.

And what of literary critics and so-called close reading? For that is how literary critics deal with texts: one by one, word by word, paragraph by paragraph (if that even). What intuitions about the mind does it yield? Not, so far as I can tell, intuitions about the mind as richly structured. Perhaps the mind as a tangled ball of thread, but that doesn’t quite capture the notion of rich, if fuzzy and somewhat fluid, structure. And without any sense of structure there can be no meaningful intuition about form, or perhaps even about the text.

I leave it as an exercise for the reader to think about the role of diagrams in these various kinds of intuition. Diagrams, of course, are a different mode of thought from language and so attract their own penumbra of intuition. Literary critics don't make diagrams at all. Neuroscientists make diagrams of neurons and of larger facets of neuroanatomy, and they examine neuroimaging and plots of data. Computational linguists and AI investigators make diagrams, but of what? Some of them, of system architectures – that's what I see in current articles. Back in the symbolic era (my early days) we made diagrams like that one up there, which we took to represent the mind.

And what of the role of mathematics, yet another medium of intellectual expression?
