By “symbolic AI” I mean the research done from roughly the beginnings of AI in the mid-1950s through the 1970s and into the 1980s. In particular, I mean the work done on language. In this I include computational linguistics (CL), which was a distinct research tradition, and cognitive science more generally.
By “word illusion” I mean our ordinary understanding of words, an understanding fostered by dictionaries. In this understanding words are complex things having various properties, including pronunciation, spelling, grammatical usage, and meaning, sometimes two or more meanings. In this ordinary sense meaning is both reified (it’s there in the dictionary entry) and obscured, in that it is often difficult, when one thinks about it, to distinguish between a word’s meaning and its referent. Thus in my post, The Word Illusion in Literary Criticism, I talk about how very difficult it was for me, as an undergraduate learning about semiotics, to distinguish between signifier and signified. Oh, I understood, in principle, that a sign consisted of both signifier and signified, and that the signified was not the “thing” to which a word referred. But that understanding was brittle. The difference between word (sign) and thing was easy enough, but the signified was obscure.
The problem, I maintain, was not specific to me. No, it was a general one. And – here’s my point – I don’t believe we had a flexible and usable way of treating the signified as a distinct object until the cognitive sciences, AI and CL in particular, explored specific formal and quasi-formal proposals about structure and mechanism in the domain of signifieds. I would argue that, in a sense, it was only then that the world of signifieds emerged from the shadows and became fully REAL.
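To make the distinction concrete, here is a minimal sketch in Python. It corresponds to no particular historical system; the names and structure are my own illustrative assumptions. The point is simply that once the signified is a distinct data object, separate from the word form that points to it, it becomes something you can inspect and manipulate in its own right:

```python
from dataclasses import dataclass, field

@dataclass
class Concept:
    """A signified: a node in conceptual space, not a word."""
    name: str                      # internal label, NOT a word of English
    features: dict = field(default_factory=dict)

@dataclass
class Sign:
    """Saussure's pairing: a signifier (word form) bound to a signified."""
    signifier: str                 # the spoken or written form
    signified: Concept             # the concept it points to

# Two different signifiers can share one signified...
DOG = Concept("CANINE-01", {"animate": True, "legs": 4})
english = Sign("dog", DOG)
french = Sign("chien", DOG)

# ...and the signified is distinct from any thing in the world:
# DOG is a node in the model, not any particular dog.
assert english.signified is french.signified
```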
What, you might ask, about symbolic logic? Symbolic logic and the predicate calculus were never intended as an account of the meaning of natural language, though learning to produce logical formulae corresponding to specific sentences belongs to the discipline of logic. Logic, if I’m not mistaken, was introduced as something superior to natural language because it was more precise. And, yes, I know that formal logic was central to symbolic AI, but, I note: 1) such logics had to be modified and extended in order to do the work, and 2) it wasn’t until AI and CL that we actually had to produce micro-worlds of interlinked propositions representing some aspect of the conceptual universe. Wittgenstein may have imagined such a body of interlinked propositions in his Tractatus Logico-Philosophicus, but he didn’t actually attempt to produce it. And, as you know, he soon abandoned the Tractatus as a way of thinking about language.
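Here is a toy micro-world of interlinked propositions, loosely in the spirit of the semantic networks of that era (the canary/bird/animal example is the classic one from Collins and Quillian). The triple representation and the single inheritance rule are my own illustrative choices, not a reconstruction of any specific system:

```python
# A handful of interlinked propositions as (predicate, subject, object) triples.
FACTS = {
    ("isa", "canary", "bird"),
    ("isa", "bird", "animal"),
    ("has", "bird", "wings"),
    ("has", "animal", "skin"),
    ("can", "canary", "sing"),
}

def isa_chain(x):
    """Yield x and everything x is transitively a kind of."""
    frontier, seen = [x], set()
    while frontier:
        node = frontier.pop()
        if node in seen:
            continue
        seen.add(node)
        yield node
        frontier += [o for (p, s, o) in FACTS if p == "isa" and s == node]

def holds(pred, subj, obj):
    """A property holds of subj if asserted of it or inherited via isa links."""
    return any((pred, node, obj) in FACTS for node in isa_chain(subj))

print(holds("has", "canary", "wings"))  # True: inherited from bird
print(holds("has", "canary", "skin"))   # True: inherited from animal
print(holds("can", "bird", "sing"))     # False: asserted only of canary
```

Trivial as it is, even a fragment like this forces you to commit to a structure for the conceptual domain: what the nodes are, how they link, and what inferences the links license. That is the discipline the Tractatus never imposed on itself.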
No, it was work in AI and CL that made the domain of signifieds one we could investigate. The fact that, for various reasons, those systems failed to produce robust practical tools should not be allowed to obscure that significant accomplishment. When current investigators in machine learning (ML) and artificial neural networks (ANN) denigrate those systems, they risk throwing out the baby with the bath water. After all, ML and ANN achieve their results, which often have great practical value, at the expense of once more rendering the domain of signifieds obscure and intractable.
I don’t for a minute believe that we should attempt to implement symbolic mechanisms directly IN the models created by ML or ANN engines. No, our task is more subtle and difficult: How do we create ML or ANN engines that will, through their own mechanisms, arrive at symbolic computation where it is necessary, and only there? That’s a problem well beyond the scope of a single blog post. For my current thinking, see my post, Geoffrey Hinton says deep learning will do everything. I’m not sure what he means, but I offer some pointers. Version 2.