I was exposed to semiotics early in my undergraduate years at Johns Hopkins (where, of course, Peirce once held an academic post). I read Barthes’s Elements of Semiology probably in my sophomore year in one of the many courses I took from Richard Macksey and, perhaps in the same course, Lévi-Strauss’s well-known essay, “The Structural Study of Myth.” That material was foundational for me. Moreover, I have published in semiotics journals. One of my earliest articles, “Sir Gawain and the Green Knight and the Semiotics of Ontology,” was published in Semiotica in 1977, and Dave Hays and I published “Metaphor, Recognition, and Neural Process” in the American Journal of Semiotics a decade later (1987).
But I’m not now a semiotician nor was I then. To be sure, I use Saussure’s terminology (sign, signifier, and signified) and certain ideas. But the conceptual apparatus of semiotics has not been foundational for me. Once I found the cognitive sciences, THAT became my intellectual home base. It seemed ‘deeper’.
Here’s a note I wrote in the context of a ‘session’ at Academia.edu. The session is about my manuscript, “Sharing Experience: Computation, Form, and Meaning in the Work of Literature”.
* * * * *
I’ve been thinking about this ‘code’ business. I more or less understand how Morse code can be used to encode the characters of the Latin alphabet, Arabic numerals, etc. for transmission in an appropriate medium. I also have a reasonable understanding of ASCII code, which performs a similar function for the world of digital electronics. I’ll even go so far as to claim some understanding of how segmental phonemes encode the morphemes and lexemes of a language.
In each case we have physical objects on both sides of the encode/decode process. To send a message using Morse code, you start with an alphanumeric text, encode it into Morse code, and then transmit the encoded message. To decode it you take the transmitted message, segment it into individual code units, and then translate the individual Morse units into the appropriate alphanumeric characters. The story about ASCII code is similar. The story of phonemes, morphemes, and lexemes is somewhat different. In some ways it’s simpler (since we’re just concatenating phonemes into standard sequences) and in some ways it’s more complex (the actual sounds of phonemes are highly variable in ways that are contextually dependent, yet the identity of the phoneme is constant). But I have some reasonable idea of the kind of story that needs to be told.
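That round trip is easy to make concrete. Here is a minimal sketch in Python (the code table covers only a handful of letters, just for illustration):

```python
# A minimal sketch of the Morse round trip: encode, transmit, segment, decode.
# This table covers only a few characters, for illustration.
MORSE = {
    "A": ".-", "E": ".", "H": "....", "L": ".-..",
    "O": "---", "S": "...", "T": "-",
}

# Invert the table for decoding: each Morse unit maps back to one character.
INVERSE = {code: char for char, code in MORSE.items()}

def encode(text: str) -> str:
    """Translate each character into its Morse unit, separated by spaces."""
    return " ".join(MORSE[ch] for ch in text.upper())

def decode(message: str) -> str:
    """Segment the message into units, then map each unit back to a character."""
    return "".join(INVERSE[unit] for unit in message.split())

encode("SOS")          # "... --- ..."
decode("... --- ...")  # "SOS"
```

Notice that nothing in this little program touches meaning. Both sides of the mapping are discrete physical tokens, and the mapping between them is an arbitrary but fixed table. That is exactly what makes the contrast that follows so stark.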
But when we talk of how lexemes ‘encode’ meaning, that’s VERY different. What’s on the ‘other side’ of the physical object, the ‘meaning’ side? We don’t talk of the signifier encoding the signified, do we? The relationship between signifier and signified is NOT like the relationship between an alphanumeric character and its Morse code equivalent. It’s a different kind of relationship performing a different job.
That cognitive model I discuss in the paper in the section, “Computational Semantics: Network and Text,” that’s what I think is on the meaning side of the sign relationship, the signified. That’s my deepest excursion into meaning, and Hays’s too. And I don’t think we ever talked of codes and encoding when working on that model. Code-talk just doesn’t seem useful there. Whatever it is that’s ‘connected’ to the signifier, it’s NOT a discrete unit in the way that Morse code units or ASCII units are discrete units. It’s something very different.
I’m wondering whether the whole notion of cultural codes, where it’s meaning that’s being encoded, isn’t just a hopeful metaphor. We (implicitly) start with something we understand, like Morse code, in the hope that THAT kind of system will tell us something useful about something we don’t understand, like how words and utterances and texts have meaning. For me, at least, the metaphor doesn’t work.