This is another post in a series that I’ve devoted to explicating Lévi-Strauss’s work on myth (starting with The King’s Wayward Eye: For Claude Lévi-Strauss at The Valve). In his earlier work Lévi-Strauss had considered myths one or two at a time. In The Raw and the Cooked he examines a collection of myths, 187 of them, and treats them as an integrated system of stories. He’s after the grammar, if you will, that underlies the whole lot of them. He continues that project through three more volumes.
He used a language of transformation from one myth to another which is, unfortunately, misleading. This post is an attempt to clarify what’s going on. I attack the matter from two angles. In the first section I imagine that myths are actually created in the way Lévi-Strauss’s language suggests: M1 begets M2 by some transformation Tx, M2 begets M3 by Ty, M4 begets M267 by Tz, and so forth.
When that falls apart I get serious and take a brief look at some work done back in the 1960s by a brilliant British linguist named Margaret Masterman. I never met her, but her husband, the philosopher R. B. Braithwaite, visited Johns Hopkins while I was an undergraduate and I took a course in decision theory from him. Later on I got my Ph.D. at the State University of New York at Buffalo, where I studied under David Hays, who knew of her work, held her in the highest regard, and had once hired one of her students, Martin Kay, to work with him at RAND.
George Goes to Gary, and then Things Get Confused
So, our Universal Mythographer starts with the classic, George Goes to Gary, in which George travels to Gary, Indiana, to visit his grandmother on her 95th birthday. While he’s there she convinces him that he really ought to marry Gina, who is such a nice girl. So he applies to the bank for a home improvement loan, gets it, and improves his home. Gina’s father is suitably impressed and grants George permission to marry her. They honeymoon in Hawaii, return home, and find out that, alas, George’s grandmother has died. But she left him the key to a safe deposit box. He opens the box, drinks the magic elixir he finds there, and becomes impotent. End of story.
The UM decides to swap George’s grandmother for his uncle. Grandmother out, uncle in. What happens? Well, instead of giving George a home improvement loan, the bank forecloses on his mortgage. Now that he’s homeless his girlfriend’s parents forbid her to marry him. He joins the Navy, sees the world, and ends up as a used-car salesman in Anchorage, Alaska. He goes out in the woods hunting grizzly. Just when he’s spotted a grizzly WHAM! he comes upon his uncle attempting to rape his old girlfriend. He rescues the girl, she marries him, and they live happily ever after.
Very interesting, thinks UM, I wonder what’ll happen if I change the grizzly into a possum?
What happens is that George comes upon his uncle being abducted by little green aliens. He rescues his uncle, his uncle gives him his lucky plaid sports jacket as a reward, and he becomes the best salesman on the lot. One day he sees the boss’s daughter. They fall in love. Marriage. Happily ever after.
Most interesting, indeed, says UM to himself. Let’s make it a Harris tweed sports jacket.
As soon as he puts the jacket on George decides he has to learn how to play golf. He signs up for lessons with the pro, Gary, at the local public course, falls in love with him, and they become gay activists in Sarah Palin country. Not quite so happy, but more interesting.
Whoa! didn’t see that one coming. Not at all.
And so it goes, change after change, story after story. Every once in a while a change is so drastic that the apparatus can’t adapt and, instead, just falls apart—I mean, when George became an olive that Frank put in a martini he handed to Ava, and then, when she put me to her lips, I became a mouse riding in the cap of a flying elephant while singing “Fly Me to the Moon”… I had to get off that bus, fast! But, for the most part, the system is able to cope with this fiddling and produce a new story that more or less works.
So, now we’ve got, say, 200 stories, one we were given to start with, George Goes to Gary, and 199 we’ve generated by this procedure. Now what? Just what have we learned? Presumably we want to know how the system works. We’ve put it through its paces, but the mechanisms are invisible. We know the story we started with, we know what we changed, and we know how the system responded. That’s some kind of a clue to what’s going on, but not much. Since we’ve done this 199 times we’ve got 199 such clues, plus whatever we can infer from the overall order.
In order to use those clues we need to have some idea about, some model of, a mechanism that would behave in that way. You can play these counterfactual games till hell freezes over and Johnny comes marching home, but they won’t get you anywhere until you start making models and investigating their properties.
But how do you make suitable models? What do you build them from? Clay? Wood blocks? Sticks and stones and sugar and spice? What?
That’s the problem Lévi-Strauss faced. He didn’t have a good answer, though he did have an answer, and a reasonable one at that: the concept of algebraic groups and some simple diagrammatic conventions. Nor did he make his stories up. He started with a body of existing myths and treated them as if they’d been generated by something rather like the Universal Mythographer’s procedure.
Lévi-Strauss did that over four substantial volumes and stopped. As far as I know, no one’s taken the work any farther. The problem, as I’ve just indicated, is that no one knows how to construct the models. The obvious suggestion is that we try to build computer models, and one Sheldon Klein tried that in the 1970s. As I recall—it’s been a long time since I read his tech reports—he didn’t get very far. But at least he was playing in a more interesting sandbox. Computational linguistics has come a long way since then. IBM’s Watson, for example, is considerably more powerful than anything Klein would have had access to. But there’s no reason to think that we’d be able to craft a reasonable myth generator on top of Watson.
Margaret Masterman’s Computer Generates Haiku
During the 1960s, perhaps a bit after Lévi-Strauss had completed Mythologiques, a brilliant and pioneering British researcher, Margaret Masterman, was doing seminal work in computational linguistics. At one point she experimented with computer-generated haiku. What she did can give us some inkling about how to talk about what Lévi-Strauss was unable to do in theorizing about myth.
Masterman, M. (1971) “Computerized haiku”, in Cybernetics, art and ideas, (Ed.) Reichardt, J. London, Studio Vista, 175-184.
In her initial experiments the computer didn’t itself generate the haiku. Rather, it presented a human with a frame, and that user picked the words to fill the empty slots in the frame. Here are some initial haiku.
H1
I sense the sky in the street,
All heaven in the road.
Bang! The pool has touched.

H2
I paint the cloud in the road,
All space in the street.
Bang! The shade has bent.

H3
I touched the wind in the street,
All space in the stream.
Bang! The gale has heard.
Note that the syllable count is off because they got it wrong the first time out (which Masterman acknowledges). If you examine them closely you should be able to deduce the outline of the frame. Notice, for example, that all three poems begin each line with the same words. Those words were fixed; the user filled the remaining slots in each line from fixed lists presented by the computer.
Here’s the frame with open slots.
F1
I . . . . the . . . . in the . . . .
All . . . . in the . . . .
Bang! the . . . . has . . . .
The computer had five lists from which users could fill the slots. I’ve listed them below. The initial pair of letters is the name by which the computer identified a list. It is, in effect, a variable. The words in the list are the values that variable could take.
PP: sense paint saw heart touched
XX: sky cloud sun shade wind gale pool
YY: pool sea plain stream street road shell shore
ZZ: space heaven sound seed form world
WW: bent shrank turned jammed cracked cleft lapsed sipped touched heard slid
Notice that variables XX, YY, and ZZ consist of nouns while PP and WW are mostly, but not entirely, verbs. Heart is not a verb; paint could be used as a noun; and cleft could be a noun, verb, or adjective. The lists are thus constrained; they’re more constrained than simply verbs or nouns, but we need not try to characterize those constraints. What’s important is the fact that they exist.
Here then is the frame with the blanks filled by variable names. The computer would present this to a user and the user would pick a word from the appropriate list.
F2
I [PP] the [XX] in the [YY]
All [ZZ] in the [YY]
Bang! the [XX] has [WW]
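The frame and lists amount to a tiny generative grammar, small enough to write down directly. Here is a minimal sketch in Python (my reconstruction, not Masterman’s actual program; `fill` is a hypothetical name). Note that every occurrence of a slot is filled independently, which is why [XX] can be “sky” in line 1 but “pool” in line 3.

```python
# A reconstruction of frame F2 and its word lists (not Masterman's code).
import re

FRAME = [
    "I {PP} the {XX} in the {YY},",
    "All {ZZ} in the {YY}.",
    "Bang! The {XX} has {WW}.",
]

LISTS = {
    "PP": "sense paint saw heart touched".split(),
    "XX": "sky cloud sun shade wind gale pool".split(),
    "YY": "pool sea plain stream street road shell shore".split(),
    "ZZ": "space heaven sound seed form world".split(),
    "WW": "bent shrank turned jammed cracked cleft lapsed sipped touched heard slid".split(),
}

def fill(frame, choices):
    """Fill the slots left to right from the user's choices, checking that
    each word belongs to the list its slot names."""
    words = iter(choices)
    def pick(match):
        name, word = match.group(1), next(words)
        assert word in LISTS[name], f"{word!r} is not in list {name}"
        return word
    return "\n".join(re.sub(r"\{(\w+)\}", pick, line) for line in frame)

# H1 is one particular assignment of words to the seven open slots:
print(fill(FRAME, ["sense", "sky", "street", "heaven", "road", "pool", "touched"]))
```

In Masterman’s setup the `choices` came from a human; the computer supplied the frame and policed the lists.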
Now let’s play at Lévi-Straussian myth theory. I’ve repeated the first (pseudo)haiku from above and have listed a highly similar one immediately after. What constraint is being satisfied by those two haiku?
H1
I sense the sky in the street,
All heaven in the road.
Bang! The pool has touched.

H4
I sense the cloud in the street,
All heaven in the road.
Bang! The sun has touched.
The first thing we need to do is see where the two haiku differ. Comparing them slot by slot pinpoints the difference:
I sense the [XX] in the street,
All heaven in the road.
Bang! The [XX] has touched.
They differ in the values used to specify the XX variable. The original uses sky and pool while the transformation uses cloud and sun. Yet something is preserved in both cases. Sky and pool are opposite in that one is above and the other below; cloud and sun are opposite in that one obscures the other. Now let us imagine a further variation, this time in the YY position, starting from this frame:
I sense the cloud in the [YY],
All heaven in the [YY].
Bang! The sun has touched.
H5
I sense the cloud in the sea,
All heaven in the stream.
Bang! The sun has touched.
Just as street and road are the same kind of thing, paths on the earth’s surface, so sea and stream are the same kind of thing, bodies of water.
Say that a pseudo Lévi-Strauss (pLS) found H1, H4, and H5 in the archives. He would begin by asserting that H1 is the key haiku, while noting that that designation is arbitrary. Then he’d derive H4 from H1 by some transformation, call it T1, and he’d derive H5 from H4 by another transformation, call it T2. Thus:
T1(H1) → H4
T2(H4) → H5
In doing this pLS wouldn’t actually think that H1 was the first of these haiku, and then H4 was created from it, and H5 from H4. He’d be talking that way, but he’d actually be thinking that there is no obvious temporal or even logical priority among these haiku. But pLS needs this talk of transformation from one to the other to somehow explain what’s going on. That’s what I had the Universal Mythographer doing above with all those nonsense stories.
But we don’t have to do that for these haiku because we know how they’re made and we’ve got an effective meta-language for talking about them. A haiku consists of a frame and a frame consists of a fixed number of slots. The content of some slots is fixed, as in F1. The content of the open slots is drawn from the lists named by the variables (PP, XX, YY, ZZ, WW), as specified in F2. In structuralist terms, the frame and slots constitute the axis of combination while the variable lists constitute the axis of selection.
That’s it. That’s the “grammar” that Masterman specified. Let’s call that the base (B). Where the pseudo Lévi-Strauss is forced to talk about transformations from one haiku to another, we can talk of derivations from the base, thus:
d1(B) → H1
d2(B) → H2
d3(B) → H3
Of course, without specifying what those derivations are, that’s not terribly illuminating. But one thing is clear: the derivation of H2 is not linked to that of H1, nor is that of H3 linked to H2. Each of them is linked to the base. That is true of H4 and H5 as well:
d4(B) → H4
d5(B) → H5
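The point can be made concrete with a small Python sketch (my illustration, assuming the frame and word lists given earlier; `derive` is a hypothetical name). A derivation takes the base B as its only input, so no haiku is ever built out of another haiku:

```python
# Derivation from the base: each haiku is produced by choosing words for
# B's slots, never by editing a previous haiku. (My sketch, not Masterman's.)
import random
import re

BASE = {
    "frame": [
        "I {PP} the {XX} in the {YY},",
        "All {ZZ} in the {YY}.",
        "Bang! The {XX} has {WW}.",
    ],
    "lists": {
        "PP": "sense paint saw heart touched".split(),
        "XX": "sky cloud sun shade wind gale pool".split(),
        "YY": "pool sea plain stream street road shell shore".split(),
        "ZZ": "space heaven sound seed form world".split(),
        "WW": "bent shrank turned jammed cracked cleft lapsed sipped touched heard slid".split(),
    },
}

def derive(base, rng):
    """One derivation d(B): fill every slot with a word drawn from the
    slot's list. The only argument is the base B -- no other haiku."""
    pick = lambda m: rng.choice(base["lists"][m.group(1)])
    return "\n".join(re.sub(r"\{(\w+)\}", pick, line) for line in base["frame"])

# Two independent derivations; neither depends on the other's output.
print(derive(BASE, random.Random(1)))
print(derive(BASE, random.Random(2)))
```

Any constraints on which words may co-occur (sky/pool above-and-below, say) would live inside `derive`, as part of the grammar, not inside the haiku it happens to produce.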
To be sure, in creating them I started from H1 and made substitutions into it. I also placed additional constraints on those substitutions. But those constraints don’t belong to the haiku, they belong to the base. They are parts of the grammar and they come into play during the derivation.
Of course, when I concocted H4 and H5 I didn’t check to see whether the constraints I used also apply to Masterman’s H2 and H3. If by chance they hold, wonderful. If not, well, then we’ll have to do something.
But it doesn’t matter. This is just a pedagogical exercise, and by now you should have the point, which is simply this: Lévi-Strauss’s peculiar language, which implies that one myth is derived from another by a transformation, results from the fact that he didn’t have a good metalanguage in which to formulate a grammar of myth.
Whatever’s going on in myths, it’s more complex than this, by far. And Lévi-Strauss’s invocation of the notion of transformation isn’t quite so empty as this comparison might suggest. For he never simply uses transformation as a fancy label for whatever it is that accounts for the limited difference between two myths. He always explicates a given transformation in terms of some aspect of social structure or cultural practice. Just what that is, obviously, varies from case to case. But it is always there.
And that’s what makes his work worth reading. If we want to advance our understanding of myth beyond LS’s project, then we’re going to have to come up with some proposals for real computational procedures. And for that, we’re going to need a better understanding of the human mind than we’ve currently got. I’m rather inclined to think that a close study of his work will give us some useful clues.