Monday, June 28, 2010

Light Blogging

Blogging's going to be light for a while. I'm busy preparing my post for The National Humanities Center's Forum. Once that's done I've got supporting material to get ready. And then, when the post goes live on July 5, I'll have to spend time replying to comments. I will, however, find time to do some more posting here. For one thing, I've got the Mind Hacks series to finish. And I'd like to post some more flix from the Gordys concert and say a word or two about photographing people. It's not the same as shooting graffiti, landscapes, or cityscapes.


'Til next time, here's a shot of the moon over Manhattan.

Saturday, June 26, 2010

Gordys Concert 1

This is where I live, Hoboken, NJ. Some of these people are friends. All are neighbors.







Where's the line between the band and the audience?



Friday, June 25, 2010

Cultural Evolution 9: Language Games 2, Story Telling

Let me begin by recalling an analogy I offered in one of my comments on CE8, the first post on language games. I originally used this analogy to think about a group of music-makers. Now I offer it as a way of thinking about story-telling. For that purpose, as you will see, it has distinct disadvantages, for the analogy is about a situation where everyone has a hand in the action, whereas story-telling typically involves an author telling the story to an audience. With that caveat in mind, let’s begin.

A number of years ago I saw a TV program on the special effects of the Star Wars trilogy. One of the things the program explained was how operators gave motion to the puppet for Jabba the Hutt, a rather large, corpulent character. Jabba never moved about, but his face had to be animated, as did his tail and his arms and hands. The puppet required, I think, perhaps a half dozen operators: one for the eyes, one for the mouth, one for the tail, etc. Each had a TV monitor which showed him what Jabba was doing, all of Jabba, not just the operator’s particular segment.

Each could see the whole, but manipulate only a part. Each operator had to manipulate his part so it blended seamlessly with the movements of the other parts. Thus each needed to see all of the puppet.

It is easy to see how this analogy applies to, for example, a group of jazz musicians, where each musician hears the sound of the entire group and must shape their contribution to complement that sound. It is not so easy, as I’ve already indicated, to see how it could be applied to story-telling. I’ll get to that a bit later.

For the moment I want you to reconceptualize the story a bit, do a gestalt switch, and see it as being about how a group of people use the Jabba puppet as a vehicle for coordinating their behavior. For that is close to the role I see stories playing in society, not all stories, though, just certain ones. Those stories are a way individuals coordinate their norms and values. In the language of game theory, these stories serve as focal points or coordination devices (see previous post) in a group-wide coordination game (cf. Pinker 2007, pp. 418 ff.). And with that observation, I want to turn to Steven Pinker’s recent work on indirect speech.
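
To give the game-theoretic idea a concrete, if toy, form, here is a minimal sketch in Python. The norms, the payoffs, and the “story” are all invented for illustration; nothing here comes from Pinker. The point is only that when both players know the same story, it picks out one equilibrium from several equally good ones, which is what “focal point” means here.

    # A toy coordination game (hypothetical norms, payoffs, and story).
    import random

    NORMS = ["share the catch", "keep the catch"]

    def payoff(choice_a, choice_b):
        # Pure coordination: both players score 1 when they match, 0 otherwise.
        return (1, 1) if choice_a == choice_b else (0, 0)

    def choose(norms, focal_point=None):
        # Without a focal point a player can only guess; a commonly known
        # story makes one choice salient to everyone.
        return focal_point if focal_point in norms else random.choice(norms)

    story_moral = "share the catch"   # what "everyone knows" from the story
    a = choose(NORMS, focal_point=story_moral)
    b = choose(NORMS, focal_point=story_moral)
    print(a, b, payoff(a, b))   # share the catch share the catch (1, 1)

Run without the focal point, the two players match only half the time on average; run with it, they match every time. That, in miniature, is the coordinating work I’m attributing to certain stories.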

Indirect Speech

Pinker devotes the penultimate chapter of The Stuff of Thought to indirect speech. As an example, there’s that traditional sexual come-on, “would you like to come up and see my etchings?” Nothing is said about sex, but that meaning is understood to be implicit in the language. Or there’s the driver who offers a bribe to a traffic officer, not by baldly saying “will you let me off for $50” but by suggesting he’d like to take care of the matter “here, rather than having to go to the trouble of writing a check.” Given that both parties know what is in fact being said, why use such indirection?

Pinker’s answer is complex, subtle, and resists easy summary, involving, as it does, both game theory and Alan Fiske’s relational models theory of social relationships. So I’m not going to try. But it hinges on the fact that the speaker doesn’t know the values of the hearer and so cannot anticipate the response to an overt statement. In the event that the hearer’s values aren’t consistent with the implicit request – the woman doesn’t want to have sex, the officer is honest – the indirection allows the speaker to save face in the case of the sexual overture and to deny intent to bribe in the case of the traffic ticket.

In the course of making his argument Pinker introduces a distinction between shared knowledge and mutual knowledge. Shared knowledge is knowledge that each of several parties has. But they may not know that each has the knowledge. Mutual knowledge is shared knowledge plus the knowledge that everyone knows that they share the same bit of knowledge. He illustrates the difference by reference to the well-known tale of the emperor who has no clothes. Everyone can see that the emperor is naked. That is knowledge they share. But that knowledge is not mutual. No one knows that everyone else is aware of the emperor’s nakedness. Once the little boy blurts it out, however, the situation changes drastically. Now that shared knowledge has become mutual knowledge. And in that mutual knowledge there is potentially dangerous social power. So the difference between shared and mutual knowledge is crucial.
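
A crude way to see the difference is to keep track of two levels of knowledge separately: knowing the fact, and knowing that the others know it. The sketch below is hypothetical and models only those first two levels (true mutual knowledge runs all the way up the hierarchy), but it captures why the boy’s public outburst changes everything.

    # Hypothetical sketch of shared vs. mutual knowledge in the emperor's court.
    courtiers = ["A", "B", "C"]

    knows_fact = {c: True for c in courtiers}          # each privately sees the nakedness
    knows_others_know = {c: False for c in courtiers}  # no one knows that the others see it

    def is_shared():
        return all(knows_fact.values())

    def is_mutual():
        return is_shared() and all(knows_others_know.values())

    print(is_shared(), is_mutual())   # True False: shared, but not mutual

    def public_announcement():
        # A public event is witnessed by all and witnessed to be witnessed by all.
        for c in courtiers:
            knows_fact[c] = True
            knows_others_know[c] = True

    public_announcement()             # the little boy blurts it out
    print(is_shared(), is_mutual())   # True True: the knowledge is now mutual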

As I thought about Pinker’s account, I began to wonder whether or not stories can be a vehicle for providing members of a social group with mutual knowledge (cf. Chwe 2001). Not just any stories, however, but only those particular stories one finds in the literary and sacred traditions of all cultures. The purpose of those stories is to create mutual knowledge of fundamental matters that are otherwise difficult to talk about, either because they are taboo – as in excretory, sexual, and sacred matters – or because they are difficult to verbalize under any circumstances.

Literature

Let us then consider oral story-telling in traditional cultures. People, of course, tell stories of various kinds on many occasions. I’m only interested in those stories that a group considers to be especially important, sometimes even sacred. And I’m interested in oral story-telling because it must have preceded written stories in the evolution of human culture. The situation is therefore a more basic one.

These stories are handed down from one generation to another. They are thus well-known in the group. On any given occasion most audience members will have heard a story before, perhaps many times. While the teller will have his individual style, he will endeavor to tell the story the same way every time, as it was told to him. Of course, in an oral culture, the same doesn’t mean what it does in a literate culture, where texts can be compared with one another, character for character. It means only that the same characters participate in the same incidents. No more, no less. Further, because speaker and audience are in one another’s presence, the speaker knows how well things are going and can modulate the details of his performance in order more effectively to hold the audience’s interest.
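
To make that weaker notion of “the same” concrete, here is a toy comparison in Python. The two tellings and their hand-coded event sets are invented; the point is only the difference between comparing texts character for character, as a literate culture can, and comparing casts and incidents, as an oral culture in effect does.

    # Toy comparison: two criteria for "the same story" (both tellings invented).
    telling_1 = "Coyote stole the fire and scorched his tail."
    telling_2 = "Coyote, stealing the fire, burned his tail black."

    # Each telling reduced, by hand, to its cast and incidents.
    events_1 = {("Coyote", "steals fire"), ("Coyote", "burns tail")}
    events_2 = {("Coyote", "steals fire"), ("Coyote", "burns tail")}

    same_text = telling_1 == telling_2    # literate criterion: character for character
    same_story = events_1 == events_2     # oral criterion: same characters, same incidents

    print(same_text)    # False: the wording differs
    print(same_story)   # True: the "same" story in the oral sense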

The upshot of these considerations is that, for all practical purposes, we may consider the group itself to be the author of the story, not the individual story-teller. The teller is merely the social device through which the group tells the important stories.

Now let’s consider some example stories from the Winnebago trickster cycle as recounted by Paul Radin (The Trickster, 1956).
Note: the next few paragraphs are from an old paper of mine on the evolution of narrative form.
The trickster cycle has many episodes and not all episodes are told in each telling, nor are all episodes present in all cultural groups. Indeed, Radin asserts that the trickster
is admittedly the oldest of all figures in American Indian mythology, probably in all mythologies. It is not accidental that he is so frequently connected with what was regarded in all American Indian cosmologies as the oldest of all natural phenomena, rock and sun. Thus he was a figure that could not be forgotten, one that had to be recognized by all aboriginal theological systematizers. [1956: 164]
Among the Winnebago the trickster stories are sacred, with trickster being presented as the giver of culture. The story can be narrated only by those who have a right to do so, and only under the proper conditions.

The basic action of the story is simple. Trickster, the tribal chief, is preparing for war. This preparation violates tribal tradition, for the tribal chief is not permitted to go to war. While there is no explicit retribution for this, no character who says something like, "Because you have failed to observe the proper rituals, you are going to be punished," the preparations fail and Trickster ends up in the wilderness, completely stripped of culture. He then undergoes a series of adventures in which, in effect, he learns how to operate his body and his culture. These episodes are a catalog of behavioral modes, with hunger and sexuality being prominent. For example, there is one incident (Episodes 12, 13, and 14) where Trickster learns that his anus is part of his body. He had killed some ducks and started roasting them overnight. When he went to sleep, he instructed his anus to ward off any intruders. Some foxes came and his anus did the best it could, but the foxes ignored the flatulence and ate the ducks anyhow. So, to punish his anus he burns it with a piece of burning wood. Naturally he feels pain. Only then does he realize that his anus is a part of himself.

Cultural Evolution 8A: Addendum on Language as Game

I did my graduate work at the State University of New York at Buffalo, where I was enrolled in the English Department, which, at that time, had the most innovative graduate program in the nation. I got my training in cognitive science from the late David Hays, who was in the SUNY Linguistics Department; indeed, he founded the department. Hays had been educated at Harvard, spent a year at the Center for Advanced Study in the Behavioral Sciences at Stanford, and then went off to the RAND Corporation, where he became one of the founders of the discipline of computational linguistics.

I first learned about Hays in the summer of 1973 – the summer before I entered graduate school at SUNY Buffalo – when he published an article in an issue of Dædalus devoted to language:

Hays, D. G. (1973). "Language and Interpersonal Relationships." Dædalus 102(3): 203-216.

In that article Hays reports the following experiment, which is directly relevant to the game theory approach to language that I outlined in the previous post (pp. 204-205):
The experiment strips conversation down to its barest essentials by depriving the subject of all language except for two pushbuttons and two lights, and by suggesting to him that he is attempting to reach an accord with a mere machine. We brought two students into our building through different doors and led them separately to adjoining rooms. We told each that he was working with a machine, and showed him lights and pushbuttons. Over and over again, at a signal, he would press one or the other of the two buttons, and then one of two lights would come on. If the light that appeared corresponded to the button he pressed, he was right; otherwise, wrong. The students faced identical displays, but their feedback was reversed: if student A pressed the red button, then a moment later student B would see the red light go on, and if student B pressed the red button, then student A would see the red light. On any trial, therefore, if the two students pressed matching buttons they would both be correct, and if they chose opposite buttons they would both be wrong.

We used a few pairs of RAND mathematicians; but they would quickly settle on one color, say red, and choose it every time. Always correct, they soon grew bored. The students began with difficulty, but after enough experience they would generally hit on something. . . . The students, although they were sometimes wrong, were rarely bored. They were busy figuring out the complex patterns of the machine.

But where did the patterns come from? Although neither student knew it, the patterns arose out of the interaction between the two students.
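
Here is a small simulation of the setup, as I understand it from the passage above. The strategies are my own guesses for illustration, not anything Hays reports; the point is only that whatever pattern a player sees in “the machine” is in fact being produced, jointly, by the two of them.

    # Simulation of the crossed-feedback pushbutton game described above.
    # Each player's "light" shows the other player's press; both are right
    # only when they press the same button. The strategies are invented.
    import random

    BUTTONS = ["red", "green"]

    def stay_if_right(mine, light, correct):
        # Win-stay, lose-shift.
        return mine if correct else [b for b in BUTTONS if b != mine][0]

    def copy_light(mine, light, correct):
        # Press whatever the light showed last time.
        return light

    def play(strategy_a, strategy_b, rounds=6, seed=1):
        random.seed(seed)
        a, b = random.choice(BUTTONS), random.choice(BUTTONS)
        for _ in range(rounds):
            correct = (a == b)
            print(a, b, "right" if correct else "wrong")
            # Each player sees only the other's press plus right/wrong.
            a, b = strategy_a(a, b, correct), strategy_b(b, a, correct)

    play(stay_if_right, stay_if_right)

Depending on the opening presses and the mix of strategies, the pair either locks in immediately or chases itself in circles forever; either way, the “pattern of the machine” each player sees is a joint product of the two of them.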

Thursday, June 24, 2010

Mind Hacks 4: 1968 – The Crazy Computer Trips the Light Fantastic, 2001

From the origins of man to the trip leading us to a new world, Stanley Kubrick’s 2001 was both a counter-cultural and a mass cultural event. Based on a story by science fiction writer Arthur C. Clarke, 2001 features HAL, a computer that goes crazy through being rigorously rational. The film’s final sequence utilizes abstract color imagery – reminiscent of Fantasia, but also of acid trips – ending in an enigmatic image of a cosmic infant. Many of the counter-cultural young were prepared to see that sequence as the Ultimate Trip. It was as though that Final Trip were the only antidote to a computer-induced madness that brings one human epoch to a close and places us at the brink of another one. 2001 was an attempt to create a new mythology of humankind, one thoroughly grounded in the science and technology of the late twentieth century.

 
On the moon with the Monolith.

Psychedelic drugs broke free of the laboratory and the elite clinic and became available on the street. Many young people experimented freely with these drugs and not a few adopted a lifestyle featuring drug use. Eastern religion, rock and roll, colorful clothes, and back-to-nature utopianism were all part of the formula. However superficial much of this turned out to be, it was quite sufficient to shock and scare people into wondering what was happening in the world. One concrete effect was that psychedelic use and experimentation were made illegal. By that time, however, the cultural cat was out of the middle-class bag. If anything, the prohibition increased the attractiveness of psychedelics as agents of adolescent and generational rebellion, and both popular and elite culture in all spheres felt the influence of psychedelia.

HAL speaks and blinks.

This same generation was the first in which many individuals got their hands on computers at a relatively young age. Courses in computer programming entered the college curriculum and departments of computer science were formed. Computers were still large and expensive, but the invention of time-sharing – an outgrowth of the AI movement – made terminals available to many.

 
HAL sees and observes.

Computer culture began to spawn imaginative forms of its own. MIT gave birth to the first video game, Space War, in the 1960s. It ran on a mainframe computer of the time and was the progenitor of those point-and-shoot games that would soon replace pinball machines and invade television sets. In the mid-1970s Stanford University’s Artificial Intelligence Lab gave birth to Adventure, a computer game in which players entered Colossal Cave and went from room to room in search of treasure. This game was played by typing and reading text. There were no graphics; one had to imagine the cave in one’s mind. Adventure became the prototype of the role-playing games which blossomed into elaborate fictional worlds with meticulously constructed visual settings and multiple characters the players could assume. As personal computers became more powerful and graphics techniques advanced, both types of games – and hybrids of the two – became ubiquitous and, when the web emerged, they went online and became communal.

Wednesday, June 23, 2010

JC Wall of Fame


Jersey City’s Wall of Fame is about half a mile from the western end of the Holland Tunnel. It’s a block long, stretching from Coles St. to Jersey Avenue between 10th and, well, 12th, since 11th no longer exists. You can’t see it from the street because it’s up on an embankment. Fifty years ago there were railroad tracks and a freight station on the embankment, but those are gone. This particular block is slated to have apartment buildings on it, but the stagnant economy has put those plans on hold.

Because the wall is hidden from the street, it is ideal for graffiti writers, who paint for one another, not for members of the general public, some of whom would surely complain to the authorities and they, in turn, would have the wall “buffed” (that is, painted over). Also, because the walls are semi-hidden (they’re visible to people living in the upper stories of nearby apartment buildings), the writers can work without fear of interruption.


This is by Jersey Joe, who also writes as Rime and who has (or had, some have been painted over) two or three other pieces on or near the wall. He has a world-wide reputation and, in fact, makes a living as a graffiti writer. But he still does “illegals” in addition to his legit work so as to keep up his street cred.


This is in a style that’s distinctly different from anything else on the wall. It’s one of several pieces by a crew from the UK – notice the “UK” to the right. Notice, as well, that fragments of older graffiti are visible around the edges of this piece.


This is one of several pieces that were done on a black background. Sometimes, but not always (see the previous photo), a crew of writers will paint the surface a single color in order to completely cover over older graffiti on the surface. Notice the dirt and junk piled in front of this piece. Site preparation work had begun for an apartment complex before the bottom dropped out of the economy. As a result a lot of perfectly fine graffiti was damaged or destroyed for no good reason.


This is by Taboo, a writer with Brooklyn’s DYM crew. As far as I know this piece is still up, though it’s been well over a year since I’ve been up on the embankment.

Tuesday, June 22, 2010

Street Sign

Cultural Evolution 8: Language Games 1, Speech

The key to the treasure is the treasure.
– John Barth

But I’m not talking of language games in Wittgenstein’s sense, though the Wittgenstein of the Tractatus had a considerable influence on me as an undergraduate. No, I’m thinking of game theory, not something I’ve studied, though I did have an undergraduate course on decision theory taught by R. B. Braithwaite. But I’m getting ahead of the game.

As the title says, this post is about language. There’s been a fair amount of work done on language from an evolutionary point of view, which is not surprising, as historical linguistics has well-developed treatments of language lineages and taxonomy, the “stuff” of large-scale evolutionary investigation. While this work is directly relevant to a consideration of cultural evolution, however, I will not be reviewing or discussing it. For it doesn’t deal with the theoretical issues which most concern me in these posts, namely, a conceptualization of the genetic and phenotypic entities of culture. This literature is empirically oriented in a way that doesn’t depend on such matters.

The Arbitrariness of the Sign
In particular, I want to deal with the arbitrariness of the sign. Given my approach to memes, that arbitrariness would appear to eliminate the possibility that word meanings could have memetic status. For, as you may recall, I’ve defined memes to be perceptual properties – albeit sometimes very complex and abstract ones – of physical things and events. Memes can be defined over speech sounds, language gestures, or printed words, but not over the meanings of words. Note that by “meaning” I mean the mental or neural event that is the meaning of the word, what Saussure called the signified. I don’t mean the referent of the word, which, in many cases, but by no means all, would have perceptible physical properties. I mean the meaning, the mental event. In this conception, it would seem that word meaning cannot be memetic.

That seems right to me. Language is different from music and drawing and painting and sculpture and dance; it plays a different role in human society and culture. On that basis one would expect it to come out fundamentally different on a memetic analysis.

This, of course, leaves us with a problem. If word meaning is not memetic, then how is it that we can use language to communicate, and very effectively over a wide range of cases? Not only language, of course, but everything that depends on language. Literature obviously – which I’ll take up in the next post – but much else as well.

Speech as a Means of Communication
Willard Van Orman Quine has given us a classic thought experiment that points up the problem of word meaning. He broaches the issue by considering the problem of radical translation, “translation of the language of a hitherto untouched people” (Quine 1960, 28). He asks us to consider a “linguist who, unaided by an interpreter, is out to penetrate and translate a language hitherto unknown. All the objective data he has to go on are the forces that he sees impinging on the native’s surfaces and the observable behavior, focal and otherwise, of the native.” That is to say, he has no direct access to what is going on inside the native’s head, but utterances are available to him. Quine then asks us to imagine that “a rabbit scurries by, the native says ‘Gavagai’, and the linguist notes down the sentence ‘Rabbit’ (or ‘Lo, a rabbit’) as tentative translation, subject to testing in further cases” (p. 29). And thus begins one of the best known intellectual romps in the philosophy of language.

Quine goes on to argue that, in thus proposing that initial translation, the linguist is making illegitimate assumptions. He begins his argument by noting that the native might, in fact, mean “white” or “animal,” and later on he offers more exotic possibilities, the sort of things only a philosopher would think of. Quine also notes that whatever gestures and utterances the native offers as the linguist attempts to clarify and verify will be subject to the same problem. Quine’s argument is thorough and convincing.

When he did that work, however, he did not, of course, have access to a range of more recent work in cognitive anthropology and evolutionary psychology that indicated that our adapted minds have a preferred way of parsing the world, as do baboons. To be sure, this is “overwritten” and augmented in culture-specific ways, but those underlying perceptual and cognitive systems do not disappear. To consider a specific example, the work on folk taxonomy (Berlin 1992) suggests that there is a so-called basic level of designation, and that is at the level of “rabbit” and not “animal” (in fact, many languages don’t even have a word at that level of generality). So the linguist is reasonable in assuming “rabbit” is a more likely translation than “animal.” Other considerations are likely to rule out “white” or Quine’s other suggestions. I have no reason to believe that this cognitive architecture so constrains matters that there is only one possible referent for “Gavagai.” But I do think that it is likely to turn out that, all other things being equal, “rabbit” is in fact the best guess.

This situation, of course, is rather different from that of ordinary speech between people who share a common language. In the common situation both parties would know the meaning of “Gavagai.” Yet, however effective it is, ordinary speech sometimes fails to secure understanding between people and, where such understanding is achieved, that achievement has required back-and-forth speech. The mutual understanding is achieved through a process of negotiation. As William Croft reiterates in chapter 4 of Explaining Language Change, we cannot get inside one another’s heads and so must negotiate meanings in conversation.

That is to say, communication through language is not a matter of sending information through a pipeline. It does not happen according to what Michael Reddy (1993) has called the conduit metaphor. Reddy’s article is based on 53 example sentences. Here are the first three (p. 166):
1. Try to get your thoughts across better
2. None of Mary’s feelings came through to me with any clarity
3. You still haven’t given me any idea of what you mean
Reddy’s argument is that many of our statements about communication seem to be based on the notion of sending something (the thought, idea, feeling) through a conduit; hence he calls it the conduit metaphor. He knows that communication doesn’t work that way, but that’s not his central issue. His central concern is to detail the way we use the conduit metaphor to structure our thinking about communication.

Reddy’s argument is reminiscent of a somewhat earlier argument by Paul de Man, “Form and Intent in the American New Criticism” (1983, first published in 1971). Consider this passage (p. 25):
“Intent” is seen, by analogy with a physical model, as a transfer of a psychic or mental content that exists in the mind of the poet to the mind of a reader, somewhat as one would pour wine from a jar into a glass. A certain content has to be transferred elsewhere, and the energy necessary to effect the transfer has to come from an outside source called intention.
De Man’s point was that, when we read a text, the intention (de Man uses the term in its somewhat rarified philosophical sense) that gives life to those signs on the page is our intention, not the author’s. And he is right.

De Man’s insight, and similar ones by Derrida, Barthes, Foucault and others, had an electrifying effect on literary critics in the United States, leading to a tremendously fertile period in academic literary criticism that, however, became increasingly sclerotic in the 1990s. But that story’s neither here nor there. My point is simply that these thinkers were attempting to deal with a real problem and, ultimately, they failed.

What, for example, could Derrida (1976, p. 158) have possibly meant by proclaiming “There is nothing outside of the text”? What he did not mean is that the world is nothing but a text and a text created by more or less arbitrary social conventions. Read sympathetically, and in context, the phrase seems to mean something to the effect that there is no way we can “step outside” language so as to examine, in full omniscient and transcendental objectivity, the relationship between language and the world. And that, it seems to me, is true. We’re always going to be immersed in “language,” whether natural or the various languages of science and mathematics.

How, then, do we fly free of the bottle? We play games.

Monday, June 21, 2010

Where're you gonna be when the asteroid hits?

I’m speaking metaphorically, of course. And the asteroid is a mega-change in culture that’s coming.

Of course, we’ve all been seeing something coming since when? the 60s? Whenever. The Information Age, the age of Aquarius, globalization, post-industrial capitalism, whatever. It’s been heading at us at a mighty clip.

I don’t think the current asteroid is something else. Is it the Big Tipping point coming? The one where we can’t go back? Heck, we already can’t go back.

What’s the pattern? I don’t know how to connect the dots. But here are some of them, in no particular order.


OTW: Organization for Transformative Works – A non-profit dedicated to preserving fan culture, with archiving projects, legal advocacy, a peer-reviewed scholarly journal, and a blog.

Fan culture, yes. But the institutionalization of fan culture in this way, yes, that’s new.

QCO: Question Copyright Organization – A non-profit dedicated to free culture (on the model of open software, another dot in the pattern). Restrictive copyright laws are harming culture, taking private what should be public.

When copyright disappears, what happens to the difference between fan culture and professional culture?


Face-o-Matic: A deck of cards created by cartoonist and animator Nina Paley. Half of them are the upper parts of (cartoon) heads, half are the lower parts of heads. The idea is to mix them up and see what you get. She created them to get people into drawing faces depicting emotion. Could you use them in a game? Think of them as a metaphor for all of culture. (I bet Nina’s got a million ideas like this.)

Graffiti: The stuff that started on walls and subway cars in Philadelphia and New York City. Since then it’s spread around the world. The interesting thing is that most of it is done for free and some of the best stuff is in little-known places where only other graffiti artists can see it. Them and the homeless people who live there.

Silly Talk About the Brain & Pleasure Centers

Sydney Lamb begins Pathways of the Brain with a story about his daughter (p. 1):
Some years ago I asked one of my daughters, as she sat at the piano, "When you hit that piano key with your finger, how does your mind tell your finger what to do?" She thought for a moment, her face brightening with the intellectual challenge, and said, "Well, my brain writes a little note and sends it down my arm to my hand, then my hand reads the note and knows what to do." Not too bad for a five-year old.
Lamb goes on to suggest that an awful lot of professional thinking about the brain takes place in such terms (p. 2):
This mode of theorizing is seen in ... statements about such things as lexical semantic retrieval, and in descriptions of mental processes like that of naming what is in a picture, to the effect that the visual information is transmitted from the visual area to a language area where it gets transformed into a phonological representation so that a spoken description of the picture may be produced....It is the theory of the five-year-old expressed in only slightly more sophisticated terms. This mode of talking about operations in the brain is obscuring just those operations we are most intent in understanding, the fundamental processes of the mind.
I agree with Lamb whole-heartedly. My impression is that most of the neuroscientific effort goes into getting observations – using some really cool technology, too – but when it comes to thinking about what's going on, the inner child just takes over – and a rather dim one at that.

Some talk of pleasure centers, for example, strikes me as being equally dim. This kind of talk asks you to imagine that there is some center in the brain which, when stimulated, gives you pleasure. Call it the P-Center. Such talk is grounded in a misinterpretation of some interesting experiments James Olds did back in the 1950s – which I discuss in Beethoven’s Anvil (pp. 82-85).

Sunday, June 20, 2010

Twinkle Time in the Big City

William Benzon, 1912-1998: He Went Out Swinging and Smooth Shaven

My father, William Benzon, was born on Jan. 31, 1912 in Baltimore, MD, and died November 21, 1998 in Allentown, PA. His parents were Danish immigrants. For about the first six years of his life he lived with his parents and two older sisters, Karen and Signe, on Curtis Bay in Baltimore in a house which had formerly been a yacht club. The family then moved to Longwood St. He attended Boys' Latin School and, according to his high school yearbook, was regarded as an intellectual and as the 2nd brightest in his class. He was on the boxing, fencing, and football teams and his favorite expression was "Well blow me down."

He did two years at the University of Virginia, where he managed to run up gambling debts that his father had to satisfy. He completed his college education at Johns Hopkins, graduating in 1934 with a degree in chemical engineering. He then went to work for Bethlehem Steel (initially at Sparrows Point) where he spent his entire career. He entered their "loop" program, which was for hotshot college graduates they wanted to nurture. I don't know what his initial duties were or how he ended up in the mining division of the company (Bethlehem Mines Corp.). He spent most of his career there and rose to Superintendent of Coal Preparation, a position that was created for him. He stayed in that post until his retirement in 1974. He continued to consult on coal preparation after retirement.

Early in his career he moved to Johnstown Pa. where Bethlehem Mines had its headquarters. There he met my mother, Elizabeth Tredennick, and married her in 1940. I was born in 1947 and my sister in 1951.

What was he like?

He was a brilliant man, attaining an international reputation in his field, coal preparation. He was also a loving father, expressing his love in various ways, including making some very fine things for me and my sister. He made furniture for my sister’s dolls, a high chair, and a very elegant play pen with the letters of the alphabet cut into the slats. He made me a gorgeous Indian headdress – feathers of various kinds, ermine tails, abalone shell ornament, a beaded headband – and various swords and knives, etc., as appropriate for various Halloween costumes. He encouraged my sister and me in whatever we wanted to do. When I went off to Hopkins and grew long hair, a beard and mustache, that was OK. And so was going to graduate school in something as impractical as English literature. When, as an adult, I needed to borrow money from him because I was out of work, he loaned me money (even after he had retired and was obviously living on a fixed income). He never ever suggested that I "face reality" and move into the corporate world, etc. He knew my intellectual work was important and helped me pursue that, as he helped my sister pursue her interest in poetry.

He was an intellectual and books were important to him; he had many of them, including many he inherited from his father. I remember a number of Christmas seasons when he read Dickens' A Christmas Carol to the family after dinner over several evenings. And he certainly read stories to me before bed as a child – I remember him reading to me from Mark Twain, Rafael Sabatini, and others. I was particularly struck by his ability to read dialog so naturally, with expression, like people would actually have said it. He spent a great deal of time helping me and my sister with our homework. He never just told me answers or worked problems. He always asked questions designed to lead me to the answers myself.

My father had an excellent sense of humor, which he slyly attributed to his Danish heritage. He loved Mark Twain, Charles Dickens, Lewis Carroll, one Jerome K. Jerome, and the Marx Brothers. He was also quite fond of Victor Borge. For one thing, Borge was from Denmark, where his parents were born and raised. Borge's humor was often of a linguistic nature, and my father was interested in language (he owned books by Otto Jespersen, the Danish scholar of the English language, and by H. L. Mencken). Borge was a musician and much of his humor involved committing mayhem on various pieces of classical and not-so-classical music.

An Athlete - Golf

He was a good and dedicated athlete. Beyond his high-school sports he was also an excellent swimmer and loved the sea. As an adult he was a cyclist (mostly in his youth, in Baltimore and in his early days in Johnstown) and, above all, a golfer.

He took up the sport as an adult and pursued it passionately. During his prime – say from 30 through 55 – he had a single-digit handicap, even as low as two or three. He kept systematic notes about his game throughout his entire golfing career. One winter he painted golf balls with red nail polish and golfed in the snow. He experimented with putters he either built himself or modified from putters he'd bought. He also had ideas about custom golf shoes, which he half-way finished (my sister and I found the half-completed shoes in the basement).

Saturday, June 19, 2010

True Colors?

This is another exploration of color in photography.

I took this photograph at about 8 in the morning on a sunny day. I was standing on the shore of the Hudson River on the New Jersey side (in Hoboken) and was shooting east across the Hudson River and, ipso facto, into the sun. In this particular shot I was pointed almost directly in the sun's direction, though the sun was above the focal point of the shot and still relatively low on the horizon. Thus the visual field, and the camera, was flooded with light. The buildings appeared as ghostly forms behind a luminous gauze; they did not appear solid.


The above image is what came out of the camera. I converted from RAW (the camera's native format) to JPG without any manipulation of the image. As best I can recall, the scene didn't actually look like that. The scene was far more luminous, while the photo is dull. Yet, it does convey the sense of immateriality of the buildings. The next image is my first rendering of the scene.


The rest of the images are other renderings of the scene. Is any truer than the others? By what criterion?



Friday, June 18, 2010

Mind Hacks 3: 1956 – The Forbidden Planet Within

Forbidden Planet, Robby the Robot on the right.

Based loosely on Shakespeare’s The Tempest, Forbidden Planet takes us on a Freudian trip to another world where we meet Robby the Robot, the progenitor of George Lucas’s R2-D2 and C-3PO, and the Monster from the Id, the progenitor of those irrational computers that crop up in science fiction. This use of Shakespeare underscores the point that the imaginative devices used publicly to comprehend and present computers often have old cultural roots. Part of the world that Disney had carved out for an indeterminate audience is now being crafted to fit the needs of young adults through the guise of science fiction and the fantasy fiction of J. R. R. Tolkien. Disney’s abstract imagery becomes the stuff of special effects. Similar imagery would be reported by subjects of the LSD experiments that were conducted by the CIA – in search of truth drugs and agents for psychological warfare – and by various clinicians in the United States and Canada.

Forbidden Planet, the super-hot Monster from the Id melting the door at the left.

The term “artificial intelligence” was coined at a 1956 conference held at Dartmouth; the Russians launched Sputnik a year later. Noam Chomsky vanquished behaviorism and revolutionized linguistics by making the study of syntax into a technical discipline modeled on the notion of abstract computation. The human mind was declared to be fundamentally computational in nature.

In the literary world Aldous Huxley initiates modern writing on drug experiences with The Doors of Perception while Jack Kerouac, Allen Ginsberg, and William Burroughs all published major drug-influenced works. At the same time banker R. Gordon Wasson reaches the general public with an ecstatic article in Life magazine about having discovered that the hallucinogenic mushroom, Amanita muscaria, was the root of much religious experience throughout the world. Psychiatric experts were predicting great things of LSD-aided psychotherapy.

Forbidden Planet, Morbius (seated at the table) using the mind-amplification technology of an advanced, but dead, civilization.

The Josiah Macy Foundation was funding conferences in both arenas, cybernetics and psychedelics. It seemed as though, at last, we were on the verge of discovering the material basis of the human mind and harnessing it to our will. Yet, however this played out in the press, the “real goods” were restricted to relatively small and elite groups of people. Computers were very large and expensive devices that had to be kept in environmentally controlled rooms; very few people saw or worked directly with them. Similarly, these wonderful psychedelic drugs were not readily available; one had to travel to Mexico, or one had to live in a big city and know the right psychiatrist.

People were wishing upon a distant star, imagining a future world over which they, in fact, had no control and for which they had little responsibility. That safe remoteness was about to change. In the 1960s psychedelics became freely available on many college campuses and in their surrounding neighborhoods while the 1970s would see the emergence of personal computers, computers small enough and cheap enough that individuals could own them.

Selected Milestones:
  • 1953: Watson and Crick publish the double helix structure of DNA and thus initiate the age of molecular biology, bringing biology into the information paradigm.
  • 1954: J. R. R. Tolkien publishes The Fellowship of the Ring and The Two Towers.
  • 1954: Thorazine, the first major tranquilizer, is marketed.
  • 1956: The Bathroom of Tomorrow attraction opens at Disneyland.
  • 1959: John McCarthy proposes time-sharing to MIT’s director of computing; time-sharing would make computers much more accessible.
  • 1961: Robert Heinlein publishes Stranger in a Strange Land, which would become a major point of literary reference in the drug and mystical counter-culture of the 1960s.

Thursday, June 17, 2010

Happy Birthday, Mike!

It's not often that one gets to work with a genuine pioneer, even if only in a small way. I'm talking of animation historian Michael Barrier, who turned 70 yesterday, 16 June 2010. I feel honored by his generous correspondence over the past several years and by the fact that he has published some of my thoughts at his website.

From Amid Amidi's tribute at Cartoon Brew:
Barrier began interviewing animation artists in the late-1960s, and by the 1980s, he (along with his Los Angeles-based research partner Milt Gray) had recorded the most comprehensive collection of interviews with artists from the Golden Age of theatrical animation. To put his work into perspective, when he started chronicling the lives of these artists, few film critics took animation seriously, and even fewer regarded the classic Hollywood cartoons as a field worthy of study. In the face of such indifference, Michael had the audacity to not only interview the famous directors but hundreds of little known artists who contributed to the success of Hollywood theatrical cartoons ranging from animators and layout artists to cameramen and composers.
The spade work Barrier's done will serve us for generations. It's a real contribution to the study of cartoons and animation and, through that, to the crafts themselves. Without a sure knowledge of the past, we cannot advance with confidence into the future.

One-point stance × 3

Wednesday, June 16, 2010

From Yesterday's Catch: Visions of Xanadu



 

Xanadu didn't happen. It was built. The building left/leaves scars/traces. A state of mind.  On the real tip.

Mind Hacks 2: Adventures in Fantasia


Frame grabs from "The Nutcracker Suite" segment of Fantasia. Is this the seed of the glowing pastoral utopia in Cameron's Avatar?

 

Adventures in Wonderland

When, in her 1967 hit song, “White Rabbit,” Grace Slick sang that “one pill makes you larger and one pill makes you small,” she was alluding to the contemporary use of LSD to explore altered states of consciousness. But she was specifically invoking words from Lewis Carroll’s Alice’s Adventures in Wonderland. Carroll’s Alice books are the sort of children’s books that are also, albeit perhaps secretly, aimed at adults. In the guise of providing charming entertainment for children, Carroll wrote metaphysical fables for adults in which the most fundamental aspects of reality – the conformation and behavior of objects in space and time, their identity from moment to moment – are curiously and marvelously fluid. There is some suspicion that Carroll himself experimented with hallucinogenic drugs – the caterpillar in the famous Tenniel illustration is sitting on a hallucinogenic mushroom, Amanita muscaria. Whether or not that is so, drug use and its hallucinogenic effects were not exactly a secret among educated Europeans of the nineteenth and early twentieth centuries. To name only a few obvious examples:
  • Samuel T. Coleridge’s most famous poem, “Kubla Khan,” was supposed to have been composed under the influence of opium.
  • Charles Dickens’ last, and unfinished, novel, The Mystery of Edwin Drood, features an opium den among its settings.
  • Sigmund Freud experimented with cocaine.
  • William James wrote about his experiences with nitrous oxide.
  • Drug use (cocaine and morphine) was part of the cerebral mystery of Conan Doyle’s Sherlock Holmes, known best as a paragon of deductive rationality.
For Alice’s author, an instructor in mathematics and logic, and the adults in his audience, a child’s book was a way to escape the reality-based thinking demanded of Victorian adults. This trio – a dreamland, childhood, and mathematical logic – is the mental venue which hosted the late 20th century dance of drugs and computers.

1940 – Fantasia

First released in 1940, Fantasia was the third animated feature produced by Walt Disney and was a tour de force in the century’s new aesthetic and entertainment medium, film. Disney wanted to break new cinematic ground, to present moving images of a kind that had never been seen before. His animators set non-narrative and even abstract imagery to music, packaging avant-garde visual material as a new form of family entertainment. Animation freed image-makers from the bounds of ordinary reality, allowing Disney to create worlds beholden only to the idealized patterns and rhythms of the human nervous system. The public, however, was not ready for Disney’s vision and it failed at the box office; it did not find a large audience until the counter-cultural 60s, hardly the audience Disney had originally imagined.

Tuesday, June 15, 2010

Cultural Evolution 7: Where Are We At?

I’ve got three more posts planned before I write the post that’ll go up at the Forum of the “On the Human” project of the National Humanities Center. But this isn’t one of those three. Rather, I want to step back and take a look around. That’s first. Then I’m going to describe what I’m planning for the last three posts. And I’ll conclude with a note on design.

What’s Up? Two cultures, again

The general website for “On the Human” describes the project thus:
On the Human (OTH) is an online community of humanists and scientists dedicated to improving our understanding of persons and the quasi-persons who surround us. As persons are biological, psychological, historical, moral, and autobiographical beings, we employ modes of inquiry from the sciences and humanities. Contributors explore issues in metaphysics and biology, ethics and neuroscience, experimental philosophy and evolutionary psychology.
Thus it is one of those “hands across the two cultures divide” enterprises, of which I tend to be skeptical despite the fact that I’ve been crossing that pseudo-divide my whole career. But enough of my skepticism.

Let’s take the divide at face value. In those terms, what’ve we got?

Scientists and cultural evolution

We have a great deal of work on human cultural evolution over the past two decades or so, and most of it has been done by people who are trained as or think of themselves as scientists. For the most part these thinkers have studied cultural evolution without reference to 1) human psychology, including perceptual and cognitive psychology, and 2) detailed descriptions of cultural objects and artifacts. On the first, Colin Martindale is an exception. On the second, linguistics is an exception.

Thus studied, human cultural evolution is a bit like the study of biological evolution without molecular biology and without plant and animal physiology and anatomy. Without those things, how could you study biological evolution at all? Where would Darwin have been if he didn’t have three or four centuries’ worth of natural history on which to build his thinking? How can one understand comparative morphology in ecological context without detailed accounts of morphologies and lifeways? And yet that’s how the study of cultural evolution is proceeding.

Well, it’s not that bad. But as a first approximation, that will do.

As I’ve said, Martindale is an exception. His account of cultural evolution, unlike other accounts, is grounded in perceptual, cognitive, and affective psychology. His empirical work, of course, rests on characterizations of cultural artifacts – poems, musical compositions, tombstones, and the like – though they are not detailed descriptions comparable to standard descriptive work on flora and fauna. For what he wants to do, such detailed descriptions aren’t necessary.

By contrast, memetics, especially at its popularizing extremes, seems like an attempt to replace psychology entirely. Just dump the entire set of disciplines and think deep thoughts about the mind by talking of memes.

This won’t do, not at all. My own thinking makes perceptual and cognitive psychology essential (see Cultural Evolution 3: Performances and Memes). Memes, as I have defined them, are perceptual entities and so one must call on perceptual and cognitive psychology to understand them. Similarly, the cultural analog to a phenotype is a performance, understood as the mental events through which one apprehends cultural products. The trick is to understand these things as collective phenomena, not just individual ones.

Correlatively, one also needs detailed descriptive control of cultural artifacts. My two posts on Rhythm Changes give an indication of what’s entailed. Studies of language evolution entail that level of descriptive detail, but most work doesn’t seem to be aware of that level of description – which is, by the way, only a crude indicator of what’s necessary and possible.

Thus, the contemporary study of cultural evolution, on the whole, does not seem like it arose as an attempt to solve problems that have arisen through close analysis and description of human culture. Rather, it is something that has been imported from biology and applied to human culture in ways that, for the most part, don’t require detailed understanding of cultural morphologies and processes. In some cases, the thinking is mostly theoretical speculation. While I am in no position to object to speculation as a matter of principle, I am a bit skeptical about the possible success of a speculative enterprise that betrays so little interest in the properties of the objects and processes which are the objects of speculation. Where would biology have gotten if those naturalists had been indifferent as to whether or not the creature had three legs or four, or, indeed, whether or not it had any legs at all?

Humanists on Cultural Evolution

That, in crude caricature, is the scientific side of the ledger. What about the humanists?

Monday, June 14, 2010

Mind Hacks R Us: The Psychedelic Computer


In scientific prognostication we have a condition analogous to a fact of archery – the farther back you are able to draw your longbow, the farther ahead you can shoot.
– R. Buckminster Fuller

The child is father to the man.
– William Wordsworth


A bit earlier in this millennium I thought it would be interesting to write a book on the parallel evolution of computer culture and psychedelic culture in the United States from mid-century to the end of the millennium. I wrote up a proposal, called it Dreams of Perfection, and it went nowhere. Subsequently John Markoff and Fred Turner each published parts of the story:
John Markoff, What the Dormouse Said: How the Sixties Counterculture Shaped the Computer Industry, 2005.

Fred Turner, From Counterculture to Cyberculture: Stewart Brand, the Whole Earth Network, and the Rise of Digital Utopianism, 2006.
So, now I’m publishing a slightly revised portion of that old book proposal as a series of posts. I’ve cut the marketing portions of the proposal, leaving only the conceptual overview and the chapter summaries. I’d envisioned five longish chapters, one for each decade. Each chapter was to have been named after a thematically relevant mass-market movie that showed (more or less) during that decade. I’ve retained that structure in this series of posts.

The rest of this post consists of the conceptual overview, slightly revised. The next post will have a short prelude and the chapter summary for the first chapter, which used Disney’s Fantasia as its point of departure. That will be followed by posts taking off from Forbidden Planet, 2001, Tron, and The Matrix.

Graffiti by Plasma Slugs, with saturation and contrast boost from Photoshop.

The Dance of Drugs and Computers

During the last half of the 20th century various groups of insiders and outsiders adopted mind-altering drugs and computer technology to create cultural spaces in which we imagined and realized new venues for the human mind. These spaces engaged fundamental issues of freedom and control, of emotion and reason, which have bedeviled humans everywhere, and elaborated them through modern science and technology. The psychoactive drugs which, in some sense, free us, have been synthesized through laboratory techniques we have invented, but only recently. The computers which extend our powers of control and order in often surprising ways embody logical forms that date back to Aristotle but were only recently brought to fruition in the late nineteenth-century work of George Boole and others. Science and technology thus provide us with objective physical touchstones for the otherwise abstract powers and activities of our hearts and minds.

Taken together with that great Victorian invention, childhood innocence, the technologies of drugs and computers would constitute a cultural arena which served as incubator, nursery, and playground for some of the major lines of development in late twentieth century culture. For, if a society is to progress it needs cultural playgrounds where new ideas can be conceived, tested and developed. Psychedelic drugs and computing – and their associated cultures – functioned as such playgrounds in the latter half of the 20th century. They were, in fact, among the most important cultural playgrounds in America. 

Saturday, June 12, 2010

The B’s Have It: Bordwell and Boyd

In his most recent post, Glancing backward, mostly at critics, David Bordwell praises Gilbert Seldes “as a worthy critic not because of one-off reviews but in virtue of his pointed, sometimes daring ideas, his knowledge, and the zest they arouse in the reader.” He also quotes him generously. From The Great Audience:
The movies live on children from the ages of ten to nineteen, who go steadily and frequently and almost automatically to the pictures; from the age of twenty to twenty-five people still go, but less often; after thirty, the audience begins to vanish from the movie houses. Checks made by different researchers at different times and places turn up minor variations in percentages; but it works out that between the ages of thirty and fifty, more than half of the men and women in the United States, steady patrons of the movies in their earlier years, do not bother to see more than one picture a month; after fifty, more than half see virtually no pictures at all.

This is the ultimate, essential, overriding fact about the movies. . . .
Bordwell then observes that “what we’ve been told for years was characteristic of our Now—the infantilization of the audience—has been in force for at least sixty years.” In view of my recent remarks on universal kid space I find these observations most interesting. Most interesting. A bit later Bordwell quotes from Seldes’ best-known book (which I’m going to have to read one of these days), The 7 Lively Arts: “The daily comic strip of George Herriman (Krazy Kat) is easily the most amusing and fantastic and satisfactory work of art produced in America today.” Think about that one, folks, and think about cartoons and universal kid space. And, while you’re at it, dig out Scott McCloud’s Understanding Comics, which is, I believe, the best introduction we’ve got to an understanding of cognition and story-telling, by which I mean story-telling in general, not just cartoons.

Meanwhile, now that my review of Brian Boyd’s The Origin of Stories is well in the past, two passages from the book stand out. The first speaks for itself (pp. 16-17):
An evolutionary view of human nature, far from threatening freedom, offers a reason to resist the molding of our minds by those who think they know best for us. The cultural constructivist’s view of the mind as a blank slate is “a dictator’s dream.” If we were entirely socially constructed, our “society” could mold us into slaves and masters, and there would be no reason to object, since those would henceforth be our socially constructed natures.
The second provides a useful qualification to the notion that religion is one of the things that happened when this or that evolutionarily adapted mental module went wrong and started seeing things that aren’t there (pp. 202-203):
Science has improved immensely on the fictive agential explanations of the past—although even scientists find they cannot help anthropomorphizing causal factors; but science could not have begun without our persistent inclination and ability to think beyond the here and now, to invent agents and scenarios not limited to the actual or the probable but exploring also the merely possible or the eerily improbable.
Could it be that science has the same evolutionary roots as religion? (Pssst. Don't tell Richard Dawkins.)

Dusk/Sunset 21 May 2010





Friday, June 11, 2010

Links: Origins of Language and the Problem of Design

So, I get an email informing me of a link post at John Wilkins' Evolving Thoughts. I go there and click through to James Winters's A Replicated Typo, which has a post, Answering Wallace's challenge: Relaxed Selection and Language Evolution. I start reading and find out that it's mostly about a recent article by Terrence Deacon in the Proceedings of the National Academy of Sciences. So I click through to the paper, A role for relaxed selection in the evolution of language capacity. And, naturally, I start reading that, not too much, just a little to get the flavor.

Then I start poking around in the immediate neighborhood, as Winters had indicated that Pinker, too, had a recent paper on language evolution. As indeed he does: The cognitive niche: Coevolution of intelligence, sociality, and language. So I read a little of that and did some more poking.

And what do I find? PNAS has a whole pile of articles on human evolution in that issue, and you can read them all online for free. Or, if you wish, you can pony up $25 and spend a week downloading PDFs to your heart's content. I might just do that. Douglas Wallace has a paper, Bioenergetics, the origins of complexity, and the ascent of man, which looks particularly interesting. It's the energetics and complexity stuff that has my attention. Here's the abstract:
Complex structures are generated and maintained through energy flux. Structures embody information, and biological information is stored in nucleic acids. The progressive increase in biological complexity over geologic time is thus the consequence of the information-generating power of energy flow plus the information-accumulating capacity of DNA, winnowed by natural selection. Consequently, the most important component of the biological environment is energy flow: the availability of calories and their use for growth, survival, and reproduction. Animals can exploit and adapt to available energy resources at three levels. They can evolve different anatomical forms through nuclear DNA (nDNA) mutations permitting exploitation of alternative energy reservoirs, resulting in new species. They can evolve modified bioenergetic physiologies within a species, primarily through the high mutation rate of mitochondrial DNA (mtDNA)–encoded bioenergetic genes, permitting adjustment to regional energetic environments. They can alter the epigenomic regulation of the thousands of dispersed bioenergetic genes via mitochondrially generated high-energy intermediates permitting individual accommodation to short-term environmental energetic fluctuations. Because medicine pertains to a single species, Homo sapiens, functional human variation often involves sequence changes in bioenergetic genes, most commonly mtDNA mutations, plus changes in the expression of bioenergetic genes mediated by the epigenome. Consequently, common nDNA polymorphisms in anatomical genes may represent only a fraction of the genetic variation associated with the common “complex” diseases, and the ascent of man has been the product of 3.5 billion years of information generation by energy flow, accumulated and preserved in DNA and edited by natural selection.
That third sentence is the one that got me: "The progressive increase in biological complexity over geologic time is thus the consequence of the information-generating power of energy flow plus the information-accumulating capacity of DNA, winnowed by natural selection." Sweet. What's especially sweet is the casual assertion of "biological complexity over geologic time." That's certainly how I see it, but I was under the impression that biologists bridle at the notion of increasing complexity over time. Has that notion become more acceptable while I was looking the other way? I surely hope so. In any event, David Hays and I wrote a little article on the subject, A Note on Why Natural Selection Leads to Complexity, which you may download here.

But let's get back to Winters' post at A Replicated Typo. Here's a passage that got me thinking a bit:
The first aspect we need to appreciate is how Darwinian-like processes operate at the developmental-level. Deacon cites many instances, such as the fine-tuning of axonal connection patterns in the developing nervous system, where developmental processes are achieved through selection-like operations. Importantly, though, the logic differs from natural selection in one respect: “selection of this sort is confined to differential preservation only, not differential reproduction. In this respect, it is like one generation of the operation of natural selection”. The point he’s trying to get across is that these intraselection processes are taking place right across nature.
That fine-tuning of axonal connections has, of course, been known for some time, and Gerald Edelman has built his own theories of the brain around such phenomena, talking of neural Darwinism. What happens is this: at some relatively early stage in ontogenetic development a population of neurons sprouts dendrites like mad, forming random connections all over the place. These connections are then "pruned back" through use. Those connections that are used become stronger; those that are not weaken, and the dendrites die away. In this way the pattern of neural connectivity is "sculpted" to "match" the affordances (to use J.J. Gibson's term) offered by the environment.

It's the problem of design all over again. The brain's got billions of neurons, each of which has thousands of connections. All of these must be wired up just right. How's the brain to do this? That is to say, how do you cram all the necessary wiring information into the genome? You don't, because you don't have to. You just set up a process of random variation and selective retention, one that operates inside the organism.
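To make that bare logic concrete, here's a toy sketch in Python. It is not a model of real neurons – the connection counts, rates, and thresholds are all invented for illustration – but it shows the loop I have in mind: sprout connections at random, strengthen the ones experience exercises, and prune the rest.

```
import random

random.seed(0)

# Toy sketch only: the counts, rates, and thresholds are invented.
N_CONNECTIONS = 500                 # connections sprouted "like mad" at the start
strengths = [0.5] * N_CONNECTIONS

# Suppose the environment only ever exercises a minority of the connections
# (the affordances the organism actually encounters).
exercised = random.sample(range(N_CONNECTIONS), 100)

for _ in range(20_000):             # a stretch of experience
    if random.random() < 0.9:
        i = random.choice(exercised)            # mostly the exercised connections...
    else:
        i = random.randrange(N_CONNECTIONS)     # ...plus occasional stray activity
    strengths[i] = min(1.0, strengths[i] + 0.01)    # use strengthens
    j = random.randrange(N_CONNECTIONS)
    strengths[j] = max(0.0, strengths[j] - 0.01)    # disuse weakens

survivors = [i for i, s in enumerate(strengths) if s > 0.2]
print(f"sprouted {N_CONNECTIONS} connections, retained {len(survivors)}")
print(f"retained and exercised: {len(set(survivors) & set(exercised))} of {len(exercised)}")
```

No wiring diagram goes in up front; the pattern that survives is the one that use selected.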

One must keep in mind, of course, that those neurons that constitute the brain are each living entities and, as such, are trying to survive and thrive in their environment, which is filled with other neurons (not to mention the glial cells which surround them). The neurons don't "know" that they're "strapped" together in an organism such that they ALL survive or die according to the fate of that organism. And, there was a time when all life on earth consisted of single-celled organisms, each trying to survive, and variously cooperating and competing with their fellows.

And I've recently been making the same argument with respect to culture, especially in my recent post on design. What I haven't talked about, and what may prove a tricky little business, is where we get random variation among memes. Recall that I've defined memes as physical properties of cultural artifacts and processes, namely those properties that allow individuals to cooperate through those artifacts and processes. Certainly there will be variations in the way those memetic properties are expressed in individual performances. Indeed, there may be downright errors.

That's certainly the case in improvisation. You intend the line to go that way, but it doesn't. Maybe a finger slipped, causing a wrong note to come out, or one of the other musicians did something that changed the "valence" of your intended line. Whatever. Things got off track. So you've got to scramble to get them back on track. If you're successful, and your invention is particularly felicitous, you'll repeat it, and others will copy it, and before you know it, another meme is born.

But enough of this. I'm rambling. More later.

Thursday, June 10, 2010

Relationship Advice


Nina Paley is the creator of Mimi & Eunice and is unleashing them on the world under a copyleft license.

What’s the true color of this scene?


There’s no way of knowing, certainly not for you, as – I assume – you weren’t there that day. But there’s no way for me to know either, and not only because I don’t remember the scene independently of the photograph. It’s because the notion of “true color” doesn’t quite make sense.

I did a post on this at The Valve some time ago, so I won’t repeat that in full. But, in brief, wavelength is physically real, but perceived color is not. The color you see is not a direct function of the wavelengths that enter your eye from (through and/or reflected by) an object. If it were, the colors of objects would vary far more widely than they do. Perceptual psychologists call that stability color constancy (one of several perceptual constancies). Perceived color is relatively constant because the eye/brain makes context-sensitive adjustments. There isn’t any such thing as non-perceived color; there’s wavelength, but wavelength isn’t color.

There are, for example, these demonstrations where a patch that reflects some wavelength will appear as color X in one context and color Y in a different context (see examples here). Same wavelength, different colors.

What this means, then, for the photographer sitting in front of a computer intent on “developing” a digital image, is that “true color” is rather elusive. To make things worse, neither the monitor nor any photographic paper is capable of displaying the dynamic range that the eye can see. The eye can see light brighter than any reflected off paper or transmitted by a monitor, and no monitor or paper can be as black as a room sealed from all light.
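Here's one way to see the point in a little Python sketch. The gray-world correction below is one common white-balance heuristic, not what the visual system literally does, and the pixel values are made up for the example; still, it shows how the same raw numbers come out as different colors depending on the context the adjustment assumes.

```
# Illustrative only: the pixel values are invented, and the gray-world rule is
# a crude stand-in for the eye/brain's context-sensitive adjustments.

def gray_world_balance(pixels):
    """Scale each RGB channel so the scene averages out to neutral gray."""
    n = len(pixels)
    avg = [sum(p[c] for p in pixels) / n for c in range(3)]
    gray = sum(avg) / 3.0
    gains = [gray / a for a in avg]
    return [tuple(min(255, round(v * g)) for v, g in zip(p, gains)) for p in pixels]

patch = (180, 140, 100)                                   # the "same" patch
warm_scene = [patch, (200, 150, 90), (220, 170, 110), (210, 160, 100)]
cool_scene = [patch, (120, 150, 200), (110, 160, 210), (130, 170, 220)]

print(gray_world_balance(warm_scene)[0])   # patch rendered in a warm context
print(gray_world_balance(cool_scene)[0])   # same raw values, different color
```

Same numbers in, different colors out; the adjustment, not the raw data, decides what you see.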

So, when you’re developing an image, you have considerable discretion about just how to render the colors so they look real. These colors are obviously not real:


But what about these?


Or these?


Wednesday, June 9, 2010

Cultural Evolution 6: The Problem of Design

To this point I have been taking it as obvious that a theory of cultural evolution would be a good thing to have. The only thing at issue is just what that theory would be like. Now, let’s step back for a minute and ask: Do we really need a theory of cultural evolution? Historians have been doing fine without such a theory, so why go to the trouble of creating one just because we can?

That is to say, what’s a theory of cultural evolution supposed to do? Saying that, well, it’s supposed to explain how culture evolves is no answer, because it simply assumes that culture does evolve. That’s what I want to question, if only rhetorically.

In comparison, when Darwin elaborated his account of biological evolution, he was trying to solve a particular problem. Living things appeared to be designed. Unless we’re going to posit the existence of a Divine Designer, how can we account for that appearance?

So, specifically, what are we trying to account for with a theory of cultural evolution? As near as I can tell, the gene-culture coevolution folks are trying to account for the rate of cultural change in human history. It’s too fast for biological inheritance mechanisms, so the mechanism must be cultural. It’s not at all clear just what, specifically, Dawkins was up to. He wasn’t happy with other accounts of human culture, but he seems to have mostly been interested in memes as an example of another kind of replicator (a term he coined, I believe). He wasn’t trying to solve any particular problem about human culture.

And for the most part, much of my own thinking about cultural evolution has proceeded without any specific problem in view. But I’m no longer willing to proceed with this enterprise simply because it is intrinsically interesting. We need a fairly specific problem that needs solving. What problem could that be?

I note that I’ve worked rather hard to produce an account of human cultural evolution that meets two criteria:
  1. it’s consistent with what we know about human psychology and neuropsychology and with some body of thinking about at least one major aspect of human culture (music), and
  2. it has the same logical and causal form as biological evolution: selection on phenotype elements and variation among genotype elements.
Since, in biology, the purpose of that logical and causal form is to account for design, I propose that it play the same role in the study of culture: to account for the design of human culture.
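To be explicit about what I mean by that logical and causal form, here is a bare-bones sketch in Python. It is not a model of any actual biological or cultural process; the bit-string genotype, the identity phenotype mapping, and the fitness target are all invented for illustration. Variation happens blindly among genotype elements, while selection sees only the phenotypes they produce.

```
import random

random.seed(1)

# Bare-bones sketch of the logical form: variation among genotype elements,
# selection on phenotype elements. Everything specific here is a placeholder.

TARGET = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]    # stands in for a "well-designed" phenotype

def phenotype(genotype):
    # In biology this mapping is development; here it is simply the identity.
    return genotype

def fitness(genotype):
    # Selection acts on the phenotype, never on the genotype directly.
    return sum(p == t for p, t in zip(phenotype(genotype), TARGET))

def mutate(genotype, rate=0.05):
    # Variation arises blindly among genotype elements.
    return [1 - g if random.random() < rate else g for g in genotype]

population = [[random.randint(0, 1) for _ in TARGET] for _ in range(30)]

for generation in range(100):
    population.sort(key=fitness, reverse=True)           # fitter phenotypes...
    parents = population[:10]                             # ...leave more copies
    population = [mutate(random.choice(parents)) for _ in range(30)]

best = max(population, key=fitness)
print(fitness(best), "of", len(TARGET), "target features matched")
```

The point of the form, in biology and, as I'm proposing, in culture, is that it lets apparent design accumulate without anyone having to design the outcome in advance.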

The Paradox of Human Design

The most obvious objection to this proposal is one that John Lawler raised to the third post in this series and that others have raised as well: “. . . whatever else may be true about it, any real evolution in language (and this goes pretty much for any other cultural phenomena as well), is that evolution of language (and culture) does not follow Mendelian rules, but rather Lamarckian.” Everything about human culture is designed, by human beings, not by some Divine Designer. So the idea that we need an account of human cultural evolution to account for culture’s design, that would appear to be a non-starter.

Not so fast. Let me repeat the response I made to Lawler: