Tuesday, March 31, 2015

Time after Time: Music and Memory in the Group

Or, messing around in one another’s mind space for fun and funk
This is about a simple, but profound, matter: that we cannot always and reliably access the contents of our own minds. In some circumstances we need external prompts. For reasons that I'll have to explain in a later post, this absolutely sinks the superficial notions of "information transfer" from one mind to another that seem to be the norm in discussions of cultural evolution. Certain activities seem to be irreducibly GROUP activities. And THAT's why I'm bumping this post from 2011 to the top of the stack.
John Miller Chernoff has an interesting observation in his book, African Rhythm and African Sensibility. It’s about playing one’s part in a percussion choir. Each of three, four, five, etc. players has a specific, simple, repetitive pattern to play. These individually simple patterns interlock to create a rich sonic wall of rhythm.
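The interlocking Chernoff describes can be made concrete with a little sketch. This is my own toy illustration, not anything from the book: the pattern strings and the overlay rule are invented, but they show how cycles of different lengths, each trivially simple on its own, combine into a composite texture that no single part contains.

```python
# Toy illustration (not from Chernoff) of interlocking parts:
# each player repeats a short cycle ('x' = strike, '.' = rest),
# and the composite exists only when the parts sound together.

def composite(patterns, length):
    """Overlay several repeating patterns over `length` pulses;
    a pulse sounds in the composite if ANY part strikes on it."""
    lines = []
    for p in patterns:
        # repeat each short cycle out to the full length
        cycled = (p * (length // len(p) + 1))[:length]
        lines.append(cycled)
    combined = "".join(
        "x" if any(line[i] == "x" for line in lines) else "."
        for i in range(length)
    )
    return lines, combined

parts = ["x..x..", "..x.", "x.x"]   # three individually simple cycles
lines, combined = composite(parts, 12)
for line in lines:
    print(line)
print(combined)
```

Because the cycles have different lengths (6, 4, and 3 pulses), their alignment shifts from pulse to pulse, and the dense composite rhythm is a property of the group, not of any one line.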

Chernoff notes that even highly skilled percussionists often have a difficult time playing a specific part without also hearing the other accustomed parts. This suggests that what the percussionist has learned is not just a motor pattern, but an auditory-motor gestalt that includes the whole rhythmic soundscape, not just one specific part of it. In such a gestalt the motor and auditory components would be so intertwined that one cannot simply ‘extract’ the sound of one’s own phrase, along with the motor pattern, in order to execute it in isolation. Rather, proper execution requires the entire gestalt, and that includes the sounds executed by other players: the group sound, not the individual sounds.

It is thus not just that individual parts only sound correct in the context of other parts, but that the motor schema for executing one part is interwoven with the auditory schema for hearing the entire complex of parts. The drummer needs to hear the sound that the others are playing in order properly to activate his own motor schemas. The performance is thus inextricably a collective one.

In order to be as explicit as possible I want to suggest a very strained analogy. I am imagining a scene in a certain kind of action movie where it is time to launch the nuclear missiles. The process requires that two people insert keys into locks and turn them simultaneously. In the case of our African drummer, his auditory-motor schema for a rhythm is analogous to the launch of the nuclear missiles. The drummer has the key to one lock, but that isn’t sufficient. Think of the auditory gestalt of the entire pattern as being the other key. Both keys are necessary. The drummer cannot access motor patterns in his own brain and body without help from others. Again: The drummer cannot access motor patterns in his own brain and body without help from others. Both keys must be inserted into their respective locks and then turned in order for the drummer to play his component of the rhythm. Without the full sound the drummer can certainly play something, but it will not be exactly the appropriate part.
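The two-key logic can be sketched in a few lines. This is purely my own illustration of the analogy, with invented names, not a model of anything in Chernoff: the point is only that the motor program fires correctly when both "keys" are present.

```python
# Sketch of the two-key analogy (my own illustration): the drummer's
# own schema is one key, the heard group sound is the other; the
# correct part is produced only when both keys turn together.

def play_part(own_schema, group_sound):
    """Return what gets played given which 'keys' are present."""
    if own_schema and group_sound:
        return "the appropriate part"
    if own_schema:
        return "something, but not exactly the part"
    return "silence"

print(play_part(True, True))
print(play_part(True, False))
```

The asymmetry between the two non-silent outcomes is the whole point: the drummer alone is not helpless, but he is also not playing his part.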

What is so very peculiar about my argument is that it contravenes a deep and unquestioned assumption of almost all of our thinking about the human mind. That assumption is that we are masters of our own mind and body. To be sure, there is, for example, the Freudian unconscious, which I do not here deny. But that does not seem germane in this situation. Our drummer’s inability to drum alone is not a matter of neurotic repression. It is quite different.

A strange image of sweet delicacy

IMGP6837rd-Ysat-HC

This could almost be a shaky-cam image, but it's not. I was not moving the camera as I took the shot, but most of the objects in the shot are out of focus. Only bits of grass are in focus.

Monday, March 30, 2015

Red triangle on water

20150329-_IGP2806

Memetics is Dead but What’s the Study of Cultural Evolution Otherwise About?

In the waning years of the previous century an online journal for serious work in cultural evolution was established: Journal of Memetics - Evolutionary Models of Information Transmission. The first issue came out in 1997 and the last in 2005. The journal closed for lack of interest; it wasn’t getting enough high-quality submissions.

In the last issue one of the editors, Bruce Edmonds, published a short swansong, The revealed poverty of the gene-meme analogy – why memetics per se has failed to produce substantive results. Those remarks remain valid today, a decade later. Edmonds made a careful distinction
between what might be called the “broad” and the “narrow” approaches to memetics. The former, broad, approach involves modelling communication or other social phenomena using approaches which are evolutionary in structure. Work within this approach is often done without appealing to “memes” or "memetics" since it can be easily accommodated within other frameworks. In other words, it does not require an analogy with genetics. The latter, narrow, sense involves a closer analogy between genes and memes – not necessarily 100% direct, but sufficiently direct so as to justify the epithet of “memetics”. What has failed is the narrow approach – that of memetics. Work continues within the broad approach, albeit under other names, and in other journals.
Work on culture that is broadly evolutionary in nature continues today and, with the proliferation of “big data” approaches to research in the social sciences and humanities, will likely grow in the future. This work requires that we count and classify things but, as Edmonds has said, it doesn’t require that we conceptualize them as cultural genes. It can in fact be empirical in nature with no particular theoretical commitment to specific causal models.

Edmonds goes on to point out that much of memetics is mere redescription: “The ability to think of some phenomena in a particular way (or describe it using a certain framework), does not mean that the phenomena has those properties in any significant sense.” He further notes that
The study of memetics has been characterised by theoretical discussion of extreme abstraction and over ambition. Thus for example, before any evidence is available or detailed causal models constructed, attempts have been made to “explain” some immensely complex phenomena such as religion in general or consciousness.
Frankly, memetics, both in its pop versions and more serious academic versions, has the feel of an intellectual get-rich-quick scheme. Just get the definition right, stick to it, and untold intellectual riches will be ours.

I note, however, that the sense of breathless elation and wonderful revelation seemed refreshingly absent from the recent workshop that Daniel Dennett hosted at the Santa Fe Institute. I mention this because Dennett, who is a serious academic, has in the past been an enthusiastic booster of memetics, as has Susan Blackmore, who was also at the workshop. Dennett, after all, is one of those who saw memetics as an explanation for religion.

So What? Getting from There to Here

But why go over this territory once again? Mostly because I’m trying to figure out what, if anything, to do next. That in turn requires getting a sense of where we are now.

Friday, March 27, 2015

Friday Fotos: Flickr Picks

These are the photos that have been getting attention in the last day over at my Flickr site.

flexi-van.jpg

IMGP0113rd.jpg

IMGP8839rd

IMGP4672rd

Thursday, March 26, 2015

Q: Why is the Dawkins Meme Idea so Popular?

I've been reading on cultural evolution, in particular, about Sperber's cultural attractors (which I've criticized in a comment at Replicated Typo) and getting disgusted with the whole business of accounting for cultural evolution at the micro-scale. Should I continue to criticize these folks or simply give up? One thing is clear to me: you can't get there from here. (See this decade-old article by Bruce Edmonds in the now-defunct Journal of Memetics.) These people don't use serious examples and don't have any significant interest in mental mechanisms. It's all leveled to information or representation. So, while I'm treading water on this one, I thought I'd repost this piece from a couple of years ago.
Q: Why is the Dawkins Meme Idea so Popular? A: Because it is daft.

I believe there are two answers to that question. For most people memetics is simply convenient. That requires one explanation, which I’ll run through first.

For some people, however, memetics is more than convenient. Some, including Dawkins himself and his philosophical acolyte, Dan Dennett, use it as a way of explaining religion. In that role the meme idea is attractive because it is, or has evolved into, an egregiously bad idea, one almost as irrational as the religious ideas whose popularity it is supposed to explain away. By analogy to an argument Dawkins himself has made about religion, that makes memetics the perfect vehicle for the affirmation of materialist faith.

But I don’t want to go there yet. Let’s work into it.

Ordinary Memetics

When Dawkins first proposed the idea in The Selfish Gene (1976), it wasn’t a bad idea—nor even a new one. Ted Cloak, among others, got there first, but not with the catchy name. Having worked hard to conceptualize the gene as a replicator, Dawkins was looking for another set of examples, and coined the term “meme” as a replicator for culture. The word, and the idea, caught on and soon talk of memes was flying all over the place.

I suspect that the spread of computer technology is partially responsible for the cultural climate in which the meme idea found a home. Computers ‘level’ everything into bits: words, pictures, videos, numbers, computer programs of all kinds, simulations of explosions, traffic flow, moon landings, everything becomes bits: bits, bits, and more bits. The meme concept simply ‘levels’ all of culture—songs, recipes, costumes, paintings, hazing rituals, etc.—into the uniform substance of memes.

What is culture? Memes.

Simple and useful. As long as you don’t try to push it very far.

Wednesday, March 25, 2015

You say "tomato"...

IMGP7394rd

Beyond the Nation State

Writing in the New York Review of Books, Jessica T. Matthews reviews current books by Henry Kissinger (World Order) and Bret Stephens (America in Retreat: The New Isolationism and the Coming Social Disorder). The Stephens is a doubtful polemic while the Kissinger is more reflective. Both are inscribed within the international regime established by the Treaty of Westphalia that ended the Thirty Years War in 1648 and established the state as the sole broker of international relations. The final two paragraphs:
While the Westphalian system has shaped relations among states for three and a half centuries, and continues to do so, its reign has profoundly changed during just the past two to three decades. Borders, to put it simply, are not what they used to be. In 1648, nearly everything that mattered could be located within a fixed boundary—not so today. The trillions of dollars sloshing around in cyberspace, pollution, globalizing culture, international criminal networks, and the stressed global commons of oceans, air, and biodiversity are all changing the world profoundly. So are tightly knit but nongeographic communities of national diasporas, ethnic groups and violent jihadists, corporations largely unmoored from any one country, and the gigantic global financial market—now almost twice as large as the global GDP.

These limits on global resources, porous borders, a globalizing culture that both fragments and amalgamates, and growing requirements for states to work together for mutual well-being if not survival all mean that today’s world order, and certainly tomorrow’s, cannot be seen only as a matter of the distribution of state power or as a system in which only states matter. The Westphalian order is not going away, but it is no longer what it once was. It’s too soon to see what that system and the new forces will produce as they co-exist; but it’s safe to say it won’t look anything like the familiar past.
H/t 3QD.

Tuesday, March 24, 2015

Terrified: How Anti-Muslim Fringe Organizations Became Mainstream

This is new from Princeton University Press. I read it as a case study in cultural evolution:
Here's a paragraph from the opening chapter (which you can download at the link above):
The principal contribution of this book is a new theory that explains how cultural, social psychological, and structural processes combine to shape the evolution of shared understandings of social problems in the wake of crises such as the September 11th attacks. Such events provide fringe organizations with the opportunity to exploit the emotional bias of the media. Media amplification of emotional fringe organizations creates the misperception that such groups have substantial support and therefore deserve reprimand. Yet when mainstream organizations angrily denounce the fringe they only further increase the profile of these peripheral actors within the public sphere. This unintended consequence creates tension and splintering within the mainstream—but also gives fringe organizations the visibility necessary to routinize their shared emotions into networks with more powerful organizations that help them raise funds that consolidate their capacity to create cultural change. From this privileged position, these once obscure organizations can attack the legitimacy of the mainstream precisely as it begins to tear itself apart. With time, these countervailing forces reshape the cultural environment—or the total population of groups competing to shape public discourse about social problems—and fringe organizations “drift” into the mainstream.

The evolution of cultural environments is particularly powerful because it is largely invisible. None of the civil society organizations that inhabit the cultural environment can view it in its entirety because of its sheer size, complexity, and ever-shifting boundaries. Instead, they rely upon powerful institutions such as the media to communicate the contours of the cultural environment back to them. The inevitable distortion that occurs throughout this process sets in motion a chain of irreversible events in which mainstream organizations inadvertently transform their environment through their very attempts to prevent it from changing. Meanwhile, such distortion enables fringe organizations to disguise themselves as part of the mainstream until such deception becomes real—or until the irreversible cultural, structural, and social psychological processes just described transform the contours of the cultural environment outside the media as well. These broad shifts continue to shape the evolution of media discourse in turn, but also the ways in which policy makers and the broader public understand social problems—as later chapters of this book describe.

Kids These Days: Media Use and Parental Fear

My colleague Charlie Keil is worried that kids these days spend too much time with media of one sort or another (as detailed, e.g., in this report) – TV, computer, video games, whatever – and not enough time interacting directly with one another (in particular, not enough time engaging in music and dance). Meanwhile danah boyd has been researching teen media use and has discovered that one reason they spend so much time online is that they can’t easily get together physically. Their lives are tightly scheduled and meeting places are few and far between.

So, is children’s media use the result of adult micro-management? That is, kids aren’t over-using media because it’s so seductive, but because their parents won’t let them play outdoors together.

Meanwhile, there’s a growing movement in favor of so-called “free-range childhood”. As far as I can tell that means growing up like I did. As long as I was home for dinner, for bed, practiced my trumpet, and got my homework done, I could roam the neighborhood as I wished. And I could take public transportation wherever I needed to go. Of course, this was calibrated to my age. I had more freedom at ten than at five, and more at fifteen than at ten. Still, within fairly generous limits, I could wander at will.

Over the past several years I’ve been reading that this kind of childhood is disappearing in favor of one where kids are taken everywhere by their parents and are slotted into all kinds of activities where they are supervised by adults, having less time for free play among themselves.

I have no sense of how prevalent such restrictions are. Over at Free Range Kids I found this: “Today, only 13 percent of U.S. children walk to school. One study found that only 6 percent of kids age 9-13 play outside in a given week.” I haven’t tried to track down that first number, but following the link for the second didn’t get me to the source document. If true, it’s shocking.

Once in a while I see a question like this on Facebook, "At what age is it safe to let your children play outside alone?" Without fail, many parents will answer, "After 13 years," and "After 15 years," and most alarmingly, "Never." You always see a few parents who disagree, but not many. The fact that the majority of parents on Facebook think that kids require adult supervision at all times matches up with national statistics. Surveys collected by Christie Barnes, author of The Paranoid Parents Guide, found that the biggest worry among parents is kidnapping. Another study by pediatricians at the Mayo Clinic showed that nearly 3/4 of parents said they are afraid that their children may be abducted. In fact, parents in the Mayo Clinic study were more worried about kidnapping than car accidents, sports injuries, or drug addiction. Many other surveys show that as many as half of American parents worry about kidnapping often, which in turn prevents these moms and dads from letting their kids go outside to play.
And that takes us back to danah boyd, who is concerned that exaggerated fears of online sexual predation distort our sense of real dangers.

So, we keep kids indoors because we fear what will happen to them outdoors and, once we’ve driven them to media, we worry about what will happen to them there.

Meanwhile another bunch of folks are concerned about not getting enough contact with nature:
Although human beings have been urbanizing, and then moving indoors, since the invention of agriculture, social and technological changes in the past three decades have accelerated that change. Among the reasons: the proliferation of electronic communications; poor urban planning and disappearing open space; increased street traffic; diminished importance of the natural world in public and private education; and parental fear magnified by news and entertainment media. An expanding body of scientific evidence suggests that nature-deficit disorder contributes to a diminished use of the senses, attention difficulties, conditions of obesity and overweight, and higher rates of emotional and physical illnesses. Research also suggests that the nature deficit weakens ecological literacy and stewardship of the natural world. These problems are linked more broadly to what health care experts call the “epidemic of inactivity,” as well as to a devaluing of independent play. Nonetheless, we believe that society’s nature-deficit disorder can be reversed.
What’s going on?

Common/mutual knowledge in the laboratory

I talked about mutual knowledge in my open letter to Steve Pinker (blog version HERE, PDF version with additional materials HERE). In the story about the emperor's new clothes, once the emperor stepped outside in his new finery, everyone could see that he was, in fact, naked. The finery was a scam. But it was only when the little boy blurted out "he's naked" that everyone knew that every other person also saw the emperor as naked. That's mutual knowledge.

Now Pinker, along with Harvard graduate student Kyle Thomas and colleagues at two other schools, Peter DeScioli at Stony Brook University and Omar Haque at Harvard Medical School, has investigated mutual knowledge in the laboratory and reported the results in the Journal of Personality and Social Psychology. Medical Express reports (note: common knowledge = mutual knowledge):
While the notion of common knowledge has existed for decades and has been applied to fields as varied as philosophy and computer science, studies that focused on the actual psychology of common knowledge have been few and far between, Thomas said. 
The chief reason, he said, is that "paying costs to benefit others poses obvious evolutionary puzzles that are not apparent when both people benefit. Because they do not present any evolutionary puzzles, the coordination problems of common knowledge are not nearly as obvious to researchers. The question is, how do we anticipate what our social partners will do, when what they do depends on what they expect us to do? This is a profound social cognition problem. How does one read the mind of a mind reader?"
They had subjects play an economic game in which some played the role of a baker and others the role of a butcher. Would they cooperate by producing complementary products (e.g., hot dogs and buns) or would each go his or her own way? It depended on what they knew about one another's activities AND knowledge:
"What we found was that, for private knowledge, even if we varied the payouts, or the number of people involved, only about 15 percent of people cooperated," Thomas said. "With shared knowledge, we saw about 50 percent, and with common knowledge, it was 85 percent. It was just a whopping effect. That indicated to us that we are very sensitive to this previously unappreciated mental state. Our minds evolved to understand this important kind of social structure, and how different kinds of knowledge can impact it."
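The logic of the game can be sketched as a toy model. To be clear, this is my own illustration, not the authors' experimental design: the payoff numbers and the belief probabilities I assign to each knowledge condition are invented. The idea is just that cooperating pays off only if your partner cooperates too, so the decision turns on how confident you can be about what the partner will do.

```python
# Toy coordination model (my own sketch, not the study's design):
# cooperating yields JOINT only if the partner also cooperates,
# otherwise BUST; going it alone always yields SAFE.

SAFE = 1.0    # payoff for producing a solo product
JOINT = 3.0   # payoff if both produce complementary products
BUST = 0.0    # payoff for cooperating while the partner defects

def cooperates(p_partner):
    """Cooperate iff expected joint payoff beats the safe payoff."""
    expected = p_partner * JOINT + (1 - p_partner) * BUST
    return expected > SAFE

# Illustrative beliefs (assumptions, not measured values): with merely
# private knowledge I can't be sure you even know the opportunity
# exists; with common knowledge I know that you know that I know.
beliefs = {"private": 0.15, "shared": 0.5, "common": 0.85}
decisions = {k: cooperates(p) for k, p in beliefs.items()}
print(decisions)
```

With these payoffs the cooperation threshold is a belief above 1/3, so the toy model flips from defection to cooperation as knowledge becomes more public. The experiment's graded rates (15%, 50%, 85%) suggest real subjects reason in a richer, more probabilistic way than this all-or-nothing threshold.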

Monday, March 23, 2015

Dogs as actors

From a NYTimes article about White God, a prize-winning Hungarian film with a cast of 200+ dogs who have to act in sophisticated ways:
It took more than five months to prepare the dogs: training them and dismissing dominant ones from the pack.

Dozens of trainers took part, forming small packs to allow the animals to acclimate to one another, then increasing the size of the packs as the dogs grew friendlier with one another.

“It’s unheard-of in the typical modern-day film industry to take that much time to put this together instead of quickly just using special effects,” Ms. Miller said.

“A lot of the introverted or depressed dogs involved really found a sense of purpose,” she added. “You started to see their confidence built in having this routine. They had something to look forward to.” All the dogs were adopted after shooting was completed, according to the filmmakers, and Ms. Miller said the shift in their demeanor probably helped.

They like cabbage at my Flickr site

I'll say this much: It sure is green. LOL!


IMGP8374rd

 

And wrinkled, wrinkled most wondrously.

Sunday, March 22, 2015

Free range kids in New Zealand, and lessons from the Māori

Daniel Davies, blogger at Crooked Timber, is going around the world with his family for a year (he made a bit of money in investment banking). They've just spent a couple of months in New Zealand.
Part of the whole purpose of bringing the children round the world – god knows, it wasn’t the sheer joy of home-schooling – was to let them see that different ways of doing things are possible, and the way that children live in New Zealand really contrasted with how things were in London.

The kids have much more independence, and a much more outdoor lifestyle. When there isn’t so much traffic on the roads and there’s more empty space, they can play in it. My ten-year-old nephew was able to go out alone into the bush to look after his father’s possum traps (it’s considered civically responsible to take care of a few traps in the local bush, because possums and rats eat kiwi eggs). After school, kids would arrange to meet up, parents absent, and “jump off the wharf”. (Jumping off things is actually the national sport, as far as I can tell – it doesn’t get as much TV coverage as rugby, but has many more participants. In the course of a fifteen-minute lift I gave to a hitch-hiker who had missed his bus back home from one of the higher local wharves, I was given a pretty comprehensive run down of all the tall objects with bodies of water beneath them in the surrounding district. There’s a sign on the bridge over the Whakatane River saying “Do not jump off this bridge”, but I don’t know why they bothered).

On enrolling our five-year old at her infants’ school we were pretty much immediately handed a piece of paper with details of the two-day camp that the tots would be going on. It’s a totally different world.
He also has some observations about the Māori:
How does a basically tribal society adapt to a modern industrial lifestyle? That, in my view, is a really important question for the world at the moment, as it’s the key to the Afghanistan conflict, among other things. The Pashtun tribes who make life so difficult for any and all occupiers of that territory are not stupid, and they are aware of what happened to the Khoi-San, the Native Americans, the Aborigines, and more or less any tribal society that has ever adopted any position other than one of strictly “no compromise, no retreat, nothing except trade in firearms and textiles” with respect to the modern world. Economic development in that region is more or less impossible if it has to be attempted in the context of a society which is basically the largest surviving tribal society in existence and wants to stay that way. For that reason, if no other, I think it’s a good idea for the rest of us to look at New Zealand and see if there’s anything to learn.

Saturday, March 21, 2015

The economic value of slaves in antebellum America: 2nd most valuable capital asset

A startling statistic emerged in the 1970s, when economists taking a hardheaded look at slavery found that on the eve of the Civil War, enslaved black people, in the aggregate, formed the second most valuable capital asset in the United States. David Brion Davis sums up their findings: “In 1860, the value of Southern slaves was about three times the amount invested in manufacturing or railroads nationwide.” The only asset more valuable than the black people was the land itself. The formula Jefferson had stumbled upon became the engine not only of Monticello but of the entire slaveholding South and the Northern industries, shippers, banks, insurers and investors who weighed risk against returns and bet on slavery. The words Jefferson used—“their increase”—became magic words.

Why is there so little opposition to the hegemony of the super-rich?

What we have here is a failure of political memory and imagination.

In 2014, when Oxfam arrived in Davos, it came bearing the (then) shocking news that just 85 individuals controlled as much wealth as half of the world’s population combined. This January, that number went down to 80 individuals.
Fraser terms our current era the second Gilded Age. The first ran from the end of the Civil War through to the stock market crash of 1929. In that first Gilded Age:
American elites were threatened with more than embarrassing statistics. Rather, a “broad and multifaceted resistance” fought for and won substantially higher wages, better workplace conditions, progressive taxation and, ultimately, the modern welfare state (even as they dreamed of much more).
So far there is little popular resistance in the current Gilded Age. What's missing?
Fraser offers several explanations for the boldness of the post-Civil War wave of labor resistance, including, interestingly, the intellectual legacy of the abolition movement. The fight against slavery had loosened the tongues of capitalism’s critics, forging a radical critique of the market’s capacity for barbarism. With bonded labor now illegal, the target pivoted to factory “wage slavery.” This comparison sounds strange to contemporary ears, but as Fraser reminds us, for European peasants and artisans, as well as American homesteaders, the idea of selling one’s labor for money was profoundly alien.

This is key to Fraser’s thesis. What ­fueled the resistance to the first Gilded Age, he argues, was the fact that many Americans had a recent memory of a different kind of economic system, whether in America or back in Europe. Many at the forefront of the resistance were actively fighting to protect a way of life, whether it was the family farm that was being lost to predatory creditors or small-scale artisanal businesses being wiped out by industrial capitalism. Having known something different from their grim present, they were capable of imagining — and fighting for — a radically better future.

It is this imaginative capacity that is missing from our second Gilded Age, a theme to which Fraser returns again and again in the latter half of the book. The latest inequality chasm has opened up at a time when there is no popular memory — in the United States, at least — of another kind of economic system. Whereas the activists and agitators of the first Gilded Age straddled two worlds, we find ourselves fully within capitalism’s matrix. So while we can demand slight improvements to our current conditions, we have a great deal of trouble believing in something else entirely.

Friday, March 20, 2015

Where are the polymaths of days gone by? Who's going to make sense of it all?

Jonathan Haber dove whole-hog into online courses put up by major universities and worked his way through a bachelor's worth of courses in a year, documenting his work online (@ degreeoffreedom.org). As J. Peder Zane says in the Good Old Gray Lady (aka the NYTimes):
Mr. Haber’s project embodies a modern miracle: the ease with which anyone can learn almost anything. Our ancient ancestors built the towering Library of Alexandria to gather all of the world’s knowledge, but today, smartphones turn every palm into a knowledge palace.

And yet, even as the highbrow holy grail — the acquisition of complete knowledge — seems tantalizingly close, almost nobody speaks about the rebirth of the Renaissance man or woman. The genius label may be applied with reckless abandon, even to chefs, basketball players and hair stylists, but the true polymaths such as Leonardo da Vinci and Benjamin Franklin seem like mythic figures of a bygone age.
Well, of course not. The Renaissance is, you know, so quattrocento.

Friday Fotos: Some more or less circles

IMGP2764

IMGP0201rd

IMGP9713rd

IMGP2310rd

IMGP6184

Are we ruining our children by micromanaging their lives?

Clemens Wergin and his family had just moved from Germany to America, where he'd taken a new job. On the family's first day here his 8-year-old daughter slipped out to explore the neighborhood. Writing in The New York Times, he tells us that, when she'd returned, "Beaming with pride, she told us and her older sister how she had discovered the little park around the corner, and had made friends with a few local dog owners." It seems, though, that their new American friends "are horrified by the idea that their children might roam around without adult supervision." He goes on to point out that:
In Berlin, where we lived in the center of town, our girls would ride the Metro on their own — a no-no in Washington. Or they’d go alone to the playground, or walk a mile to a piano lesson. Here in quiet and traffic-safe suburban Washington, they don’t even find other kids on the street to play with. On Halloween, when everybody was out to trick or treat, we were surprised by how many children actually lived here whom we had never seen.

A study by the University of California, Los Angeles, has found that American kids spend 90 percent of their leisure time at home, often in front of the TV or playing video games. Even when kids are physically active, they are watched closely by adults, either in school, at home, at afternoon activities or in the car, shuttling them from place to place.

Such narrowing of the child’s world has happened across the developed world. But Germany is generally much more accepting of letting children take some risks. To this German parent, it seems that America’s middle class has taken overprotective parenting to a new level, with the government acting as a super nanny.

Just take the case of 10-year-old Rafi and 6-year-old Dvora Meitiv, siblings in Silver Spring, Md., who were picked up in December by the police because their parents had dared to allow them to walk home from the park alone. For trying to make them more independent, their parents were found guilty by the state’s Child Protective Services of “unsubstantiated child neglect.” What had been the norm a generation ago, that kids would enjoy a measure of autonomy after school, is now seen as almost a crime.
Danah boyd discusses the same phenomenon in It’s Complicated: The Social Lives of Networked Teens (Yale UP 2014), where it's the flip side of intense use of social media by teens.

Is that where the modern world is headed, to lives controlled by authority where action is limited to choosing which media channel to consume? Are we preparing to turn ourselves over to our computer overlords?

* * * * *

Addendum: Here's a website advocating for Free-Range Kids.

Thursday, March 19, 2015

The English Language as Agent of Oppression in Contemporary India

“English is not a language in India,” a friend once told me. “It is a class.” This friend, an aspiring Bollywood actor, knew firsthand what it meant to be from the wrong class. Absurd as it must sound, he was frequently denied work in the Hindi film industry for not knowing English. “They want you to walk in the door speaking English. Then if you switch to Hindi, they like it. Otherwise they say, ‘the look doesn’t fit.’ ” My friend, who comes from a small town in the Hindi-speaking north, knew very well why his look didn’t fit. He knew, too, from the example of dozens of upper-middle class, English-speaking actors, that the industry would rather teach someone with no Hindi the language from scratch than hire someone like him.

India has had languages of the elite in the past — Sanskrit was one, Persian another. They were needed to unite an entity more linguistically diverse than Europe. But there was perhaps never one that bore such an uneasy relationship to the languages operating beneath it, a relationship the Sanskrit scholar Sheldon Pollock has described as “a scorched-earth policy,” as English.

Tuesday, March 17, 2015

"Intelligence" is bullsh•t, and AI is chasing a mirage

In MIT's Tech Review, a proposal for a revised Turing Test:
Riedl agrees that the test should be broad: “Humans have broad capabilities. Conversation is just one aspect of human intelligence. Creativity is another. Problem solving and knowledge are others.”

With this in mind, Riedl has designed one alternative to the Turing test, which he has dubbed the Lovelace 2.0 test (a reference to Ada Lovelace, a 19th-century English mathematician who programmed a seminal calculating machine). Riedl’s test would focus on creative intelligence, with a human judge challenging a computer to create something: a story, poem, or drawing. The judge would also issue specific criteria. “For example, the judge may ask for a drawing of a poodle climbing the Empire State Building,” he says. “If the AI succeeds, we do not know if it is because the challenge was too easy or not. Therefore, the judge can iteratively issue more challenges with more difficult criteria until the computer system finally fails. The number of rounds passed produces a score.”

Riedl’s test might not be the ideal successor to the Turing test. But it seems better than setting any single goal. “I think it is ultimately futile to place a definitive boundary at which something is deemed intelligent or not,” Riedl says. “Who is to say being above a certain score is intelligent or being below is unintelligent? Would we ever ask such a question of humans?”
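The iterative protocol in the quoted passage (issue a challenge, judge the result, repeat with harder criteria until the system fails, score equals rounds passed) can be sketched in a few lines of Python. This is only an illustration of the scoring loop as described; `generate`, `passes`, and the toy challenge list are stand-ins I've made up, not part of Riedl's actual test.

```python
# Hypothetical sketch of the Lovelace 2.0 scoring loop described above.
# `generate` stands in for the AI under test; `passes` stands in for the
# human judge's verdict on the created artifact.

def lovelace_2_score(generate, passes, challenges):
    """Issue increasingly difficult creative challenges until the
    system fails; the score is the number of rounds passed."""
    score = 0
    for challenge in challenges:           # ordered easiest to hardest
        artifact = generate(challenge)     # e.g. a story, poem, or drawing
        if not passes(artifact, challenge):
            break                          # first failure ends the test
        score += 1
    return score

# Toy run: a "system" whose judge only accepts short challenge criteria,
# so it passes the first two rounds and fails the third.
challenges = [
    "a poodle",
    "a poodle climbing",
    "a poodle climbing the Empire State Building",
]
generate = lambda c: "drawing of " + c
passes = lambda artifact, challenge: len(challenge) < 20
print(lovelace_2_score(generate, passes, challenges))  # prints 2
```

The point of the loop, as Riedl notes, is that it yields a graded score rather than a single pass/fail boundary for "intelligence."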
Eh. From Benzon and Hays, The Evolution of Cognition (1990; emphasis mine):
A game of chess between a computer program and a human master is just as profoundly silly as a race between a horse-drawn stagecoach and a train. But the silliness is hard to see at the time. At the time it seems necessary to establish a purpose for humankind by asserting that we have capacities that it does not. It is truly difficult to give up the notion that one has to add "because . . ." to the assertion "I'm important." But the evolution of technology will eventually invalidate any claim that follows "because." Sooner or later we will create a technology capable of doing what, heretofore, only we could.

Perhaps adults who, as children, grow up with computers might not find these issues so troublesome. Sherry Turkle (1984) reports conversations of young children who routinely play with toys which "speak" to them—toys which teach spelling, dolls with a repertoire of phrases. The speaking is implemented by relatively simple computers. For these children the question about the difference between living things and inanimate things—the first ontological distinction which children learn (Keil 1979)—includes whether or not they can "talk," or "think," which these computer toys can do.

These criteria do not show up in earlier studies of children's thought (cf. Piaget 1929). For children who have not had exposure to such toys it is perfectly sensible to make the capacity for autonomous motion the crucial criterion. To be sure it will lead to a misjudgment about cars and trains, but the point is that there is no reason for the child to make thinking and talking a criterion. The only creatures who think and talk are people, and people also move. Thus thinking and talking won't add anything to the basic distinction between living and inanimate things which isn't covered by movement.

Sunday, March 15, 2015

Another R G B Study

20150207-_IGP2473 EqSmth HSR

20150207-_IGP2473 EqSmth HSG

20150207-_IGP2473 EqSmth

What is philology?

Eric Ormsby reviews James Turner, Philology: The Forgotten Origins of the Modern Humanities, in The New Criterion. Well into the review we have this paragraph:
In several of his most intriguing asides, Turner discusses Thomas Jefferson’s abiding fascination with American Indian languages. On one occasion, in June of 1791, Jefferson and James Madison “squatted in a tiny Unquachog village on Long Island” to compile a wordlist of the now-extinct Quiripi language from the three old women who still spoke it. Jefferson sought to discover the origins of American Indians; he wondered whether they were ultimately of Asian origin or, less plausibly, whether they had originated in Wales, his own ancestral homeland. In any case, the image of two future American presidents hunkering down in a freezing wigwam on Long Island, driven purely by intellectual curiosity, seems to come from some alternative universe now forever lost to us.
That, of course, doesn't tell you much about that mysterious discipline, philology. Somewhat later he gives us this:
While Turner excels at pithy profiles, he is also quite superb at illuminating certain recurrent debates in the history of philology, such as the “Transcendentalist Controversy” in 1830s Massachusetts in which Ralph Waldo Emerson disputed Andrews Norton, his former teacher of divinity. The disagreement was over the age-old question of whether language developed by convention, Norton’s view, or was inherent in the things it denoted, as Emerson argued. It isn’t really so surprising that such a dispute would crop up in nineteenth-century New England: the very nature of discourse, let alone consensus on the interpretation of Scripture, depended upon its resolution. The most compelling such clash, which Turner describes at length, occurred between Friedrich Max Müller, the German-born scholar of Sanskrit who became Oxford’s first professor of comparative philology, and William Dwight Whitney, the first Yale professor of Sanskrit—both eminent authorities even if Müller had become a kind of academic superstar through his public lectures. (But Whitney’s Sanskrit grammar of 1879 is not only still in use, but still in print.) The dispute, yet again, centered on the origins of language. Müller had made fun of Darwin’s idea that language developed when humans first began imitating animal cries, calling it “the bow-wow theory.” Müller believed instead that language exhibited “natural significancy” and that its origins could be uncovered by a search of Sanskrit roots. Whitney argued for its basis in convention: “the fact that an American says ‘chicken’ and a Frenchwoman ‘poulet’ to refer to the same fowl is purely arbitrary: ‘gibbelblatt’ and ‘cronk’ would work just as well.” Though this was a trans-Atlantic debate, it might as well have been between Plato and the Sophists; such questions were perennial just because they were unanswerable.
I suppose one might conjecture that echoes of that one remain with us today in the dispute between language nativists (that is, language is an instinct) and those who believe language is a purely cultural matter. But the opening paragraphs perhaps give a more useful gloss on this thing called "philology":

Saturday, March 14, 2015

Not abstract ink wash on rice paper

20150305-_IGP2790 Vrt losat

Wooly mammoths were around during the reign of the pharaohs

We usually think of woolly mammoths as purely Ice Age creatures. But while most did indeed die out 10,000 years ago, one tiny population endured on isolated Wrangel Island until 1650 BCE. So why did they finally go extinct?

Wrangel Island is an uninhabited scrap of land off the northern coast of far eastern Siberia. It's 37 miles from the nearest island and 87 miles from the Russian mainland. It's 2,900 square miles, making it roughly the size of Delaware. And until about 4,000 years ago, it supported the world's last mammoth population. For 6,000 years, a steady population of 500 to 1,000 mammoths endured while their counterparts on the mainland disappeared.

It's truly remarkable just how recent 1650 BCE really is. By then, the Egyptian pharaohs were about halfway through their 3000-year reign, and the Great Pyramids of Giza were already 1000 years old. Sumer, the first great civilization of Mesopotamia, had been conquered some 500 years before. The Indus Valley Civilization was similarly five centuries past its peak, and Stonehenge was anywhere from 400 to 1500 years old. And through all that, with all of humanity in total ignorance of their existence, the mammoths lived on off the coast of Siberia.

Friday, March 13, 2015

Friday Fotos: Irises

IMGP0057rd
George Mantor had an iris garden, which he improved each year by throwing out the commoner varieties. One day his attention was called to another very fine iris garden. Jealously he made some inquiries. The garden, it turned out, belonged to the man who collected his garbage.
John Cage, Silence, p. 263.
IMGP9361rd

IMGP9162rd

IMGP9288rd

An Open Letter to Steven Pinker: The Importance of Stories and the Nature of Literary Criticism

Another working paper is now available. URLs:
Abstract: People in oral cultures tell stories as a source of mutual knowledge in the game theory sense (think: "The Emperor Has No Clothes") on matters they cannot talk about either because they resist explicit expository formulation or because they are embarrassing and anxiety provoking. The communal story is thus a source of shared value and mutual affirmation. And the academic profession of literary criticism came to see itself as a repository of that shared value. Accordingly, in the middle of the 20th century it turned toward interpretation as its central activity. But critics could not agree on interpretations and that precipitated a crisis that led to Theory. The crisis has quieted down, but is not resolved.
Introduction: Two Cultures in Conversation 1
Seven Sacred Words: An Open Letter to Steven Pinker 2
Pinker’s Reply 11
Bodies of Literary Knowledge 12
To a Fellow Critic on Cognitivism and Literary Form 16
The Key to the Treasure IS the Treasure, Version 2 19

Consciousness is globally distributed in the brain

Medical Express reports some work on consciousness at Vanderbilt. You'll need to read the whole article to understand what's going on, but it's not long. Here are the key paragraphs:
Unlike the focal results seen using more conventional analysis methods, the results via this network approach pointed toward a different conclusion. No one area or network of areas of the brain stood out as particularly more connected during awareness of the target; the whole brain appeared to become functionally more connected following reports of awareness.

"We know there are numerous brain networks that control distinct cognitive functions such as attention, language and control, with each node of a network densely interconnected with other nodes of the same network, but not with other networks," Marois said. "Consciousness appears to break down the modularity of these networks, as we observed a broad increase in functional connectivity between these networks with awareness."

The research suggests that consciousness is likely a product of this widespread communication, and that we can only report things that we have seen once they are being represented in the brain in this manner. Thus, no one part of the brain is truly the "seat of the soul," as René Descartes once wrote in a hypothesis about the pineal gland, but rather, consciousness appears to be an emergent property of how information that needs to be acted upon gets propagated throughout the brain.
H/t 3QD.

Thursday, March 12, 2015

What is it about this strange little cucumber?

March 12, 6:50 AM: 2021 views, 4th most popular overall.

* * * * *

I originally posted this on Jan 19, 2015, and it soon had over 1000 views. Now, Feb 18, 2015, 8:39 PM, it's got 1501 views and is my 5th most popular photo.

* * * * *

This is one of my most popular photos at Flickr, with over 1000 views:

IMGP8382rd

Why?

Yes, the cucumber has a strange shape (for a cucumber). But it's not that interesting, is it? It's not like it looks like the face of Jesus or anything like that. And the photograph itself is nothing special. It's simply a straightforward, technically competent photo.

Wednesday, March 11, 2015

A Flickr "favorite", shooting the sun

Someone at Flickr just marked this flick as a "favorite":

20141104-IMGP1910

This is one of many photos where I just pointed the camera into the sun and hoped for / counted on interesting results. The sunlight eats through the tree branches like it's burning a hole in the photo, or perhaps in the world itself. And then there's that bit of green lens flare down there at bottom-center. I suspect, though I'm not sure, that the photo appears a shade darker than it actually was and that when I first got it out of the camera it was a lot darker.

If it hasn't already been done, someone should do a history of lens flare in photos. It's not something you're going to get in a painting since it is an artifact of camera optics. I have no idea when I first noticed it in still photos, but I first noticed lens flare in the motion pictures Easy Rider and 2001. And when I say "notice" I mean that I saw the effect and explicitly noted it to myself in the form of a question: "What're they doing with the camera?"

The first time I noticed it in a photo of mine I was annoyed. Then I remembered those (and later) films and decided, OK, I can put up with it. Some time later I decided to seek it. And then still later, only two or so years ago, I decided to up the intensity to the point where getting a 'viewable' image out of the photo became an interesting process.  And it was after that that I discovered shaky-cam – night-time shots with moving camera and long exposure where the representational nature of the image all but disappears in abstract patterns of light.

Words as Instruments: Some Passages from Paleopoetics

I recently hosted a “session” at Academia.edu that centered on my open letter to Steven Pinker. One Christopher Collins made some interesting remarks, so I went to his Academia.edu page to see what he’d uploaded. I found a pre-publication version of the opening chapter of his Paleopoetics: The Evolution of the Preliterate Imagination (Columbia University Press, 2013), “The Idea of Paleopoetics”. Reading through that I came upon this passage roughly halfway through:
My own attention, as I recounted in my preface, has continued to be focused on the verbal work of art, not as an object of hermeneutical analysis, but rather as an instrument of poetic action. The purpose of poetics, as I see it, is to study how that instrument is made and how the mind employs it, whether the verbal artifact is mediated by performers or by a written text. This experience, after all, is causally and, therefore, logically prior to literary interpretation, which can never be more insightful than the poetic action of reading that precedes it.
That is certainly consistent with my current view of things. Getting at that experience, however, is difficult. Collins continues:
We should expect no less of literary interpretation than of travel writing, a genre that presupposes a real trip to, and real perceptions of, some real place. Being prior to hermeneutics, poetics cannot use hermeneutics as its disciplinary foundation, much less use hermeneutical practice to justify its own existence. It must build instead upon disciplines that are situated prior to itself, beginning with rhetoric, then, going backwards, linguistics, cognitive science, psychology, semiotics, and evolutionary biology.
Right, poetics is prior to hermeneutics, though I'm not sure that I agree with how Collins has ordered those other disciplines. It's not so much that I propose another ordering but simply that I'm not sure how far we can get using such labels, which are, after all, somewhat arbitrary with respect to the structure of the world. In my current view, poetics begins with description, and description is no simple matter, a topic that has taken up a number of posts and working papers.

A bit later Collins offers: “At this point I can venture an anticipatory definition of the verbal artifact as an instrument that the brain uses to play visual mental images.” I like his sense that the verbal artifact is an instrument, and that one uses that instrument to manipulate and organize other mental phenomena. Here are the final two paragraphs of the chapter:
Cognitive poetics, as I envisage it, is the study of verbal artifacts as made tools. When we engage these tools and they shift their status from that of objects to that of instruments, they reveal their rhetorical affordances. Since this engagement activates the words, transforming them into the simulations of perceptions, memories, thoughts, and emotions, the verbal artifact is a cognitive tool that can only be understood in reference to the cognitive actions it facilitates.

Tuesday, March 10, 2015

Amazing: Wet Seal+ and Chuck Close does Philip Glass

For some reason this photo has proved unusually popular in the last however many hours (it's 4:08 PM, Mar. 10, 2015). Why all the views all of a sudden? – 19 views today out of 113 all-time views:

20141005-IMGP1080

I took it in the Newport Mall in Jersey City on 5 Oct. 2014. I guess that "Wet Seal+" is the name of a store. What interests me, though, is that face at the right. It's the minimalist composer, Philip Glass, as painted by the artist, Chuck Close. How did THAT end up in a shopping mall? Neither are middle-American icons, though I'd guess that Glass is better known than Close.

Since the image is OF Glass I'd think he's the more important of the two in this context. Perhaps there's something at the bottom of the poster that indicates who put the image there and, by implication, why. But I didn't take note of those things. I just snapped the photo while walking through the mall.

For what it's worth, here's a companion photo:

20141005-IMGP1078

It's not so popular – only 45 total views, none today. The poster at the left is of an Andy Warhol, and we can see the identification at the bottom. Looks to me like some kind of campaign.

What makes an interesting, perhaps even a good, photo?

Here are two recent photos, both of which I like a lot:

20150305-_IGP2754smooth

20150305-_IGP2766smooth

The first of those has gotten a bit of praise over at Facebook. I haven't even posted the second to Facebook, but I wouldn't expect it to get much attention. And yet I rather like it; in fact, I like it a lot.

But is it as good or interesting as the first? It's hard to say. The first one has rather obvious atmospherics and I assume that's where it gets much of its impact. It's a night shot, of a city, and the diffused light and softness of the photo make it very moody. And there's that boat in the foreground at the left – could almost be a ferry on the River Styx. Whatever impact this photo gets from composition and color values would, however, be meaningless without our knowledge of what is being represented.

Monday, March 9, 2015

3QD Politics & Social Science Prize (Vote for Me)

Voting is open for the 5th Annual 3QD Politics and Social Science Prize. The rules are explained HERE. Public voting will close on 11 Mar 11:59 pm EST. Kenneth Roth will then choose the winners from among the finalists. Kenneth Roth is the executive director of Human Rights Watch, one of the world’s leading international human rights organizations. Under Roth’s leadership, Human Rights Watch has grown eight-fold in size and vastly expanded its reach. It now operates in more than 90 countries, among them some of the most dangerous and oppressed places on Earth.

Saturday, March 7, 2015

"Seeing the elephant"

Ringling Brothers has decided to retire their elephant acts, ending a 200-year-long tradition in American entertainment. Traditionally, elephants have been creatures of wonder, writes Janet M. Davis in the NYTimes:
Audiences spoke solemnly of “seeing the elephant” as an awe-inspiring encounter with a wondrous being. Others, who missed her appearances, pined for an opportunity to “see the elephant.” Soldiers during the Mexican-American War and Civil War even spoke of “seeing the elephant” as a metaphor for the incomprehensible experience of battle.

The sensational popularity of the Crowninshield Elephant led the way for others. The first elephant appeared in an American circus at the turn of the 19th century, and by the 1870s, impresarios defined their shows’ worth by the number of elephants they had. In response to decades of evangelical censure for displaying scantily clad human performers, circus owners pointed to their popular elephants as proof of their broader mission to educate and entertain.
See my many posts on Disney's Dumbo.

A Million Flickr Views

million flickr views

Friday, March 6, 2015

The East India Company: Capitalism and Colonialism Hand-in-Hand

For the corporation – a revolutionary European invention contemporaneous with the beginnings of European colonialism, and which helped give Europe its competitive edge – has continued to thrive long after the collapse of European imperialism. When historians discuss the legacy of British colonialism in India, they usually mention democracy, the rule of law, railways, tea and cricket. Yet the idea of the joint-stock company is arguably one of Britain’s most important exports to India, and the one that has for better or worse changed South Asia as much any other European idea. Its influence certainly outweighs that of communism and Protestant Christianity, and possibly even that of democracy.

Companies and corporations now occupy the time and energy of more Indians than any institution other than the family. This should come as no surprise: as Ira Jackson, the former director of Harvard’s Centre for Business and Government, recently noted, corporations and their leaders have today “displaced politics and politicians as … the new high priests and oligarchs of our system”. Covertly, companies still govern the lives of a significant proportion of the human race.

The 300-year-old question of how to cope with the power and perils of large multinational corporations remains today without a clear answer: it is not clear how a nation state can adequately protect itself and its citizens from corporate excess. As the international subprime bubble and bank collapses of 2007-2009 have so recently demonstrated, just as corporations can shape the destiny of nations, they can also drag down their economies. In all, US and European banks lost more than $1tn on toxic assets from January 2007 to September 2009. What Burke feared the East India Company would do to England in 1772 actually happened to Iceland in 2008-11, when the systemic collapse of all three of the country’s major privately owned commercial banks brought the country to the brink of complete bankruptcy. A powerful corporation can still overwhelm or subvert a state every bit as effectively as the East India Company did in Bengal in 1765.
H/t 3QD.

A Note on Dennett’s Curious Comparison of Words and Apps

I continue to think about Dan Dennett’s inadequate account of words-as-memes in his paper, The Cultural Evolution of Words and Other Thinking Tools (PDF), Cold Spring Harbor Symposia on Quantitative Biology, Volume LXXIV, pp. 1-7, 2009. You find the same account in, for example, this video of a talk he gave in 2011: “A Human Mind as an Upside Down Brain”. I feel it warrants (yet another) long-form post. But I just don’t want to wrangle my way through that now. So I’m just going to offer a remark that goes a bit beyond what I’ve already said in my working paper, Cultural Evolution, Memes, and the Trouble with Dan Dennett, particularly in the post, Watch Out, Dan Dennett, Your Mind’s Changing Up on You!.

In that article Dennett asserts that “Words are not just like software viruses; they are software viruses, a fact that emerges quite uncontroversially once we adjust our understanding of computation and software.” He then uses Java applets to illustrate this comparison. I believe this overstates the similarity between words and apps or viruses to the point where the comparison has little value. The adjustment of understanding that Dennett calls for is too extreme.

In particular, and here is my new point, it simply vitiates the use of computation as an idea in understanding and modeling mental processes. Dennett has spent much of his career arguing that the mind is fundamentally a computational process. Words are thus computational objects and our use of them is a computational process.

Real computational processes are precise in their nature and in the requirements of their physical implementation – and there is always a physical implementation for real computation. Java is based on a certain kind of computational object and process, a certain style of computing. But not all computing is like that. What if natural language computing isn’t? What happens to the analogy then?

Galen Strawson on consciousness: the problem we face is simple, we underestimate the subtleties of matter

Writing in TLS Galen Strawson runs through the history of philosophical struggles with consciousness from Descartes to the present day. It's a fascinating read. The take-home is in his penultimate paragraph (emphasis in the original):
At the root of the muddle lies an inability to overcome the Very Large Mistake so clearly identified by Eddington and others in the 1920s – not to mention the lovely Irishman John Toland in 1704, Anthony Collins in 1707, Hume in 1739, Joseph Priestley in 1777, and many others. The mistake is to think we know enough about the nature of physical reality to have any good reason to think that consciousness can’t be physical. It seems to be stamped so deeply in us, by our everyday experience of matter as lumpen stuff, that not even appreciation of the extraordinary facts of current physics can weaken its hold. To see through it is a truly revolutionary experience.
H/t 3QD.

Friday Fotos: Thursday Evening in Hoboken

Last week I went for the photos that were most popular on my Flickr page over the last half day or so. Today I'm posting photos I took last evening in Hoboken along the waterfront, though not all of the photos are oriented toward the waterfront.

This is on one of the old piers. As you can see, there's a planter on it, with pine trees. That's Manhattan in the background. The glow, of course, is the city lights.

20150305-_IGP2700

I'm standing on that same pier, but looking in toward the shore. That father and daughter had been playing on the pier. She'd been playing in the snow and he'd been photographing her.

20150305-_IGP2772

A soda machine on a different pier:

20150305-_IGP2684

We're a block or two off the river's edge. I'm shooting south along Hudson Street. You can see a bit of snow hanging in the air and diffusing the light.

Tuesday, March 3, 2015

Myth-Logic and a Lady Librarian in The Rockford Files 2

This is a follow-up to my previous post about “The Return of the Black Shadow”, an episode of The Rockford Files. If you’ve not read that one, you may want to do so.
This episode is about a brutal gang rape. The victim, Gail Cooper, is the sister of one of Jim Rockford’s friends and associates, John Cooper. What had me puzzled is whether the fact that she was a librarian was merely contingent, or whether it was part of a pattern. If so, what’s the pattern? The pattern, if there is one, would have more to do with how stories are constructed than with the way the world works.

I’ve decided that it’s part of a pattern, a pattern of the sort I characterize as myth-logic. My primary sense of myth logic comes from the work of the French anthropologist, Claude Lévi-Strauss. And he emphasizes the importance of binary logic, of contrasts. The world, of course, is full of binary contrasts. The trick is to figure out which ones are at work in any given situation.

“Animal” and the Librarian

Let’s start at the point where Rockford and Ms. Cooper have stopped for gas. A motorcycle gang pulls into the gas station and Rockford is talking with the attendant. One of the bikers gets into the driver’s seat of Rockford’s car, thereby trapping Gail in the passenger’s seat. He’s big and fat and is known as “Animal.”

That’s our contrast right there, between “Animal” and the librarian. That is, the rape victim was set up as an intellectual woman so as to afford maximum contrast to a brutish man known as “Animal.” As a shy intellectual she needs help getting a date – Rockford’s taking her out as a favor to his friend – and that also makes her maximally different from her rapists.

There’s another small detail that now falls into place. Where were they going on their date? Deep-sea fishing. And that’s just odd. There’s nothing strange about Rockford going deep-sea fishing; we know he likes to fish. But it’s strange that he wants to take Gail Cooper deep-sea fishing. Why would he agree to go out with her in the first place and then decide to do something so out-of-character for her–something she remarks on–for their date? He’s taken many women out to dinner; why not this one? Remember, this is not about what makes sense for real people, but about myth-logic. The “real” Jim Rockford wouldn’t do anything so stupid. But there are over-riding considerations in this story.

The point of taking her on such a strange date is simply the strangeness of the juxtaposition. Here she is talking about her project at work, which involves cataloging physics books, and she makes a remark about something totally different, deep-sea fishing. It just makes the situation seem even stranger and more awkward.

Not only is it awkward for Rockford and Gail, but it’s awkward for us, the audience. “It does not compute.” We’re at the edge of Brechtian distancing, not in the avant-garde theatre, but in mainstream American TV.

And why not? Rape is a very strange and uncomfortable thing. Distance is just what’s needed to make it a subject for mainstream TV.

Monday, March 2, 2015

Walls of an ancient temple just around the corner

[Photo: Where's Tarzan and Jane?]

Place in Sir Gawain and the Green Knight

This is another post that’s primarily descriptive in character and is in response to a paper by Per Aage Brandt, Forces and Spaces – Maupassant, Borges, Hemingway. Toward a Semio-Cognitive Narratology, in which he outlines (to quote from his abstract) “a model of the constitutive architecture of narrative meaning as manifested by ‘good stories’, stories that make sense by conveying a view of the human condition.” In this model he proposes that actions move back and forth between “a canonical set of narrative spaces, each encompassing and contributing a significant part of the meaning of a story” and that these narrative spaces are typically “staged as distinct locations” in the physical space depicted in the narrative. He labels these four spaces: Condition, Catastrophe, Consequence, and Conclusion.

I want to look at one text, Sir Gawain and the Green Knight. The story takes the form of a quest and so is rather explicitly about a journey. Setting aside a frame story, we can lay out the places as follows:
King Arthur’s Court: The story begins here and ends here.
Bercilak’s court at Hautedesert: This is something of an anti-court (e.g. where a woman sits at the head of the table) and has some specific sub-spaces we’ll get to in a moment.
The Green Chapel: This is the terminus of the quest.
The hinterlands between the first three.
Here’s how the story goes:
1) Frame material (about Troy and Britain).
2) A challenge, the beheading game, is proposed to Arthur’s knights in Arthur’s court at Camelot and Gawain accepts. Gawain prepares himself.
3) Gawain travels the hinterlands between Camelot and Hautedesert.
4) While at Hautedesert Gawain engages in an elaborate exchange with his host, Sir Bercilak. (We’ll look at this later.)
5) Gawain travels the hinterlands from Hautedesert to the Green Chapel.
6) Gawain completes the quest at the Green Chapel, and, as it turns out, the exchanges as well.
7) Gawain travels the hinterlands back to Camelot.
8) Return to Camelot.
9) Frame material.
On each of three days at Hautedesert Gawain engages in an exchange with his host, Sir Bercilak. Bercilak goes hunting in the woods while Gawain is pursued by Bercilak’s wife in his bedroom. They exchange their “winnings” at the end of the day. So this involves three spaces: the woods, the bedroom, and a neutral place of exchange within the palace.

The narrative is very stylized and has a ritual feel to it. I’ve got quite a bit to say about these elements of the text, and in particular about the exchanges, in a separate essay and in several related blog posts.

Sunday, March 1, 2015

Two photos

This is the third most popular photo over the past half day or so (the top two were in my most recent Friday Fotos):

[Photo: IMGP7555rdB&W]

I'd almost forgotten about this one (from a set of discarded toys):

[Photo: IMGP8554rd]

Replication of experimental results and cultural evolution

Replication of experimental results has become a hot issue in the behavioral sciences and medicine. While some of this reflects fraud, it's mostly about "sloppy" science. Some results may be flukes, but even valid results can present problems. I tend to think replication is needed because the observations are often obtained through procedures that are so complex that it is not clear what's central to the procedure and what's not. Mark Liberman at Language Log has a post on "Reliability" that speaks to these issues. Here's a passage (emphasis mine):
Some of the reasons for the problems are well known. There's the "file drawer effect", where you try many experiments and only publish the ones that produce the results you want. There's p-hacking, data-dredging, model-shopping, etc., where you torture the data until it yields a "statistically significant" result of an agreeable kind. There are mistakes in data analysis, often simple ones like using the wrong set of column labels. (And there are less innocent problems in data analysis, like those described in this article about cancer research, where some practices amount essentially to fraud, such as performing cross-validation while removing examples that don't fit the prediction.) There are uncontrolled covariates — at the workshop, we heard anecdotes about effects that depend on humidity, on the gender of experimenters, and on whether animal cages are lined with cedar or pine shavings. There's a famous case in psycholinguistics where the difference between egocentric and geocentric coordinate choice depends on whether the experimental environment has salient asymmetries in visual landmarks (Peggy Li and Lila Gleitman, "Turning the tables: language and spatial reasoning", Cognition 2002).
The general idea is that meaning is always negotiated and that experimental replication is an aspect of the negotiations.

Machine Translation and Artificial Intelligence, a quick and dirty view

The folks at Language Log have been having an interesting discussion about machine translation, "They called for more structure", that was started by a passage from Donald Barthelme. Down in that discussion Jason Eisner has a useful remark:
The field of AI includes both neat and scruffy approaches. A neat system for MT would be a faithful implementation of some linguistic theory. Current leading MT systems are somewhat scruffy. They contain various hacks and shortcuts that help to produce a decent translation quickly.

Researchers with a scruffy-AI mindset may think that's just fine. Either they suspect that brains themselves are much scruffier than linguists admit, or they have no opinion about brains and simply want to engineer a working product.

A scruffy-AI researcher may want to enrich the current system to make more use of syntax, but will be perfectly happy to use a "big hairy four-by-four" approximation of syntax that is nailed onto the rest of the system with railroad spikes. The goal is to improve the end results by any expedient method.

Other researchers working on the same system may be true believers in neat AI. They really wish that the system had been designed on clean linguistic and statistical principles from the ground up. Unfortunately such systems would be hard to build and have not worked as well in the past, so these neat-AI researchers settle for helping to nail syntax onto an existing scruffy system. They feel proud of themselves for using (more) linguistics. But does this route really lead toward the utopian system they dream of? Can the hybrid system be gradually made more principled, as the old hacks are gradually phased out? Or is that just a comforting fantasy that sustains them, as it sustains Barthelme's construction workers? "The exercise of our skills, and the promise of the city, were enough."