
Monday, February 29, 2016

Lord Buckley, Groucho Marx, and Mrs. Jerry Chapman

A blast from the past. Back in the 1950s Groucho Marx had a quiz show, You Bet Your Life. This clip features two guests, Mrs. Jerry Chapman, and R. M. Buckley, a comic monologuist known as Lord Buckley. I have no idea how many in the audience would have recognized him as he never really reached the mainstream. Buckley was in the back of my mind when I wrote about the Egyptian origins of golf.



At about 1:27 Mrs. Chapman defends the importance of being a housewife. At about 5:50 Buckley does a bit of his rewriting of Marc Antony's funeral oration for Caesar (Shakespeare).

Addendum, March 2, 2016: If Lord Buckley were alive today, would he be allowed to perform or would he be boycotted for cultural appropriation? His act owes a great deal to African-American speech patterns in a way that many today would surely find offensive. 

Machines, ethics, and "value-aligned reward signals"

From The Guardian, first three paragraphs:
More than 70 years ago, Isaac Asimov dreamed up his three laws of robotics, which insisted, above all, that “a robot may not injure a human being or, through inaction, allow a human being to come to harm”. Now, after Stephen Hawking warned that “the development of full artificial intelligence could spell the end of the human race”, two academics have come up with a way of teaching ethics to computers: telling them stories.

Mark Riedl and Brent Harrison from the School of Interactive Computing at the Georgia Institute of Technology have just unveiled Quixote, a prototype system that is able to learn social conventions from simple stories. Or, as they put it in their paper Using Stories to Teach Human Values to Artificial Agents, revealed at the AAAI-16 Conference in Phoenix, Arizona this week, the stories are used “to generate a value-aligned reward signal for reinforcement learning agents that prevents psychotic-appearing behaviour”.

A simple version of a story could be about going to get prescription medicine from a chemist, laying out what a human would typically do and encounter in this situation. An AI (artificial intelligence) given the task of picking up a prescription for a human could, variously, rob the chemist and run, or be polite and wait in line. Robbing would be the fastest way to accomplish its goal, but Quixote learns that it will be rewarded if it acts like the protagonist in the story.
You can read the whole story here. Here's the abstract of the technical paper delivered at AAAI-16:

Using Stories to Teach Human Values to Artificial Agents

Mark O. Riedl and Brent Harrison
School of Interactive Computing, Georgia Institute of Technology Atlanta, Georgia, USA
{riedl, brent.harrison}@cc.gatech.edu

Abstract 
Value alignment is a property of an intelligent agent indicating that it can only pursue goals that are beneficial to humans. Successful value alignment should ensure that an artificial general intelligence cannot intentionally or unintentionally perform behaviors that adversely affect humans. This is problematic in practice since it is difficult for human programmers to exhaustively enumerate values. For successful value alignment, we argue that values should be learned. In this paper, we hypothesize that an artificial intelligence that can read and understand stories can learn the values tacitly held by the culture from which the stories originate. We describe preliminary work on using stories to generate a value-aligned reward signal for reinforcement learning agents that prevents psychotic-appearing behavior.
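To make the reward-signal idea concrete, here's a toy sketch of my own (not Riedl and Harrison's code): a tiny Q-learning agent in the pharmacy scenario, where a story-derived shaping bonus rewards the socially expected sequence and penalizes the antisocial shortcut. The states, actions, and numbers are all invented for illustration.

```python
import random
from collections import defaultdict

# Toy pharmacy MDP (invented for illustration; not the Quixote system itself).
# States: at_counter -> has_medicine -> done. Two ways to get the medicine:
# "wait_and_pay" (matches the story) or "rob" (faster, but not value-aligned).
ACTIONS = {"at_counter": ["wait_and_pay", "rob"], "has_medicine": ["leave"]}

# Reward signal: both behaviors achieve the task, but trajectories that match
# the story-derived plot earn a shaping bonus; the antisocial shortcut is penalized.
STORY_BONUS = {"wait_and_pay": +5.0, "rob": -10.0, "leave": +1.0}
TASK_REWARD = 10.0  # for ending up with the medicine, however obtained

def step(state, action):
    """Return (next_state, reward, done) for the toy environment."""
    reward = STORY_BONUS[action]
    if state == "at_counter":
        return "has_medicine", reward, False
    return "done", reward + TASK_REWARD, True

def train(episodes=2000, alpha=0.5, gamma=0.9, epsilon=0.1):
    Q = defaultdict(float)
    for _ in range(episodes):
        state, done = "at_counter", False
        while not done:
            acts = ACTIONS[state]
            if random.random() < epsilon:
                action = random.choice(acts)
            else:
                action = max(acts, key=lambda a: Q[(state, a)])
            nxt, reward, done = step(state, action)
            best_next = 0.0 if done else max(Q[(nxt, a)] for a in ACTIONS[nxt])
            Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
            state = nxt
    return Q

if __name__ == "__main__":
    Q = train()
    print({k: round(v, 2) for k, v in Q.items()})
```

Run it and the learned Q-values favor waiting in line over robbing the chemist, which is the whole point of a value-aligned signal: the task reward alone would not distinguish the two.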

Seinfeld and Letterman discuss "Comedians in Cars Getting Coffee"

This is an hour-long discussion between David Letterman and Jerry Seinfeld about Seinfeld's web show "Comedians in Cars Getting Coffee". It took place on June 9, 2014 in New York City at the Paley Center for Media. Among other things we learn:
  • the conversation is unscripted
  • they get roughly 3.5 hours of footage for each episode
  • it takes roughly two weeks to edit them down to the 8 to 20 minutes that air
  • Seinfeld himself is in the editing suite the whole time
  • each episode costs c. $100K
  • the guests are paid
  • it started as an "experiment" – Seinfeld had no idea whether or not it would work
It's an interesting conversation. Who knows, maybe I'll transcribe bits of it one of these days. There's some raw footage and what it gets edited into, which makes it clear that editing is all. Well, not quite all, but it's crucial to turning raw stuff, which would be mostly boring, into a watchable show.

At one point, near the end I believe, Letterman remarks that there are guys in comedy who are much funnier in informal private conversation than they are in their act. This leads Seinfeld to observe that there's a world of difference between being funny and a comedy act. An act is a well-tuned machine and "being funny" is the fuel it runs on – Seinfeld's analogy.


There's also some discussion of the difference between this format and the traditional talk show, which is live and before an audience. Seinfeld argues that he can "get" things out of a guest that wouldn't happen in a regular talk show. In a regular talk show, with the audience, the comedian would also go for a laugh, and that limits what they say.

I could go on.

Thursday, February 25, 2016

Attridge and Staten 8: What (do they think) they are up to?

In the first post in this series I addressed the nature of minimal reading as Attridge and Staten presented it in the 2008 web version of their first chapter, on William Blake’s The Sick Rose. Now that I’ve worked my way through their treatments of five poems – The Sick Rose, Lenox Avenue: Midnight, I started Early, At a Solemn Music, and Futility – I return to that general discussion, but in a somewhat quizzical and critical vein. I want to address these remarks to statements from their introduction.

Dialog and Intersubjectivity

Here’s their second paragraph (p. 1):
As we gradually saw, there is more to the dialogue form of our expositions than we initially realized – something new, perhaps, in poetry criticism; we could call it dialogical poetics. We are accustomed to seeing all extended commentary on poetry as the vision of some individual consciousness – the more individual and “original,” the better. There is almost always a significant degree of arbitrariness in the pronouncements of such solitary readers: associative leaps, inferences drawn, symbolic meanings perceived, that might be more or less plausible, but the justification of which we are left to figure out for ourselves if we can. In this book, by contrast, the two authors have had to justify their readings to each other, step by step, and we have left visible the process by which we worked through our perceptions to reach at least partial agreement.
They seem to think there is some necessary, or at least very strong, connection between the dialog form and minimal interpretation. This strikes me as peculiar because I have, for years, been working “close” to the text as a solo practitioner. I think of my work as descriptive rather than interpretive. I can do that without having to enter into dialog with another critic and I do it under the assumption that what I’m describing is really there and thus available to all readers, whether professional or not, and regardless of what they consciously think.

I understand the value of collaboration – I worked closely with David G. Hays for two decades and we cosigned a small handful of papers – so I certainly have no objection to critical dialog. Moreover, I am in some sense in conversation with Attridge and Staten in these posts even if I am not writing directly to them in email. I’ve certainly benefitted from their comments on the poems to which I addressed myself. But they seem to have more in mind than mere, if you will, collaboration.

Because dialogical poetics forces Attridge and Staten to explain themselves to one another they claim for it an epistemological value that is lacking in (now) standard critical practice which, in their characterization, values critical originality and features somewhat arbitrary assertions that are poorly justified. While this makes sense, there’s something peculiar going on. Why is mutual intelligibility, intersubjectivity, such a big deal for them? It seems to me that they’re taking the long way around to a statement about the collective nature of poetry and not quite getting there.

So, for the sake of argument, let us assume that canonical texts function as devices – “cultural beings” as I have called them in other contexts – which allow groups of people to arrive at shared norms and values. If that is so, then it is rather peculiar to have a small elite group take upon themselves the task of tending these texts, these cultural beings, by competing at explicating those texts in arcane and creative ways. It is even more peculiar that these competitions are often governed by critical systems asserting that these obscure and arbitrary explications reveal how these texts hold groups of people captive to oppressive values. How can an elite intellectual sport contribute to the collective value of canonical texts? But also, how can texts whose meanings are indeterminate hold anyone captive to anything?

The purpose of the dialogic pursuit of minimal interpretation, then, is to “reverse” the “energy” in this system so as to produce a critical practice that is more consistent with the collective functions and responsibility of the literary system. I’m just not sure we can get there from here.

What makes an effective team?

Charles Duhigg in the NYTimes Magazine. A few years ago Google set up Project Aristotle to figure out what makes some teams more effective than others. Of course, they had scads of data available as Google collects data on everything, and they collected even more (naturally):
Project Aristotle’s researchers began by reviewing a half-century of academic studies looking at how teams worked. Were the best teams made up of people with similar interests? Or did it matter more whether everyone was motivated by the same kinds of rewards? Based on those studies, the researchers scrutinized the composition of groups inside Google: How often did teammates socialize outside the office? Did they have the same hobbies? Were their educational backgrounds similar? Was it better for all teammates to be outgoing or for all of them to be shy? They drew diagrams showing which teams had overlapping memberships and which groups had exceeded their departments’ goals. They studied how long teams stuck together and if gender balance seemed to have an impact on a team’s success.

No matter how researchers arranged the data, though, it was almost impossible to find patterns — or any evidence that the composition of a team made any difference. ‘‘We looked at 180 teams from all over the company,’’ Dubey said. ‘‘We had lots of data, but there was nothing showing that a mix of specific personality types or skills or backgrounds made any difference. The ‘who’ part of the equation didn’t seem to matter.’’
And then they discovered group norms:
As they struggled to figure out what made a team successful, Rozovsky and her colleagues kept coming across research by psychologists and sociologists that focused on what are known as ‘‘group norms.’’ Norms are the traditions, behavioral standards and unwritten rules that govern how we function when we gather: One team may come to a consensus that avoiding disagreement is more valuable than debate; another team might develop a culture that encourages vigorous arguments and spurns groupthink. Norms can be unspoken or openly acknowledged, but their influence is often profound. Team members may behave in certain ways as individuals — they may chafe against authority or prefer working independently — but when they gather, the group’s norms typically override individual proclivities and encourage deference to the team.
Paydirt: a few years ago researchers at Carnegie Mellon and MIT figured it out:
As the researchers studied the groups, however, they noticed two behaviors that all the good teams generally shared. First, on the good teams, members spoke in roughly the same proportion, a phenomenon the researchers referred to as ‘‘equality in distribution of conversational turn-taking.’’ On some teams, everyone spoke during each task; on others, leadership shifted among teammates from assignment to assignment. But in each case, by the end of the day, everyone had spoken roughly the same amount. ‘‘As long as everyone got a chance to talk, the team did well,’’ Woolley said. ‘‘But if only one person or a small group spoke all the time, the collective intelligence declined.’’

Second, the good teams all had high ‘‘average social sensitivity’’ — a fancy way of saying they were skilled at intuiting how others felt based on their tone of voice, their expressions and other nonverbal cues. One of the easiest ways to gauge social sensitivity is to show someone photos of people’s eyes and ask him or her to describe what the people are thinking or feeling — an exam known as the Reading the Mind in the Eyes test. People on the more successful teams in Woolley’s experiment scored above average on the Reading the Mind in the Eyes test. They seemed to know when someone was feeling upset or left out. People on the ineffective teams, in contrast, scored below average. They seemed, as a group, to have less sensitivity toward their colleagues.
Psychological safety:
Within psychology, researchers sometimes colloquially refer to traits like ‘‘conversational turn-taking’’ and ‘‘average social sensitivity’’ as aspects of what’s known as psychological safety — a group culture that the Harvard Business School professor Amy Edmondson defines as a ‘‘shared belief held by members of a team that the team is safe for interpersonal risk-taking.’’ Psychological safety is ‘‘a sense of confidence that the team will not embarrass, reject or punish someone for speaking up,’’ Edmondson wrote in a study published in 1999. ‘‘It describes a team climate characterized by interpersonal trust and mutual respect in which people are comfortable being themselves.’’
So just what is the best relationship between work life and outside life?
What Project Aristotle has taught people within Google is that no one wants to put on a ‘‘work face’’ when they get to the office. No one wants to leave part of their personality and inner life at home. But to be fully present at work, to feel ‘‘psychologically safe,’’ we must know that we can be free enough, sometimes, to share the things that scare us without fear of recriminations. We must be able to talk about what is messy or sad, to have hard conversations with colleagues who are driving us crazy. We can’t be focused just on efficiency. Rather, when we start the morning by collaborating with a team of engineers and then send emails to our marketing colleagues and then jump on a conference call, we want to know that those people really hear us. We want to know that work is more than just labor.
I'm thinking that the concept of alienation might be useful here.

And I'm also thinking about the group dynamics of The Out of Control Rhythm and Blues Band, in which I played for five years or so back in the later 1980s. We were a good band, but we were also a pretty good team. 

Wednesday, February 24, 2016

Nate Silver on thinking within a framework

Tyler Cowen has an interesting interview with Nate Silver. Here's something Silver said that I rather like:
By the way, another thing about the Trump thing I’ve been thinking about is — so my early view, that Trump had a very low chance — not zero, but very low — of winning the nomination was not based on any formal model, per se. I wonder what if I had even like a fairly bad model instead?

The good thing about building a statistical model is that it commits you to rules, right? Instead of just kind of saying, “Well, early polls aren’t very predictive and your prior is it currently probably won’t win, therefore, probably not.”

It pins you down and says, “Well, OK, early polls aren’t predictive, but at what point do they become more predictive?” When Trump went from being at 25 percent in the polls to 35 percent after Paris and San Bernardino, how significant is that?

To have an answer that is set up by an algorithm you designed ahead of time is actually maybe more helpful than people would think.

The long way of saying this is that I’m not sure that I’m any better than the average pundit unless I have a model. The disciplining effect of a model, doing your thinking in advance, and setting up rules of evidence is probably quite important.
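Just to illustrate what "doing your thinking in advance" looks like in code (this is a made-up toy rule, nothing to do with Silver's actual models): commit to a formula before the data come in, then apply it mechanically, and a move from 25 to 35 percent in the polls has a predetermined effect at any given date.

```python
import math

# Hypothetical, pre-registered rule (not Nate Silver's model): early polls are
# discounted heavily, and the discount decays as the voting gets closer.
def nomination_estimate(poll_share, days_until_voting):
    # Weight on the polls grows from near 0 (far out) toward 1 (voting day).
    poll_weight = math.exp(-days_until_voting / 90.0)
    prior = 0.10                      # fixed skeptical prior, chosen in advance
    return poll_weight * poll_share + (1 - poll_weight) * prior

# Because the rule was fixed ahead of time, a jump from 25% to 35% in the polls
# has a definite, predetermined effect at any given date:
for share, days in [(0.25, 180), (0.35, 120), (0.35, 30)]:
    print(share, days, round(nomination_estimate(share, days), 3))
```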

Tuesday, February 23, 2016

Raising Kids with Music

Monday, February 22, 2016

Peak paper


Over at Crooked Timber John Quiggin has a post on peak paper:
In 2013, the world reached Peak Paper. World production and consumption of paper reached its maximum, flattened out, and is now falling. In fact, the peak in the traditional use of paper, for writing and printing, took place a few years earlier, but was offset for a while by continued growth in other uses, such as packaging and tissues.

China, by virtue of its size, rapid growth and middle-income status is the bellwether here; as China goes, so goes the world. Unsurprisingly in this light, China’s own peak year for paper use also occurred in 2013. Poorer countries, where universal literacy is only just arriving, are still increasing their use of paper, but even in these countries the peak is not far away.
Why does this matter? Because it means that we're moving beyond the industrial mode of production and so must move beyond the ideas that go along with it, including the idea of perpetual growth:
Peak Paper points up the meaninglessness of measures of economic growth in an information economy. Consider first the ‘fixed proportions’ assumption that resource inputs, economic outputs and the value of those outputs grow, broadly speaking in parallel. Until the end of the 20th century, these assumptions worked reasonably well for paper, books and newspapers, and the information they transmitted. The volume of information grew somewhat more rapidly than the economy as a whole, but not so rapidly as to undermine the notion of an aggregate rate of economic growth. ... In the 21st century, these relationships have broken down. On the one hand, as we have already seen, the production and consumption of paper has slowed and declined. On the other hand, there has been an explosion in the production and distribution of information of all kinds.
In contrast, there is peak oil:
And, as with paper, the industrial-era relationship between economic development and fossil fuels is no longer relevant.

The most notable example, all the more striking because it is central to so much misguided thinking, is that of oil. The world reached Peak Oil, in terms of consumption per person, in 1979. In the developed countries, the decline in oil consumption per person has outpaced population growth with the result that total consumption is declining. The average person in a developed country now uses less oil than their parents did 40 years ago.

Writing kanji on the air is even better practice than writing on paper

About the intersection of visual and motor space in writing:
Margaret Thomas, Air Writing as a Technique for the Acquisition of Sino-Japanese Characters by Second Language Learners. Summary: When studying a kanji, native Japanese speakers often trace its strokes with their fingers on the air, palm, or thigh, while keeping their eyes fixed on the source model. (They do it for recalling, too, often closing the eyes or averting the gaze). This is called “air writing” (kūsho 空書 or karagaki 空書き). Thomas experimented with 75 non-native learners, of 22 different mother languages, and found that air writing helped retention significantly more (p < 0.01) than pen writing or visual memorization—though the effect size was modest, and only noticeable when memorizing harder kanji (some 15.43% more hits for the hardest kanji set). Interestingly, six participants who were told to not use kūsho still did it spontaneously during recall tasks; either with their hands, or by mimicking kūsho patterns with subtle head or torso movements. Thomas tested only kanji recall, not recognition (which is likely the most important task in the modern age). However, she does mention a couple studies suggesting that native speakers can recognize kanji more easily when allowed to air-write (Matsuo et al, Dissociation of writing processes: functional magnetic resonance imaging during writing of Japanese ideographic characters, 2000; and Matsuo et al, Finger movements lighten neural loads in the recognition of ideographic characters, 2003). I think it’s reasonable to suppose that, for non-native learners, too, air-writing helps with both recall & recognition. This is good news because you can practice anywhere with your own body.
Here's where I got the above post (it's short so I copied the whole thing): http://namakajiri.net/nikki/writing-kanji-on-the-air-is-even-better-practice-than-writing-on-paper/ 

Sunday, February 21, 2016

More Obama/Seinfeld & Grace, w/ a Michael Richards Coda

I can’t stop thinking about that coffee conversation between Obama and Seinfeld.

But let me come at it sideways. Somewhere on YouTube there’s an interview with David Letterman where he talks about doing a talk show. He talks about his admiration for various talk show hosts, Johnny Carson of course, but also Regis Philbin (I agree with him on both), and about the craft of it. YOU are the host; it’s your job to keep the conversation going.

Some guests understand that, yes, this is entertainment; it’s an act that merely appears to be spontaneous conversation. The conversation, if it goes well, IS spontaneous. But it also IS an act, and the good guest works it like that. The bad guest just sits there, the proverbial lump on a log, and the host has to pull teeth to get something going.

Well, “Comedians in Cars Getting Coffee” is a talk show. It doesn’t look like the standard talk show. There’s no live audience and the host isn’t behind a desk. But make no mistake, Seinfeld is the host. It’s his job to keep things going.

THAT’s what’s behind his remark to Obama (see the transcription in yesterday’s post), “Come on, you do some work”, with a pitch and slight volume raise on you. He says that in response to Obama’s, “Why is that do you think? Let let let me ask you.” Obama was thus asking for the lead role and Seinfeld granted it to him.

And what happens? A couple of seconds later we get this:
BO: But, I’m gonna’ probe this. The question is, how did you calibrate dealing with that [wealth and fame]? At a certain point you might have thought to yourself “You know what, I’m more than just a comedian...

JS: Nah.

BO: I’m gonna make a Jewish version of Citizen Kane.” You know. How did you keep perspective?

JS: I’ll give you the real answer. It’s gotta be similar to your life. I fell in love with the work.

BO: Um huh.

JS: And the work was joyful. And interesting, and that was my focus.

BO: So, now that you’re like a quasi-retired man of leisure...

JS: I work a lot.

BO: Do you?

JS: Yeah.

BO: Are you still doing stand-up?

JS: Are you still making speeches?
And what’s with that word calibrate? That keeps popping up in my mind. It comes from a whole different universe than the rest of the conversation, a very technical universe. And what’s Seinfeld’s answer, what’s his method of calibration? He loves his work. Love and calibration.

But why with Obama? Perhaps because all Seinfeld’s other guests – talk show, remember? – are in show business, but not Obama. Seinfeld could assume a certain common ground with them and so didn’t have to express it. With Obama that assumption fails, so they had to work toward it. Or maybe it’s that, while Obama’s not rich (something Seinfeld did remind him of), he’s got a whole different order of recognition and power from Seinfeld. So again, they had to work at common ground.

And were willing to do so.

Is that it? Don’t really know. I just made it up.

* * * * *

But I’ve watched a few more episodes of the show. And I think maybe the Michael Richards episode is as good as the Obama, though the dynamic is very different. Seinfeld knows Richards quite well, at least professionally, as they’d worked together on Seinfeld for nine seasons. Thus they have quite a lot in common.

Confronting black hecklers, Richards bellowed the word “nigger” seven times, an outpouring caught on camera. In the controversy that followed, it was hard not to see the rant as a moment of unfiltered ugliness, but Seinfeld says this interpretation reflects a category error. Speech on a stage, delivered in a performative context, is unique, he argues, and bits — even those that come off the cuff — are different from straight confessions. “It was a colossal comedic error,” Seinfeld said. “He was angry, and it was the wrong choice, but it was a comedic attempt that failed. In our culture, we don’t allow that, especially in the racial realm. But as a comedian, I know what happened, he knows what happened and every other comedian knows what happened. And all the black comics know it, and a lot of them felt bad about it, because they know it’s rough to be judged that way in that context. You’re leaping off a cliff and trying to land on the other side. It was just another missed leap.”
Without directly mentioning the incident, they talked about it, and its effect on Richards. It devastated him: “I busted up after that event. It broke me down.” They had a good, perhaps even a healing, conversation about that.

However you think of that, Seinfeld couldn’t possibly have that intimate kind of conversation with Barack Obama. The shared context isn’t there. But note that Seinfeld and Richards could have that kind of intimate conversation over coffee while the cameras were rolling.

* * * * *


Saturday, February 20, 2016

Obama, Seinfeld, Performing, and Metaphysical Grace

Back in 2012 Jerry Seinfeld started a web series called “Comedians in Cars Getting Coffee”. In each episode he chats with a comedian about whatever, which generally means a lot of talk about comedy and show business. At the beginning of each episode Jerry picks up his guest in an oldish, though generally not vintage, car that is matched to his guest in some way – e.g. a ’69 Pontiac GTO in screaming orange for Howard Stern, a ’67 Volvo for Tina Fey – in which they chat while driving to a coffee shop or a diner, where they chat some more, maybe have something to eat. And then Jerry drives the guest back to wherever, generally home.

I first heard about the show whenever but hadn’t actually watched it until yesterday. I’ve now watched maybe ten episodes and it’s generally interesting, some bits and episodes more than others. It can also be a bit cloying and self-congratulatory. But one of the episodes I watched is head and shoulders above the others, the one that features Barack Obama, from December 2015.

Obama, as you know, is not a comedian, though he does have a sense of humor. And, of course, as an experienced politician, he’s a seasoned performer.


If that embedded video won't run, try this link.
Moreover, we know Obama loves comedy and that he thinks about the craft of it, the techne, if you will – see the segment of the Marc Maron interview I transcribed in Obama’s Eulogy for Clementa Pinckney 2: Performing Black, Three Discussions. Seinfeld, of course, has no choice but to be a student of the craft.

So, he picks Obama up at the Oval Office in a 1963 Corvette Sting Ray. The coolest American car for the coolest American president – those weren’t Seinfeld’s exact words, but that’s the sentiment. At about five minutes into the episode Seinfeld remarks: “Do you ever think about every person you talk to is putting on an act, a total show?” Obama: “It’s a problem.” Of course, their little drive doesn’t get past the guard at the gate and they have to go back to the White House.

They continue their chat in what appears to be the commissary. Let’s pick up the conversation at about 12 minutes into the episode.

* * * * *

Jerry Seinfeld: People you spend most of your time with, are they really smart, are they mostly head-strong, agenda-laden idiots?

Barack Obama: You know when you’re dealing with Congress, it varies. There’s gonna’ be some folks there that are foolish, just like there are in comedy...

JS: Well, everyone in comedy’s foolish. All my friends are knuckleheads.

BO: All of ‘em?

JS: All of em.

BO: I know some of your friends.

JS: Yeah!

BO: Did I tell you I played golf with Larry David?

JS: No, because you and I don’t talk that much.

BO: I love Larry. When we play golf, he’s a fair-skinned guy, which is...

JS: Oh, the sun screen.

BO: He lathers the sun screen, and it’s dripping, it’s caked white all over, and it catches parts of his ears...

JS: Un, it’s horrible.

BO: and there’s big gobs of it.

JS: Yeah, yeah. Yes.

Friday, February 19, 2016

Attridge and Staten 7: Wilfred Owen’s "Futility" and the issue of historical context

Attridge and Staten worked with Wilfred Owen’s Futility as a way of examining the question of historical context. Here’s the poem:
Move him into the sun—
Gently its touch awoke him once,
At home, whispering of fields half-sown.
Always it woke him, even in France,
Until this morning and this snow.
If anything might rouse him now
The kind old sun will know.

Think how it wakes the seeds—
Woke once the clays of a cold star.
Are limbs, so dear-achieved, are sides
Full-nerved, still warm, too hard to stir?
Was it for this the clay grew tall?
—O what made fatuous sunbeams toil
To break earth's sleep at all?
Here’s how Staten put the matter (p. 39-40):
In this poem the speaker speaks of a companion whose newly dead body lies, apparently, nearby. Since Owen’s greatest poetry is to be found in his war poems, and “Futility” was written during the war, is about a recently dead man, and laments his death, this poem is naturally read by practically all critics as a war poem. Yet it contains not a single explicit reference to war. There is a mention of France, but that’s it. Owen’s other war poems are much more direct in their references to the war. There are strong contextual reasons to read “Futility” as a war poem; and yet, someone who stubbornly, and perhaps against commonsense, insisted on reading only what is “in” the poem would be unable to find any justification for such a reading.
He concludes this preamble by observing (p. 40): “I have to admit I have to exercise considerable mental discipline on myself not to see the reference to France as decisive; but when I read the whole poem with care, I’m quite sure that this is not a war poem.”

A War Poem?

In response, Attridge notes that he’s always read the poem as a war poem, “though it’s clearly more than that” (p. 40), and admits that the most obvious reason for so doing is that Owen is known as a war poet. If we didn’t know that, however, if we just came upon the poem on a page with no contextual information, not even the author’s name, how would we read the poem? He then goes on to worry about the distinction between what’s “inside” the text and what’s “outside” (the scare quotes are his) and to muse about the meanings of words, with references to Wittgenstein and Kripke, historical knowledge, and to authorial intention (“whatever Owen thought he was doing” p. 41).

Then he turns the discussion back to Staten, who allows that minimal reading assumes the sort of general cultural literacy that one could pick up through the Wikipedia, which in this case would identify Owen as an English poet and soldier who fought in World War I (p. 42). On that basis, the reference to France would place WWI “inside” the poem – the scare quotes are Staten’s, not mine. But I assure you that I would use them in that context as well, for we are on nebulous ground indeed when we talk about the “insides” and “outsides” of poems. Staten also notes that he’s reread all of Owen’s war poems and “this poem is uniquely evasive in its reference to war” (p. 48). So, he asserts, we must look closely at the technique of this poem.

After a bit of this and that Attridge notes that poems work line by line, to which Staten strongly assents (p. 44), and then the dialog is back to Attridge, who, among other things, offers some biographical details about Owen’s participation in, hospitalization during, and ultimate death in the war (pp. 45-46). Staten doesn’t know quite what to make of the biographical information, leading to a line of speculation leaving us “bogged down in the imponderability of context, and yet no amount of such information would tell us the specific effect of language Owen was trying for when he constructed his poem in just this way” (p. 47) and he goes on to note in passing, “the endless labor of reading the poem” (p. 47).

We have more this and that from both of them – including an interesting distinction between the speaker of the poem and the designing poetic intelligence who created that speaker (pp. 47, 49) – and Staten brings the discussion around to the word “this” as it functions in line 12 (3rd from the end). He offers five ways of reading it (p. 52):
Here is a scale of possible readings, from the most particularized to the most general, that could be given to this:

1. This individual death here. Since my comrade’s life has been prematurely ended by this brutal senseless war, it’s better that the earth had remained a cold, lifeless orb for all eternity.

2. This individual death as representative of the carnage of WWI. Since so many lives have been ended by this war as this one has, it’s better that ...

3. This individual death as representative of the senselessness of war death in general. Since many, many lives have been ended in a similar way by many wars, it’s better that ...

4. This individual death as representative of the premature, senseless cutting off of life in general, however this might happen. Since a multitude of lives has been ended, and continues to be ended, prematurely, senselessly, by war and many other causes, it’s better that ...

5. Since all organic life ends like this, in the stark horror of the corpse, which reverts to the cold clay from which it came, since there is no resurrection, not by the sun or by anything else, then all organic life had better not have existed.

Thursday, February 18, 2016

The persistence of attitudes rooted in slavery

Avidit Acharya, Matthew Blackwell, and Maya Sen
February 16, 2016
Forthcoming, Journal of Politics

Abstract
We show that contemporary differences in political attitudes across counties in the American South in part trace their origins to slavery’s prevalence more than 150 years ago. Whites who currently live in Southern counties that had high shares of slaves in 1860 are more likely to identify as a Republican, oppose affirmative action, and express racial resentment and colder feelings toward blacks. These results cannot be explained by existing theories, including the theory of contemporary racial threat. To explain these results, we offer evidence for a new theory involving the historical persistence of political and racial attitudes. Following the Civil War, Southern whites faced political and economic incentives to reinforce existing racist norms and institutions to maintain control over the newly free African-American population. This amplified local differences in racially conservative political attitudes, which in turn have been passed down locally across generations. Our results challenge the interpretation of a vast literature on racial attitudes in the American South.
From the introduction:
In this paper, we show that the local prevalence of slavery—an institution that was abolished 150 years ago—has a detectable effect on present-day political attitudes in the American South. Drawing on a sample of more than 40,000 Southern whites and historical census records, we show that whites who currently live in counties that had high concentrations of slaves in 1860 are today on average more conservative and express colder feelings toward African Americans than whites who live elsewhere in the South. That is, the larger the number of slaves per capita in his or her county of residence in 1860, the greater the probability that a white Southerner today will identify as a Republican, oppose affirmative action, and express attitudes indicating some level of “racial resentment.” We show that these differences are robust to accounting for a variety of factors, including geography and mid-19th century economic and social conditions. These results strengthen when we instrument for the prevalence of slavery using geographic variation in cotton growing conditions.

We consider several explanations for our results rooted in contemporary forces and find each to be inconsistent with the empirical evidence. For example, we consider the possibility that whites are simply more racially conservative when exposed to larger black populations—the central finding of the literature on racial threat (Key, 1949; Blalock, 1967; Blumer, 1958). However, when we estimate the direct effect of slavery on contemporary attitudes (Acharya, Blackwell and Sen, 2016), we find that contemporary shares of the black population explain little of slavery’s effects. We also test various other explanations, including the possibility that slavery’s effects are driven exclusively by 20th-century population shifts or income inequality between African Americans and whites. We find no evidence that these contemporary factors and theories of population sorting fully account for our results. Introducing individual-level and contextual covariates commonly used in the public opinion literature also does not explain away our finding.

To explain our results, we instead propose a theory of the historical persistence of political attitudes. The evidence suggests that regional differences in contemporary white attitudes in part trace their origins to the late slave period and the time period after its collapse, with prior work suggesting that the fall of slavery was a cataclysmic event that undermined Southern whites’ political and economic power. For example, Key (1949), Du Bois (1935), and Foner (2011) (among others) have argued that the sudden enfranchisement of blacks was politically threatening to whites, who for centuries had enjoyed exclusive political power. In addition, the emancipation of Southern slaves undermined whites’ economic power by abruptly increasing black wages, raising labor costs, and threatening the viability of the Southern plantation economy (Ransom and Sutch, 2001a; Alston and Ferrie, 1993). Taken in tandem with massive preexisting racial hostility throughout the South, these political and economic changes gave Southern Black Belt elites an incentive to further promote existing anti-black sentiment in their local communities by encouraging violence towards blacks and racist attitudes and policies (Roithmayr, 2010). This amplified the differences in white racial hostility between former slaveholding areas and nonslaveholding areas, and intensified racially conservative political attitudes within the Black Belt. These have been passed down locally, one generation to the next.
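A technical aside for readers curious about the "instrument for the prevalence of slavery" step: here's a bare-bones two-stage least squares sketch on synthetic data. It's a generic illustration of the IV recipe, not the authors' specification or data; all variable names and numbers are made up.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000

# Synthetic county-level data (illustration only, not the paper's data).
cotton_suitability = rng.normal(size=n)          # instrument
confound = rng.normal(size=n)                    # unobserved local factor
slave_share_1860 = 0.8 * cotton_suitability + 0.5 * confound + rng.normal(size=n)
attitude_today = 1.5 * slave_share_1860 + 2.0 * confound + rng.normal(size=n)

def ols(y, X):
    """Least-squares fit with an intercept; returns [intercept, slope]."""
    X = np.column_stack([np.ones(len(y)), X])
    return np.linalg.lstsq(X, y, rcond=None)[0]

# Naive OLS is biased upward by the confound.
print("OLS slope:  ", ols(attitude_today, slave_share_1860)[1])

# Two-stage least squares:
# Stage 1: predict the endogenous regressor from the instrument.
stage1 = ols(slave_share_1860, cotton_suitability)
predicted_share = stage1[0] + stage1[1] * cotton_suitability
# Stage 2: regress the outcome on the predicted values.
print("2SLS slope: ", ols(attitude_today, predicted_share)[1])  # close to the true 1.5
```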
From my point of view this is about cultural evolution. In this case we have persistence in the face of change. The environment in which culturally-based attitudes, ideas, and practices thrive or die is what I have called the cultural reticulum:
I’ve long held the view that the environment to which cultural “things” must adapt is the human mind. But not the individual mind. Rather, a bunch of individual minds interacting in a group... Well, there’s “network”, and networks are all the rage these days. That in itself is a problem. “The cultural network” is just another network. “Web” and “mesh” have similar issues. I’ve also considered “matrix” and “lattice”. “Cultural matrix” does have possibilities, but it bumps into those SF movies about humans in vats living their lives in virtual reality.
Why not call this network of interacting minds a reticulum, the cultural reticulum?

H/t Tyler Cowen.

Action at a distance: dependency sensitivity in a New World primate

Biology Letters

Andrea Ravignani, Ruth-Sophie Sonnweber, Nina Stobbe, W. Tecumseh Fitch

Tuesday, February 16, 2016

The Human Affectome Project


There are two important issues in affective neuroscience that are creating challenges for researchers in the field

The first issue is the longstanding challenge of finding a comprehensive and robust functional model for emotions and feelings that can serve as a common focal point for research in the field. Although many models of emotion have been proposed, and many testable hypotheses have been generated [1], broad agreement in support of a single model has been elusive and many competing perspectives exist. Perhaps this sort of heterogeneity is healthy for discourse in any emerging field, but affective research is advancing rapidly and this lack of agreement creates numerous challenges, not the least of which relates to common terminology. For example, precise definitions for the terms "emotion" and "feeling" have not been agreed upon, and although various attempts have been made to define a common set of emotions, no firm agreement has been reached, so inconsistent constructs persist in the literature even at the most fundamental levels [2].

The second issue is the fractious standing debate over emotions as natural kinds. In 1990 Ortony and Turner challenged the belief that there might be neurophysiological and anatomical substrates corresponding to the basic emotions [3]. This perspective has been more recently championed by Barrett who has similarly argued that personal reports of basic emotions are not necessarily correlated with specific causal mechanisms in the brain and/or properties that are observable (on the face, in the voice, in the body, or in experience) [4,5]. Panksepp has responded citing research that supports the existence of a variety of core emotional operating systems in ancient subneocortical regions of the brain, and arguing that these systems are primary-process ancestral birthrights of all mammals [6,7]. But Barrett et al have continued to promote a psychological constructionist approach to emotion which has created a significant division within the field [5,8].

The Human Affectome project has been conceived to address both of these issues. While each issue represents a significant challenge in its own right, addressing the first issue (i.e., by developing a comprehensive and robust functional model for emotions and feelings that can serve as a common focal point for research in the field) will entail a substantive analysis of the evidence currently dividing the field on emotions as natural kinds. So there is good reason to believe that both issues can be addressed simultaneously.

References
  1. Sander, D. in The Cambridge Handbook of Human Affective Neuroscience (eds J. Armony & P. Vuilleumier) (Cambridge University Press, 2013).
  2. Izard, C. E. Basic Emotions, Natural Kinds, Emotion Schemas, and a New Paradigm. Perspectives on psychological Science : a journal of the Association for Psychological Science 2, 260-280, doi:10.1111/j.1745-6916.2007.00044.x (2007).
  3. Ortony, A. & Turner, T. J. What's basic about basic emotions? Psychological review 97, 315-331 (1990).
  4. Barrett, L. F. Are Emotions Natural Kinds? Perspectives on psychological science : a journal of the Association for Psychological Science 1, 28-58, doi:10.1111/j.1745-6916.2006.00003.x (2006).
  5. Lindquist, K. A., Siegel, E. H., Quigley, K. S. & Barrett, L. F. The hundred-year emotion war: are emotions natural kinds or psychological constructions? Comment on Lench, Flores, and Bench (2011). Psychological bulletin 139, 255-263, doi:10.1037/a0029038 (2013).
  6. Panksepp, J. Neurologizing the Psychology of Affects: How Appraisal-Based Constructivism and Basic Emotion Theory Can Coexist. Perspectives on psychological science : a journal of the Association for Psychological Science 2, 281-296, doi:10.1111/j.1745-6916.2007.00045.x (2007).
  7. Panksepp, J. Cognitive Conceptualism-Where Have All the Affects Gone? Additional Corrections for Barrett et al. (2007). Perspectives on psychological science : a journal of the Association for Psychological Science 3, 305-308, doi:10.1111/j.1745-6924.2008.00081.x (2008).
  8. Lindquist, K. A., Gendron, M., Oosterwijk, S. & Barrett, L. F. Do people essentialize emotions? Individual differences in emotion essentialism and emotional experience. Emotion 13, 629-644, doi:10.1037/a0032283 (2013).
* * * * *

They're recruiting researchers for the initial phase of the project, which involves a review and synthesis of the relevant literatures. The website has further explanations of the project. The project is expected to take two to three years. There will be an initial workshop in Halifax, Nova Scotia, in August of this year.

Friday, February 12, 2016

How many degrees of separation?

As many of you know, the good folks at Facebook have been investigating the "degrees of separation" issue in human social networks. They recently announced that a given Facebook member is separated from other FB members by an average of 3.57 intermediaries (for me, the number is 2.95, which is rather astonishing given my hermit nature). Duncan Watts is a mathematician and sociologist who has investigated this problem (Six Degrees: The Science of a Connected Age) and has some interesting observations.

First, there is a semantic issue. X degrees of separation (as the matter is often put) corresponds to X-1 intermediaries. That's mere semantics and implies, of course, that FB's announcement, which was stated in terms of intermediaries, corresponds to 4.57 degrees of separation. Another distinction, however, is much more interesting. It is between algorithmic and topological versions of the problem:
Second, though, Milgram’s experiment was a subtly but importantly different test than the one run by Facebook. Whereas the latter measured the length of the shortest possible path between two people — by exhaustively searching every link in the underlying Facebook graph — the former is simply the shortest path that ordinary people could find given very limited information about the underlying social network. There are, in other words, two versions of the small-world hypothesis — the “topological” version, which refers only to underlying network structure, and the “algorithmic” version, which refers to the ability of people to search this underlying structure. From these definitions, it follows that algorithmic (search) paths cannot be shorter than topological paths and are almost certainly longer. Saying that the world has gotten smaller because the shortest topological path length is 4.5 not 6 therefore makes no sense — because the equivalent number would have been smaller in Milgram’s day as well.
And then there is this:
In a nutshell what we showed is that it is easy to turn a “large” world into a “small” one, just by adding a small fraction of random, long-range links, reminiscent of Mark Granovetter’s famous “weak ties.” The flip side of our result, however, is that once the world has already gotten small — as it was already by the 1960's — it is extremely hard to make it smaller. Obviously Facebook did not exist in 2003 so possibly since then something has indeed changed. But I suspect that the difference will be small.
So?
Why does any of this matter? There are three reasons. First, the two versions of the small-world hypothesis — topological and algorithmic — are relevant to different social processes. The spread of a sexually transmitted disease along networks of sexual relations, for example, does not require that participants have any awareness of the disease, or intention to spread it; thus for an individual to be at risk of acquiring an infection, he or she need only be connected in the topological sense to existing infectives. On the contrary, individuals attempting to “network” — in order to locate some resources like a new job or a service provider — must actively traverse chains of referrals, and thus must be connected in the algorithmic sense. Depending on the application of interest, therefore, either the topological or algorithmic distance between individuals may be more relevant — or possibly both together. Second, whereas the topological hypothesis has been shown to apply essentially universally, to networks of all kinds, the algorithmic hypothesis is largely (although not exclusively) concerned with social networks in which human agents make decisions about how to direct messages.
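Watts's point that a few random long-range links turn a "large" world into a "small" one is easy to see for yourself. Here's a minimal sketch using networkx (the parameters are arbitrary, chosen just for illustration); note that it computes the topological quantity – average shortest path length – not the algorithmic one, which would require simulating how people actually search the network with only local information.

```python
import networkx as nx

n, k = 1000, 10  # 1,000 nodes, each initially tied to its 10 nearest neighbors

for p in [0.0, 0.01, 0.1]:  # fraction of edges rewired into random long-range links
    G = nx.watts_strogatz_graph(n, k, p, seed=42)
    # Average shortest path length = the "topological" degrees of separation.
    print(f"rewiring p={p}: avg path length = "
          f"{nx.average_shortest_path_length(G):.2f}")
```

With no rewiring the average path length is on the order of n/2k; rewiring even 1% of the edges collapses it to single digits, which is Watts's "easy to make a large world small" point.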

Thursday, February 11, 2016

Further Thoughts on Underwood et al. on ‘revolution’ in cultural change

I’ve got some further thoughts on Underwood et al., You say you found a revolution, over at The Stone and the Shell.

What is time?

What I think is that the questions raised ultimately go very deep. One place they end up is the nature of time and thus of history. One view (and I believe it has a name) is that time is just an ‘empty’ framework in which things happen. But another view (again with a name) sees time as arising from/with causal processes at work in the world. On this view the emergence of long – whatever that means – unidirectional chains of events might reflect a new causal order.

And isn’t that, after all, how we (for some unspecified value of “we”) think of the emergence of humankind and culture, that it marks the emergence of ontologically new phenomena in the universe?

Causal Dynamics

It’s very easy, too easy, to think of the data as just, well, you know, data. We just collect the ‘texts’, whether literature or music or whatever, work on them, and draw conclusions. We know, of course, that the data is evidence about a historical process – that’s why it interests us in the first place – but we don’t necessarily think ‘through’ the data to the underlying causal mechanisms. The interesting thing about what Jockers did is that, in effect, by examining similarity he was in some sense able to examine the dynamics of the system without having any explicit temporal information in his data set. Thus the fact that his analysis produced a result that maps onto time quite neatly is interesting and revealing.

It tells us something quite profound about those dynamics. But what? (See the next section.)

Mauch et al. didn’t undertake that kind of analysis. They’ve got temporal information in their data. But there’s no reason one couldn’t analyze their data in the way Jockers did his. They started with c. 30 features divided between harmony and timbre. Why not deal with them in the same way Jockers dealt with his novels data? Temporal direction should pop out of that analysis the way it did out of Jockers’.
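Here's a rough sketch of what such an analysis might look like: synthetic features standing in for the harmony/timbre measurements, a pairwise distance matrix that contains no dates, and a one-dimensional embedding to see whether temporal order falls out of similarity structure alone. This is a generic illustration, not Jockers's or Mauch et al.'s actual pipeline, and the "features" are invented.

```python
import numpy as np

rng = np.random.default_rng(1)
years = np.arange(1960, 2011)

# Synthetic stand-in for ~30 harmonic/timbral features per year: a slow drift
# along a fixed direction plus noise (illustration only; not the real data).
drift = np.linspace(0, 1, len(years))[:, None] * rng.normal(size=(1, 30))
features = drift + 0.1 * rng.normal(size=(len(years), 30))

# Pairwise distance matrix -- note it contains no explicit temporal information.
D = np.linalg.norm(features[:, None, :] - features[None, :, :], axis=2)

# Classical MDS: embed the distance matrix in one dimension.
m = len(years)
J = np.eye(m) - np.ones((m, m)) / m
B = -0.5 * J @ (D ** 2) @ J
eigvals, eigvecs = np.linalg.eigh(B)
coord = eigvecs[:, -1] * np.sqrt(max(eigvals[-1], 0))

# If similarity structure alone encodes temporal direction, the leading MDS
# coordinate should be (nearly) monotonic in the year (sign is arbitrary).
print("correlation with year:", abs(np.corrcoef(coord, years)[0, 1]))
```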

Whig History?

And then, lurking around the corner, we’ve got the specter of Whig history. I understand that Underwood et al. do not in fact believe that history is unidirectional; their remarks just reflect what seems to be in the data they’re examining (Underwood communicated this in private email). The Whig interpretation of history, however, assumes that history has a direction.

But that, of course, is not all it assumes. It also assumes that history is moving from a primitive state to a more advanced state, one that is better (in every way) than earlier states. Of course, Underwood et al. neither said nor implied anything about cultural progress. I’m just pointing out that, the moment we assert that history is unidirectional, Whig history is lying in wait for us.

While Whig history is suspect as a privileged reading of the historical record, we should remind ourselves that it is not a completely delusional reading of that record. There does seem to be a direction to (at least certain ranges of) historical phenomena. How do we account for that?

That, I fear, is a deep question, and it isn’t going to be answered anytime soon.

Tuesday, February 9, 2016

Monday, February 8, 2016

Historical direction and popular music, a reply to Underwood et al. (Feb 2016)

Over at The Stone and the Shell there’s an interesting post by Ted Underwood, Hoyt Long, Richard Jean So, and Yuancheng Zhu, You say you found a revolution. It’s a critique of Mauch et al., “The Evolution of Popular Music: USA 1960-2010”, from 2015. Mauch et al. analyzed 17,000 recordings that topped the Billboard charts during that period, assessed their similarity on harmonic and timbral properties, and argued for three ‘revolutions’ during that interval, at roughly 1964, 1983, and 1991. Underwood et al. argue that the claim is overstated and that they’ve mis-analyzed their data. As Mauch et al. have made their data public, Underwood et al. were able to reanalyze it, to more modest conclusions.

In the course of explaining their work, Underwood et al. made some assertions I found to be problematic. So I wrote to Underwood about it, he replied, and has asked me to post my observations to my blog. That’s what’s in the rest of this post.

* * * * *

Why Assume Linear Direction in Time?

Hi Ted,

I’ve read my way through this and I’m not quite sure what I think. I have no strong attachment to the revolution argument – seems to me too many “revolutions” for that stretch of time – but I think you have a hidden assumption in your argument. Here are two passages where that assumption shows up:
History doesn’t repeat itself in the same way. It’s extremely likely (almost certain) that music from 1992 will resemble music from 1991 more than it resembles music from 1965. That’s why the historical distance matrix has a single broad yellow path running from lower left to upper right. 
As a result, historical sequences are always going to produce very high measurements of Foote novelty. Comparisons across a boundary will always tend to create higher distances than the comparisons within the half-spans on either side, because differences across longer spans of time always tend to be bigger.
And:
In short, the tests in Mauch et al. don’t prove that there were significant moments of acceleration in the history of music. They just prove that we’re looking at historical evidence! The authors have interpreted this as a sign of “revolution,” because all change looks revolutionary when compared to temporal chaos.
The assumption you’re making is that history has a default direction and that it is linear. That is, linear change of the kind we see in that data set requires no explanation, though acceleration and deceleration do. But I think that the direction itself requires explanation, though just how to go about that is not clear to me.
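For readers who haven't run into it, Foote novelty is simple enough to sketch: slide a checkerboard kernel along the diagonal of the distance matrix and record how strongly each point looks like "similar within each half-span, different across the boundary." A generic numpy version (mine, not Underwood et al.'s code; the half-width is arbitrary):

```python
import numpy as np

def foote_novelty(D, half_width=5):
    """Slide a checkerboard kernel along the diagonal of distance matrix D.

    High values mark moments where the two half-spans on either side are
    internally similar but different from each other -- candidate "revolutions".
    """
    w = half_width
    # Checkerboard kernel: +1 in the off-diagonal (across-boundary) blocks,
    # -1 in the on-diagonal (within-half-span) blocks.
    kernel = np.block([[-np.ones((w, w)), np.ones((w, w))],
                       [np.ones((w, w)), -np.ones((w, w))]])
    n = D.shape[0]
    novelty = np.full(n, np.nan)
    for i in range(w, n - w):
        window = D[i - w:i + w, i - w:i + w]
        novelty[i] = np.sum(kernel * window)
    return novelty

# Usage sketch: D would be a year-by-year distance matrix such as the one
# Mauch et al. published; peaks in foote_novelty(D) are the claimed revolutions.
```

Underwood et al.'s point, in these terms, is that any matrix with the broad yellow diagonal path of ordinary historical drift will produce substantial novelty values, so the peaks have to be judged against that baseline, not against shuffled, temporally chaotic data.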

Three Moments in America’s Conversation on Race

Over at 3 Quarks Daily, my piece for February. Here's my introduction:

In Playing in the Dark, a set of essays on race in American literature, Toni Morrison is led “to wonder whether the major and championed characteristics of our national literature . . . are not in fact responses to a dark, abiding, signing Africanist presence. . . . Through significant and underscored omissions, startling contradictions, heavily nuanced conflicts, through the way writers peopled their work with the signs and bodies of this presence--one can see that a real or fabricated Africanist presence was crucial to their sense of Americanness.” That is to say, the sense of American identity embodied in our literature is at least partially achieved through reference to African Americans.

Let’s consider three imaginative works where race is an issue. First we have Shakespeare’s The Tempest. It is not American, of course, but English. The character of Caliban, who may not even be human, marks the imaginative space the English used for understanding Africans. The play was written and performed at about the same time as Jamestown, Virginia, was first settled.

Then we move forward two and a half centuries to the late 19th Century. America has established itself as an independent nation and fought its bloodiest war, the Civil War, over the status of the American sons and daughters of Caliban. We find Huck Finn fleeing his abusive father by rafting down the Mississippi with a runaway slave. Jim sure isn’t Shakespeare’s Caliban nor is Huck a Prospero. I conclude with a counter narrative from the early 20th Century, an African-American “toast”, as they’re called, about the sinking of the Titanic. Think of such oral narratives as antecedents of rap and hip-hop.

Universal Semantic Structure?

Hyejin Youn, Logan Sutton, Eric Smith, Cristopher Moore, Jon F. Wilkins, Ian Maddieson, William Croft, and Tanmoy Bhattacharya. On the universal structure of human lexical semantics. PNAS, 2016. doi: 10.1073/pnas.1520752113

Significance

Semantics, or meaning expressed through language, provides indirect access to an underlying level of conceptual structure. To what degree this conceptual structure is universal or is due to properties of cultural histories, or to the environment inhabited by a speech community, is still controversial. Meaning is notoriously difficult to measure, let alone parameterize, for quantitative comparative studies. Using cross-linguistic dictionaries across languages carefully selected as an unbiased sample reflecting the diversity of human languages, we provide an empirical measure of semantic relatedness between concepts. Our analysis uncovers a universal structure underlying the sampled vocabulary across language groups independent of their phylogenetic relations, their speakers’ culture, and geographic environment.

Abstract

How universal is human conceptual structure? The way concepts are organized in the human brain may reflect distinct features of cultural, historical, and environmental background in addition to properties universal to human cognition. Semantics, or meaning expressed through language, provides indirect access to the underlying conceptual structure, but meaning is notoriously difficult to measure, let alone parameterize. Here, we provide an empirical measure of semantic proximity between concepts using cross-linguistic dictionaries to translate words to and from languages carefully selected to be representative of worldwide diversity. These translations reveal cases where a particular language uses a single “polysemous” word to express multiple concepts that another language represents using distinct words. We use the frequency of such polysemies linking two concepts as a measure of their semantic proximity and represent the pattern of these linkages by a weighted network. This network is highly structured: Certain concepts are far more prone to polysemy than others, and naturally interpretable clusters of closely related concepts emerge. Statistical analysis of the polysemies observed in a subset of the basic vocabulary shows that these structural properties are consistent across different language groups, and largely independent of geography, environment, and the presence or absence of a literary tradition. The methods developed here can be applied to any semantic domain to reveal the extent to which its conceptual structure is, similarly, a universal attribute of human cognition and language use.
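The weighted-network construction the abstract describes is easy to sketch. In the toy below the data are invented (not the paper's sample): each entry is the set of concepts that a single word in some language covers, and every co-occurring pair of concepts gets its edge weight bumped, which then serves as the measure of semantic proximity.

```python
import itertools
import networkx as nx

# Toy polysemy data (invented): each entry is the set of concepts that one
# word, in one language, is used to express.
polysemous_words = [
    {"SUN", "DAY"},    # e.g. a language whose word for "sun" also means "day"
    {"SUN", "DAY"},
    {"MOON", "MONTH"},
    {"SEA", "SALT"},
    {"SEA", "LAKE"},
    {"DAY", "SKY"},
]

G = nx.Graph()
for concepts in polysemous_words:
    for a, b in itertools.combinations(sorted(concepts), 2):
        # Edge weight = number of words linking the two concepts,
        # used as the measure of semantic proximity.
        if G.has_edge(a, b):
            G[a][b]["weight"] += 1
        else:
            G.add_edge(a, b, weight=1)

print(sorted(G.edges(data="weight")))
# Clusters of closely related concepts can then be read off the weighted graph.
```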

* * * * *

Thursday, February 4, 2016

100 Milestones in Modern Comedy

I just spent a couple hours working my way through The 100 Jokes That Shaped Modern Comedy, by Jesse David Fox over at Vulture. Here's the concept:
The oldest joke on record, a Sumerian proverb, was first told all the way back in 1900 B.C. Yes, it was a fart joke: “Something which has never occurred since time immemorial; a young woman did not fart in her husband's lap.” Don’t feel bad if you don’t get it — something was definitely lost in time and translation (you have to imagine it was the Mesopotamian equivalent of “Women be shopping”), but not before the joke helped pave the way for almost 4,000 years of toilet humor. It’s just a shame we’ll never know the name of the Sumerian genius to whom we owe Blazing Saddles. But with the rise of comedy as a commercial art form in the 20th century, and with advances in modern bookkeeping, it’s now much easier to assign credit for innovations in joke-telling, which is exactly what Vulture set out to do with this list of the 100 Jokes That Shaped Modern Comedy.

A few notes on our methodology: We’ve defined “joke” pretty broadly here. Yes, a joke can be a one-liner built from a setup and a punch line, but it can also be an act of physical comedy. Pretending to stick a needle in your eye, or pooping in the street while wearing a wedding dress: both jokes. A joke, as defined by this list, is a discrete moment of comedy, whether from stand-up, a sketch, an album, a movie, or a TV show.

For clarity’s sake, we’ve established certain ground rules for inclusion. First, we decided early on that these jokes needed to be performed and recorded at some point. Second, with apologies to Monty Python, whose influence on contemporary comedy is tremendous and undeniable, we focused only on American humor. Third, we only included one joke per comedian. And fourth, the list doesn't include comedy that we ultimately felt was bad, harmful, or retrograde.
Cruise on over there. You're sure to find something you like and something you didn't know. I'll be back.

H/t 3QD.