
Monday, March 31, 2014

The Perils of Big Data

The Financial Times has an interesting article on Big Data. Yes, it's all over the place, and, yes, it allows us to do things we couldn't before. But let's not get ahead of ourselves:

Four claims for Big Data:
Cheerleaders for big data have made four exciting claims, each one reflected in the success of Google Flu Trends: that data analysis produces uncannily accurate results; that every single data point can be captured, making old statistical sampling techniques obsolete; that it is passé to fret about what causes what, because statistical correlation tells us what we need to know; and that scientific or statistical models aren’t needed because, to quote “The End of Theory”, a provocative essay published in Wired in 2008, “with enough data, the numbers speak for themselves”.
But, each has a catch:
Recall big data’s four articles of faith. Uncanny accuracy is easy to overrate if we simply ignore false positives, as with Target’s pregnancy predictor. The claim that causation has been “knocked off its pedestal” is fine if we are making predictions in a stable environment but not if the world is changing (as with Flu Trends) or if we ourselves hope to change it. The promise that “N = All”, and therefore that sampling bias does not matter, is simply not true in most cases that count. As for the idea that “with enough data, the numbers speak for themselves” – that seems hopelessly naive in data sets where spurious patterns vastly outnumber genuine discoveries.
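To make that last point concrete, here's a toy simulation of my own (not from the FT piece): generate a couple hundred columns of pure noise and count how many pairs of them look "interestingly" correlated anyway. The 0.25 threshold is arbitrary; the point is that testing thousands of pairs against noise will always cough up apparent patterns.

    # Toy illustration: with enough unrelated variables, "strong" correlations
    # appear by chance alone. Every variable here is pure noise.
    import numpy as np

    rng = np.random.default_rng(0)
    n_obs, n_vars = 100, 200                      # 200 independent noise series, 100 observations each
    data = rng.normal(size=(n_obs, n_vars))

    corr = np.corrcoef(data, rowvar=False)        # correlate every pair of columns
    upper = corr[np.triu_indices(n_vars, k=1)]    # keep each pair only once
    spurious = np.sum(np.abs(upper) > 0.25)       # arbitrary "interesting" threshold

    print(f"{upper.size} pairs tested, {spurious} look interesting -- all of them spurious")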

MOOCs and Liberal Education: for the autodidact in all of us

John Holbo is discussing MOOCs at Crooked Timber. Part 1, Reason and Persuasion On Coursera – or – Look, Ma, I’m a MOOC; part 2, The Game of Wrong, and Moral Psychology. Here's a comment in part 2:
John Holbo 03.31.14 at 4:10 am 
Not to yet further break the butterfly of mdc’s objections on the wheel of what Clay was actually saying, but it’s worth noting that the ideal of liberal arts education as spiritual good in itself is a kind of four-year program of assisted auto-didacticism. One problem with holding up this ideal, to reproach MOOC’s, is that MOOC’s are actually good at assisted auto-didacticism, for those capable of it. (If there’s a problem, it is that MOOC’s are only good for this, not that they are not good at this.) If you are the sort of student who could get the spiritual benefit of going to Harvard, and taking philosophy, you are probably the sort of student who would benefit from a philosophy MOOC. I’m not saying Harvard isn’t better. But the MOOC is a lot cheaper and less rivalrous, as goods go.

Ring Composition in Alan Liu’s Essay, “The Meaning of the Digital Humanities”

I would like to conclude Alan Liu week at New Savanna by returning to the point where I began, his essay, “The Meaning of the Digital Humanities” (PMLA 128, 2013, 409-423). As I was writing my original post I was struck by a paragraph-opening sentence near the end: “It is not accidental, I can now reveal, that at the beginning of this essay I alluded to Lévi-Strauss and structural anthropology.” The sentence clearly states that Liu did something at the beginning of the essay so as to set us up for something coming at the end. There’s nothing unusual in that; skillful, and even not so skillful, writers do that sort of thing all the time.

But Liu's telling us about his rhetorical sleight of hand? Why? I don't know and, to be honest, that particular question only just now occurred to me as I'm writing this post.

What struck me a week ago when I pasted that sentence into my post and then reread it was the possibility that Liu's article was an example of ring-composition – which, you may know, is one of my interests. While I first learned about ring forms in a 1976 article by R. G. Peterson, "Critical Calculations: Measure and Symmetry in Literature" (PMLA 91, 3: 367-375), it was the late Mary Douglas who got me to think seriously about ring forms; she devoted her final book to them, Thinking in Circles: An Essay on Ring Composition (Yale 2007).

But that was about literary texts of one sort or another. Douglas was particularly interested in Old Testament texts, but had also worked on Tristram Shandy; others were interested in the Homeric texts; I’ve been most interested in films, most recently Gojira (Godzilla), a 1954 Japanese film. But Liu’s essay is not a literary text, nor a religious one. No one to my knowledge has investigated ring composition in expository prose.

So, why would such a question arise in the case of Liu's essay? Well, consider what is generally meant by ring-form. Such texts generally have a form like this: A, B, … X … B', A'. So, let us substitute "Lévi-Strauss" for A in that structural pattern:

Lévi-Strauss, B, … X … B’, Lévi-Strauss’

If Liu’s essay follows that pattern then I need to find something that’s structurally central and at least one pair of elements that are closely related to one another where one comes before and the other after the structural center.
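As a rough picture of what that search involves, here is a toy sketch of my own (the labels are hypothetical stand-ins for whatever thematic tags an analyst might assign to the essay's sections). It simply checks whether a sequence of labeled sections mirrors itself around a single central element, which is the bare minimum the A, B, … X … B', A' pattern requires.

    # Toy sketch: does a sequence of section labels form a ring (A, B, ... X ... B', A')?
    # Paired elements (A and A', B and B') are represented here by identical labels.
    def is_ring(sections):
        """True if the sequence mirrors itself around one unpaired central element."""
        if len(sections) % 2 == 0:                # a ring needs a single center
            return False
        mid = len(sections) // 2
        first, second = sections[:mid], sections[mid + 1:]
        return first == list(reversed(second))

    essay = ["Levi-Strauss", "B", "X", "B", "Levi-Strauss"]   # A, B, X, B', A'
    print(is_ring(essay))                         # True: "X" sits at the structural center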

Humans in the Americas for 22K years rather than only 13K

Whoops! The past has done it again: tossed up a fact that upended decades of conventional wisdom. NYTimes reports:
Researchers here say they have unearthed stone tools proving that humans reached what is now northeast Brazil as early as 22,000 years ago. Their discovery adds to the growing body of research upending a prevailing belief of 20th-century archaeology in the United States known as the Clovis model, which holds that people first arrived in the Americas from Asia about 13,000 years ago.

“If they’re right, and there’s a great possibility that they are, that will change everything we know about the settlement of the Americas,” said Walter Neves, an evolutionary anthropologist at the University of São Paulo whose own analysis of an 11,000-year-old skull in Brazil implies that some ancient Americans resembled aboriginal Australians more than they did Asians.

Interesting interview with John Searle

In my part of the intellectual woods he's perhaps best known for his Chinese room thought experiment, which has always struck me as being beside the point. Here's his quick gloss on it:
Well, in this particular case I imagined what it would be like if I followed a program for answering questions in Chinese and giving back answers in Chinese, even though I don’t understand a word of Chinese. And that was a very useful thought experiment because it enables us to see that computation by itself isn’t thinking.
So?

On consciousness:
Consciousness is a biological property like digestion or photosynthesis. Now why isn’t that screamingly obvious to anybody who’s had any education? And I think the answer is these twin traditions. On the one hand there’s God, the soul and immortality that says it’s really not part of the physical world, and then there is the almost as bad tradition of scientific materialism that says it’s not a part of the physical world. They both make the same mistake, they refuse to take consciousness on its own terms as a biological phenomenon like digestion, or photosynthesis, or mitosis, or meiosis, or any other biological phenomenon.
Yes.

Other contemporary philosophy doesn't much interest him, but he likes fiction:
I don’t read much philosophy, it upsets me when I read the nonsense written by my contemporaries, the theory of extended mind makes me want to throw up…so mostly I read works of fiction and history. I love reading history books and I love reading works of fiction, there’s just an enormous amount of great stuff written.

Faulkner, the great American modernists, I can’t tell you the influence they’ve had on me. No philosopher has influenced me as much as Hemingway, Faulkner and Fitzgerald – they’ve had an enormous influence on my whole sensibility – and the whole American modernist tradition. There are so many great history books and great novels, not to mention poetry and other forms of literature, that I spend much more time on literature than I do on philosophy. I’m not boasting about that, I’m complaining, I probably should read more philosophy than I do. But I think a lot of works of philosophy are like root-canal work, you just think you’ve got to get through that damn thing.
On the history of philosophy:
Now, I admire the history of philosophy, but not for the right reasons. I don’t think I learnt a lot of truths from reading Leibniz or Kant. I think Leibniz was probably the most intelligent person who ever lived, but I think his philosophical views are probably pretty much mistaken. I mean, the bit about the monads and so on. Kant was probably the greatest philosopher that ever lived and he is an obsession, but I think the whole thing is based on a mistake – that you can’t have a direct knowledge of things in themselves. You can. I’m looking at a desk and I see a thing in itself.
Read the full interview at NewPhilosopher.

Thursday, March 27, 2014

Alan Liu: Reengaging the Humanities

Alan Liu week continues at New Savanna. Now I want to look at an interview he gave to Scott Pound at Amodern: Reengaging the Humanities. While the interview is worthwhile in its entirety, I’ll select passages where Liu’s thinking (usefully) intersects with my own.

First, though, I want to cite a biographical passage which speaks to attitude and intuition. While talking about Bruno Latour’s interrogation of critique in the name of compositionism, Liu informs us that he’s “the son of a structural engineer and descended from a whole clan who were allowed to immigrate to the U.S. because they were engineers and builders.”

Yes.

And I’m the son of a chemical engineer who spent his professional career in the coal business. Engineering is about designing things and building them.* I used “Speculative Engineering” as the title for the preface to my book on music (Beethoven’s Anvil) because I wanted to emphasize the constructive nature of my enterprise. I talked about the building blocks I played with as a child and posed the question: “How does the nervous system design and construct music?” (xiii). I think about literary texts and systems of texts in the same way: How are they built?

Returning to Liu, he has quite a bit to say about critique and compositionism. Note his use of engineering language in the following passage:
Critique and compositionism are best understood as arcs in a common cycle of thought, whether at the level of individual projects or of longer generational agendas. Think of it this way: in any project there are tactically important moments when critique is constructive, e.g., at the beginning when assessing what is wrong with precedents or in the middle after the first prototype. Equally, there are tactically shrewd moments when composited methods and viewpoints are constructive (e.g., when the architect pitches a project to a client and has to incorporate the client’s views, when the architect then has to adjust plans in response to the structural engineer, and when the engineer subsequently has to adjust plans in response to the contractors, not to mention the tactically decisive construction workers who actually wield the hammers). It’s just that neither critique nor compositionism has a right to rule as the “last word” in the process–the terminal stage, the end result, the payoff, the final record. In the humanities, I feel, we have fallen into the rut of thinking that interpretive discourse (e.g., a critical essay) should be the final statement of a project, and, further, that critique should be the final payoff of interpretation. [Emphasis mine, WLB] But what if we were to position interpretive discourse and critique elsewhere in the cycle of thought that goes into a project? Compositionism would then not be antithetical to critique; it would include the arc of critique, and vice versa, as part of the rolling launch of thought.
Perhaps we should remind ourselves that the primacy of interpretation is a relatively recent development in our disciplinary history, mostly post World War II, and that literary culture managed to function for centuries without a guild of professional interpreters.

Wednesday, March 26, 2014

The Thinkable and the Interesting: Katherine Hayles Interviews Alan Liu

Back in October of 2008 Katherine Hayles interviewed Alan Liu about the use of digital technology in the humanities. That interview is one of 20 you can access on the web HERE.

I’m particularly interested in Liu’s remarks here and there on “thinkability” and what people find interesting. Liu wonders why, for example, when online publication allows for multimedia texts, most online scholarship is fairly traditional in form and presentation. This particular topic grows out of a discussion of an online journal Liu has been involved with, Postmodern Culture.

The following remarks come relatively early in the 50-minute interview (my transcription):
Hayles (6:05): Have you seen a kind of real evolution in the kind of articles submitted to your journal? That is, do you see many more multi-media projects?

Liu: On the whole I’d have to say “No.” Obviously there has been some of that, and there are journals like Vectors that have emerged recently and have tried to promote that kind of thinking. But it’s really quite amazing that the profession as a whole has been pretty conservative and resistant to the sorts of things that you can now do and that I think would be wonderful if more people did, and that I’d really like to see. I love to see stuff like that come into the journal more often.

It does appear, but for people to be doing serious thinking and scholarship in a new mode, that simply takes a lot more than, I guess, 10 or 15 years to actually happen. It does happen, but ask yourself when was the last time you’ve written an essay that requires the new technology in a way that couldn’t be done in a print journal.

Tuesday, March 25, 2014

Video of parachute jump from NYC Freedom Tower

Jump happened at 3 a.m. on September 30, 2013. Four men were involved; three jumped (Mr. Rossig, James Brady, Marco Markovich) and one remained on the ground to keep watch (Kyle Hartwell). NYTimes reports that they turned themselves in to the police yesterday (Monday) afternoon.
Mr. Rossig described his descent from the top of the World Trade Center as “exhilarating.”

“It’s a fair amount of free fall time,” he said. “You really get to enjoy the view of the city and see it from a different perspective.”

Monday, March 24, 2014

Weaving Fiction on the Web

Serializing fiction online brings readers in touch with writers (like the old days of oral storytelling?). NYTimes tells all:
Wattpad is a leader in this new storytelling environment, with more than two million writers producing 100,000 pieces of material a day for 20 million readers on an intricate international social network.

When Jeff Bezos wonders if Amazon’s dominance of e-books might be outflanked, or Mark Zuckerberg ponders whether Facebook will be deserted by young people in search of something cooler, Wattpad is likely to come to mind.

“Now that everyone’s been given permission to be creative, new ways of telling stories, of being entertained, are being invented,” said Charles Melcher, a publishing consultant who hosts the annual Future of StoryTelling conference. “A lot of people are lamenting the end of the novel, but I think it’s simply evolving.”

Wattpad is not the sort of site where writers talk about suffering for their art or spend hours searching for the mot juste. Much of the most popular work is geared to young women and draws its energy from fan fiction. (Harry in “After” is inspired by Harry Styles, the teen heartthrob from the band One Direction.) Other popular categories are vampire fiction and mysteries.
Keeping in touch:
Wattpad eliminates any remaining distance between creator and consumer. The reader has been elevated to somewhere between the writer’s best friend and his ideal editor, one who offers only adoration. “This sentence literally broke my heart,” exclaimed one “After” reader. Enthused another: “What’s the point of life without ‘After’?”

Acquiring such fans is the most important job of a Wattpad writer. Then comes keeping them happy, not only by doling out new work on a regular basis — for a while Ms. Todd posted a chapter a day — but also by responding to their comments and questions.
And even participating directly in the story and the process.

Saturday, March 22, 2014

Bookclubs R Us

James Atlas on book clubs, which claim some five million Americans:
But the most prevalent way of conducting a book club is still in someone’s living room. The basic ritual is the same all over: A small group gets together every few weeks to discuss a pre-assigned title; to eat, whether that means noshing on cheese and crackers accompanied by a glass of wine or a four-course dinner; and to gossip in a dedicated way. It may be social, but it’s also serious; members devote long hours over many weeks to getting to the last page. For most of them, it’s all about the book.

Reading is a solitary act, an experience of interiority. To read a book is to burst the confines of one’s consciousness and enter another world. What happens when you read a book in the company of others? You enter its world together but see it in your own way; and it’s through sharing those differences of perception that the book group acquires its emotional power....

In the end, book groups are about community. The success of the One City, One Book initiatives in Chicago, Seattle and smaller towns across the country, where everyone is encouraged to read the same book, reflects the longing to share. So does Oprah; her book club binds together a nation disparate in its customs, classes, religions and ethnicities by putting it in front of the TV and telling it what to read.

Friday, March 21, 2014

Computer as Symbol and Model: On reading Alan Liu

I’ve now had a chance to read Alan Liu’s essay, “The Meaning of the Digital Humanities” (PMLA 128, 2013, 409-423) and have some thoughts about the way he stages computing. In this post I want to follow the outer edges of his argument in an effort to situate myself in his discursive field.

Let’s start with this early passage (p. 410):
For the humanities, the digital humanities exceed (though they include) the functional role of instrument or service, the pioneer role of innovator, the ensemble role of an “additional field,” and even such faux-political roles assigned to new fields as challenger, reformer, and (less positively) fifth column. This is because the digital humanities also have a symbolic role. In both their promise and their threat, the digital humanities serve as a shadow play for a future form of the humanities that wishes to include what contemporary society values about the digital without losing its soul to other domains of knowledge work that have gone digital to stake their claim to that society.
I think that’s right. I also think that assigning the digital humanities a symbolic role is a clever and crucial bit of staging, for it places digital humanities at the distance Liu needs to treat them as an object requiring interpretation. And interpretation has been the central activity of literary studies for the last half-century or so. We know how to interpret things.

But I want to narrow the scope a little. What interests me is not the digital humanities as symbol, but the computer and computing and, I would suggest, that’s what all but consumes Liu’s attention in this essay. The computer as symbol is quite familiar to many humanists, for computers, robots, and other artificial beings figure centrally in many of the texts we examine. There the computer generally functions as an Other, often malevolent.

Liu wisely chooses to ground his inquiry in a specific example, Ryan Heuser and Long Le-Khac’s A Quantitative Literary History of 2,958 Nineteenth-Century British Novels: The Semantic Cohort Method (May 2012, 68-page PDF), which I’ve previously discussed on New Savanna in From Telling to Showing, by the Numbers. There are two reasons he chose this paper. In the first place, it is methodologically sophisticated, state of the art, as they say. Second, Heuser and Le-Khac arrive at a tentative conclusion about those 2,958 texts that is meaningful in traditional humanistic terms, terms argued by Raymond Williams. That allows Liu to examine just how they were able to wrest meaning from a computer.

Sean O'Sullivan: Carma and Ridesharing

Some years ago, in the previous century, I worked for Sean O'Sullivan when he was president of MapInfo. He's since gone on to do this and that. Richard Grigonis at Newsmax interviews him about his latest venture, the ride-sharing app, Carma:
A cab company must charge a few dollars a mile — that’s what the Ubers, Lyfts, and Sidecars do too, because you have to pay wages and be at the beck-and-call of people.

But when you’re just sharing a ride to the same area, you don’t have to detour and so you can just share costs. The economics thus change completely and it becomes very attractive.

I take Carma into work myself every weekday. People who use Carma more than a few times tend to get addicted to it because it supports a much higher quality of life where you don’t have to drive all the time. When you use the app, you spend a half an hour with one or more human beings in the same car. By sharing time with somebody else, you don’t feel that time is as wasted as when you’re sitting alone in your car stuck in traffic. There’s no more social an activity you can do with an app than actually physically meet up with people.

With Carma, commuting is no longer downtime, but a quite nice social activity.

Wednesday, March 19, 2014

Tuesday, March 18, 2014

Meaning of the Digital Humanities - Alan Liu



Delivered May 1, 2013, at NYU. Starting around 50 minutes in, Liu has some interesting observations on the appearance of abstract and concrete terms.

See this entry at Liu's blog, where you will find this preview (taken from a PMLA essay covering the same material: “The Meaning of the Digital Humanities,” PMLA 128 [2013]: 409-23):
Yet even if we were to complete our hypothetical ethnographer’s chart [of the digital humanities], it would not adequately explain the digital humanities. This is because we would leave unexplained the relation of the digital humanities to the humanities generally. My thesis is that an understanding of the digital humanities can only rise to the level of an explanation if we see that the underlying issue is the disciplinary identity not of the digital humanities but of the humanities themselves. For the humanities, the digital humanities exceed (though they include) the functional role of instrument or service, the pioneer role of innovator, the ensemble role of an “additional field,” and even such faux-political roles assigned to new fields as challenger, reformer, and (less positively) fifth column.

This is because the digital humanities also have a symbolic role. In both their promise and their threat, the digital humanities serve as a shadow play for a future form of the humanities that wishes to include what people value about the digital without losing its soul to other domains of knowledge work that have gone digital to stake their claim to contemporary society. Or, precisely because the digital humanities are both functional and symbolic, a better metaphor would be something like the register in a computer’s central processor unit, where values stored in deep memory are loaded for rapid shuffling, manipulation, and testing–in this case, to try out new humanistic disciplinary identities evolved for today’s broader contention of knowledges and knowledge workers.

The question of the meaning of the digital humanities best opens such an argument to view because it registers both a specific problem in the digital humanities and the larger crisis of the meaningfulness of today’s humanities.


Saturday, March 15, 2014

Miles Davis at Harvard: “the shit sounded good as a mother-fucker”

Miles Davis didn’t speak those words at Harvard. To my knowledge, he never set foot on the Harvard campus, though it is of course possible that he did so on some occasion. It was Homi Bhabha who uttered those words, quoting a statement Davis had made in describing Herbie Hancock’s piano playing. Bhabha was introducing Hancock as the 2014 Charles Eliot Norton Professor of Poetry.

But Hancock’s not a poet, you might say. No, he’s not. The word is used with poetic license.

As Bhabha said at the beginning of his introduction, the Norton Lectures, which it is the duty of the Norton Professor to deliver, are “Harvard’s singular tribute to the most creative minds in the arts and humanities.” The Norton Professorship has previously been held by such luminaries as T.S. Eliot (1932-33), Igor Stravinsky (1939-40), Ben Shahn (1956-57), Leonard Bernstein (1972-73), Frank Stella (1982-84), John Cage (1988-89), and Luciano Berio (1992-93), only one of whom was a poet. But all of them were some variety of white. Hancock is the first African-American to hold that distinguished post.

Thereby hangs a tale, no, many tales, tales crossing over a half millennium in time and half the globe in space.

For the socio-cultural forces that gave birth to Harvard in the New World in 1636 for the purpose of educating young men in the ministry are the same forces that engaged in the trans-Atlantic slave trade. Without those forces, no Norton Lectureship and no Herbie Hancock. Without those forces, no Homi Bhabha, for those same forces extended the British Empire to India. Bhabha was born in Mumbai and educated at Oxford. Mumbai to Oxford to Cambridge, Massachusetts–that’s quite a journey, one spanning the East-West extent of the old British Empire, which gave up its hold on India only in 1947, two years before Bhabha was born.

Thursday, March 13, 2014

"Life" is a concept, but be wary of thinking that such a thing as LIFE exists

Science writer Ferris Jabr in the NYTimes:
You might think botanists have a precise unfailing definition of a tree — they don’t. Sometimes it’s really difficult to say whether a plant is a tree or shrub because “tree” and “shrub” are not properties intrinsic to plants — they are ideas we impinged on them.

Likewise, “life” is an idea. We find it useful to think of some things as alive and others as inanimate, but this division exists only in our heads.

Not only is defining life futile, but it is also unnecessary to understanding how living things work. All observable matter is, at its most fundamental level, an arrangement of atoms and their constituent particles. These associations range in complexity from something as simple as, say, a single molecule of water to something as astonishingly intricate as an ant colony. All the proposed features of life — metabolism, reproduction, evolution — are in fact processes that appear at many different regions of this great spectrum of matter. There is no precise threshold.
Right, there is no precise threshold. And we understand a great deal about how living things function without, however, being able to create a living thing from scratch–life in a test tube. And we may NEVER be able to do so.

Consciousness is probably like that as well. No precise threshold, and we may never be able to create it artificially. So what?

Meanwhile, enjoy this video of Theo Jansen's Strandbeests, which have no internal source of propulsion, but move fluidly:


Tuesday, March 11, 2014

Actor's Re-entry

Actors can become so immersed in their roles that they need a "re-entry" routine to reconnect with their mundane lives after a performance:
Indeed, one of the ways that she, Deborah Margolin, and others in the profession insulate themselves from their own work is by disengaging from their heightened level of concentration just as doggedly as they build it up. Naomi Lorrain, a student in the MFA Graduate Acting Program at NYU Tisch School of the Arts mentioned the importance of safe spaces, explaining that for her, school is a safe space to do the unsafe things that are required in acting.

“I can’t do a really intense role and then snap out of it. Mine is a slower progression out of a character, but I’m learning a lot of physical things that help me shake it off,” Lorrain says. “I’ve learned to develop a ritual, whether a vocal exercise or yoga, to bring me back to my center.”
This is a matter of behavioral mode, as I've discussed it so often on New Savanna.

Repetition Makes Music?

From Elizabeth Hellmuth Margulis, One more time, in Aeon. She presents the so-called speech-to-song illusion:
The illusion begins with an ordinary spoken utterance, the sentence ‘The sounds as they appear to you are not only different from those that are really present, but they sometimes behave so strangely as to seem quite impossible.’ Next, one part of this utterance – just a few words – is looped several times. Finally, the original recording is represented in its entirety, as a spoken utterance. When the listener reaches the phrase that was looped, it seems as if the speaker has broken into song, Disney-style.
You should go to the article (link above) to listen to the sound files. The effect is interesting. The repeated phrase does take on a musical quality. Margulis then remarks:
You’d think that listening to someone speak and listening to someone sing were separate things, distinguished by the objective characteristics of the sound itself. It seems obvious: I hear someone speak when she’s speaking, and sing when she’s singing. But the speech-to-song illusion reveals that the exact same sequence of sounds can seem either like speech or like music, depending only on whether it has been repeated. Repetition can actually shift your perceptual circuitry such that the segment of sound is heard as music: not thought about as similar to music, or contemplated in reference to music, but actually experienced as if the words were being sung.

This illusion demonstrates what it means to hear something musically. The ‘musicalisation’ shifts your attention from the meaning of the words to the contour of the passage (the patterns of high and low pitches) and its rhythms (the patterns of short and long durations), and even invites you to hum or tap along with it. In fact, part of what it means to listen to something musically is to participate imaginatively.
So here's what I'm wondering. We know that the brain areas for music perception and language perception overlap. Could the repetition of the phrase habituate some of the language circuits, dampening them and thus allowing the musical circuits greater subjective prominence?
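Incidentally, if you want to build the stimulus Margulis describes for yourself, it takes only a few lines. Here is a minimal sketch, assuming a speech recording is on hand; the filename and the loop boundaries are placeholders you'd replace with your own recording and a phrase chosen by ear.

    # Minimal sketch of the speech-to-song stimulus: take a short phrase from a
    # spoken recording, loop it several times, and save the result for listening.
    # "utterance.wav" and the 2.0-3.2 second window are placeholders.
    import numpy as np
    import soundfile as sf

    audio, sr = sf.read("utterance.wav")          # the full spoken sentence
    if audio.ndim > 1:                            # fold stereo to mono so tiling is simple
        audio = audio.mean(axis=1)

    start, end = int(2.0 * sr), int(3.2 * sr)     # a few words, chosen by ear
    phrase = audio[start:end]

    sf.write("phrase_once.wav", phrase, sr)                  # heard once: plain speech
    sf.write("phrase_looped.wav", np.tile(phrase, 8), sr)    # looped eight times: it starts to sound sung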

Ecstasy and Voudoun – on being mounted and ridden (possessed) by a horse

Maya Deren, an experimental filmmaker, went to Haiti in the early 1950s, where she studied Voudoun and participated in rituals. From that came a book, Divine Horsemen: The Living Gods of Haiti, and a film of the same name:


This by way of an introduction to my recent post at 3 Quarks Daily, Some Varieties of Musical Experience.

Sunday, March 9, 2014

In conversation with… Steven Pinker

I've been following Steve Pinker since the late 1990s, when I read How the Mind Works as background for my book on music, Beethoven's Anvil. While I disagree with him on many things, I find his work interesting and challenging. The following interview, conducted by Oliver Burkeman on human nature, violence, feminism and religion, originally appeared in Mosaic: The Science of Life.



* * * * *

In the week that I interview the cognitive psychologist and bestselling author Steven Pinker in his office at Harvard, police release the agonising recordings of emergency calls made during the Sandy Hook school shootings. In Yemen, a suicide attack on the defence ministry kills more than 50 people. An American teacher is shot dead as he goes jogging in Libya. Several people are killed in riots between political factions in Thailand, and peacekeepers have to be dispatched to the Central African Republic.

In short, it’s not hard to find anecdotes that seem to contradict a guiding principle behind much of Pinker’s work – which is that science and human reason are, slowly but unmistakably, making the world a better place.

Repeatedly during our conversation, I seek to puncture the silver-haired professor’s quietly relentless optimism. If the ongoing tolls of war and violence can’t do it, what about the prevalence in America of unscientific beliefs about the origins of life? Or the devastating potential impacts of climate change, paired with the news – also released in the week we meet – that 23 per cent of Americans don’t believe it’s happening, up seven percentage points in just eight months? I try. But it proves far from easy.

At first glance Pinker’s implacable optimism, though in keeping with his sunny demeanour and stereotypically Canadian friendliness, presents a puzzle. His stellar career – which includes two Pulitzer Prize nominations for his books How the Mind Works (1997) and The Blank Slate: The modern denial of human nature (2002) – has been defined, above all, by support for the fraught notion of human nature: the contention that genetic predispositions account in hugely significant ways for how we think, feel and act, why we behave towards others as we do, and why we excel in certain areas rather than others.

This has frequently drawn Pinker into controversy – as in 2005, when he offered a defence of Larry Summers, then Harvard’s President, who had suggested that the under-representation of women in science and maths careers might be down to innate sex differences.

“The possibility that men and women might differ for reasons other than socialisation, expectations, hidden biases and barriers is very close to an absolute taboo,” Pinker tells me. He faults books such as Lean In, by Facebook’s chief operating officer, Sheryl Sandberg, for not entertaining the notion that men and women might not have “identical life desires”. But he also insists that taking the possibility of such differences seriously need not lend any justification to policies or prejudices that exclude women from positions of expertise or power.

“Even if there are sex differences, they’re differences in the means of two overlapping populations, so for any [stereotypically female] trait you care to name, there’ll be many men who are more extreme than most women, and vice versa. So as a matter of both efficiency and of fairness, you should treat every individual as an individual, and not prejudge them.”
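Pinker's point about overlapping distributions is easy to see in a toy simulation (my own illustration; the 0.3-standard-deviation gap is invented for the sake of the example): even when two groups differ in their means, a large share of the lower-mean group still scores above the typical member of the higher-mean group.

    # Toy illustration of "differences in the means of two overlapping populations".
    # The group difference (0.3 standard deviations) is made up for illustration.
    import numpy as np

    rng = np.random.default_rng(42)
    group_a = rng.normal(loc=0.0, scale=1.0, size=1_000_000)   # lower-mean group
    group_b = rng.normal(loc=0.3, scale=1.0, size=1_000_000)   # higher-mean group

    share = np.mean(group_a > np.median(group_b))
    print(f"{share:.0%} of the lower-mean group score above the higher-mean group's median")
    # Roughly 38% -- plenty of individuals on either side of any group-level difference.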

Could Vermont Really Secede from the Union?

A year and a half ago I went to a meeting in Vermont where people talked seriously about seceding from the Union. But I don't recall any discussion of what that would actually involve. The current conflict in the Ukraine over Crimea has put the issue of secession on the front page and the NYTimes has an op-ed on the subject: Sovereignty vs. Self-Rule: Crimea Reignites Battle. Here are a couple of paragraphs:
Of course, the fractiousness that has chopped up the Soviet empire into increasingly smaller and often dysfunctional pieces is not relegated only to that part of the world, although in the West in recent years it has played through political and legal processes rather than military ones.

In September, for example, Scotland will hold a referendum on secession, a vote being held with the acquiescence of London. In November, Catalonia plans its own vote on independence from Spain, although in that case the Madrid government has called it illegal. Quebec held unsuccessful referendums on independence from Canada in 1980 and 1995 and as recently as last week its separatist government was discussing whether another should be held.
I assume that a Vermont – or, for that matter, a Texas – secession would be more like Scotland or Quebec than like what's been happening with the former Soviet empire. Under what circumstances could that actually happen? How would the free republic of Vermont disentangle itself from the federal system and from the military, banking, and tax systems?

Thursday, March 6, 2014

This is your brain on Fox news

[Photo: anxiety attack.jpg]

I'm amused that this photograph has been used to illustrate a news story where it has this caption: "The brain’s response when someone is experiencing a panic attack looks something like this — a chaotic tumble of neuron pathways and firing neurons." Yes, it does look a bit like a tumble of neurons. But it isn't. It's construction debris, concrete and rebar mostly.

Monday, March 3, 2014

Tim Perper: Polymath, Friend, and Colleague

My friend Tim Perper died unexpectedly on January 21 of this year. An obituary was published in the Philadelphia Inquirer. This post contains some more personal remarks.
* * * * *

I met Tim Perper online sometime in the late 1990s. It was on a listserve devoted to memetics. Neither of us had much patience for the more or less prevalent concept of memes as pesky little mental thingies that flitted about from one brain to another, taking over neural real estate in orgies of self-perpetuation. And so we argued against that idea in its various forms while attempting, against the tide, to formulate more serious ideas about cultural evolution. We also hung out on a listserve devoted to evolutionary psychology. Tim was a biologist by training – molecular biology and genetics – while I had been interested in both primate ethology and neuroscience since my undergraduate years at Johns Hopkins.

In mid-career, however, Tim was able to convince the Guggenheim Foundation to fund two years of research on human courtship in which he observed couples interacting in venues such as bars and church socials. Tim had thus become something of a cultural anthropologist. He incorporated that field research into Sex Signals: The Biology of Love (1985), which also drew on the Western literary tradition going back to classical times.

Tim and I had a lot to talk about: culture, biology, sex, love, and literature. And we talked about it all, mostly through email, but also over the phone and in person as well. When I was researching my book on music, Tim drew my attention to the work of Walter Freeman, a neurobiologist at Berkeley. Tim was interested in complex dynamics and chaos, as was Freeman. So was I, but I didn’t have the technical background that Tim had, so I would turn to him to discuss such matters, interactional synchrony – which he’d observed in his courtship research – in particular.

Saturday, March 1, 2014

Plucky Heroines from Haggard to Hikaru and Buffy

This is a guest post by Timothy Perper, PhD and Martha Cornog, MA, MS, who served as Book Review editors for Mechademia: An Annual Forum for Anime, Manga, and the Fan Arts, a scholarly journal about Japanese cartooning and popular culture. Cornog is the graphic novel columnist for Library Journal. They have written extensively about manga and anime and edited Mangatopia: Essays on Manga and Anime in the Modern World (Libraries Unlimited 2011). Perper and Cornog, who are married, have also written extensively about gender and sexuality.
In an essay on The Valve (and republished on New Savanna), Bill Benzon asked a series of questions about H. Rider Haggard, a 19th-century English novelist who wrote adventures set in Africa about supernatural white heroines. Haggard’s characterizations raise questions about how women and girls were portrayed in his adventure fiction, and are prescient in view of later portrayals in American and Japanese popular culture in both text and visual media.

It's not hard to place Haggard's views of women. In principle, he idealized the "pure womanhood" of England – for example, missionary Mackenzie's wife in Allan Quatermain – but at the same time his work reflects his male readers' uneasy yearning, heavily though not explicitly eroticized, for women who are passionate, active, heroic, and sexually motivated. Haggard was not writing Victorian pornography, but in his treatment of white women from Africa – ESPECIALLY white women like Ayesha in She and many others of his mature white heroines – he reveals the same impulse to eroticize the exotic that Steven Marcus saw as fundamental to Victorian pornography. (note 1) For Marcus, such pornography finds its closest genre relatives in utopian fantasy – hence Marcus' term "pornotopia" for the worlds portrayed in such literature. In such a pornotopia, women are not passive wallflowers, timid, meek, and obedient to their masters or to the patriarchy; instead, they are active agents of their own sexual purposes and desires, intermingled with a desire and capacity to rule the state as queens and empresses. (You will find an echo of this set of attributions to women in the Empress card in the standard Tarot deck.)

Haggard, Women, and the Wild

In many ways, Flossie Mackenzie in Allan Quatermain – the 10-year-old daughter of a Scottish missionary – is simply a very young version of a heroine like Ayesha. Flossie is self-reliant, unafraid, armed, and dangerous, and kills a Masai warrior with the two-barreled Derringer pistol she carries when he attacks her. Although she is therefore attractive, Haggard seems uneasily aware that such a life is somehow not right for Woman, that is, not what a patriarchal God intended: here are his comments about Flossie, put into the mouth of his spokesman Allan. When Mackenzie decides to return to England with his wife and Flossie, Allan praises him:
"I congratulate you on your decision," answered I, "for two reasons. The first is, that you owe a duty to your wife and daughter, and more especially to the latter, who should receive some education and mix with girls of her own race, otherwise she will grow up wild, shunning her kind..." [Chapter 8]
Allan – and therefore Haggard – seems well aware that for Flossie a life in Africa is a ticket to the "wild," a place where 10-year-old girls like Flossie shoot leopards and men and live their own lives under their own rule, certainly not (Allan thinks) a world or role suited to an ideal Englishwoman.