Monday, September 30, 2013

Ethical Criticism is About the Future

This post starts where the previous one, Ethical Criticism in the Wild, ends, looking forward.

Let me begin with a passage from the end of Wayne Booth, The Company We Keep. He's just quoted from a Chekhov story (“Home”) and observes (p. 484):
We all have “this foolish habit,” [liking stories] and we all are by nature caught in the ambiguities that trouble the prosecutor. Yet we are all equipped, by a nature (a “second nature”) that has created us out of story, with a rich experience in choosing which life stories, fictional or “real,” we will embrace wholeheartedly. Who we are, who we will be tomorrow depends thus on some act of criticism, whether by ourselves or by those who determine what stories will come our way – criticism wise or foolish, deliberate or spontaneous, conscious or unconscious: “You may enter; you must go away – and I will do my best to forget you.”
Each culture provides every member with an unlimited number of “natural” choices that seem to require no thought.

Let me repeat: “Who we are, who we will be tomorrow depends thus on some act of criticism…” But we cannot know the future. We can only try to guide present actions in a certain direction.

* * * * *

Here’s a longish comment I made in John Holbo’s discussion, Raiders of the Lost Ark, a Pretty Good Film. First the set-up: Bloix had suggested that Waring read Jonathan Franzen’s Freedom, “with a woman protagonist written for the most part in the first person” (comment 165). In comment 171 Waring replies, “Why should I be compelled to read Freedom [because it has a female narrator]?”

Ethical Criticism in the Wild

As even casual users know, the world-wide web is jammed with commentary on pop culture and, for that matter, high culture as well, though not so extensively. It seems to me that an ethnographically minded critic might be interested in examining this activity, and that the tools of corpus linguistics would be useful here. What could you find by trawling through millions of words of fan commentary?
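Just to make the idea concrete, here's a minimal sketch of the very first step such a study might take: counting content words across a pile of comments. The three comments and the tiny stopword list are my own illustrative stand-ins; a real study would work over millions of scraped words with a proper stopword list and far better tooling.

```python
from collections import Counter
import re

def top_terms(comments, stopwords=frozenset({"the", "a", "and", "i", "it", "to", "of"})):
    """Count content words across a list of fan comments,
    skipping a tiny illustrative stopword list."""
    counts = Counter()
    for comment in comments:
        for word in re.findall(r"[a-z']+", comment.lower()):
            if word not in stopwords:
                counts[word] += 1
    return counts

# Toy stand-in for scraped fan commentary.
comments = [
    "Buffy was brilliant in that episode",
    "That episode dragged, but Buffy carried it",
    "Spike stole the episode from Buffy again",
]
print(top_terms(comments).most_common(3))
```

Even this crude tally surfaces the community's recurring objects of attention (here, the character and the episode itself); the interesting work begins when you compare such profiles across communities or across time.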

There’s so much of this material that it’s hardly necessary for me to give specific examples. But I will, in part because much of this discussion tends to go on over time and among people who’ve come to know one another, if only online. These groups thus form what Stanley Fish called interpretive communities (a term he introduced in 1975 in “Interpreting the Variorum”, one of the essays included in Is There a Text in This Class?). First I look at a single-topic website devoted to popular culture, then at an academic group blog that covers various topics.

Buffy Rules

The Phoenix Board is home to a group of Buffy the Vampire Slayer fans who found one another on Salon’s Table Talk and then, in the TT diaspora (when TT went pay-to-post and most people left), regrouped at a website of their own. I followed these discussions quite closely on TT (when Buffy was still running and I was watching the series myself). Real-time commentary on current episodes was common. When the show took a commercial break, people would go online to make short comments.

I never read anything like a full-dress interpretation of an episode, but there were scads and scads of interpretive comments, and disagreements, and the use of various moves to keep disagreements from erupting into flame wars, thus preserving civility within the community. Much of the commentary treated characters as though they were real people and involved speculations about their motives and desires, guesses about coming actions, and speculations about slices of their lives never actually depicted in any episode. Such speculation thus shades over into fanfic, fan-produced fiction about the characters in the Buffy universe (the Buffyverse).

I stopped following conversations here some time ago, although the board still seems to be active. I just took a quick peek, and there are recent comments.

Great American Novelists

And then we have Crooked Timber, an academic group blog that’s mostly about the social sciences and current affairs, though there’s quite a bit of cultural commentary there as well. CT is one of my regular hang-out spots on the web and I comment there with modest frequency.

Sunday, September 29, 2013

Apocalypse 'R Us

Steve Almond in the NYTimes:
Yes, Virginia, apocalyptic ideation has migrated from the realms of science fiction and horror into the fun house of shtick. Back in the ’60s, “Dr. Strangelove,” a satire that played nuclear war for laughs, held the power to shock. This past summer, two apocalyptic comedies (“This Is the End” and “Rapture-Palooza”) came out within a week of each other, followed two months later by “The World’s End.”

As a form of disposable entertainment, the apocalypse market is booming. The question is why. The obvious answer is that these narratives tap into anxieties, conscious and otherwise, about the damage we’re doing to our species and to the planet. They allow us to safely fantasize about what might be required of us to survive.
After pointing out that the Biblical Book of Revelation was protest literature of a sort, he continues:
It’s only natural that the apocalyptic canon has radically expanded in the past few decades. Never has our species been so besieged by doomsday scenarios. If our ancestors channeled their collective death instinct into religious myth, we now face a raft of scientific data that suggest the end might be truly nigh.

This may explain the most perverse trend to yet emerge in the genre: child heroes. The exemplar here is “The Hunger Games,” which is about a girl who participates in a gladiatorial contest against other kids for the entertainment of a debased population. Most of the film is devoted to watching Katniss Everdeen become a trained killer and engage in combat. A sequel will be coming out in November, right on the heels of “Ender’s Game,” which features an exceptional child who is transformed into a super-soldier and fights off a horde of aliens. The recent Will Smith vehicle “After Earth” offered a father-son variation on this theme.

Saturday, September 28, 2013

Two Disciplines in Search of Love

That's a guest post I've contributed to Language Log. The disciplines are literary criticism on the one hand, and computational linguistics on the other. Here's an abstract:

Though computational linguistics (CL) dates back to the first efforts in machine translation in the mid-1950s, it is only in the last decade or so that it has had a substantial impact on literary studies through the statistical techniques of corpus linguistics and data mining (known as natural language processing, NLP). In this essay I briefly review the history of computational linguistics, from its early days involving symbolic computing to current developments in NLP, and set that in relationship to academic literary study. In particular, I discuss the deeply problematic struggle that literary study has had with the question of evaluation: What makes good literature? I argue that literary studies should own up to this tension and recognize a distinction between ethical criticism, which is explicitly concerned with values, and naturalist criticism, which sidesteps questions of value in favor of understanding how literature works in the mind and in culture. I then argue that the primary relationship between CL and NLP and literary studies should be through naturalist criticism. I conclude by discussing the relative roles of CL and NLP in a large-scale and long-term investigation of romantic love.

Dr. Sue Henderson Inauguration Concert

Last week I posted some photographs of trumpeter Jon Faddis performing in Jersey City. He was here to play a concert celebrating the inauguration of Dr. Sue Henderson as the 12th President of NJCU. Here are some other photographs from that concert, starting with one of the new president.

IMGP3745

IMGP3729

IMGP3723

Friday, September 27, 2013

How to Notice Things in Texts, or The Key to the Treasure Really Is the Treasure

Working, as I do, at some remove from the world of academic literary criticism, I don’t have a very sure sense of how things are, “boots on the ground” so to speak. I hear a lot of grumbling about this and that – adjunctification, MOOCs, have we lost our intellectual way? to the barricades, comrades! – but I can’t really know.

One bit stream tells me that “close reading” is a lost critical art, or at least it’s dying. Another bit stream wants it back. I have no idea how to read those digital tea leaves, but I am sure that “close reading” (that wretched and misleading phrase) is important.

There are patterns in literary texts that betoken and betray deep operating patterns of the human mind. It is important that we learn to see them. I would argue that at this point in our ongoing collective conceptual development learning to spot the patterns is MORE important than how we account for them. The explanatory schemes we’ve come up with so far are rather shaky and must be replaced. With what, that’s not clear.

J. Hillis Miller on Burke and Derrida

In his minnesota review interview, J. Hillis Miller spoke of his intellectual debt to Kenneth Burke:
Burke was very important for me because of the notion that the work of literature is a way for the author (I wouldn't use this anymore) to attempt to work through a difficult or insoluble impasse or problem. So it's a symbolic action in the sense that it symbolically attempts to resolve some kind of aporia. This idea motivated my reading of Dickens. If someone said, "I don't want to read Derrida," I still would say, "Read Kenneth Burke."

Burke, for that epoch, was the best psychoanalytic critic in the United States, and also the best Marxist critic. The theory of symbolic action presupposes that the aporia that you're stuck with most likely has to do both with a family or a sexual situation, and with a social class impasse. It still seems to me that works for Dickens. It gives you a set of questions to ask.
Later on:
I learned a lot from myth criticism [referring to Northrop Frye], especially the way little details in a Shakespeare play can link up to indicate an "underthought" of reference to some myth or other. It was something I had learned in a different way from Burke. Burke came to Harvard when I was a graduate student and gave a lecture about indexing. What he was talking about was how you read. I had never heard anybody talk about this. He said what you do is notice things that recur in the text, though perhaps in some unostentatious way. If something appears four or five times in the same text, you think it's probably important. That leads you on a kind of hermeneutical circle: you ask questions, you come back to the text and get some answers, and you go around, and pretty soon you may have a reading.
There we have it, noticing patterns in texts. Or rather, noticing symptoms (as in symptomatic reading?). What’s “behind” them?
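Burke's indexing procedure, as Miller describes it, is almost algorithmic: notice what recurs, flag it, then carry the question back to the text. Here's a toy sketch of just the noticing step; the recurrence threshold of four and the crude short-word filter are my assumptions for illustration, not Burke's, and the passage of text is invented.

```python
from collections import Counter
import re

def burke_index(text, threshold=4):
    """A toy version of Burke's 'indexing': flag words that recur in a
    text at least `threshold` times, on the hunch that unostentatious
    repetition marks something important."""
    words = re.findall(r"[a-z']+", text.lower())
    counts = Counter(words)
    # Skip very short function words; a real index would be hand-curated,
    # and would track images and motifs, not just word forms.
    return {w: n for w, n in counts.items() if n >= threshold and len(w) > 3}

text = (
    "The river rose and the river fell, and the house by the river "
    "kept its silence. A river does not forget; the house remembers, "
    "and the river carries what the house cannot hold. River and house."
)
print(burke_index(text))
```

Of course the machine only does the counting; the hermeneutical circle, asking what the recurrence means and returning to the text for answers, remains the critic's work.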

Thursday, September 26, 2013

All Hail Our Darwinian Overlords!

This piece was originally published in The Valve, 30 May 2009. I'm republishing it now because it stands in thematic counterpoint to a major essay I'm currently slogging my way through, Computational Linguistics and Literary Studies in Search of Love (working title). As the reach of the title suggests, I'm taking stock of where literary studies is and where it might go in view of a very different discipline. Well, Joseph Carroll has his ideas on those matters as well. Color me skeptical.
Joseph Carroll has been the chief theorist and proselytizer of literary Darwinism. David DiSalvo has a long interview with him at Neuronarrative.

DiSalvo first asks what literary Darwinism is:
Literary Darwinists integrate literary concepts with a modern evolutionary understanding of the evolved and adapted characteristics of human nature. They aim not just at being one more “school” or movement in literary theory. They aim at fundamentally transforming the framework for all literary study. They think that all knowledge about human behavior, including the products of the human imagination, can and should be subsumed within the evolutionary perspective.
An ambitious program, to say the least. But let's leave it alone for a bit – I'll offer an alternative view a bit later – and allow Carroll to continue. He talks broadly about reductionism, culture, literacy, the adaptive value of literature, an empirical study of characters in canonical 19th C. British novels, the brain, human emergence, and interaction between scientists and humanists.

The conversation then returns to the future of literary studies. Here's the downside scenario:
Literary study could continue to insist on disconnecting itself from empirically discernible facts about human nature and human cognition, or it could realize that science is not a threat and a competitor but an ally in the quest for human understanding. If it takes the former course, I think it will continue to decline catastrophically in prestige, enrollments, and funding. Its practitioners will either continue to invent arcane verbal systems designed for the superficial reprocessing of canonical literary texts, or they will resign themselves to the ever more tenuous elaboration of the sophistical quibbles at the heart of postmodern literary theory.
Unless the humanities in general come to grips with the last three or four decades of work in psychology, biology, and neurosciences, and so forth, Carroll believes "they are doomed to irrelevance and triviality."

If, however, the humanities, and literary studies in particular, undertakes to assimilate this work, here's the possible upside:
Let’s assume for the sake of argument that literary study manages to get past its own blockages. What then? All the world is before them: large-scale explanatory principles to hash out, a whole taxonomy to found on underlying principles of human nature, whole cultural epochs to analyze from a bio-cultural perspective, multitudes of texts to locate, with all their specific meaning structures and imaginative forms, in these yet-to-be-established bio-cultural contexts. We have before us the macro-world of human evolutionary history and the micro-world of the brain, cultural history to incorporate with human universals; neuroimaging and neurochemical analysis to integrate with tonal and stylistic analysis.

The kind of work I’m describing here would not merely offer new lenses through which to view existing knowledge. It would provide a starting point for a continuous, progressive program in creating new knowledge. Literary Darwinists have to assimilate the best insights of previous theory and criticism, but they have to reformulate those insights within a completely new framework located within the larger, total field of the human sciences. They cannot merely take concepts ready-made from existing evolutionary theories of culture. They have to absorb evolutionary theories, examine them critically, push back when the theories are inadequate to the realities of literary experience, and formulate new fundamental concepts in literary study-formal, generic, and historical. They have to participate in fashioning the linkages between their own specific fields of endeavor and the broader field of the evolutionary human sciences. They have to make the world anew.
We've got two decades to prepare ourselves for the new dispensation:
Five years ago, literary Darwinism was just a mild sensation on the margins of literary study. It is now a swelling tide. In five years, ten, maybe fifteen, possibly even twenty, I think it will have fundamentally altered the framework within which literary study is conducted.
I have no objections, in principle, to a new framework. I just don't see that literary Darwinism is deep enough to be such a framework. That's a bit much to take on here and now (more griping here).

Wednesday, September 25, 2013

Beyond Good and Evil?

As Breaking Bad nears the end, I thought I'd repost this one from The Valve, 30 March 2008.
As the final season of The Wire moved past its midpoint I began reading assertions and arguments that it is one of the three best (dramatic) shows that has ever been on TV; The Sopranos and Deadwood are the other two (for example, see this discussion by three TV critics). In point of principle I don’t know about “the best,” but I do know that Deadwood and The Wire (I’ve not seen the last season) are very good. I’ve only seen the first seven episodes of The Sopranos and believe they’re very good. The level of excellence, however, is not what most interests me.

What interests me is that, whatever their differences, all three of these shows elicit our sympathy and concern for brutal and violent people, mostly male, operating outside the law. What’s that about? Is it merely a random circumstance, or does it speak to our historical moment? If the latter, what is it saying?

I don’t have a fine-grained knowledge of just how common such stories have been or, in a large sense, how common they are now. Crime stories and lawless frontier stories have been around for a long time. My sense, however, is that they have not always been so sympathetic to the bad guys. Why now? And how long has this been going on?

I note that The Godfather came out in 1972 (and was based on a novel that came out in 1969) and produced two sequels. Take that as the beginning of this current wave. By 1972 America had been badly divided by the war in Vietnam and by the Civil Rights movement and was about to sail into a period of economic dislocation triggered by the 1973 oil embargo and continuing, through one means or another, to this day. This timing leads to the hypothesis that widespread doubts about legitimate social order created an “atmosphere” receptive to movies (and other fictions) that, like The Godfather, didn’t depend on the legitimate social order. If the legitimate order cannot be counted on to provide just rewards and punishments, then we have to look elsewhere for a moral compass.
[But in Breaking Bad? Moral compass? You gotta' be kidding. There's even less honor among those thieves than in Tony Soprano's bunch.]

Green Onions Rule!

This tune has been on my mind since forever! Well, not that long, since the tune has only been around since 1962. But I heard it then and it got fixed in my soul. Even if, back then, I wouldn't publicly own up to liking it (I was a jazz man in those days, still am), my feet couldn't keep still. And the feet don't lie, not about music they don't.

Here's the original version, "Green Onions," as performed by Booker T. and the MGs:



Here's a somewhat different version, by Count Basie. Yes, Count Basie, one of New Jersey's many gifts to the Kingdom of Ever-Loving Funk, Fun, and Fabulosity. Don't let the sly intro fool you, he's just funnin', teasing, but he brings on the heat at about 55 seconds, then turns it back to sly, heat, sly, and a roarin' shout chorus:

How would you like to teach philosophy in a cave?

Two of [Singapore's] public universities, Nanyang Technological University and the National University of Singapore, have completed preliminary studies on developing the space beneath their campuses for lecture theaters, laboratories, sports facilities and performance halls. A third school, Singapore Management University, has already constructed a basement-level space linking its main above-ground buildings.
Does this mean that my old colleague from The Valve, John Holbo, is going to have his very own cave in which to teach Plato's allegory of the cave? How cool!

Was JS Bach a punk?

Sir John Eliot Gardiner has a new biography coming out that documents the thuggish world of the schools Bach attended in his youth, full of "gang warfare and bullying, sadism and sodomy" according to an article in The Guardian.
Among Lüneburg's town records, he found reports on antisocial behaviour of two schoolboys in a local hostelry – "thoroughly drunk and … slashing … with [their] dirks and hunting knives". One is believed to have been Bach's mentor. Gardiner writes of sufficient evidence "to dent the traditional image of Bach as an exemplary youth … surviving unscathed the sinister goings-on in the schools he attended. It is just as credible that [he]… was in a line of delinquent school prefects – a reformed teenage thug." He added that Bach's repeated absences – 258 days in his first three years – are traditionally attributed to his mother's illness and his work in the family music business. But there could be a more sinister interpretation, he said, that the school conditions may have been so unappealing and even threatening.
Sounds like he'd have been at home with Tom Sawyer and Huck Finn. 

Sunday, September 22, 2013

Conjunctions on the Autumn Equinox

IMGP3821

Early yesterday afternoon I found myself sitting in the sanctuary at St. Bartholomew’s Episcopal Church in Manhattan. The Parish was founded in 1835; this is its third church, built in the second decade of the 20th Century. It is Byzantine in style, with glittering mosaics on the interior.

The pipe organ is the largest in New York City, and one of the ten largest in the world. I didn’t know this when I sat there yesterday, for that was the first time I’d ever been in the church. “Byzantine” didn’t even click in my mind yesterday, as I sat between my sister and her friend, Yasuko. But I was certainly thinking “icons” (“iconoclasm”), “Greek Orthodox,” and even “Russian,” the conjunction of which all but added up to Byzantine. But didn’t quite get there for me. This was, after all, an Episcopal Church, no?

Yes.

The Wikipedia tells me that it is this parish that brought Leopold Stokowski from Europe in 1905 to be its organist and choir director.

Holy crap! says I to myself, no way!

Way.

Stokowski went on to direct the Philadelphia Orchestra. He had become something of a celebrity by the time when, over 30 years later, he ran into Walt Disney at a restaurant in Los Angeles. Walt invited him over to his table and Fantasia was hatched. Not then and there, mind you; it took a while. But that’s when the wheels started turning.

Walt’s father, Elias, had been one of many carpenters who worked on the Chicago World’s Fair in 1893. And that fair featured a Japanese exhibit and pavilion on a small 16-acre island in a lagoon. It was the unexpected hit of the fair and the first time most Westerners would have had an opportunity to encounter the Japanese, who’d only recently been subject to forced entry by Commodore Perry in 1853.

Saturday, September 21, 2013

Friday, September 20, 2013

J. Hillis Miller on the Profession of Literary Criticism

J. Hillis Miller is now one of the Grand Old Men of literary criticism. When I first saw him, lecturing on, among other texts, The Secret Agent and A Passage to India, in perhaps my very first college literature course, he wasn't old, but he seemed grand enough to me, not quite the proverbial country bumpkin, but close enough. That was a year before the French landed in Baltimore for the structuralist symposium and rummage sale of '66. None of us knew what the future would foist upon us.

We still don't. While I'm not about to predict the future, not in this post, I look back in the spirit of Buckminster Fuller, who once observed: "In scientific prognostication we have a condition analogous to a fact of archery – the further back you are able to draw your longbow, the further ahead you can shoot" (Critical Path, 1981, p. 229).

First I want to present, with commentary, some remarks Miller published in the ADE Bulletin in 2003. Then I want to present some passages, with little commentary, from an interview Miller gave more recently.

Hillis Miller, the Long View

I first published these remarks in The Valve in October, 2008.

I’ve been browsing the archives of the ADE Bulletin, which is full of articles on the nature and state of the profession, more articles than I care to read. I recommend “Days of Future Past,” by Michael Bérubé (2002) and “The Situation of the Humanities; or, How English Departments (and Their Chairs) Can Survive into the Twenty-First Century,” by Annette Kolodny (2005). But I’d like to quote some passages from J. Hillis Miller, “My Fifty Years in the Profession” (2003).

Why Miller? Well, he is a prominent and honored member of the profession. That is one thing.

There is a more personal reason: his lectures captivated me when I was an undergraduate at Johns Hopkins. He was a model of wit and erudition, the very essence of a humanities professor. But also, when he talks about Hopkins, I know what he’s talking about because I was there.

And that sense of connection is important to me as I ponder these issues, the nature of and future of the discipline. The questions are important, but also abstract and remote. As accustomed as I am to abstraction, it also makes me antsy. This business of evaluative criticism, for example. The people who urge it are very earnest; but their talk seems very abstract, quite remote from the fact of doing such criticism time and again. Right now, the evaluative practice that is most meaningful to me concerns my photographs: which ones are worth processing and posting online, and just how do I tweak this or that one? I have some notion of how to talk about such things – after all, I really do make such decisions and I do have terms in which I think about them. Compared to the demands of that simple task, a list of evaluative criteria strikes me as almost hopelessly remote.

Enough about my photos and judgments. Back to Hillis Miller. My other reason for singling out his essay is that he’s reflecting about his 50 years in the profession. And that’s what interests me.

Wednesday, September 18, 2013

Bye Bye Space and Time?


The amplituhedron is "a jewel-like geometric object that dramatically simplifies calculations of particle interactions and challenges the notion that space and time are fundamental components of reality."

That "simplifies calculations" sounds interesting:
“The degree of efficiency is mind-boggling,” said Jacob Bourjaily, a theoretical physicist at Harvard University and one of the researchers who developed the new idea. “You can easily do, on paper, computations that were infeasible even with a computer before.”

The new geometric version of quantum field theory could also facilitate the search for a theory of quantum gravity that would seamlessly connect the large- and small-scale pictures of the universe. Attempts thus far to incorporate gravity into the laws of physics at the quantum scale have run up against nonsensical infinities and deep paradoxes. The amplituhedron, or a similar geometric object, could help by removing two deeply rooted principles of physics: locality and unitarity....

Locality is the notion that particles can interact only from adjoining positions in space and time. And unitarity holds that the probabilities of all possible outcomes of a quantum mechanical interaction must add up to one. The concepts are the central pillars of quantum field theory in its original form, but in certain situations involving gravity, both break down, suggesting neither is a fundamental aspect of nature.

In keeping with this idea, the new geometric approach to particle interactions removes locality and unitarity from its starting assumptions. The amplituhedron is not built out of space-time and probabilities; these properties merely arise as consequences of the jewel’s geometry. The usual picture of space and time, and particles moving around in them, is a construct.
Of course, I haven't got the foggiest idea what this is all about. But I like it, the idea that space and time are derived from something else.

H/t Alex Tabarrok.

World Oral Literature Project

Hosted by Yale and Columbia. About the project:
The World Oral Literature Project is an urgent global initiative to document and disseminate endangered oral literatures before they disappear without record. The Project supports local communities and committed fieldworkers engaged in the collection and preservation of all forms of oral literature by providing funding for original research, alongside training in fieldwork and digital archiving methods.

For many communities around the world, the transmission of oral literature from one generation to the next lies at the heart of cultural practice. Local languages act as vehicles for the transmission of unique cultural knowledge, but the oral traditions encoded within these languages become threatened when elders die and livelihoods are disrupted. These creative works are increasingly endangered as globalisation and rapid socio-economic change exert complex pressures on smaller communities, often eroding expressive diversity and transforming culture through assimilation to more dominant ways of life. Of the world’s living languages, currently numbering over 6,000, around half will cease to be spoken by the end of this century.

Established at the University of Cambridge in 2009 and co-located in Yale, US since 2011, the World Oral Literature Project collaborates with local communities to document their own oral narratives, and aspires to become a permanent centre for the appreciation and preservation of oral literature. The Project provides small grants to fund the collecting of oral literature, with a particular focus on the peoples of Asia and the Pacific, and on areas of cultural disturbance. In addition, the Project hosts training workshops for grant recipients and other engaged scholars. The World Oral Literature Project also publishes oral texts and occasional papers, and makes collections of oral traditions accessible through new media platforms. By stimulating the documentation of oral literature and by building a network for cooperation and collaboration, the World Oral Literature Project supports a community of committed scholars and indigenous researchers.
Here's a link to the collections available online.

Tuesday, September 17, 2013

Jamming with Isabella

This is Isabella working on a drawing:

IMGP3691

Judging from her size and the way she talks I’d say she’s between three and four.

I met her the other day at Wayquay’s Curiosity House and Soul Gymnasium. She was there with her mother and older sister. That’s her sister strumming the guitar (don’t know who the frog is; perhaps it’s Michigan J):

IMGP3696

That was a couple of days ago.

Just yesterday I went over to Wayquay’s with a couple trumpets in tow. I’m a musician, Wayquay’s a musician; she’d asked me to bring my trumpet along some time. So I did.

The door was open. Couldn’t see Wayquay, but some other folks were there, looking through the curiosities. So I got out my 1930s King Liberty and started playing. I forget just what, but it was simple and melodic.

Well, it turns out that Wayquay and Isabella were in another room. When I was done Isabella came up to me, big-eyed, smiling, and happy. She said something I couldn’t understand, but I understood what she was doing. She was heading straight to this piano:

Once was bustling with people and trains

Taken on 22 May 2011, from the eastern end of the platform area:

IMGP0109rd

Taken on June 26, 2011. I forget whether I was standing at the eastern end or on the northern side, but in either case I was using a long lens and shooting into the terminal. Notice the Ionic capital at the head of the column.

IMGP0979rd

IMGP0980rd

Visible Hands are the Devil’s Workshop: The Problematic of Control in Fantasia

This is a companion piece to Elephant Regression: On the Couch with Dumbo. Until the final two sections this is one of those pieces where psychoanalytic ideas pervade my thinking in ways “too diffuse for meaningful citation.” At the end I cite both Freud and John Bowlby. Caveat: this is another long one. You’ll probably have to schedule an intermission or two.
x13partiing of the waves.jpg

As we all know, Fantasia is staged as a concert. Each segment of the film presents a visual realization of a specific piece of classical music. The music is performed by the Philadelphia Orchestra, which was conducted at the time by Leopold Stokowski. He was perhaps more of a public figure in America than any other conductor, before or since, with the possible exception of Leonard Bernstein. Even moviegoers with little knowledge of classical music would have been aware of him. As we’ll see shortly, he was even played by Bugs Bunny in a cartoon.

While most classical conductors use a baton, Stokowski did not. He used only his hands. That was part of his shtick, his public persona.

Each segment of Fantasia, save the last, is preceded by a shot of Stokowski on the podium, gloved hands raised and at the ready:

5 Stowkowski.jpg

Two segments, however, feature hands as an aspect of their imagery, and do so in a way that, I suggest, betrays deep ambivalence. These episodes are The Sorcerer’s Apprentice and Night on Bald Mountain, each of which occupies the third of four slots in its half of the program. Before examining those episodes, however, we have to set things up, first with a look at the general issue of control in Fantasia, then with a look at the motor system and the way it differentiates between whole-body control and hand control.

Conducting and Control

As I argued in Elephant Regression: On the Couch with Dumbo, Dumbo is a story about maturation. Fantasia is not. To be sure, we do see maturation in the Pastoral. The young Pegasus who is awkward in flight at the beginning of the episode is fluent at the end. That is to say, the young creature now has more effective control over his body than at the beginning of the segment.

Control is what Fantasia is about. That and boundaries: what are the limits of control?

It is my impression that classical music was more visible to, and more problematic for, the general public in the first half of the 20th century. The conflict between classical (aka “long hair”) music and popular music was played out in live-action films and cartoons. By the time Chuck Berry was urging Beethoven to roll over, he had already done so, pretty much.

It’s in that cultural context that Disney made Fantasia. While it advocated for classical music, it did so in a popular medium, animation. And, as I’ve already observed, Disney called on the most popular conductor of the day to serve as a mediating figure between the auditorium where the audience sat and the magical on-screen world depicted in each cartoon.

But the use of Stokowski also establishes control as a thematic concern within the film. The conductor controls the orchestra. That is to say, by convention, the conductor’s control over his arms and hands extends to control over the musicians in the orchestra. The boundary between conductor and musicians dissolves in a very specific way. Fantasia raises the issue of control more generally: who, or what, controls the objects and events depicted within each segment of the film?

We can see this thematics of control enacted in a classic Bugs Bunny cartoon from 1949, Long-Haired Hare. As the cartoon opens Bugs is happily strumming away on his banjo somewhere up in the hills. Not far away a classical tenor, Giovanni Jones, is practicing for a recital. Bugs’ music disturbs him, so he tries to shut Bugs down, resulting in a standard cartoon shuffle between the Little Guy (Bugs) and the Big Guy (Jones). When Jones goes to the Hollywood Bowl for his recital, Bugs follows and makes a pest of himself, disrupting the performance in various ways. And then Bugs gets serious.

Introspection Through the Ages

Carlos G. Diuk, D. Fernandez Slezak, I. Raskovsky, M. Sigman, and G. A. Cecchi, A quantitative philology of introspection, Frontiers in Integrative Neuroscience, 2012; 6: 80; Published online 2012 September 24. Prepublished online 2012 August 14. doi: 10.3389/fnint.2012.00080

Abstract: The cultural evolution of introspective thought has been recognized to undergo a drastic change during the middle of the first millennium BC. This period, known as the “Axial Age,” saw the birth of religions and philosophies still alive in modern culture, as well as the transition from orality to literacy—which led to the hypothesis of a link between introspection and literacy. Here we set out to examine the evolution of introspection in the Axial Age, studying the cultural record of the Greco-Roman and Judeo-Christian literary traditions. Using a statistical measure of semantic similarity, we identify a single “arrow of time” in the Old and New Testaments of the Bible, and a more complex non-monotonic dynamics in the Greco-Roman tradition reflecting the rise and fall of the respective societies. A comparable analysis of the twentieth century cultural record shows a steady increase in the incidence of introspective topics, punctuated by abrupt declines during and preceding the First and Second World Wars. Our results show that (a) it is possible to devise a consistent metric to quantify the history of a high-level concept such as introspection, cementing the path for a new quantitative philology and (b) to the extent that it is captured in the cultural record, the increased ability of human thought for self-reflection that the Axial Age brought about is still heavily determined by societal contingencies beyond the orality-literacy nexus.
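The abstract’s “statistical measure of semantic similarity” can be made concrete with a much cruder stand-in: score a passage by the cosine similarity between its word counts and a seed vocabulary of introspective terms. A minimal sketch; the seed list, function names, and sample sentences are my own illustrative inventions, and the paper’s actual measure works over a learned semantic space rather than raw word counts:

```python
import math
from collections import Counter

# Hypothetical seed vocabulary standing in for the concept of "introspection".
INTROSPECTION_SEEDS = ["think", "feel", "believe", "remember", "doubt", "wonder"]

def bag_of_words(text):
    """Lowercased word counts; a crude stand-in for a real semantic space."""
    return Counter(w.strip(".,;:!?\"'()").lower() for w in text.split())

def cosine(a, b):
    """Cosine similarity between two bags of words."""
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def introspection_score(text):
    """How similar a passage is to the introspection seed vocabulary."""
    return cosine(bag_of_words(text), Counter(INTROSPECTION_SEEDS))

# An introspective passage should score higher than a purely narrative one.
inner = "I think and wonder about what I feel and believe."
outer = "The army marched to the city and burned the gates."
```

Tracked across texts ordered in time, a score like this yields the sort of “arrow” the authors describe, though their metric is far more careful about word sense and corpus normalization.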


The Political Legacy of American Slavery

A new paper (PDF) by Avidit Acharya, Matthew Blackwell, and Maya Sen.

Abstract: We show that contemporary differences in political attitudes across counties in the American South trace their origins back to the influence of slavery’s prevalence more than 150 years ago. Whites who currently live in Southern counties that had high shares of slaves population in 1860 are less likely to identify as Democrat, more likely to oppose affirmative action policies, and more likely to express racial resentment toward blacks. These results are robust to accounting for a variety of attributes, including contemporary shares of black population, urban-rural differences, and Civil War destruction. Moreover, the results strengthen when we instrument for the prevalence of slavery using measures of the agricultural suitability to grow cotton. To explain our results, we offer a theory in which political and racial attitudes were shaped historically by the incentives of Southern whites to propagate racist institutions and norms in areas like the “Black Belt” that had high shares of recently emancipated slaves in the decades after 1865. We argue that these attitudes have, to some degree, been passed down locally from one generation to the next.


My recent post on lynching is directly relevant to this paper, as the authors identify lynching as one means by which those old attitudes "have, to some degree, been passed down locally from one generation to the next."

Sunday, September 15, 2013

Chomsky's Linguistics, a Passing Fancy?

Dismissing Chomsky's political writing as ephemeral, albeit effective, journalism, Pieter Seuren begins an assessment of his work in linguistics:
He dominated linguistics for four full decades, from 1960 till 2000 and since 2000 his influence has still been reverberating in many ways... [But] it looks very much as if his influence is rapidly declining, and with good reason, because his theoretical approach has been shown to be flawed in too many ways by a large variety of critics. For all we know, his reputation may turn out, sub specie humanitatis, to have been a momentary flare.
And this, more or less, is the Chomsky I read during my undergraduate years in the mid-60s and had abandoned by the time I entered SUNY Buffalo in 1973 for my Ph.D. work:
Chomsky subsequently tried to reinterpret the notion of an algorithmic generative grammar in realist terms, that is, as a theory of how the human mind deals with language, and no longer in the purely instrumentalist terms current in the brand of structuralism that he had been taught and according to which all that counts is to provide a precise and concise statement of the facts of each language, regardless of how they are implemented in human brains. This move from instrumentalism to realism was brought about by his contacts with a small group of young Harvard psychologists, headed by Jerome Bruner and George Miller, who were in the process of creating the new cognitive science in opposition to behaviourism. The incorporation of the concept of a generative grammar into the new paradigm of cognitive science provided him with the opportunity of being part of a much more general and more profound revolution affecting the whole of society, the passing from ‘inhuman’ behaviourism to ‘human’ cognitive science and thus the reinstatement of human values in the modern world, giving impetus to a new and wide-ranging emancipatory movement.
This is the Chomsky of Aspects of the Theory of Syntax (1965).

Muriquis Monkeys Live the Egalitarian Life

When [Karen] Strier was first getting to know the muriquis, primatology was still largely focused on just a handful of species that had adapted to life on the ground, including baboons, or that had close evolutionary relationships with humans, such as apes. This emphasis came to shape public perception of primates as essentially aggressive. We picture chest-beating, teeth-flashing dominant male gorillas competing to mate with any female they choose. We picture, as Goodall had witnessed beginning in 1974, chimpanzees invading other territories, biting and beating other chimps to death. Primates, including possibly the most violent one of all—us—seemed to be born ruffians.

...Strier’s research introduced the world to an alternative primate lifestyle. Female muriquis mate with a lot of males and males don’t often fight. Though bonobos, known for their casual sex, are often called the “hippie” primates, the muriquis in Strier’s study site are equally deserving of that reputation. They are peace-loving and tolerant. Strier also showed that the muriquis turn out to be incredibly cooperative, a characteristic that may be just as important in primate societies as vicious rivalry.

Strier’s ideas shook up primatology, making her an influential figure in the field. Her widely used textbook, Primate Behavioral Ecology, is in its fourth edition and “has no peers,” according to the American Society of Primatologists.
And they're relaxed about sex:

Saturday, September 14, 2013

One Good Thing About George W. Bush

He was no micro-manager.

Dick Cheney, unfortunately, was. But that’s beside the point.

George Bush apparently wasn’t. He liked to take his time away from the office. And that’s good.

The Presidency is such an insane job that it’s nuts for anyone to even try to do it all. Get a handful of trusted associates, some of them superb managers, if not all, and let them do their jobs. And see that they, in turn, do likewise: pick good subordinates, etc.

That way you can all set a good example of work/life balance.

Why? Because micro-managers are a pain in the ass and just make things worse for their direct reports without actually achieving any increase in effectiveness.

It’s about the job, not you.

I’m thinking G. W. Bush understood that. He just had a lousy set of ideas about what needed to get done.

Interdisciplinary work must be, well, interdisciplinary

This is an excerpt from a comment I made at the current Language Log discussion of computational linguistics and literary scholarship:

Some time ago, in connection with curriculum design, I took a look at the human sciences and concluded that there were roughly three broad conceptual styles: 1) qualitative: interpretive/hermeneutic and narrative, 2) behavioral or social scientific, with an emphasis on statistically controlled observations, and 3) structural/constructive: linguistics, cognitive science, where the idea is to construct a grammar or machine that generates the observed behavior. The humanities concentrate on the first and almost all interdisciplinary work in the humanities is confined to qualitative disciplines. Thus literature and psychology is mostly qualitative psychology. Classically, if I may, that's Freud or Jung. More recently there's been interest in cognitive science, but almost entirely on the qualitative side of work with cognitive metaphor, conceptual blending, other minds, and such. All of the interdisciplinary work associated with Theory, so-called, is qualitative.

But NLP (natural language processing, a branch of computational linguistics) is in the other two camps. The data gathering, preparation, and analysis are statistical; but there is, I believe, an underlying motivation in the structural/constructive camp. Working across the boundary between the qualitative methods of the traditional humanities and the more mechanistic methods of the other two camps is much harder. That will require cooperation between researchers, some of whom have internalized qualitative methods, while others have internalized mechanistic methods. Getting that discourse up and running, that’s where the real excitement and deep intellectual potential lie.
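To give a feel for the structural/constructive style, here is a toy generative grammar: a small machine that generates observed behavior, in this case well-formed five-word sentences. The grammar itself is a made-up miniature of my own, not anything from the linguistics literature:

```python
import random

# A miniature context-free grammar: each nonterminal maps to a list of
# possible expansions; bare strings are terminals (words).
GRAMMAR = {
    "S":  [["NP", "VP"]],
    "NP": [["the", "N"]],
    "VP": [["V", "NP"]],
    "N":  [["critic"], ["story"], ["reader"]],
    "V":  [["reads"], ["praises"]],
}

def generate(symbol="S", rng=random):
    """Expand a symbol by recursively choosing rules until only words remain."""
    if symbol not in GRAMMAR:          # terminal: a word
        return [symbol]
    expansion = rng.choice(GRAMMAR[symbol])
    words = []
    for part in expansion:
        words.extend(generate(part, rng))
    return words

# Every sentence the machine emits is grammatical by construction.
sentence = " ".join(generate())
```

The point of the exercise is the inversion of method: instead of interpreting a given text, you specify a mechanism and ask whether its output matches what speakers (or readers) actually produce.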

Friday, September 13, 2013

Apes Plan Ahead

Citation: van Schaik CP, Damerius L, Isler K (2013) Wild Orangutan Males Plan and Communicate Their Travel Direction One Day in Advance. PLoS ONE 8(9): e74896. doi:10.1371/journal.pone.0074896

Abstract: The ability to plan for the future beyond immediate needs would be adaptive to many animal species, but is widely thought to be uniquely human. Although studies in captivity have shown that great apes are capable of planning for future needs, it is unknown whether and how they use this ability in the wild. Flanged male Sumatran orangutans (Pongo abelii) emit long calls, which females use to maintain earshot associations with them. We tested whether long calls serve to communicate a male's ever-changing predominant travel direction to facilitate maintaining these associations. We found that the direction in which a flanged male emits his long calls predicts his subsequent travel direction for many hours, and that a new call indicates a change in his main travel direction. Long calls given at or near the night nest indicate travel direction better than random until late afternoon on the next day. These results show that male orangutans make their travel plans well in advance and announce them to conspecifics. We suggest that such a planning ability is likely to be adaptive for great apes, as well as in other taxa.

Humanists and Scientists: Asymmetrical Knowledge

In response to my post, Humanists & Scientists – Get a Life!, Margaret Freeman made this observation:
Though I agree with much of what you've said, I think it rather the case that while many humanists (I include myself among them) are attempting to educate themselves in cognitive science research, with some success, the reverse tends not to be true. Only infrequently does one come across any attempt on the part of the cognitive scientists to immerse themselves in arts and humanities research.
Why should they?

That’s not the response I made in the comment thread, but it’s one that occurs to me after thinking about the question.

We (humanists) study cognitive science (and other disciplines) because we think it will help us in our study of literature. Cognitive scientists, by and large, are not interested in studying literature (or film, or whatever). It’s just too difficult and complex for their techniques and theories. They may appreciate literature as private citizens, but they don’t need to read academic literary criticism in order to get pleasure out of reading Jane Austen, David Foster Wallace, or, for that matter, Stephen King or J. K. Rowling. Just as one can drive a car without knowing anything about the physics of internal combustion engines, so one can read The Left Hand of Darkness without knowing anything about literary criticism, much less immersing oneself in it.

The trouble comes when these folks, whether Steven Pinker, Dan Dennett, or whomever, start criticizing humanists.

Thursday, September 12, 2013

The morning mist

space
makes Manhattan


IMGP3519

seem so very


IMGP3579

distant

IMGP3576

Graffiti: Drop Back and Punt

That’s not the phrase I want – “drop back and punt” – but it’s the best I can do at the moment. The phrase I want is French (a language I do not know). I found it in, I believe, Arthur Koestler’s The Ghost in the Machine, which had a mighty influence on me in my late teens and early 20s. What the phrase meant was, well, drop back and punt, but in a somewhat more elevated and sophisticated context.

The phrase characterized a phenomenon found in evolutionary biology. A population is caught in a dead end; its niche is contracting and the species is so well adapted to that niche that it cannot survive elsewhere. How can the population survive? One thing that happens is that subsequent generations become dedifferentiated, simpler if you will (“dropping back...”), which allows them to move to a new niche (“... and punting”), and there the population can develop new specializations appropriate to that niche. Koestler saw certain forms of (cultural) creativity as functioning like that. (Bleg: Does anyone know the French term that Koestler used?)

That’s what graff culture is up to, which puts us in territory I explored in a recent post, Through Duchamp and Beyond: Graffiti in the Promised Land?, and a somewhat older one, Graffiti Hit the Reset Button on Culture. As we know, graffiti started totally outside the art world. It was made by people who simply wanted to put a claim on the world, to be recognized. They weren’t interested in making art; that came later, and fitfully.

As Susan Farrell, founder of Art Crimes, put it to me in email, graffiti is a combination of art and extreme sport. Take this example by Distort:

IMGP3487

While it has a certain style, it is not Art in any conventional sense. What’s remarkable is that it’s at the top of an abandoned five or six story building.

Wednesday, September 11, 2013

What about math, anyhow?

How is it that math, a product of the human mind, manages to describe the world so well? Perhaps because, when asking that question, we pick and choose just what aspects of the world we have in mind. Derek Abbott, an Australian engineer, argues that math really isn't all that effective. As reported by Phys.org:
So if mathematicians, engineers, and physicists can all manage to perform their work despite differences in opinion on this philosophical subject, why does the true nature of mathematics in its relation to the physical world really matter? 
The reason, Abbott says, is that because when you recognize that math is just a mental construct—just an approximation of reality that has its frailties and limitations and that will break down at some point because perfect mathematical forms do not exist in the physical universe—then you can see how ineffective math is. 
And that is Abbott's main point (and most controversial one): that mathematics is not exceptionally good at describing reality, and definitely not the "miracle" that some scientists have marveled at.
And some of us spend a lot of time dealing with phenomena that aren't handled very effectively by mathematics.
"Analytical mathematical expressions are a way making compact descriptions of our observations," he told Phys.org. "As humans, we search for this 'compression' that math gives us because we have limited brain power. Maths is effective when it delivers simple, compact expressions that we can apply with regularity to many situations. It is ineffective when it fails to deliver that elegant compactness. It is that compactness that makes it useful/practical ... if we can get that compression without sacrificing too much precision."
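Abbott’s “compression” point is easy to make concrete: a two-parameter line can stand in for a long list of measurements, at the cost of a small residual error. A minimal sketch, with invented data; the least-squares fit is computed from first principles:

```python
import math

def fit_line(points):
    """Ordinary least squares for y = a + b*x, from first principles."""
    n = len(points)
    mx = sum(x for x, _ in points) / n
    my = sum(y for _, y in points) / n
    sxx = sum((x - mx) ** 2 for x, _ in points)
    sxy = sum((x - mx) * (y - my) for x, y in points)
    b = sxy / sxx          # slope
    a = my - b * mx        # intercept
    return a, b

# 100 invented "observations" of an almost-linear process; a small
# deterministic wobble stands in for measurement noise.
data = [(x, 3.0 + 2.0 * x + 0.1 * math.sin(x)) for x in range(100)]

# Two fitted numbers now summarize all 100 points...
a, b = fit_line(data)

# ...to within a small residual: compression without sacrificing much precision.
max_error = max(abs(y - (a + b * x)) for x, y in data)
```

When the phenomenon has no such compact description, storing the two parameters buys you nothing over storing the raw observations, and that, on Abbott’s account, is where math stops being “effective.”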




Humanists & Scientists – Get a Life!

By now you’ve heard the news: Steven Pinker’s tried to offer some friendly advice to humanists and folks are all in a tizzy. Um, err, guys and gals, his bark is worse than his bite.

Wieseltier is afraid that the humanities are being overrun by thinkers from outside, who dare to tackle their precious problems—or “problematics” to use the, um, technical term favored by many in the humanities. He is right to be afraid. It is true that there is a crowd of often overconfident scientists impatiently addressing the big questions with scant appreciation of the subtleties unearthed by philosophers and others in the humanities, but the way to deal constructively with this awkward influx is to join forces and educate them, not declare them out of bounds. The best of the “scientizers” (and Pinker is one of them) know more philosophy, and argue more cogently and carefully, than many of the humanities professors who dismiss them and their methods on territorial grounds. You can't defend the humanities by declaring it off limits to amateurs. The best way for the humanities to get back their mojo is to learn from the invaders and re-acquire the respect for truth that they used to share with the sciences.
I can’t fault that.

Um, err, yes you can. That dig at "problematics" was gratuitous and hence unnecessary.

The first half of Brian Boyd’s The Origin of Stories: Evolution, Cognition, and Fiction gives an excellent summary and synthesis of work in the newer psychologies. But the second half, where he examines two texts, Odyssey and Horton Hears a Who!, is, frankly, dull stuff (my review is HERE). Except for the vocabulary, it could have been written 40 or 50 years ago.

We Are Creatures of Light

space
I love it when flowers and light

IMGP3588rd

dissolve into one another. As though

IMGP3588rdBW

they were the same thing.

IMGP3588rd2

Which – of course – they are.

Tuesday, September 10, 2013

Walt Disney: A Career in Three Acts

Republished from The Valve, April 3, 2007.
We have recently been blessed with two comprehensive biographies of Walt Disney. Neal Gabler's Walt Disney: The Triumph of the American Imagination came out late last year while Michael Barrier's The Animated Man: A Life of Walt Disney is scheduled for release later in April - though Amazon has already been shipping copies. The two books are very different in method, tone and achievement. Gabler's main text comes in at 633 pages while Barrier's has 325 pages; both books have extensive notes. Gabler had access to official Disney archives while Barrier did not - at least not this time around - though he'd been in the archives on an earlier project, Hollywood Cartoons: American Animation in Its Golden Age. As the page count suggests, Gabler crams in more information - more about the company and business affairs, more about the general context, and more about Disney's ancestors. Barrier's book is more focused on Disney and, I believe, more empathetic to him - though Gabler has written that his method is to identify with his subject.

Despite these differences, both present a career in three acts: animation, Disneyland, and the Florida project. To be sure, Disney's studio has always been involved with cartoons and, during his life, Disney was always involved with those cartoons. But the nature of his involvement changed in quality and intensity, allowing other projects to attract his most passionate attention and activity.

Animation

Disney began learning the craft of animation in Kansas City in 1920, but left for Hollywood in 1923 with his business affairs in a shambles. There he hooked up with his older brother Roy and they formed the company that, in time, set the standard in animation. Initially Disney did everything - drew the pictures, painted the cels, and photographed them. As Walt Disney Productions became more successful, however, the Disneys hired others and by the mid-1920s Walt was no longer doing the animation himself. But he remained deeply involved in planning the cartoons, coming up with gags and story lines, and supervising every detail from start to finish.

Toward the end of the decade the novelty of cartoons had worn off and the business was getting tighter. In 1928 Disney had the idea of adding fully synchronized sound to one of his cartoons, Steamboat Willie, the third Mickey Mouse cartoon. Other producers had been playing around with sound, but none had done so very effectively. With the help of a recent hire, Wilfred Jackson, who knew more about music than anyone else on staff, Disney was able to add a musical soundtrack to the film such that music and images were synchronized from beginning to end. Steamboat Willie was a smash hit; in consequence, Disney's business affairs began to turn around.

Xena more Real than the fashion industry

When she was 14 Jennifer Sky became a fashion model, thereby entering a world of abuse and exploitation. In her early 20s she got a six-week role in “Xena: Warrior Princess.”
I traveled to New Zealand, where the show was filmed, and I soon realized that acting was nothing like modeling. Everyone was constantly asking me if I was O.K.; if I needed to take a break. They assured me that the stunt person could do this or that move if I was not comfortable with it.

Perhaps the main difference, then and now, is that actors have a union and models do not.

Xena, however, was also special. It was feminism at work, with female lead characters who were unapologetically powerful and sexy. During my time on the show, on six episodes from the fourth to the fifth seasons, I kicked butt. Off screen, I was trained in numerous fighting techniques, in archery and horseback riding. On screen, I hung with a Christ figure called Eli; I had a same-sex lover and a boyfriend of a different race than mine; I threw bombs and walked along high wires. ...

For me, in my early 20s, still recovering from an adolescence of exploitation at the hands of the fashion industry, it was shout-it-to-the-heavens inspiring. Joining this world of warrior princesses reignited the hope-driven child in me.

Monday, September 9, 2013

Does 'traveling salesman' = 'connect the dots'?

The traveling salesman problem is well known: a salesman has to make X stops; of all the routes he can take, which is the shortest? Tom Vanderbilt has a fascinating article about this problem and its many cousins and great-grandchildren: Unhappy Truckers and Other Algorithmic Problems.
Powell’s biggest revelation in considering the role of humans in algorithms, though, was that humans can do it better. “I would go down to Yellow, we were trying to solve these big deterministic problems. We weren’t even close. I would sit and look at the dispatch center and think, how are they doing it?” That’s when he noticed: They are not trying to solve the whole week’s schedule at once. They’re doing it in pieces. “We humans have funny ways of solving problems that no one’s been able to articulate,” he says. Operations research people just punt and call it a “heuristic approach.”

This innate human ability was at work in Santilli’s daughter’s class, too. The fifth graders got it about right. As James MacGregor and Thomas Ormerod note, “the task of the traveling salesman problem may happen to parallel what it is natural for the perceptual system to do in any case when presented with an array of dots.” Curiously, using this heuristic approach, they note, subjects in experiments were “exceptionally good at finding the optimum tours.” In other experiments, when subjects were shown images of optimal tours, they were thought to be more aesthetically pleasing than sub-optimal tours.
H/t Tyler Cowen.
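The “heuristic approach” the article mentions can be as simple as nearest-neighbor routing: always drive to the closest stop you haven’t made yet. A minimal sketch of my own, with a brute-force check that is only feasible for tiny inputs (the point set in the test is invented):

```python
import math
from itertools import permutations

def tour_length(points, order):
    """Total length of a closed tour visiting points in the given order."""
    return sum(
        math.dist(points[order[i]], points[order[(i + 1) % len(order)]])
        for i in range(len(order))
    )

def nearest_neighbor_tour(points, start=0):
    """Greedy heuristic: always walk to the closest unvisited stop."""
    unvisited = set(range(len(points))) - {start}
    tour = [start]
    while unvisited:
        last = tour[-1]
        nxt = min(unvisited, key=lambda j: math.dist(points[last], points[j]))
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

def brute_force_tour(points):
    """Exact optimum by trying every ordering -- explodes factorially."""
    n = len(points)
    return min(
        ([0] + list(p) for p in permutations(range(1, n))),
        key=lambda order: tour_length(points, order),
    )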

There IS no Theory of Everydamnthing, Get Over It

Philip Kitcher, Things Fall Apart (yes indeed they do!), The New York Times:
Thinkers in the grip of the Newtonian picture of science want a general basis for general phenomena. Life isn’t like that. Unity fails at both ends. To understand the fundamental processes that go on in living things — mitosis, meiosis, inheritance, development, respiration, digestion and many, many more — you need a vast ensemble of models, differing on a large number of details. Spelling out the explanations requires using metaphors (“reading” DNA) or notions that cannot be specified generally and precisely in the austere languages of physics and chemistry (“close association”). But the phenomena to be explained also decompose into a number of different clusters.

The molecular biologist doesn’t account for life, but for a particular function of life (usually in a particular strain of a particular species). Nagel’s 19th-century predecessors wondered how life could be characterized in physico-chemical terms. That particular wonder hasn’t been directly addressed by the extraordinary biological accomplishments of past decades. Rather, it’s been shown that they were posing the wrong question: don’t ask what life is (in your deepest Newtonian voice); consider the various activities in which living organisms engage and try to give a piecemeal understanding of those.
Mind is like that too:
Minds do lots of different things. Neuroscience and psychology have been able to explore a few of them in promising ways. Allegedly, however, there are “hard problems” they will never overcome. The allegations are encouraged by an incautious tendency for scientists to write as if the most complex functions of mental life — consciousness, for example — will be explained tomorrow.

The route to the molecular account of organic functions began in a relatively humble place, with eye color in fruit-flies. A useful moral for neuroscience today would be to emulate the efforts of the great geneticists in the wake of the rediscovery of Mendel’s ideas. Start with more tractable questions — for example, how do minds direct the movement of our muscles? — and see how far you can get. With luck, in a century or so, the issue of how mind fits into the physical world will seem as quaint as the corresponding concern about life.
Frankly, it seems a bit quaint now. And, come to think of it, though I've given Dennett a hard time for his boneheaded (again, with the frankness) views on memes, he seems to realize this. So give him some props, people!

Of Nagel and many others, Kitcher observes: "the phenomena that concern him, mind and value, are not illusory, but it might nevertheless be an illusion that they constitute single topics for which unified explanations can be given." YES! It's pluralism all the way down!

Neuroaesthetics

Given my longstanding interest in the arts and in the brain you’d think I’d be interested in something called neuroaesthetics, wouldn’t you? Well, in principle I am. In practice, I’ve moved on, or back, whatever. I really have to get down to analyzing and describing particular works.

Still, if the idea of neuroaesthetics excites you, where would you go online to find out more? Well, you could go where I go, to the Wikipedia. As I was already aware of much of the material mentioned in that article (mostly Zeki, Ramachandran), I went right to the list of external links.

I don’t know whether Semir Zeki is responsible for coining the term, but he’s probably the researcher most identified with neuroaesthetics. His institute has a website where you’ll find lots of stuff, including links to publications (and not just from his lab), and Zeki’s blog.

The International Network for Neuroaesthetics appears to have a wealth of material, including links to articles (theory, visual arts, music, dance, facial attractiveness, expertise, design, and neurophysiology), books, and media coverage.

Here’s a useful critique of this emerging field: Conway BR, Rehding A (2013) Neuroaesthetics and the Trouble with Beauty. PLoS Biol 11(3): e1001504. This passage from the conclusion has special resonance for me:
The field will benefit from developing models relating observations from the humanities to the careful neuroscience that has uncovered computations at cellular resolution within the value-judging structures of the monkey brain. These structures, not coincidentally, are analogous to those identified in fMRI studies of beauty in humans. Some neurons within these structures encode the value of the choices on offer, while others encode the value of the selected choice. Moreover, the neurons adapt on different timescales, displaying “menu-invariant” firing at short timescales and adaptable behavior on longer timescales. This adaptation may account for our ability to make choices across vastly different scales, for example from a restaurant menu in one instance and from houses offered for sale in the next instance [48]. It seems entirely reasonable—even likely—that these neurons are also implicated in the thorny task of deciding what is beautiful. Reformulated in this way, neuroaesthetics is decoupled from beauty and can exploit advances across a range of empirical neuroscience, from sensory encoding to decision making and reward.
This business of time-scales will come up later in this post, where I reference a “perspectives” piece by Son Preminger.