Tuesday, April 30, 2013

Connections: Topography in the Brain

A profile of two neuroscientists, May-Britt Moser and Edvard I. Moser: "In 2005, they and their colleagues reported the discovery of cells in rats’ brains that function as a kind of built-in navigation system that is at the very heart of how animals know where they are, where they are going and where they have been. They called them grid cells."

A short (6 min) video about "Sharon Roseman of Denver, who gets lost every day — in the streets she’s lived in for 20 years, even in her own house. When she wakes, her walls seem to have moved overnight. Her world can be transformed in the blink of an eye." She's been diagnosed with "Developmental Topographical Disorientation (D.T.D.), a rare neurological disorder that renders people unable to orient themselves in any environment."

Dennettisms

The New York Times has an article on Dan Dennett, "perhaps America’s most widely read (and debated) living philosopher." I've read some of his stuff, but haven't found it terribly useful for my own work.  But, FWIW, I'm sympathetic to these two bits:
The self? Simply a “center of narrative gravity,” a convenient fiction that allows us to integrate various neuronal data streams.

The elusive subjective conscious experience — the redness of red, the painfulness of pain — that philosophers call qualia? Sheer illusion.
And, if he wishes to dismantle Searle's famous Chinese room argument, more power to him. But he really ought to give up on Dawkinsian memetics.

Monday, April 29, 2013

Talking Chimps and UFOs: A thought experiment

This is an out-take from Beethoven’s Anvil, my book on music. It’s about a thought experiment that first occurred to me while in graduate school in the mid-1970s. Consider the often astounding and sometimes absurd things that trainers can get animals to do, things they don’t do naturally. Those acts are, in some sense, inherent in their neuro-muscular endowment, but not evoked by their natural habitat. But place them in an environment ruled by humans who take pleasure in watching dancing horses, and . . . Except that I’m not talking about horses.


It seems to me that what is so very remarkable about the evolution of our own species is that the behavioral differences between us and our nearest biological relatives are disproportionate to the physical and physiological differences. The physical and physiological differences are relatively small, but the behavioral differences are large.

In thinking about this problem I have found it useful to think about how at least some chimpanzees came to acquire a modicum of language. The earliest efforts tried to teach chimps to speak, and all of them ended in failure. In the most intense of these efforts, Keith and Cathy Hayes raised a baby chimp in their household from 1947 to 1954. But that close and sustained interaction with Vicki, the young chimp in question, was not sufficient. Then in the late 1960s Allen and Beatrice Gardner began training a chimp, Washoe, in Ameslan, a sign language used among the deaf. This effort was far more successful. Within three years Washoe had a vocabulary of 85 Ameslan signs and she sometimes created signs of her own.

The results startled the scientific community and precipitated both more research along similar lines—as well as work where chimps communicated by pressing iconically identified buttons on a computerized panel—and considerable controversy over whether or not ape language was REAL language. That controversy is of little direct interest to me, though I certainly favor the view that this interesting behavior is not really language. What is interesting is the fact that these various chimps managed even the modest language that they did.


The string of earlier failures had led to a cessation of attempts. It seemed impossible to teach language to apes. It would seem that they just didn’t have the capacity. Upon reflection, however, the research community came to suspect that the problem might have more to do with vocal control than with central cognitive capacity. And so the Gardners acted on that supposition and succeeded where others had failed. It turns out that whatever chimpanzee cognitive capacity was, it was capable of surprising things.

Note that nothing had changed about the chimpanzees. Those that learned some Ameslan signs, and those that learned to press buttons on a panel, were of the same species as those that had earlier failed to learn to speak. What had changed was the environment. The (researchers in the) environment no longer asked for vocalizations; the environment asked for gestures, or button presses. These the chimps could provide, thereby allowing them to communicate with the (researchers in the) environment in a new way.

It seemed to me that this provided a way to attack the problem of language origins from a slightly different angle. So I imagined that a long time ago groups of very clever apes – more so than any extant species – were living on the African savannas. One day some flying saucers appeared in the sky and landed. The extra-terrestrials who emerged were extraordinarily adept at interacting with those apes and were entirely benevolent in their actions. These creatures taught the apes how to sing and dance and talk and tell stories, and so forth. Then, after thirty years or so, the ETs left without a trace. The apes had absorbed the ETs’ lessons so well that they were able to pass them on to their progeny generation after generation. Thus human culture and history were born.

To the Dark Side

The NYTimes' David Carr documents a shift to the dark side in American TV: "We used to turn on the television to see people who were happier, funnier, prettier versions of ourselves. But at the turn of the century, something fundamental changed and we began to see scarier, crazier, darker forms of the American way of life." He dates the change-over to The Sopranos, of course. Reading from a book to be released in July, Difficult Men: Behind the Scenes of a Creative Revolution by Brett Martin, Carr notes that this is not only a shift in what's on the screen, but in who's in control:
What becomes remarkable in retrospect is not just the rise of a new kind of storytelling, but the realization that an entire industry was built and controlled by writer-producers, men who typed for a living. Among others, Mr. Martin recounts the rise of David Chase, the creator of “The Sopranos”; David Milch, who came out of “NYPD Blue” to create “Deadwood”; David Simon, a former reporter for The Baltimore Sun who created “The Wire”; and Matthew Weiner, a “Sopranos” alumnus who conjured “Mad Men.”

That cohort and several others produced a small-screen equivalent to the revolution in American cinema during the 1970s, led by Martin Scorsese, Robert Altman and Francis Ford Coppola. The most remarkable narrative ambitions are now defined by a television season more often than a film, and show runners like Mr. Chase became all-powerful overlords of the worlds they created.
And now we have Netflix creating its own content, such as House of Cards. How will that play out?

Donald Shirley, Betwixt and Between

The New York Times has an interesting obituary about Donald Shirley, the son of an Episcopal priest who became "a pianist and composer who gathered classical music with jazz and other forms of popular music under a singular umbrella after being discouraged from pursuing a classical career because he was black." His parents were Jamaican and he performed Tchaikovsky’s Piano Concerto No. 1 in B-flat minor with the Boston Pops in 1945, when he was 18. The impresario Sol Hurok advised him that the classical public would not accept a black pianist, so
Mr. Shirley took to playing at nightclubs and invented what amounted to his own musical genre. First as part of a duo with a bassist and later as the leader of the Don Shirley Trio, featuring a bassist and a cellist — an unusual instrumentation suggesting the sonorities of an organ — he produced music that synthesized popular and classical sounds. He often melded American and European traditions by embedding a well-known melody within a traditional classical structure.

Saturday, April 27, 2013

Gatsby, then and Now

The New York Times has an interesting article contrasting the original cover art for The Great Gatsby with a new design pitched at the movie remake coming out shortly. The original art is stunning; the new art, not so much. The book has been on my mind as I've worked my way through Mad Men. Don Draper is a descendant of James Gatz, the all-American boy who strove to remake himself from nothing through the capacity of myth.

End-of-Life

From an article in The Atlantic Monthly about medical care for people who are dying:
Though no one knows for sure, unwanted treatment seems especially common near the end of life. A few years ago, at age 94, a friend of mine’s father was hospitalized with internal bleeding and kidney failure. Instead of facing reality (he died within days), the hospital tried to get authorization to remove his colon and put him on dialysis. Even physicians tell me they have difficulty holding back the kind of mindlessly aggressive treatment that one doctor I spoke with calls “the war on death.” Matt Handley, a doctor and an executive with Group Health Cooperative, a big health system in Washington state, described his father-in-law’s experience as a “classic example of overmedicalization.” There was no Conversation. “He went to the ICU for no medical reason,” Handley says. “No one talked to him about the fact that he was going to die, even though outside the room, clinicians, when asked, would say ‘Oh, yes, he’s dying.’ ”

“Sometimes you block the near exits, and all you’ve got left is a far exit, which is not a dignified and comfortable death,” Albert Mulley, a physician and the director of the Dartmouth Center for Health Care Delivery Science, told me recently. As we talked, it emerged that he, too, had had to fend off the medical system when his father died at age 93. “Even though I spent my whole career doing this,” he said, “when I was trying to assure as good a death as I could for my dad, I found it wasn’t easy.”
My father's last moments, from an old post:

Music Bits: Infants, Synch

Infants Benefit from Music

Research conducted at McMaster University shows that year-old infants who interact with their parents through music benefit in ways that passive listeners do not:
Babies from the interactive classes showed better early communication skills, like pointing at objects that are out of reach, or waving goodbye. Socially, these babies also smiled more, were easier to soothe, and showed less distress when things were unfamiliar or didn't go their way... 
Different People Respond to Music in Highly Similar Ways

Researchers at Stanford used fMRI to investigate neural response to classical music:
"We spend a lot of time listening to music—often in groups, and often in conjunction with synchronized movement and dance," said Vinod Menon, PhD, a professor of psychiatry and behavioral sciences and the study's senior author. "Here, we've shown for the first time that despite our individual differences in musical experiences and preferences, classical music elicits a highly consistent pattern of activity across individuals in several brain structures including those involved in movement planning, memory and attention."

Tuesday, April 23, 2013

Strategic Thinking in Jane Austen

The New York Times gives notice of Michael Chwe, Jane Austen, Game Theorist, just published by Princeton University Press. Chwe is a stellar young economist whose 2001 Rational Ritual: Culture, Coordination, and Common Knowledge is fascinating, though a bit technical. A couple years ago Chwe published "Rational Choice and the Humanities: Excerpts and Folktales" in Occasion, which is available online, and in which he discusses examples from Shakespeare and Richard Wright before moving on to folktales, which also get a chapter in the Austen book.

From the NYTimes article, after noting that game theory was invented by John von Neumann in the 1940s and quickly became a staple of Cold War strategic analysis and of several academic disciplines, including economics and biology:
Take the scene in “Pride and Prejudice” where Lady Catherine de Bourgh demands that Elizabeth Bennet promise not to marry Mr. Darcy. Elizabeth refuses to promise, and Lady Catherine repeats this to Mr. Darcy as an example of her insolence — not realizing that she is helping Elizabeth indirectly signal to Mr. Darcy that she is still interested. 
It’s a classic case of cluelessness, which is distinct from garden-variety stupidity, Mr. Chwe argues. “Lady Catherine doesn’t even think that Elizabeth” — her social inferior — “could be manipulating her,” he said. (Ditto for Mr. Darcy: gender differences can also “cause cluelessness,” he noted, though Austen was generally more tolerant of the male variety.)
I've not read the Austen book, but I do have one misgiving. Game theory is about interactions between autonomous individuals. While novels depict their characters as being autonomous actors, they are not autonomous. They are creatures of the author. 

Criticism and Learning


There are works of art that we like immediately. And those we don’t. What of it?

What I’m thinking about is that immediate liking may be based on superficial characteristics, and so with immediate dislike. In the first case, again as I’m imagining things, there’s nothing beyond those superficial characteristics, the ones that are pleasing. In the second case I’m imagining that those superficial characteristics, the ones provoking a negative reaction, are, well, superficial, and that there’s more going on. What’s the role of criticism in these cases?

In the first case, and to a first approximation, the criticism points out the superficiality of the work, whatever it and they may be. This may or may not be difficult to do, but given the attraction of the superficial work, it may be difficult to take. In the second case the role is to educate the reader, viewer, or listener (as the case may be), to bring them to change so that they not only understand but also enjoy the more profound work. Such change takes work and work takes time; it is not accomplished in the mere reading of a critical essay.

Such time and effort should not be squandered. The burden of criticism is thus a heavy one. The criticism must be exact if it is to be profitably exacting.

Saturday, April 20, 2013

Religion on the Ground

The New York Times Magazine has a profile of Robert Coogan, a Catholic priest who is chaplain at a prison in Mexico that is run, for all practical purposes, by the Zetas, a powerful crime syndicate.
It’s true that for all their infamous cruelty — beheadings, kidnappings, the mass murder of 72 Central and South American migrants in 2010 — the Zetas are also known for their respect of the Catholic Church. After I wrote in 2011 about a chapel that Lazcano, one of the cartel’s founders, built in his hometown, word trickled back to Saltillo’s Zetas, who insisted on doing something similar for Coogan. “What color would you like the chapel painted?” one of the leaders asked him. Coogan said he liked it the way it was and told them not to bother because the roof leaked. “Two hours later they had people on the roof,” he said. “There was nothing you could do about it. They made a decision.”

Occasionally there have been more significant moments of solidarity between the cartel members and the priest. In January 2012, dozens of soldiers and police officers raided the Saltillo Cereso. In addition to confiscating drugs and alcohol and electronics, they ransacked the chapel and broke apart the tabernacle. Coogan called it a sacrilege as he showed me the destruction. But the raid ultimately deepened his relationship with the Zetas, who see the Mexican military as villains, not because they represent law and order but because they are presumed to be in the pocket of the Sinaloa Cartel. A few months later, when Coogan strongly resisted a Zetas request to bless a building that included a shrine to Santa Muerte, the idolatrous saint of death, the Zetas moved the shrine and replaced Santa Muerte with Pancho Villa, the revolutionary hero. “To call the Zetas evil, I wouldn’t want to do that,” Coogan said. In a country where the government is corrupt, the church is weak and business tycoons exploit workers while protecting lucrative monopolies, he said of the group’s vicious behavior, “It’s what they were taught.”
That's a world away from matters of reason and rationality that dominate so much discussion of religion offered by atheist intellectuals.

Gene Siskel on Dumbo

From his Playboy interview in tandem with Roger Ebert:
The movie with the strongest emotional pull of my youth—and it has to do with my psychological history—was Dumbo. The separation from the mother was terrifying to me. And also Dumbo’s flying. It was like my whole ego was riding right on his trunk when he had to fly and believe in that mouse. I felt that I had big ears and I think most people feel that they have big ears stashed somewhere in their life.
BTW, the whole interview is worth a quick read.


Sinatra on Song

From his Playboy interview in 1963:
Sinatra: ... I get an audience involved, personally involved in a song—because I’m involved myself. It’s not something I do deliberately: I can’t help myself. If the song is a lament at the loss of love, I get an ache in my gut. I feel the loss myself and I cry out the loneliness, the hurt and the pain that I feel. 
Playboy: Doesn’t any good vocalist “feel” a song? Is there such a difference.… 
Sinatra: I don’t know what other singers feel when they articulate lyrics, but being an 18-karat manic-depressive and having lived a life of violent emotional contradictions, I have an overacute capacity for sadness as well as elation. I know what the cat who wrote the song is trying to say. I’ve been there—and back. I guess the audience feels it along with me. They can’t help it. Sentimentality, after all, is an emotion common to all humanity.

Friday, April 19, 2013

Mad Men: Some Brief Notes

I've been working my way through back episodes of Mad Men–I don't have cable, so I can't watch the current episodes. Interesting show. But what's it about?

Well, Don Draper, Peggy Olson and a bunch of others, the 1960s, the advertizing bizness, life, and all that, of course. But why tell these stories, and tell them now? Just as science fiction is always about the present, though set in the future, so these stories set in the past are about the present. That is to say, they have been crafted to resonate with the present.

Just what is the resonant spill-over?

Don Draper, of course, is a handsome train wreck of a guy. That he is the central character is interesting, since he seems precisely to lack a center. But then, does any one of us really have a center? (As I recall my 60s intellectual history, Derrida came to prominence around a 1966 conference paper that shot holes in the idea of a center: "Structure, Sign, and Play in the Discourse of the Human Sciences").

And yet there are a few episodes in season 4, centered on his relationship with Anna Draper, that make him seem like a decent human being. Anna is the real wife of the man whose identity he (that is, Dick Whitman) assumed in Korea. His concern for her has a quality that's quite different from any of his other relationships.

Celebrity Philosophers: Russell, Sartre, and Comments by Harman

Graham Harman has had two recent posts on celebrity philosophers. The first looks at a NYTimes column on the subject and defends Žižek: he may be a celebrity, but he's also serious. He then goes into an anti-analytic philosophy riff, as the Times piece seems to have been the work of an analytic snarking on Continental philosophy. Harman's second post responds to a post Jon Cogburn had made in response and is yet another discussion of the analytic/Continental divide. In this post Harman mentions Bertrand Russell, who was, of course, an analytic philosopher and a public intellectual.

I don't know what Playboy magazine is up to these days, assuming it still exists, but there was a time when the monthly interview feature had some pretty interesting pieces, including interviews with Jean-Paul Sartre and Bertrand Russell. Does that mean that Playboy was straddling the analytic/Continental divide?

That divide, incidentally, seems to be a recurring motif in Harman's thought, at least on his blog, though he also points out that Latour seems to operate outside that divide. If one were to work a deconstructive turn, this little riff strikes me as being an interesting way in. The distinction seems to matter a great deal to him as a philosopher.

Monday, April 15, 2013

Live Music is Good for Premature Infants

Reported in The New York Times:
Beth Israel Medical Center in New York City led the research, conducted in 11 hospitals, which found that live music can be beneficial to premature babies. In the study, music therapists helped parents transform their favorite tunes into lullabies.

The researchers concluded that live music, played or sung, helped to slow infants’ heartbeats, calm their breathing, improve sucking behaviors important for feeding, aid sleep and promote states of quiet alertness. Doctors and researchers say that by reducing stress and stabilizing vital signs, music can allow infants to devote more energy to normal development...

The study, published Monday in the journal Pediatrics, adds to growing research on music and preterm babies. Some hospitals find music as effective as, and safer than, sedating infants before procedures like heart sonograms and brain monitoring. Some neonatologists say babies receiving music therapy leave hospitals sooner, which can aid development and family bonding and save money.

Sunday, April 14, 2013

Automated Grading

By now you've probably read, in one context or another, about the use of computers to grade essay questions. And perhaps you've asked yourself, "How's that possible?" Elijah Mayfield has written an interesting essay on the subject: Six Ways the edX Announcement Gets Automated Essay Grading Wrong.

Some quick points: The computer doesn't "read" the essays in any ordinary sense. It simply learns to classify them. How does it do that? It's trained. First, expert human readers grade a largish number of essays–in the hundreds. Then the computer examines what the humans have done, noting the features that characterize the A essays, the B essays, and so forth. It then takes those characterizations and applies them to essays it is given.

It is not useful for grading seminar papers or creative writing courses. It only makes sense in large courses where students are writing relatively circumscribed essays on the same topic.
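
To make that train-then-classify loop concrete, here's a minimal sketch in Python using scikit-learn. The bag-of-words features and the tiny hand-graded sample are purely illustrative assumptions on my part; this is not Mayfield's system or edX's grader, just the general shape of the technique.

```python
# Toy illustration of machine "grading": learn from hand-graded examples,
# then assign a label to a new essay. Features and data are made up.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical hand-graded training essays (in practice, hundreds of them).
train_essays = [
    "The author states a clear thesis and supports it with specific evidence.",
    "This essay mostly restates the prompt and offers little analysis.",
    "The argument is organized, though the supporting evidence is thin.",
    "Sentences wander with no visible connection to the prompt.",
]
train_grades = ["A", "C", "B", "F"]  # labels supplied by expert human readers

# "Training": the model notes which surface features tend to co-occur with
# which grades. It never reads the essays in any human sense.
model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
model.fit(train_essays, train_grades)

# "Grading": apply the learned characterization to an unseen essay,
# instantly and as many times as a student cares to revise.
new_essay = "The essay offers a thesis, two supporting examples, and a conclusion."
print(model.predict([new_essay])[0])
```

The point of the sketch is the division of labor: humans supply the graded examples; the machine supplies fast, repeatable classification of new essays against those examples.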

And then there's this:
Machine learning gives us a computer program that can be given an essay and, with fairly high confidence, make a solid guess at labeling the essay on a predefined scale. That label is based on its observation of hundreds of training examples that were hand-graded by humans, and you can point to specific, concrete features that it used for its decision, like seeing webbed feet in a picture and calling it a duck.

Let’s also say that you can get that level of educated estimation instantly – less than a second – and the cost is the same to an institution whether the system grades your essay once or continues to give a student feedback through ten drafts. How many drafts can a teacher read to help in revision and editing? I assure you, fewer than a tireless and always-available machine learning system.

We shouldn’t be thinking about this technology as replacing teachers. Instead, we should be thinking of all the places where students can use this information before it gets to the point of a final grade. How many teachers only assign essays on tests? How many students get no chance to write in earlier homework, because of how much time it would take to grade; how many are therefore confronted with something they don’t know how to do and haven’t practiced when it comes time to take an exam that matters?
H/t Tyler Cowen.

Thursday, April 11, 2013

Graffiti: It's the Name that Does It

"The irony is I think it's the name, the writing your name, which caused it to become universally attractive, because of what that meant. To give an individual a voice, a visibility, that they could never have in any other way," he said. "If people just started out with murals and political statements, it would never have had the traction in the same way."
From my collection:

[photos: graffiti by CEAZE]

Monday, April 8, 2013

Neuroaesthetics?

Conway BR, Rehding A (2013) Neuroaesthetics and the Trouble with Beauty. PLoS Biol 11(3): e1001504. doi:10.1371/journal.pbio.1001504

Note that here neuroaesthetics means visual art, paintings and drawings. The final paragraph (there is no abstract):
There may well be a “beauty instinct” implemented by dedicated neural machinery capable of producing a diversity of beauty reactions, much as there is language circuitry that can support a multitude of languages (and other operations). A need to experience beauty may be universal, but the manifestation of what constitutes beauty certainly is not. On the one hand, a neuroaesthetics that extrapolates from an analysis of a few great works, or one that generalizes from a single specific instance of beauty, runs the risk of missing the mark. On the other, a neuroaesthetics comprising entirely subjectivist accounts may lose sight of what is specific to encounters with art. Neuroaesthetics has a great deal to offer the scientific community and general public. Its progress in uncovering a beauty instinct, if it exists, may be accelerated if the field were to abandon a pursuit of beauty per se and focus instead on uncovering the relevant mechanisms of decision making and reward and the basis for subjective preferences, much as Fechner counseled. This would mark a return to a pursuit of the mechanisms underlying sensory knowledge: the original conception of aesthetics.



Animated Acting: Sporn on Tytla

Vladimir Tytla was one of the great Disney animators. He came just before the era of the so-called "nine old men," but was arguably better than any of them. Michael Sporn has a post where he examines two scenes Tytla did, one in Dumbo and the other in Night on Bald Mountain (from Fantasia), with frame grabs and drawings. In general Sporn observes:
…animation, there was animation technique and styles. These rarely had anything to do with acting. However, there were a number of animators at the Disney studio who wanted to put the focus on their acting and actually studied Stanislavsky and Boleslavsky so that their characters would give a great performance. Tytla was certainly a leader among the animators to do this.
Of the Dumbo performance, which Sporn believes to be one of the greatest ever, he says: "Dumbo was gentle, all truth. The honest performance meant keeping everything  above board and on the table. That is undoubtedly the performance Tytla drew."

And of the devil in Night on Bald Mountain:
The devil’s motion throughout this piece is very slow, tightly drawn images of the devil lyrically moving through the musical phases. It’s pure dance. Any distortion is done via the tight editing that Tytla has constructed. Very close images of the hands with the flame shaped dancers moving about in tight close up as Chernobog’s large face with searing eyes closely watching the fallen creatures dancing in his hands. It’s distortion enough. 
Tytla has constructed the most romantic sequence imaginable, and the emotion of the dance acts as the climax for all of Fantasia, and it succeeds in spades. All hoisted by the animation, itself. No loud crushing peak, just a dance done in a tightly choreographed number completely controlled by Tytla. It’s the ultimate tour de force of animation, and we’ll never see the likes of it again.
My own verdict on this scene: "There’s a heft and grandeur in Chernobog that is worthy of Milton’s Satan in Paradise Lost. Without Tytla’s powerful acting, his powerful realization of Chernobog, Night on Bald Mountain would only be a magnificent swirling freak show. Tytla gave it the weight of Greek tragedy."

Read Sporn's post and examine the frame grabs and the drawings. Then look back through the other three posts in this series.

Sunday, April 7, 2013

Animated Acting

We all know that Marlon Brando was a Method actor, and that The Method has roots in the work of the Russian director, Stanislavski. You know who else was a devotee of The Method? Mickey Mouse, Donald Duck, and the three little pigs, the whole Disney stable of characters. That's what Donald Crafton argues in Shadow of a Mouse: Performance, Belief, and World-Making. See, for example, this passage about the Disney animators and Donald Graham, who taught a long-standing course on action analysis at the Disney studio (p. 48):
The animators and Graham were pursuing a Stanislavskian ideal of embodiment, trying to inject human thought, motion and emotion into their formerly figurative hieroglyphs, but the result was more complicated than they intended. They constructed lifelike movements and gave their characters the illusion of sentience, free will, and human frailty without the visible strings to the animators or their techniques. Inadvertently, though, they introduced ambiguity and increased the likelihood of unintentional meanings. [Quoted from Adam Weaver Animation HERE]
Here's the publisher's blurb:
Animation variously entertains, enchants, and offends, yet there have been no convincing explanations of how these films do so. Shadow of a Mouse proposes performance as the common touchstone for understanding the principles underlying the construction, execution, and reception of cartoons. Donald Crafton's interdisciplinary methods draw on film and theater studies, art history, aesthetics, cultural studies, and performance studies to outline a personal view of animated cinema that illuminates its systems of belief and world making. He wryly asks: Are animated characters actors and stars, just like humans? Why do their performances seem live and present, despite our knowing that they are drawings? Why is animation obsessed with distressing the body? Why were California regional artists and Stanislavsky so influential on Disney? Why are the histories of animation and popular theater performance inseparable? How was pictorial space constructed to accommodate embodied acting? Do cartoon performances stimulate positive or negative behaviors in audiences? Why is there so much extreme eating? And why are seemingly insignificant shadows vitally important?
I've not yet read the book, but it's plausible on the face of it. Henry Jenkins has a four-part interview with Crafton which goes through the argument without all the (I'm sure) fascinating detail: Part 1, Part 2, Part 3, and Part 4.

Skepticism about Obama's Brain Project

"When you look at the cerebellum it's a godawful mess" says Randy Gallistel, and its circuitry is very regular. Some remarks at blogginheads.tv:


BTW, the whole discussion is worthwhile. Gallistel makes the point that the human genome project was driven by a good computational model of what the genome does. We have no such model for the nervous system. In particular, we don't know how read-write memory is implemented in the nervous system. Read-write memory is essential to all computation.

And THAT's what the discussion is about, memory. The action potential ("spike") carries information from one place to another in the nervous system. But what carries information forward in time?

If you're interested in a somewhat detailed and technical account of Gallistel's views on memory, see this paper: Gallistel CR, Matzel LD. 2013. The neuroscience of learning: Beyond the Hebbian Synapse (PDF). Annual Review of Psychology 64: 169-200.

Friday, April 5, 2013

Citizen Science, that Means YOU

Science isn't just for scientists. It's for all of us. Here's a study of 11,000 citizen scientists involved with a single project:

Galaxy Zoo: Motivations of Citizen Scientists

Citizen science, in which volunteers work with professional scientists to conduct research, is expanding due to large online datasets. To plan projects, it is important to understand volunteers' motivations for participating. This paper analyzes results from an online survey of nearly 11,000 volunteers in Galaxy Zoo, an astronomy citizen science project. Results show that volunteers' primary motivation is a desire to contribute to scientific research. We encourage other citizen science projects to study the motivations of their volunteers, to see whether and how these results may be generalized to inform the field of citizen science.
Download a PDF HERE.

Indiana Philosophy "Ontology" Project

Here's the link. It takes you to the front door.

I've put "ontology" in scare quotes because the term is used as it is in computer science, which is not quite its meaning in philosophy proper. The Indiana Philosophy Ontology Project is about philosophy, and its conceptual structure, and not about the world at all.

As it says on the front door:
Philosophy Big Data
The InPhO analyzes over 37 million words of philosophical content from:
We are committed to open access, with data available via a REST API, a monthly OWL archive of the ontology, visualizations and datafiles posted on our datablog, and source code at GitHub. All data uses the Creative Commons BY-NC-SA 3.0 license.
This is digital humanities. Have fun poking around.
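
If "poking around" programmatically appeals to you, here is a minimal sketch of pulling JSON from a REST API with Python's requests library. The endpoint URL and the response field are placeholders I've made up for illustration, not InPhO's documented routes; consult the project's own API documentation for the real ones.

```python
# Hypothetical sketch of querying a JSON-over-REST service.
# BASE_URL and the field names are placeholders, not real InPhO endpoints.
import requests

BASE_URL = "https://api.example.org"  # placeholder host; substitute the real one


def fetch_entry(entry_id: int) -> dict:
    """Fetch one ontology entry as JSON and return it as a Python dict."""
    response = requests.get(f"{BASE_URL}/idea/{entry_id}.json", timeout=10)
    response.raise_for_status()  # fail loudly on HTTP errors
    return response.json()


if __name__ == "__main__":
    entry = fetch_entry(42)  # arbitrary illustrative ID
    print(entry.get("label", "(no label field in this response)"))
```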

Frog Up Ponyo Sita: Complacency vs. Vision

An old post in which I talk about Disney (The Princess and the Frog), Pixar (Up), Hayao Miyazaki (Ponyo on the Cliff by the Sea), and Nina Paley (Sita Sings the Blues). Disney isn't what it was in the old days, Pixar may be cresting the hill and looking at the down slope, but Miyazaki is still going strong and Nina Paley, who knows what she'll do next.
Walt Disney’s company created the first feature-length animated film and was, for a long time, the most distinguished producer of such films. Pixar, now owned by Disney, is the most important producer of animated features made through 3D computer graphics, techniques Pixar pioneered. Hayao Miyazaki is arguably the most important Japanese filmmaker currently working in feature-length animation and Nina Paley is, well, her blog tags her as “America’s best-loved unknown cartoonist.” The purpose of this post is to examine and assess four films, one by each of these, respectively: The Princess and the Frog, Up, Ponyo on the Cliff, and Sita Sings the Blues. The first two are wonderful technical achievements, but only so-so as stories. The last two are just wonderful.

Disney’s The Princess and the Frog was better than its trailers led me to believe it would be, but it is hardly the sign of a renaissance of traditional hand-drawn animation it was touted to be. A combination of action and musical numbers, it continues the approach to feature-length animation Disney has been exploring since Snow White and the Seven Dwarfs. Some of the musical numbers are wonderful – I’m particularly fond of the psychedelic voodoo imagery of “Friends on the Other Side” and the clouds of glass bottles at the end of Mama Odie’s “Dig a Little Deeper” – and Louis, the trumpet-playing alligator, was a clever delight. But the story, not so much.

The story conflates a standard-issue frogboy-meets-girl fairy tale with a plea for the virtue of hard thankless work, as opposed to, say, magical hokum, unless it’s good magical hokum to counteract bad magical hokum. Bad magical hokum is purveyed by long tall city-living males dressed in black while good magical hokum is purveyed by short swamp-dwelling earth mamas dressed in white. To add a bit of tension to this already tenuous plot we throw in some frog-loving back woods bumpkins. They’re foiled by a good-ole boy Cajun firefly, who turns out to be the real hero of the film. All of which is to say, there’s a lotta’ stuff been thrown into the story, but it’s not clear why it’s all needed.

What about race? This was to be the Walt Disney film that deals with race. And, yes, Tiana and Naveen are black, which is to say, they have brown skin. And that’s all that being black means in this film, set in early 20th Century New Orleans. One might well say: So what? And, if Disney hadn’t made such a big deal of race in touting the film, that’s what I’d say. I know race is more than complexion, but one needn’t do EVERYTHING in a film. Disney chose to make race an issue, however, so their failure to come to terms with race as a social fact, rather than as a cosmetic fact, suggests a fundamental lack of seriousness about the story they’re telling. In the end it’s just one more story about a princess and her party dress.

Pixar’s Up is often gorgeous and occasionally poignant, but it is confused about what film it wants to be. Yes, the ten-minute recap of Carl and Ellie’s life together is touching; but that’s well within the range of any competent film-maker. And, yes, I too shed a tear when Carl gave Russell his merit badge at the end of the film. But those moments certainly don’t lift the film out of the ordinary, and that’s what Up is, an ordinary film, albeit one made through extraordinary technical means. It rode its technical virtuosity to the Oscars, but that hardly compensates for a diffuse and confused story.

That opening recap is done in highly stylized, but realistic terms. And so it is with Carl's conflict with the construction workers and their all-but-faceless boss. It's all highly stylized, but within the imaginable capabilities of an old man living in our world. It isn't until the house actually lifts off that we realize we're not in Dorothy's Kansas anymore, no we're not. Still, this isn't much more than a plausible exaggeration. But at some point the exaggeration ceases being plausible. At some point the carefully constructed world gives way to a disconnected set of gags. And the gags aren't in support of any particular premise. They're just gags – the talking dogs, Alpha with the high squeaky voice, and so forth – and those gags and stunts obliterate the serious underpinnings of this story: Carl’s loss of Ellie, Russell’s estrangement from his father.

Home Sweet Home

The sleeping area:

[photo: behind the chocolate factory]

Transportation:

[photo: bicycle belonging to a homeless man]

Wednesday, April 3, 2013

What Bryant Knows about Mary, Sorta: More on Color and Consciousness

The Bryant I have in mind is, of course, Levi Bryant, who has come in for severe criticism on this blog. And Mary, of course, is the fictional Mary invented by the philosopher Frank Jackson and much beloved by philosophers of mind (discussed, for example, HERE). As you may recall, she's an expert in the neuroscience and physics of color but is, alas, herself colorblind. Being an expert she knows everything there is to know about color. Then she gets an operation and is able to see the redness of the rose and the greenness of grass for the very first time. Now, so the story goes, she knows something she didn't know before.

Since she knew all the physical facts back in her color-blind days, it follows that whatever it is that her new-found consciousness has supplied cannot be physical. Hence consciousness is not physical, not even in the rarified way of electrochemical processes in nervous systems.

Something like that.

Jackson’s thought experiment has generated a tremendous amount of controversy (and a huge literature), and it seems to me, at least, that it is deeply problematic and almost sophistical.  Whenever I reflect on the thought experiment, I feel as if a trick has been played on me, that there is some sort of fundamental confusion at work here.
I understand the feeling. I felt the same way, and still do. And, as far as I can tell, something like that seems to be the general feeling among philosophers, as most of them are interested in figuring out just what it is that went wrong (see this review article by Martine Nida-Rümelin).

Monday, April 1, 2013

Is Hyperactivity for Real?

The opening paragraphs of an article in The New York Times:
Nearly one in five high school age boys in the United States and 11 percent of school-age children over all have received a medical diagnosis of attention deficit hyperactivity disorder, according to new data from the federal Centers for Disease Control and Prevention.

These rates reflect a marked rise over the last decade and could fuel growing concern among many doctors that the A.D.H.D. diagnosis and its medication are overused in American children.

The figures showed that an estimated 6.4 million children ages 4 through 17 had received an A.D.H.D. diagnosis at some point in their lives, a 16 percent increase since 2007 and a 53 percent rise in the past decade. About two-thirds of those with a current diagnosis receive prescriptions for stimulants like Ritalin or Adderall, which can drastically improve the lives of those with A.D.H.D. but can also lead to addiction, anxiety and occasionally psychosis.
An abstract for some notes I wrote up a few years ago, Music and the Prevention and Amelioration of ADHD: A Theoretical Perspective:
Russell A. Barkley has argued that ADHD is fundamentally a disorientation in time. These notes explore the possibility that music, which requires and supports finely tuned temporal cognition, might play a role in ameliorating ADHD. The discussion ranges across cultural issues (grasshopper vs. ant, lower rate of diagnosis of ADHD among African-Americans), play, distribution of dopamine and norepinephrine in the brain, neural development, and genes in culture (studies of the distribution of alleles for dopamine receptors). Unfortunately, the literature on ADHD does not allow us to draw strong conclusions. We do not understand what causes ADHD nor do we understand how best to treat the condition. However, in view of the fact that ADHD does involve problems with temporal cognition, and that music does train one’s sense of timing, the use of music therapy as a way of ameliorating ADHD should be investigated. I also advocate conducting epidemiological studies about the relationship between dancing and music in childhood, especially in early childhood, and the incidence of ADHD.

Slavery and Capitalism in 19th Century America

Walter Johnson, in The New York Times:
Every year, British merchant banks advanced millions of pounds to American planters in anticipation of the sale of the cotton crop. Planters then traded credit in pounds for the goods they needed to get through the year, many of them produced in the North. “From the rattle with which the nurse tickles the ear of the child born in the South, to the shroud that covers the cold form of the dead, everything comes to us from the North,” said one Southerner.

As slaveholders supplied themselves (and, much more meanly, their slaves) with Northern goods, the credit originally advanced against cotton made its way north, into the hands of New York and New England merchants who used it to purchase British goods. Thus were Indian land, African-American labor, Atlantic finance and British industry synthesized into racial domination, profit and economic development on a national and a global scale.
And so:
It is not simply that the labor of enslaved people underwrote 19th-century capitalism. Enslaved people were the capital: four million people worth at least $3 billion in 1860, which was more than all the capital invested in railroads and factories in the United States combined. Seen in this light, the conventional distinction between slavery and capitalism fades into meaninglessness.