Sunday, September 24, 2017

The hermeneutics of Ta-Nehisi Coates

The thing is, he seldom makes arguments in the sense I understand that term. There isn’t extended reasoning through assumptions and implications or careful sifting through evidence to see which hypotheses are supported or disconfirmed. No, he offers an articulate, finely honed expression of his worldview, and that’s it. He is obviously a man of vast talents, but he uses them to do what much less refined thinkers do when they simply bloviate.

But that raises the question, why is he so influential? Why does he reach so many people? What’s his secret?

No doubt there are multiple aspects to this, but here’s one that just dawned on me. Those who respond to Coates are not looking for argumentation—they’re looking for interpretation.

The demand for someone like Coates reflects the broad influence that what might be called interpretivism has had on American political culture. This current emerged a few decades ago from literature, cultural studies and related academic home ports. Its method was an application of the interpretive act of criticism. A critic “reads”, which is to say interprets, a work of art or some other cultural product, and readers gravitate toward critics whose interpretations provide a sense of heightened awareness or insight into the object of criticism. There’s nothing wrong with this. I read criticism all the time to deepen my engagement with music, art, film and fiction.

But criticism jumped channel and entered the political realm. Now events like elections, wars, ecological crises and economic disruptions are interpreted according to the same standards developed for portraits and poetry. And maybe there is good in that too, except that theories about why social, economic or political events occur are subject to analytical support or disconfirmation in a way that works of art are not.

Saturday, September 23, 2017

Military ontology

Thank you, Ladies of NCIS [#NCIS]

Pete Turner, colors – the world

Richard Sandomir writes Turner’s obituary in The New York Times:
Altering reality was nothing new for Mr. Turner. Starting in the pre-Photoshop era, he routinely manipulated colors to bring saturated hues to his work in magazines and advertisements and on album covers.

“The color palette I work with is really intense,” he said in a video produced by the George Eastman House, the photographic museum in Rochester that exhibited his work in 2006 and 2007. “I like to push it to the limit.”

Jerry Uelsmann, a photographer and college classmate who specializes in black-and-white work, said in an email that when he saw Mr. Turner’s intense color images, he once told him, “I felt like I wanted to lick them.”

On the lookout


I'd be wary of Ken Burns. His jazz documentary stretched the material to fit his myth of America.

Ken Burns has a new Vietnam documentary out. I'd be wary of it. I've only seen one Burns extravaganza, the one on jazz, and that one made me a Ken Burns skeptic. Why? Because I know the history better than Burns. I bought the 10-DVD boxed set to get the archival footage. Otherwise... Here's a note I sent to my old teacher, Bruce Jackson, while I was watching the series on TV.

* * * * *

Dear Bruce,

If you've been watching Ken Burns' Jazz, I'd be interested in your reactions.

Of course, this has come to us with lots of hype and counter hype as well, much of the latter centered on the aesthetically conservative scope and selection of material. I'm quite sympathetic to this line of criticism, but don't want to pursue it here, at least not directly. The following caveats would remain even if he hadn't bought the Marsalis/Crouch/Murray line pretty much wholesale.

I find it to be a somewhat mixed bag. The documentary material is often quite fine -- I'm particularly struck by the scenes of dancers, though I've seen that sort of stuff before. However, he often mixes images, especially stills, from distinctly different eras without really telling you that. If you know the faces well, and even the instruments (e.g. Armstrong played distinctly different horns at different periods), this is obvious. I'm not sure how obvious this would be to someone who's new to the material. I suppose this is a nit-picky issue, but in an overall scheme that's chronological, this tends to lift the major figures out of history and into the eternal ether, which is surely one of the things going on here.

Then there's the talking heads, the authoritative commentators. I'm not quite sure what role these folks and their words play/will play in the overall impact of this work. The images and sounds are quite powerful.

In any event, what I find interesting is that the words we hear, whether in voiceover or from a head we see, are a mixture of things, but, with one exception, not signaled as such. The exception is where, in voiceover, we get a fragment from a contemporary source (newspaper, magazine, biography, etc.). That is always identified as such, but only after it has been read. The other commentary includes identification of materials, names and dates & other straight history, how-jazz-works (mostly from Marsalis so far), interpretation, exaggeration, and unverified lore. What bugs me is that all this is presented on pretty much the same footing, on pretty much the same authority.

And it takes a pretty sophisticated person to recognize what's going on and to even begin to sort this out; more intellectually sophisticated and knowledgeable about jazz, I'd guess, than Ken Burns. I've never done any serious oral history or ethnography, but I've talked to many musicians and I've read lots of interviews and I know that you simply cannot take their words at face value. While it's possible that they may be deliberately playing you, that's really the least of your problems in dealing with what they say. They can only speak in the categories they know, and if those categories are poor -- and they certainly have been and still are for this music -- then the commentary will be poor as well. Beyond that, memory simply isn't reliable. Etc.

So that's an issue. And I'm not sure how you deal with it. I mean if you're going to interview 90-year-old Milt Hinton about what happened when he was twenty you pretty much have to present what he says. You can't give him a lawyerly grilling nor can you stick a little reliability meter there in the lower left hand corner. Now, if it is your explicit intention to do an oral history, then it's all talking heads and you frame it as an oral history and everyone knows it for what it is. But that's not what Burns is doing. He's presenting... well, just what is he presenting? That's the question. I think it's a nationalist myth, a rather attractive one, but still a myth.

Friday, September 22, 2017

Bus in motion


Deep Learning through the Information Bottleneck

Tishby began contemplating the information bottleneck around the time that other researchers were first mulling over deep neural networks, though neither concept had been named yet. It was the 1980s, and Tishby was thinking about how good humans are at speech recognition — a major challenge for AI at the time. Tishby realized that the crux of the issue was the question of relevance: What are the most relevant features of a spoken word, and how do we tease these out from the variables that accompany them, such as accents, mumbling and intonation? In general, when we face the sea of data that is reality, which signals do we keep?

“This notion of relevant information was mentioned many times in history but never formulated correctly,” Tishby said in an interview last month. “For many years people thought information theory wasn’t the right way to think about relevance, starting with misconceptions that go all the way to Shannon himself.” [...]

Imagine X is a complex data set, like the pixels of a dog photo, and Y is a simpler variable represented by those data, like the word “dog.” You can capture all the “relevant” information in X about Y by compressing X as much as you can without losing the ability to predict Y. In their 1999 paper, Tishby and co-authors Fernando Pereira, now at Google, and William Bialek, now at Princeton University, formulated this as a mathematical optimization problem. It was a fundamental idea with no killer application.
But, you know, the emic/etic distinction is about relevance. What are phonemes? They are "the most relevant features of a spoken word".
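To make the optimization concrete, here is a toy numeric sketch of my own (not from the paper): the information bottleneck trades off compression cost I(X;T) against predictive value I(T;Y), minimizing I(X;T) − β·I(T;Y). All the distributions and the encoder below are invented for illustration.

```python
import numpy as np

def mutual_info(p_joint):
    """Mutual information in bits from a joint distribution table p(a, b)."""
    p = np.asarray(p_joint, dtype=float)
    pa = p.sum(axis=1, keepdims=True)   # marginal of the row variable
    pb = p.sum(axis=0, keepdims=True)   # marginal of the column variable
    mask = p > 0
    return float((p[mask] * np.log2(p[mask] / (pa @ pb)[mask])).sum())

# Toy world: X has 4 values (say, 4 kinds of photos), Y is "dog"/"not dog".
p_x = np.array([0.25, 0.25, 0.25, 0.25])
p_y_given_x = np.array([[0.9, 0.1],    # x=0 is almost surely a dog
                        [0.8, 0.2],
                        [0.1, 0.9],
                        [0.2, 0.8]])

# A deterministic compressed representation T: lump x in {0,1} -> t=0,
# x in {2,3} -> t=1. This keeps the dog-relevant distinction.
p_t_given_x = np.array([[1, 0], [1, 0], [0, 1], [0, 1]], dtype=float)

p_xt = p_x[:, None] * p_t_given_x      # joint of X and T
p_ty = p_xt.T @ p_y_given_x            # joint of T and Y (marginalize over X)

beta = 2.0
ib_objective = mutual_info(p_xt) - beta * mutual_info(p_ty)
print(f"I(X;T) = {mutual_info(p_xt):.3f} bits")  # cost of the representation
print(f"I(T;Y) = {mutual_info(p_ty):.3f} bits")  # predictive information kept
print(f"IB objective (beta={beta}): {ib_objective:.3f}")
```

The encoder here throws away one of X's two bits while preserving most of what X says about Y; that's the bottleneck trade-off in miniature.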

To the most recent experiments:
In their experiments, Tishby and Shwartz-Ziv tracked how much information each layer of a deep neural network retained about the input data and how much information each one retained about the output label. The scientists found that, layer by layer, the networks converged to the information bottleneck theoretical bound: a theoretical limit derived in Tishby, Pereira and Bialek’s original paper that represents the absolute best the system can do at extracting relevant information. At the bound, the network has compressed the input as much as possible without sacrificing the ability to accurately predict its label.

Tishby and Shwartz-Ziv also made the intriguing discovery that deep learning proceeds in two phases: a short “fitting” phase, during which the network learns to label its training data, and a much longer “compression” phase, during which it becomes good at generalization, as measured by its performance at labeling new test data.
For instance, Lake said the fitting and compression phases that Tishby identified don’t seem to have analogues in the way children learn handwritten characters, which he studies. Children don’t need to see thousands of examples of a character and compress their mental representation over an extended period of time before they’re able to recognize other instances of that letter and write it themselves. In fact, they can learn from a single example. Lake and his colleagues’ models suggest the brain may deconstruct the new letter into a series of strokes — previously existing mental constructs — allowing the conception of the letter to be tacked onto an edifice of prior knowledge. “Rather than thinking of an image of a letter as a pattern of pixels and learning the concept as mapping those features” as in standard machine-learning algorithms, Lake explained, “instead I aim to build a simple causal model of the letter,” a shorter path to generalization.
On 'deconstructing' letterforms into strokes, see the work of Mark Changizi [1].

For a technical account of this work, see Ravid Schwartz-Ziv and Naftali Tishby, Opening the black box of Deep Neural Networks via Information:

[1] Mark A. Changizi, Qiong Zhang, Hao Ye, and Shinsuke Shimojo, The Structures of Letters and Symbols throughout Human History Are Selected to Match Those Found in Objects in Natural Scenes, The American Naturalist, vol. 167, no. 5, May 2006

Cheap criticism & cheap defense: Can machines think?

Searle’s Chinese room argument is one of the best-known thought experiments in analytic philosophy. The point of the argument as I remember it (you can google it) is that computers can’t think because they lack intentionality. I read it when Searle published in Behavioral and Brain Sciences back in the Jurassic Era and thought to myself: So what? It’s not that I thought that computers really could think, someday, maybe – because I didn’t – but that Searle’s argument didn’t so much as hint at any of the techniques used in AI or computational linguistics. It was simply irrelevant to what investigators were actually doing.

That’s what I mean by cheap criticism.

But then it seems to me that, for example, Dan Dennett’s staunch defense of the possibility of computers thinking is cheap in the same way. I’m sure he’s read some of the technical literature, but he doesn’t seem to have taken any of those ideas on board. He’s not internalized them. Whatever his faith in machine thought is based on, it’s not based on the techniques investigators on the matter have been using or on extrapolations from those techniques. That makes his faith as empty as Searle’s doubt.

So, if these guys aren’t arguing about specific techniques, what ARE they arguing about? Inanimate vs. animate matter? Because it sure can’t be spirit vs. matter, or can it?

I think like a Pirahã (What's REAL vs. real)

Something I'd recently posted to Facebook.

I just realized that in one interesting aspect, I think like a Pirahã. I’m thinking about their response to Daniel Everett’s attempts to teach the Christian Gospel: 
Pirahã: “This Jesus fellow, did you ever meet him?” 
Everett: “No.” 

Pirahã: “Do you know someone who did?” 
Everett: “Um, no.” 
Pirahã: “Then you don’t know that he’s real.”
As far as the Pirahã are concerned, if you haven't seen it yourself, or don't know someone who has, then it's not REAL (upper case).

This recognition of the REAL takes a somewhat different form for me, after all, I recognize the reality (lower case) of lots of things of which I have no direct experience and don't know anyone who has. Thus, to give but one example, I've not set foot on the moon and I don't know anyone who has. But I don't believe that the moon landings were faked. Yada yada.

But I’ve been thinking about the REAL for awhile. One example, the collapse of the Soviet Union in the last decade of the millennium. For someone born in 1990, say, that’s just something they read about in history books. They know it’s real, and they know it’s important. But it just doesn’t have the “bite” that it does for someone, like me, who grew up in the 1950s when the Cold War was raging. It’s not simply events that appeared in newspapers and on TV, it’s seeing Civil Defense markers on buildings designated as fall-out shelters, doing duck-and-cover drills in school, reading about home fall-out shelters in Popular Mechanics and picking a spot in the backyard where we should build one. I fully expected to live in the shadow of the Soviet Union until the day I died. So when it finally collapsed – after considerable slacking off in the Cold War – that was a very big deal. It’s REAL for me in a way that it can’t be for someone born in 1990 or after (actually, the cutoff is probably a bit earlier than that).

[Yeah, I know, I didn't see it with my own eyes. But then is something like the Soviet Union something you can see? Sure, you can see the soil and the buildings, etc. But they're not the Soviet Union, nor are the people. The USSR is an abstract entity. And I can reasonably say that I witnessed the collapse of that abstract entity in a way that younger people have not. That makes it REAL. Or should that be REALreal? It's complicated.]

This sense of REALness is intuitive. And I’d think it is in fact quite widespread in the literate world, but mostly overwhelmed by “book learnin’”. 

Another example. Just the other day I read a suite of articles in Critical Inquiry (an initial article, 5 comments in a later issue, and a reply to comments). It was about the concept of form in literary criticism, which is very important, but also very fuzzy and much contested. What struck me is that, as far as I can tell, only one of the people involved is old enough to have been thinking about literary criticism at the time when structuralism (a variety of thinkers including Lévi-Strauss and, of course, Roman Jakobson) and linguistics (Chomsky+) were something people read about and took seriously, as in: “Maybe we ought to use some of this stuff.” That phase ran from roughly the mid-1960s to the mid-1970s. Any scholar entering their 20s in, say, 1980 and after would think of structuralism as something in the historical past, as something the profession had considered and rejected. Over and done with.

For those thinkers structuralism and linguistics aren’t REAL in the sense I’m talking about. They know that work was done and that some of it was important; they’re educated in the history of criticism. They know that linguistics continues on, and they’ve probably heard about the recursion debates. But they’ve never even attempted to internalize any of that as a mode of thinking they could employ. It’s just not REAL to them.

Why is this important in the context of that Critical Inquiry debate? Because linguists have a very different sense of form than literary critics do. The spelling’s the same, but the idea is not. Yet literary critics are dealing with language.

A brief note on interpretation as translation

I’ve come to think of interpretation as a kind of translation, and translation doesn’t use description. When you translate from, say, Japanese into English, you don’t first describe the Japanese utterance/text and then make the translation based on that description. You make the translation directly. So it is with interpretation. I’ve come to think of the devices used to make the source text present in the critical text (quotation, summary, paraphrase) as more akin to observations than descriptions. Of course, we also have a descriptive vocabulary, the terms of versification, rhetoric, narratology, poetics, and others, but that’s all secondary.

Hence the longstanding practice of eliding the distinction between “reading” in the ordinary sense of the word and “reading” as a term of art for interpretive commentary. We like to pretend that this often elaborate secondary construction is, after all, but reading. Even after all the debate over not having immediate access to the text we still like to pretend that we’re just reading the text. Do we know what we’re doing? Blindness and insight, or the blind leading the blind?

Friday Fotos: KidZ!






Thursday, September 21, 2017

The effects of choir & solo singing

Front. Hum. Neurosci., 14 September 2017

Choir versus Solo Singing: Effects on Mood, and Salivary Oxytocin and Cortisol Concentrations

T. Moritz Schladt, Gregory C. Nordmann, Roman Emilius, Brigitte M. Kudielka, Trynke R. de Jong and Inga D. Neumann

Abstract: The quantification of salivary oxytocin (OXT) concentrations emerges as a helpful tool to assess peripheral OXT secretion at baseline and after various challenges in healthy and clinical populations. Both positive social interactions and stress are known to induce OXT secretion, but the relative influence of either of these triggers is not well delineated. Choir singing is an activity known to improve mood and to induce feelings of social closeness, and may therefore be used to investigate the effects of positive social experiences on OXT system activity. We quantified mood and salivary OXT and cortisol (CORT) concentrations before, during, and after both choir and solo singing performed in a randomized order in the same participants (repeated measures). Happiness was increased, and worry and sadness as well as salivary CORT concentrations were reduced, after both choir and solo singing. Surprisingly, salivary OXT concentrations were significantly reduced after choir singing, but did not change in response to solo singing. Salivary OXT concentrations showed high intra-individual stability, whereas salivary CORT concentrations fluctuated between days within participants. The present data indicate that the social experience of choir singing does not induce peripheral OXT secretion, as indicated by unchanged salivary OXT levels. Rather, the reduction of stress/arousal experienced during choir singing may lead to an inhibition of peripheral OXT secretion. These data are important for the interpretation of future reports on salivary OXT concentrations, and emphasize the need to strictly control for stress/arousal when designing similar experiments.

What interests you, or: How’d things get this way in lit crit?

This isn’t going to be another one of those long-form posts where I delve into the history of academic literary criticism in the United States since World War II. I’ve done enough of that, at least for awhile [1]. I’m going to assume that account.

Rather, I want to start with the individual scholar, even before they become a scholar. Why would someone want to become a professional literary scholar? Because they like to read, no? So, you take literature courses and you do the work you’re taught how to do. If you really don’t like that work, then you won’t pursue a professional degree [2]. You’ll continue to read in your spare time and you’ll study something else.

If those courses teach you how to search for hidden meanings in texts, whether in the manner of so-called close reading or, more recently, the various forms of ideological critique, that’s what you’ll do. If those courses don’t teach you how to analyze and describe form, then you won’t do that. The fact is, beyond versification (which is, or at least once was, taught in secondary school), form is hard to see.

Some years ago Mark Liberman had a post at Language Log which speaks to that [3]. He observes that it’s difficult for students to analyze sentences into component strings:
But when I first started teaching undergraduate linguistics, I learned that just explaining the idea in a lecture is not nearly enough. Without practice and feedback, a third to a half of the class will miss a generously-graded exam question requiring them to use parentheses, brackets, or trees to indicate the structure of a simple phrase like "State Department Public Relations Director".
In that example Liberman is looking for something like this: [(State Department) ((Public Relations) Director)].
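The bracketing Liberman asks for can be made explicit in a few lines of code — a sketch of my own, just to show that the structure is a nested tree, not a flat string of words:

```python
# Bracketing "State Department Public Relations Director" as a nested
# structure: the Director of (Public Relations), at the (State Department).
phrase = (("State", "Department"), (("Public", "Relations"), "Director"))

def flatten(node):
    """Recover the surface string from the bracketed structure."""
    if isinstance(node, str):
        return node
    return " ".join(flatten(child) for child in node)

def show(node):
    """Render the structure with the parentheses of Liberman's exercise."""
    if isinstance(node, str):
        return node
    return "(" + " ".join(show(child) for child in node) + ")"

print(flatten(phrase))  # State Department Public Relations Director
print(show(phrase))     # ((State Department) ((Public Relations) Director))
```

The point of the exercise is that the same five words admit other bracketings — e.g. grouping "Department Public" — which the surface string alone doesn't rule out; seeing the right one takes practice.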

Well, such analysis, which is central to the analysis of literary form (as I conceive and practice it), is difficult above the sentence level as well. If you aren’t taught how to do it, chances are you won’t try to figure it out yourself. Moreover, you may not even suspect that there’s something there to be described.

What we’ve got so far, then, is this: 1) Once you decide to study literary criticism professionally, you learn what you’re told. 2) It’s difficult to learn anything outside the prescribed path. There’s nothing surprising here, is there? Every discipline is like that.

Let’s go back to the history of the discipline, to a time when critics didn’t automatically learn to search out hidden meanings in texts, to interpret them. Without that pre-existing bias wouldn’t it have been at least possible that critics would have decided to focus on the description of form? And some did, in a limited way – I’m thinking of the Russian Formalists and their successors.

Still, formal analysis is difficult, and what’s it get you? Formal analysis, that’s what. The possibility of formal analysis is likely not what attracts anyone to literature, not now, not back then. You’re attracted to literary study because you like to read, and your reading is about love, war, beauty, pain, joy, suffering, life, the world, and the cosmos! THAT’s what you want to write about, not form.

And, sitting right there, off to the side, we’ve got a long history of Biblical hermeneutics stretching back to the time before Christianity differentiated from Judaism. Why not refit that for the study of meaning in literary texts? Now, I don’t think that’s quite what happened – the refitting of Biblical exegesis to secular ends – but that tradition was there exerting its general influence on the humanistic landscape. Between that and the ‘natural’ focus of one’s interest in literature, the search for literary meaning was a natural.

So that’s what the discipline did. And now it’s stuck and doesn’t know what to do.

More later.

[1] See, for example, the following working papers: Transition! The 1970s in Literary Criticism; An Open Letter to Dan Everett about Literary Criticism, June 2017, 24 pp.

[2] I figure we’ve all got our preferred intellectual styles. Some of us like math, some don’t and so forth. Take a look at this post: Style Matters: Intellectual Style, March 18, 2017.

I make the following assertions: 
1.) In anyone’s intellectual ecology, style preferences are deeper and have more inertia than explicit epistemological beliefs.
2.) Some of the pigheadedness that often crops up in discussions about humanities vs. science is grounded in stylistic preference that gets rationalized as epistemological belief.
[3] Mark Liberman, Two brews, Language Log, February 6, 2010,
See also my blog post quoting Liberman’s post,
Form is Hard to See, Even in Sentences*, November 29, 2015,

Ring of posies


The origins of (the concept of) world literature

On the afternoon of 31 January 1827, a new vision of literature was born. On that day, Johann Peter Eckermann, faithful secretary to Johann Wolfgang von Goethe, went over to his master’s house, as he had done hundreds of times in the past three and a half years. Goethe reported that he had been reading Chinese Courtship (1824), a Chinese novel. ‘Really? That must have been rather strange!’ Eckermann exclaimed. ‘No, much less so than one thinks,’ Goethe replied.

A surprised Eckermann ventured that this Chinese novel must be exceptional. Wrong again. The master’s voice was stern: ‘Nothing could be further from the truth. The Chinese have thousands of them, and had them when our ancestors were still living in the trees.’ Then Goethe reached for the term that stunned his secretary: ‘The era of world literature is at hand, and everyone must contribute to accelerating it.’ World literature – the idea of world literature – was born out of this conversation in Weimar, a provincial German town of 7,000 people.
Later: "World literature originated as a solution to the dilemma Goethe faced as a provincial intellectual caught between metropolitan domination and nativist nationalism."

And then we have a passage from The Communist Manifesto (1848):
In a stunning paragraph from that text, the two authors celebrated the bourgeoisie for their role in sweeping away century-old feudal structures:
By exploiting the world market, the bourgeoisie has made production and consumption a cosmopolitan affair. To the annoyance of its enemies, it has drawn from under the feet of industry the national ground on which it stood. … These industries no longer use local materials but raw materials drawn from the remotest zones, and its products are consumed not only at home, but in every quarter of the globe. … In place of the old local and national seclusion and self-sufficiency, we have commerce in every direction, universal interdependence of nations. And as in material so also in intellectual production. The intellectual creations of individual nations become common property. National one-sidedness and narrow-mindedness become increasingly impossible, and from the numerous national and local literatures there arises a world literature.
World literature. To many contemporaries, it would have sounded like a strange term to use in the context of mines, steam engines and railways. Goethe would not have been surprised. Despite his aristocratic leanings, he knew that a new form of world market had made world literature possible.
Rolling along:
Ever since Goethe, Marx and Engels, world literature has rejected nationalism and colonialism in favour of a more just global community. In the second half of the 19th century, the Irish-born critic Hutcheson Macaulay Posnett championed world literature. Posnett developed his ideas of world literature in New Zealand. In Europe, the Hungarian Hugó Meltzl founded a journal dedicated to what he described as the ‘ideal’ of world literature.

In India, Rabindranath Tagore championed the same idealist model of world literature. Honouring the Ramayana and the Mahabharata, the two great Indian epics, Tagore nevertheless exhorted readers to think of literature as a single living organism, an interconnected whole without a centre. Having lived under European colonialism, Tagore saw world literature as a rebuke to colonialism.
After World War II:
In the US, world literature took up residence in the booming post-war colleges and universities. There, the expansion of higher education in the wake of the GI Bill helped world literature to find a home in general education courses. In response to this growing market, anthologies of world literature emerged. Some of Goethe’s favourites, such as the Sanskrit play Shakuntala, the Persian poet Hafez and Chinese novels, took pride of place. From the 1950s to the ’90s, world literature courses expanded significantly, as did the canon of works routinely taught in them. World literature anthologies, which began as single volumes, now typically reach some 6,000 pages. The six-volume Norton Anthology of World Literature (3rd ed, 2012), of which I am the general editor, is one of several examples.

In response to the growth of world literature over the past 20 years, an emerging field of world literature research, including sourcebooks and companions, has created a scholarly canon, beginning with Goethe, Marx and Engels and running through to Tagore, Auerbach and beyond. The World Literature Institute at Harvard University, headed by the scholar David Damrosch, spends two out of three summers in other locations.
And now:
Today, with nativism and nationalism surging in the US and elsewhere, world literature is again an urgent and political endeavour. Above all, it represents a rejection of national nativism and colonialism in favour of a more humane and cosmopolitan order, as Goethe and Tagore had envisioned. World literature welcomes globalisation, but without homogenisation, celebrating, along with Ravitch, the small, diasporic literatures such as Yiddish as invaluable cultural resources that persevere in the face of persecution and forced migration.

There is no denying that world literature is a market, one in which local and national literatures can meet and transform each other. World literature depends, above all, on circulation. This means that it is incompatible with efforts to freeze or codify literature into a set canon of metropolitan centres, or of nation states, or of untranslatable originals. True, the market in world literature is uneven and not always fair. But the solution to this problem is not less circulation, less translation, less world literature. The solution is a more vibrant translation culture, more translations into more languages, and more world literature education.

The free circulation of literature is the best weapon against nationalism and colonialism, whether old or new, because literature, even in translation, gives us unique access to different cultures and the minds of others.