Wednesday, June 28, 2017

The opacity of AI systems

In the old days of classical symbolic AI, program logic was "hand-coded" and based on expert knowledge of the application domain. It was thus possible, at least in principle, to figure out why the program did what it did in any particular case. That's not true of contemporary AI systems that use so-called "deep learning," which more or less program themselves. Writing about a self-driving car, Will Knight observes:
Getting a car to drive this way was an impressive feat. But it’s also a bit unsettling, since it isn’t completely clear how the car makes its decisions. Information from the vehicle’s sensors goes straight into a huge network of artificial neurons that process the data and then deliver the commands required to operate the steering wheel, the brakes, and other systems. The result seems to match the responses you’d expect from a human driver. But what if one day it did something unexpected—crashed into a tree, or sat at a green light? As things stand now, it might be difficult to find out why. The system is so complicated that even the engineers who designed it may struggle to isolate the reason for any single action. And you can’t ask it: there is no obvious way to design such a system so that it could always explain why it did what it did.
Of course, this has legal implications:
There’s already an argument that being able to interrogate an AI system about how it reached its conclusions is a fundamental legal right. Starting in the summer of 2018, the European Union may require that companies be able to give users an explanation for decisions that automated systems reach. This might be impossible, even for systems that seem relatively simple on the surface, such as the apps and websites that use deep learning to serve ads or recommend songs. The computers that run those services have programmed themselves, and they have done it in ways we cannot understand. Even the engineers who build these apps cannot fully explain their behavior.

Is this what mind looks like from the inside?

Forest crop

SJB_feet_invert_equalize_wv_redgrad

Humans as pattern-seekers

Last week I posted a video in which Jeremy Lent sketches out a transformation in which humankind manages to escape climate catastrophe. He's recently published The Patterning Instinct: A Cultural History of Humanity's Search for Meaning. While I'm leery of the term "instinct" in this context, I certainly believe that we are pattern-seeking creatures, and that we seek meaning (unity of being?).

What I'm wondering is whether we can derive this pattern seeking from the fact that each individual neuron is, of course, a living agent, seeking to increase its inputs (nutrients) through its actions (its outputs) – see my old post on The Busy Bee Brain. Of course that's true of every kind of brain, not just human brains. What is it that sets the human brain free to seek patterns of every kind everywhere? Conversely, what is it that keeps the brains of butterflies, octopi, iguanas, rabbits, parrots, and so forth from such untethered pattern seeking?

I think it's the (special) nature of human society – our ability to walk about in one another's minds through language and the arts and sciences – that does it. Alas, I don't know how to turn that into an explicit argument. How is it that seeking and finding patterns energizes individual neurons, when the seeking and finding of patterns requires the coordinated efforts of millions and billions of neurons distributed across many brains? How can we formulate that in a coherent way?

Multifractals (fractals within fractals) in literary texts

From Science Daily:
James Joyce, Julio Cortazar, Marcel Proust, Henryk Sienkiewicz and Umberto Eco. Regardless of the language they were working in, some of the world's greatest writers appear to be, in some respects, constructing fractals. Statistical analysis carried out at the Institute of Nuclear Physics of the Polish Academy of Sciences, however, revealed something even more intriguing. The composition of works from within a particular genre was characterized by the exceptional dynamics of a cascading (avalanche) narrative structure. This type of narrative turns out to be multifractal. That is, fractals of fractals are created.

[...]  Physicists from the Institute of Nuclear Physics of the Polish Academy of Sciences (IFJ PAN) in Cracow, Poland, performed a detailed statistical analysis of more than one hundred famous works of world literature, written in several languages and representing various literary genres. The books, tested for revealing correlations in variations of sentence length, proved to be governed by the dynamics of a cascade. This means that the construction of these books is in fact a fractal. [...]

Multifractals are more highly advanced mathematical structures: fractals of fractals. They arise from fractals 'interwoven' with each other in an appropriate manner and in appropriate proportions. Multifractals are not simply the sum of fractals and cannot be divided to return back to their original components, because the way they weave is fractal in nature. The result is that in order to see a structure similar to the original, different portions of a multifractal need to expand at different rates. A multifractal is therefore non-linear in nature.

"Analyses on multiple scales, carried out using fractals, allow us to neatly grasp information on correlations among data at various levels of complexity of tested systems. As a result, they point to the hierarchical organization of phenomena and structures found in nature. So we can expect natural language, which represents a major evolutionary leap of the natural world, to show such correlations as well. Their existence in literary works, however, had not yet been convincingly documented. Meanwhile, it turned out that when you look at these works from the proper perspective, these correlations appear to be not only common, but in some works they take on a particularly sophisticated mathematical complexity," says Prof. Stanislaw Drozdz (IFJ PAN, Cracow University of Technology).
Stream of consciousness turned out to be particularly complex:
However, more than a dozen works revealed a very clear multifractal structure, and almost all of these proved to be representative of one genre, that of stream of consciousness. The only exception was the Bible, specifically the Old Testament, which has so far never been associated with this literary genre.

"The absolute record in terms of multifractality turned out to be Finnegans Wake by James Joyce. The results of our analysis of this text are virtually indistinguishable from ideal, purely mathematical multifractals," says Prof. Drozdz. [...] "It is not entirely clear whether stream of consciousness writing actually reveals the deeper qualities of our consciousness, or rather the imagination of the writers."

The original research:

Stanisław Drożdż, Paweł Oświȩcimka, Andrzej Kulig, Jarosław Kwapień, Katarzyna Bazarnik, Iwona Grabska-Gradzińska, Jan Rybicki, Marek Stanuszek. Quantifying origin and character of long-range correlations in narrative texts. Information Sciences, 2016; 331: 32 DOI: 10.1016/j.ins.2015.10.023
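Out of curiosity, here's roughly what such an analysis involves. This is a minimal sketch of multifractal detrended fluctuation analysis (MFDFA) applied to a sentence-length series – a crude stand-in for the far more careful procedure in the paper. The function names and the naive sentence splitter are mine, not the authors'.

```python
import re
import numpy as np

def sentence_lengths(text):
    """Crude sentence-length series: words per sentence.
    (Real analyses use careful, language-aware tokenization.)"""
    sentences = re.split(r'[.!?]+', text)
    return np.array([len(s.split()) for s in sentences if s.split()], float)

def mfdfa(x, scales, qs, order=1):
    """Minimal multifractal DFA. Returns h(q), the generalized Hurst
    exponents. A wide spread of h(q) across q suggests multifractality;
    a flat h(q) suggests a simple monofractal."""
    x = np.asarray(x, float)
    y = np.cumsum(x - x.mean())            # the "profile" of the series
    hq = []
    for q in qs:
        logF = []
        for s in scales:
            n = len(y) // s
            segs = y[:n * s].reshape(n, s) # non-overlapping windows
            t = np.arange(s)
            var = []
            for seg in segs:               # detrend each window
                coef = np.polyfit(t, seg, order)
                var.append(np.mean((seg - np.polyval(coef, t)) ** 2))
            var = np.array(var)
            if q == 0:                     # q = 0 needs a log average
                F = np.exp(0.5 * np.mean(np.log(var)))
            else:
                F = np.mean(var ** (q / 2)) ** (1 / q)
            logF.append(np.log(F))
        # h(q) is the slope of log F_q(s) against log s
        hq.append(np.polyfit(np.log(scales), logF, 1)[0])
    return np.array(hq)
```

For uncorrelated noise h(q) sits flat near 0.5; the claim about Finnegans Wake amounts to saying that its sentence-length series yields a broad, smoothly varying h(q), close to that of an ideal mathematical multifractal.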

Moby Dick meets the motherboard

IMG_2898 plasma thruster

Systematic Annotation of Literary Texts, A Shared Task

Posted to the Humanist Discussion Group:
Dear all,

We would like to draw your attention to a community-oriented initiative that will introduce a new format of collaboration into the field of Humanities: The 1st shared task on the analysis of narrative levels through annotation. It is an extension of the established shared task format from the field of Computational Linguistics to Literary Studies and will commence this fall. The goal of the first stage of the (two-staged) shared task is the *collaborative creation of annotation guidelines*, which in turn will serve as a basis for the second round, an automatisation-oriented shared task. The 1st call for participation is to be sent in August 2017. The audience for the first round of the shared task are researchers interested in the (manual) analysis of narrative.

We are sending this pre-call in order to a) make you aware of this activity and b) give you the opportunity to coordinate a possible participation with your teaching or research activities in winter/fall.

Please check out our web page and feel free to point other colleagues to it. If you have questions or comments, please do not hesitate to contact us.

Best regards,
Evelyn Gius, Nils Reiter, Jannik Strötgen and Marcus Willand

Overview

FAQ

Leaflet
From the overview:
In this talk, we would like to outline a proposal for a shared task (ST) in and for the digital humanities. In general, shared tasks are highly productive frameworks for bringing together different researchers/research groups and, if done in a sensible way, foster interdisciplinary collaboration. They have a tradition in natural language processing (NLP) where organizers define research tasks and settings. In order to cope for the specialties of DH research, we propose a ST that works in two phases, with two distinct target audiences and possible participants.

Generally, this setup allows both “sides” of the DH community to bring in what they can do best: Humanities scholars focus on conceptual issues, their description and definition. Computer science researchers focus on technical issues and work towards automatisation (cf. Kuhn & Reiter, 2015). The ideal situation, that both “sides” of DH contribute to the work in both areas, this is challenging to achieve in practice. The shared task scenario takes this into account and encourages Humanities scholars without access to programming “resources” to contribute to the conceptual phase (Phase 1), while software engineers without interest in literature per se can contribute to the automatisation phase (Phase 2). We believe that this setup can actually lower the entry bar for DH research. Decoupling, however, does not imply strict, uncrossable boundaries: There needs to be interaction between the two phases, which is also ensured by our mixed organisation team. In particular, this setup does allow mixed teams to participate in both phases (and it will be interesting to see how they fare).

In Phase 1 of a shared task, participants with a strong understanding of a specific literary phenomenon (literary studies scholars) work on the creation of annotation guidelines. This allows them to bring in their expertise without worrying about feasibility of automatisation endeavours or struggling with technical issues. We will compare the different annotation guidelines both qualitatively, by having an in-depth discussion during a workshop, and quantitatively, by measuring inter-annotator agreement, resulting in a community guided selection of annotation guidelines for a set of phenomena. The involvement of the research community in this process guarantees that heterogenous points of view are taken into account.

The guidelines will then enter Phase 2 to actually make annotations on a semi-large scale. These annotations then enter a “classical” shared task as it is established in the NLP community: Various teams competitively contribute systems, whose performances will be evaluated in a quantitative manner.

Given the complexity of many phenomena in literature, we expect the automatisation of such annotations to be an interesting challenge from an engineering perspective. On the other hand, it is an excellent opportunity to initiate the development of tools tailored to the detection of specific phenomena that are relevant for computational literary studies.
Note that while Phase 1, the creation of guidelines, must be done by experts, the application of those guidelines on a large scale could be done by "student assistants". Perhaps there could be a Phase 3 that opens out to the public of people with a strong interest in literature. Anyone who's spent a fair amount of time cruising the web looking for literary resources knows that some high-quality work is being done by people with no academic affiliation. We're now talking about so-called "citizen science", or is that citizen humanities?

Tuesday, June 27, 2017

Cultural Identity and Cultural Appropriation @3QD

My most recent articles at 3 Quarks Daily:


What about this provocative poster?



For one thing it assumes an audience in which people would recognize what's going on in the lower image. The upper image is obvious: white children playing at cowboys and Indians. But the lower image? It shows Native American children at a boarding school, where they are being separated from the traditional world of their parents and educated in the ways of the White Man.

When I was a kid I was one of the kids in the upper picture. More often than not I would have been an Indian (or is it "Native American", "American Indian", or more specifically, Lakota, Dene, etc.) and I made sure that us Indians won some of the battles. By the time I was in my early teens I more or less knew what that lower picture was about and didn't like it. But the social and cultural system connecting those two images, man, it's complicated. It includes Mark Twain's Injun Joe (and Nigger Jim) and, a bit later, Oliver La Farge's 1929 novel, Laughing Boy (1930 Pulitzer Prize), which I read in my early teens. More recently we have Tony Hillerman's wonderful Jim Chee mysteries.

But what about white children dressing up as Indians and playing? Should we do that anymore? Why or why not? For that matter, is it as common as it once was? Westerns, with cowboys and Indians, were all over television and in the movies when I was a child (the 1950s), but that's no longer the case. I'd guess that such play is no longer so common, though I don't really know.

Monday, June 26, 2017

On the computational value of diagrams

Something I'm thinking about and may comment on a bit later:
In a landmark 1987 essay,“Why a Diagram Is (Sometimes) Worth Ten Thousand Words,” Herbert Simon and Jill Larkin argue that a diagram is fundamentally computational, and that the graphical distribution of elements in spatial relation to each other supported “perceptual inferences” that could not be properly structured in linear expressions, whether these were linguistic or mathematical. They state at the outset that “a data structure in which information is indexed by two-dimensional location is what we call a diagrammatic representation.” They argue that the spatial features of diagrams are directly related to a concept of location, and that location performs certain functions. Locations exercise constraints and express values through relations, whether a machine or human being is processing the instructions. Larkin and Simon were examining computational load and efficiency, so they looked at data representations from the point of view of a three part process: search, recognition, and inference. Their point was that visual organization plays a major role in diagrammatic structures in ways that are unique and specific to these graphical expressions. In particular, they bring certain efficiency into their epistemological operations because the information needed to process information is located “at or near a locality” so that it can be “assessed and processed simultaneously.”
Johanna Drucker, Graphesis: Visual Forms of Knowledge Production, Harvard University Press, 2014, pp. 106-107.

Here's a link to an ungated version of Larkin & Simon, “Why a Diagram Is (Sometimes) Worth Ten Thousand Words”.
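For concreteness, here's a toy illustration of Larkin and Simon's point – a minimal sketch, with made-up facts, contrasting a sentential representation (a flat list of propositions) with a diagrammatic one (the same facts indexed by two-dimensional location), so that "what is adjacent?" becomes a direct lookup rather than a search through unrelated propositions. The particular objects and coordinates below are mine, not theirs.

```python
# Sentential representation: a flat list of propositions. To find
# what sits near the rope you must scan every proposition.
sentential = [
    ("pulley", "A",  "at", (2, 5)),
    ("weight", "W1", "at", (2, 3)),
    ("rope",   "R1", "at", (2, 4)),
]

# Diagrammatic representation: the same facts indexed by 2-D
# location, as in Larkin & Simon's "information indexed by
# two-dimensional location".
diagram = {loc: (kind, name) for kind, name, _, loc in sentential}

def neighbors(loc, diagram):
    """A stand-in for 'perceptual inference': read off what sits at
    the four adjacent locations, with no search over other facts."""
    x, y = loc
    return [diagram[p]
            for p in [(x, y - 1), (x, y + 1), (x - 1, y), (x + 1, y)]
            if p in diagram]
```

The point of the contrast: in the sentential store, locality information is implicit and must be recomputed by search; in the location-indexed store it is "at or near a locality" and can be read off directly.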

Up, up, and away

20170611-_IGP9168

Mathematics, Computing, and the Literary Mind

As some of you know, Willard McCarty has been hosting an informal online seminar on the digital humanities since 1987. One topic that comes and goes is the question of whether or not computing has more to offer the humanist than a set of practical tools. Could computing offer us a way of thinking about how the objects and processes of humanistic inquiry actually function? This topic has reappeared once again and I've decided to offer my two cents. Here's the note I posted to the seminar.

Moire Trio

Dear Willard et al.,

This business – mathematics & computing and what they offer humanists other than tools – is something I've been thinking about, off and on, since the late 1960s. Back then I wasn't interested in practical tools (for making a concordance, or stylometrics, or whatever), I was interested in thinking about how the mind worked. Of course lots of thinkers have pursued that line over the years, and while it's produced its share of nonsense, I don't think that should discredit the whole line of investigation, which is hardly unified and is still very much open-ended.

As far as I know the nature of computation is itself still very much under investigation. And I figure that literary studies (my particular corner of the humanities) may well have contributions to make. That is, understanding the computational properties of the literary mind is NOT (going to be) a matter of taking some existing ensemble of computational processes and fitting them to one text after another. Rather, we – someone – are going to have to create appropriate computational procedures.

Just how we – someone – get there from here is way beyond the scope of an email note; nor would I be able to chart a course given whatever scope I please. But I think we have to start with literary form, and we must learn how to describe it.

I've got some general notes on this in a working paper, Description 3: The Primacy of Visualization: https://www.academia.edu/16835585/Description_3_The_Primacy_of_Visualization

Here's a somewhat more polished account (though unpublished): Sharing Experience: Computation, Form, and Meaning in the Work of Literature: https://www.academia.edu/28764246/Sharing_Experience_Computation_Form_and_Meaning_in_the_Work_of_Literature

Some years ago I engaged in extensive correspondence with Mary Douglas, the anthropologist, and she got me interested in ring-composition. Texts with the form:

A, B, C ... X ... C', B', A'

Why ring-composition? 1) Because it "smells" like something that requires a computational account. 2) It's something definite one can look for in a text. 3) Identifying and describing ring-composition in texts doesn't require any esoteric knowledge. But it does require the sort of feel for the phenomenon that comes only from paying close attention to texts.
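For what it's worth, the bare ring schema is simple enough to check mechanically, though of course the hard part – deciding what the sections of a text are and how to label them – is exactly the close reading. A minimal sketch (the naming conventions are mine, not Douglas's): given a sequence of section labels, test whether they mirror around a unique central element, with B' pairing with B and so on.

```python
def is_ring(labels):
    """Check whether a sequence of section labels has the ring form
    A, B, C ... X ... C', B', A' – i.e. the labels mirror around a
    single central element. Labels are plain strings; "C'" counts
    as the matched partner of "C"."""
    n = len(labels)
    if n % 2 == 0:            # a ring needs a unique central X
        return False
    for i in range(n // 2):
        a, b = labels[i], labels[n - 1 - i]
        if b.rstrip("'") != a:   # B' must pair with B
            return False
    return True
```

Applied to a labeled text, this only confirms the gross symmetry; Douglas's diagnostic features (the loaded center, the latch between ending and opening, and so on) are exactly the kind of further constraints such a check would need.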

Douglas published a short book on the subject (her last), based on a series of lectures she delivered at Yale: Thinking in Circles: An Essay on Ring Composition (Yale 2007). There's a chapter in which she lists a set of identifying features of ring-composition.

I've produced a handful of working papers in which I describe ring-composition in a variety of texts. You can find those listed here: https://independent.academia.edu/BillBenzon/Ring-Composition

If you're interested in reading around in that material, you might start with, Ring Composition: Some Notes on a Particular Literary Morphology: https://www.academia.edu/8529105/Ring_Composition_Some_Notes_on_a_Particular_Literary_Morphology

One of the things I do in that working paper is gloss Douglas's diagnostic features as being aspects of a computational process.

Finally, it's worth remembering that ordinary arithmetic (which is fairly important in the theory of computation) is, after all, a linguistic process. The symbol set is highly restricted, as is the set of rules for its use (both sets are finite); but it is a creature of language.

Best,

Bill Benzon

Sunday, June 25, 2017

The Threat of AI

Kai-Fu Lee has an important op-ed in the NYTimes, "The Real Threat of Artificial Intelligence". He begins by pointing out that all too many discussions of the problems posed by AI turn on the so-called "singularity", when AI will surpass human intelligence. He points out, quite rightly IMO, that however interesting such questions are "they are not pressing". Our best AI tools have little or no understanding of anything, but they nonetheless can do useful tasks and are improving rapidly. These tools will take existing human jobs without replacing them with new jobs.
This transformation will result in enormous profits for the companies that develop A.I., as well as for the companies that adopt it. Imagine how much money a company like Uber would make if it used only robot drivers. Imagine the profits if Apple could manufacture its products without human labor. Imagine the gains to a loan company that could issue 30 million loans a year with virtually no human involvement. (As it happens, my venture capital firm has invested in just such a loan company.)

We are thus facing two developments that do not sit easily together: enormous wealth concentrated in relatively few hands and enormous numbers of people out of work. What is to be done?
The rest of the op-ed addresses these questions and is well-worth reading.

He calls for high tax rates, with the government subsidizing "most people's lives and work". The USA and China may well be able to do this. Most countries will not.
So if most countries will not be able to tax ultra-profitable A.I. companies to subsidize their workers, what options will they have? I foresee only one: Unless they wish to plunge their people into poverty, they will be forced to negotiate with whichever country supplies most of their A.I. software — China or the United States — to essentially become that country’s economic dependent, taking in welfare subsidies in exchange for letting the “parent” nation’s A.I. companies continue to profit from the dependent country’s users. Such economic arrangements would reshape today’s geopolitical alliances.
Yikes!

Addendum, 6.26.17: Mark Liberman has posted about this over at Language Log, and has some interesting links to remarks by Norbert Wiener.

Friday, June 23, 2017

Maybe quantum mechanics isn't so weird after all

I more or less believe that on general principle, but I don't quite follow this interesting article by Philip Ball, Quantum common sense. I haven't read it with care, at least not yet. But I wanted to park some quotes here for the record. I thought this was kind of neat:
It’s not enough, though, for a quantum state to survive decoherence in order for us to be able to measure it. Survival means that the state is measurable in principle – but we still have to get at that information to detect the state. So we need to ask how that information becomes available to an experimenter. (Really, who’d have thought there is so much to the mere act of observation?)

Here’s the exciting answer: it’s precisely because a quantum system interacts with its environment that it leaves an imprint on a classical measuring device at all. If we were able, with some amazing instrument, to record the trajectories of all the air molecules bounding off the speck of dust, we could figure out where the speck is without looking at it directly; we could just monitor the imprint it leaves on its environment. And this is, in effect, all we are doing whenever we determine the position, or any other property, of anything: we’re detecting not the object itself, but the effect it creates.

Just as coupling the object to its environment sets decoherence in train, so too it imprints information about the object onto the environment, creating a kind of replica. A measurement of that object then amounts to acquiring this information from the replica.
Later:
What Quantum Darwinism tells us is that, fundamentally, the issue is not really about whether probing physically disturbs what is probed (although that can happen). It is the gathering of information that alters the picture. Through decoherence, the Universe retains selected highlights of the quantum world, and those highlights have exactly the features that we have learnt to expect from the classical world. We come along and sweep up that information – and in the process we destroy it, one copy at a time.

Decoherence doesn’t completely neutralise the puzzle of quantum mechanics. Most importantly, although it shows how the probabilities inherent in the quantum wave function get pared down to classical-like particulars, it does not explain the issue of uniqueness: why, out of the possible outcomes of a measurement that survive decoherence, we see only one of them. Some researchers feel compelled to add this as an extra (you might say ‘super-common-sensical’) axiom: they define reality as quantum theory plus uniqueness.
Seems rather poetic, doesn't it?

Blossom in black and white

20170527-_IGP9146 eq BW

Trump's foreign policy: An end to American hegemony?

Writing in The American Conservative, Andrew Bacevich notes a post-Trump nostalgia for a world order characterized as, "Liberalism, along with norms, rules, openness, and internationalism: these ostensibly define the postwar and post-Cold War tradition of American statecraft." He goes on to note that such a view leaves out a few things:
Or, somewhat more expansively, among the items failing to qualify for mention in the liberal internationalist, rules-based version of past U.S. policy are the following: meddling in foreign elections; coups and assassination plots in Iran, Guatemala, the Congo, Cuba, South Vietnam, Chile, Nicaragua, and elsewhere; indiscriminate aerial bombing campaigns in North Korea and throughout Southeast Asia; a nuclear arms race bringing the world to the brink of Armageddon; support for corrupt, authoritarian regimes in Iran, Turkey, Greece, South Korea, South Vietnam, the Philippines, Brazil, Egypt, Nicaragua, El Salvador, and elsewhere—many of them abandoned when deemed inconvenient; the shielding of illegal activities through the use of the Security Council veto; unlawful wars launched under false pretenses; “extraordinary rendition,” torture, and the indefinite imprisonment of persons without any semblance of due process.
A bit later:
Prior to Trump’s arrival on the scene, few members of the foreign-policy elite, now apparently smitten with norms, fancied that the United States was engaged in creating any such order. America’s purpose was not to promulgate rules but to police an informal empire that during the Cold War encompassed the “Free World” and became more expansive still once the Cold War ended.
Rather:
Trump’s conception of a usable past differs radically from that favored in establishment quarters. Put simply, the 45th president does not subscribe to the imperative of sustaining American hegemony because he does not subscribe to the establishment’s narrative of 20th-century history. According to that canonical narrative, exertions by the United States in a sequence of conflicts dating from 1914 and ending in 1989 enabled good to triumph over evil. Absent these American efforts, evil would have prevailed. Contained within that parable-like story, members of the establishment believe, are the lessons that should guide U.S. policy in the 21st century.

Trump doesn’t see it that way, as his appropriation of the historically loaded phrase “America First” attests. In his view, what might have occurred had the United States not waged war against Nazi Germany and Imperial Japan and had it not subsequently confronted the Soviet Union matters less than what did occur when the assertion of hegemonic prerogatives found the United States invading Iraq in 2003 with disastrous results.

In effect, Trump dismisses the lessons of the 20th century as irrelevant to the 21st. Crucially, he goes a step further by questioning the moral basis for past U.S. actions. Thus, his extraordinary response to a TV host’s charge that Russian President Vladimir Putin is a killer.
Concerning the Trump resistance:
Say this for the anti-Trump resistance: while the fascism-just-around-the-corner rhetoric may be overheated and a touch overwrought, it qualifies as forthright and heartfelt. While not sharing the view that Trump will rob Americans of their freedoms, I neither question the sincerity nor doubt the passion of those who believe otherwise. Indeed, I am grateful to them for acting so forcefully on their convictions. They are inspiring.

Not so with those who now wring their hands about the passing of the fictive liberal international order credited to enlightened American statecraft. They are engaged in a great scam, working assiduously to sustain the pretense that the world of 2017 remains essentially what it was in 1937 or 1947 or 1957 when it is not.
H/t 3 Quarks Daily.

The profession of literary criticism as I have observed it over 50 years

Updated 6.23.17.

In the course of thinking about my recent rejection at New Literary History I found myself, once again, rethinking the evolution of the profession as I’ve seen it from the 1960s to the present. In fact, that rejection has led me, once again, to rethink that history and to change some of my ideas, particularly about the significance of the 1970s.

This post is a guide to my historically-oriented thinking about academic literary criticism. Much, but not all, of the historical material is autobiographical in nature.

I list the articles more or less in the order of writing. In some cases a post has been rewritten and revised several years after I first wrote it. The link I give is to the most recent version.

Touchstones • Strange Encounters • Strange Poems • the beginning of an intellectual life (1975-2015)

This is about my years at Johns Hopkins, both undergraduate (1965-1969) and graduate (1969-72). That's when, I see in retrospect, I left the profession intellectually, with a "structuralism and beyond" MA thesis on "Kubla Khan," even before I'd joined it institutionally by getting my PhD. I originally wrote this while I was working on my PhD in English at SUNY Buffalo. Art Efron published a journal, Paunch, and I wrote it for that. The current version includes interpolated comments from 2014 and 2015.

The Demise of Deconstruction: On J. Hillis Miller’s MLA Presidential Address 1986. PMLA. Vol. 103, No. 1, Jan. 1988, p. 57.

A letter I published in PMLA in which I replied to J. Hillis Miller on the eclipse of deconstruction. I suggested 1) that deconstruction had a different valence for those who merely learned it in graduate school than for those who had struggled to create it, and 2) that it was in eclipse because it did the same thing to every text.

For the Historical Record: Cog Sci and Lit Theory, A Chronology (2006-2016)

At the beginning of every course (at Johns Hopkins) Dick Macksey would hand out a chronology, a way, I suppose, of saying “history is important” without lecturing on the topic. It was with that in mind that I originally posted this rough and ready chronology in a comment to a discussion at The Valve. The occasion was an online symposium that interrogated Theory by discussing the anthology, Theory’s Empire (Columbia UP 2005). I then emended it a bit and made it a freestanding post. As the title suggests, it juxtaposes developments in cognitive science and literary theory from the 1950s through the end of the millennium.

[BTW The entire Theory’s Empire symposium is worth looking at, including the comments on the posts: http://www.thevalve.org/go/valve/archive_asc/C41]

Seven Sacred Words: An Open Letter to Steven Pinker (2007-2011)

An Open Letter to Steven Pinker: The Importance of Stories and the Nature of Literary Criticism (2015)

Steven Pinker has been a severe critic of the humanities for ignoring recent work in the social and behavioral sciences. He has also argued that the arts serve no biological purpose, that they are "cheesecake for the mind." When I read his The Stuff of Thought (2007) I realized his later chapters contained the basis for an account of the arts. I sketched that out, added a brief account of why deconstruction had been popular, and published it as an open letter, along with his reply. It appeared first at The Valve (2007) and then at New Savanna (2011). In 2015 I posted it to a "session" at Academia.edu. I took some of my comments in that discussion along with some other materials and published the lot at Academia.edu as a working paper. In a final section I propose a four-fold division of literary criticism: 1) description, 2) naturalist criticism, 3) ethical criticism, and 4) digital criticism.

Thinking is action as well (in the brain)

June 12, Science News:
Summary: Neuroscientists have recently put forward an original hypothesis -- all these cognitive functions rely on one central function: emulation. This function creates an abstract dynamic 'image' of movements, thereby enabling the brain to strengthen its motor skills and construct a precise and lasting representation of them. The fronto-parietal network, it is argued, has evolved from a network that only controlled motor skills to a much more generalized system.
* * * * *

Emulation:
This function creates an abstract dynamic 'image' of movements, thereby enabling the brain to strengthen its motor skills and construct a precise and lasting representation of them. The fronto-parietal network, it is argued, has evolved from a network that only controlled motor skills to a much more generalised system. This hypothesis, which is set out in the journal Trends in Cognitive Sciences, would explain why patients who have suffered an injury in this specific part in the brain have sequelae that affect a number of functions which, at first glance, do not necessarily appear to be linked. This research could open the door to more effective multi-modal therapies for individuals with cerebral lesions.

Numerous functional imaging studies show that the fronto-parietal network is activated by very disparate tasks. This is the case for motor activities, such as picking up or pointing to an object, as well as for eye movements -- and even when no movement is involved, if we shift our attention or perform a mental calculation. Radek Ptak, a neuropsychologist at the UNIGE Faculty of medicine and the HUG Division of neurorehabilitation, puts it like this: "Why is the very same region important for so many different tasks? What is the relationship between motor skills, motor learning and the development of cognition in humans? These are the questions that lie at the heart of our research." A review of all the data currently available suggests that the tasks share a common process, which the scientists have termed "emulation." This process, which consists of planning and representing a movement without actually performing it, activates the brain network in the same way as real movements. "But we hypothesise that the brain goes a step further," explains Dr Ptak: "It uses such dynamic representations to carry out increasingly complex cognitive functions beyond just planning movements."

* * * * *

Radek Ptak, Armin Schnider, Julia Fellrath. The Dorsal Frontoparietal Network: A Core System for Emulated Action. Trends in Cognitive Sciences, 2017; DOI: 10.1016/j.tics.2017.05.002

Friday Fotos: Rainbow Variations on a Blossom

20170527-_IGP9146 eq R

20170527-_IGP9146 eq Org

20170527-_IGP9146 eq Y

20170527-_IGP9146 eq G

20170527-_IGP9146 eq B

Cooperative hunting among the orcas?

The orcas will wait all day for a fisher to accumulate a catch of halibut, and then deftly rob them blind. They will relentlessly stalk individual fishing boats, sometimes forcing them back into port.

Most chilling of all, this is new: After decades of relatively peaceful coexistence with cod and halibut fishers off the coast of Alaska, the region’s orcas appear to be turning on them in greater numbers.

“We’ve been chased out of the Bering Sea,” said Paul Clampitt, Washington State-based co-owner of the F/V Augustine.

Like many boats, the Augustine has tried electronic noisemakers to ward off the animals, but the orcas simply got used to them.

“It became a dinner bell,” said Clampitt.

John McHenry, owner of the F/V Seymour, described orca pods near Alaska’s Aleutian Islands as being like a “motorcycle gang.”

“You’d see two of them show up, and that’s the end of the trip. Pretty soon all 40 of them would be around you,” he said.

A report this week in the Alaska Dispatch News outlined instances of aggressive orcas harassing boats relentlessly — even refusing to leave after a desperate skipper cut the engine and drifted silently for 18 hours.

Sperm whales are getting into the act too. Here's a video of whales skimming a line:

"After a particularly heavy assault by sperm whales, fishers are known to pull up lines in which up to 90 per cent of the catch has disappeared or been mangled."

H/t Tyler Cowen.

Thursday, June 22, 2017

Jeremy Lent on the Great Transformation


Speaking as if from the year 2050, historian Jeremy Lent imagines how the world escaped climate catastrophe.

Wednesday, June 21, 2017

An Open Letter to Dan Everett about Literary Criticism (PDF at Academia.edu)

I’ve finally PDF’ed my Open Letter to Dan Everett and uploaded it to Academia.edu:

https://www.academia.edu/33589497/An_Open_Letter_to_Dan_Everett_about_Literary_Criticism

An Open Letter to Dan Everett about Literary Criticism

Abstract: Literary critics are interested in meaning (interpretation), but when linguists, such as Haj Ross, look at literature, they’re interested in structure and mechanism (poetics). Shakespeare presents a particular problem because his plays exist in several versions, with Hamlet as an extreme case (three somewhat different versions). The critic doesn’t know where to look for the “true” meaning. Were linguists to concern themselves with such things (which they mostly don’t), they’d be happy to deal with each version separately. Undergraduate instruction in literature is properly concerned with meaning. Conrad’s Heart of Darkness has become a staple because of its focus on race and colonialism, a focus Chinua Achebe critiqued in 1975; the ensuing controversy illustrates the problematic nature of meaning. And yet, when examined at arm’s length, the text exhibits symmetrical patterning (ring composition) and fractal patterning. Such duality, if you will, calls for two complementary critical approaches: ethical criticism addresses meaning (interpretation) and naturalist criticism addresses structure and mechanism (poetics).

Dan Everett & Me . . . 1
Haj’s Problem: Interpretation and Poetics . . . 1
Will the Real Shakespeare Stand Up . . . 4
Meaning in Conrad’s Heart of Darkness . . . 8
Pattern in Conrad’s Heart of Darkness . . . 13
Contexts of Understanding: Naturalist Criticism and Ethical Criticism . . . 21

Wonder Woman: A Quick Take

After reading the rapturous reviews, and accounts of young girls exiting the theaters pulling swords from their dresses and spinning lariats of glowing gold, I decided to see Wonder Woman. And, yes, it was a good film. And, yes, it was still a superhero comic-book film, albeit with a grrl in the lead.

One of the fight scenes – I believe it was the first one – had me chuckling with glee. Gal Gadot was spectacular; I hope she gets a raise for the next one, and profit participation. The trench warfare was appropriately grim – the War to End All Wars, ha! And the film played nicely with Diana’s expectation that she would be fighting Ares. Her male sidekick tried to tell her Ares is only a metaphor – which is surely what much of the audience was thinking as well. But, no, Diana insisted that he was real and that she’d fight him. And the film obeyed, pulling Ares out of left field for a final super-spectacular battle sequence (in the course of which the boy sidekick sacrifices himself for the good of the cause).

And then there’s that final scene, back in the present, with Diana Prince in her office at the Louvre looking at a photograph sent to her by Bruce Wayne. It’s a photo taken at the front with Diana, male sidekick, and the others in their rag-tag gang. She’s thinking that only love can save the world.

She no doubt believes that. It may well be true. But that’s not what this movie is about. As contrarian economist Tyler Cowen observed, “Yet, immediately beneath the facade of the apparently rampant feminism is a quite traditional or even reactionary tale of martial virtue being inescapable.”

If you’re looking for a heroic woman warrior who fights with love, might I suggest Miyazaki’s Nausicaä? She’s good with a sword, but she also speaks with the animals and she’s a scientist. And, yes, she does save the world.

nausicaa-3

nausica-walk

Tuesday, June 20, 2017

Jabba the Hutt, or How We Communicate

This post is now over six years old, but it's one of my favorites.

* * * * *

(10.23.13) They're having an interesting discussion of conversational turn-taking over at Language Log (see the comment HERE). So I thought I'd dig out this three-year-old post, which suggests that conversation is a bit like kids playing in a sandbox, or with blocks, or dolls. Everything is visible to everyone at all times. The trick is to coordinate movements as you move the toys around.

* * * * *

From my notes:
A number of years ago I saw a TV program on the special effects of the Star Wars trilogy. One of the things the program explained was how the Jabba the Hutt puppet was manipulated. There were, I think, perhaps a half dozen operators for the puppet, one for the eyes, one for the mouth, one for the tail, etc. Each had a TV monitor which showed him what Jabba was doing, all of Jabba, not just their little chunk of Jabba. So each could see the whole, but manipulate only a part. Of course, each had to manipulate his part so it blended seamlessly with the movements of the other parts. So each needed to see the whole to do that.

That seems to me a very concrete analogy to what musicians have to do. Each plays only a part in the whole, but can hear the whole.
I don’t know how long ago I saw that program; it may well have been pre-WWW, but certainly not pre-internet, which is older than Star Wars – or at least its precursor, ARPAnet, is older than Star Wars. In any event, you can now read about the puppetry behind Jabba at the Wikipedia and elsewhere (scroll down to Behind the Scenes). The above description is accurate enough for my purposes.

And that purpose is to provide a metaphor, not just for music-making, but for communication in general. In particular, for speech communication. The idea is to provide an alternative that thoughtful people can use to over-ride the pernicious effects of the so-called conduit metaphor, which Michael Reddy* analyzed as a pile of lexical habits we employ when talking about language. These habits presume that we communicate by sending meaning through some kind of conduit, whether real (e.g. a telephone line) or virtual (e.g. that air between two people talking). The person at one end of the conduit puts the meaning into a packet of language, sends the packet through the conduit. The other person takes the packet from the conduit and then takes the meaning out of it.

It doesn’t work that way, not the meaning part. What does go through the conduit is a speech signal, vibrations in air, analog or digital signals through electrical lines, characters written or printed on paper, and so forth. But the meaning isn’t actually IN the signal. If it were, then we could understand any language with ease because the meaning would be in the physical signal itself. Alas, that’s not the case. Meanings are linked to segments of the signal by hard-learned linguistic conventions; and the conventions are different for each language.

What happens, then, is that the listener construes the meaning of the signal according to their understanding of the overall context and their understanding of the governing linguistic conventions. They may or may not get it right. And there’s likely to be a bit of conversational negotiation before the speakers agree on whatever is at issue.

And that is what the Jabba metaphor is about. Everyone stands in the same relationship to what appears on the TV monitor showing Jabba’s movements. In the case of a musical group, each person is playing their own part – the drummer, bass player, tuba, glockenspiel, sitar, nose flute, pipe organ, whatever – and is aware of it and what they intend next. The monitor gives them the whole, in which their part must fit.

The case of speech is trickier, for one person speaks while the others listen. The Jabba metaphor suggests that the speaker doesn’t actually know what he or she is saying until he or she actually hears it spoken. And that just doesn’t make sense.

Monday, June 19, 2017

Tickle! Tickle! Magenta!

20170527-_IGP9146

Extra! Computational Criticism Breaks with Tradition! Sky Remains Overhead

Andrew Goldstone has decided that perhaps so-called distant reading is not reading at all. The document is relatively short, “The Doxa of Reading” (PDF), and will appear in PMLA for May 2017. Goldstone takes “doxa” from Pierre Bourdieu (whom I’ve not read), who regards it “as what participants in the literary field take for granted.” What critics take for granted is “the assumption that the primary activity of academic literary study is textual interpretation.” Though Goldstone doesn’t say so, this wasn’t always the case; it has been so only for the last five or six decades.

Bourdieu also allows for orthodoxy and heterodoxy, which stand in opposition to rupture. In Goldstone’s analysis–I refuse to call it a reading–the term “distant reading” functions as a heterodox form of reading and does so, in effect, to ward off the realization that it is not a form of reading at all but a rupture from reading. Goldstone’s analysis is both subtle and complex, just enough so that I’m not sure where he stands. But his penultimate sentence seems clear enough: “If we cease to regard the different versions of distant reading as a singular project [...] we gain a wider sense of the possibilities for scholarship.” I take that to mean that in some versions distant reading – computational criticism – really does point beyond “reading” and so to new modes of literary investigation.

In the course of his argument Goldstone considers the visual objects that have become a signal feature of so much work in computational criticism:
Looking back on the work of the Literary Lab in a recent pamphlet, Moretti asserts:
Images come first, in our pamphlets, because–by visualizing empirical findings–they constitute the specific object of study of computational criticism; they are our “text”; the counterpart to what a well-defined excerpt is to close reading. (“Literature, Measured” 3)
As an account of a quantitative methodology, this is a strange statement: visualizations are powerful exploratory tools, but they should be considered provisional summaries of the data of literary history, not primary objects. As a description of argumentative rhetoric, however, Moretti’s analogy between visualization and excerpt is illuminating. It positions the “computational critic” as an expert interpreter of visual texts, a heterodox version of the close reader.
He is correct. Those visual objects are not the “primary objects” of investigation. Yes, they are “exploratory tools”.

It seems to me that they have the character of observations (of the primary object of investigation). Think of the images created in radio astronomy; they may look rather like optical photographs, but they aren’t. The process through which they are constructed is different. The visual objects of computational criticism don’t look like photographs; they look like the charts and graphs that are ubiquitous in so many quantitative disciplines. But they have the same status in the intellectual process as those images of radio astronomy, that of observations.

It is through observations that a phenomenon of interest enters the investigative field. Observations as well have something of a descriptive character. Wouldn’t you know, description has recently become of interest in discussions of critical method and practice, along with “surface reading”.

And with that I want to turn to the aborted structuralist moment in the history of literary criticism. Jonathan Culler published Structuralist Poetics in 1975. In his preface he observed (xiv-xv):
The type of literary study which structuralism helps one to envisage would not be primarily interpretive; it would not offer a method which, when applied to literary works, produced new and hitherto unexpected meanings. Rather than a criticism which discovers or assigns meanings, it would be a poetics which strives to define the conditions of meaning.
Culler himself never followed up on the poetics he proposed, nor did others take him up on it. To be sure, we have narratology, a poetics of the narrative. But it is hardly central to the discipline as it has been practiced over the past half-century or so.

A turn to poetics would have been a rupture from emerging interpretive practices (aka reading). But then those practices themselves constituted a rupture from the practice of a discipline rooted in philology and (traditional) literary history. While interpretive criticism has its roots before World War II, it isn’t until after the war that it became the center of critical activity. As that happened interpretive practice became the focus of scrutiny, scrutiny that led, among other things, to the (in)famous structuralism conference that took place at Johns Hopkins in 1966. But the rupture that actually happened was not a turn to poetics, but a turn to deconstruction. And if deconstruction was noisy and obstreperous at the time, in retrospect it is clear that it was not a rupture at all, but just a further variation on interpretive reading.

It remains to be seen whether or not computational criticism will flourish as a genuine rupture from interpretative reading, though not so much as a replacement as a supplement. If so, will it find common cause with a new poetics, one based on surface “reading”, description, and perhaps even the newer psychologies (cognitive, neuro-, and evolutionary)? Can we move beyond reading, not only in the analysis of large corpora, but in the analysis of single texts? And can we move beyond a metalanguage centered on notions of reading and space (close, distant, hidden, surface) to one based on the intellectual operations involved (among others, description, analysis, interpretation, explanation)?

* * * * *

Of course I've addressed these issues endlessly here and in my working papers. For a start, see this post from 3 Quarks Daily, “The Only Game in Town: Digital Criticism Comes of Age”, and in numerous blog entries and working papers (e.g. these working papers on description).

Sunday, June 18, 2017

Does "distant reading" presage a return to poetics?

By all means, read the piece Goldstone links to in the first tweet,"The Doxa of Reading" (accepted for publication in PMLA).

See this piece from 3 Quarks Daily (5 May 2014), The Only Game in Town: Digital Criticism Comes of Age.

Friday, June 16, 2017

Friday Fotos: Hoboken Arts & Music Festival, June 2017

20170611-_IGP9163

20170611-_IGP9171

20170611-_IGP9179

20170611-_IGP9174

20170611-_IGP9167

Kenan Malik asks some questions about culture and cultural appropriation


Cf. my earlier post in which I quote from his NYTimes op-ed.

H/t Jerry Coyne.

Thursday, June 15, 2017

Jomny Sun – OOO tweeter


From the NYTimes profile:
Sun had just emerged from the “cave” of finishing the book’s illustrations. He spent a year completing 180 drawings, pen on vellum, and managed to damage his shoulder in the process. It was his attempt to bring a sensibility reminiscent of some of his favorite writer-illustrators — Bill Watterson of “Calvin and Hobbes,” Shel Silverstein, Maurice Sendak — into the social-media age. In the book, an alien comes to Earth and encounters animals and trees, which he assumes are people. There is an otter that spouts art theory. An egg enduring an existential crisis. A melancholic tree. A bee who offers therapy to a bear. The story lines intersect, vanish, reappear.
Cf. my current post on Tim Morton.

Matryoshka dolls

20170611-_IGP9164

Timothy Morton in the Guardian – "the philosopher prophet of the Anthropocene"


Morton in the art world:
Over the past decade, Morton’s ideas have been spilling into the mainstream. Hans Ulrich Obrist, the artistic director of London’s Serpentine gallery, and perhaps the most powerful figure in the contemporary art world, is one of his loudest cheerleaders. Obrist told readers of Vogue that Morton’s books are among the pre-eminent cultural works of our time, and recommends them to many of his own collaborators. The acclaimed artist Olafur Eliasson has been flying Morton around the world to speak at his major exhibition openings. Excerpts from Morton’s correspondence with Björk were published as part of her 2015 retrospective at the Museum of Modern Art in New York.
Some of his ideas:
His most frequently cited book, Ecology Without Nature, says we need to scrap the whole concept of “nature”. He argues that a distinctive feature of our world is the presence of ginormous things he calls “hyperobjects” – such as global warming or the internet – that we tend to think of as abstract ideas because we can’t get our heads around them, but that are nevertheless as real as hammers. He believes all beings are interdependent, and speculates that everything in the universe has a kind of consciousness, from algae and boulders to knives and forks. He asserts that human beings are cyborgs of a kind, since we are made up of all sorts of non-human components; he likes to point out that the very stuff that supposedly makes us us – our DNA – contains a significant amount of genetic material from viruses. He says that we’re already ruled by a primitive artificial intelligence: industrial capitalism. At the same time, he believes that there are some “weird experiential chemicals” in consumerism that will help humanity prevent a full-blown ecological crisis.
And so:
We live in a world with a moral calculus that didn’t exist before. Now, doing just about anything is an environmental question. That wasn’t true 60 years ago – or at least people weren’t aware that it was true.
Apocalypse now:
Morton believes that this constitutes a revolution in our understanding of our place in the universe on a par with those fomented by Copernicus, Darwin and Freud. He is just one of thousands of geologists, climate scientists, historians, novelists and journalists writing about this upheaval, but, perhaps better than anyone else, he captures in words the uncanny feeling of being present at the birth of this extreme age.
Which is to say, the apocalypse is already with us:
Morton means not only that irreversible global warming is under way, but also something more wide-reaching. “We Mesopotamians” – as he calls the past 400 or so generations of humans living in agricultural and industrial societies – thought that we were simply manipulating other entities (by farming and engineering, and so on) in a vacuum, as if we were lab technicians and they were in some kind of giant petri dish called “nature” or “the environment”. In the Anthropocene, Morton says, we must wake up to the fact that we never stood apart from or controlled the non-human things on the planet, but have always been thoroughly bound up with them. We can’t even burn, throw or flush things away without them coming back to us in some form, such as harmful pollution. Our most cherished ideas about nature and the environment – that they are separate from us, and relatively stable – have been destroyed.

Morton likens this realisation to detective stories in which the hunter realises he is hunting himself (his favourite examples are Blade Runner and Oedipus Rex). “Not all of us are prepared to feel sufficiently creeped out” by this epiphany, he says. But there’s another twist: even though humans have caused the Anthropocene, we cannot control it.
And so, believe it or not, we must rejoice:
That might sound gloomy, but Morton glimpses in it a liberation. If we give up the delusion of controlling everything around us, we might refocus ourselves on the pleasure we take in other beings and life itself. Enjoyment, Morton believes, might be the thing that turns us on to a new kind of politics. “You think ecologically tuned life means being all efficient and pure,” the tweet pinned to the top of his Twitter timeline reads. “Wrong. It means you can have a disco in every room of your house.”
And, wouldn't you know, he's been "convening with members of Nasa’s Jet Propulsion Laboratory to contemplate the kinds of messages we should be sending into space on a potential reboot of the Voyager mission."

Of course, not everyone is enchanted:
The Morton detractors with whom I spoke accused him of misunderstanding contemporary science, like quantum mechanics and set theory, and then claiming his distortions as support for his wild ideas. They shared a broad critique that reminded me of the sceptical adage, “If you open your mind too far, your brains will fall out.” The slurry of interesting ideas in Morton’s work doesn’t hold together under scrutiny, they say.
And, yeah, I can understand that. But:
Morton’s defenders, however, see him as something of a Ralph Waldo Emerson for the Anthropocene: his writing has value, even if it doesn’t always stand up to philosophical scrutiny. “No one in a philosophy department is going to be taking Tim Morton seriously,” Claire Colebrook, a professor of English at Pennsylvania State University who has worked extensively on the Anthropocene, told me. But she teaches Morton’s work to undergraduates and they love it. “Why? Because they’re like, ‘Shut up and give me an idea!’”

Wednesday, June 14, 2017

Is Shelley’s Frankenstein a Ring-Form Text?

I’ve not read Frankenstein, or The Modern Prometheus, but I’m told that it has a complex narrative structure. Dorothea Wolschak explains:
In the core of the novel the Creature's story is presented to us framed by Victor Frankenstein's story which itself is enframed by Robert Walton's epistolary narrative. The overall structure of the novel is symmetrical: it begins with the letters of Walton, shifts to Victor's tale, then to the Creature's narration, so as to switch to Victor again and end with the records of Walton. In this manner the reader gets different versions of the same story from different perspectives. Mary Shelley's rather atypical approach not to stick to only one narrator and one defined narrative situation throughout the book creates various impressions on the reader of the novel.
Thus we have:

Walton (Frankenstein ((Creature)) Frankenstein) Walton

That looks like a ring-form. Until I’ve actually read the text I can’t say whether or not it functions in that way. It’s on my list.
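The symmetry is easy to state mechanically. Here's a minimal sketch – my own illustration, assuming nothing about the novel beyond Wolschak's description – that treats the sequence of narrative frames as a list and checks that it reads the same in both directions:

```python
def is_ring(frames):
    """True if the sequence of narrators is palindromic,
    i.e. the frames close in the reverse of the order they opened."""
    return frames == frames[::-1]

# The frame sequence as Wolschak describes it:
frankenstein = ["Walton", "Frankenstein", "Creature", "Frankenstein", "Walton"]
print(is_ring(frankenstein))  # True

# A merely sequential narrative would fail the test:
print(is_ring(["Walton", "Frankenstein", "Creature"]))  # False
```

Of course, passing this test only shows that the frames are symmetrically nested; whether the text *functions* as a ring is the interpretive question that remains open until I've read it.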

H/t Giorgina Paiella

Frederick Douglass was the most photographed man in 19th century America

Tyler Cowen conducted a wide-ranging interview with Jill Lepore, Harvard historian and writer for The New Yorker. The whole thing is worth reading, but I was particularly struck by this bit:
LEPORE: There’s this lovely series of lectures that Frederick Douglass gives in the 1860s. They have different titles, but one of them is called “Pictures in Progress.” Douglass, when he escaped from slavery, 1838, he was 20, the year before the daguerreotype comes to the United States. He sits for a daguerreotype in 1841, and he’s really transformed by what it is to see himself in a photograph.

In the 1860s, he writes all these essays about photography in which he argues that photography is the most democratic art. And he means portrait photography. And that no white man will ever make a true likeness of a black man because he’s been represented in caricature — the kind of runaway slave ad with the guy, the little figure, silhouette of the black figure carrying a sack. And, as historians have recently demonstrated, he’s the most photographed man in the 19th century. Douglass just makes a big commitment to being photographed.

COWEN: More than Lincoln?

LEPORE: Yeah, he is really obsessed with photography because what it means to have a black man represented is the kind of “I am a man” speech that you know from the 1960s, these kind of black protests, that slogan “I am a man. I’m not a caricature; I’m not less than a man.” And he writes this essay about photography, why it’s so important, and why it’s basically, although not a natively American art, is the sort of de facto American art form because even the poorest servant, even the poorest cook-maid, can afford a photograph of herself and of the people that she loves.

In previous ages, when it would be kings and bishops who were portrayed, that everybody can see themselves and can see one another and, therefore, we can understand our equality. He has this whole technologically deterministic argument about photography and progress, and it’s very much bound up in 19th-century fantasy, notions of progress. But it’s a little heartbreaking to read because that’s not what happens with photography, right?