Wednesday, April 30, 2014
Computational Thinking and the Digital Critic: Part 2, An Ant Walks on the Beach and a Pilot is Alone
Simon’s ant is a well-known thought experiment from Chapter 3, “The Psychology of Thinking: Embedding Artifice in Nature,” in Herbert A. Simon, The Sciences of the Artificial, 1981. It’s a parable about computation, about how computational requirements depend on the problem to be solved. Stated that way, it is an obvious truism. But Simon’s thought experiment invites you to consider this truism where the “problem to be solved” is an environment external to the computer – it is thus reminiscent of Braitenberg’s primitive vehicles (which I discussed in Part 1).
Think of it like this: the nervous system requires environmental support if it is to maintain its physical stability and operational coherence. Note that Simon was not at all interested in the physical requirements of the nervous system. Rather, he was interested in suggesting that we can get complex behavior from relatively simple devices, and simplicity translates into design requirements for a nervous system.
Simon asks us to imagine an ant moving about on a beach:
We watch an ant make his laborious way across a wind- and wave-molded beach. He moves ahead, angles to the right to ease his climb up a steep dunelet, detours around a pebble, stops for a moment to exchange information with a compatriot. Thus he makes his weaving, halting way back to his home. So as not to anthropomorphize about his purposes, I sketch the path on a piece of paper. It is a sequence of irregular, angular segments--not quite a random walk, for it has an underlying sense of direction, of aiming toward a goal.
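Simon's point can be made concrete with a toy simulation. The sketch below is my illustration, not Simon's own example: the ant's entire "program" is one rule (step toward the goal; detour if blocked), and the pebble layout is hypothetical. All the jaggedness of the resulting path comes from the beach, not the ant.

```python
def ant_path(pebbles, start, goal):
    """Walk a grid from start to goal, detouring around pebbles.

    The ant's rule is trivial: take one step toward the goal,
    sidestepping any blocked cell. The complexity of the path
    it traces reflects the terrain, not the walker.
    """
    x, y = start
    path = [(x, y)]
    while (x, y) != goal:
        dx = (goal[0] > x) - (goal[0] < x)   # -1, 0, or 1 toward goal
        dy = (goal[1] > y) - (goal[1] < y)
        # Try the diagonal first, then each axis; take the first open step.
        for step in ((x + dx, y + dy), (x + dx, y), (x, y + dy)):
            if step != (x, y) and step not in pebbles:
                x, y = step
                break
        else:  # completely boxed in; give up (good enough for a sketch)
            break
        path.append((x, y))
    return path

# A few pebbles on an otherwise smooth beach (hypothetical layout).
pebbles = {(1, 1), (2, 2), (3, 1)}
trace = ant_path(pebbles, (0, 0), (5, 5))
# The trace bends around (1, 1): the detour is in the beach, not the ant.
```

Plot the returned path and you get exactly Simon's sketch on paper: irregular, angular segments with an underlying sense of direction.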
Tuesday, April 29, 2014
If you're a digital humanist, or curious about the species, listen to this lecture, carefully!
Roughly at 12:12:
The primary historical object I want to bring into focus and call on for help is the Otherness of computing, not its user friendliness, ubiquitous presence, or social power... I want to grab on to the fear this Otherness provokes and reach through it to the otherness of the techno-scientific tradition from which computing comes. I want to recognize and identify this fear of Otherness, that is the uncanny, as, for example, Sigmund Freud, Stanley Cavell, and Masahiro Mori have identified it, to argue that this Otherness is to be sought out and cultivated, not concealed, avoided, or overcome. That its sharp opposition to our somnolence of mind is true friendship.
Somewhat later: "If we’re not changed by computing, we’re imprisoned by it."
This is about computational thinking. But computational thinking is not one thing. It is many, some as yet undefined. What can it become for students of the humanities?
How, you might ask, are we to engage a computational understanding of literary process, if computation isn’t well-defined?
With care, I say, with care. We have to make it up.
* * * * *
As Stephen Ramsay pointed out in a post, DH and CS (where DH = digital humanities and CS = computer science), computer scientists are mostly interested in abstract matters of computability and data structures while programmers are mostly concerned with the techniques of programming certain kinds of capabilities in this or that language. Those are different, though related, undertakings.
Further, the practical craft has two somewhat different aspects. One faces toward the end user and is concerned with capturing that user’s world in the overall design of the program. This design process is, in effect, applied cognitive anthropology. The other aspect faces toward the computer itself and is concerned with implementing that design through the means available in the appropriate programming language. This is writing, but in a very specialized dialect. But it’s all computational thinking in some meaningful sense.
Though I have written a computer program or three, that was long ago. I have, however, spent a fair amount of time working with programmers. At one period in my life I documented software; at a different time I participated in product design.
But I also spent several years in graduate school studying the computational semantics of natural language with the late David Hays. That’s an abstract and theoretical enterprise. Though he is one of the founders of computational linguistics, Hays did no programming until relatively late in his career, after he’d left academia. He was interested in how the mind works and computation was one of his conceptual strategies. I studied with Hays because I wanted to figure out how poetry worked. All the members of his research group were interested in the human mind in one way or another; some of them were also programmers of appreciable skill.
Monday, April 28, 2014
When this is posted it will be the 2398th post on New Savanna since my first post on April 4, 2010 (which now contains nothing but a busted link). Some of those are just links to other material, many of them my photos, while a few others are long-form posts on a variety of subjects: literature, animation, cognitive psychology, society, cultural evolution, graffiti, and a few others. The thing is, despite the fact that I did those posts, I no longer know what I’ve done.
When I look through old posts for something that merits reposting (thus saving me the time of writing something new) I find posts I’d forgotten about. And when I go looking for something I know I’ve written, it sometimes takes me a while to find it. Sometimes I find it by searching through my tags. Sometimes I’ll search on a word or phrase I figure is likely to be in the target post, but not in many others. This searching may take several minutes or more.
I suppose that’s not bad. But, really, I’d like to find stuff instantly. Just like I recall something from my own mind.
But whoops! my own mind doesn’t work like that either. Sometimes I can remember things, sometimes I can’t.
What’s interesting though is that blogging has shifted the boundary between my private notes and public thoughts.
Sunday, April 27, 2014
The following notes are an addendum to my original conjecture that Fantasia is encyclopedic in scope. That is, that it spans our knowledge of the cosmos, not by exhaustive listing, but by strategic indication. Each of the eight episodes, plus the interlude, depicts a different aspect of the cosmos – think, e.g. Rite of Spring for the cosmos large and small, Pastoral Symphony for domesticity, and so forth. If you then ask yourself, what’s the most compact subject area that embraces all of Fantasia, you’re left with nothing smaller than the cosmos as a whole.
How’d Disney do it? It was a group effort. Sure, good old Uncle Walt had the final say. But tens and hundreds of people were involved in coming up with ideas. Each searched their individual minds and offered ideas into the complex evolving system that was the production of Fantasia.
For those in digital humanities, note that I employ a mathematical metaphor, that of a space. But I don’t think of it as a mere metaphor. Done properly, it’s a model.
I’ve been thinking about this encyclopedia notion. I like it, I think it works, but I also think it’s a bit vague. What’s it mean to “cover/imply the world”? How many and what distribution of topics does a real encyclopedia have to have in order to qualify, good and proper, as an encyclopedia? How is it that Mendelson and Moretti know that those texts are encyclopedic? Sure, they give reasons and examples – as I did for Fantasia. But that’s all after-the-fact rationalization. What’s the original “aha!” recognition about?
Saturday, April 26, 2014
Look at some of the presentations for the 2nd Workshop on Mind, Mechanism, and Mathematics (May 12-13, NYC):
- Mark Braverman (Princeton University) - Protecting a Conversation Against Adversarial Interference
- Rebecca Schulman (Johns Hopkins University) - Software for Matter: Programming the Morphogenesis, Replication and Metamorphosis of Everyday Things
- Martin Davis (New York University and UC Berkeley) - Gödel, Mechanism, and Consciousness
- Benjamin Koo (Tsinghua University) - CELL: A Cognitive Extreme Learning Lab
- Paul Grant (University of Cambridge) - Synthetic Spatial Patterning Using Two-Channel Quorum-Sensing Signaling
Check out the Big Questions:
1. The Mathematics of Emergence: The Mysteries of Morphogenesis
2. Possibility of Building a Brain: Intelligent Machines, Practice and Theory
3. Nature of Information: Complexity, Randomness, Hiddenness of Information
4. How should we compute? New Models of Logic and Computation
Friday, April 25, 2014
No, what’s really new about “Capital” is the way it demolishes that most cherished of conservative myths, the insistence that we’re living in a meritocracy in which great wealth is earned and deserved.
For the past couple of decades, the conservative response to attempts to make soaring incomes at the top into a political issue has involved two lines of defense: first, denial that the rich are actually doing as well and the rest as badly as they are, but when denial fails, claims that those soaring incomes at the top are a justified reward for services rendered. Don’t call them the 1 percent, or the wealthy; call them “job creators.”
But how do you make that defense if the rich derive much of their income not from the work they do but from the assets they own? And what if great wealth comes increasingly not from enterprise but from inheritance?
Piketty wouldn’t raise taxes on income, which thriving professionals have a lot of; he would tax investment capital, which they don’t have enough of ... Politically, the global wealth tax is utopian, as even Piketty understands. If the left takes it up, they are marching onto a bridge to nowhere. But, in the current mania, it is being embraced.
This is a moment when progressives have found their worldview and their agenda. This move opens up a huge opportunity for the rest of us in the center and on the right. First, acknowledge that the concentration of wealth is a concern with a beefed up inheritance tax.
Second, emphasize a contrasting agenda that will reward growth, saving and investment, not punish these things, the way Piketty would. Support progressive consumption taxes not a tax on capital. Third, emphasize that the historically proven way to reduce inequality is lifting people from the bottom with human capital reform, not pushing down the top.
In what way is Godzilla, King of the Monsters, like Wuthering Heights? You might think that they are so different that their resemblances are of little account. Sure, they’re both about people in conflict, but other than that... After all, one is a classic text of English literature written in the mid-19th Century. The other is an Americanized version of a mid-20th Century Japanese creature feature that has spawned almost 30 sequels, all about monsters.
Yes, they ARE very different texts. But they are alike in a very profound way: form. Consider the following diagram, which applies to both texts:
The upper line indicates the story events in temporal order, from first (on the left) to last (on the right), divided into three segments. The lower line depicts how those three sets of events are re-ordered in the text.
Thus both stories have a certain sequence of events that happens relatively late in the overall sequence. As the stories are actually narrated, however, this late sequence is moved to the beginning and the other sequences are adjusted to accommodate. In both cases the sequence that is moved involves a narrator external to the main sequence but known to the characters in it.
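The transformation the diagram depicts is simple enough to state as a procedure. The sketch below is schematic (the segment labels are mine, not quotations from either text): take the story's segments in temporal order and move the late framing segment, the one with the external narrator, to the front.

```python
# Story order: the events as they happen in time, in three segments.
# The labels are schematic stand-ins, not content from either text.
story_order = ["1: early events", "2: middle events", "3: late frame (narrator)"]

def narrate(segments):
    """Reorder story segments into narrative order: the late framing
    segment opens the telling, and the earlier segments follow as
    retrospect, as on the diagram's lower line."""
    return [segments[-1]] + segments[:-1]

narrative_order = narrate(story_order)
# narrative_order begins with the chronologically last segment.
```

Both Wuthering Heights and Godzilla, on this account, are instances of the same one-line permutation applied to different material.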
Thursday, April 24, 2014
From still eating oranges:
The necessity of conflict is preached as a kind of dogma by contemporary writers’ workshops and Internet “guides” to writing. A plot without conflict is considered dull; some even go so far as to call it impossible. This has influenced not only fiction, but writing in general—arguably even philosophy. Yet, is there any truth to this belief? Does plot necessarily hinge on conflict? No. Such claims are a product of the West’s insularity. For countless centuries, Chinese and Japanese writers have used a plot structure that does not have conflict “built in”, so to speak. Rather, it relies on exposition and contrast to generate interest. This structure is known as kishōtenketsu.

Kishōtenketsu contains four acts: introduction, development, twist and reconciliation. The basics of the story—characters, setting, etc.—are established in the first act and developed in the second. No major changes occur until the third act, in which a new, often surprising element is introduced. The third act is the core of the plot, and it may be thought of as a kind of structural non sequitur. The fourth act draws a conclusion from the contrast between the first two “straight” acts and the disconnected third, thereby reconciling them into a coherent whole. Kishōtenketsu is probably best known to Westerners as the structure of Japanese yonkoma (four-panel) manga; and, with this in mind, our artist has kindly provided a simple comic to illustrate the concept.
See Azumanga Daioh.
In the 6th pamphlet from Stanford’s Literary Lab, “Operationalizing”: or, the Function of Measurement in Modern Literary Theory, Franco Moretti ended with a call to explicate the theoretical consequences of computing for literary study. That’s what I’ve been doing. It is now time to wrap up the exposition.
Let us begin with a passage from one of the last essays published by Edward Said, Globalizing Literary Study (PMLA, Vol. 116, No. 1, 2001, pp. 64-68). In his second paragraph Said notes: “An increasing number of us, I think, feel that there is something basically unworkable or at least drastically changed about the traditional frameworks in which we study literature“ (p. 64). Agreed. He goes on (pp. 64-65):
I myself have no doubt, for instance, that an autonomous aesthetic realm exists, yet how it exists in relation to history, politics, social structures, and the like, is really difficult to specify. Questions and doubts about all these other relations have eroded the formerly perdurable national and aesthetic frameworks, limits, and boundaries almost completely. The notion neither of author, nor of work, nor of nation is as dependable as it once was, and for that matter the role of imagination, which used to be a central one, along with that of identity has undergone a Copernican transformation in the common understanding of it.
What has happened to all those things, as Alan Liu has noted in “The Meaning of the Digital Humanities” (PMLA 128, 2013, 409-423) is that they have dissolved into vast networks of objects and processes interacting across many different spatial and temporal scales, from the syllables of a haiku dropping into a neural net through the process of rendering ancient texts into movies made in Hollywood, Bollywood, or “Chinawood” (that is, Hengdian, in Zhejiang Province) and shown around the world.
If it is difficult to gain conceptual purchase on the autonomous aesthetic realm, then perhaps we need new conceptual tools. Computation provides tools that allow us to examine large bodies of texts in new ways, ways we are only beginning to utilize. But computation also gives us new ways of thinking about the mind, and that is particularly important, and problematic, in this context.
Wednesday, April 23, 2014
Poets are the hierophants of an unapprehended inspiration; the mirrors of the gigantic shadows which humanity casts upon the present; the words which express what they understand not; the trumpets which sign to battle, and feel not what they inspire; the influence which is moved not but moves. Poets are the unacknowledged legislators of the world.
–Percy Bysshe Shelley
In the first post in this series, Discourse and Conceptual Topology, I reviewed network models on three scales, micro, meso, and macro. In the second post, From History to Abstraction, I moved to the micro scale and argued that the mechanism of abstraction proposed by David Hays gives us a way of thinking about how a historical process can lead to subsequent abstraction and illustrated the model through an examination of Shakespeare’s Sonnet 129. In this post I examine Heuser and Le-Khac on the 19th Century British novel and undertake a formal comparison of The Winter’s Tale and Wuthering Heights in which I argue that Brontë had the advantage of conceptual machinery unavailable to Shakespeare, though in some way anticipated by him. I hope to conclude this series with a fourth post in which I return to purely theoretical and methodological matters.
History: Showing and Telling
As we all know, one of the major problems of literary studies up to now is that it has concentrated its attentions on a relatively small body of texts, the so-called canon, and has allowed the examination of those texts to stand as a proxy for all of literary history. The assumption is either that, because of their quality, those are the only texts that matter or, perhaps, their quality allows them to “stand in” for the rest. The widespread availability of powerful computers now allows us to put these assumptions to the test or, rather, simply to abandon them.
Sister disciplines, notably corpus linguistics, have developed techniques for analyzing large bodies of texts, and literary critics are applying these to newly available digital text collections. I want to examine one such study, Ryan Heuser and Long Le-Khac, A Quantitative Literary History of 2,958 Nineteenth-Century British Novels: The Semantic Cohort Method (Stanford Literary Lab, Pamphlet 4, May 2012; HERE is an older post on this study). Their corpus included almost 3000 British novels spanning the period from 1785 to 1900. What they discovered, roughly speaking, is a shift from abstract terms to concrete, which they characterize as a shift from telling (abstract terminology) to showing (concrete terms). They read this shift through Raymond Williams (The Country and the City) as reflecting a population shift from small, closely-knit rural communities to large urban communities where people are constantly amid strangers.
Here is how Heuser and Le-Khac characterize the texts toward the beginning of the period (p. 35):
Thinking in terms of the abstract values, the tight social spaces in the novels at the left of the spectrum are communities where values of conduct and social norms are central. Values like those encompassed by the abstract values fields organize the social structure, influence social position, and set the standards by which individuals are known and their behavior judged. Small, constrained social spaces can be thought of as what Raymond Williams calls “knowable communities,” a model of social organization typified in representations of country and village life, which offer readers “people and their relationships in essentially knowable and communicable ways” (Country 165). The knowable community is a sphere of face-to-face contacts “within which we can find and value the real substance of personal relationships” (Country 165). What’s important in this social space is the legibility of people, their relationships, and their positions within the community.
Toward the end of the period writers wrote and readers read texts that Heuser and Le-Khac characterize like this (p. 36):
If this is how the abstract values fields are linked to a specific kind of social space, then we can make sense of their decline over the century and across the spectrum. The observed movement to wider, less constrained social spaces means opening out to more variability of values and norms. A wider social space, a rapidly growing city for instance, encompasses more competing systems of value. This, combined with the sheer density of people, contributes to the feeling of the city’s unordered diversity and randomness. This multiplicity creates a messier, more ambiguous, and more complex landscape of social values, in effect, a less knowable community... The sense of a shared set of values and standards giving cohesion and legibility to this collective dissipates. So we can understand the decline of the abstract values fields—these clear systems of social values organized into neat polarizations—as a reflection of their inadequacy and obsolescence in the face of the radically new kind of society that novels were attempting to represent.
The upshot (p. 36): “Alienation, disconnection, dissolution—all are common reactions to the new experience of the city.”
I have no problems with this, as far as it goes. But, in light of Hays’s mechanism of abstraction, where abstract terms are defined over patterns of other terms, I want to suggest that something else might be going on. Perhaps all those concrete terms in the later novels are components of abstract patterns, patterns defining terms which may not even be named in the text (or elsewhere).
Tuesday, April 22, 2014
I have long known that my discipline is descended from philology, but I've never had a very firm sense of just what philology is other than the study of language with a historical emphasis. Language Log has taken up the quest. Mark Liberman, for whom philology means "an old term for linguistic analysis, and especially comparative and historical linguistics as applied to analyzing and understanding texts in dead languages such as Old English and Middle English", starts off with a post wondering just what Paul de Man meant by philology when he urged a return to philology in one of his late essays. Liberman cites a number of dictionary definitions of the term and his commenters have quite a bit to say on the matter.
He adds another post, based on remarks by one of his commenters, Omri Ceren, and himself notes "the supreme intellectual prestige of 'philology' in Europe through the middle of the 19th century, and to some extent until WW I." He closes with a publisher's blurb for James Turner, Philology: The Forgotten Origins of the Modern Humanities (2014):
Many today do not recognize the word, but "philology" was for centuries nearly synonymous with humanistic intellectual life, encompassing not only the study of Greek and Roman literature and the Bible but also all other studies of language and literature, as well as religion, history, culture, art, archaeology, and more. In short, philology was the queen of the human sciences. How did it become little more than an archaic word? In Philology, the first history of Western humanistic learning as a connected whole ever published in English, James Turner tells the fascinating, forgotten story of how the study of languages and texts led to the modern humanities and the modern university.
The redoubtable Victor Mair weighs in on Philology and Sinology, explaining that:
a Sinologist is a philologist who specializes on matters pertaining to China. To which [people] will generally ask, "Huh, what's that?" Whereupon I will say, "A philologist is someone who studies ancient texts for the purpose of understanding the languages and cultures of the times in which they were written."

I definitely think of myself as a philologist specializing in Sinology. Disciplines parallel to Sinology are Indology, Japanology, Semitology, and so forth. For the majority of scholars, these have now morphed into Indian Studies, Japanese Studies, Semitic Studies, and so on, but I'm old fashioned and still cling to the old ideals and old methods of Sinology, though happily assisted now by modern technology and techniques (computers, data bases, online resources, etc.).
I examined three different network visualizations (topic models, Moretti’s plot diagrams, and cognitive networks) in the first part of this essay, Discourse and Conceptual Topology. When I posted that I imagined only a second part. In the writing, though, that second part grew and grew, so I cut it in two.
In this part I pose the problem of time and discuss two essays by Stephen Greenblatt, “The Cultivation of Anxiety: King Lear and His Heirs” and “Psychoanalysis and Renaissance Culture,” and then compare Amleth (Saxo Grammaticus) with Hamlet (Shakespeare). I then move back to cognitive networks and talk about Hays’s concept of metalingual definition and conclude with more Shakespeare, Sonnet 129. I’ll get to Heuser and Le-Khac in Part 3: Prophesy.
Time and History
For physics, I understand, time presents a problem. It seems to have a direction, as some processes are irreversible. Why? If you drop a small quantity of ink into a tumbler of water – as I did in A Primer on Self-Organization: With some tabletop physics you can do at home – it diffuses, irreversibly so. The ink particles never collect together into the compact volume they had when first dropped into the water. Why?
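The statistical character of that irreversibility is easy to see in a toy model (my illustration, not part of the original tabletop demonstration): drop many "ink" particles at one point on a line and let each take an unbiased random walk. Each individual step is perfectly reversible, yet the cloud as a whole only ever spreads.

```python
import random

def diffuse(n_particles=500, n_steps=200, seed=42):
    """Drop all 'ink' particles at the origin of a 1-D line and let
    each take an unbiased random walk. Returns, for each time step,
    the mean squared distance of the cloud from the drop point."""
    rng = random.Random(seed)
    positions = [0] * n_particles
    spread = []
    for _ in range(n_steps):
        # Each particle steps left or right with equal probability.
        positions = [x + rng.choice((-1, 1)) for x in positions]
        spread.append(sum(x * x for x in positions) / n_particles)
    return spread

spread = diffuse()
# The spread grows roughly linearly with time and, with overwhelming
# probability, never shrinks back to zero: a statistical arrow of time.
```

No single particle knows which way time points; the direction emerges only in the aggregate, which is roughly the standard statistical-mechanical answer to the question the ink demonstration poses.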
Monday, April 21, 2014
Poets are the unacknowledged legislators of the world.
– Percy Bysshe Shelley
... it is precisely because we are talking about ordinary language that we need to adopt a notation as different from ordinary language as possible, to keep us from getting lost in confusion between the object of description and the means of description.
Worlds within worlds – that’s how Tim Perper, my friend and colleague, described biology. At the smallest scale we have individual molecules, with DNA being of prime importance. At the largest scale we have the earth as a whole, with all living beings interacting in a single ecosystem over billions of years. In between we have cells, tissues, and organs of various sizes, autonomous organisms, populations of organisms on various scales from the invisible to continent-spanning, and interactions among populations of organisms on various scales.
Literature too is like that, from single figures and tropes, even single words (think of Joyce’s portmanteaus) through complete works of various sizes, from haiku to oral epics, from short stories through multi-volume novels, onto whole bodies of literature circulating locally, regionally, across continents and between them, from weeks and years to centuries and millennia. Somehow we as humanists and literary critics must comprehend it all. Breathtaking, no?
In this essay I sketch a potential computational historicism operating at multiple scales, both in time and textual extent. In the first part I consider network models on three scales: 1) topic models at the macroscale, 2) Moretti’s plot networks at the mesoscale, and 3) cognitive networks, taken from computational linguistics, at the microscale. I give examples of each and conclude by sketching relationships among them. I open the second part by presenting an account of abstraction given by David Hays in the early 1970s; in this model abstract concepts are defined over stories. I then move on to Heuser and Le-Khac on 19th Century novels, Stephen Greenblatt on self and person, and consider several texts: Amleth, Hamlet, The Winter’s Tale, Wuthering Heights, and Heart of Darkness.
Graphs and Networks
To the mathematician the image below depicts a topological object called a graph. Civilians tend to call such objects networks. The nodes or vertices, as they are called, are connected by arcs or edges.
Such graphs can be used to represent many different kinds of phenomena: a road map is an obvious example, a kinship tree another, sentence structure a third. The point is that such graphs are signs of phenomena, notations. They are not the phenomena themselves.
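The road-map example can be made concrete in a few lines. The sketch below (the town names are hypothetical) represents a graph as a mapping from each node to the set of nodes it shares an edge with, the standard adjacency-list notation, and then asks a simple question of it: which towns can you drive to from a given starting point?

```python
# A road map as a graph: towns are nodes, roads are edges.
# The dict-of-sets below is the notation, not the roads themselves.
roads = {
    "Avon":   {"Barton", "Clee"},
    "Barton": {"Avon", "Dunly"},
    "Clee":   {"Avon", "Dunly"},
    "Dunly":  {"Barton", "Clee"},
}

def reachable(graph, start):
    """Every node connected to start by some sequence of edges."""
    seen, frontier = {start}, [start]
    while frontier:
        node = frontier.pop()
        for neighbor in graph[node]:
            if neighbor not in seen:
                seen.add(neighbor)
                frontier.append(neighbor)
    return seen
```

Exactly the same structure, with different labels on the nodes and edges, would serve for a kinship tree or a sentence parse, which is the point: the graph is a general-purpose notation.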
Sunday, April 20, 2014
Tom Friedman to Laszlo Bock, head of hiring at Google:
Are the liberal arts still important?
They are “phenomenally important,” he said, especially when you combine them with other disciplines. “Ten years ago behavioral economics was rarely referenced. But [then] you apply social science to economics and suddenly there’s this whole new field. I think a lot about how the most interesting things are happening at the intersection of two fields. To pursue that, you need expertise in both fields. You have to understand economics and psychology or statistics and physics [and] bring them together. You need some people who are holistic thinkers and have liberal arts backgrounds and some who are deep functional experts. Building that balance is hard, but that’s where you end up building great societies, great organizations.”
Scott Weingart reviews Manuel Lima, The Book of Trees: Visualizing Branches of Knowledge (Princeton, 2014):
Lima’s book is a history of hierarchical visualizations, most frequently as trees, and often representing branches of knowledge. He roots his narrative in trees themselves, describing how their symbolism has touched religions and cultures for millennia. The narrative weaves through Ancient Greece and Medieval Europe, makes a few stops outside of the West and winds its way to the present day. Subsequent chapters are divided into types of tree visualizations: figurative, vertical, horizontal, multidirectional, radial, hyperbolic, rectangular, Voronoi, circular, sunbursts, and icicles. Each chapter presents a chronological set of beautiful examples embodying that type.
Back in August 2011 I published a short document called Preview: The Key to the Treasure IS the Treasure, A Program for Literary Studies in the Current Era.
In December 2013, after I’d been working on Latour and pluralism for a while, I published a revised version in which I replaced Object Oriented Ontology from the first version with Ethical Criticism: The Key to the Treasure IS the Treasure.
No doubt there will be a third version, perhaps soon, or perhaps a little later. In any event I offer this diagram by way of making the obvious point that the divisions I imagine aren’t exclusive.
For those who haven’t read either of the earlier documents I note two things: 1) I give description separate billing because I believe that we MUST get better descriptive control over our materials. Description as I have come to understand it encompasses both somewhat revised approaches to “close” reading and “distant” reading. 2) Ethical criticism encompasses hermeneutics and critique while naturalist criticism can accommodate cognitive and evolutionary approaches.
I took more or less the first version of Key to the Treasure, added some philosophical reflection, and posted a working paper to SSRN in September 2012: Working Paper: Literary Criticism 21: Academic Literary Study in a Pluralist World. Here’s the abstract:
At the most abstract philosophical level the cosmos is best conceptualized as containing various Realms of Being interacting with one another. Each Realm contains a broad class of objects sharing the same general body of processes and laws. In such a conception the human world consists of many different Realms of Being, with more emerging as human cultures become more sophisticated and internally differentiated. Common Sense knowledge forms one Realm while Literary experience is another. Being immersed in a literary work is not at all the same as going about one's daily life. Formal Literary Criticism is yet another Realm, distinct from both Common Sense and Literary Experience. Literary Criticism is in the process of differentiating into two different Realms, that of Ethical Criticism, concerned with matters of value, and that of Naturalist Criticism, concerned with the objective study of psychological, social, and historical processes.
Saturday, April 19, 2014
Well, not directly. No one can do that, they’re too small. They’re so small that you can’t even see them with the most powerful light microscope. Their dimensions are less than the wavelengths of visible light. In effect, light misses them.
So you zap them with a tightly focused x-ray beam – much shorter wavelengths – and the beam scatters onto a photographic emulsion or, these days I guess, onto some microelectronic detector, and you get an image. Which looks like a smudge. But, from such smudges you can deduce what the thing must look like.
That’s where Geis came in. He took those deductions, which had been rendered as sketches of some sort, and turned them into useful and elegant images, images that make sense to the naked eye. These images are at once true to the molecule and to the eye, but they are also fictions. Because, as I said, the molecules are way too small to be visible themselves. So Geis had to come up with a plausible visualization.
Many others have painted molecules, but Geis was the first. You can find nice appreciations at L2Molecule and at Brain Pickings, which covers some of his other scientific visualizations. This Google query pulls up a bunch of images.
Here’s a passage about Geis from an article* I wrote on visual thinking:
In a personal interview, Geis indicated that, in studying a molecule's structure, he uses an exercise derived from his training as an architect. Instead of taking an imaginary walk through a building, the architectural exercise, he takes an imaginary walk through the molecule. This allows him to visualize the molecule from many points of view and to develop a kinesthetic sense, in addition to a visual sense, of the molecule's structure. Geis finds this kinesthetic sense so important that he has entertained the idea of building a huge model of a molecule, one large enough that people could enter it and move around, thereby gaining insight into its structure. Geis has pointed out that biochemists, as well as illustrators, must do this kind of thinking. To understand a molecule's functional structure biochemists will imagine various sight lines through the image they are examining. If they have a three-dimensional image on a CRT, they can direct the computer to display the molecule from various orientations. It is not enough to understand the molecule's shape from one point of view. In order intuitively to understand its three-dimensional shape one must be able to visualize the molecule from several points of view.
Think about that for a minute. In order to visualize a single tiny molecule, Geis used his entire body.
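The "display the molecule from various orientations" that the passage describes comes down to a simple operation on coordinates. Here is a minimal sketch (my own illustration, not from the article; the function name and the toy coordinates are invented for the example): rotating a set of atomic positions about an axis so the same structure can be viewed from a different direction.

```python
import math

def rotate_y(points, theta):
    """Rotate a list of (x, y, z) points about the y-axis by theta radians."""
    c, s = math.cos(theta), math.sin(theta)
    return [(c * x + s * z, y, -s * x + c * z) for x, y, z in points]

# A toy "molecule": three atoms strung along the x-axis.
atoms = [(1.0, 0.0, 0.0), (2.0, 0.0, 0.0), (3.0, 0.0, 0.0)]

# Viewed head-on the atoms spread across the screen; after a quarter
# turn about the vertical axis they sit along the viewer's line of
# sight instead, which is exactly the kind of change of viewpoint
# the passage says biochemists need.
quarter_turn = rotate_y(atoms, math.pi / 2)
```

A molecular graphics program does nothing conceptually different: it applies such a rotation to thousands of atom positions and redraws, letting the viewer build up the multi-viewpoint, quasi-kinesthetic sense of shape that Geis cultivated in imagination.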
* * * * *
* William Benzon. Visual Thinking. Allen Kent and James G. Williams, Eds. Encyclopedia of Computer Science and Technology. Volume 23, Supplement 8. New York; Basel: Marcel Dekker, Inc. (1990) 411-427.
Friday, April 18, 2014
I've collected five posts on Alan Liu into a single PDF. You can download it from my SSRN page: Remarks on Alan Liu and the Digital Humanities, A Working Paper. Abstract and introduction below.
* * * * *
Abstract: Alan Liu has been organizing and conceptualizing digital humanities (DH) for two decades. I consider a major essay, “The Meaning of the Digital Humanities,” two interviews, one with Katherine Hayles and the other with Scott Pound, and a major blog post in which Liu engages Stephen Ramsay. Other investigators included: Willard McCarty and Franco Moretti. Some of Liu’s themes: DH as symbolic of the future of the humanities, the need for theory as well as practical projects, the role of DH in enlarging the scope of the “thinkable,” the importance of an engineering mindset, and the need for a long-term effort in revivifying the humanities.
* * * * *
Computation has theoretical consequences—possibly, more than any other field of literary study. The time has come to make them explicit.
I first heard about Alan Liu back in the late 1990s, when he was working on Voice of the Shuttle. I may or may not have submitted some links, I don’t really remember, but if so, that would have been it. Since then I gather that he’s been acting as a Johnny Appleseed for what has come to be called digital humanities, an ambassador, or in the corporate jargon of Apple Inc., an evangelist.
But it wasn’t until early in 2012 that I started to focus on the so-called digital humanities (aka DH). To be sure, Matt Kirschenbaum showed up at The Valve (alas, now dormant) for the Moretti book event (Graphs, Maps, Trees) and, for that matter, Moretti himself put in a few appearances. I snagged a promising book reference from Kirschenbaum (Dominic Widdows, Geometry and Meaning), but for me that event was about Moretti, not DH. It took Stanley Fish to get me thinking about DH. He’d gone to the MLA convention, attended some DH sessions, and blogged about it in January, 2012: Mind Your P’s and B’s: The Digital Humanities and Interpretation. Of my posts tagged “digital humanities”, only a bit less than a quarter of them were written before Fish. The rest come after.
Sometime in the wake of Fish I came across anxiety within the DH community about the lack of Theory, with Alan Liu prominent among the worriers. Now I was irritated. On the one hand, it seems to me that Theory has lost most of its energy – for what it's worth, it was an examination of that morbidity that had attracted me to The Valve (the discussions of Theory’s Empire) in the summer of 2005. On the other hand, there’s a rich body of theory around computation, language, the mind, and evolutionary process (read: history) which is relevant, it seemed to me, to DH and yet which has been for the most part neglected. There is more to theorizing humanity than is dreamt of in Theory.
Finally, in March of this year I saw a video of Liu’s Meaning of the Humanities talk at NYU. I watched it, liked it, and contacted Alan. He responded by sending me a PDF of his PMLA article of the same title (“The Meaning of the Digital Humanities”, PMLA 128, 2013, 409-423). That prompted me to write the first of the blog posts I’ve collected here: Computer as Symbol and Model: On reading Alan Liu.
Thursday, April 17, 2014
I’m wondering how many digital humanists set out to do one thing and ended up realizing they were doing something else, something they don’t quite understand. Some texts...
* * *
We who have been working in the field know that the digital humanities can provide better resources for scholarship and better access to them. We know that in the process of designing and constructing these resources our collaborators often undergo significant growth in understanding of digital tools and methods, and that this sometimes, perhaps even in a significant majority of cases, fosters insight into the originating scholarly questions. Sometimes secular metanoia is not too strong a term to describe the experience.
Willard McCarty, A Telescope for the Mind?
* * *
The Gist: The only way the digital humanities are going to develop a cultural analytics that is sui generis is by thinking about the nature of computation itself in relation to human minds, as embodied in human brains, and as developing through interaction with other minds through various media and in groups of varying size and social structure. Otherwise the digital humanities will have no choice but to borrow its cultural concepts from other discourses, as it is now doing.
* * * * *
Let us start with some passages. First up, Alan Liu, from “Why I’m In It” x 2 – Antiphonal Response to Stephan Ramsay on Digital Humanities and Cultural Criticism (September 13, 2013):
The digital humanities can only take on their full importance when they are seen to serve the larger humanities (and arts, with affiliated social sciences) in helping them maintain their ability to contribute to the making of the full wealth of society, where “wealth” here has its older, classic sense of “well-being” or the good life woven together with the life of good.
Compare that with Willard McCarty, A telescope for the mind? (Debates in the Digital Humanities, ed. Matthew K. Gold. Minneapolis MN: University of Minnesota Press, 2012): “What can the digital humanities do for the humanities as a whole that helps these disciplines improve the well-being of us all?”
Back to Liu:
It seems to me that digital humanists can and should evolve a mode of cultural criticism that is uniquely their own and not a mere echo of fading humanist cultural criticism by treating their immediate objects of inquiry (academically-oriented technologies and methods) as always also “mediate objects of inquiry” bearing on the way the human beings they wish their students could become (and they themselves could be on their best days) can really engage meaningfully with larger social agents and forces....The goal is to do research, to teach, and to live as if humanities technology is constantly intertwined with, reacts to, and acts on the way the links are now being forged between individuals (starting with those in the academy where we teach and conduct research) and the social-economic-political-technological constitution of contemporary society.

What it comes down to is that the digital humanities need both to work on tools and methods in their own institutional place (the academy) and to develop a capable imagination of the relation of that unique institutional place (or family of variant institutional spaces) to the other major institutions that play a part in enabling or thwarting the passageway from private human subjectivity to public social sensibility.
It seems that Liu is imagining a cultural criticism centered on institutions and society, one that treats computers and minds as black boxes whose inner workings remain unexamined. In this practice it seems to me that the ideas about culture and society would likely come from already existing bodies of work.
Wednesday, April 16, 2014
From the NYTimes:
Graffiti in Athens, as in other cities the world over, has flourished for decades. But in a country where the adversity of wars and military dictatorship already has shaped the national psyche, the five-year economic collapse has spawned a new burst of creative energy that has turned Athens into a contemporary mecca for street art in Europe.

Denounced as thuggish vandalism by some observers, but hailed by others as artistic and innovative, tags, bubble letters and stylized paint work long have blanketed this city’s walls, trains, cars, banks, kiosks, crumbling buildings — and even some ruins of the Acropolis. But in the past several years, the anguish of the times has increasingly crept into the elaborate stencil work and multitude of large, colorful murals found all over the city, as Greece’s throngs of unemployed and underemployed young people have ample time to express their malaise.
And a dentist has turned to graff:
Recently, under cover of darkness, a Greek dentist whose business has been all but wiped out by the crisis reached into a tote bag and grabbed a can of spray paint and a stencil he had cut in his spare time using a cavity drill. Stopping at a crumbling wall, he quickly painted an image not typically associated with his profession: a masked man hurling a firebomb.

“The middle class and the working class in Greece have been ruined,” said the dentist, who goes by the street handle Mapet, declining to give his real name. “My goal is to deliver social and political counterpropaganda, and make people think.”
Athens School of Fine Arts has courses in street painting.
Tuesday, April 15, 2014
While I've got some more thinking to do about Gojira, I'm currently preoccupied with other things. So I've decided to take what I've got so far and wrap it up into a working paper. Here's the link to the Academia.edu page. Abstract and introduction are below.
Abstract: Gojira (1954) is a Japanese film with two intertwined plots: 1) a monster plot about a prehistoric beast angered by atomic testing, and 2) a love plot structured around a conflict between traditional arranged marriage and modern marriage by couple’s choice. The film exhibits ring-composition (A B C D C’ B’ A’) as a device linking the two apparently independent plots together. Nationalist sentiment plays an important role in that linkage. The paper ends with a 6-page table detailing the actions in the film from beginning to end.
Back when I threw in my lot with cognitivism in the early 1970s, I did so because I was excited by the idea of computation. That's what animated the early years of the “cognitive revolution.” But, by the 80s you could get on board with the cognitive revolution without really having to think about computation. Computation made the mind thinkable in a way it hadn't been in the Dark Ages of Behaviorism.
Once that had happened psychologists and others were happy to think about the mind and leave computation sitting off to the side. Among other things, that’s the land of cognitive metaphor, mirror neurons, and theory of mind.
By the mid-90s literary critics were getting interested in cognitive science, but with nary a hint of computation. A lot of cognitive criticism looks like old wine in new barrels. The same with most literary Darwinism. All that's new here are the tropes, the vocabulary.
As far as I can tell, digital criticism is the only game that's producing anything really new. All of a sudden charts and diagrams are central objects of thought. They're not mere illustrations; they're the ideas themselves. And some folks have all but begun to ask: What IS computation, anyhow? When a dyed-in-the-wool humanist asks that question, not out of romantic Luddite opposition, but in genuine interest and open-ended curiosity, THAT's going to lead somewhere.
If you think of a singularity as a moment where change is no longer moving away from the old, but (now has the possibility of) moving toward an as yet unknown something new, then that's where we are NOW.
Monday, April 14, 2014
I would like to continue the examination of fundamental presuppositions, conceptual matrices, which I began in The Fate of Reading and Theory. That post was concerned with how, in the context of academic literary criticism, 1) “reading” elides the distinction between (merely) reading some text – for enjoyment, edification, whatever – and writing up an interpretation of that text and 2) how “literary theory” became the use of theory in interpreting literary texts. This post is about the common sense association between computers and computing on the one hand and numbers and mathematics on the other.
* * * * *
Let’s start with a couple of sentences from one of the pamphlets published by Stanford’s Literary Lab, Ryan Heuser and Long Le-Khac, A Quantitative Literary History of 2,958 Nineteenth-Century British Novels: The Semantic Cohort Method (May 2012, 68 page PDF):
The general methodological problem of the digital humanities can be bluntly stated: How do we get from numbers to meaning? The objects being tracked, the evidence collected, the ways they’re analyzed—all of these are quantitative. How to move from this kind of evidence and object to qualitative arguments and insights about humanistic subjects—culture, literature, art, etc.—is not clear.
There we have it, numbers on the one hand and meaning on the other. It’s presented as a gulf which the digital humanities must somehow cross.
When I first read that pamphlet I most likely thought nothing of that statement. It states, after all, a commonplace notion. But when I read those words in the context of writing a post about Alan Liu’s essay, “The Meaning of the Digital Humanities” (PMLA 128, 2013, 409-423), I came up short. “That’s not quite right,” I said to myself, “it’s wrong to so casually identify computers and computing with numbers.”
* * * * *
Now let’s take a look at an essay by Kari Kraus, Conjectural Criticism: Computing Past and Future Texts (DHQ: Digital Humanities Quarterly, 2009, Volume 3 Number 4). Here’s her opening paragraph:
In an essay published in the Blackwell Companion to Digital Literary Studies, Stephen Ramsay argues that efforts to legitimate humanities computing within the larger discipline of literature have met with resistance because well-meaning advocates have tried too hard to brand their work as "scientific," a word whose positivistic associations conflict with traditional humanistic values of ambiguity, open-endedness, and indeterminacy [Ramsay 2007]. If, as Ramsay notes, the computer is perceived primarily as an instrument for quantizing, verifying, counting, and measuring, then what purpose does it serve in those disciplines committed to a view of knowledge that admits of no incorrigible truth somehow insulated from subjective interpretation and imaginative intervention [Ramsay 2007, 479–482]?
Sunday, April 13, 2014
An open letter to Alan Liu concerning the notion of a tabula rasa interpretation which he introduced, though not in his own person, in “The Meaning of the Digital Humanities” (PMLA 128, 2013, 409-423).
You know, in a way Stanley Fish anticipated the notion of a tabula rasa interpretation way back in his 1973 essay, “What Is Stylistics and Why Are They Saying Such Terrible Things About It?” (reprinted in Is There a Text in This Class?, which is where my page numbers come from). Fish takes on an article by the linguist Michael Halliday, remarking that Halliday has a considerable conceptual apparatus – an attribute of many modern linguistic theories, lots of categories and relationships, all tightly defined. After quoting a passage in which Halliday analyzes a single sentence from Through the Looking Glass, Fish remarks (p. 80):
When a text is run through Halliday’s machine, its parts are first dissembled, then labeled, and finally recombined in their original form. The procedure is a complicated one, and it requires many operations, but the critic who performs them has finally done nothing at all.
Now, though I am familiar with some of Halliday’s work, I’ve not yet read that particular essay. Still, Fish’s characterization seems fair, and would apply to many similar and even not-so-similar models. Note, however, that he frames Halliday’s essay as one of many lured on by “the promise of an automatic interpretive procedure” (p. 78).
That, it seems to me, is the tabula rasa interpretation which you see as the goal of at least some digital critics. To be sure, Halliday did his work manually, but by that time the computer was very much in the air. On the one hand, Chomsky’s linguistics was driven by the notion of an abstract computer, but also computer-based statistical stylistics was fairly well established and Fish also hacks away at some of that work.
But, as far as I can tell, and I’ve been thinking about this for a lllllloooonng time, with one odd exception, there is never going to be any such thing as an automatic interpretive procedure.
Saturday, April 12, 2014
Near the end of the previous millennium Francis Fukuyama declared history to be at an end. He did not, of course, mean that time had stopped or even that there would be no more change. He was speaking of history as conceptualized within a certain intellectual tradition. With the worldwide spread of liberal democracy, the driving forces of large-scale social change had come to equilibrium. History is no more.
More recently, within this the present millennium, Timothy Morton has declared an end, not merely to history, but to the world. He means this in a special sense, of course. As the blurb to his book, Hyperobjects (which I’ve not read), has it:
The world as we know it has already come to an end.

Having set global warming in irreversible motion, we are facing the possibility of ecological catastrophe. But the environmental emergency is also a crisis for our philosophical habits of thought, confronting us as it does with a problem that seems to defy not only our control but also our understanding.
By way of clarification there is this statement, with which Morton ends an anti-Lovelock blog post: “The end of the world qua neutral backdrop to human whatever has already occurred. This is the afterlife. I am already dead.”
The Anthropocene began in the coal-fired, iron-mongering, carbon-belching industrial age and has become known to us in the silicon-cradled, bit-crunching, internet-clothed information age. It’s not merely that computer technology is with us even as we know of global warming, but that we couldn’t know of global warming without that technology. For it is computer tech that runs the climate simulations that tell us, yes, our energy-hungry activities have changed the climate so much that, for example, the sea is beginning to consume Bangladesh:
When I saw it I wondered: What could that sign possibly mean? I don't believe that I've ever seen such a traffic sign before. But I didn't think much about it. I just snapped my picture and continued on. Now, however, in retrospect, the meaning is obvious: Stop sign ahead. The sign is at the top of a hill facing traffic that is about to go down the hill. Drivers are being warned to get prepared for the stop. However, if the red octagon actually had the word "stop" in it, then the sign would likely be interpreted as a stop sign. And that doesn't make any sense at all at that point.
Friday, April 11, 2014
Santo Fortunato published a letter in Nature pointing out that, in physics, the lag between a fundamental discovery and a Nobel Prize for that discovery is getting longer and longer. Science writer John Horgan comments:
A National Geographic editor asked the letter’s lead author, Santo Fortunato, what the trend meant, and he suggested that science is “scratching the bottom of the barrel in fundamental science” and “running out of fundamental discoveries.” That reminded the editor of my 1996 book The End of Science, so he invited me to riff on the Nature piece, which I did. The Nobel trend in physics, I argued, supports my book’s assertion that further research will yield “no more great revelations or revolutions, but only incremental, diminishing returns.”

The point seems plausible to me. On the other hand, we're just getting started in the human sciences, aka the humanities. Though we've still got to shake off the shackles of the 19th Century.
For the last several weeks I've been sitting-in at a Monday evening jam session at the Parkwood in Maplewood, NJ. The repertoire centers on "the Great American Songbook", as it's called. A vocalist named Jay likes to sit in as well, and he likes ballads. He's done "The Nearness of You" several times now. I've heard the song, of course, but it's not in my repertoire. However, after hearing Jay sing it two or three times I find it running around in my mind's ear without any prompting from me. It's just there.
It's about to enter my repertoire. To that end I've been listening to versions on Youtube. Here's one by Sinatra, whose repertoire all but defined the Great American Songbook:
Here's a version by Sarah Vaughan, also known as The Voice. Notice that she sings the opening verse, which is generally dropped by most performers:
Thursday, April 10, 2014
The Six-Shooter Marketplace: 19th-Century Gunfighting as Violence Expertise
Jonathan Obert (2014).
Studies in American Political Development, Volume 28, Issue 1, April 2014, pp. 49-79
Abstract: How are new forms of violence expertise organized and exploited? Most scholars view this as primarily a question of state-building; that is, violence experts use their skills in an attempt to regulate economic transactions or to extract and redistribute resources via protection rents either for themselves or at the behest of political elites. In an alternative view, this article demonstrates that historical gunfighters active in the late 19th-century American Southwest were actually market actors—the possessors of valuable skills cultivated through participation in the Civil War and diffused through gunfighting and reputation building in key market entrepôts. Neither solely state-builders nor state-resisters, as they have traditionally been interpreted, gunfighters composed a professional class that emerged in the 1870s and 1880s and who moved frequently between wage-paying jobs, seizing economic opportunities on both sides of the law and often serving at the behest of powerful economic, rather than political, actors. I establish this claim by examining a dataset of over 250 individuals active in the “gunfighting system” of the post-bellum West, demonstrating that the social connections forged through fighting, and diffused through social networks, helped generate a form of organized violence that helped bring “law and order” to the frontier but as a byproduct of market formation rather than as state-building.
The problem comes from the dirt that's on the window. It's not grimy dirt, and you'd hardly notice it if you were in the room looking at or through the window. But it's clearly visible in these photos.
So, what's the problem? Does the presence of dirt on the windows reduce the value of the photos, or eliminate it altogether? Is the presence of that dirt a good reason not to display the pictures? Of course, they're small on the screen (you could click through to my Flickr site and see larger versions). The dirt would be more prominent in 8 by 12 prints.
This is one part of a longer piece. That longer piece may not, however, actually get written. So I’m posting this now. In this post I treat word use – reading, theory – not as mere terminology, but as telltale clues about the nature of the underlying conceptual matrix (aka paradigm or episteme).
For as long as I can remember I’ve at least noticed, and have often been irritated by, the practice of literary critics to use the word “reading” to cover two somewhat different though related practices. There is the ordinary business of reading, familiar to any literate person. And there is more or less professionalized activity of creating written interpretations of literary and sometimes non-literary texts. There is no difficulty in using “interpret” in the second instance – and talking of an interpretation rather than a reading – and we do this often enough. But we don’t do so always and consistently.
It has always seemed to me that there is a bit of willfulness in using that one word, “reading”, in those two ways where that willfulness is insisting that the two things are the same, rather than one word being used in two different ways. But then Geoffrey Hartman, who seemed almost tortured by the strain of that insistence (cf. Geoffrey Hartman on Reading) did once draw a line in the sand (The Fate of Reading, 1975, p. 271):
I wonder, finally, whether the very concept of reading is not in jeopardy. Pedagogically, of course, we still respond to those who call for improved reading skills; but to describe most semiological or structural analyses of poetry as a "reading" extends the term almost beyond recognition.
He went on to observe, “modern ‘rithmatics’—semiotics, linguistics, and technical structuralism—are not the solution. They widen, if anything, the rift between reading and writing” (p. 272).
Quite so. But what would Hartman say to distant reading, which places an obtrusive computer-mediated apparatus between the critic and the text/s? Whatever it is, it is not reading in the sense over which Hartman agonized.
Now that I've immersed myself in thinking about digital criticism (aka machinic reading) it makes sense to make this post current. I've just made a few revisions to the Prospero doc, most minor, one a bit more substantial (a new paragraph on how Prospero changes in the course of operation). I've uploaded a thought experiment to Scribd. It's about a computer that can read literature and talk about it. It's called, naturally, Prospero. An abstract's below, followed by the introduction.
Abstract: Prospero is a thought experiment, a computer program powerful enough to simulate, in an interesting way, the reading of a literary text. To do that it must simulate a reader. Which reader? Prospero would also simulate literary criticism, and controversies among critics. The point of Prospero, if we could build it, is the knowledge required to build it. If we had it, we could examine its activities as it reads and comments on texts. But our knowledge of Prospero is of a different kind and order from our knowledge of the world and of life, though those things are central to literary texts. The point of this thought experiment is to clarify that difference, for that is what we will have to do to build a naturalist literary criticism grounded in the neuro-, cognitive, and evolutionary psychologies.
* * * * *
Back in the mid-1970s a journal then called Computers and the Humanities invited my teacher, David Hays, to write a review of recent work in computational linguistics. Hays was a natural for the job. He’d been involved in computational linguistics from the beginning, was then the editor of the leading journal in the field, American Journal of Computational Linguistics (AJCL), and had recently written about “Language and Interpersonal Relationships” for Dædalus.
Hays asked me to draft the article. I was his student at the time and, perhaps more directly germane to the task at hand, I was in charge of abstracting the current literature for AJCL. I was of course pleased and flattered to do so.
We began the article by defining computational linguistics and concluded it with a fantasy, a computer program so powerful that it was capable of reading Shakespeare texts in a way that was interesting but not human. We called it Prospero. It was a reasonable fantasy at the time. I figured that we might have such a Prospero system in twenty years. Hays knew better and refused to put dates on such fantasies.
Wednesday, April 9, 2014
The New York Times, How to Think Like the Dutch in a Post-Sandy World:
In the Netherlands, a man named Henk Ovink offered to be Donovan’s guide. Ovink was the director of the office of Spatial Planning and Water Management, meaning, essentially, that it was his job to keep the famously waterlogged country dry. As he learned about various Dutch innovations, Donovan was struck by the fact that Ovink looked at water as much in cultural as in engineering terms, which was a function of the centuries-old need of the Dutch to act together for protection.
Think like a community:
Beyond that, Ovink feared that politics might undermine any chance to encourage new thinking about water management. “When I mentioned climate change to one official,” he said, “she almost hit me.” He characterized some of the wishful thinking he believed he would be dealing with as: “Don’t hire a Dutchman — believe in angels.”
Dutch battles against water led his country to develop a communal society. To this day, Water Boards, which date to the Middle Ages, are a feature of every region, and they guide long-term infrastructural planning. American individualism, on the other hand, has yielded a system in which each municipality has a great deal of autonomy, making regional cooperation difficult.
Pacific Avenue in Jersey City, the day after Sandy