Tuesday, May 31, 2016
The Text: Intention from Textual to Interpretive Criticism
As some of you may know, there is such a thing as textual criticism, which is about the production of a definitive text, however that is understood, for literary works. The discipline exists because many, if not most, literary works exist in multiple versions: different published editions as well as author manuscripts. Quite often those different versions are not the same. Some differences may be relatively inconsequential, such as minor matters of usage or small errors of one sort or another, while others may be so substantial as to affect who did what to whom and when. How, then, does one select which version to include in the final text?
Textual criticism concerns not simply how one decides among variants, but just what the final text is to be. Often enough that text is understood to reflect the author’s intentions, whether conceived of as “final” or “best” or perhaps in some other way. Just how that intention is characterized is not my present concern. I’m simply interested in the fact that authorial intention is invoked.
But it is invoked in a somewhat different way from the use of authorial intention in interpretive criticism. The textual critic invokes authorial intention to decide among two, three, or more existing variants for a section of text. The interpretive critic starts with an existing text and attempts to decide on a meaning to ascribe to that text, where the meaning is drawn from a relatively open field of possibilities. The textual critic’s decision thus seems to me rather more tightly circumscribed than the interpretive critic’s.
In the decades after World War II interpretive criticism became more important in literary criticism in America, to the point where the discipline became centered on it and textual criticism was driven to the periphery, though not, of course, eliminated, because there is always a need for new editions. Moreover, as digital technology became widely and cheaply available to humanists, the preparation of digital editions gave new life to textual criticism. But that’s not what most interests me.
What interests me is the role that authorial intention plays in these two forms of criticism. What if anything did interpretive critics pick up from textual critics, and vice versa? I’m wondering if anyone has studied that.
* * * * *
Meanwhile, I’ve just become aware of the following book:
Amy E. Earhart, Traces of the Old, Uses of the New: The Emergence of Digital Literary Studies. Ann Arbor, MI: University of Michigan Press, 2015.
I’ve just blitzed through it over the weekend and I recommend it to anyone who is interested in the history of digital studies–which, as we know, is only one facet of digital humanities–or who is simply interested in what it is.
Thursday, May 26, 2016
William James on Music
Last year Kurt Newman went through William James's The Principles of Psychology, found all references to music, and gathered them into a single blog post. Here's one of those entries:
We learn to appreciate what is ours in all its details and shadings, whilst the goods of others appear to us in coarse outlines and rude averages. Here are some examples: A piece of music which one plays one’s self is heard and understood better than when it is played by another. We get more exactly all the details, penetrate more deeply into the musical thought. We may meanwhile perceive perfectly well that the other person is the better performer, and yet nevertheless – at times get more enjoyment from our own playing because it brings the melody and harmony so much nearer home to us. This case may almost be taken as typical for the other cases of self-love…
And this is also surely the reason why one’s own portrait or reflection in the mirror is so peculiarly interesting a thing to contemplate . . . not on account of any absolute “c’est moi,” but just as with the music played by ourselves. What greets our eyes is what we know best, most deeply understand; because we ourselves have felt it and lived through it. We know what has ploughed these furrows, deepened these shadows, blanched this hair; and other faces may be handsomer, but none can speak to us or interest us like this.
Radical DH and the Xenaverse
It's not at all clear to me just what "digital humanities" is or will become, but it's clear that some are worried that it is just another form of neoliberal oppression and won't become much of anything unless it figures out how to be radical in the face of all that tech. Me, I'm Old School and believe in capital "T" Truth, though constructing it is certainly a challenge and a trial. For reasons which I may or may not get around to explaining here on New Savanna, I'm inclined to think of all this throat clearing and hemming and hawing about radical DH as a form of territorial marking (I'm thinking of a recent LARB piece and its fallout). The purpose of this post is to suggest that the territory was radically marked before the DH term was coined.
Back in 1998 Christine Boese released her doctoral dissertation to the web: The Ballad of the Internet Nutball: Chaining Rhetorical Visions from the Margins of the Margins to the Mainstream in the Xenaverse. It's certainly one of the earliest hypertext dissertations, if not THE first – I simply don't know. I figure that qualifies it as DH. Here's the abstract:
This dissertation is a case study applying methods of rhetorical analysis and cultural critique to the burgeoning online phenomenon called the Xenaverse: the online spaces devoted to the cult following of the syndicated television program "Xena, Warrior Princess." My work combines two modes of inquiry: (1) Locating and capturing texts from multiple sites on the Internet known to be parts of the Xenaverse, and (2) Supplementing those texts with data generated from the limited use of the ethnographic tools of participant observation and informant interviews, both electronic and face to face. The primary focus of my analysis is on constructions of authority in cyberspace. I explore the constellations of social forces in cyberspace which have led to the success of a noncommercial, highly trafficked, dynamic culture or what is sometimes called a "community." The strengths and weaknesses of this online "community" are examined in terms of the ideals of radical democracy, using Fantasy-Theme rhetorical analysis. This research examines how the rhetorical visions of this culture are used to write the narratives of its ongoing existence, in a way that is increasingly independent of the dominant narratives of the television program itself. Using the relevance of an insider's point of view and taking a case which implies successful democratic social resistance to diverse hegemonic forces, I look beyond the Xenaverse and consider the strength of the frequently-cited claim that the medium of cyberspace is intrinsically democratizing. Considering that claim critically, I suggest democracy both is and is not being enhanced online.
Tuesday, May 24, 2016
Emotional Contagion
The idea that emotions can spread from person to person is not new. But recent research is starting to uncover the physiological mechanisms behind such “emotional contagion.” A study published this month (May 9) in Psychological Science, for example, showed that infants dilate or contract their pupils in response to depictions of eyes with the corresponding state, suggesting that emotional contagion may develop early in life. A 2014 study found that mothers could pass emotional stress on to their babies in a largely unconscious way. Together, the findings add to a growing body of research revealing the role of this phenomenon in human interactions.

“One of the most important things as a human species is to communicate effectively,” said Garriy Shteynberg, a psychologist at the University of Tennessee, Knoxville, who has shown that emotional contagion is enhanced in group settings. In order to do that, “we need cognitive mechanisms that give us a lot of common background knowledge,” Shteynberg told The Scientist.
Our old friends synchrony and the default network:
The most popular model, developed by social psychologist Elaine Hatfield and colleagues, suggests that people tend to synchronize their emotional expressions with those of others, which leads them to internalize those states. This suggests, for example, that the act of smiling can make a person feel happiness.

As to what may be going on in the brain when this happens, some research suggests that emotional contagion may engage the default mode network—a set of brain circuits that are active when an individual is not engaged in any particular task, but may be thinking about his or herself or others, noted Richard Boyatzis of Case Western Reserve University. When this network is activated, a person may be picking up on emotional cues from others, he told The Scientist. And “the speed at which you pick it up is probably the most important issue going on,” as it suggests that this process is largely unconscious, Boyatzis said.
Students of literary culture, and broadcast media, take note. I'm particularly interested in the case of storytelling in preliterate cultures, which is, after all, the default situation for human storytelling. Here the stories are well known and people absorb them in the company of others. That's very different from reading a book in the privacy of one's home.
Sunday, May 22, 2016
Chomsky, Hockett, Behaviorism and Statistics in Linguistic Theory
Here's an interesting (and recent) article that speaks to statistical thought in linguistics: The Unmaking of a Modern Synthesis: Noam Chomsky, Charles Hockett, and the Politics of Behaviorism, 1955–1965, by Gregory Radick (abstract below). Commenting on it at Dan Everett's Facebook page, Yorick Wilks observed: "It is a nice irony that statistical grammars, in the spirit of Hockett at least, have turned out to be the only ones that do effective parsing of sentences by computer."
Abstract: A familiar story about mid-twentieth-century American psychology tells of the abandonment of behaviorism for cognitive science. Between these two, however, lay a scientific borderland, muddy and much traveled. This essay relocates the origins of the Chomskyan program in linguistics there. Following his introduction of transformational generative grammar, Noam Chomsky (b. 1928) mounted a highly publicized attack on behaviorist psychology. Yet when he first developed that approach to grammar, he was a defender of behaviorism. His antibehaviorism emerged only in the course of what became a systematic repudiation of the work of the Cornell linguist C. F. Hockett (1916–2000). In the name of the positivist Unity of Science movement, Hockett had synthesized an approach to grammar based on statistical communication theory; a behaviorist view of language acquisition in children as a process of association and analogy; and an interest in uncovering the Darwinian origins of language. In criticizing Hockett on grammar, Chomsky came to engage gradually and critically with the whole Hockettian synthesis. Situating Chomsky thus within his own disciplinary matrix suggests lessons for students of disciplinary politics generally and—famously with Chomsky—the place of political discipline within a scientific life.
Street art museum in Berlin
Renovation is not complete. Here's a sense of what the museum will be like when it is:
Tim Renner, undersecretary of state for culture in Berlin is also happy about the plans for the museum: “The whole project is lunatic, almost insane”, he says and laughs. “And that’s why it matches Berlin”.
Friday, May 20, 2016
Friday Fotos: Graffiti Details
As the title indicates, these are details from various works of graffiti. Further, they are so relatively small in scale that you've got little sense of the larger work. There's no way you can tell that the last detail, that black square on a white background, is from a Jerkface image of Sylvester the Cat. Nor is it in the least bit obvious that the center target is from a Sonet piece in the Bergen Arches (now two or three layers down). Still, if you look closely, and not even all that closely, there are things to notice: the different surfaces, for example, and of course the colors. And isn't it obvious that the second image shows some paint that has been splashed on rather than sprayed from a can? The first one centers on a silver square (with some white overlay at the top). Silver has a special place in the graffiti ecology, with a whole class of images known as "silvers" because their predominant paint is silver (generally with a black outline). And in the fourth one, notice the peeled paint? How many layers are on that surface?
Thursday, May 19, 2016
Who put “The Terminator” in “Digital Humanities”?
In thinking about the current discussions provoked by the LARB takedown of digital humanities I keep coming back to the term itself: “digital humanities.” It yokes together the opposite poles of a binary that’s plagued us since Descartes: man and machine, the human and the mechanical. I suppose that’s part of its attraction, but also why it can be so toxic.
In these discussions the term itself seems almost to float free of any connection to actual intellectual practice. Given the impossible scope of the term – see, e.g., the Wikipedia article on DH – its use tends to collapse onto the binary itself. And if one is skeptical of whatever it is, then the “digital” term becomes those nasty machines that have been threatening us for centuries, most visibly in movies from Lang’s Metropolis through the Terminator series and beyond.
I certainly count myself among those who find the term somewhere between problematic and useless. More usefully, I think the many developments falling within the scope of that term are best seen in terms of a wide variety of discourses emerging after WWII. As a useful touchstone, check out a fascinating article: Bernard Dionysius Geoghegan, “From Information Theory to French Theory: Jakobson, Lévi-Strauss, and the Cybernetic Apparatus,” Critical Inquiry, Fall 2011. He looks at the period during and immediately after World War II when Jakobson, Lévi-Strauss and Lacan sojourned in NYC and picked up ideas about information theory and cybernetics from American thinkers at MIT and Bell Labs. You should be able to get a copy from Geoghegan's publications page.
Yet another response to LARB on DH: A plague on all your houses!
Allington, Brouillette, and Golumbia (ABG) on digital humanities continues reverberating through the online hive mind. It's not clear to me that Justin Evans, in "System Reboot," knows much, if anything, about DH beyond what ABG mention in their article. He's focused entirely on literary studies:
Anyone who cares about the humanities needs to pay close attention to this argument, because DH undeniably harms the study of literature in the university.
However, after that note of agreement, he goes a step further and ends up here:
If English department politics isn’t a target for neoliberal elites, why has English been so vulnerable to the digital humanities? Perhaps it is because, since English scholars turned their attention to the politically progressive humanities, the English department has lacked a subject matter of its own. It has become the home of ersatz social science, ersatz science, ersatz philosophy—and now, ersatz computer science. The English department is not a victim of some kind of neoliberal conspiracy. It is just looking for something to talk about.

From this perspective, the digital and the politically progressive humanities [aka PPH] are more like squabbling cousins than opposing forces. Allington et al. write that DH “was born from disdain and at times outright contempt, not just for humanities scholarship, but for the standards, procedures and claims of leading literary scholars”—just as the politically progressive humanities were born from the rejection of New Criticism. DH tries to “redefine what had formerly been classified as support functions for the humanities as the very model of good humanities scholarship”; similarly, PPH redefined the literary scholar’s legitimate helpmates (philosophy, history, science) as “the very model” of literary scholarship. And finally, most importantly, neither DH nor PPH can give students a reason to study literature rather than, say, linguistics or sociology or neuroscience.
Scholars who are serious about saving the English department from the digital humanities need to acknowledge that it needs to be saved from the politically progressive humanities, too.
Whoops! So much for ABG! This puts me in mind of Alex Reid's post, Otters' Noses, Digital Humanities, Political Progress, and Splitters, which invokes a scene from Life of Brian:
Here's his gloss on the scene:
Of course the whole point of it is to satirize the divisive nature of political progressivism. Apparently it pointed specifically at the leftist politics of England at the time, but really this kind of stuff has a timeless quality to it.
Wednesday, May 18, 2016
Godzilla Rises in Japan, for the first time in 12 years
The Godzilla franchise is one of the most prolific in the movie biz. Writing in The New Yorker, Matt Alt reports that the Japanese are releasing their first Godzilla film in 12 years.
The 1954 “Gojira” resonated so deeply with its audience because of the painfully fresh memories of entire Tokyo city blocks levelled by Allied firebombings less than a decade before. In the preview for “Godzilla Resurgence,” you can see how the directors are again mining the collective memories of Japanese viewers for dramatic effect. But their touchstones are no longer incendiary and nuclear bombs. Instead they are the 2011 earthquake and tsunami, which killed close to twenty thousand people and introduced Japan to a new form of nuclear horror caused by out-of-control civilian reactors. Indeed, the now fiery-complexioned Godzilla seems to be a walking nuclear power plant on the brink of melting down. For anyone who lived in Japan through the trying days of late March of 2011, the sight of blue-jumpsuited government spokesmen convening emergency press conferences is enough to send a chill down one’s spine. So is the shot in the trailer of a stunned man quietly regarding mountains of debris, something that could have been lifted straight out of television footage of the hardest-hit regions up north. Even the sight of the radioactive monster’s massive tail swishing over residential streets evokes memories of the fallout sent wafting over towns and cities in the course of Fukushima Daiichi’s meltdown.

It’s an open question as to how foreign audiences will perceive the subtext of these scenes, or if they even really need to. Equally so for the shots of the Japanese military’s tanks, aircraft, cruisers, and howitzers engaging the giant monster, which at first glance might simply appear a homage to a classic movie trope. But “Godzilla Resurgence” appears at a time in Japanese history much changed from that of even its most recent predecessor. In the last twelve years, the Self-Defense Forces have gone from little more than an afterthought to folk heroes for their role in 2011 tsunami rescue efforts, and now to the center of a controversy as Prime Minister Shinzo Abe pushes through legislation to expand their role abroad. Traditionally, Japanese monster movies have used images of military hardware hurled indiscriminately against a much larger foe as a potent symbol of the nation’s own disastrous wartime experience. But the regional political situation for Japan is far more complicated now. The fight scenes hint at changing attitudes toward the military in Japan.
Here's a trailer:
The brain is not a computer
Robert Epstein in Aeon:
But here is what we are not born with: information, data, rules, software, knowledge, lexicons, representations, algorithms, programs, models, memories, images, processors, subroutines, encoders, decoders, symbols, or buffers – design elements that allow digital computers to behave somewhat intelligently. Not only are we not born with such things, we also don’t develop them – ever.

We don’t store words or the rules that tell us how to manipulate them. We don’t create representations of visual stimuli, store them in a short-term memory buffer, and then transfer the representation into a long-term memory device. We don’t retrieve information or images or words from memory registers. Computers do all of these things, but organisms do not.
See this post from 2010 where I quote Sydney Lamb making the same point. Computers, in contrast, really are rather like that:
Forgive me for this introduction to computing, but I need to be clear: computers really do operate on symbolic representations of the world. They really store and retrieve. They really process. They really have physical memories. They really are guided in everything they do, without exception, by algorithms.
Anti-representationalism:
My favourite example of the dramatic difference between the IP perspective and what some now call the ‘anti-representational’ view of human functioning involves two different ways of explaining how a baseball player manages to catch a fly ball – beautifully explicated by Michael McBeath, now at Arizona State University, and his colleagues in a 1995 paper in Science. The IP perspective requires the player to formulate an estimate of various initial conditions of the ball’s flight – the force of the impact, the angle of the trajectory, that kind of thing – then to create and analyse an internal model of the path along which the ball will likely move, then to use that model to guide and adjust motor movements continuously in time in order to intercept the ball.

That is all well and good if we functioned as computers do, but McBeath and his colleagues gave a simpler account: to catch the ball, the player simply needs to keep moving in a way that keeps the ball in a constant visual relationship with respect to home plate and the surrounding scenery (technically, in a ‘linear optical trajectory’). This might sound complicated, but it is actually incredibly simple, and completely free of computations, representations and algorithms.
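Out of curiosity, here's a toy numerical sketch of the McBeath account (my own illustration, not code from the paper). The key fact: the tangent of the ball's elevation angle rises at a constant rate only when watched from the spot where the ball will land, so a fielder who moves so as to keep that optical rise steady arrives at the catch point without ever modeling the flight.

```python
# Toy sketch (not McBeath's code): sample d(tan elevation)/dt for observers
# standing short of, at, and beyond a fly ball's landing point. Only the
# observer at the landing point sees a constant rate, so nulling optical
# acceleration suffices to catch the ball; no trajectory model is needed.
import math

G = 9.8  # gravity, m/s^2

def optical_rates(observer_x, v0=25.0, launch_deg=50.0, samples=5):
    """Rate of change of tan(elevation angle) at several instants."""
    a = math.radians(launch_deg)
    vx, vy = v0 * math.cos(a), v0 * math.sin(a)
    flight = 2 * vy / G                              # total time aloft
    rates = []
    for i in range(1, samples + 1):
        t = flight * i / (samples + 1)
        bx, by = vx * t, vy * t - 0.5 * G * t * t    # ball position
        dbx, dby = vx, vy - G * t                    # ball velocity
        gap = observer_x - bx
        # quotient rule on tan(theta) = by / gap
        rates.append(round((dby * gap + by * dbx) / gap ** 2, 3))
    return rates

landing = 2 * 25.0 ** 2 * math.sin(math.radians(50)) * math.cos(math.radians(50)) / G
for x in (55.0, landing, 75.0):
    print(f"observer at {x:5.1f} m: {optical_rates(x)}")
```

Run it and the middle observer's rates hold steady at about 0.30 while the other two drift, which is all the information a moving fielder needs.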
Tuesday, May 17, 2016
SSRN sold to Elsevier
SSRN (Social Science Research Network) is a text repository that's been around for over a decade. FWIW I joined up in November of 2009 and have over 90 documents there. It's now been bought out by Elsevier, the giant academic publishing conglomerate. On the one hand, that's neither here nor there; it's just another bit of business.
I mention it, however, in view of recent discussions about Academia.edu, e.g. On Staying With Academia.edu and Open Letter to Rosemary Feal, Kathleen Fitzpatrick, and the Modern Language Association (both at Academia.edu). Much of that discussion was about whether or not one should entrust one's scholarship to a profit-making enterprise that nonetheless facilitates getting one's ideas out and across disciplinary boundaries, versus the (anemic) efforts of traditional disciplines (e.g. the MLA Commons). SSRN's Founder and Chairman, Michael C. Jensen, has sent a letter to authors using SSRN assuring us that things will be OK:
We realize that this change may create some concerns about the intentions of a legacy publisher acquiring an open-access working paper repository. I shared this concern. But after much discussion about this matter and others in determining if Mendeley and Elsevier would be a good home for SSRN, I am convinced that they would be good stewards of our mission. And our copyright policies are not in conflict -- our policy has always been to host only papers that do not infringe on copyrights. I expect we will have some conflicts as we align our interests, but I believe those will be surmountable.
We shall see.
See also: Cory Doctorow, Elsevier Buys SSRN, Boing Boing; Mike Masnick, Disappointing: Elsevier Buys Open Access Academic Pre-Publisher SSRN, Techdirt; Paul Gowder, SSRN has been captured by the enemy of open knowledge.
For the Historical Record: Cog Sci and Lit Theory, A Chronology
Another reprint from The Valve, though slightly revised. I'm bumping this to the top of the queue in view of this interesting post in which Scott Enderle meditates on loose conjunctions between perceptrons, post-structuralist thought, and data mining.
Back in the ancient days of the Theory's Empire event at The Valve I contributed a comment paralleling the rise of Theory with that of cognitive science. That parallel seems, at least to me, of general interest. So I decided to dig it out from that conversation and present it here, in lightly edited form. The parallel I present does not reflect extensive scholarship on my part, no digging in the historical archives, etc. Rather, it is an off-the-top-of-my-head sketch of the multifaceted intellectual milieu in which I have lived much of my intellectual life.
I take 1957 as a basic reference point. That's when Northrop Frye published his Anatomy and that's when Noam Chomsky published Syntactic Structures. 1957 is also when the Russians launched Sputnik, the first artificial satellite to circle the globe. The Cold War was in full swing at that time and Sputnik triggered off a deep wave of tech anxiety and tech envy in America. One consequence was more federal money going into the university system and a move to get more high school students into college. So we see an expansion of college and university enrollments through the 60s and an expansion of the professoriate to accommodate it. Cognitive science (especially its AI side) and, perhaps to a lesser extent, Theory rode in on this wave. By the time the federal money began contracting in the early 70s an initial generation of cognitivists and Theorists was becoming tenured in, and others were in the graduate school and junior faculty pipeline. Of course, the colleges and universities couldn't simply halt the expansion once the money began to dry up. These things have inertia.
We may take cognitive science for granted now, but the fact is that there are precious few cognitive science departments. There are some, but mostly we've got interdisciplinary programs pulling faculty from various departments. These programs grant PhDs by proxy; you get your degree in a traditional department but are entitled to wear a cog sci gold seal on your forehead. As Jerry Fodor remarked somewhere (I forget where) in the last year or three, most cognitive psychologists don't practice cognitive science. They do something else, something that most likely was in place before cognitive science came on the scene.
Sunday, May 15, 2016
The navigational skills of dung beetles
Dung beetles record a mental image of the positions of the Sun, the Moon and the stars and use the snapshot to navigate, according to researchers.
Scientists in Sweden found that the beetles capture the picture of the sky while dancing on a ball of manure.
As they roll away with their malodorous prize, the beetles compare the stored image with their current location.
* * * * *
A Snapshot-Based Mechanism for Celestial Orientation
Current Biology
Basil el Jundi, James J. Foster, Lana Khaldy, Marcus J. Byrne, Marie Dacke, Emily Baird
Publication stage: In Press, Corrected Proof
Summary
In order to protect their food from competitors, ball-rolling dung beetles detach a piece of dung from a pile, shape it into a ball, and roll it away along a straight path [1]. They appear to rely exclusively on celestial compass cues to maintain their bearing [2–8], but the mechanism that enables them to use these cues for orientation remains unknown. Here, we describe the orientation strategy that allows dung beetles to use celestial cues in a dynamic fashion. We tested the underlying orientation mechanism by presenting beetles with a combination of simulated celestial cues (sun, polarized light, and spectral cues). We show that these animals do not rely on an innate prediction of the natural geographical relationship between celestial cues, as other navigating insects seem to [9, 10]. Instead, they appear to form an internal representation of the prevailing celestial scene, a “celestial snapshot,” even if that scene represents a physical impossibility for the real sky. We also find that the beetles are able to maintain their bearing with respect to the presented cues only if the cues are visible when the snapshot is taken. This happens during the “dance,” a behavior in which the beetle climbs on top of its ball and rotates about its vertical axis [11]. This strategy for reading celestial signals is a simple but efficient mechanism for straight-line orientation.
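The proposed mechanism is simple enough to sketch in a few lines of code. What follows is purely my gloss on the abstract, not the authors' model: store the azimuths of whatever cues are visible during the dance, then steer by the average angular offset between the stored snapshot and the current view.

```python
# Toy sketch of snapshot-based orientation (my gloss on the abstract, not the
# authors' model): record cue azimuths during the "dance," then steer by
# comparing the current view against that stored snapshot.
import math

def take_snapshot(cues):
    """Store the azimuth (radians) of each cue visible during the dance."""
    return dict(cues)

def heading_offset(snapshot, current_view):
    """Mean signed angular difference between current and stored cues.
    A cue absent from the snapshot contributes nothing, which matches the
    finding that beetles hold a bearing only to cues that were visible
    when the snapshot was taken."""
    diffs = [
        math.atan2(math.sin(current_view[c] - snapshot[c]),
                   math.cos(current_view[c] - snapshot[c]))
        for c in current_view if c in snapshot
    ]
    return sum(diffs) / len(diffs) if diffs else 0.0

dance_view = take_snapshot({"sun": 0.30, "polarization": 1.87})
later_view = {"sun": 0.90, "polarization": 2.47}  # whole sky rotated 0.6 rad
print(f"turn by {heading_offset(dance_view, later_view):.2f} rad to restore bearing")
```

Note that nothing here encodes the geographical relationship between sun and polarization pattern; the snapshot would work just as well for a physically impossible sky, which is what the experiments found.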
Saturday, May 14, 2016
Google opens a powerful parser to the world
From Google's research blog:
Announcing SyntaxNet: The World’s Most Accurate Parser Goes Open Source
Thursday, May 12, 2016
Posted by Slav Petrov, Senior Staff Research Scientist
At Google, we spend a lot of time thinking about how computer systems can read and understand human language in order to process it in intelligent ways. Today, we are excited to share the fruits of our research with the broader community by releasing SyntaxNet, an open-source neural network framework implemented in TensorFlow that provides a foundation for Natural Language Understanding (NLU) systems. Our release includes all the code needed to train new SyntaxNet models on your own data, as well as Parsey McParseface, an English parser that we have trained for you and that you can use to analyze English text.
Parsey McParseface is built on powerful machine learning algorithms that learn to analyze the linguistic structure of language, and that can explain the functional role of each word in a given sentence. Because Parsey McParseface is the most accurate such model in the world, we hope that it will be useful to developers and researchers interested in automatic extraction of information, translation, and other core applications of NLU.
* * * * *
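For the curious, here's roughly what invoking the pretrained model looks like from Python. This is a hedged sketch: it assumes a local build of SyntaxNet from the tensorflow/models repository, and it treats syntaxnet/demo.sh, the entry point shown in the project's README, as the way to pipe text through Parsey McParseface.

```python
# Minimal sketch, assuming a local SyntaxNet build where syntaxnet/demo.sh
# (the README's example script) reads sentences on stdin and writes a
# CoNLL-style dependency analysis to stdout.
import subprocess

def parse(sentence: str) -> str:
    result = subprocess.run(
        ["syntaxnet/demo.sh"],
        input=sentence.encode("utf-8"),
        stdout=subprocess.PIPE,
        check=True,
    )
    return result.stdout.decode("utf-8")

print(parse("Bob brought the pizza to Alice."))
```

The output assigns each word a part of speech and a labeled dependency on its head, which is what "explain the functional role of each word" amounts to in practice.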
There's more at the blog, of course. Mark Liberman at Language Log will be posting about it in due course.
Wednesday, May 11, 2016
Hiroshima
President Obama will be the first sitting American President to visit Hiroshima, a visit that carries heavy symbolic weight (see, e.g., this article in the NYTimes). In August of 1946 John Hersey wrote an article for The New Yorker in which he followed the lives of six residents of Hiroshima, five Japanese and one German (a priest), who survived the bombing. One of these was the Reverend Mr. Kiyoshi Tanimoto, pastor of the Hiroshima Methodist Church. Here's a paragraph about him:
He thought he would skirt the fire, to the left. He ran back to Kannon Bridge and followed for a distance one of the rivers. He tried several cross streets, but all were blocked, so he turned far left and ran out to Yokogawa, a station on a railroad line that detoured the city in a wide semicircle, and he followed the rails until he came to a burning train. So impressed was he by this time by the extent of the damage that he ran north two miles to Gion, a suburb in the foothills. All the way, he overtook dreadfully burned and lacerated people, and in his guilt he turned to right and left as he hurried and said to some of them, “Excuse me for having no burden like yours.” Near Gion, he began to meet country people going toward the city to help, and when they saw him, several exclaimed, “Look! There is one who is not wounded.” At Gion, he bore toward the right bank of the main river, the Ota, and ran down it until he reached fire again. There was no fire on the other side of the river, so he threw off his shirt and shoes and plunged into it. In midstream, where the current was fairly strong, exhaustion and fear finally caught up with him—he had run nearly seven miles—and he became limp and drifted in the water. He prayed, “Please, God, help me to cross. It would be nonsense for me to be drowned when I am the only uninjured one.” He managed a few more strokes and fetched up on a spit downstream.
Why do philosophy departments pretend that Western philosophy is all there is?
Jay L. Garfield and Bryan W. Van Norden in The Stone, in the NYTimes:
The vast majority of philosophy departments in the United States offer courses only on philosophy derived from Europe and the English-speaking world. For example, of the 118 doctoral programs in philosophy in the United States and Canada, only 10 percent have a specialist in Chinese philosophy as part of their regular faculty. Most philosophy departments also offer no courses on Africana, Indian, Islamic, Jewish, Latin American, Native American or other non-European traditions. Indeed, of the top 50 philosophy doctoral programs in the English-speaking world, only 15 percent have any regular faculty members who teach any non-Western philosophy.

Given the importance of non-European traditions in both the history of world philosophy and in the contemporary world, and given the increasing numbers of students in our colleges and universities from non-European backgrounds, this is astonishing. No other humanities discipline demonstrates this systematic neglect of most of the civilizations in its domain. The present situation is hard to justify morally, politically, epistemically or as good educational and research training practice.
This is well known and the authors, along with others, have spent decades trying to get American philosophy departments to broaden their range, to little avail. There's a simple fix:
Instead, we ask those who sincerely believe that it does make sense to organize our discipline entirely around European and American figures and texts to pursue this agenda with honesty and openness. We therefore suggest that any department that regularly offers courses only on Western philosophy should rename itself “Department of European and American Philosophy.” This simple change would make the domain and mission of these departments clear, and would signal their true intellectual commitments to students and colleagues. We see no justification for resisting this minor rebranding (though we welcome opposing views in the comments section to this article), particularly for those who endorse, implicitly or explicitly, this Eurocentric orientation.
Alas, no one's buying it. Why not?
Chess is for the young
Although it scarcely occurred to me at the time, my daughter and I were embarking on a sort of cognitive experiment. We were two novices, attempting to learn a new skill, essentially beginning from the same point but separated by some four decades of life. I had been the expert to that point in her life—in knowing what words meant, or how to ride a bike—but now we were on curiously equal footing. Or so I thought.

I began to regularly play online, do puzzles, and even leafed through books like Bent Larsen’s Best Games. I seemed to be doing better with the game, if only because I was more serious about it. When we played, she would sometimes flag in her concentration, and to keep her spirits up, I would commit disastrous blunders. In the context of the larger chess world, I was a patzer—a hopelessly bumbling novice—but around my house, at least, I felt like a benevolently sage elder statesman.

And then my daughter began beating me.
Young brains are different from old brains: they are less fully formed, with more synapses.
Chess—which has been dubbed the “fruit fly” of cognitive psychology—seems a tool that is purpose-built to show the deficits of an aging brain. The psychologist Timothy Salthouse has noted that cognitive tests on speed, reasoning, and memory show age-related declines that are “fairly large,” “linear,” and, most alarming to me, “clearly apparent before age 50.” And there are clear consequences on the chessboard. In one study, Charness had players of a variety of skills try and assess when a check was threatened in a match. The more skilled the player, the quicker they were able to do this, as if it were a perceptual judgment—essentially by pattern recognition stored up from previous matches. But no matter what the skill, the older a player was, the slower they were to spot the threat of a check.
Meanwhile:
Back at the board, there seemed to be plenty of chaos. For one, my daughter tended to gaily hum as she contemplated her moves. Strictly Verboten in a tournament setting, but I did not want to let her think it was affecting me—and it certainly wasn’t as bad as the frenetic trash talking of Washington Square Park chess hustlers. It was the sense of effortlessness that got to me. Where I would carefully ponder the board, she would sweep in with lightning moves. Where I would carefully stick to the scripts I had been taught—“a knight on the rim is dim”—she seemed to be making things up. After what seemed a particularly disastrous move, I would try to play coach for a moment, and ask: Are you sure that’s what you want to do? She would shrug. I would feel a momentary shiver of pity and frustration; “it’s not sticking,” I would think. And then she would deliver some punishing pin on the Queen, or a deft back rank attack I had somehow overlooked. When I made a move, she would often crow: “I knew you were going to do that.”
Monday, May 9, 2016
Nowviskie on why DH is not a neoliberal con job
Melissa Dinsman interviews Bethany Nowviskie in LARB. Here's one bit:
So how can we reconcile the digital humanities benefit outside the academy with critiques of the field that connect the emergence of DH to the increased “neoliberalization and corporatization of higher education”? (I am quoting media scholar Richard Grusin here from his C21 post “The Dark Side of the Digital Humanities.”) Do you think such a comparison has merit? Is there something about the digital humanities’s desire to produce that creates an alignment with neoliberal thinking?

Well, sure, DH is complicit in the modern university in the same way that every other practice and part of the humanities that succumbs to certain logics and economies of production and consumption is complicit. So, part of that struggle to enter the mainstream academy I was telling you about succeeded insofar as we, too, in DH now participate in screwed-up metrics and systems of scholarly communication along with everybody else. I’m talking here about how almost everyone in the academic humanities is caught up in the provision of free labor and content to monopolistic, private journal and database providers that then sell us our own content back to us at exorbitant prices. And I’m also thinking of humanities scholars’ general complicity with mismatched demands between what we know might really benefit scholarship and open inquiry and the public good versus what our disciplines ask early-career scholars to produce — and of how parochially we measure their output and impact. In many ways, I think you could turn around and look to the digital humanities not as a sign of the apocalypse but for paths out of this mess. Here’s a field that has been working for years on open access research and publication platforms, on ways to articulate and valorize work done outside of narrow, elite channels, and on how to value scholarship that’s collaborative and interdisciplinary — instead of done solo, individualistically, and only made legible and accessible to fellow academics in little subdisciplines, which is still the M. O. of the broader field. And on a conceptual level, the data- and text-analysis and visualization strand of digital humanities is pretty much all about finding ways to nuance mechanistic quantification and turn it on its head — to better value and appreciate and elevate the ineffable, not in spite of numbers and measures but through them. I mean, that’s our research field.
On that last point I would note that purely discursive thinking tends toward binary factoring into black and white, while appropriate quantification allows for nuance by supporting shades of gray, lots of them.
An interesting response to Moretti on digital humanities
On Saturday I ran up a post on Franco Moretti’s assertion that the results of computational criticism have “so far been below expectations”, as he remarked in his LARB interview, conducted by Melissa Dinsman. In response Ted Underwood assured me:
I'm more sanguine than I have ever been about the intellectual and historical payoffs for computational criticism. What I've seen in recent papers, and in forthcoming ones, makes me very confident that we are going to engage, challenge, and in some cases frankly refute important existing theses about literary history. Big Ideas and broad theories of the nature of literary change won't be scarce.
I’ve found another response in an interview with Richard Jean So, also by Dinsman in LARB. He’s an “assistant professor in modern and contemporary American culture at the University of Chicago” and has done “computational work on race, language, and power dynamics.” In her last question Dinsman references Moretti’s reservations and asks So to “look backward and speak to what you think the digital in the humanities has accomplished so far.” Here’s how he responds:
As a younger person in the field, I don’t like the gesture of looking back. I find it problematic for someone to justify a field by saying to people, “Look how great my field is because of all these accomplishments.” If you are in a field and the accomplishments don’t speak for themselves, then you have more work to do. It is a position of potential weakness or insecurity to constantly say, “Look what we’ve done.” There is certainly a place for that, but in trying to build out the field, this looking back can be problematic when done in a defensive way. If our work and accomplishments are not instantly recognizable outside the field, then we have to do more work. So I would definitely say that I am more future minded. It isn’t obvious yet that DH is here to stay, so rather than meet critiques with a “look what we’ve done” mentality, we need to go back to our books and computers and do better work until we don’t have to answer this question any more.
Saturday, May 7, 2016
Concepts are emergent effects
I started off reading Ryan Heuser’s post, Word Vectors in the Eighteenth Century, Episode 1: Concepts, and quickly followed a link to Michael Gavin’s post, The Arithmetic of Concepts: a response to Peter de Bolla. Blitzing on through, as I tend to do with these things, I came up against this in the final paragraph:
What’s more striking to me, though, is how commensurable the assumptions of vector-based semantics are with a theory of concepts that sees them, not as mental objects or meanings, per se, but instead as what de Bolla calls “the common unshareable of culture”: a pattern of word use that establishes networks across the language. A concept is not denoted by a word; rather, concepts are emergent effects of words’ co-occurrence with other words.
YES! But not without qualification.
Unfortunately this is not the time or place to explain what I mean, as that would require that I explain my ideas on semantics. That would require more than a blog post. Way more.
I will remark, however, that the concept of a “word” is a tricky one. As generally used the concept encompasses meaning and syntactic affordances as well as orthography and sound. The computational techniques that Gavin is writing about, however, have no access whatever to meanings and syntactic affordances, just to orthography (in digital representation). But if you have a large corpus it is possible to analyze it in a way that reveals traces, if you will, of word meanings from a mathematical model of the corpus.
But then, the young child learning language doesn’t have direct access to word meanings either. The child hears the words and has access to the context of utterance, which of course includes the person uttering the word. And out of that the child manages to extract meanings for words. The meanings initially extracted are crude, but they become refined over time.
Now, as I have explained more times than enough, early in my career I was knee deep in semantic models developed in computational linguistics, not the models Gavin mentions, but the semantic and cognitive networks of the 1970s and 1980s. For such models it was clear that the meaning of a node in the network was a function of its position in the network. A word’s signifier would be one node in the network and its signified would be, not some other node, but, well, you know, a function of the whole network. To a first approximation that’s what Gavin is saying.
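A toy example makes the point concrete. The sketch below is purely illustrative (a made-up three-sentence corpus, nothing from Gavin’s or Heuser’s actual pipelines): each word is represented by nothing but its co-occurrence counts with other words, and yet “king” and “queen” come out close together simply because they occupy similar positions in the network.

```python
# Toy illustration (not Gavin's or Heuser's pipeline): represent each word by
# its co-occurrence counts alone, so similarity of "meaning" emerges from a
# word's position in the whole network rather than from any single node.
from collections import Counter, defaultdict
from math import sqrt

corpus = [
    "the king rules the realm",
    "the queen rules the realm",
    "the farmer tills the soil",
]

vectors = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for w in words:
        for c in words:
            if c != w:
                vectors[w][c] += 1   # window = the whole sentence

def cosine(u, v):
    dot = sum(u[k] * v[k] for k in u)
    norm = lambda vec: sqrt(sum(n * n for n in vec.values()))
    return dot / (norm(u) * norm(v))

print(cosine(vectors["king"], vectors["queen"]))   # 1.0: identical contexts
print(cosine(vectors["king"], vectors["farmer"]))  # ~0.67: only "the" shared
```

Real distributional semantics scales this up with larger windows, weighting, and dimensionality reduction, but the principle is the one those old network models embodied: meaning as position in a system of relations.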
He also says (quoting from a formal book review of his):
If concepts exist culturally as lexical networks rather than as expressions contained in individual texts, the whole debate between distant and close reading needs reframing. Conceptual history should be traceable using techniques like lexical mapping and supervised probabilistic topic modeling.
Yes. And I say something like that in a blog post on discourse and conceptual topology which I then included in a working paper: Toward a Computational Historicism: From Literary Networks to the Autonomous Aesthetic. When I turned to computational linguistics in the mid-1970s I was pursuing “close” reading, not “distant” reading. But this work of Gavin’s sure has a familiar ring.
The fact is that while computational critics have been very careful to avoid any use or even mention of computational theories of mind and thinking, the work they do seems inevitably to point back to those ideas from the 1970s and 1980s. And why should we be surprised at that?
If the eye were not sun-like, the sun’s light it would not see.
– Johann Wolfgang von Goethe
"DH is guilty of making all too visible the dirty gears that drive the scholarly machine"
As though no robber baron tried to launder his guilt by giving libraries to communities across the nation. Brian Greenspan on digital humanities as whistle-blower:
If the digital humanities seem at times to pander to the neoliberal discourses and tendencies that are undeniably rampant within post-secondary institutions, it’s not because they necessarily contribute to exploitative social relations (although they certainly do at times, just like every other academic sector). I rather suspect it’s because digital humanists tend as part of their scholarly practice to foreground self-reflexively the material underpinnings of scholarship that many conventional humanists take for granted. DH involves close scrutiny of the affordances and constraints that govern most scholarly work today, whether they’re technical (relating to media, networks, platforms, interfaces, codes and databases), social (involving collaboration, authorial capital, copyright and IP, censorship and firewalls, viral memes, the idea of “the book,” audiences, literacies and competencies), or labour-related (emphasizing the often hidden work of students, librarians and archivists, programmers, techies, RAs, TAs and alt-ac workers). Far from being “post-interpretative, non-suspicious, technocratic, conservative, [and] managerial,” the “lab-based practice” that we promote in the Hyperlab, at least, involves collaborative and broadly interdisciplinary work that closely scrutinizes the materiality of scholarly archivization, bibliography, writing and publishing across media, as well as the platforms and networks we all use to read and write texts in the 21st century.

If anything, DH is guilty of making all too visible the dirty gears that drive the scholarly machine, along with the mechanic’s maintenance bill. Of course, it doesn’t help appearances that many of these themes also happen to be newly targeted areas for funding agencies as they try to compensate for decades of underfunding, deferred maintenance, rising tuition and falling enrolments on campuses everywhere. Now, some would argue (and I’d agree) that these material costs should ideally be sustained by our college and university administrations and not by faculty research grants. But DH isn’t responsible for either the underfunding of higher education over the last 25 years or the resulting mission creep of scholarly grants, which in addition to funding “pure research” are increasingly expected to include student funding packages, as well as overhead for equipment, labs and building maintenance, even heat and power. The fault and burden of DH is that it reveals all the pieces of this model of post-secondary funding that seems novel to many humanists, but which has long been taken for granted within the sciences. This is the model that acknowledges that most funding programs aren’t intended mainly for tenured professors to buy books and travel, but for their research infrastructure and, above all, their students who justify the mission of scholarship in the first place, and who fill in while we’re flying off to invade foreign archives like the detritivores we are.
What’s Interesting? Is Moretti Getting Bored?
“The interesting” or “interestingness” came up in the Twittersphere in a response Ted Underwood made to Ryan Heuser:
@quadrismegistus I think about this a lot lately, mostly just in the form of, the importance of being *interesting.*— Ted Underwood (@Ted_Underwood) May 4, 2016
This is something I think about from time to time – I blogged about it back in 2014: The Thinkable and the Interesting: Katherine Hayles Interviews Alan Liu – generally in connection with my own work on description and, in particular, the description of formal features of texts and films, such as ring-composition. I find this activity to be quite interesting, intrinsically interesting if you will. But, judging by what literary critics actually do, most critics aren’t particularly interested in such things.
Why not?
I don’t know. I assume, though, that I’ve got some unseen conceptual context to which I assimilate such description and in terms of which it is interesting. Whatever that context is, most literary critics don’t have it.
But enough about me and my possibly peculiar interests.
I want to think about Franco Moretti. In both a recent interview in the Los Angeles Review of Books and his most recent pamphlet, Literature, Measured (which is now the preface to Canon/Archive, 2017, pp. ix-xvii), Moretti has expressed misgivings about the current state of affairs in computational criticism. He ends the pamphlet thus (p. 7):
“Bourdieu” stands for a literary study that is empirical and sociological at once. Which, of course, is obvious. But he also stands for something less obvious, and rather perplexing: the near-absence from digital humanities, and from our own work as well, of that other sociological approach that is Marxist criticism (Raymond Williams, in “A Quantitative Literary History”, being the lone exception). This disjunction […] is puzzling, considering the vast social horizon which digital archives could open to historical materialism, and the critical depth which the latter could inject into the “programming imagination”. It’s a strange state of affairs; and it’s not clear what, if anything, may eventually change it. For now, let’s just acknowledge that this is how things stand; and that – for the present writer – something needs to be done. It would be nice if, one day, big data could lead us back to big questions.
What I’m wondering is if “big questions” is a marker for missing intellectual context. Is Moretti unable to see that these investigations will lead, even must eventually lead, to home truths? It’s not that I can see where things are going – I can’t – but, for whatever reason, I don’t share his feeling that things might be going amiss.
The curious thing is that, even as he expresses these misgivings in both these texts, he also expresses a certain fondness, even nostalgia, for traditional critical discourse. Thus, in Literature, Measured he says (p. 5):
Forget the hype about computation making everything faster. Yes, data are gathered and analyzed with amazing speed; but the explanation of those results – unless you’re happy with the first commonplace that crosses your mind – is a different story; here, only patience will do. For rapidity, nothing beats traditional interpretation: Verne’s “Nautilus” means – childhood; Count Dracula – monopoly capital. One second, and everything changes. In the lab, it takes months of work.
And maybe the explanation, if and when it comes at all, seems a bit weak. This, from the LARB interview:
But again, think of this: to make it better — it's a perfect expression because it's a comparative, it was good and now it's more good — this is not how the humanities think in general. It's usually much more of a polemical, an all-or-nothing affair. It's a conflict of interpretation. It's: you thought Hamlet was the protagonist of Hamlet, how foolish of you; the protagonist is Osric. Digital humanities doesn't work in this mode and I think there is something very adult and very sober in not working in this mode. There is also something, maybe especially for older people like me, which is always a little disappointing: the digital humanities lacks that free song — the bubbliness of the best example of the old humanities.
The thing is, that traditional critical discourse, the one in which the critic can construct a bubbly free song, is embedded in a conceptual matrix that is linked to some version of Home Truth, whether Christian humanism, Hegelian phenomenology, Marxist theory, or critique in its many flavors, whatever. That matrix gives the critic intellectual purchase on the Whole of Life.
That’s the context of the traditional humanities. That’s where the traditional humanities draw their interest: Life, the Universe, and All.
By contrast, computational criticism seems to have cut the cord. Can’t get there from here. The big questions seem impossibly distant, if not actually gone.
I wonder what sustained all those pre-Darwinian naturalists, the ones who went out in the field and were both content and even eager to describe the flora and fauna they found? For that matter, would Darwin have spent so much time describing barnacles if he didn’t find the activity itself interesting apart from whatever it might contribute to his larger intellectual enterprise? Could he even have found that larger enterprise if he hadn’t been fascinated by the details of the morphology, physiology, and lifeways of flora and fauna?
Thursday, May 5, 2016
Ecstatic Jazz @3QD
On Monday I posted a piece at 3 Quarks Daily, Ecstasy at Baltimore’s Left Bank Jazz Society. As I remark in the piece:
Best jazz venue I’ve ever been in. Of course I’ve got my quirks. I’m not a night person, so I don’t go to jazz clubs that often. By the time the music starts cookin’ at a regular club I’m ready for bed. But those sets at the Famous started at 5 PM on Sunday afternoon. Well, they were scheduled for 5 which, allowing for local quirks and prerogatives, generally translated into 5:30 PM or thereabouts.

And that was fine. Gave us time to chat, check out the used records for sale, maybe go to the Kentucky Fried Chicken around the corner and get a bucket of the Colonel’s Original. You could order something from the kitchen on the days it was open – fried chicken, collard greens, sweet potato, biscuits and gravy, grits, aka soul food. Or maybe get some set-ups, bring out your own liquor, and get loosened up. By the time the music started up we were mellowed out and ready to luxuriate in delicious sounds.

I heard a lot of great music in those days, but I don’t remember most of it. When the music goes through you, that’s it. It’s gone. Nothing to remember, though the aura lingers and one aura blends with another until all that’s left is a numinous presence in your mind: the Famous!
Here’s Rahsaan Roland Kirk – one of the musicians I heard at the Left Bank – performing with McCoy Tyner, Stanley Clarke, and Chick Corea:
Is the American Body Politic Crumbling?
The media has made a cottage industry out of analyzing the relationship between America’s crumbling infrastructure, outsourced jobs, stagnant wages, and evaporating middle class and the rise of anti-establishment presidential candidates Donald Trump and Bernie Sanders. Commentators are also tripping all over one another to expound daily on the ineffectual response of America’s political elite – characterized by either bewilderment or a dismissal of these anti-establishment candidates as minor hiccups in the otherwise smooth sailing of status-quo power arrangements. But the pundits are all missing the point: the Trump-Sanders phenomenon signals an American oligarchy on the brink of a civilization-threatening collapse.
The tragedy is that, despite what you hear on TV or read in the paper or online, this collapse was completely predictable. Scientifically speaking, oligarchies always collapse because they are designed to extract wealth from the lower levels of society, concentrate it at the top, and block adaptation by concentrating oligarchic power as well. Though it may take some time, extraction eventually eviscerates the productive levels of society, and the system becomes increasingly brittle. Internal pressures and the sense of betrayal grow as desperation and despair multiply everywhere except at the top, but effective reform seems impossible because the system seems thoroughly rigged. In the final stages, a raft of upstart leaders emerge, some honest and some fascistic, all seeking to channel pent-up frustration towards their chosen ends. If we are lucky, the public will mobilize behind honest leaders and effective reforms. If we are unlucky, either the establishment will continue to “respond ineffectively” until our economy collapses, or a fascist will take over and create conditions too horrific to contemplate.
Handwriting on the wall:
Rigged systems erode the health of the larger society, and signs of crisis proliferate. Developed by British archaeologist Sir Colin Renfrew in 1979, the following “Signs of Failing Times” have played out across time in 26 distinct societies ranging from the collapse of the Roman Empire to the collapse of the Soviet Union:
- Elite power and well-being increase and are manifested in displays of wealth;
- Elites become heavily focused on maintaining a monopoly on power inside the society;
- Laws become more advantageous to elites, and penalties for the larger public become more Draconian;
- The middle class evaporates;
- The “misery index” mushrooms, witnessed by increasing rates of homicide, suicide, illness, homelessness, and drug/alcohol abuse;
- Ecological disasters increase as short-term focus pushes ravenous exploitation of resources;
- There’s a resurgence of conservatism and fundamentalist religion as once golden theories are brought back to counter decay, but these are usually in a corrupted form that accelerates decline.
Prospects:
Today’s big challenge is twofold. First, we need to find a way to unite today’s many disjointed reform efforts into the coherent and effective reinvention we so desperately need. This unity will require solid science, compelling story, and positive dream. Second, since hierarchies are absolutely necessary for groups beyond a certain size, this time we must figure out how to create healthy hierarchical systems that effectively support the health and prosperity of the entire social, economic, and environmental system including everyone within. In short, our goal must be to figure out how to end oligarchy forever, not just create a new version of it. This is a topic I will take up in my next blog.
Tuesday, May 3, 2016
Digital Humanities in the cross-hairs, with the cavalry on the way
Daniel Allington, Sarah Brouillette, and David Golumbia have published a critique of digital humanities (mostly just digital work in English lit.) in the LARB: Neoliberal Tools (and Archives): A Political History of Digital Humanities. Here's the opening paragraph:
Advocates position Digital Humanities as a corrective to the “traditional” and outmoded approaches to literary study that supposedly plague English departments. Like much of the rhetoric surrounding Silicon Valley today, this discourse sees technological innovation as an end in itself and equates the development of disruptive business models with political progress. Yet despite the aggressive promotion of Digital Humanities as a radical insurgency, its institutional success has for the most part involved the displacement of politically progressive humanities scholarship and activism in favor of the manufacture of digital tools and archives. Advocates characterize the development of such tools as revolutionary and claim that other literary scholars fail to see their political import due to fear or ignorance of technology. But the unparalleled level of material support that Digital Humanities has received suggests that its most significant contribution to academic politics may lie in its (perhaps unintentional) facilitation of the neoliberal takeover of the university.
It has drawn 30 comments so far, most of them critical. It also provoked Alan Liu to respond with a Twitter stream, which he then Storified: On Digital Humanities and "Critique". He is not happy, referring to the article's "character assassination of the still evolving field of DH...as an attempt at a last word foreclosing the critical potential of DH". Liu also posted drafts from a book he's working on:
“Drafts for Against the Cultural Singularity (book in progress).” Alan Liu, 2 May 2016. http://liu.english.ucsb.edu/drafts-for-against-the-cultural-singularity
I'm about to read through the drafts. Here's Liu's opening paragraph:
My aim in this book is to make a strategic intervention in the development of the digital humanities. Following up on my 2012 essay, “Where is Cultural Criticism in the Digital Humanities?”, I call for digital humanities research and development informed by, and able to influence, the way scholarship, teaching, administration, support services, labor practices, and even development and investment strategies in higher education intersect with society, where a significant channel of the intersection between the academy and other social sectors, at once symbolic and instrumental, consists in shared but contested information-technology infrastructures. I first lay out in the book a methodological framework for understanding how the digital humanities can develop a mode of critical infrastructure studies. I then offer a prospectus for the kinds of infrastructure (not only research “cyberinfrastructures,” as they have been called) whose development the digital humanities might help create or guide. And I close with thoughts on how the digital humanities can contribute to ameliorating the very idea of “development”–technological, socioeconomic, and cultural–today.
Tyler Cowen on Novels as Models
The paper is apparently unpublished; Robin Hanson has uploaded it to ResearchGate.
Tyler Cowen
I defend the relevance of fiction for social science investigation. Novels can be useful for making some economic approaches -- such as behavioral economics or signaling theory -- more plausible. Novels are more like models than is commonly believed. Some novels present verbal models of reality. I interpret other novels as a kind of simulation, akin to how simulations are used in economics. Economics can, and has, profited from the insights contained in novels. Nonetheless, while novels and models lie along a common spectrum, they differ in many particulars. I attempt a partial account of why we sometimes look to models for understanding, and other times look to novels.
[accessed May 3, 2016].
From the article itself:
An investigation of novels and models also may help us better understand how the public thinks about economic issues. Economists typically use formal models to think about the world. We cannot help but notice that most members of the general public do not appear to think very scientifically about policy, in the sense of deferring to the established expert bodies of knowledge. Instead most citizens are heavily influenced by stories, movies, and popular culture. They think in terms of narrative, often false narrative, and spend little time learning economics. Economists naturally wonder whether citizens and voters spend too much time thinking in terms of stories and not enough time thinking in terms of models.
Novels enrich our sense of people's motivations:
A familiarity with novels increases the plausibility of behavioral economics. Most characters in novels have complex motivations and show a variety of behavioral quirks. For instance, Flaubert's characters often exhibit a "grass is always greener" approach to romantic choice, rather than rationally assessing their current and future prospects. Madame Bovary seems to want every man but the one, her husband, who adores her. The lead characters in Brontë's Wuthering Heights (Heathcliff and Catherine) consider themselves connected by a sense of common fate and destiny, and they pursue disastrous courses of action. Captain Ahab, from Melville's Moby Dick, is obsessed with taking his revenge against the white whale, which eventually leads to his death.
Utility maximization may describe the behavior of these characters ex post, but it does not help us understand or predict their behavior very much. Instead their behavior appears best described in terms of complex psychological mechanisms. ...
The standard criticism of behavioral economics is that it offers too many varying accounts of human behavior, with no unified framework or no ability to offer useful ex ante predictions. A reader of novels, who is used to complex portraits of multi-faceted characters, is less likely to find such a criticism persuasive. Such a reader is less likely to see simplicity as an explanatory virtue, and is less likely to look for unified accounts of complex social phenomena.
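Cowen's contrast between ex post rationalization and ex ante prediction can be made concrete with a toy simulation. The sketch below is mine, not Cowen's; the "inflation" factor and the uniform payoff draws are invented purely for illustration of the "grass is always greener" quirk he finds in Flaubert.

import random

# Toy illustration (not from Cowen's paper): an agent repeatedly chooses
# between a current option and an alternative drawn from the SAME
# distribution, so switching offers no objective gain on average.
# A rational agent compares values as they are; a "grass is always
# greener" agent inflates the value of whatever it does not have.

def rational_switch(current, alternative):
    # Switch only if the alternative is genuinely better.
    return alternative > current

def greener_grass_switch(current, alternative, inflation=1.5):
    # Systematically overvalue the unpossessed option, so the agent
    # keeps switching even when the alternative is objectively worse.
    return alternative * inflation > current

random.seed(0)
trials = 10_000
rational = behavioral = 0
for _ in range(trials):
    current = random.uniform(0, 1)
    alternative = random.uniform(0, 1)  # same distribution: no real gain
    rational += rational_switch(current, alternative)
    behavioral += greener_grass_switch(current, alternative)

print(f"rational agent switches:   {rational / trials:.0%}")    # ~50%
print(f"behavioral agent switches: {behavioral / trials:.0%}")  # ~67%

Utility maximization can rationalize either agent's record after the fact, but only the named bias predicts the persistent switching with no objective gain – the pattern Cowen reads in Madame Bovary.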
Sunday, May 1, 2016
Respect: AIDS
I took this photo on April 24 of this year:
It's a graffiti production by the AIDS (= Alone in Deep Space | America is Dying Slowly) crew in Jersey City. As you can see it's been treated harshly by the weather, with the right half badly washed out. But no one's gone over it, which is what typically happens to old graffiti. Yet other walls in that area – there are over a dozen in close proximity – have been gone over many times. What's the difference?
Perhaps it is that this wall is just a bit more exposed than some of the others, and so you're a bit more likely to get caught. But I think mostly it's a matter of respect for the writers and the crew.
Here's what that wall looked like the first time I flicked it, on Nov. 28, 2006: