Thursday, May 26, 2016
Last year Kurt Newman went through William James's The Principles of Psychology, found all references to music, and gathered them into a single blog post. Here's one of those entries:
We learn to appreciate what is ours in all its details and shadings, whilst the goods of others appear to us in coarse outlines and rude averages. Here are some examples: A piece of music which one plays one’s self is heard and understood better than when it is played by another. We get more exactly all the details, penetrate more deeply into the musical thought. We may meanwhile perceive perfectly well that the other person is the better performer, and yet nevertheless––at times get more enjoyment from our own playing because it brings the melody and harmony so much nearer home to us. This case may almost be taken as typical for the other cases of self-love…
And this is also surely the reason why one’s own portrait or reflection in the mirror is so peculiarly interesting a thing to contemplate . . . not on account of any absolute “c’est moi,” but just as with the music played by ourselves. What greets our eyes is what we know best, most deeply understand; because we ourselves have felt it and lived through it. We know what has ploughed these furrows, deepened these shadows, blanched this hair; and other faces may be handsomer, but none can speak to us or interest us like this.
It's not at all clear to me just what "digital humanities" is or will become, but it's clear that some are worried that it is just another form of neoliberal oppression and won't become much of anything unless it figures out how to be radical in the face of all that tech. Me, I'm Old School and believe in capital "T" Truth, though constructing it is certainly a challenge and a trial. For reasons which I may or may not get around to explaining here on New Savanna I'm inclined to think of all this throat clearing and hemming and hawing about radical DH as a form of territorial marking (I'm thinking of a recent LARB piece and its fallout). The purpose of this post is to suggest that the territory was radically marked before the DH term was coined.
Back in 1998 Christine Boese released her doctoral dissertation to the web: The Ballad of the Internet Nutball: Chaining Rhetorical Visions from the Margins of the Margins to the Mainstream in the Xenaverse. It's certainly one of the earliest hypertext dissertations, if not THE first – I simply don't know. I figure that qualifies it as DH. Here's the abstract:
This dissertation is a case study applying methods of rhetorical analysis and cultural critique to the burgeoning online phenomenon called the Xenaverse: the online spaces devoted to the cult following of the syndicated television program "Xena, Warrior Princess." My work combines two modes of inquiry: (1) Locating and capturing texts from multiple sites on the Internet known to be parts of the Xenaverse, and (2) Supplementing those texts with data generated from the limited use of the ethnographic tools of participant observation and informant interviews, both electronic and face to face.

The primary focus of my analysis is on constructions of authority in cyberspace. I explore the constellations of social forces in cyberspace which have led to the success of a noncommercial, highly trafficked, dynamic culture or what is sometimes called a "community." The strengths and weaknesses of this online "community" are examined in terms of the ideals of radical democracy, using Fantasy-Theme rhetorical analysis. This research examines how the rhetorical visions of this culture are used to write the narratives of its ongoing existence, in a way that is increasingly independent of the dominant narratives of the television program itself. Using the relevance of an insider's point of view and taking a case which implies successful democratic social resistance to diverse hegemonic forces, I look beyond the Xenaverse and consider the strength of the frequently-cited claim that the medium of cyberspace is intrinsically democratizing. Considering that claim critically, I suggest democracy both is and is not being enhanced online.
Tuesday, May 24, 2016
From The Scientist:
The idea that emotions can spread from person to person is not new. But recent research is starting to uncover the physiological mechanisms behind such “emotional contagion.” A study published this month (May 9) in Psychological Science, for example, showed that infants dilate or contract their pupils in response to depictions of eyes with the corresponding state, suggesting that emotional contagion may develop early in life. A 2014 study found that mothers could pass emotional stress on to their babies in a largely unconscious way. Together, the findings add to a growing body of research revealing the role of this phenomenon in human interactions.

“One of the most important things as a human species is to communicate effectively,” said Garriy Shteynberg, a psychologist at the University of Tennessee, Knoxville, who has shown that emotional contagion is enhanced in group settings. In order to do that, “we need cognitive mechanisms that give us a lot of common background knowledge,” Shteynberg told The Scientist.
Our old friends synchrony and the default network:
The most popular model, developed by social psychologist Elaine Hatfield and colleagues, suggests that people tend to synchronize their emotional expressions with those of others, which leads them to internalize those states. This suggests, for example, that the act of smiling can make a person feel happiness.

As to what may be going on in the brain when this happens, some research suggests that emotional contagion may engage the default mode network—a set of brain circuits that are active when an individual is not engaged in any particular task, but may be thinking about his or herself or others, noted Richard Boyatzis of Case Western Reserve University. When this network is activated, a person may be picking up on emotional cues from others, he told The Scientist. And “the speed at which you pick it up is probably the most important issue going on,” as it suggests that this process is largely unconscious, Boyatzis said.
Students of literary culture, and broadcast media, take note. I'm particularly interested in the case of story-telling in preliterate cultures, which is, after all, the default situation for human story-telling. Here the stories are well known and people absorb them in the company of others. That's very different from reading a book in the privacy of one's home.
Monday, May 23, 2016
In thinking about the recent LARB critique of digital humanities and of responses to it I couldn’t help but think, once again, about the term itself: “digital humanities.” One criticism is simply that Allington, Brouillette, and Golumbia (ABG) had a circumscribed conception of DH that left too much out of account. But then the term has such a diverse range of reference that discussing DH in a way that is both coherent and compact is all but impossible. Moreover, that diffuseness has led some people in the field to distance themselves from the term.
And so I found my way to some articles that Matthew Kirschenbaum has written more or less about the term itself. But I also found myself thinking about another term, one considerably older: “computational linguistics.” While it has not been problematic in the way DH is proving to be, it was coined under the pressure of practical circumstances and the discipline it names has changed out from under it. Both terms, of course, must grapple with the complex intrusion of computing machines into our life ways.
Let’s begin with Kirschenbaum’s “Digital Humanities as/Is a Tactical Term” from Debates in the Digital Humanities (2011):
To assert that digital humanities is a “tactical” coinage is not simply to indulge in neopragmatic relativism. Rather, it is to insist on the reality of circumstances in which it is unabashedly deployed to get things done—“things” that might include getting a faculty line or funding a staff position, establishing a curriculum, revamping a lab, or launching a center. At a moment when the academy in general and the humanities in particular are the objects of massive and wrenching changes, digital humanities emerges as a rare vector for jujitsu, simultaneously serving to position the humanities at the very forefront of certain value-laden agendas—entrepreneurship, openness and public engagement, future-oriented thinking, collaboration, interdisciplinarity, big data, industry tie-ins, and distance or distributed education—while at the same time allowing for various forms of intrainstitutional mobility as new courses are approved, new colleagues are hired, new resources are allotted, and old resources are reallocated.
Just so, the way of the world.
Kirschenbaum then goes into the weeds of discussions that took place at the University of Virginia while a bunch of scholars were trying to form a discipline. So:
A tactically aware reading of the foregoing would note that tension had clearly centered on the gerund “computing” and its service connotations (and we might note that a verb functioning as a noun occupies a service posture even as a part of speech). “Media,” as a proper noun, enters the deliberations of the group already backed by the disciplinary machinery of “media studies” (also the name of the then new program at Virginia in which the curriculum would eventually be housed) and thus seems to offer a safer landing place. In addition, there is the implicit shift in emphasis from computing as numeric calculation to media and the representational spaces they inhabit—a move also compatible with the introduction of “knowledge representation” into the terms under discussion.

How we then get from “digital media” to “digital humanities” is an open question. There is no discussion of the lexical shift in the materials available online for the 2001–2 seminar, which is simply titled, ex cathedra, “Digital Humanities Curriculum Seminar.” The key substitution—“humanities” for “media”—seems straightforward enough, on the one hand serving to topically define the scope of the endeavor while also producing a novel construction to rescue it from the flats of the generic phrase “digital media.” And it preserves, by chiasmus, one half of the former appellation, though “humanities” is now simply a noun modified by an adjective.
And there we have it.
Sunday, May 22, 2016
Here's an interesting (and recent) article that speaks to statistical thought in linguistics: The Unmaking of a Modern Synthesis: Noam Chomsky, Charles Hockett, and the Politics of Behaviorism, 1955–1965, by Gregory Radick (abstract below). Commenting on it at Dan Everett's Facebook page, Yorick Wilks observed: "It is a nice irony that statistical grammars, in the spirit of Hockett at least, have turned out to be the only ones that do effective parsing of sentences by computer."
Abstract: A familiar story about mid-twentieth-century American psychology tells of the abandonment of behaviorism for cognitive science. Between these two, however, lay a scientific borderland, muddy and much traveled. This essay relocates the origins of the Chomskyan program in linguistics there. Following his introduction of transformational generative grammar, Noam Chomsky (b. 1928) mounted a highly publicized attack on behaviorist psychology. Yet when he first developed that approach to grammar, he was a defender of behaviorism. His antibehaviorism emerged only in the course of what became a systematic repudiation of the work of the Cornell linguist C. F. Hockett (1916–2000). In the name of the positivist Unity of Science movement, Hockett had synthesized an approach to grammar based on statistical communication theory; a behaviorist view of language acquisition in children as a process of association and analogy; and an interest in uncovering the Darwinian origins of language. In criticizing Hockett on grammar, Chomsky came to engage gradually and critically with the whole Hockettian synthesis. Situating Chomsky thus within his own disciplinary matrix suggests lessons for students of disciplinary politics generally and—famously with Chomsky—the place of political discipline within a scientific life.
Renovation is not complete. Here's a sense of what it will be like when complete:
Tim Renner, undersecretary of state for culture in Berlin is also happy about the plans for the museum: “The whole project is lunatic, almost insane”, he says and laughs. “And that’s why it matches Berlin”.
Friday, May 20, 2016
As the title indicates, these are details from various works of graffiti. Further, they are so small in scale that you get little sense of the larger work. There's no way you can tell that the last detail, the black square on a white background, is from a Jerkface image of Sylvester the Cat. Nor is it in the least bit obvious that the center target is from a Sonet piece in the Bergen Arches (now two or three layers down). Still, if you look closely, and not all that closely, there are things to notice, the different surfaces for example. Of course the colors. And isn't it obvious that the second image shows some paint that has been splashed on rather than sprayed from a can? The first one centers on a silver square (with some white overlay at the top). Silver has a special place in the graffiti ecology, with a whole class of images known as "silvers" because silver is their predominant paint (generally with a black outline). And in the fourth one, notice the peeled paint? How many layers are on that surface?
Thursday, May 19, 2016
In thinking about the current discussions provoked by the LARB takedown of digital humanities I’ve been thinking about the term itself: “digital humanities.” The term yokes together the opposite poles of a binary that has plagued us since Descartes: man and machine, the human and the mechanical. I suppose that’s part of its attraction, but also why it can be so toxic.
In these discussions the term itself seems almost to float free of any connection to actual intellectual practice. Given the impossible scope of the term – see, e.g. the Wikipedia article on DH – its use tends to collapse on the binary itself. And if one is skeptical of whatever it is, then the “digital” term becomes those nasty machines that have been threatening us for centuries, most visibly in movies from Lang’s Metropolis through the Terminator series and beyond.
I certainly count myself among those who find the term somewhere between problematic and useless. More usefully, I think the many developments falling within the scope of that term are best seen in terms of a wide variety of discourses emerging after WWII. As a useful touchstone, check out a fascinating article: Bernard Dionysius Geoghegan, “From Information Theory to French Theory: Jakobson, Lévi-Strauss, and the Cybernetic Apparatus,” Critical Inquiry, Fall 2011. He looks at the period during and immediately after World War II when Jakobson, Lévi-Strauss and Lacan sojourned in NYC and picked up ideas about information theory and cybernetics from American thinkers at MIT and Bell Labs. You should be able to get a copy from Geoghegan's publications page.
Allington, Brouillette, and Golumbia (ABG) on digital humanities continues to reverberate through the online hive mind. It's not clear to me that Justin Evans, in System Reboot, knows much, if anything, about DH beyond what ABG mention in their article. He's focused entirely on literary studies:
Anyone who cares about the humanities needs to pay close attention to this argument, because DH undeniably harms the study of literature in the university.
However, after that note of agreement, he goes a step further and ends up here:
If English department politics isn’t a target for neoliberal elites, why has English been so vulnerable to the digital humanities? Perhaps it is because, since English scholars turned their attention to the politically progressive humanities, the English department has lacked a subject matter of its own. It has become the home of ersatz social science, ersatz science, ersatz philosophy—and now, ersatz computer science. The English department is not a victim of some kind of neoliberal conspiracy. It is just looking for something to talk about.

From this perspective, the digital and the politically progressive humanities [aka PPH] are more like squabbling cousins than opposing forces. Allington et al. write that DH “was born from disdain and at times outright contempt, not just for humanities scholarship, but for the standards, procedures and claims of leading literary scholars”—just as the politically progressive humanities were born from the rejection of New Criticism. DH tries to “redefine what had formerly been classified as support functions for the humanities as the very model of good humanities scholarship”; similarly, PPH redefined the literary scholar’s legitimate helpmates (philosophy, history, science) as “the very model” of literary scholarship. And finally, most importantly, neither DH nor PPH can give students a reason to study literature rather than, say, linguistics or sociology or neuroscience.
Scholars who are serious about saving the English department from the digital humanities need to acknowledge that it needs to be saved from the politically progressive humanities, too.
Whoops! So much for ABG! This puts me in mind of Alex Reid's post, Otters' Noses, Digital Humanities, Political Progress, and Splitters, which invokes a scene from Life of Brian:
Here's his gloss on the scene:
Of course the whole point of it is to satirize the divisive nature of political progressivism. Apparently it pointed specifically at the leftist politics of England at the time, but really this kind of stuff has a timeless quality to it.
Wednesday, May 18, 2016
The Godzilla franchise is one of the most prolific in the movie biz. Writing in The New Yorker, Matt Alt reports that the Japanese are releasing their first Godzilla film in 12 years.
The 1954 “Gojira” resonated so deeply with its audience because of the painfully fresh memories of entire Tokyo city blocks levelled by Allied firebombings less than a decade before. In the preview for “Godzilla Resurgence,” you can see how the directors are again mining the collective memories of Japanese viewers for dramatic effect. But their touchstones are no longer incendiary and nuclear bombs. Instead they are the 2011 earthquake and tsunami, which killed close to twenty thousand people and introduced Japan to a new form of nuclear horror caused by out-of-control civilian reactors. Indeed, the now fiery-complexioned Godzilla seems to be a walking nuclear power plant on the brink of melting down. For anyone who lived in Japan through the trying days of late March of 2011, the sight of blue-jumpsuited government spokesmen convening emergency press conferences is enough to send a chill down one’s spine. So is the shot in the trailer of a stunned man quietly regarding mountains of debris, something that could have been lifted straight out of television footage of the hardest-hit regions up north. Even the sight of the radioactive monster’s massive tail swishing over residential streets evokes memories of the fallout sent wafting over towns and cities in the course of Fukushima Daiichi’s meltdown.

It’s an open question as to how foreign audiences will perceive the subtext of these scenes, or if they even really need to. Equally so for the shots of the Japanese military’s tanks, aircraft, cruisers, and howitzers engaging the giant monster, which at first glance might simply appear a homage to a classic movie trope. But “Godzilla Resurgence” appears at a time in Japanese history much changed from that of even its most recent predecessor.
In the last twelve years, the Self-Defense Forces have gone from little more than an afterthought to folk heroes for their role in 2011 tsunami rescue efforts, and now to the center of a controversy as Prime Minister Shinzo Abe pushes through legislation to expand their role abroad. Traditionally, Japanese monster movies have used images of military hardware hurled indiscriminately against a much larger foe as a potent symbol of the nation’s own disastrous wartime experience. But the regional political situation for Japan is far more complicated now. The fight scenes hint at changing attitudes toward the military in Japan.
Here's a trailer:
Robert Epstein in Aeon:
But here is what we are not born with: information, data, rules, software, knowledge, lexicons, representations, algorithms, programs, models, memories, images, processors, subroutines, encoders, decoders, symbols, or buffers – design elements that allow digital computers to behave somewhat intelligently. Not only are we not born with such things, we also don’t develop them – ever.

We don’t store words or the rules that tell us how to manipulate them. We don’t create representations of visual stimuli, store them in a short-term memory buffer, and then transfer the representation into a long-term memory device. We don’t retrieve information or images or words from memory registers. Computers do all of these things, but organisms do not.
See this post from 2010 where I quote Sydney Lamb making the same point. Computers, in contrast, really are rather like that:
Forgive me for this introduction to computing, but I need to be clear: computers really do operate on symbolic representations of the world. They really store and retrieve. They really process. They really have physical memories. They really are guided in everything they do, without exception, by algorithms.
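To make that concrete, here's a trivial sketch of my own (not Epstein's or Lamb's): a computer's memory really is an addressable store, and a symbolic representation put into it comes back exactly as it went in, every time. Nothing about human recollection works this way.

```python
# A minimal illustration of literal "store and retrieve":
# memory is an addressable store; retrieval is exact and lossless.
memory = {}

def store(address, representation):
    memory[address] = representation

def retrieve(address):
    return memory[address]

# A tune encoded symbolically as MIDI note numbers (C D E F G).
store("melody", [60, 62, 64, 65, 67])
copy = retrieve("melody")
print(copy == [60, 62, 64, 65, 67])  # byte-for-byte identical recall
```

No brain retrieves a melody like that; a pianist reconstructs it anew each time, which is part of Epstein's point.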
My favourite example of the dramatic difference between the IP perspective and what some now call the ‘anti-representational’ view of human functioning involves two different ways of explaining how a baseball player manages to catch a fly ball – beautifully explicated by Michael McBeath, now at Arizona State University, and his colleagues in a 1995 paper in Science. The IP perspective requires the player to formulate an estimate of various initial conditions of the ball’s flight – the force of the impact, the angle of the trajectory, that kind of thing – then to create and analyse an internal model of the path along which the ball will likely move, then to use that model to guide and adjust motor movements continuously in time in order to intercept the ball.

That is all well and good if we functioned as computers do, but McBeath and his colleagues gave a simpler account: to catch the ball, the player simply needs to keep moving in a way that keeps the ball in a constant visual relationship with respect to home plate and the surrounding scenery (technically, in a ‘linear optical trajectory’). This might sound complicated, but it is actually incredibly simple, and completely free of computations, representations and algorithms.
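The heuristic is simple enough to simulate. This is my own toy sketch, not McBeath's actual model: a ball is hit toward a fielder who never predicts the landing point; he just keeps the tangent of the ball's elevation angle rising at the constant rate he observed at his first glance, and that alone carries him to where the ball comes down. (All numbers below are made up; flat ground, no air drag.)

```python
# Toy 2-D simulation of the "linear optical trajectory" heuristic.
g = 9.8                        # gravity (m/s^2)
vx, vz = 12.0, 20.0            # ball's launch velocity components (m/s)
T = 2 * vz / g                 # time of flight
landing_x = vx * T             # where the ball will actually land

xf = 35.0                      # fielder's starting position (m)
k = None                       # target rate of optical rise, fixed at first glance
dt = 0.001
t = dt
while t < T - dt:
    xb = vx * t                      # ball's horizontal position
    z = vz * t - 0.5 * g * t * t     # ball's height
    if k is None:
        # the first observation fixes how fast tan(theta) should grow
        k = (z / (xf - xb)) / t
    # move so that tan(theta) = z / (xf - xb) stays equal to k * t
    xf = xb + z / (k * t)
    t += dt

print(round(xf, 2), round(landing_x, 2))
```

As the ball's height returns to zero at the end of the flight, keeping the angle on schedule forces the fielder's position onto the ball's: he converges on the landing point without ever having computed it.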