Sunday, July 31, 2016

3 Approaches to machine intelligence: Classic AI, Simple Neural Networks, and Biological Neural Networks

For many problems, researchers concluded that a computer had to have access to large amounts of knowledge in order to be “smart”. Thus they introduced “expert systems”, computer programs combined with rules provided by domain experts to solve problems, such as medical diagnoses, by asking a series of questions. If the disease was not properly diagnosed, the expert adds additional questions/rules to narrow the diagnosis. A Classic AI system is highly tuned for a specific problem.
When classical AI kept failing, researchers turned to artificial neural networks (aka ANNs), though the neurons are only marginally like real ones. Hence Simple Neural Network:
Instead, the emphasis of ANNs moved from biological realism to the desire to learn from data without human supervision. Consequently, the big advantage of Simple Neural Networks over Classic AI is that they learn from data and don’t require an expert to provide rules. Today ANNs are part of a broader category called “machine learning” which includes other mathematical and statistical techniques. Machine learning techniques, including ANNs, look at large bodies of data, extract statistics, and classify the results. [...] they don’t work well when there is limited data for training, and they don’t handle problems where the patterns in the data are constantly changing. Essentially, the Simple Neural Network approach is a sophisticated mathematical technique that finds patterns in large, static data sets.
And so we arrive at Biological Neural Networks:
For example we know the brain represents information using sparse distributed representations (SDRs), which are essential for semantic generalization and creativity. We are confident that all truly intelligent machines will be based on SDRs. SDRs are not something that can be added to existing machine learning techniques; they are more like a foundation upon which everything else depends. Other essential attributes include that memory is primarily a sequence of patterns, that behavior is an essential part of all learning, and that learning must be continuous. In addition, we now know that biological neurons are far more sophisticated than the simple neurons used in the Simple Neural Network approach — and the differences matter.
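The passage doesn't say what an SDR actually is, but the core idea is easy to sketch: a long binary vector with only a small fraction of its bits active, where the number of shared active bits between two vectors serves as a measure of similarity. Here is a minimal Python sketch of that arithmetic; the 2048-bit length and roughly 2% sparsity are illustrative numbers of my own, not parameters taken from the quoted source.

    import random

    # Illustrative sizes only: a 2048-bit vector with 40 active bits (~2% sparsity).
    N = 2048   # total number of bits
    W = 40     # number of active (1) bits

    def random_sdr(seed=None):
        """Represent an SDR as the set of indices of its active bits."""
        rng = random.Random(seed)
        return set(rng.sample(range(N), W))

    def overlap(a, b):
        """Similarity between two SDRs = number of shared active bits."""
        return len(a & b)

    a = random_sdr(seed=1)
    b = random_sdr(seed=2)

    # Two unrelated random SDRs share almost no active bits...
    print("overlap of unrelated SDRs:", overlap(a, b))

    # ...while a lightly corrupted copy still overlaps strongly with the
    # original, which is what makes matching against an SDR robust to noise.
    noisy_a = set(a)
    for bit in random.Random(3).sample(sorted(a), 5):  # move 5 of the 40 active bits
        noisy_a.discard(bit)
        noisy_a.add(random.Random(bit).randrange(N))
    print("overlap of original and noisy copy:", overlap(a, noisy_a))

With 2048 bits and only 40 of them active, two unrelated SDRs share less than one bit on average, so even a modest overlap is strong evidence of shared meaning, which is, as I understand it, the property the passage is gesturing at.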

Friday, July 29, 2016

Descriptive Tables for Tezuka’s Metropolis, A Note on Descriptive Method

I've added another working paper, title above. It contains information supplementary to my article, Tezuka’s Metropolis: A Modern Japanese Fable about Art and the Cosmos (PDF). You can download the tables from Academia.edu:


Abstract: This document contains analytical tables for Osamu Tezuka’s Metropolis. The tables demonstrate that the text is ring-form as follows: 1, 2, 3, 4, 5, 4’, 3’, 2’, 1’. The outermost structure is a framing device while the innermost structure reveals a variety of “secrets.”

CONTENTS

Ring Form 2
Rings in Tezuka 3
Summary Table 4
Full Table 5

While I offer a few remarks on ring-form I do not say much about Metropolis itself beyond what you can see in the tables, though I have some remarks about why and how I undertook the analysis. While the full table does contain a good deal of the story, it’s not really in a form suitable for reading. At the very least I assume that you have read and know the story.

Why Chomsky’s Ideas Were so Popular

I am in the process of revising Golumbia Fails to Understand Chomsky, Computation, and Computational Linguistics and uploading it as a downloadable PDF. The following is from the new material going into the revision.
At the beginning of the chapter on Chomsky, Golumbia speculates that Chomsky’s ideas were so popular because they filled an existing need (p. 31):
[...] despite Chomsky’s immense personal charisma and intellectual acumen, it is both accurate and necessary to see the Chomsky revolution as a discourse that needed not so much an author as an author-function – a function “tied to the legal and institutional systems that circumscribe, determine, and articulate the realm of discourses” (Foucault 1965, 130).
I believe he is correct in this, that the intellectual climate was right for Chomsky, though I’m skeptical about his analysis.

Golumbia goes on to suggest that nascent neoliberalism provided the ideological matrix in which Chomsky’s ideas flourished (p. 32). More specifically (p. 32):
Chomsky insists that the only reasonable focus for analysis of cognitive and linguistic matters is the human individual, operating largely via a specific kind of rationality [...]; second, specifically with regard to the substance both of cognition and of language, Chomsky argues that the brain is something very much like one of the most recent developments in Western technology: the computer.

What is vital to see in this development is not merely the presence of these two views but their conjunction: for while one can imagine any number of configurations according to which the brain might be a kind of computer without operating according to classical rationalist principles (a variety of recent cognitive-scientific approaches, especially connectionism [...] offer just such alternatives), there is a natural fit between the computationalist view and the rationalist one, and this fit is what proves so profoundly attractive to the neoliberal academy.
Before going on, note that parenthetical remark in the second paragraph. We will return to it in a moment.

It is one thing to argue that Chomskyian linguistics dovetails nicely with neoliberalism, but it is something else to argue that it is attractive only to neoliberalism, and it is this latter that Golumbia seems to be arguing. And there he encounters an immediate problem: Chomsky’s own politics. For Chomsky’s own “real-world politics” (my scare quotes) are quite unlike the politics Golumbia finds lurking in his linguistics. He unconvincingly glosses over this by pointing out that Chomsky’s “institutional politics are often described as exactly authoritarian, and Chomsky himself is routinely described as engaged in ‘empire building’” (p. 33). Authoritarian empire building is common enough in the academy, but what has that to do with the radical left views that Chomsky has so consistently argued in his political essays?

To be sure, Chomsky is only a single case of divergence between real-world politics and ideology, but it is an important one as Golumbia has made Chomsky himself the center of his argument for the confluence between computationalist ideology and conservative politics. If the center does not hold, is the connection between real-world politics and computationalist ideology as close as Golumbia argues?

There’s a problem with this story, which is after all a historical one. And history implies change. That parenthetical reference to connectionism as providing an alternative to “classical rationalist principles” gives us a clue about the history. While it’s not at all clear to me that connectionism is meaningfully different from and opposed to those classical rationalist principles, let’s set that aside. While connectionism has roots in the 1950s and 1960s (if not earlier), the same time that Chomsky’s ideas broke through, it didn’t really become popular until the 1980s, by which time neoliberalism was no longer nascent. It was visibly on the rise. Shouldn’t a visible and powerful neoliberalism have been able to suppress conceptions inconsistent with it? Shouldn’t those classical rationalist principles have become more prevalent with the rise of neoliberalism rather than retreating to the status of but one conception among many?

Connectionist ideas flourished within an intellectual space that originated in the 1950s, and Chomsky’s ideas were catalytic, but certainly not the only ones (as we’ll soon see). As a variety of thinkers began to explore that space, some of them were attracted to different ideas; at the same time, the originating ideas, often grounded in classical logic, ran into problems. Consequently this new conceptual space became populated with ideas often at odds with those that brought the space into existence.

I suggest that the primary intellectual attraction of newly emerging computing technology was the simple fact that it gave thinkers a way to conceptualize how mind could be implemented in matter, how Descartes’ res cogitans could be realized in res extensa. Computing had been around for a long time, but not computing machines. The computing machines that emerged after World War II promised to be far more powerful and flexible than all others. That is what was attractive, not nascent neoliberalism, though that neoliberalism may have been waiting in the wings and shouting encouragement.

Thursday, July 28, 2016

Reply to Lindley Darden on Abstraction

Back in 1987 Lindley Darden published “Viewing the History of Science as Compiled Abstraction,” AI Magazine, Volume 8, Number 2, Summer 1987. David Hays and I replied with a letter, William Benzon and David Hays, Reply to Darden, AI Magazine, Volume 8, Number 4, 1987, pp. 7-8.

Download at:

Summary: Abstract patterns can be recognized in stories, which are likened to one another through Wittgenstein’s ‘family resemblance.’ But it is possible to use language to rationalize such abstract patterns. Thus “charity” can be rationalized as happening when “someone does something nice for someone else without thought of reward.” The notion of reward must itself be abstract, as the variety of physical rewards defies description and abstract rewards (such as fame) are possible. Thus the linguistic means of dealing with abstraction must be recursive. “Metalingual definition,” in recognition of Roman Jakobson, seems an apt term. Note further that science frequently concocts abstract accounts of physical things. Diamonds and lampblack are physical things, and quite different from one another. Yet both consist primarily of carbon, which is abstractly defined through the concepts and experimental technique of modern chemistry. Knowledge must be grounded in phenomenologically naïve commonsense knowledge; but the recursive nature of metalingual definition allows thought to move far away from the sensory base.

A Review of Rita Felski, The Limits of Critique

Under review: Rita Felski. The Limits of Critique. University of Chicago Press, 2015.

Dan Weiskopf in ArtsATL, July 5, 2016:
Felski documents extensively how critics in the grip of suspicion cast themselves as detectives, turning over each word with gloved hands and dusting spaces for prints. Every text appears as a crime scene: no matter how placid and controlled it appears on the surface, a transgression must be concealed just beneath. Texts take on a kind of sinister agency, disguising their nature from readers in order to more subtly influence them.

Suspicious reading, then, aligns itself with “guardedness rather than openness, aggression rather than submission, irony rather than reverence, exposure rather than tact” (p. 21). And it goes hand-in-hand with a view of language as tacitly coercive, a conduit for unconsciously replicating oppressive social structures. Critique is driven by the need to expose and name the “crime” perpetrated by the text, though here the quarry is “not an anomalous individual — a deranged village vicar, a gardener with a grudge — but some larger entity targeted by the critic as an ultimate cause: Victorian society, imperialism, discourse/power, Western metaphysics” (p. 89).
A false idol?
Felski argues compellingly that despite its many virtues, critique is a false idol. Perhaps its greatest failing is the inability to imagine anything outside of itself. Its totalizing ambitions force it to deny that there could even be any other intellectually rigorous method of engagement with a text. Whatever strays from the aim of demystifying and exposing the limitations of an artwork, or from seeing the work as ultimately an expression of relations of power that need to be opposed, must be a form of unchecked sentiment or complicity.
Critique only knows what it (thinks) it knows:
No, the problem lies with critique that only knows how to probe for the cracks, gaps and fissures in the fabric of a work, that sees debunking as the highest aim of interpretation, and that hollows out texts and artworks into mere arenas for ideological combat. As if rigor and insight had to be coupled with fault-finding and a strident meanness of feeling. This, too, is an effect of suspicious reading: to cast even your own emotions about a work into doubt, so much so that it’s rare that any critical texts contain meditations on our everyday feelings of amusement, pleasure, or surrender in the face of the works we are most passionate about.
But is Latour the way?
Unfortunately, Felski’s own proposals for “postcritical reading” are not always as sharply drawn. She is quick to reassure us — we scholars, at least — that “the antidote to suspicion is thus not a repudiation of theory . . . but an ampler and more diverse range of theoretical vocabularies” (p. 181). Drawing on Bruno Latour’s “Actor-Network Theory”, she proposes that we revise our view of the reader/text/context divides and see texts not as “servile henchmen” (p. 170) for ideology but as akin to agents themselves, enmeshed in our lives in countless ways and capable of compelling in us a far fuller range of emotional responses.

While it’s easy to applaud her call to move beyond the “vulgar sociology” (p. 171) of critique, it’s not clear that Latour’s generalized model of network relations is much of an advance. It’s also a little hard to square this slightly wonky scientism with her call for a renaissance of humanistic values in criticism. To write out one’s responses to a text or an image is to record the shifting interplay between two particulars, oneself and the work. So-called “strong” theories inevitably bleach out the specific nature of what emerges from these encounters. In this way they run counter to the impulse that drives criticism in the first place, which is to record the private, idiosyncratic act of figuring out for oneself what one thinks and feels about an artwork.
As you may know, I've given quite a bit of thought to Latour and even blogged a series of posts on Reassembling the Social, which I then turned into a working paper (downloadable PDF). The great weakness of Latour for literary studies is that, while he gives us a way of thinking about how we negotiate our relationships with one another, he has little to say about the mind and so gives us few to no tools for reaching into literature's interior. I have, however, suggested that his distinction between intermediaries and mediators is a good place to start. Intermediaries are transparent between interacting individuals while mediators require transformation and translation. I suggest, then, that we think of literary form as an intermediary while content must be mediated. That is to say, the literary text is both an intermediary and a mediator.

The potential of the world's tribes, Big, Small, and New

Sitting in Sri Lanka, recently at war with itself, Ram Manikkalingam contemplates Europe and the rest of the world in 3 Quarks Daily:
Meanwhile, (with perhaps a small degree of schadenfreude) I watch Europe become tense, turn in on itself, exclude communities, become subject to attacks, impose emergency law, and break apart with Brexit. I ask myself what is really going on in Europe. While we may draw a direct line from the invasion of Iraq to the attacks against civilians in Paris and Brussels, that alone is insufficient to explain why young men in Brussels and Paris will travel thousands of miles away to join a movement with which they have little social, cultural or political affinity. And it simply does not even begin to explain Brexit, Scottish nationalism, Marie Le Pen or Vladimir Putin. Maybe, just maybe, it might be more useful to start in Europe and ask how have things changed in the past decade since I have been living there. What do I see now that I did not see before? And how would I describe the politics of Europe to someone who had never been there, not experienced it, and needed to understand it better?

For all its progress and enlightenment, Europe is still a continent of Tribes – Big Tribes, Small Tribes and New Tribes. Big Tribes have their own state. Within this state they feel dominant (or at least feel that they ought to be). These Big Tribes may be as big as the English and French or as small as the Dutch and Danes. What they have in common is they live under their own political roof. Then we have the Small Tribes. These are invariably the Tribes that live within the borders of a state the Big Tribes dominate. These Tribes range from the Scots and the Northern Irish, to the Basques, the Tyroleans and the Corsicans. They yearn for a political roof that is closer to them. Or at least they reject the political roof that has been built on top of them by others who are more powerful then they. And finally you have the New Tribes. These are Tribes related to Europe's colonial project. Some arrived during colonialism, others after colonialism ended, and still others continue to enter today. This Tribe is viewed as foreign by the Big Tribes. But they are, or at least feel they are, as European as the other two Tribes. Let me unpack each of these Tribes a little further.
Well worth reading.

Ralph Nader endorses "We Need a Department of Peace"

“…very important little paperback…a pragmatic argument for a department of peace…”

Photo courtesy of The Lakeville Journal.

Nader's radio interview with Charlie Keil.

Meanwhile, in the Twitterersphere:







Purchase at Amazon.com (paperback, Kindle), or Barnes and Noble (paperback, NOOK Book).

Monday, July 25, 2016

Peace Now! War is Not a Natural Disaster

Department of Peace

Over at 3 Quarks Daily my current post reproduces a section of a slender book I’ve put together with the help of Charlie Keil and Becky Liebman. The book collects some historical materials about efforts to create a department of peace in the federal government, starting with a 1793 essay by Benjamin Rush, one of our Founding Fathers: “A Plan of a Peace-Office for the United States.” It includes accounts of legislative efforts in the 20th century and commentary by Charlie Keil and me. The book is entitled We Need a Department of Peace: Everybody’s Business; Nobody’s Job. It’s available at Amazon and Barnes and Noble in paperback and eBook formats.

Below the peace symbol I’m including the Prologue, which is by Mary Liebman, an important activist from the 1970s. The book includes other excerpts from the newsletters Liebman wrote for the Peace Act Advisory Council.

one of them old time good ones

Google in the cloud, is the tethersphere humanity's future?

Amazon is first in cloud services, Microsoft is second, and Google is playing catch-up ball, according to the NYTimes. So Google is ramping up. How will that go?
Can faster networks, lower prices and lots of artificial intelligence put Google ahead? Amazon’s lead seems to give it an edge for at least the next couple of years, as its cloud branch has perfected a method of developing hundreds of new cloud features annually. Yet while the company appears to have some basic artificial intelligence features, called machine learning, it seems to have little in the way of speech recognition or translation.

Mr. Lovelock, the Gartner analyst, predicted that Google would offer businesses the insights it has gained from years of watching people online. “Amazon views the customer as the person paying the bill, while Google believes the customer is the end user of a service,” he said. And Microsoft is promoting itself as the company that has products customers already know and use.
What will it be like, living your life tethered to AmaGoogSoft?

Saturday, July 23, 2016

danah boyd at Davos: We have met the enemy and he R us?

Yet, what I struggled with the most wasn’t the sheer excess of Silicon Valley in showcasing its value but the narrative that underpinned it all. I’m quite used to entrepreneurs talking hype in tech venues, but what happened at Davos was beyond the typical hype, in part because most of the non-tech people couldn’t do a reality check. They could only respond with fear. As a result, unrealistic conversations about artificial intelligence led many non-technical attendees to believe that the biggest threat to national security is humanoid killer robots, or that AI that can do everything humans can is just around the corner, threatening all but the most elite technical jobs. In other words, as I talked to attendees, I kept bumping into a 1970s science fiction narrative.

At first I thought I had just encountered the normal hype/fear dichotomy that I’m faced with on a daily basis. But as I listened to attendees talk, a nervous creeping feeling started to churn my stomach. Watching startups raise downrounds and watching valuation conversations moving from bubbalicious to nervousness, I started to sense that what the tech sector was doing at Davos was putting on the happy smiling blinky story that they’ve been telling for so long, exuding a narrative of progress: everything that is happening, everything that is coming, is good for society, at least in the long run.

Shifting from “big data,” because it’s become code for “big brother,” tech deployed the language of “artificial intelligence” to mean all things tech, knowing full well that decades of Hollywood hype would prompt critics to ask about killer robots. So, weirdly enough, it was usually the tech actors who brought up killer robots, if only to encourage attendees not to think about them. Don’t think of an elephant. Even as the demo robots at the venue revealed the limitations of humanoid robots, the conversation became frothy with concern, enabling many in tech to avoid talking about the complex and messy social dynamics that are underway, except to say that “ethics is important.” What about equality and fairness?
The tech sector misunderstands itself:
There is a power shift underway and much of the tech sector is ill-equipped to understand its own actions and practices as part of the elite, the powerful. Worse, a collection of unicorns who see themselves as underdogs in a world where instability and inequality are rampant fail to realize that they have a moral responsibility. They fight as though they are insurgents while they operate as though they are kings.

Friday, July 22, 2016

To Russia, with Love

I've just been looking at my stats and notice that, for some reason, I've recently been getting a lot of views from Russia. Here's the breakdown:
             Russia    USA
      Month   3516    6098
      Week    3223    1458 
      Day     1130     242
For the last month, USA is ahead of Russia. But for the last week and the most recent day, Russia is ahead of the USA. I also notice that I've had a big spike of interest in the last two days. That must be from Russia. What gives?

Character change in the Hollywood film

Rory Kelly has an interesting guest post at David Bordwell's Observations on film art. It's called "Rethinking the character arc." The opening paragraphs:
Since the 1960s, the character arc has become all but obligatory in Hollywood movies. Genres like sci-fi and horror, once largely unconcerned with character change, now often include it. Compare, for example, the 1953 version of War of the Worlds with Steven Spielberg’s 2005 remake. In the latter the alien invaders are not only defeated, but in the process Tom Cruise’s character, Ray Ferrier, becomes a better father.

Why has character change become so prominent? In The Way Hollywood Tells It (2006), David offered some suggestions. The psychological probing in plays by Arthur Miller, Paddy Chayefsky, and Tennessee Williams became popular models of serious drama. Teachers and writers were persuaded by Lajos Egri’s book, The Art of Dramatic Writing (1946), in which the author advises that characters should grow over the course of a play. There was also the impact of self-actualization movements of the 1960s and 1970s that held out the promise of personal growth and transformation.

It’s also likely that star power has had considerable influence. When trying to raise financing for a low-budget indie script a few years back, my collaborator and I were advised by three different seasoned producers to give our protagonist a more pronounced arc or we would never be able to attract a name actress to the role. Whether they were right or not, I do not know, but their shared assumption about attracting talent is telling.

Given how common the character arc has become, we need to better understand how it is typically handled. I think we can identify and analyze six narrative strategies that create a particular character type: the protagonist who is flawed but is capable of positive psychological change. My primary example will be The Apartment (1960).

I’ll also consider aspects of character change in Casablanca (1942), Jaws (1975), and About a Boy (2002). This list will allow me to consider the character arc over six decades, from the studio era to contemporary Hollywood, and across several genres.

Tequila Sunrise in the Library: Another take on "digital humanities"

As I noted in an earlier post, Who put “The Terminator” in “Digital Humanities”?, it seems to me that in its very construction the phrase digital humanities was destined to become a bright shiny object that attracted some and repelled others almost without regard for its extension in the world. There is a substantial anti-science, anti-technology line of thinking in the humanities that goes back at least to the Romantics. Digital humanities proclaims a species of humanities that is conceived on the side of science and technology. It is thus different in its effect from humanities computing, which subordinates computing to humanities. Computing, yes, but computing in service to the humanities; we can live with that. But humanities that is born digital, is that even possible? Maybe it's a miracle that will save us, or maybe it's an abomination that's a sign of the coming End Times.

Compare lines 35 and 36 of "Kubla Khan":
It was a miracle of rare device,
A sunny pleasure-dome with caves of ice!
Miracles have a very different kind of causal structure from devices, which are human-made, even rare ones. Miracles, in contrast, are divine. Something that partakes of both is strange indeed. The digital humanities lab would hardly seem to be a sunny pleasure-dome with caves of ice, but who knows.

Wednesday, July 20, 2016

Pokémon Go and the citizen scientist

Millions of people have spent the past week walking around. Ostensibly, they are playing the online game Pokémon Go and hunting for critters in an ‘augmented reality’ world. But as gamers wander with their smartphones — through parks and neighbourhoods, and onto the occasional high-speed railway line — they are spotting other wildlife, too.

Scientists and conservationists have been quick to capitalize on the rare potential to reach a portion of the public usually hunched over consoles in darkened rooms, and have been encouraging Pokémon hunters to snap and share online images of the real-life creatures they find. The question has even been asked: how long before the game prompts the discovery of a new species?

It’s not out of the question: success is 90% perspiration after all, and millions of gamers peering around corners and under bushes across the world can create a very sweaty exercise indeed. By definition, each Pokémon hunter almost certainly holds a high-definition camera in their hands. And there is a precedent: earlier this year, scientists reported Arulenus miae, a new species of pygmy devil grasshopper, identified in the Philippines after a researcher saw an unfamiliar insect in a photo on Facebook (J. Skejo and J. H. S. Caballero Zootaxa 4067, 383–393; 2016).

But Pokémon Go players beware. It is one thing to conquer a world of imaginary magical creatures with names like Eevee and Pidgey, and quite another to tangle with the historical complexity of the Inter­national Code of Zoological Nomenclature. So, say you do manage to snap a picture of something previously unknown to science — what then? Let Nature be your guide.
H/t 3QD.

Miyazaki’s Metaphysics: Some Observations on The Wind Rises


Another working paper. Title above, abstract, TOC and introduction below.

Download at:
Abstract: In The Wind Rises Hayao Miyazaki weaves various modes of experience in depicting the somewhat fictionalized life of Jiro Horikoshi, a Japanese aeronautical engineer who designed fighter planes for World War II. Horikoshi finds his vocation through ‘dreamtime’ encounters with Gianni Caproni and courts his wife with paper airplanes. The film opposes the wind and chance with mechanism and design. Horikoshi’s attachment to his wife, on the one hand, and to his vocation on the other, both bind him to Japan while at the same time allowing him to separate himself, at least mentally, from the imperial state.

CONTENTS

Making Sense of It All: Miyazaki’s The Wind Rises 2
Miyazaki’s The Wind Rises, Some Observations on Life 6
Some Thoughts about The Wind Rises 9
The Wind Rises, It Opens with a Dream: What’s in Play? 10
Horikoshi at Work: Miyazaki at Play Among the Modes of Being 23
The Pattern of Miyazaki’s The Wind Rises 31
From Concept to First Flight: The A5M Fighter in Miyazaki’s The Wind Rises 32
Why Miyazaki’s The Wind Rises is Not Morally Repugnant 40
Horikoshi’s Wife: Affective Binding and Grief in The Wind Rises 49
The Wind Rises: A Note About Failure, Human and Natural 57
Problematic Identifications: The Wind Rises as a Japanese Film 59
How Caproni is Staged in The Wind Rises 67
The Wind Rises: Marriage in the Shadow of the State 89
Miyazaki: “Film-making only brings suffering” 99
Counterpoint: Germany and Korea 100
Wind and Chance, Design and Mechanism, in The Wind Rises 103
Appendix: Descriptive Table 112

Making Sense of It All: Miyazaki’s The Wind Rises

I was going great guns writing about The Wind Rises in November and December of last year. And then the energy ran out while I was drafting “Registers of Reality in The Wind Rises.” Here’s the opening paragraph:
The term, “registers of reality”, is not a standard one, and that’s the point. It’s not clear to me just what’s going on here, and so we might as well be upfront. But it has to do with those “dream” scenes, among other things. And it’s also related to what seems to be a common line on The Wind Rises, namely that while all other Miyazaki films have elements of fantasy in them, often strong ones, this does not.
Many of the reviews casually mention those so-called dream sequences. You can’t miss them. They seem, and are in a way, typical of Miyazaki. But if you look closely you’ll see that they’re not all dream sequences, not quite. Without getting too fussy let’s call them dreamtime with the understanding that sleep is only one of the occasions of dreamtime, that one can enter it under various circumstances–a discussion I open in the posts, “The Wind Rises, It Opens with a Dream: What’s in Play?” and “Horikoshi at Work: Miyazaki at Play Among the Modes of Being.” And they happen only in the first half of the film and at the very end. That’s one thing.

But I had more in mind with the phrase “registers of reality.” A couple paragraphs later in that incomplete draft:
That’s one set of questions. What’s the parallel set of questions we must ask about Horikoshi’s relationship with Naoko Satomi? I ask that question out of formal considerations. As I pointed out in an early post in this series, “The Pattern of Miyazaki’s The Wind Rises”, Horikoshi interacts with Caproni in the first half of the film and with Naoko in the second half. His meetings with Caproni didn’t take place in ordinary mundane reality. What about his meetings with Naoko?
All of those meetings DO take place in mundane reality. The first half of the film alternates between mundane reality and dreamtime (whether waking or sleeping). The second half alternates between work-time and Naoko-time. But the two are, of course, symbolically related. The object of the “registers of reality” post was to make sense out of all this, out of how Miyazaki weaves them together–mundane and dreamtime, work and love–into a life.

But it got too hard, just too hard. And so I stopped. I had other posts planned, including one on “Unity of Being in The Wind Rises.” I’ll get back to it one day. I need to. Perhaps it will take more conceptual apparatus than I can work up in a blog post. Who knows?

That was half a year ago and I’ve not yet gotten back to it. I’ve decided to take the work I’ve done and assemble it into a working paper. Before I do that, however, I can at least indicate something of where I was going, of where I hoped to arrive.

Friday, July 15, 2016

Golumbia Fails to Understand Chomsky, Computation, and Computational Linguistics

Edit Note, 7.16.16: I've added two substantial paragraphs near the end of the section, "Computing, Abstract and Real." Check the Golumbia tag for further thoughts, as I'm in the process of adding new material to a revised version of this piece.
As many of you know, David Golumbia is one of three authors of a recent article that offered a critique of the digital humanities (DH), Neoliberal Tools (and Archives): A Political History of Digital Humanities [1]. The article sparked such vigorous debate within the DH community that I decided to investigate Golumbia’s thinking. I’ve known about him for some time but had not read his best known book:
David Golumbia. The Cultural Logic of Computation. Harvard University Press, 2009.
I’ve still not read it in full. But I’ve read enough to arrive at a conclusion.

Thus this piece is not a book review. I focus on the second chapter, “Chomsky’s Computationalism,” with forays into the first, “The Cultural Functions of Computation,” and a look at the fourth, “Computationalist Linguistics.”

Why Chomsky?

Chomsky is one of the seminal thinkers in the human sciences in the last half of the twentieth century. The abstract theory of computation is at the center of his work. His early work played a major role in bringing computational thinking to the attention of linguists, psychologists, and philosophers and thus helped catalyze the so-called cognitive revolution. At the same time Chomsky has been one of our most visible political essayists. This combination makes him central to Golumbia’s thinking, which is concerned with the relationship between the personal and the political as mediated by ideology. Unfortunately his understanding of Chomsky’s thinking is so tenuous that his enterprise is flawed from its inception. I am not prepared to say whether or not the rest of the book redeems its dismal beginning.

First I consider the difference between abstract computing theory and real computing, a distinction to which Golumbia gives scant attention. Then I introduce his concept of computationalist ideology and criticize his curious assertion that computational linguistics “is almost indistinguishable from Chomskyan generativism” (p. 47). From there I move to his treatment of the Chomsky Hierarchy, pointing out that it is a different kind of beast from hierarchical power relations in society. The last two sections examine remarks that are offered almost as casual asides. The first remark is a speculation about the demise of funding for machine translation in the late 1960s. Golumbia gets it wrong, though he lists a book in his bibliography that gets it right. I conclude with some corrective observations in response to his off-hand speculation about the ideological demographics of linguistics.

Computing, Abstract and Real

I want to start by making a standard distinction between computing in the abstract and embodied computation, “real” if you will. This distinction is important in the context of Golumbia’s book because Chomsky concerned himself only with computing in the abstract. In contrast, computational linguistics (hereafter CL) is about real computing, though computational linguists may avail themselves of abstract theory as an analytical tool. The so-called Chomsky Hierarchy, which we’ll get to a bit later, is one of those analytical tools, and an important one.

Real computation is a physical activity. It is bounded in time – it must come to an end or it has failed – and space – it is realized in physical stuff, Descartes’ res extensa. In decades stretching from the present back into the 19th century, that physical stuff has been some kind of mechanical, electrical, or electronic system. Starting roughly in the middle of the previous century various disciplines have been entertaining the idea that computation might also be realized in animal nervous systems, the human brain in particular, and even the molecular machinery of DNA and RNA.

Computing in the abstract is not physically realized. It is a mathematical activity concerned with purely abstract machines, often called automata. The abstract theory is concerned with the powers of abstract machines as a function of the symbols available to a machine, the states the machine can take, and the operations through which the machine moves from one state to the next.
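To make that vocabulary concrete, here is a toy automaton written out in Python (the example is mine, purely for illustration). The symbols, the states, and the transition table are the entire machine; nothing about hardware, speed, or memory size enters into the specification. This one accepts binary strings containing an even number of 1s.

    # A finite automaton given purely by symbols, states, and transitions.
    # It accepts binary strings that contain an even number of 1s.
    SYMBOLS = {"0", "1"}
    STATES = {"even", "odd"}
    START = "even"
    ACCEPT = {"even"}
    TRANSITIONS = {
        ("even", "0"): "even",
        ("even", "1"): "odd",
        ("odd", "0"): "odd",
        ("odd", "1"): "even",
    }

    def accepts(string):
        """Run the machine over the string; report whether it halts in an accepting state."""
        state = START
        for symbol in string:
            if symbol not in SYMBOLS:
                raise ValueError("unknown symbol: " + repr(symbol))
            state = TRANSITIONS[(state, symbol)]
        return state in ACCEPT

    print(accepts("0110"))  # True: two 1s
    print(accepts("1101"))  # False: three 1s

The questions abstract theory asks are about machines like this one, such as what classes of strings can be recognized as the machine is given more states, a memory stack, or an unbounded tape, not about any physical device.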

We can appreciate the difference between real and abstract computing with a relatively simple example, the contrast between tic-tac-toe on the one hand and chess on the other. Abstractly considered they are the same kind of game, and a trivial one at that. They are both finite. As real activities, that is, as activities realized in physical matter, they are quite different. Tic-tac-toe remains trivial, though perhaps not so for a six-year old; but chess becomes profoundly difficult and challenging for even the most brilliant of humans.
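One way to feel the difference is to let a real machine exhaust the game tree. The sketch below (mine, purely for illustration) enumerates every possible game of tic-tac-toe by brute force and finishes in a second or so of real computation; the same brute-force idea applied to chess runs into a game tree conventionally estimated at something like 10^120 branches, far beyond anything physically realizable.

    # Brute-force enumeration of every complete game of tic-tac-toe.
    LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
             (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
             (0, 4, 8), (2, 4, 6)]              # diagonals

    def winner(board):
        """Return 'X' or 'O' if someone has three in a row, else None."""
        for a, b, c in LINES:
            if board[a] != "." and board[a] == board[b] == board[c]:
                return board[a]
        return None

    def count_games(board="." * 9, player="X"):
        """Count every distinct move sequence from this position to the end of a game."""
        if winner(board) or "." not in board:
            return 1
        total = 0
        for i, cell in enumerate(board):
            if cell == ".":
                nxt = board[:i] + player + board[i + 1:]
                total += count_games(nxt, "O" if player == "X" else "X")
        return total

    print(count_games())  # 255168 complete games -- the whole tree, exhausted

Abstractly the two games sit in the same class; physically, one yields to a dozen lines of code and the other does not.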

Thursday, July 14, 2016

Rickey Smiley: Black vs. White Marching Bands



I remember back in the mid-1960s when my high school marching band – the Marching Rams from Richland Township High School in western Pennsylvania – went to the Cherry Blossom festival in Washington, D.C. A.Very.Big.Effing.Deal. Yessir! We were in the formation area waiting to move out into the parade. There was a black band from I-don't-know-where practicing. Their drummers bounced their sticks off the pavement, caught them, and kept on drumming without missing a beat. Color my jaw dropped all the way to China.

Tuesday, July 12, 2016

DH Facing the Public, Sharon M. Leon in LARB

As far as I can tell, most of the debates about digital humanities – what is it? is it complicit in neoliberalism? – are about the position of digital humanities within the academy. These are conversations among scholars about the types of research they do and, less often, the courses they teach. They aren’t about how humanists address the needs and interests of people outside university walls. And yet digital technology, and the web in particular, provides a platform through which scholars can interact with the public at large. And many scholars do that through public-facing blogs. But these activities aren’t much touched upon in the DH debates.

In the most recent interview in her LARB series, Melissa Dinsman talks with Sharon M. Leon, director of Public Projects at the Center for History and New Media (CHNM) at George Mason University. Her job is to face the public. Here are some of her remarks.

* * * * *

Can you elaborate on the differences between digital history and digital public history?

I like to make the distinction between doing history in public and doing public history. I think a lot of scholars doing digital history work are doing that work in public and making it available for an open audience to engage with, but the work of digital public history is actually formed by a specific attention to preparing materials for a particular audience — to address their questions, to engage with them, to target a real conversation with the public about a particular aspect of history. In lots of ways, public history doesn’t look like what a traditional historian would consider to be a scholarly argument; it is a little bit more subtle and much more dialogic. It has a much greater sense of shared authority and it is much less about winning a methodological argument with a community of scholars. I engage the public on the grounds that there are multiple causes of a particular event and multiple historical perspectives for understanding it. The primary difference, however, is that public history is directed at a particular audience. It’s not a “we will build it, they will come, and they might be interested” mentality. Instead, it is “I am going to do research about you, I’m going to find out the kinds of materials and prior knowledge you bring to the topic, and I’m really going to engage you.” ...

Coming back to the relevance question, public history has always been targeted and worked in that direction because it is specifically about engaging the public in a conversation about history. That work has been going on for a long time and the interesting thing about it is that just as digital work sometimes struggles for recognition and authorization in the halls of academia, public history has had a really heavy lift. There’s this weird perception that if one engages with someone who is not a credentialed scholar about scholarly questions, somehow we have diluted our commitment to inquiry. ...

How do you think the general public understands the term “digital humanities” or, more broadly, the digital work being done in the humanities (if at all)?

The public doesn’t understand the term “digital humanities,” but they do understand the work if I frame what I do as “I’m a historian who uses digital tools and methods to answer historical questions.” I think as a field our penetration into the consciousness of the public is almost nonexistent, though I imagine your series might help some. The other way to answer this question is to say that we don’t do nearly enough outreach or evaluation to have any idea what the answer is. What I have learned doing public history work is that you have to prepare, you have to know about your audience going in, you have to do the work, and you have to follow up to find out if they got anything out of it. The majority of DH work only does the middle step. They do the work. And it may not be their goal to learn what the public understands, but I think if we are going to make these claims about public relevance, we have to do all of the steps along the way so that we have some sense about our impact.

My next question has to do with public intellectualism, which many scholars and journalists alike have described as being in decline (for example, Nicholas Kristof’s New York Times essay). What role, if any, do you think digital work plays? Could the digital humanities (or the digital in the humanities) be a much needed bridge between the academy and the public, or is this perhaps expecting too much of a discipline?

The problem of the public intellectual is about the way that intellectuals frame their work for the public. It’s not really about the medium. Writing an op-ed for The New York Times for a historian or a literary scholar or a political scientist is a sort of one-shot deal that will reach a certain number of people. Whereas someone like Ta-Nehisi Coates, who is writing all the time, engaging with really important questions, has a digital presence, and actually does engage with members of the public, is a much more effective public intellectual. But I think there are lots of public intellectuals in local communities that we don’t know anything about because we aren’t in that community. Some of that work is digital work and some of it is not.

* * * * *

I have a number of posts on citizen science that are relevant here.

Monday, July 11, 2016

Pāṇini was illiterate! How marvelous!

I have long known that the first grammarian was a 4th century BCE Indian named Pāṇini. I'd tacitly assumed that he was literate and that, consequently, he produced written texts. I've just learned that that is not so. This is from a Language Log post by Geoff Pullum:
...the finest and most detailed phonological description of any language was done about 3,000 years ago for Sanskrit by an ancient Indian known to us as Panini (sticklers note: the first "n" should have a dot under it to indicate retroflexion). If language was not "prominent" for Panini and his devoted circle of followers, successors, and commentators, I don't know what it would mean for language to be "prominent". But Panini was not literate: his phonological description was cast in the form of a dense oral recitation rather like a kind of epic poem, and designed to be memorized and repeated orally. The wonderful Devanagari writing system had yet to be developed. (When it was, naturally it was beautifully designed for Indic languages, because it had the insight of a phonological genius underpinning it.)
I am astounded. Think about it. How'd they do it – Pāṇini and his students?

The Humanities? What’s That? (w/ a look at Pamela Fletcher in LARB)

I don’t know when I first heard about “the humanities,” perhaps college, but maybe before. But if before, well, the phrase wasn’t meaningful until college, and after. And ever after it’s been a burden, always in crisis and, always: Am I a humanist or not? Of course I’m not talking about the humanities as in humanism, such as “Renaissance humanism” – from the dictionary: “an outlook or system of thought attaching prime importance to human rather than divine or supernatural matters.” I’m talking about a group of academic disciplines (philosophy, literature, history, etc.) or perhaps – and here things begin to get murky – a way of approaching the subject matter within those disciplines.

For one can approach literature and art as a psychologist. And philosophy departments harbor experts in symbolic logic, which looks and feels an awful lot like math. Come to think of it, I used a philosophy course in logic to fulfill the math requirement for my B.A.

My problem, you see, is that while I study literary texts and films, I’ve spent an awful lot of time looking at them through cognitive psychology, neuroscience, and computational linguistics. And then there’s all those diagrams I use (and love). They just break the discursive flow, yet often they carry the argument.

And speaking of diagrams, what about all those charts and graphs that turn up in computational criticism (aka “distant reading”)? They may be about literary texts, but they come out of computers and statistics and data munging. Humanities?
I think not! 
And just who, may I ask, are you?
Could it be that 19th century disciplinary categories don’t fit 21st century conceptual practices and possibilities? Just what ARE the humanities NOW?

And that brings me to Melissa Dinsman’s interview with Pamela Fletcher in her excellent series at the Los Angeles Review of Books. Fletcher casts doubt on the category, “the humanities,” though not necessarily in the way I have just been doing. Here are some remarks from the interview.

* * * * *

What is it about a liberal arts college (or Bowdoin specifically) that makes “digital humanities” an unproductive or not very meaningful label?

Your question gives me pause, and makes me wonder if this is really a good description of liberal arts colleges, or even of Bowdoin, or if it is just my own perspective. I should say that many liberal arts colleges (Hamilton comes to mind) have used the Digital Humanities category very productively, and my own colleagues at Bowdoin have put together a course cluster on that topic. I guess what I meant is that we are such a small — and collegial — place that limiting our conversations about digital and computational work to scholars in the humanities ultimately seemed limiting. And students simply do not divide themselves into those categories: there are plenty of art history majors who double major in physics or biology. And probably even more of the students in my class — or any humanities class — are also fully immersed in work across the curriculum no matter what their major might be. So that particular way of dividing human knowledge into three broad categories — humanities, natural sciences, social sciences — just doesn’t map onto my experience of liberal arts very well.

Sunday, July 10, 2016

DH and Critique (DHpoco), with a Nod to Latour (via Felski)

I just now came across this three-year-old conversation, Open Thread: The Digital Humanities as a Historical “Refuge” From Race/Class/Gender/Sexuality/Disability? It’s a long and very interesting conversation, with 166 comments between May 10, 2013 and May 14. I am reading the conversation in the general context of the May 1 LARB critique by Daniel Allington, Sarah Brouillette, and David Golumbia, Neoliberal Tools (and Archives): A Political History of Digital Humanities. This not-so-old conversation is distinguished by the variety and number of participants and its overall civility.

My purpose, however, is not to attempt either a summary or analysis of the conversation, but to highlight two voices I find particularly resonant. First I look at Chuck Rybak, who speaks up for “poetics/form/rhetoric.” Then Rafael Alvarado suggests that DH should not subordinate itself to the Ministry of Cultural Studies.

The Work Itself

Chuck Rybak, from the third day:
This is an amazingly varied and interesting discussion thread–thank you Roopika and Adeline.

When I look at the opening quote, I immediately wanted to paste in the entirety of Marjorie Perloff’s essay “Crisis in the Humanities.” Instead of the entire piece, I’ll settle for this paragraph:

“It is, I would argue, the contemporary fear and subordination of the pleasures of representation and recognition –the pleasures of the fictive, the what might happen to the what has happened–the historical/cultural– that has trivialized the status of literary study in the academy today. If, for an aesthete like Walter Pater, art was always approaching the condition of music, in our current scheme of things, art is always–and monotonously– approaching the condition of “culture.” Indeed, the neoPuritan notion that literature and the other arts must be somehow “useful,” and only useful, that the Ciceronian triad —docere, movere, delectare– should renounce its third element (“delight”) and even the original meaning of its second element, so that to move means only to move readers to some kind of virtuous action, has produced a climate in which it has become increasingly difficult to justify the study of English or Comparative Literature.”

Since I’m a creative writer and lit prof who teaches a lot of poetry, Perloff means a lot to me as a critic. My sense is that Perloff would reject the word “refuge” and replace it with something like “return.” But a return to what? Simply, a focus on poetics/form/rhetoric. When I first started dabbling in DH work, I was immediately struck by how text-centered the enterprise is, and that has proven very useful pedagogically, especially when working with an undergraduate population who often prefer to flee the text as quickly as possible and get right to ideas in the abstract. In short, I’m sympathetic to Perloff here because I think it approaches this question in terms of embracing an interest rather than primarily rejecting something else. Perloff, in that essay, gives respect to cultural readings of works like Ulysses and Heart of Darkness, especially as they relate to empire, etc. Still, what Martha Smith might describe as a refuge (or seemingly so), I hear someone like Perloff saying what’s needed is a return to poetics. Perloff wasn’t writing about DH, but I imagine one thing Perloff would respect about some DH tools are their ability to hone in on language/aesthetics and treat a poem as a unique rhetoric, stylistic artifact, etc. I’m learning so much from reading this thread, but I chafe a little bit against the notion of a “refuge” from a specific set of concerns, largely because it might attribute an act of will where someone might just be pursuing their particular interests.

And having written this, I can also hear the response: “Perloff, as you demonstrate, is exactly someone who would see this as a refuge.” I’m not sure I could completely rebut that. PoCo is not my area of study/specialization, so I really am offering this very generally.
I note that there was no response to this comment despite the fact that, as Rybak noted at the end, he was using Perloff to advance the kind of position that was brought into question by the thread. Perhaps, because it was late in the conversation, people were tired.

Now, consider a passage by Derek Attridge, which is from a dialog he had with Henry Staten, “Reading for the Obvious: A Conversation,” World Picture 2, Autumn 2008. It has nothing to do with DH, but it speaks to Rybak/Perloff in the face of the demand for critique:
As you know, I’ve been trying for a while to articulate an understanding of the literary critic’s task which rests on a notion of responsibility, derived in large part from Derrida and Levinas, or, more accurately, Derrida’s recasting of Levinas’s thought, one aspect of which is an emphasis on the importance of what I’ve called variously a “literal” or “weak” reading. That is to say, I’ve become increasingly troubled by the effects of the enormous power inherent in the techniques of literary criticism at our disposal today […] The result of this rich set of critical resources is that any literary work, whether or not it is a significant achievement in the history of literature, and whether or not it evokes a strong response in the critic, can be accorded a lengthy commentary claiming importance for it. What is worse, the most basic norms of careful reading are sometimes ignored in the rush to say what is ingenious or different. (The model of the critical institution whereby the critic feels obliged to claim that his or her interpretation trumps all previous interpretations is clearly part of the problem here, and beyond this the institutional pressure to accumulate publications or move up the ladder.) We may be teaching our students to write clever interpretations without teaching them how to read...
Attridge and Staten went on to devote a book to the practice of weak reading: Derek Attridge and Henry Staten, The Craft of Poetry: Dialogues on Minimal Interpretation, Routledge 2015. Each chapter takes the form of a back-and-forth conversation (via email) about a poem or maybe two. Their object is to find agreement as much as possible, and to clearly articulate disagreement where necessary [1]. They note that they are not trying to replace or displace critique, but to lay a foundation from which other forms of criticism can advance.

Saturday, July 9, 2016

Should digital humanists know how to code?

From Rafael Alvarado, The Code Problem:
The first is to learn for the reason that Tim Berners-Lee exhorts journalists to learn–you need to know how to use tools to manipulate data because knowledge is increasingly produced as data, that is, in more or less structured forms all over the web. This is because the future of the humanities “lies with journalists humanists who know their CSV from their RDF, can throw together some quick MySQL queries for a PHP or Python output … and discover the story lurking in datasets released by governments, local authorities, agencies, digital archives, online libraries, academic centers, or any combination of them – even across national borders.” [...]

The second reason to learn to code is philosophical. You should be able to write code–not necessarily program or, God forbid, “develop”–so that you can understand how machines think. Play with simple algorithms, parse texts and create word lists, generate silly patterns a la 10 PRINT. Get a feel for what these so-called computer languages do. Get a feel for the proposition, to which I mostly assent, that text is a kind of code and code a kind of text (but with really important differences that you won’t discover or understand until you play around with code). This level of knowledge does not require any great mastery of a language in my view. It only requires a willingness to get one’s hand dirty, make mistakes, and accept the limitations of beginner’s knowledge. I personally believe that this second reason is as or more important than the first.
OK. Moreover:
To get to this place with code, to be able to write simple scripts that are useful or interesting or both, you don’t need to do many of the things your coding brethren think you should do. First and foremost, you don’t need to learn a specific language unless there is a compelling local reason to do so, such as being in a class or on a project that uses the language. [...]

Second, you don’t need to be involved in writing a full-blown application to do DH-worthy coding. Applications are fine, and being on a collaborative project has huge benefits of its own, but know that application development is a huge time-suck and that applications are like restaurants–fun to set up but most likely to fail in the real world. Lots of DH coding projects in my experience are journeys, not destinations. [...]

Third, there is no reason ever to be forced into using a specific editor or coding environment, especially if it is a difficult one that "real" coders use. [...]

Beyond these specific problems, though, there is a more fundamental issue about the culture of code that contributes to the condition that Miriam [Posner] and others confront: in spite of the well-meaning desire by many coders to bring everyone into the coding fold, there is a countervailing force that prevents this from happening and which emanates from these same coders. This is the force of mystification. Mystification appears in many forms, including some of the things I just described–insisting on a difficult editor, dissing certain languages–but it more generally comes from treating code competence as a source of identity, whether it be personal or disciplinary. As long as digital humanists regard coding as a marker of prestige–and software as a token in the academic economy–and not as a means to other forms of prestige (such as making discoveries or writing books), then knowledge of coding will always be hedged in by taboos and rites of passage that will have the effect of pushing away newcomers.
Addendum (7.11.16): From an interview with Pamela Fletcher in the Los Angeles Review of Books:
No, I don’t think you necessarily need to know how to code to do meaningful digital humanities work, not least because collaboration is a central part of DH work and the idea of people bringing different skill sets — and research problems — together is one of its core strengths. Yes, because as a humanist I am deeply committed to the idea that in order to communicate with other people you need to speak their language, and coding is the language of computation. In our new Digital and Computational Studies curriculum at Bowdoin we are starting from the premise that every student who goes through our program needs to understand at least the underlying logic of how computers work and the many layers of abstraction and representation that lie between binary code and what you see on the screen. This is partially about communication: you need to understand what computers are (and aren’t) good at in order to come up with intelligent computational problems and solutions. But it is also because each stage of computational abstraction involves decisions that are essentially acts of interpretation, and you can’t do meaningful work if you don’t understand that. This is equally true, of course, of anyone who uses technology, which is most of us. So I’d say ideally we should be educating all our students to be computationally literate, which is not the same as being expert programmers.
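Fletcher's point about layers of abstraction can be illustrated with a few lines of Python: the same four bytes are, depending on the interpretive decision we make, a bit pattern, a number, or a piece of text. A toy example, nothing more.

```python
# The same four bytes, read at three different layers of abstraction.
data = bytes([0x42, 0x6F, 0x77, 0x64])          # four arbitrary byte values

print(" ".join(f"{b:08b}" for b in data))       # as raw bits: 01000010 ...
print(int.from_bytes(data, byteorder="big"))    # as a single integer
print(data.decode("ascii"))                     # as text: "Bowd"
```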

Friday, July 8, 2016

Color the Subject

I've just been reading through this post, which is about digital humanities and cultural criticism, more or less, and its many comments. One topic under discussion: Is digital technology "transparent"? Thinking about that makes my head ache; after all, "digital technology" covers a lot of bases. Photography, whether analog or digital, is often treated as "transparent" in the sense that photographs register what is really and truly there, without bias. Any reasonably serious photographer knows that this is not true, at least not in the most obvious meaning. This old post discusses color, which is deeply problematic for any serious photographer. While I frame the discussion as being about subjectivity, it is equally about the technology.
* * * * * 
It's time for another post from The Valve. This one's rather different from my current run of posts. It's about color, subjectivity, and digital cameras. I'm posting it here as a complement to a recent post by John Wilkins. It generated a fair amount of discussion over there, which I recommend to you. I've appended two short comments below. I've got two other posts that are related to this: this one looks at mid-town Manhattan from Hoboken, NJ, and this one looks at a pair of railroad signals in Jersey City.

This post is about color and subjectivity. It's not that I am deeply interested in the phenomenon of color; I'm not. Nor, in some sense, am I interested in subjectivity. But I am interested in literary meaning and beauty and so have to deal with subjectivity in that context. Meaning and beauty are subjective.

The purpose of this post is to think about a certain notion of subjectivity. All too often we identify subjectivity with the idea of unaccountable and/or idiosyncratic differences in the way people experience the world in general, or works of art in particular. I think such differences, though apparently quite common in human populations, are incidental to subjectivity. Things are subjective in that they can be apprehended only by subjects.

I take it that the color of objects is subjective in this sense. There is, for example, no direct relationship between the wavelengths of light reflected from a surface and the perceived color of that surface. Oddly enough, it is because the relationship between reflected wavelengths and perceived color is indirect that perceived color can be relatively constant under a wide variety of circumstances. It is also the case that, because different subjects have different visual perceptual systems, they will perceive color differently; that's what color blindness is about.

Whatever literary experience is, however it works, it can happen only in subjects. Whereas the difference among subjects with respect to color perception is relatively small, though real, the difference among subjects with respect to literary taste is relatively large. But, so what? I do note, however, that taste can and does change.
I want to think about color because it seems to be much simpler than the meaning of literary texts. So simple in fact that some aspects of color phenomena can be externalized in cameras and computers. When a digital camera “measures” or “samples” the wave front of light incident upon its sensor, it is interacting with the external world in a relatively simple way. Relative, that is, to what happens in the interaction between wave fronts and the retinal membranes of, say, reptilian or mammalian perceptual systems.
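To make that indirectness concrete, here is a toy model, in Python with numpy, of what a single camera channel "measures": a weighted sum, over wavelengths, of illuminant power, surface reflectance, and sensor sensitivity. Every number below is invented for illustration. The raw response shifts with the light, while a crude von Kries-style correction, the core of white balance, keeps the corrected value nearly constant.

```python
import numpy as np

# Invented spectra, coarsely sampled at four wavelengths (roughly 450-600 nm).
reflectance  = np.array([0.2, 0.4, 0.7, 0.5])   # one surface
daylight     = np.array([1.0, 1.0, 0.8, 0.6])   # bluish illuminant
incandescent = np.array([0.2, 0.4, 0.8, 1.0])   # reddish illuminant
red_channel  = np.array([0.0, 0.1, 0.4, 1.0])   # sensor sensitivity

def response(illuminant, surface, sensitivity):
    # The channel's "measurement": illuminant x reflectance x sensitivity,
    # summed over wavelengths.
    return float(np.sum(illuminant * surface * sensitivity))

# The same surface yields different raw responses under different light
# (roughly 0.56 under daylight vs 0.74 under incandescent)...
r_day  = response(daylight, reflectance, red_channel)
r_tung = response(incandescent, reflectance, red_channel)
print(r_day, r_tung)

# ...but dividing by the response to a white reference under the same light
# brings the two back together (roughly 0.55 vs 0.54): a cartoon of color
# constancy.
white = np.ones_like(reflectance)
print(r_day / response(daylight, white, red_channel),
      r_tung / response(incandescent, white, red_channel))
```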

With that in mind, let's look at a photograph.

On the morning of 25 November 2006 I walked to the shore of the Hudson River to take pictures of the sunrise over the south end of Manhattan Island. I used a Pentax K100D and, for the most part, let the camera set the parameters for each shot. I shot the pictures in so-called RAW format and then processed them on my computer.
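The "processed them on my computer" step can itself be sketched in a few lines. This assumes the third-party rawpy and imageio packages and a hypothetical file name; the point is that white balance, brightness, and bit depth are all decisions made after the shutter clicked.

```python
import rawpy
import imageio

# Hypothetical Pentax raw file; the camera records a mosaic of sensor values.
with rawpy.imread("sun-tree.pef") as raw:
    rgb = raw.postprocess(
        use_camera_wb=True,    # trust the white balance the camera recorded
        no_auto_bright=True,   # don't let the library rescale the exposure
        output_bps=8,          # 8 bits per channel for an ordinary PNG
    )
imageio.imwrite("sun-tree.png", rgb)
```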

Here's one of those photographs before I did anything to adjust the color:

[Figure 1: sun-tree-raw.jpg, image without color adjustment]

Wednesday, July 6, 2016

The Seinfeld Influence on Bill Gates

This is a link to a post at Bill Gates's blog. The post is about his relationship with Warren Buffett. On that page there's a video titled "Philanthropists in Golf Carts Eating Dilly Bars." Though the video has little in common with Seinfeld's "Comedians in Cars Getting Coffee," its title is certainly modeled on that of Seinfeld's program.

From the post:
In 1991, when my mother called me to come out to our vacation home on Hood Canal to meet a group of friends, including Warren, I didn’t want to go. I told her I was too busy at work. Warren would be interesting, my mother insisted. But I wasn’t convinced. “Look, he just buys and sells pieces of paper. That’s not real value added. I don’t think we’d have much in common,” I told her. Eventually, she persuaded me to go. I agreed to stay for no more than two hours before getting back to work at Microsoft.

Then I met Warren. He started asking me some questions about the software business and why a small company like Microsoft could expect to compete with IBM and what were the skillsets and the pricing. These were amazingly good questions that nobody had ever asked. We were suddenly lost in conversation and hours and hours slipped by. He didn’t come across as a bigshot investor. He had this modest way of talking about what he does. He was funny, but what impressed me most was how clearly he thought about the world. It was a deep friendship from our very first conversation.

It’s not a question of manpower, a brief remark on computational criticism

One reason that has been given for computational criticism (aka ‘distant reading’) is that it is the only way we are going to examine books beyond the canon and that, until we do so, our understanding of literary history is partial and biased. Yes, it is likely true that this will be the only way we examine all those books that are no longer read, or even readily available. But this justification is misleading to the extent that it implies that the problem is one of manpower, as though we wouldn’t bother with computational criticism if the critical community had the time to read all those books. I submit that, on the contrary, even if we had the manpower, we’d still undertake computational analysis of 1000s of volumes at a time.

I have no idea how many volumes would be in the complete corpus of English literature; does anyone? But for the sake of argument let’s pick a number, say 100,000, which is low even if we confine ourselves to work published before the 20th century. Imagine that we have a thousand critics available for reading, which implies that each of them will read 100 volumes. Assuming no other intellectual duties, they could do that reading in a year. Now, how do they make sense of their 100,000 books? As you think about that, recall what a much larger number of critics has done for a much smaller number of volumes over the course of the last century.

What do we want from these critics? Well, topic analysis is a popular form of computational criticism, so why not do a manual version with all the (imaginary) critical manpower we’ve got available? Imagine that you are one of the 1000 critics. How will you undertake a topic analysis of your 100 volumes?

For that matter, how will you do it for the first volume? I suppose that you start reading and note down each topic as you come to it. When a topic recurs, you give it another check mark. How do you name the topics? How do you indicate what’s in them? You could list words, as is done in computational topic analysis. You could also use phrases and sentences. What happens as you continue reading? Perhaps what you thought was a horses topic at the beginning turns out to be, say, horse racing instead. So you’ve got to revise your topic list. I would imagine that maintaining coherence and consistency would be a problem as you read through your 100 volumes. Just think of all the scraps of paper and the computer files you’ll be working with. This is going to take a lot of time beyond that required for simply reading the books.

But it’s not just you. There are 1000 critics, each reading and analyzing 100 volumes. How do they maintain consistency among themselves? The mind boggles.

My point is that conducting a topic analysis of a corpus gives us a view of that corpus that we could not get with manual methods, that is, through the comparison and distillation of ‘close’ readings of 1000s of books by 1000s of readers. Topic analysis gives us something new. Whether that something is valuable is a different question. But it’s not just a poor substitute for close readings that we’re never going to do.

The fact is, even if we had 1000 critics analyzing 100 volumes each, we’d probably conduct a topic analysis, and more, as a means of bringing some consistency to all those ‘manual’ analyses.
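For the record, here is roughly what "conduct a topic analysis" means in practice: a sketch using scikit-learn, with hypothetical file names standing in for the corpus. Each topic comes out as a ranked word list, the same artifact our imaginary critic was trying to maintain by hand, but produced by one consistent procedure across all the volumes.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Hypothetical corpus: in practice this would be thousands of plain-text files.
corpus_paths = ["vol0001.txt", "vol0002.txt", "vol0003.txt"]
volumes = [open(path, encoding="utf-8").read() for path in corpus_paths]

# Count words per volume, then fit a topic model over those counts.
counts = CountVectorizer(stop_words="english", max_features=20000)
X = counts.fit_transform(volumes)
lda = LatentDirichletAllocation(n_components=50, random_state=0).fit(X)

# Print each topic as its ten most heavily weighted words.
vocab = counts.get_feature_names_out()
for i, weights in enumerate(lda.components_):
    top = [vocab[j] for j in weights.argsort()[::-1][:10]]
    print(f"topic {i}: {', '.join(top)}")
```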

Topic analysis, of course, is not the only form of computational criticism. But the argument I’ve made using it as an example will apply across the board. We are doing something fundamentally new.