Wednesday, May 27, 2015

Could Heart of Darkness have been published in 1813? – a digression from Underwood and Sellers 2015

Here I’m just thinking out loud. I want to play around a bit.
Conrad’s Heart of Darkness is well within the 1820-1919 time span covered by Underwood and Sellers in How Quickly Do Literary Standards Change?, while Austen’s Pride and Prejudice, published in 1813, is a bit before. And both are novels, while Underwood and Sellers wrote about poetry. But these are incidental matters. My purpose is to think about literary history and the direction of cultural change, which is front and center in their inquiry. But I want to think about that topic in a hypothetical mode that is quite different from their mode of inquiry.

So, how likely is it that a book like Heart of Darkness would have been published in the second decade of the 19th century, when Pride and Prejudice was published? A lot, obviously, hangs on that word “like”. For the purposes of this post, likeness means similarity in the sense that Matt Jockers defined in Chapter 9 of Macroanalysis. For all I know, such a book may well have been published; if so, I’d like to see it. But I’m going to proceed on the assumption that such a book doesn’t exist.

The question I’m asking is whether the literary system operated in such a way that such a book was very unlikely to have been written. If so, what happened such that the literary system was able to produce such a book almost a century later?

What characteristics of Heart of Darkness would have made it unlikely/impossible to publish such a book in 1813? For one thing, it involved a steamship, and steamships didn’t exist at that time. This strikes me as a superficial matter given the existence of ships of all kinds and their extensive use for transport on rivers, canals, lakes, and oceans.

Another superficial impediment is the fact that Heart is set in the Belgian Congo, which wasn’t colonized until the last quarter of the century. European colonialism was quite extensive by the early 19th century, and much of it was quite brutal. So far as I know, though, the British novel of that era did not concern itself with the brutality of colonialism. Why not? Correlatively, the British novel of the time was very much interested in courtship and marriage, topics not central to Heart, though not entirely absent from it either.

The world is a rich and complicated affair, bursting with stories of all kinds. But some kinds of stories are more salient in a given tradition than others. What determines the salience of a given story and what drives changes in salience over time? What had happened that colonial brutality had become highly salient at the turn of the 20th century?

Tuesday, May 26, 2015

War Boys in Tomorrowland, or: Mad Max Meets Disney

Back in the 1950s Walt Disney brought Tomorrowland to life, most concretely as one of four divisions of Disneyland, but also as one of the formats for his television show where he envisioned the future and shilled for science, technology, and the space program. Then came, among other things, the War in Vietnam. Later, following Blade Runner (1982) we have had a long line of dystopian science fiction films, some, like George Miller’s Mad Max series from the 1980s, set in the near future. The World Trade Center was bombed on 9/11 and global warming became a matter for widespread public discussion, debate, and policy making.

IMGP6444

And now, within the last two weeks, we’ve seen a fourth film in the Mad Max franchise, Mad Max: Fury Road, and a Disney film simply entitled Tomorrowland. The films, both about the future (sorta’), are very different, but, as I said, they’ve come out at about the same time, and I’ve just seen both of them. So they’re rattling about in my brain more or less together, hence my title: War Boys [from the Mad Max film] in Tomorrowland [a Disney film directed by Brad Bird].

Simply as a matter of cinematic art, Mad Max: Fury Road is the better of the two films. It’s gorgeous and astonishing in a way that Tomorrowland is not. As for Tomorrowland, it wandered incoherently about until within five or so minutes of the very end, when it pulled itself together into the most overtly didactic film in my memory. But instead of Mickey Rooney yelling “Hey, kids! Let’s put on a show!” we have director Brad Bird channeling Uncle Walt and yelling “Hey, kids! Let’s create the future!”

It’s clear to me that if we’re going to have anything remotely like the future Uncle Walt Redux is calling for, those dystopian apocalyptic War Boys from Fury Road are going to be the ones to build it, them and the women who liberate them.

Why I like this photo of an iris

20150523-_IGP3783

I can imagine that, on first seeing it, some might be puzzled about just what this photograph depicts. Of course, I’m not puzzled, because I was there when I took the photo. I know it depicts an iris, or at any rate, parts of some iris blossoms. But there is no single blossom solidly in view; just fragments at various positions, angles, and degrees of focus.

The photo is a bit of a puzzle, though I didn’t shoot it in order to pose riddles. I’m not trying to fool or mystify you. I just want you to look at, and enjoy, the photo.

I can imagine that someone who is intimately familiar with irises would make out the subject of the image more readily than someone who is not. Someone who’d never seen an iris, except perhaps in a Japanese print, might not recognize the irises at all. And if they’d never even heard of irises, much less seen them, how could they possibly recognize them in the photo? But they would surely conclude that they’re looking at some kind of flower, some petals and what not.

As you surely are.

What’s at the in-focus center of the photo in shades of light tan? I believe it is a dead blossom that has dried and lost its color. Then, to the right and below, a large area of shaded purples: a near petal that’s out of focus. At the upper edge of that purple and to the right, a bit of yellow, and then whites and lavender. The various petals to the left are more sharply focused.

Then there’s the play of light and shadow in and among the petals. That, as much as the petals themselves, is what this photograph is about.

And that is why I like this photograph.

Monday, May 25, 2015

Underwood and Sellers 2015: Beyond Whig History to Evolutionary Thinking

Evolutionary processes allow populations to survive and thrive in a world of contingencies

In the middle of their most interesting and challenging paper, How Quickly Do Literary Standards Change?, Underwood and Sellers have two paragraphs in which they raise the specter of Whig history and banish it. In the process they take some gratuitous swipes at Darwin and Lamarck and, by implication, at the idea that evolutionary thinking can be of benefit to literary history. I find these two paragraphs confused and confusing and so feel a need to comment on them.

Here’s what I’m doing: First, I present those two paragraphs in full, without interruption. That’s so you can get a sense of how their thought hangs together. Second, and the bulk of this post, I repeat those two paragraphs, in full, but this time with inserted commentary. Finally, I conclude with some remarks on evolutionary thinking in the study of culture.

Beware of Whig History

By this point in their text Underwood and Sellers have presented their evidence and their basic, albeit unexpected, finding: that change in English-language poetry from 1820 to 1919 is continuous and in the direction of standards implicit in the choices made by 14 selective periodicals. They’ve even offered a generalization that they think may well extend beyond the period they’ve examined (p. 19): “Diachronic change across any given period tends to recapitulate the period’s synchronic axis of distinction.” While I may get around to discussing that hypothesis – which I like – in another post, we can set it aside for the moment.

I’m interested in two paragraphs they write in the course of showing how difficult it will be to tease a causal model out of their evidence. Those paragraphs are about Whig history. Here they are in full and without interruption (pp. 20-21):
Nor do we actually need a causal explanation of this phenomenon to see that it could have far-reaching consequences for literary history. The model we’ve presented here already suggests that some things we’ve tended to describe as rejections of tradition — modernist insistence on the concrete image, for instance — might better be explained as continuations of a long-term trend, guided by established standards. Of course, stable long-term trends also raise the specter of Whig history. If it’s true that diachronic trends parallel synchronic principles of judgment, then literary historians are confronted with material that has already, so to speak, made a teleological argument about itself. It could become tempting to draw Lamarckian inferences — as if Keats’s sensuous precision and disillusionment had been trying to become Swinburne all along.

We hope readers will remain wary of metaphors that present historically contingent standards as an impersonal process of adaptation. We don’t see any evidence yet for analogies to either Darwin or Lamarck, and we’ve insisted on the difficulty of tracing causality exactly to forestall those analogies. On the other hand, literary history is not a blank canvas that acquires historical self-consciousness only when retrospective observers touch a brush to it. It’s already full of historical observers. Writing and reviewing are evaluative activities already informed by ideas about “where we’ve been” and “where we ought to be headed.” If individual writers are already historical agents, then perhaps the system of interaction between writers, readers, and reviewers also tends to establish a resonance between (implicit, collective) evaluative opinions and directions of change. If that turns out to be true, we would still be free to reject a Whiggish interpretation, by refusing to endorse the standards that happen to have guided a trend. We may even be able to use predictive models to show how the actual path of literary history swerved away from a straight line. (It’s possible to extrapolate a model of nineteenth-century reception into the twentieth, for instance, and then describe how actual twentieth-century reception diverged from those predictions.) But we can’t strike a blow against Whig history simply by averting our eyes from continuity. The evidence we’re seeing here suggests that literary-historical trends do turn out to be relatively coherent over long timelines.
I agree with those last two sentences. It’s how Underwood and Sellers get there that has me a bit puzzled.

Sunrise among the dandelions

IMGP3481rdG

Oh woe is us! Those horrible horrible AIs are out to get us

These days some Very Smart, Very Rich, and Very Intellectual people are worried that the computers will do us in the first chance they get. Nonsense!

We would do well to recall that Nick Bostrom is one of these thinkers. He first gained renown for his fiendishly constructed “we are living in the Matrix” argument, in which he went from
1) super-intelligent AI and mega-ginormous capacity computers are inevitable

to

2) the people who create them will likely devote a lot of time to Whole-World simulations of their past

to

3) more likely than not, we're just simulated creatures having simulated life in one of these simulations.
From that it follows that the existential dangers posed by advanced AI are only simulated dangers posed by simulated advanced AI. Why should we worry about that? After all, we're not even real. We're just simulations.

The REAL problem is that so many people seem to think this is real thought, even profound thought, rather than superstitious twaddle. We look back at medieval theologians who worried about how many angels could dance on the head of a pin and think, "How silly." Yes, it's silly, but it seemed real enough to advanced thinkers at the time. These fears of superintelligent evil computers are just as silly. Get over it.

Sunday, May 24, 2015

Advanced AI, friend or foe? Answers from David Ferrucci and from Benzon and Hays

David Ferrucci:
“To me, there’s a very deep philosophical question that I think will rattle us more than the economic and social change that might occur,” Ferrucci said as we ate. “When machines can solve any given task more successfully than humans can, what happens to your sense of self? As humans, we went from the chief is the biggest and the strongest because he can hurt anyone to the chief is the smartest, right? How smart are you at figuring out social situations, or business situations, or solving complex science or engineering problems. If we get to the point where, hands down, you’d give a computer any task before you’d give a person any task, how do you value yourself?”

Ferrucci said that though he found Tegmark’s sensitivity to the apocalypse fascinating, he didn’t have a sense of impending doom. (He hasn’t signed Tegmark’s statement.) Some jobs would likely dissolve and policymakers would have to grapple with the social consequences of better machines, Ferrucci said, but this seemed to him just a fleeting transition. “I see the endgame as really good in a very powerful way, which is human beings get to do the things they really enjoy — exploring their minds, exploring thought processes, their conceptualizations of the world. Machines become thought-partners in this process.”

This reminded me of a report I’d read of a radical group in England that has proposed a ten-hour human workweek to come once we are dependent upon a class of beneficent robot labor. Their slogan: “Luxury for All.” So much of our reaction to artificial intelligence is relative. The billionaires fear usurpation, a loss of control. The middle-class engineers dream of leisure. The idea underlying Ferrucci’s vision of the endgame was that perhaps people simply aren’t suited for the complex cognitive tasks of work because, in some basic biological sense, we just weren’t made for it. But maybe we were made for something better.
From Benjamin Wallace-Wells, Jeopardy! Robot Watson Grows Up, New York Magazine, 20 May 2015.

William Benzon and David Hays:
One of the problems we have with the computer is deciding what kind of thing it is, and therefore what sorts of tasks are suitable to it. The computer is ontologically ambiguous. Can it think, or only calculate? Is it a brain or only a machine?

The steam locomotive, the so-called iron horse, posed a similar problem for people at Rank 3. It is obviously a mechanism and it is inherently inanimate. Yet it is capable of autonomous motion, something heretofore only within the capacity of animals and humans. So, is it animate or not? Perhaps the key to acceptance of the iron horse was the adoption of a system of thought that permits separation of autonomous motion from autonomous decision. The iron horse is fearsome only if it may, at any time, choose to leave the tracks and come after you like a charging rhinoceros. Once the system of thought had shaken down in such a way that autonomous motion did not imply the capacity for decision, people made peace with the locomotive.

The computer is similarly ambiguous. It is clearly an inanimate machine. Yet we interact with it through language; a medium heretofore restricted to communication with other people. To be sure, computer languages are very restricted, but they are languages. They have words, punctuation marks, and syntactic rules. To learn to program computers we must extend our mechanisms for natural language.

As a consequence it is easy for many people to think of computers as people. Thus Joseph Weizenbaum (1976), with considerable dis-ease and guilt, tells of discovering that his secretary "consults" Eliza--a simple program which mimics the responses of a psychotherapist--as though she were interacting with a real person. Beyond this, there are researchers who think it inevitable that computers will surpass human intelligence and some who think that, at some time, it will be possible for people to achieve a peculiar kind of immortality by "downloading" their minds to a computer. As far as we can tell such speculation has no ground in either current practice or theory. It is projective fantasy, projection made easy, perhaps inevitable, by the ontological ambiguity of the computer. We still do, and forever will, put souls into things we cannot understand, and project onto them our own hostility and sexuality, and so forth.

A game of chess between a computer program and a human master is just as profoundly silly as a race between a horse-drawn stagecoach and a train. But the silliness is hard to see at the time. At the time it seems necessary to establish a purpose for humankind by asserting that we have capacities that it does not. To give up the notion that one has to add "because . . . " to the assertion "I'm important" is truly difficult. But the evolution of technology will eventually invalidate any claim that follows "because." Sooner or later we will create a technology capable of doing what, heretofore, only we could.
From William Benzon and David G. Hays, The Evolution of Cognition, Journal of Social and Biological Structures 13(4): 297-320, 1990.

Saturday, May 23, 2015

The Art of Rotoscope: Taylor Swift Remade


49 University of Newcastle Australia animation students were each given 52 frames of Taylor Swift's Shake it Off music video, and together they produced 2767 frames of lovingly hand-drawn rotoscoped animation footage. http://bit.ly/1PzUznI Thank you to all the students of DESN2801: Animation 1 for your enthusiasm, good humour and terrific roto skills!
H/t Nina Paley.

Planet Iris

20150514-_IGP3710

20150523-_IGP3785

20150523-_IGP3814

Friday, May 22, 2015

The life of a hired intellectual gun

Xerox was developing a new operating system for its ill-fated line of computers. Their testing group was falling behind. Learning that I had a statistical background as well as one in programming, they asked me to do two things. First, tell us why, no matter how many testers we hire, we never seem to get ahead. Second, tell us when the testing will be complete.

The second question was the easier of the two. Each week, the testing group was discovering errors at an accelerating rate. “Never, the testing will never be complete” was the answer.

The first question was also easy. They already knew the answer. They just wanted me, their consultant (that is, their patsy), to be the one responsible for pointing it out. Seems the testing group had been given exactly four computers. Whenever a bug was discovered, the offending computer would sit idle until a member of the programming group could swing by and do a dump. So mostly, the testers played cards, gossiped, or used the phones. When they got further and further behind, they were asked to come in Saturdays. The programming group never worked on Saturdays, so that really wasn’t much of a plan.
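The consultant’s answer to the “when will testing be complete?” question is easy to make concrete. Here’s a toy sketch; the weekly bug counts are invented for illustration, but the logic is the one in the story: testing converges only if the weekly discovery rate is falling toward zero, and here it keeps rising.

```python
# Hypothetical weekly defect counts of the kind described above:
# each week the testers find more bugs than the week before.
weekly_bugs = [12, 15, 19, 24, 31, 40]

# Week-over-week change in the discovery rate.
diffs = [b - a for a, b in zip(weekly_bugs, weekly_bugs[1:])]

# If every week's count exceeds the last, extrapolation yields
# no completion date at all.
rate_rising = all(d > 0 for d in diffs)
print("testing will never be complete" if rate_rising else "converging")
```

With the numbers above the differences themselves are growing (3, 4, 5, 7, 9), so even the acceleration is positive; any projected completion date just recedes.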
A sage observation about the consulting biz:
With a large well-established firm, there is one and only one reason you are being hired. The project is a disaster and every knowledgeable employee has already bailed.

Underwood and Sellers 2015: Cosmic Background Radiation, an Aesthetic Realm, and the Direction of 19thC Poetic Diction

I’ve read and been thinking about Underwood and Sellers 2015, How Quickly Do Literary Standards Change?, both the blog post and the working paper. I’ve got a good many thoughts about their work and its relation to the superficially quite different work that Matt Jockers did on influence in chapter nine of Macroanalysis. I am, however, somewhat reluctant to embark on what might become another series of long-form posts, which I’m likely to need in order to sort out the intuitions and half-thoughts that are buzzing about in my mind.

What to do?

I figure that at the least I can just get it out there, quick and crude, without a lot of explanation. Think of it as a mark in the sand. More detailed explanations and explorations can come later.

19th Century Literary Culture has a Direction

My central thought is this: both Jockers on influence and Underwood and Sellers on literary standards are looking at the same thing, namely that long-term change in 19th century literary culture has a direction – where that culture is understood to include readers, writers, reviewers, publishers, and the interactions among them. Underwood and Sellers weren’t looking for such a direction, but have (perhaps somewhat reluctantly) come to realize that that’s what they’ve stumbled upon. Jockers seems a bit puzzled by the model of influence he built (pp. 167-168); in any event, he doesn’t recognize it as a model of directional change. That interpretation of his model is my own.

When I say “direction” what do I mean?

That’s a very tricky question. In their full paper Underwood and Sellers devote two long paragraphs (pp. 20-21) to warding off the spectre of Whig history – the horror! the horror! In the Whiggish view, history has a direction, and that direction is a progression from primitive barbarism to the wonders of (current Western) civilization. When they talk of direction, THAT’s not what Underwood and Sellers mean.

But just what DO they mean? Here’s a figure from their work:

19C Direction

Notice that time is depicted along the X-axis (horizontal), running from roughly 1820 at the left to 1920 at the right. Each dot in the graph, regardless of color (red, gray) or shape (triangle, circle), represents a volume of poetry, and its position on the X-axis is the volume’s publication date.

But what about the Y-axis (vertical)? That’s tricky, so let’s set it aside for a moment. The thing to pay attention to is the overall relation of these volumes of poetry to that axis. Notice that as we move from left to right, the volumes drift upward along the Y-axis, a drift that’s easily seen in the trend line. That upward drift is the direction Underwood and Sellers are talking about, and it was not at all what they were expecting.

Drifting in Space

But what does the upward drift represent? What’s it about? It represents movement in some space, and that space represents poetic diction or language. What we see along the Y-axis is a one-dimensional reduction or projection of a space that in fact has 3200 dimensions. Now, that’s not how Underwood and Sellers characterize the Y-axis; that’s my reinterpretation of it. I may or may not get around to writing a post in which I explain why that’s a reasonable interpretation.
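The idea of projecting a high-dimensional space onto a single axis can be sketched in a few lines. To be clear, this is a toy illustration, not Underwood and Sellers’s actual model: the number of volumes, the 50-dimensional feature space (standing in for their 3200 dimensions), and the drift built into the fake data are all invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the real data: each "volume" is a vector of
# word frequencies in a high-dimensional feature space.
n_volumes, n_features = 200, 50
years = np.linspace(1820, 1919, n_volumes)

# Build volumes whose feature mix shifts slowly over the century
# along one (hypothetical) direction in that space.
drift_direction = rng.normal(size=n_features)
drift_direction /= np.linalg.norm(drift_direction)
t = (years - 1820) / (1919 - 1820)              # 0 .. 1 over the century
X = rng.normal(size=(n_volumes, n_features)) + np.outer(t, drift_direction) * 5.0

# The "Y-axis" of the figure is then just the projection of each
# high-dimensional vector onto a single axis of distinction.
y = X @ drift_direction                          # one number per volume

# The upward drift shows up as a positive correlation between
# publication year and the projected value.
r = np.corrcoef(years, y)[0, 1]
print(round(r, 2))
```

The projection collapses everything about a volume’s diction into one number, which is why a 3200-dimensional space can appear in the figure as a single vertical axis with a trend line running through it.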

Mad Max: Fury Road – notes toward a psycho-kinetic reading

This is a quick note I posted to a private online forum. I may or may not expand on it later, but I wanted to get it out here in public space as well. Why? If you've seen the film, you know that it's basically a two-hour chase scene. As I watched it I kept asking myself: why? The last paragraph of this note hints at an answer to that question.

* * * * *

Have you seen the recent Mad Max movie, Charles? I ask because some shots from it came to mind when you talked of the hope and fervor of millennialism. As you know, the films are set in a rather dystopian post-apocalyptic desert world. In the current entry in the MM franchise, the 4th film and the first in 30 years, the fervor is expressed by War Boys who spray chrome paint around their mouths and on their lips and teeth. Their hope is to be transported to Valhalla when they die in battle.

There are two other things that make the film interesting: the surprising emergence of feminism in the film and the gorgeousness of much of it.

And, as I think about it, I guess there's a third thing. The film is basically a two-hour chase scene. I can tell you that, more than once, I said to myself: "Jeez! Are we to have no relief? Aren't we going to get something other than chase, chase, chase!?"

I'm now thinking that that unrelenting chase is the psycho-kinetic sea in which the millennial fervor of the War Boys floats. Of course, the War Boys are not on the side of THE GOOD in this film, a side that develops that unexpected feminist aspect as the chase rolls along. But one of the War Boys gets isolated from his fellows and he converts to the Mad Max/Furiosa/feminist cause.

* * * * *

That's the note. If I get around to commenting, I'll be wanting to argue that it's not only the War Boys' apocalyptic hope that floats on that psycho-kineticism, but the feminist alternative. And that in some way that psycho-kineticism merges the two, the War Boys and the newly empowered Breeders, and Max and Furiosa (the one-armed).

Friday Fotos: Vegetables in Context

IMGP4133

IMGP4059

IMGP9665rd

Tuesday, May 19, 2015

Does hot water freeze faster than cold?

In some circumstances, it does. The question is an old one, dating back at least to Aristotle, and it turns out to be surprisingly complex. This paper is a good study in the subtleties of getting clear answers from Nature.
Monwhea Jeng
Physics Department, Box 1654, Southern Illinois University Edwardsville, Edwardsville, IL, 62025 
We review the Mpemba effect, where initially hot water freezes faster than initially cold water. While the effect appears impossible at first sight, it has been seen in numerous experiments, was reported on by Aristotle, Francis Bacon, and Descartes, and has been well-known as folklore around the world. It has a rich and fascinating history, which culminates in the dramatic story of the secondary school student, Erasto Mpemba, who reintroduced the effect to the twentieth century scientific community. The phenomenon, while simple to describe, is deceptively complex, and illustrates numerous important issues about the scientific method: the role of skepticism in scientific inquiry, the influence of theory on experiment and observation, the need for precision in the statement of a scientific hypothesis, and the nature of falsifiability. We survey proposed theoretical mechanisms for the Mpemba effect, and the results of modern experiments on the phenomenon. Studies of the observation that hot water pipes are more likely to burst than cold water pipes are also described.
H/t Faculty of Language.

Monday, May 18, 2015

When birds talk, other creatures listen

Studies in recent years by many researchers, including Dr. Greene, have shown that animals such as birds, mammals and even fish recognize the alarm signals of other species. Some can even eavesdrop on one another across classes. Red-breasted nuthatches listen to chickadees. Dozens of birds listen to tufted titmice, who act like the forest’s crossing guards. Squirrels and chipmunks eavesdrop on birds, sometimes adding their own thoughts. In Africa, vervet monkeys recognize predator alarm calls by superb starlings.

Dr. Greene says he wants to better understand the nuances of these bird alarms. His hunch is that birds are saying much more than we ever suspected, and that species have evolved to decode and understand the signals. He acknowledged the obvious Dr. Dolittle comparison: “We’re trying to understand this sort of ‘language’ of the forest.”...

Dr. Greene, working with a student, has also found that “squirrels understand ‘bird-ese,’ and birds understand ‘squirrel-ese.’ ” When red squirrels hear a call announcing a dangerous raptor in the air, or they see such a raptor, they will give calls that are acoustically “almost identical” to the birds, Dr. Greene said. (Researchers have found that eastern chipmunks are attuned to mobbing calls by the eastern tufted titmouse, a cousin of the chickadee.)

Other researchers study bird calls just as intently. Katie Sieving, a professor of wildlife ecology and conservation at the University of Florida, has found that tufted titmice act like “crossing guards” and that other birds hold back from entering hazardous open areas in a forest if the titmice sound any alarm. Dr. Sieving suspects that the communication in the forest is akin to an early party telephone line, with many animals talking and even more listening in — perhaps not always grasping a lot, but often just enough.

White Irises

20150514-_IGP3727

20150514-_IGP3731

20150514-_IGP3729