Saturday, January 31, 2015

The Autonomous Aesthetic: A Graduate Syllabus in Naturalist Literary Criticism

I'd originally posted this at The Valve in July of 2007 under a slightly different title: The Autonomous Aesthetic: A Graduate Syllabus in Literary Theory. Now that I have clearly distinguished between naturalist and ethnical criticism, and provided a rationale for the distinction, it seems best to restrict the title of this syllabus to naturalist criticism, for that IS its scope. Beyond that, the syllabus holds up pretty well. I've added Brian Boyd's book, On the Origin of Stories, which makes a bear of a course even worse, but the first half has a lot of useful psychology in it, making it a valuable supplement to me, Tsur, and Herman.

Note that I'd originally republished this in October of 2013. I'm bumping it to the top in the context of my open letter to Hillis Miller. Finally, if this post at all interests you, I urge you to go over to The Valve (link above) and read the discussion we had there. It's quite good. For one thing, there's some commentary on the notion of an autonomous aesthetic, and that's something I've been thinking about steadily for a while. It came up in my letter to Miller, which referenced my working paper on Matthew Jockers's Macroanalysis.
John Holbo's recent post on Mark Bauerlein's proposed antidote (that link, alas, is dead, as The Valve has finally bitten the dust) for leftist politics in the Theory curriculum got me thinking about the question of how, given a free hand, I'd teach literary theory. In the spirit of a thought experiment I've put together a syllabus for a graduate course in literary theory, that is, the theory of literature.

On the one hand, I want to demonstrate that one could teach a course in literary theory that pretty much avoids High Theory and yet is intellectually contemporary rather than an exercise in nostalgia. If I myself have pursued these ideas, however, it has not been out of any desire to avoid Theory as though it were a disease (and, of course, it is very proud of the fact that it is grounded in dis-ease) but simply because these are the ideas that have interested me. They are compelling on their own terms and not simply as an alternative to something else.

As a way of setting an overall objective for such a course, a pole star if you will, I offer a passage from a very political High Theorist, the late Edward Said. This passage is from one of his last essays, “Globalizing Literary Study,” published in 2001 in PMLA. He says:
I myself have no doubt, for instance, that an autonomous aesthetic realm exists, yet how it exists in relation to history, politics, social structures, and the like, is really difficult to specify. Questions and doubts about all these other relations have eroded the formerly perdurable national and aesthetic frameworks, limits, and boundaries almost completely. The notion neither of author, nor of work, nor of nation is as dependable as it once was, and for that matter the role of imagination, which used to be a central one, along with that of identity has undergone a Copernican transformation in the common understanding of it.
I too believe that “an autonomous aesthetic realm exists,” and that one can conceptualize it without having to ignore the human mind, society, or their joint interaction through and embedding in history. The objective of this course in literary theory, then, is to begin understanding how literature partakes of this aesthetic autonomy while being embedded in the contingencies of history.

First I list the proposed texts, in the order that I would use them, and then I explain why.
David Herman. Story Logic: Problems and Possibilities of Narrative. University of Nebraska Press, 2002.

Claude Lévi-Strauss. “The Story of Asdiwal,” in Structural Anthropology II. Basic Books, 1976, pp. 146-197.

Reuven Tsur. Toward a Theory of Cognitive Poetics. North-Holland, 1992.

William Benzon. Beethoven's Anvil: Music in Mind and Culture. Basic Books, 2001.
Brian Boyd. On the Origin of Stories: Evolution, Cognition, and Fiction. Harvard University Press, 2009. My review is HERE.
Dan Sperber. Explaining Culture: A Naturalistic Approach. Blackwell, 1996.

Franco Moretti. Graphs, Maps, Trees: Abstract Models for a Literary History. Verso, 2005.

Alistair Fowler. Kinds of Literature: An Introduction to the Theory of Genres and Modes. Harvard University Press, 1982.

Snow Study

20150126-_IGP2317 A Color FdLin EQ FdDiv

20150126-_IGP2317 A Color FdLin EQ

20150126-_IGP2317 EQ VRT Grd Dif VRT

Friday, January 30, 2015

Friday Fotos: The Empire State Building has its Moods

I've taken a fair number of photographs of the Empire State Building, all of them from New Jersey, mostly from Jersey City and Hoboken, with a few from Weehawken. In many of them the building is relatively small and inconsequential, which was the point. I had a vague political metaphor in mind, distant empire.

In the photos I've chosen for this Friday's series, I decided that the building had to have some significant visual presence. It's not particularly prominent in this photo, but it gets its "weight" a different way:

exotic mix 2nd mix.jpg

There's no obvious foreground and no central focus, giving the image a fantasy wildness that I like. Because of that you're naturally drawn to the Empire State Building there at the right in the middle distance and partially obscured. Since it is the most recognizable thing in the photo, it serves to anchor the photo despite the fact that it's off-center and relatively small. And once you've spotted it, you can't help but look for the Chrysler Building to its left–assuming, of course, that you're familiar enough with the New York skyline to know that the two buildings are relatively close.

I once considered this to be one of my best photographs:

starship empire dancer.jpg

While I still like it a lot, I've become used to it.

Apple tops Microsoft in Market Capitalization and is the Largest Company in the World

Not so long ago Apple was a fragile boutique operation dwarfed by Microsoft. Now things are different, says the NYTimes:
When Microsoft stock was at a record high in 1999, and its market capitalization was nearly $620 billion, the notion that Apple Computer would ever be bigger — let alone twice as big — was laughable. Apple was teetering on bankruptcy. And Microsoft’s operating system was so dominant in personal computers, then the center of the technology universe, that the government deemed the company an unlawful monopoly.

This week, both Microsoft and Apple unveiled their latest earnings, and the once unthinkable became reality: Apple’s market capitalization hit $683 billion, more than double Microsoft’s current value of $338 billion….

Apple earned $18 billion in the quarter — more than any company ever in a single quarter — on revenue of $75 billion. Its free cash flow of $30 billion in one quarter was more than double what IBM, another once-dominant tech company, generates in a full year, noted a senior Bernstein analyst, Toni Sacconaghi. The stock jumped more than 5 percent, even as the broader market was down.
The flip-flop took a decade and a half, perhaps less. No one really knows how to plan beyond three to five years, not in any detail. And Apple's success is dependent on one product, the iPhone. What now?
Mr. Cihra noted that Microsoft already dominates its core businesses, leaving little room for growth. But, he said, “Apple still doesn’t have massive market share in any of its core markets. Even in smartphones, its share is only in the midteens. Apple’s strategy has been to carve out a small share of a massive market. It’s pretty much a unique model that leaves plenty of room for growth.”

Can Apple continue to live by Mr. Jobs’s disruptive creed now that the company is as successful as Microsoft once was? Mr. Cihra noted that it was one thing for Apple to cannibalize its iPod or Mac businesses, but quite another to risk its iPhone juggernaut.

“It’s getting tougher for Apple,” Mr. Cihra said. “The question investors have is, what’s the next iPhone? There’s no obvious answer. It’s almost impossible to think of anything that will create a $140 billion business out of nothing.”

Thursday, January 29, 2015

"Precision medicine" – another funding boondoggle?

Obama's decided to spend a pile of money on "precision" medicine. Michael Joyner remarks that we've been down this road before and don't have much to show for it:
The idea behind the “war on cancer” was that a deep understanding of the basic biology of cancer would let us develop targeted therapies and cure the disease. Unfortunately, although we know far more today than we did 40-plus years ago, the statistics on cancer deaths have remained incredibly stubborn. The one bright spot has been tobacco control — again highlighting the dominant role of culture, environment and behavior versus biological destiny in what ails most of us.

Given the general omertà about researchers’ criticizing funding initiatives, you probably won’t hear too many objections from the research community about President Obama’s plan for precision medicine. But I am deeply skeptical. Like most “moonshot” medical research initiatives, precision medicine is likely to fall short of expectations. Medical problems and their underlying biology are not linear engineering exercises, and solving them is more than a matter of vision, money and will.

We would be better off directing more resources to understanding what it takes to solve messy problems about how humans behave as individuals and in groups. Ultimately, we almost certainly have more control over how much we exercise, eat, drink and smoke than we do over our genomes.
Given my current hobbyhorses, this is yet another example of how anxiety about the unknown, and death, leads us to expend great effort in ways that are ineffective in achieving stated objectives. One certainly gets the impression that progress is a crapshoot. Is this one source of the blind variation driving long-term cultural evolution? 

Light coming through the windows and through my fingers

20150117-_IGP2211

Wednesday, January 28, 2015

Cultural Beings & Intertextuality: Information

I’ve got two quickish thoughts on cultural evolution, one concerning the concept of cultural beings and the other in my ongoing ‘war’ against the concept of cultural information.

Cultural Beings and Intertextuality

I’ve recently introduced the term “cultural being” to indicate a package or envelope of coordinators (aka a ‘text’) plus the ‘trajectories’ of that package in the minds of all who encountered it. Such misgivings as I still have stem from the fact that it is an odd notion, though it’s not so different from how the concept of ‘the text’ is actually used in literary criticism. Literary critics also talk about intertextuality, how texts are related to one another.

The fact that Greene’s Pandosto was reworked into Shakespeare’s The Winter’s Tale is thus a fact about the intertextuality of both texts. Shakespeare may have written his text, but he did so after having read Greene’s text, using it as a model for his own. The Winter’s Tale is thus, in a sense, a daughter of Pandosto. And most of Shakespeare’s plays are daughters of multiple sources.

The cultural being that is associated with Pandosto would thus extend into Shakespeare’s mind (and into mine as well). That is the ‘route’ through which it ‘influenced’ Shakespeare’s play. More generally, later cultural beings are the result of the intermixing of earlier cultural beings in the minds of authors.

Information in Cultural Evolution

I have two objections to standard memetic talk of memes as cultural information. One is simply that the concept is not very clear (see the addendum). The other is that it is used to paper over the very tricky and interesting question of how one mind influences another. “Oh, one mind just transfers information to the other mind.”

Not only is this a mistaken account of what physically happens during communication (which Michael Reddy critiqued with his account of the conduit metaphor) but it also glosses over the fact that communication is often imperfect. That very imperfection is one potential source of cultural variation. The mind that reads a signal, any signal, is not identical to the mind that sends it, and so it may misread the signal. Where the two minds are in face-to-face interaction it may be possible to negotiate a satisfactory understanding. Where negotiation is impossible, misunderstanding may be inevitable and hard to eradicate.

One case that I find particularly interesting is that of the musical interaction between European-Americans and African-Americans in 20th century musical styles. Instrumental styles pass from black music to white music more easily than do vocal styles, but even there we can see differences. The basic harmonic and melodic practices transfer easily enough, but the rhythmic nuances do not. The neuromuscular systems that reconstruct the music are different from those that produce it. Thus Pat Boone’s versions of Little Richard’s tunes are anemic in comparison to the originals. The “information” didn’t quite “transfer.”
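Since the point about reconstruction rather than transfer keeps coming up, here is a minimal sketch of it, assuming invented numbers throughout: a public performance is rendered from one mind's pattern, and each receiving mind rebuilds the pattern partly out of its own prior habits. Nothing here models any real music; it simply shows how good-faith copying by reconstruction generates variation.

# Minimal sketch: reconstruction, not transfer, as a source of cultural variation.
# All numbers are illustrative assumptions, not measurements of anything.
import random

def perform(mental_pattern, noise=0.05):
    # A public performance is a noisy rendering of a private (neural) pattern.
    return [x + random.gauss(0, noise) for x in mental_pattern]

def reconstruct(performance, own_biases, pull=0.3):
    # A listener rebuilds the pattern in their own terms: partly what they
    # heard, partly what their existing habits dispose them to hear.
    return [(1 - pull) * heard + pull * bias
            for heard, bias in zip(performance, own_biases)]

random.seed(1)
original = [1.0, 0.0, 0.5, 0.25]              # some pattern, e.g. a rhythmic feel
pattern = list(original)
for generation in range(1, 6):
    biases = [random.gauss(0.5, 0.2) for _ in pattern]   # each new mind differs
    pattern = reconstruct(perform(pattern), biases)
    drift = sum(abs(a - b) for a, b in zip(pattern, original))
    print(f"generation {generation}: drift from the original = {drift:.2f}")

Run over a handful of 'generations,' the drift away from the original grows even though no one is trying to change anything, which is all the argument needs.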

Addendum: John Wilkins on Information

This is from John Wilkins’ entry, Replication and Reproduction, in the Stanford Encyclopedia of Philosophy:
The literature dealing with information is both extensive and factious. Several different formal analyses of information can be found and very little agreement about which analysis is best for which subjects. On one point these scholars tend to agree—cybernetic information and communication-theoretic information will not do for replication in biological contexts. The best bet is semantic information (Sterelny 2000a; Godfrey-Smith 2000; Sarkar 2000). The trouble is that no widely accepted version of semantic information exists. Winnie (2000) distinguishes between Classical and Algorithmic Information Theory and opts for a revised version of the Algorithmic Theory. But once again, the problem is that no such formal analysis currently exists. In the face of all this disagreement and unfinished business, biologists such as Maynard Smith (2000) maintain either that informal analyses of “information” are good enough or that some future formal version of information theory will justify the sorts of inferences that they make. The sense of “information” as used in the Central Dogma of molecular biology, which states that information cannot flow from protein to DNA, is more like a fit of template, or the primary structure of the protein sequence compared to the sequence of the DNA base pairs. Attempts have been made in what is now known as bioinformatics to use Classical Information Theory (Shannon's theory of communication) to extract functional and phylogenetic information (Gatlin 1972; Maclaurin 1998; Wallace and Wallace 1998; Brooks and Wiley 1988), but it appears to have been unsuccessful in the main. While the most likely conclusion is that no version of information theory as currently formulated can handle “information” as it functions in biology (see Griffiths 2001 for further discussion), attempts have been made to formulate just such a version (Sternberg 2008; Bergstrom and Rosvall 2011). However, this undercuts the motivation for appealing to information theory to elucidate genes in the first instance.

J. Hillis Miller on the future of the profession

A spectacular example of this sort of thing is the State University at Albany where an administrator closed Jewish studies, French, German, and Russian studies. He just closed them arbitrarily because he had the power to do that and wanted to use the money otherwise. My advice to Albany—not to any of you, it’s your own business what you do—would have been to tell the English Department at Albany to take this as an opportunity to sit around together and concoct a new programme which would not be called the English Department but something like ‘Teaching How to Read Media’ or ‘Understanding Media’. This new department would include Film Studies and also include all those other language programs, so students could read literature and theory in the original. You’ve got to know German to read Heidegger or Adorno properly, French to read Derrida or Baudrillard. So rescue the languages as part of this programme! I don’t know whether it would work. You could at least try. You could say, ‘We’re teaching students essential skills in how to live in this world of new media. We’re teaching them how to read television ads and political ads and not to be so bamboozled so easily by the lies they tell’. Television ads have a complex rhetoric, which I have begun to study. At Lancaster I gave one example. In the United States NBC Television News shows every night over and over again from night to night an ad sponsored by the American Petroleum Institute. The speaker on the screen is not an oil tycoon, the people who are making billions. It’s a very charming young woman. She comes out on the screen, accompanied by brilliant graphics, and says ‘I have good news for you. We have enough oil and gas, especially if we accelerate fracking (which is the extraction of gas from shale), to last for another 100 years. We’ll produce millions of jobs. This is the solution’. What she doesn’t say of course is that fracking will accelerate climate change and pollute the ground water where fracking is done. There soon won’t be any New York City left, not to speak of my house in Deer Isle Maine, or most of Florida. So, it’s a lie, the ad is a lie, a gross lie. But it’s very persuasive. The speaker is a woman, an attractive woman, persuasive, a very good actress. The argument is not made by the actual people who are doing this fracking. Sometimes such ads show bearded intellectual-looking engineers doing some of the talking. They too are part of our ideology of the good guys.

Monday, January 26, 2015

Computing and the Mind: ‘Top-down’ Isn’t Natural

Just a short note.

This thought may well be tucked away somewhere in one of my posts (HERE?), but I want it here where I can readily find it.

Over the last half-century or so much has been made of the idea that the mind/brain is computational in nature. But what kind of computation?

By ‘top-down’ I mean most of the programs written for digital computers. I don’t have any very specific way of characterizing that style, or family of styles, but what I’m thinking is that programs in that style can only be constructed by a programmer who has a ‘transcendental’ relationship to the program and its intended application.

The word ‘transcendental’ is a philosopher’s word and by it I mean that the programmer exists outside the program and the computer and can inspect each more or less at will. This transcendental relationship allows the programmer to design data structures and patterns of control and operation that would be inconceivable any other way. What I’m thinking is that there ought to be a way of proving this mathematically.

Obviously, I’m not up to that job as I lack the technical skills.

The sort of thing I’ve got in mind is what’s implicit in, for example, Dan Dennett asserting that memes are like apps. Well, the programmer who writes an app has a transcendental relationship to the platform he’s writing for and the language he’s using. But there is no programmer to write a meme, something that Dennett knows perfectly well. Thus the analogy papers over the fact that we haven’t got the foggiest idea how a meme could get ‘written.’ In using this analogy Dennett is, in effect, calling for a skyhook.

Sunday, January 25, 2015

Some varieties of the artist, and a deflation

William Deresiewicz has a piece about art and artists in The Atlantic that is both interesting and suspicious: The Death of the Artist—and the Birth of the Creative Entrepreneur. What's interesting is the capsule history of conceptions of artists and their art. What's suspicious is Deresiewicz's sense that he is somehow more in the know than you or I and that he is above it all. He's that kind of guy.

The capsule history goes from artisan, to artist, to professional. The artisan conception held up through the 17th century. Artisans were makers, craftsmen:
A whole constellation of ideas and practices accompanied this conception. Artists served apprenticeships, like other craftsmen, to learn the customary methods (hence the attributions one sees in museums: “workshop of Bellini” or “studio of Rembrandt”). Creativity was prized, but credibility and value derived, above all, from tradition. In a world still governed by a fairly rigid social structure, artists were grouped with the other artisans, somewhere in the middle or lower middle, below the merchants, let alone the aristocracy. Individual practitioners could come to be esteemed—think of the Dutch masters—but they were, precisely, masters, as in master craftsmen. The distinction between art and craft, in short, was weak at best. Indeed, the very concept of art as it was later understood—of Art—did not exist.
Underline that last sentence. It is very important. Not the least because Art (capital "A") is what Deresiewicz himself believes in, though he can't quite bring himself to say so. It is, in fact, the default concept in use today and underlies the rest.

Saturday, January 24, 2015

Studies in plastic and light

20150123-_IGP2274

20150123-_IGP2268

20150123-_IGP2270

Superstition and uncertainty

I've recently been arguing that cultural evolution is driven by anxiety, by uncertainty. That is, over the long term, and in the aggregate, that is what motivates the creation and adoption of new cultural patterns. Medical Xpress reports a recent study demonstrating the role of uncertainty in superstition:
It might be a lucky pair of socks, or a piece of jewelry; whatever the item, many people turn to a superstition or lucky charm to help achieve a goal. For instance, you used a specific avatar to win a game and now you see that avatar as lucky. Superstitions are most likely to occur under high levels of uncertainty. Eric Hamerman at Tulane University and Carey Morewedge at Boston University have determined that people are more likely to turn to superstitions to achieve a performance goal versus a learning goal... Performance goals are when people try to be judged as successful by other people. "For example, if I'm a musician, I want people to applaud after I play. Or if I'm a student, I want to get a good grade," explains lead author Eric Hamerman. Performance goals tend to be extrinsically motivated, and are perceived to be susceptible to influence from outside forces. Learning goals are often judged internally. "For example, a musician wants to become competent as a guitar player and perceive that he/she has mastered a piece of music," Hamerman says. Since learning goals are intrinsically motivated, this leads to a perception that they are also internally controlled and less likely to be impacted by outside forces.
I note that "performance anxiety" is a well-known phenomenon.

Friday, January 23, 2015

Culture, the Humanities, and the Evolution of Geist

Alex Mesoudi’s 2011 book, Cultural Evolution, says little or nothing about the humanities, about music, art, and literature, though it purports to be synthetic in nature (its subtitle: “How Darwinian Theory can Explain Human Culture & Synthesize the Social Sciences”). There is a simple reason for that: humanists have shown little or no interest in evolutionary studies of culture. There is no humanistic work for him to include in his synthesis.

There are obvious things one could say about this, but the most important thing to say at this point is simple: the neglect of evolutionary thinking by humanists is stupid and shortsighted. It must stop. At the same time I note that Mesoudi doesn’t seem dismayed by this situation; he doesn’t even note it. He’s not dismayed by the lack of literary studies (other than work on the phylogeny of manuscripts), musicology, and art history. It’s as though such things are not important aspects of culture. Likewise: stupid and shortsighted.

I don’t know if and when this will stop. While it seems obvious to me that digital humanists should be investigating evolutionary thought, they are skittish about it. To some extent that is probably a side effect of their odd disciplinary situation: high-visibility, even funding, but skepticism from more traditional humanists. It’s bad enough that they’ve gone over to the dark side and are using computers, but to think in evolutionary terms…the horror! the horror!

Again: stupid and shortsighted.

Meanwhile, I’m thinking about the work I’ve done with Matthew Jockers’ Macroanalysis: Digital Methods & Literary History. While Jockers has explicitly rejected evolutionary thinking, I’ve reinterpreted his work in evolutionary terms. If I am right in this, then it is one of the most impressive bodies of empirical work that has been done on cultural evolution. In particular, the work he did under the rubric of investigating literary evolution may qualify as a contribution to evolutionary thinking in general and not just to cultural evolution.

For that work demonstrates that the evolution of the 19th century novel is directional. The directionality of evolution is an important general topic and, within biology, is of course highly problematic–for reasons I find obscure and insufficient (see a paper David Hays and I wrote, A Note on Why Natural Selection Leads to Complexity). Jockers' convincing demonstration that for at least a century the English language novel evolved in a direction is thus striking. Of course, that’s my interpretation of what Jockers demonstrated, not his. But he did the study, so the demonstration is his.
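To make "directional" a bit more operational, here is a hedged sketch, not Jockers's method, of one way to check whether a corpus drifts through feature space in a consistent direction rather than wandering at random: ask whether consecutive year-to-year displacement vectors tend to point the same way. The yearly feature vectors below are invented for the purpose.

# Hedged illustration of "directional" evolution in a feature space.
# The yearly feature vectors are invented; the statistic asks whether
# successive displacement vectors tend to point in a consistent direction.
import numpy as np

def directionality(yearly_means):
    # Mean cosine similarity between consecutive displacement vectors.
    # Values well above 0 suggest consistent drift; near 0, a random walk.
    steps = np.diff(yearly_means, axis=0)
    cosines = []
    for a, b in zip(steps[:-1], steps[1:]):
        denom = np.linalg.norm(a) * np.linalg.norm(b)
        if denom > 0:
            cosines.append(np.dot(a, b) / denom)
    return float(np.mean(cosines))

rng = np.random.default_rng(0)
years, dims = 100, 50                       # e.g. topic or word-frequency features
trend = rng.normal(size=dims) * 0.05        # a small, consistent push per year
directed = np.cumsum(trend + rng.normal(scale=0.03, size=(years, dims)), axis=0)
random_walk = np.cumsum(rng.normal(scale=0.03, size=(years, dims)), axis=0)

print("directed corpus:  ", round(directionality(directed), 2))
print("undirected corpus:", round(directionality(random_walk), 2))

On this toy data the corpus with a built-in push scores much higher than the random walk; the interest, of course, lies in where a real corpus of yearly novel features falls.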

Time and Space and the City

The Benedictine monk Aidan Kavanagh, who straddled two worlds as both a monk and a Yale divinity professor, proposes that we understand the Church as originally and centrally an urban phenomenon. He translates civitas as “workshop” and “playground,” the space in which social, philosophical, and even scientific questions are worked out by humans in contact with their God, “the locale of human endeavor par excellence.”

By the fifth century A.D., Christian worship in the great cities of Jerusalem, Antioch, Alexandria, Rome, and Constantinople had become not just one service, but an “interlocking series of services” that began at daybreak with laudes and ended at dusk with lamp-lighting and vespers. Only the most pious participated in all the services, but everyone participated in some. The rites “gave form not only to the day itself but to the entire week, the year, and time itself,” says Kavanagh.

Perhaps just as important as the transformation of time was the transformation of space, for the mid-morning assemblages and processions appropriated the entire neighborhood as space for worship. Participants met in a designated place in some neighborhood or open space, and proceeded to the church designated for the day, picking up more participants as they went, and “pausing here and there for rest, prayer, and more readings from the Bible.” The Eucharist itself was a “rather rowdy affair of considerable proportions,” kinetic and free of stationary pews.
Let me underline the remark about time: The rites “gave form not only to the day itself but to the entire week, the year, and time itself,” says Kavanagh.

FDR Skate Park in Philadelphia

20141227-_IGP1705 EQ EXP CRVS

FDR is one of the largest DIY skate parks in the world, and is legendary in the skateboarding world. I took those photographs on December 27 and 28, 2014.

20141227-_IGP1802

There are scads of photos of FDR on the internet, and a quick look will tell you that it wasn't always covered in graffiti. But it's the graffiti that drew me there this time around.

20141227-_IGP1828

As the name indicates, it's at the south end of Franklin Delano Roosevelt Park underneath the Franklin Delano Roosevelt Expressway in south Philadelphia. Train tracks are thirty yards away, and then there's the naval yard.

20141227-_IGP1837

20141227-_IGP1669

As far as I know, the place is always under construction:

Thursday, January 22, 2015

Boss, WTF! But it becomes more compelling in season 2

I was browsing through Netflix looking for another show to watch. Boss, a grim and grimy political drama from 2011-2012, starring Kelsey Grammer as Tom Kane, corrupt mayor of Chicago, looked interesting. I’d enjoyed House of Cards and I like Kelsey “Fraser Crane” Grammer. So I started watching it.

My basic reaction after the first two or three shows, well after the first show if you must know, was: WTF! I thought House of Cards put politics in a bad light, but this! If one were to go through each series counting up the acts of humiliation, brutality, deception, double-crossing, and murder, I don’t know how the two shows would line up. But Boss just felt worse, though it’s been a while since I’ve watched any episodes of House of Cards. Maybe it’s that Kevin Spacey’s Frank Underwood seemed a bit more likeable, though he actually murdered a woman with his own hands, which Tom Kane hasn’t yet done. Who knows?

And then there’s plausibility, which is a peculiar consideration. I understand that politics is brutal, that corruption is real, but this? And the thing is, I don’t know how to judge such things. I’ve never been inside a corrupt big-city political machine so I have little life experience against which to judge what Boss is showing me. I mean, like, I’ve just watched Marco Polo, where the Great Khan executed men by having them trampled by horses, where Marco was forced to brand his father’s hand, where a Chinese ruler broke a young girl’s feet; and I just watched a movie where King John ordered a baron’s hands and feet chopped off, and then had his carcass thrown over a wall; but Boss seemed more brutal even than that.

But it’s not the physical brutality; it’s the moral brutality.

In the first episode we learn that Mayor Kane has a degenerative neural disorder that will kill him in three to five years. Whatever he’s doing, he’s working against that. And yet it almost seemed pasted on, not really organic to the plot and plotting. Certainly Kane had been brutal and corrupt before the disease.

Wednesday, January 21, 2015

The sky opens up

20141231-_IGP2150

The mind is computational, in an extended sense

AI, invented by computer scientists, lived long with the conceit that the mind was "just computation" - and failed miserably. This was not because the idea was fundamentally erroneous, but because "computation" was defined too narrowly. Brilliant people spent lifetimes attempting to write programs and encode rules underlying aspects of intelligence, believing that it was the algorithm that mattered rather than the physics that instantiated it. This turned out to be a mistake. Yes, intelligence is computation, but only in the broad sense that all informative physical interactions are computation - the kind of "computation" performed by muscles in the body, cells in the bloodstream, people in societies and bees in a hive. It is a computation where there is no distinction between hardware and software, between data and program; where results emerge from the flow of physical signals through physical structures in real-time rather than from abstract calculations; where the computation continually reconfigures the computer on which it is occurring (an idea central to GEB!) The plodding, sequential, careful step-by-step algorithms of classical AI stood no chance of capturing this maelstrom of profusion, but that does not mean that it cannot be captured!
Later:
This "embodied" view of the mind has several important consequences. One of these is to revoke the idea of "intelligence" as a specific and special capability that resides in human minds. Rather, intelligence is just an attribute of animal bodies with nervous systems: The hunting behavior of the spider, the mating song of the bird and the solution of a crossword puzzle by a human are all examples of intelligence in action, differing not in their essence but only in the degree of their complexity, which reflects the differences in the complexity of the respective animals involved. And just as there is a continuum of complexity in animal forms, there is a corresponding continuum of complexity in intelligence. The quest for artificial intelligence is not to build artificial minds that can solve puzzles or write poetry, but to create artificial living systems that can run and fly, build nests, hunt prey, seek mates, form social structures, develop strategies, and, yes, eventually solve puzzles and write poetry. The first successes of AI will not be Supermind or Commander Data, but artificial flies and fish and rats, and thence to humans - as happened in the real world! And it will be done not just by building smarter computer programs but by building smarter bodies capable of learning ever more complex behavior just as an animal does in the course of development from infancy to adulthood. Artificial intelligence would then already have been achieved without anyone "understanding" it.

Cultural Evolution: Literary History, Popular Music, Cultural Beings, Temporality, and the Mesh

Another working paper (title above):
Abstract and introduction below.

* * * * *

Abstract: Culture is implemented in a material and biological substrate but has a distinct ontology and its phenomena belong to a distinct order of temporality. The evolution of culture proceeds by random variation among coordinators, the cultural parallel to biological genes, and selective retention of phantasms, the cultural parallel to biological phenotypes. Taken together phantasms and a package or envelope of coordinators constitute a cultural being. In at least the case of 19th century American and British novels, cultural evolution has a direction, as demonstrated by the analytical work of Matthew Jockers (Macroanalysis 2013). While we can think of cultural evolution as a phenomenon that happens in history, it is at the same time a force that influences human life. It is thus a force IN history. This is illustrated by considering the history of the European novel from the 19th century and into the 20th century and in the evolution of popular musical styles in 20th century American music, in which interaction between African American and European American populations has been important. Ultimately, the evolution of culture can be thought of as the evolution of mind.

* * * * *

0. Introduction: The Evolution of Culture is the Evolution of Mind

One of the themes that has been prominent in Western culture is that we humans have a “higher” nature and a “lower” nature. That lower nature is something we share with animals, even plants–I’m thinking here of Aristotle’s account of the soul. That higher nature is unique to us and we have tended to identify it with reason and rationality. We are rational and can reason, animals are not and cannot.

It was one thing to hold such a belief when we could believe that our nature was distinct from that of animals. Darwin made that belief much more difficult to entertain. If we are descended from apes, and so are but animals, then how can we have this higher nature? And yet, by any reasonable account, we are quite different from all the other animals.

For one thing, we have language. Yes, other animals communicate, and, with much painstaking effort, we’ve managed to teach some sign language to chimpanzees, but still, no other species has yet managed anything quite like human language. And the same goes for culture. Yes, other animals have culture in the sense that they pass behavioral traits from one individual to another through social learning rather than through reproduction. But the trait repertoire of animal culture is quite limited in comparison to that of human culture. Nor has any animal species managed to remake their environment in the way we have, for better or worse, not beavers and their dams, nor termites and their often astounding mounds.

In the process of working through the posts I’ve gathered into this working paper, the original writing and the subsequent reviewing and revising, I’ve come to believe that it is culture, not reason, that is our higher nature. Reason is a product of culture, not the reverse.

That conclusion is not a direct result of the posts I’ve gathered here. You won’t find it as a conclusion in any of them, nor will I provide more of an argument in this introduction than I’ve already done. It’s a way of framing my current view of culture and human nature. It’s a higher nature. It rules us even as it is utterly dependent upon us.

Conceptualizing Cultural Evolution

This working paper marks the fruition of a line of investigation I began in 1996 with the publication of “Culture as an Evolutionary Arena” (Journal of Social and Evolutionary Systems, 19(4), 321-362). That was not my first work on cultural evolution; but my earlier work, going back to graduate school in the 1970s, was about stages conceived in terms of cognitive systems (called ranks). That work was descriptive in character, aimed at identifying the types of things possible with a given cognitive apparatus. The 1996 paper was my first attempt at characterizing the process of cultural evolution in evolutionary terms.

That paper originated in conversations I’d had with David Hays, who died in 1995, in which he suggested that the genetic material for cultural evolution was in the external world. Why? Because it is public, open for everyone to see. If the genetic material was out there in the world, I reasoned, then the selective environment must be social, something like a collective mind. That made sense because, after all, isn’t that how books and movies and records survive? Many are published, but only a few are taken up and kept in active circulation over the years.
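Here is a toy rendering of that intuition, strictly a sketch with made-up numbers: many works enter the public record, a population of readers samples them year after year, and only works that keep attracting readers stay in circulation. The survival threshold, reader counts, and the small familiarity boost are all illustrative assumptions, nothing more.

# Toy sketch: works enter a public pool; a social selective environment
# (readers) keeps only some of them in circulation. All parameters are
# illustrative assumptions, not estimates.
import random

random.seed(42)

published = [f"work_{i}" for i in range(200)]        # many works are published
appeal = {w: random.random() for w in published}     # each has some initial appeal

in_circulation = set(published)
for year in range(50):
    survivors = set()
    for work in in_circulation:
        # A work stays in circulation if enough readers take it up this year;
        # attention is probabilistic and weighted by the work's current appeal.
        readers = sum(random.random() < appeal[work] for _ in range(10))
        if readers >= 3:
            survivors.add(work)
            appeal[work] = min(1.0, appeal[work] * 1.05)   # circulation breeds familiarity
    in_circulation = survivors

print(f"published: {len(published)}, still circulating after 50 years: {len(in_circulation)}")

Public variants, social retention: that is the whole of the initial intuition.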

That’s not much of a conception, but I stuck with it. It’s taken almost two decades for me to refine those initial intuitions into a technical conception that feels good. That’s what I managed to achieve in the process of writing the posts I’ve collected and edited into this working paper.

All of which is to say that I’ve been working on two levels. On the one hand I’ve been making specific proposals about specific phenomena. But those specific proposals are in service of a more abstract project: crafting a framework in which to conceptualize cultural evolution. By way of comparison, consider chapter eleven of Richard Dawkins, The Selfish Gene. That’s where he proposes the concept of memes in thinking about cultural evolution: “Memes: the new replicators” (pp. 189-201). He gives a few examples, but mostly he’s focused on the concept of the meme itself. The examples are there to support the concept. None of them are developed very extensively or in detail; he says just enough to give some sense of what he has in mind.

Tuesday, January 20, 2015

Overhead Wires

20141009-IMGP1356

Why are we crippling our kids?

Jane Brody in the NYTimes:
Experts say there is no more crime against children by strangers today — and probably significantly less — than when I was growing up in the 1940s and ’50s, a time when I walked to school alone and played outdoors with friends unsupervised by adults. “The world is not perfect — it never was — but we used to trust our children in it, and they learned to be resourceful,” Ms. Skenazy said. “The message these anxious parents are giving to their children is ‘I love you, but I don’t believe in you. I don’t believe you’re as competent as I am.’ ”...

In decades past, children made up their own games and acquired important life skills in the process. “In pickup games,” Dr. Gray said, “children make the rules, negotiate, and figure out what’s fair to keep everyone happy. They develop creativity, empathy and the ability to read the minds of other players, instead of having adults make the rules and solve all the problems.”

Dr. Gray links the astronomical rise in childhood depression and anxiety disorders, which are five to eight times more common than they were in the 1950s, to the decline in free play among young children. “Young people today are less likely to have a sense of control over their own lives and more likely to feel they are the victims of circumstances, which is predictive of anxiety and depression,” he said.

Monday, January 19, 2015

Rereading Goethe’s Faust 6: Allegory Alert

At the moment I’m reading an article about the US Military by James Fallows. It has no particular connection with Faust beyond the fact that Goethe himself spent a formative period of his life as a man of practical affairs in the Weimar government. One can’t help but think that Faust was begun, in part, as an effort by Goethe to bridge the gulf between his early fame as a writer and his early experience as an administrator, from “in the beginning was the Word” to “in the beginning was the Deed.”

I say “in part” because no literary work, not even an autobiographical novel, is motivated primarily by the author’s experience. It is motivated by the need to write, whatever that is. And the need to write about the whole world, which seems to be the scope of Faust, what is that, where does it come from?

* * * * *

I’ve gotten no farther than I was in my previous post, two weeks ago, when I’d just finished the first part. As it was coming to a climax, such as it was, I was mostly thinking how poorly Gretchen was treated, not by Faust or the townspeople, but by Goethe. He creates this beautiful young woman for what purpose? To be the apple of the eye of a narcissistic megalomaniacal middle-aged intellectual. Faust sees, is smitten, and he barges in, getting her pregnant, leading her to poison her mother, and what does she get out of it? A celestial pardon. She gives herself to His judgment and he pardons her (ll. 4605-4612):
Margaret. Judgment of God! My all to thee I give!
Mephistopheles [to FAUST].
             Come! Come! Along with her I will abandon you.
Margaret. Thine am I, Father! Rescue me!
              Ye angels! Ye heavenly hosts! Appear,
              Encamp about and guard me here!
              Henry! I shrink from you!
Mephistopheles. She is judged!
A Voice [from above]. She is saved!
Mephistopheles [to FAUST]. Hither to me!
                                           He disappears with FAUST.
A Voice [from within, dying away]. Henry! Henry!
“She is saved!” – that’s it, says the booming voice (I assume it’s booming as the effect would be rather pathetic if it were high-pitched and whining), and Gretchen is redeemed. I wonder how she felt, once she’d heard that she’d been saved. That it had been worthwhile? Fat chance!

Early-modern Europe as backwater

Krugman is fishing for something else, but this observation struck me:
But was the Mughal state sui generis? Not really. It flourished in an era of “gunpowder empires”, large states where the strength of the central government rested on siege artillery and professional pike-and-musket infantry. The term is usually applied primarily to the three giant Islamic states – the Ottomans, the Safavids, and the Mughals. But as I understand it, the whole arc from Ming/Qing China to Habsburg Spain basically fits the model; and were the Bourbons really that different?

In this world the states of northwestern Europe that ended up looming so large in world history look trivial – and still look trivial in population and economic weight as late as the early 18th century. What’s more, they didn’t have any visible advantage in military technology until much later.

But surely this, too, is a simplistic picture. For even in the heyday of the gunpowder empires, the far Western states of Europe dominated the world’s oceans. Not the Mediterranean, where the Habsburgs and the Ottomans were relatively even at least until Lepanto, but in the open ocean, where galleys never had a chance and it was sail-and-cannon all the way, the Atlantic fringe took control very early. Why?

Sunday, January 18, 2015

Dan Dennett and others on 'thinking machines'

This year's Edge Question is, alas, "What do you think about machines that think?" "Alas" because I have something of a professional obligation to keep up with this kind of stuff, though I don't hold out high hopes for commentary on the topic, not even for John Brockman's Edgers. But I'll excerpt Dan Dennett's reply (scan down the page). Concerning IBM's Watson:
Do you want your doctor to overrule the machine's verdict when it comes to making a life-saving choice of treatment? This may prove to be the best—most provably successful, most immediately useful—application of the technology behind IBM's Watson, and the issue of whether or not Watson can be properly said to think (or be conscious) is beside the point. If Watson turns out to be better than human experts at generating diagnoses from available data it will be morally obligatory to avail ourselves of its results. A doctor who defies it will be asking for a malpractice suit. No area of human endeavor appears to be clearly off-limits to such prosthetic performance-enhancers, and wherever they prove themselves, the forced choice will be reliable results over the human touch, as it always has been. Hand-made law and even science could come to occupy niches adjacent to artisanal pottery and hand-knitted sweaters.
Maybe. His conclusion:
What's wrong with turning over the drudgery of thought to such high-tech marvels? Nothing, so long as (1) we don't delude ourselves, and (2) we somehow manage to keep our own cognitive skills from atrophying.
(1) It is very, very hard to imagine (and keep in mind) the limitations of entities that can be such valued assistants, and the human tendency is always to over-endow them with understanding—as we have known since Joe Weizenbaum's notorious Eliza program of the early 1970s. This is a huge risk, since we will always be tempted to ask more of them than they were designed to accomplish, and to trust the results when we shouldn't.

(2) Use it or lose it. As we become ever more dependent on these cognitive prostheses, we risk becoming helpless if they ever shut down. The Internet is not an intelligent agent (well, in some ways it is) but we have nevertheless become so dependent on it that were it to crash, panic would set in and we could destroy society in a few days. That's an event we should bend our efforts to averting now, because it could happen any day.
The real danger, then, is not machines that are more intelligent than we are usurping our role as captains of our destinies. The real danger is basically clueless machines being ceded authority far beyond their competence.

Incense Variations

20150117-_IGP2227 LL

20150117-_IGP2227 LL R

20150117-_IGP2227 LL BW

Dating the Anthropocene

...a formal decision by the International Commission on Stratigraphy is years away. In the meantime, a subsidiary body, the Anthropocene Working Group (because of my early writings, I’m a lay member), has moved substantially from asking whether such a transition has occurred to deciding when.

In a paper published online this week by the journal Quaternary International, 26 members of the working group point roughly to 1950 as the starting point, indicated by a variety of markers, including the global spread of carbon isotopes from nuclear weapon detonations starting in 1945 and the mass production and disposal of plastics. (About six billion tons have been made, with a billion of those tons dumped and a substantial amount spread around the world’s seas.)
For a series of charts showing the salience of that dating, go to the website of the IGBP (International Geosphere-Biosphere Program). From that site:
About one half of the global population now lives in urban areas and about a third of the global population has completed the transition from agrarian to industrial societies. This shift is evident in several indicators. Most of the post-2000 rise in fertilizer consumption, paper production and motor vehicles has occurred in the non-OECD world.

Coinciding with the publication of the Great Acceleration indicators, researchers also led by Professor Steffen have published a new assessment of the concept of “planetary boundaries” in the journal Science. The international team of 18 scientists identified two core planetary boundaries: climate change and “biosphere integrity”. Altering either could “drive the Earth System into a new state.” The planetary boundaries concept, first published in 2009, identifies nine global priorities relating to human-induced changes to the environment. The new research confirms many of the boundaries and provides updated analysis and quantification for several of them including phosphorus and nitrogen cycles, land use and biodiversity.

Saturday, January 17, 2015

Is the world intelligible?

Rick Searle has some very interesting paragraphs at the end of his review of Tyler Cowen's Average is Over (bolded emphasis mine):
For Cowen much of science in the 21st century will be driven by coming up with theories and correlations from the massive amount of data we are collecting, a task more suited to a computer than a man (or woman) in a lab coat. Eventually machine derived theories will become so complex that no human being will be able to understand them. Progress in science will be given over to intelligent machines even as non-scientists find increasing opportunities to engage in “citizen science”.

Come to think of it, lack of intelligibility runs like a red thread throughout Average is Over, from “ugly” machine chess moves that human players scratch their heads at, to the fact that Cowen thinks those who will succeed in the next century will be those who place their “faith” in the decisions of machines, choices of action they themselves do not fully understand. Let’s hope he’s wrong on that score as well, for lack of intelligibility in human beings in politics, economics, and science, drives conspiracy theories, paranoia, and superstition, and political immobility.

Cowen believes the time when secular persons are able to cull from science a general, intelligible picture of the world is coming to a close. This would be a disaster in the sense that science gives us the only picture of the world that is capable of being universally shared which is also able to accurately guide our response to both nature and the technological world. At least for the moment, perhaps the best science writer we have suggests something very different. To her new book, next time….
Yes, the unknown and the unknowable give us the creeps. Why? An obvious answer is that what we don't know might very well be harmful.

But I think that's only part of it. I think the unknowability is itself bothersome independently of anything that may be lurking behind it. I think that's how our nervous system is. Why that's so, I do not know. It seems to be the 'other side' of our ability to make up (often arbitrary) stories about anything. Whatever it is that freed our minds of the tyranny of the present, also left us wide open to the terrors of the unknown. And it is the terror of the unknown, more than anything else, that has driven long-term cultural evolution. That's why Searle's review caught my attention.

Friday, January 16, 2015

Friday Fotos: Los Alamos, home of the a-bomb

I didn't take these photographs. They're from a Flickr collection posted by Los Alamos National Laboratory, where the first atomic bomb was built and tested during World War II. Mouse over the photos to see identification.

1943 Los Alamos Project Main Gate

Back in the day

Dean Keller and Gasbuggy Recording Equipment

Housing

NRDS Station at Jackass Flat

Thursday, January 15, 2015

Rhythm Changes: Notes on Some Genetic Elements in Musical Culture

That's the title of another working paper. You may download it from Academia.edu (HERE) or from Social Science Research Network (HERE).

Here's the abstract:
An entity known as Rhythm Changes is analyzed as a genetic entity in musical culture. Because it functions to coordinate the activities of musicians who are playing together it can be called a coordinator. It is a complex coordinator in that it is organized on five or six levels, each of which contains coordinators that function in other musical contexts. Musicians do not acquire (that is, learn) such a coordinator through “transfer” from one brain to another. Rather, they learn to construct it from publicly available performance materials. This particular entity is derived from George Gershwin’s tune “I Got Rhythm” and is the harmonic trajectory of that tune. But it only attained independent musical status after about two decades of performances. Being a coordinator is thus not intrinsic to the entity itself, but is rather a function of how it comes to be used in the musical system. Recent argument suggests that biological genes are like this as well.
Here's a key sequence of paragraphs:
In the case of music, let me suggest that how different musicians understand Rhythm Changes is irrelevant as long as they can stay together while performing. Charlie Parker’s neural patterns for Rhythm Changes may have been different from Dizzy Gillespie’s or Miles Davis’s or Thelonious Monk’s, but that doesn’t matter as long as they’re together on the bandstand. That is something they can judge perfectly well in performance, as can listeners. The question of how accurately patterns are “transferred” from one brain to another is mistaken.

I’ll go so far as to say that even when a neophyte is learning from a master, there is no transfer of patterns from the master’s brain to the neophyte’s. Rather, as they play together, the neophyte responds to what the master is doing and figures out, in his own (neural) terms, how to match or complement, as the case may be, what the master is playing. The student will make mistakes, the master will make comments, and they’ll try again.

There is no transfer of patterns in the sense that computers can transfer information from one machine to another. We have learning from public models, and we have interplay, negotiation, and mutual adjustment. How individual musicians achieve these results, what goes on in their brains, is a secondary matter. Whatever it is, it isn’t a matter of either sending or receiving information.
The introduction finishes out this post.
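For readers who want a concrete picture of what "organized on five or six levels" means, here is a sketch of Rhythm Changes as a nested structure: a 32-bar AABA form, eight-bar sections, bars, chords. The changes shown are one common simplified variant in B flat, not a definitive transcription, and the data-structure framing is mine, purely for illustration.

# Sketch: Rhythm Changes as a multi-level structure (form -> section -> bar -> chord).
# One common simplified variant in B flat; actual performances vary considerably.
A_SECTION = [
    ["Bb6", "G7"], ["Cm7", "F7"],       # I - VI7, ii7 - V7 turnaround
    ["Bb6", "G7"], ["Cm7", "F7"],
    ["Bb7"],       ["Eb6", "Edim7"],    # out to IV and back
    ["Bb6", "G7"], ["Cm7", "F7"],
]
BRIDGE = [
    ["D7"], ["D7"], ["G7"], ["G7"],     # circle of fifths: III7 - VI7 - II7 - V7
    ["C7"], ["C7"], ["F7"], ["F7"],
]
RHYTHM_CHANGES = {"form": "AABA", "sections": [A_SECTION, A_SECTION, BRIDGE, A_SECTION]}

# Each level reuses coordinators that circulate elsewhere in the musical system:
# the ii-V-I turnaround, the circle-of-fifths bridge, the 32-bar AABA form itself.
bars = sum(len(section) for section in RHYTHM_CHANGES["sections"])
chords = sum(len(bar) for section in RHYTHM_CHANGES["sections"] for bar in section)
print(f"form: {RHYTHM_CHANGES['form']}, bars: {bars}, chord slots: {chords}")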

The making of a master musician in Hindustani classical music

Sumana Ramanan writes about Ulhas Kashalkar, a singer, in Caravan. Here's a passage about his training with Gajananbuwa Joshi, vocalist and violinist, and a well-known teacher and superb performer:
With the help of two scholarships amounting to Rs 600 a month, Kashalkar rented a small flat in Dombivli, the suburb just outside Mumbai where Joshi was based. For the next five years, Joshi poured himself into his student, taking him everywhere and teaching him during every spare moment. For the first seven months, Joshi worked with Kashalkar on only one raga, Yaman, often the first one taught to beginners. For Kashalkar, who had arrived with two MA gold medals under his belt, this was humbling. “He taught me numerous bandishes and taranas in a variety of talas in this one raga until I had mastered all of them,” Kashalkar said. “I then realised that our music just cannot be learnt in a classroom, only in a guru-shishya setting.”

Today we have an incredible window into this relationship: a benevolent soul, the late GP Thatte, recorded many of Joshi and Kashalkar’s lessons, and later uploaded them to the internet. Most of these audio clips are from three months in 1980, when Joshi travelled to Nashik to tutor the well-to-do Thatte. Joshi took his star pupil along to sit in, and taught him during the visit too.

Joshi had developed a highly sophisticated pedagogy. It included the use of sargams, or note patterns, to help students master a raga’s chalan, or characteristic gait, and build on this to improvise. In one of Thatte’s clips, for example, we hear Joshi drilling Kashalkar in the majestic Darbari Kanada, often called the king of Hindustani ragas. Starting with a few simple phrases, Joshi pushes his student to create ever more complicated patterns while gradually increasing the tempo. For over twenty minutes, Kashalkar follows his teacher’s lead, starting with some of the raga’s core phrases, such as sa-dha-ni-pa and sa-re-ga-ma-re-sa-re-sa, and embellishing these with just a note, then a few notes, then a longer phrase, until he establishes an entire soundscape.

Joshi also insisted on his students learning the basics of the tabla to help them master rhythm. Before every lesson, for twenty minutes, he made Kashalkar recite tabla bols, or syllables, in various talas, and taught him to create variations within the kaida and rela, both compositional forms for the tabla. “This method helps students overcome their fear of tala once and for all,” Kashalkar told me.

It was upon this rock-solid foundation that Kashalkar further built his skills. His energy as a student had also rejuvenated Joshi, to the extent that he started performing and accepting students again for the few years before his death, in 1987.
H/t 3QD.

Two Faces by Mustart

IMGP2126

_IGP0573

Notice the trompe l'oeil above the nose, where the eyes more or less are: it is as though we are looking at a large piece of paper with two areas ripped out, and those ripped-out areas are the eyes of the wall itself, not the eyes of a face painted on the wall.

Wednesday, January 14, 2015

Uncertainty, Anxiety, and Torture

The New York Review of Books has an interview with Mark Danner, who has been following “the use of torture by the US government since the first years after September 11.” He’s talking about the recent Senate report on the CIA’s use of torture. Given my current thinking about anxiety and culture, I was struck by Danner’s remarks on anxiety in the CIA. Here he’s talking about the torture of Abu Zubaydah, one of the first ‘high value’ victims (I’ve bolded the word):
It’s an epistemological paradox: How do you prove what you don’t know? And from this open question comes this anxiety-ridden conviction that he must know, he must know, he must know. So even though the interrogators are saying he’s compliant, he’s telling us everything he knows—even though the waterboarding is nearly killing him, rendering him “completely non-responsive,” as the report says—officials at headquarters were saying he has to be waterboarded again, and again, because he still hadn’t given up information about the attacks they were convinced had to be coming. They kept pushing from the other side of the world for more suffering and more torture.

And finally, grudgingly, after the eighty-second and eighty-third waterboardings, they came to the conclusion that Abu Zubaydah didn’t have that information. So when they judged the use of enhanced interrogation techniques on Abu Zubaydah a “success,” what that really meant was that the use of those techniques, in this brutal, appalling extended fashion, had let them prove, to their satisfaction, that he didn’t know what they had been convinced that he did know. It had nothing to do with him giving more information as he was waterboarded. The use of these techniques let them alleviate their own anxiety. And their anxiety was based on complete misinformation. Complete ignorance about who this man actually was.

You see this in this report again and again. You see that CIA headquarters is absolutely convinced that these people know about pending attacks. And what the torture proves is that they don’t know it. And mostly the reason for this is that information about current attacks was very, very tightly held. That’s the way terrorist organizations work. They’re cellular structures, with information distributed on a need-to-know basis. And unless you manage to capture the person about to conduct the attack, or Osama bin Laden, you are going to have a very hard time finding people who know about current attacks.

There are moments of clarity in the report where CIA interrogators are conceding, internally, that we know astonishingly little about who these guys are. And yet this huge machinery of torture was put into place and defended at all costs.

We translated our ignorance into their pain. That is the story the Senate report tells. Our ignorance, our anxiety, our guilt, into their pain. It’s one reason why I think—looking much more broadly at policy—it was a grave error for President Bush not to replace people in the CIA after September 11. Because you had an agency that out of its guilt about having failed to prevent those attacks—guilt that extended from the director down—could think only of preventing another attack. And while preventing another attack was extremely important, it wasn’t the only thing. And I think here their hysteria caused them to operate in an irrational and counterproductive way.
These particular actions did little to nothing to relieve official anxiety. In fact, because the torturing proved so ineffective, the program may have increased anxiety rather than alleviating it, for these actions simply underscored our ignorance.

The list of things we humans do to alleviate our anxiety about the unknown is going to be very, very long. Consider, as a source of examples, medical treatments. It’s only in the last century or so that we’ve developed effective medical treatments. And quackery still thrives; while most of it may be outside the medical establishment, there is no doubt some quackery that has managed to get endorsed by organized medicine.

What about the “war on cancer”? How much of that money has been well spent?

And so forth.

Latour and Culture

Latour has written a lot, of which I’ve read only a fraction, not even a quarter, perhaps not even a tenth: We Have Never Been Modern, Reassembling the Social: An Introduction to Actor-Network-Theory, Politics of Nature: How to Bring the Sciences into Democracy, On the Modern Cult of the Factish Gods, and some essays. He has spent a lot of time studying culture, but I don’t associate him with culture; I associate him with society.

Until now, now that I’ve been once again immersing myself in the problems of conceptualizing cultural evolution. This particular aspect of my thinking goes back to a conversation I had years ago with David Hays in which he suggested that the genetic material for culture is out there, in the environment. It has to be out there, rather than in the mind, because that’s what’s there for everyone to experience; it’s what we hold in common, and culture HAS to be held in common, otherwise it can’t do its job. What I’ve realized is that that cultural genetic material is “painted” all over the “surfaces” of the actors in a Latourian actor-network.

Whether the actors are human beings, or wooden sticks, a stream of water, a television set, a cat, a culture of bacteria, what have you, they have culture written all over them. Without them culture couldn’t exist. Sure, there’s a sense in which culture exists in human minds, but those minds need the support of vast networks of objects in the world, including but not limited to other humans.

So why is Latour so oblivious to culture? Sure, he knows it’s there, he talks about it, but society is what he (thinks he) theorizes, not culture. The book is entitled Reassembling the Social, not Assembling the Cultural.

But then it’s easy to get confused about society and culture, difficult to tease them apart. Pairs of phrases such as “American society” and “American culture” tend to mean the same thing.

Yet there is a distinction to be made. A society is a group of people. Culture is the norms, attitudes, and mores that guide a group’s interactions. But where does that distinction lead? What do you look at when you study society that you don’t look at when you study culture, and vice versa? A tricky question.

Still, it changes how I think about Latour to realize that, in effect, those actor networks are important because they are the genetic substrate of culture.

Finally, I’ve thought from the beginning that Latour is missing a psychology. He has little to nothing to say about the mind, and yet it is the mind that holds those actor networks together. Not transcendentally, mind you, not from above. But without those individual human minds reading culture from the surfaces of the actors in the network (including other humans), the network would not cohere.

Tuesday, January 13, 2015

Siri, getting to know you

A recent study found that computers can judge your personality based on your pattern of Facebook "likes":
In the study, a computer could more accurately predict the subject's personality than a work colleague by analysing just ten Likes; more than a friend or a cohabitant (roommate) with 70, a family member (parent, sibling) with 150, and a spouse with 300 Likes.

Given that an average Facebook user has about 227 Likes (and this number is growing steadily), the researchers say that this kind of AI has the potential to know us better than our closest companions....

In the new study, researchers used a sample of 86,220 volunteers on Facebook who completed a 100-item personality questionnaire through the 'myPersonality' app, as well as providing access to their Likes.

These results provided self-reported personality scores for what are known in psychological practice as the 'big five' traits: openness, conscientiousness, extraversion, agreeableness, and neuroticism—the OCEAN model. Through this, researchers could establish which Likes equated with higher levels of particular traits e.g. liking 'Salvador Dali' or 'meditation' showed a high degree of openness.
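
The piece doesn't spell out the mechanics, but the general recipe in this line of research is straightforward: treat each user as a very sparse vector of Likes, compress that vector, and fit a regression model for each of the five traits. Here's a rough Python sketch with synthetic data standing in for the real Likes and questionnaire scores; the particular choices (SVD down to 100 dimensions, ridge regression, the toy sample sizes) are mine for illustration, not necessarily what these researchers used.

import numpy as np
from scipy.sparse import random as sparse_random
from sklearn.decomposition import TruncatedSVD
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_users, n_likes = 2000, 5000

# User-by-Like matrix: 1 if a user Liked a page, 0 otherwise (very sparse).
likes = sparse_random(n_users, n_likes, density=0.01, random_state=0,
                      data_rvs=lambda n: np.ones(n))

# Self-reported score for one trait, say openness (a synthetic stand-in here).
openness = rng.normal(size=n_users)

# Compress each user's Like vector, then fit a regularized linear model.
svd = TruncatedSVD(n_components=100, random_state=0)
likes_reduced = svd.fit_transform(likes)

model = Ridge(alpha=1.0)
r2 = cross_val_score(model, likes_reduced, openness, cv=10, scoring="r2")
print("mean cross-validated R^2:", r2.mean())

On random numbers the model predicts nothing, of course; in the study the interesting comparison is between predictions like these, made on held-out users, and the judgments of friends, family members, and spouses.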

Variations on a Moon Shot

Here's a photo of the Moon that I took the other day. Nothing special, just a nice clean shot of the Moon in a sky empty of clouds, contrails, birds, helicopters, and planes:

20141231-_IGP2188

Precisely because it is a clean simple shot I decided to run some Photoshop variations on it. It's an exercise in minimalism. Since the basic image material is simple, the variations "show" more strongly.

I do this every now and then without any specific purpose. It's just an exercise. When I do it, just about the first thing I do is to create red, green, and blue (RGB) versions of my source image. Since this one is already a strong medium blue, I just needed to do red and green versions. I then decided to toss in a yellow one as well.

20141231-_IGP2188R

20141231-_IGP2188G

20141231-_IGP2188Y

Notice that the yellow is a bit desaturated. I couldn't get a highly saturated version using my favored techniques, so I left it at that.
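
For what it's worth, the exercise can be roughed out in code as well. This little Pillow script is not my Photoshop workflow, just an approximation of it: it replaces the hue of the whole image with one fixed hue, which is more or less what a solid color variation amounts to. The file names are the ones above, with an assumed .jpg extension.

from PIL import Image

def hue_variation(src_path, hue_degrees, out_path):
    """Save a copy of the image with its hue set to one fixed value."""
    img = Image.open(src_path).convert("HSV")
    h, s, v = img.split()
    # Pillow stores hue on a 0-255 scale rather than 0-360 degrees.
    flat_hue = Image.new("L", img.size, int(hue_degrees / 360 * 255))
    Image.merge("HSV", (flat_hue, s, v)).convert("RGB").save(out_path)

# The original is already blue, so red, green, and yellow versions suffice.
for suffix, hue in [("R", 0), ("G", 120), ("Y", 60)]:
    hue_variation("20141231-_IGP2188.jpg", hue,
                  "20141231-_IGP2188" + suffix + ".jpg")

Saturation lives in the s band, so in principle the washed-out yellow could be pushed harder by scaling that band before merging.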

Containers: A new and better kind of code?

From the NYTimes:
Docker is at the forefront of a new way to create software, called containers. These software containers are frequently compared with shipping containers. And as their popularity grows, building big computer networks could become remarkably simpler.

Like the big metal containers that can move from ship to ship to truck without being opened, software containers ship applications across different “cloud computing” systems and make it easy to tinker with one part, like the products for sale on a mobile application, without worrying about the effect on another part, like the big database at the heart of the corporate network....

“It’s a huge efficiency gain in how you write code,” said Mr. Golub, who started his career teaching business courses in Uzbekistan. “You don’t have to rewrite everything, then fix all the breaks when it goes into production. You just work on what you change.”
Tell me more. This tells me a little more:
DotCloud, the precursor of Docker, was in the business of helping developers build online applications by focusing on things like spreading use across several computers.

“Software developers need to be able to work easily with complicated infrastructure,” said the company’s founder, Solomon Hykes. “It was clear that cloud applications would have to be written efficiently, become part of the Internet, update constantly, and be always online, for all kinds of industries.”

DotCloud was one of many such services, and could not find many customers. But there was a container-type function in DotCloud, like the one Google had built. Mr. Hykes, who was talking with Mr. Golub about what the company could do to generate interest, worked at building a way for one container to work over the many versions of the Linux operating system.
What this is telling me is that we've got a new way to break big programs into (quasi)independent blocks. That's nice, but what's new about it aside from its newness?
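
To make the idea concrete, here's what a container recipe looks like. This is a hypothetical, minimal Dockerfile I've sketched for a small Python web service, not anything from the Times piece; the file names (requirements.txt, app.py) are made up.

# Everything the service needs (base image, interpreter, dependencies, code)
# is declared here and baked into one shippable image.
FROM python:3.9-slim

WORKDIR /app

# Install dependencies first so this layer is cached separately from code changes.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy in the application code itself.
COPY . .

# The command the container runs when it starts.
CMD ["python", "app.py"]

Build it with docker build -t myapp . and run it with docker run myapp; the same image runs unchanged on a laptop, a server, or somebody's cloud. Part of the answer to what's new, I suspect, is not the blocks themselves but the fact that each block carries its whole environment with it, so changing one doesn't disturb the others.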