Tuesday, May 28, 2013
One thing I noticed a while ago is that Breaking Bad keeps you moving back and forth on its major characters, especially Walter and Jesse. There’s a sequence where the character is really nasty and then, a bit later, the character does something that draws you to them. If someone wanted to do some serious work on this show, this rhythm would be something to look at. Chart it out character by character, within episodes and across episodes.
It’s my crude impression that Breaking Bad does this more than other shows (e.g. The Sopranos, The Wire, etc.). For what it’s worth, I didn’t even notice this going on in other shows, well, a bit with Mad Men. Maybe it’s common, but I’ve just now noticed it. And maybe it’s particularly prevalent in Breaking Bad. I can’t tell.
Consider episode 10 in season 3, “Fly.” First, this episode is something of a dramatic “stunt” in that it involves only two characters, Walter and Jesse, and one setting, the meth lab. Second, nothing much happens in the episode. No one is killed, there’s no violence, no major changes in anything. It’s a very minimalist episode.
So, what does happen? It’s at the end of the workday and Walt and Jesse are at the lab. Walt’s been running numbers. He notices that some of the product seems missing. We, of course, know what’s been going on. Jesse’s been skimming a bit to sell on the side. We’re not surprised that Jesse offers various (lame) reasons to explain the discrepancy. (Doesn’t he realize that Walt can calculate the expected yield given his knowledge of input quantities?)
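The calculation Walt can make is just back-of-the-envelope arithmetic: expected output from known inputs, minus what’s actually on hand. A toy sketch, with numbers invented purely for illustration (nothing here is from the show):

```python
def expected_yield_lbs(methylamine_gallons, lbs_per_gallon):
    """Expected product, given known input quantities and a known yield rate."""
    return methylamine_gallons * lbs_per_gallon

def discrepancy(expected_lbs, actual_lbs):
    """A positive result means product is unaccounted for."""
    return expected_lbs - actual_lbs

# Hypothetical figures, purely illustrative.
expected = expected_yield_lbs(30, 1.5)    # 45 lbs expected from the inputs
missing = discrepancy(expected, 44.0)     # 1 lb short of expectation
```

A chemist who knows his inputs doesn’t need to trust anyone’s story about the outputs; the arithmetic tells on Jesse by itself.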
Does Walt suspect that Jesse’s skimming? Will he figure it out? Jesse leaves and Walt gets distracted by a fly. Really distracted. He begins to take physical risks to get the fly, even taking a nasty fall. He’s become obsessed for no apparent reason.
Jesse returns, finds Walt chasing the fly, and wonders what’s up. Walt gives him a song and dance about contamination, but doesn’t have the courtesy to even attempt to explain how one fly could damage the product. Jesse wants to finish the current batch before it goes sour, but Walt won’t let him. Gotta get that fly. This makes no sense to Jesse or to us.
Walt is being a pain in the ass. Really annoying. It just seems to be prima donna perfectionism out of control.
Monday, May 27, 2013
I want to look at two recent pieces by Daniel Dennett. One is a formal paper from 2009, The Cultural Evolution of Words and Other Thinking Tools (Cold Spring Harbor Symposia on Quantitative Biology, Volume LXXIV, pp. 1-7, 2009). The other is an informal interview from January of 2013, The Normal Well-Tempered Mind. What interests me is how Dennett thinks about computation in these two pieces.
In the first piece Dennett seems to be using the standard-issue computational model/metaphor that he’s been using for decades, as have others. This is the notion of a so-called von Neumann machine with a single processor and a multilayer top-down software architecture. In the second and more recent piece Dennett begins by asserting that, no, that’s not how the brain works, I was wrong. At the very end I suggest that the idea of the homuncular meme may have served Dennett as a bridge from the older to the more recent conception.
Words, Applets, and the Digital Computer
As everyone knows, Richard Dawkins coined the term “meme” as the cultural analogue to the biological gene, or alternatively, a virus. Dennett has been one of the most enthusiastic academic proponents of this idea. In his 2009 Cold Spring Harbor piece Dennett concentrates his attention on words as memes, perhaps the most important class of memes. Midway through the paper he tells us that “Words are not just like software viruses; they are software viruses, a fact that emerges quite uncontroversially once we adjust our understanding of computation and software.”
Those first two phrases, before the comma, assert a strong identification between words and software viruses. They are the same (kind of) thing. Then Dennett backs off. They are the same, providing of course, that “we adjust our understanding of computation and software.” Just how much adjusting is Dennett going to ask us to do?
This is made easier for our imaginations by the recent development of Java, the software language that can “run on any platform” and hence has moved to something like fixation in the ecology of the Internet. The intelligent composer of Java applets (small programs that are downloaded and run on individual computers attached to the Internet) does not need to know the hardware or operating system (Mac, PC, Linux, . . .) of the host computer because each computer downloads a Java Virtual Machine (JVM), designed to translate automatically between Java and the hardware, whatever it is.
The “platform” on which words “run” is, of course, the human brain, about which Dennett says nothing beyond asserting that it is there (a bit later). If you have some problems about the resemblance between brains and digital computers, Dennett is not going to say anything that will help you. What he does say, however, is interesting.
Notice that he refers to “the intelligent composer of Java applets.” That is, the programmer who writes those applets. Dennett knows, and will assert later on, that words are not “composed” in that way. They just happen in the normal course of language use in a community. In that respect, words are quite different from Java applets. Words ARE NOT explicitly designed; Java applets ARE. Those Java applets seem to have replaced computer viruses in Dennett’s exposition, for he never again refers to them, though they figured emphatically in the topic sentence of this paragraph.
The JVM is “transparent” (users seldom if ever encounter it or even suspect its existence), automatically revised as needed, and (relatively) safe; it will not permit rogue software variants to commandeer your computer.
Computer viruses, depending on their purpose, may also be “transparent” to users, but, unlike Java applets, they may also commandeer your computer. And that’s not nice. Earlier Dennett had said:
Our paradigmatic memes, words, would seem to be mutualists par excellence, because language is so obviously useful, but we can bear in mind the possibility that some words may, for one reason or another, flourish despite their deleterious effects on this utility.
Perhaps that’s one reason Dennett abandoned his talk of computer viruses in favor of those generally helpful Java applets.
Salon has an interesting article about time and timing (excerpted from Time Warped: Unlocking the Mysteries of Time Perception, a book by Claudia Hammond). Here's a passage about the time scale involved in speech:
To produce and understand speech, we rely on critical timings of less than a tenth of a second. The difference between the sound of a ‘pa’ and a ‘ba’ is all in the timing of the delay before the subsequent vowel, so if the delay is longer you hear a ‘p’, if it’s short you hear a ‘b.’ If you put your hand on your vocal cords you can even feel that with the ‘ba’ your lips open at the same time as you feel your cords start to vibrate. With the ‘pa’ the vibration starts a moment later. This relies on timing accurate to the millisecond. Even the timing between syllables can be crucial to a phrase’s meaning. With Jimi Hendrix’s lyric, “Excuse me while I kiss the sky,” just a fraction of a second difference in timing is what gives you the famous mondegreen, “Excuse me while I kiss this guy.”
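The voice-onset-time cue in that passage amounts to a simple threshold: short lag before voicing and you hear /b/, long lag and you hear /p/. A minimal sketch, where the ~25 ms boundary is a rough figure for English used only as an illustrative assumption:

```python
def classify_stop(vot_ms, boundary_ms=25.0):
    """Classify a bilabial stop by voice onset time (VOT), the delay
    in milliseconds between the lip release and the start of voicing.
    Short lag -> heard as 'b'; long lag -> heard as 'p'."""
    return "b" if vot_ms < boundary_ms else "p"

print(classify_stop(5))    # near-simultaneous voicing, as in 'ba'
print(classify_stop(60))   # delayed voicing, as in 'pa'
```

What’s striking is that perception is categorical: listeners don’t hear a continuum of in-between sounds, they hear one consonant or the other, which is why a few milliseconds can flip the word.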
Sunday, May 26, 2013
TV comedies tend to be half-hour shows while dramas tend to be hour-long shows. Why? And why don't we have comedies with a continuous story over a whole season or more?
Feature-length comic movies are common enough, so why not hour-long sitcoms on TV?
There's an obvious line of approach to that last question: Feature-length comedies are different from sitcoms. Just what that difference is, of course, needs to be spelled out. Whatever it is, we still have the question of why TV comedy adheres to the sitcom and not to a more expansive format.
Conversely, why don't we have a flock of half-hour TV dramas?
Saturday, May 25, 2013
In my previous post on Breaking Bad I noted that the show seems a bit thick with coincidence. Since the show’s been out for a while I figured others must have remarked on it. So I got my Google-fu in gear and went looking.
I got some hits. The issue has been raised on Quora: Who thinks Breaking Bad is too contrived...too many improbable coincidences? The question had seven answers when I got there; I added an eighth. Most answers seemed to think that, yes, lots of coincidence. But that’s not necessarily a bad thing.
The show’s entry in TV Tropes lists three+ instances of the trope, Contrived Coincidence. The “+” is for “Everything involving the midair collision in the season 2 finale.”
There’s more, but that’s enough to tell me that the show’s use of coincidence is pretty obvious. It may not be obvious to the point where you’re supposed to be thinking to yourself “zing! there goes another one”; but it’s out there. It’s in the texture of the show.
I assume this use of coincidence is deliberate. As I’ve said, it’s part of the artistry. What’s it doing? In the penultimate (that is, next to the last) episode of season two Jesse’s girlfriend, Jane, chokes on her own vomit while Walt’s on a diaper run for the new baby, where he also meets up with Jane’s father in a bar (but doesn’t know who he is). Coincidence? Heck yeah! But what’s the point?
And Walt’s a bit pissed that his son’s “Help Walt” website is bringing in the bucks. He knows, of course, that those bucks are the bucks he made selling a megaload of primo meth and he’s pissed that he’s not getting credit for providing for his family.
Friday, May 24, 2013
Just watched “Mandala,” episode 11 of season 2. It has the kind of coincidence that is neither here nor there as far as realism is concerned, but that very much IS central to the aesthetic technique of the show.
Jesse’s new girlfriend, Jane, also his landlady, is a recovering addict. We met her several episodes ago, after Jesse got kicked out of his aunt’s place (owned by his parents, who let him live there). Jane has relapsed and done meth with Jesse. Now, in this episode, she introduces him to heroin.
Meanwhile, the murder of one of their dealers, Combo, has forced Walter and Jesse to seek help from their (very shady) lawyer, Saul. Saul hooks them up with a distributor, Gus, who can move their 38 pounds of prime meth. But he’s cautious and admonishes Walt for having an addict for a partner.
And then Walt’s wife, Skyler, who’s taken a job, discovers that her boss and company owner has been embezzling to the tune of one million dollars. She agrees not to snitch, but also decides to quit. (Maybe. But she goes back to the office after having left.)
Walt’s in class and he gets the call. He’s keeping his second cell phone, the one he uses for the drug business, in the ceiling of the classroom. It’s on vibrate mode, but the class still hears it buzzing up there. Walt has made a stupid Jesse-class mistake. Still, the students leave, Walt retrieves the phone, and gets a text message. The deal is on. He’s got an hour to deliver the full 38 pounds.
Of course, he doesn’t have the stuff, Jesse does. And Jesse isn’t answering his phone. Why not? Because he’s high on heroin, remember? Walt breaks in, manages to get Jesse to tell him more or less where the stuff is, and finds it. As he’s loading it into a garbage bag he gets another text, from his wife. She’s going into labor.
End of episode. We’ve got to wait for the next one to find out what happens.
Of course, I’m watching on Netflix, so I don’t have to wait. Instead, I’ve decided to knock this note out, fast. Just to get it on record. An example of the show’s real-time technique.
Realistic? Yes/No. Take it or leave it. In life, yes, shit happens. But things aren’t all lined up nice in a row. This, though, is a real pile-up: the Big Deal is on, Jesse’s stoned, Skyler’s in labor. All at the same friggin’ time.
I don’t think so. This is aesthetic technique, not mimetic necessity. And what’s the effect of having all this take place inside of 47 minutes? No time to think. You just have to go with this crazy flow.
Notice, by the way, Skyler’s now (provisionally) implicated in something illegal.
A couple of days ago I saw that one James Atlas–about whom I have heard, a bit–had an op-ed in the NYTimes entitled “Get a Life? No Thanks. Just Pass the Remote.”
It was mostly about Breaking Bad, one of those high-quality TV shows with the continuous story, grimy texture, and moral ambiguity. You know, The Sopranos, The Wire, Deadwood, Mad Men, House of Cards, that stuff, the stuff that’s giving canonical lit a run for the cultural brass ring. So I bit.
And, yeah, Breaking Bad–which I’d heard about, which Netflix kept pushing at me–IS one of those. Why those stories, at this time? In part it’s that it took a while for TV both to figure out how to do it, whatever IT is, and to allow itself to do it. Still, why the darkness? Yeah, I know, life isn’t simple. Real people aren’t purely good or purely bad.
As Atlas says about Breaking Bad:
But if the story line propels me into my TV grotto, it’s the realism that keeps me there. There’s nothing artificial about “Breaking Bad” — the spell is never broken. The dialogue is pitch-perfect. And there’s a lot of useless but fascinating information: you can learn how a meth lab operates, how money is laundered and guns are sold, how to murder people. (This is not necessarily a good thing. Dzhokhar Tsarnaev, accused in the Boston Marathon bombings, tweeted: “ ‘Breaking Bad’ taught me how to dispose of a corpse.”) And if you want to know how crack is smoked, how a heroin needle is inserted, the spoon heated up, I highly recommend this show. Self-destruction is interesting.
But reality’s notoriously difficult to judge. Atlas pretty much assumes that the meth lab scenes are, well, realistic, because that’s how they are presented. I make the same assumption. But I don’t know. I’ve never seen a meth lab and neither, I warrant, has Atlas.
It just smells real. The meth lab and a lot more besides. It’s not as though you can take a snatch of Real Life here and hold it up to a snatch of TV Life there and compare them. Fictional realism is a set of conventions.
Thursday, May 23, 2013
One of the most important ideas to come from the Chomsky program of linguistic thought is the idea of universal grammar (aka UG), a "deep" structure of human cognition that is innate and universal to all humans. It is the existence of this UG that allows humans to acquire language in the first place, for it is the UG that guides the young child's interpretation of the language around her.
Neat idea. So, let's go look for it. Avanti!
Over the years, however, as the Chomsky program has acquired critics, UG has been one point of contention. Perhaps the diversity among languages is so great that no UG can plausibly account for it.
But, you might ask, if there is no Universal Grammar then what happens to our vaunted capacity for language? Is it as empty as UG? And, if so, just how DO we learn language, and WHO are we anyhow?
Good questions, Grasshopper, good questions. They're all in play.
Sean Roberts over at Replicated Typo (where I occasionally post) reports a recent debate on the subject. His opening paragraphs:
There was a debate today between Peter Hagoort and Stephen Levinson on “The Myth of Linguistic Diversity”. Hagoort argued the case for universalist accounts. He admitted that language does exhibit a large amount of diversity, but that this diversity is constrained. He argued that linguistics should be interested in which universal mechanisms explain the boundary conditions for linguistic diversity. The most likely domain in which to find these mechanisms is the brain. It comes with internal structure that defines the boundary conditions on the surface structures of human behaviours. These boundary conditions include the learnability of input, and that language is processed incrementally and under time constraints. Brains operate under these constraints so that linguistic processing of all languages happens in roughly the same processing stages. Hagoort argued that proponents of a diversity approach to linguistics think that variation is unbounded or constrained only by culture. While there is variation between individuals and between languages, it is the general types that we should be focussed on.
In contrast, Levinson suggested that we should be moving away from the picture of the modal individual with a fixed language architecture. Instead, we should embrace population thinking and recognise the variation inherent at every level of language from typology to processing and brain structures. While languages are constrained by the processing structures of the brain, these processing structures are plastic and adapt to the language and cultures in which they are embedded. Adults lose the ability to distinguish sounds that are not part of their language. Recent work on linguistic planning using eye-tracking shows that the elements of a scene that speakers attend to before starting to speak differs with the canonical word order of their language.
More fundamentally, brain structures can be affected by cultural experience, such as bilingualism or singing (indeed, the effect of bilingualism on processing shows that variation itself is a fundamental constraint). So, brains do constrain learning and processing, but are themselves subject to constraints from interaction between individuals. Brains also change over evolutionary time, adapting to a range of pressures. Therefore, there is a complex ecology of systems that co-evolve to define the constraints on language, and understanding these systems requires focussing on diversity.
The general message: Proponents of universals need to take diversity into account, and proponents of diversity need to be more specific about how diversity maps onto processing and how different domains of language co-evolve.
Wednesday, May 22, 2013
This post originally appeared in The Valve in August of 2006. I’m reprinting it here for several reasons. In the first place, the screen shots no longer show up in the Valve version. Why, I don’t know, but that loss renders the post rather opaque. Rather than fix it at The Valve, to no avail – who reads four-year-old blog posts? – it makes more sense simply to reprint it here. I will also note that it is this anime series that sensitized me to graffiti, simply by demonstrating that it was part of Japanese popular culture. It was a few weeks after originally publishing this piece that I began photographing local graffiti.
Samurai Champloo is a 26-episode anime series directed by Shinichiro Watanabe, who also did Cowboy Bebop. Like Bebop, it is episodic, following the adventures of Fuu, Mugen, and Jin, as they look for the samurai who smells like sunflowers (Fuu’s father). Both Mugen and Jin are skilled fighters; Mugen is an ex-pirate, Jin is a ronin, a samurai without a master. Fuu worked as a waitress.
Like Bebop’s, Champloo’s mise en scène is culturally eclectic. It is also anachronistic. But the mix is different. Bebop is set in the future and in space, mostly Mars, its moons, and asteroids. As the name suggests, the music nods toward jazz. “Cowboy” is slang for bounty hunter, which is the default profession of the four central characters (five if you count the dog). Champloo is set in Edo-era Japan – roughly the two and a half centuries before Perry arrived. The title theme is hip hop, and hip hop occurs in the soundtrack, imagery, and thematics.
This note is about two episodes that are relatively late in the series. Each involves an encounter between Japan and America, but only one is staged that way. One is about tagging & Andy Warhol and the sign (of the word), the other about baseball. Note that baseball was first played in Japan in 1872 and has been played more or less continuously since then.
The English title for episode 18 is “War of the Words.” What interests me about this episode is simply its emphasis on the printed word (plot summary). Running through the episode we have rival gangs fighting, not through physical violence against one another, but through tagging. The episode is framed by voice over about fashion and the street, delivered by the guy in the middle:
The image itself may or may not proclaim “Andy Warhol” to you (do look at the hair), but the reference is obvious enough in context. I have no sense of how obvious the reference would be to an audience too young to have been aware of Warhol when he was alive (he died in 1987). I have no idea about Warhol’s acceptance in Japan, but pretty much assume he was well-known there simply because he was one of the best-known art-world figures of the time.
From Tyler Cowen:
If I approach this question from a more general angle of cultural history, I find the diminution of superstars in particular areas not very surprising. As early as the 18th century, David Hume (1742, 135-137) and other writers in the Scottish tradition suggested that, in a given field, the presence of superstars eventually would diminish (Cowen 1998, 75-76). New creators would do tweaks at the margin, but once the fundamental contributions have been made superstars decline in their relative luster.
In the world of popular music I find that no creators in the last twenty-five years have attained the iconic status of the Beatles, the Rolling Stones, Bob Dylan, or Michael Jackson. At the same time, it is quite plausible to believe there are as many or more good songs on the radio today as back then. American artists seem to have peaked in enduring iconic value with Andy Warhol and Jasper Johns and Roy Lichtenstein, mostly dating from the 1960s. In technical economics, I see a peak with Paul Samuelson and Kenneth Arrow and some of the core developments in game theory. Since then there are fewer iconic figures being generated in this area of research, even though there are plenty of accomplished papers being published.
The claim is not that progress stops, but rather its most visible and most iconic manifestations in particular individuals seem to have peak periods followed by declines in any such manifestation.
I find a crude geographical metaphor useful. Only a handful of explorers can be the first, second, third, or fourth (up to some relatively small Nth) to land on The New World. At some point we have folks who come to settle The New World, at which point it's no longer really New. But that doesn't mean that there isn't a New Land Somewhere Else.
And cultural exploration isn't confined to a finite space like geographical exploration of the earth is. Is there a fundamentally new cultural territory out there? Of course there is.
Labels: cultural evolution
Tuesday, May 21, 2013
Once again, cultural evolution, and the problem of memes: What are they? Where are they? What do they do? While the general case does interest me, culture is so various that it is impossible to think about it directly. One has to think about specific cases. As details are important, I want to choose a fairly specific case, that of jazz in mid-20th-Century America. I want you to imagine that you’re in a jazz club in, say, Philadelphia, in, say, mid-October of 1952. It’s 1:30 in the morning, and the tune is Charlie Parker’s “Dexterity.” The piano player counts it off–ah one, ah two, one two three four…
But we’re getting ahead of ourselves. We need a little conceptual equipment before considering the example. It’s the conceptual equipment that’s in question. Make no mistake, the concept of memes is conceptual equipment, and it’s confused and confusing.
Roles in Cultural Selection
Genes and phenotypes play certain roles in a more or less standard account of biological evolution. The phenotype interacts with the environment, where it either succeeds or fails at reproduction, depending on the “fit” between its traits and that environment. Where the phenotype is successful at reproduction, it is the genes which are said to carry heredity from one generation to the next.
In one very widespread account genes are said to be replicators. That is to say, replication is the role they play in evolutionary change. Here’s what Peter Godfrey-Smith has to say about that (The Replicator in Retrospect, Biology and Philosophy 15 (2000): 403-423.):
In The Selfish Gene (1976), Richard Dawkins had argued that individual genes must be seen as the units of selection in evolutionary processes within sexual populations. This is primarily because the other possible candidates, notably whole organisms and groups, do not “replicate.” Organisms and groups are ephemeral, like clouds in the sky or dust storms in the desert. Only a replicator, which can figure in selective processes over many generations, can be a unit of selection.
At the same time Dawkins coined the term “meme” to name entities filling the replicator role in cultural evolution. Later on he used the term “vehicle” to designate the entity that interacts with the environment. In biological evolution it is phenotypes that are the vehicles. In cultural evolution, well, that’s a matter of some dispute. And that more general dispute–what are the roles in cultural evolution and what kinds of things occupy them?–is what interests me.
However, I don’t particularly like the term “vehicle.” As Godfrey-Smith has noted, following others, it is a gene-centric term, characterizing what entities do from the so-called “gene’s eye” perspective. I’d prefer a more neutral perspective and so will use a term coined by Richard Hull, “interactor.” Here are definitions as Godfrey-Smith gives them:
Replicator: an entity that passes on its structure largely intact in successive replications.
Interactor: an entity that interacts as a cohesive whole with its environment in such a way that this interaction causes replication to be differential.
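The two roles can be made concrete with a minimal toy simulation. Here the replicators are short strings copied largely intact, and the interactors are the entities whose success against the environment makes that copying differential. Everything numeric in it (population size, mutation rate, the fitness rule) is invented for illustration, not drawn from any of the authors discussed:

```python
import random

random.seed(0)

def step(population, fitness, mutation_rate=0.01):
    """One generation of differential replication."""
    # Interaction: fit with the environment weights who gets copied.
    weights = [fitness(r) for r in population]
    parents = random.choices(population, weights=weights, k=len(population))

    # Replication: structure passed on largely intact, with rare copying error.
    def copy(r):
        return "".join(
            c if random.random() > mutation_rate else random.choice("01")
            for c in r)

    return [copy(r) for r in parents]

# Toy environment: strings with more 1s interact more successfully.
pop = ["0000", "0011", "1111", "0101"]
for _ in range(20):
    pop = step(pop, fitness=lambda r: 1 + r.count("1"))
```

Notice how cleanly the two definitions separate in the code: `copy` does nothing but pass structure along (replicator), while the `weights` line is where interaction with the environment makes replication differential (interactor). The open question for cultural evolution is what, in a case like a jazz performance, plays each role.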
Sunday, May 19, 2013
Would it be possible, some time in the unpredictable future, for people to have direct brain-to-brain communication, perhaps using some amazing nanotechnology that would allow massive point-to-point linkage between brains without shredding them? Sounds cool, no? Alas, I don’t think it will be possible, even with that magical nanotech. Here are some old notes in which I explore the problem.
* * * * *
My basic point, of course, is that brains coupled through music-making are linked as directly and intimately as computers communicating through a network (an argument I made in Chapters 2 and 3 of Beethoven’s Anvil, and variously HERE, HERE, HERE, and HERE). And, like networked computers, networked brains are subject to constraints. In the human case the effect of those constraints is that the collective computing space can be no larger than the computing space of a single unconstrained brain. This is true no matter how many brains are so coupled, despite the fact that these coupled brains have many more computing elements (i.e. neurons) than a single brain has.
The explanatory problem, as I see it, is that we tend to think of brains as consisting of a lot of elements. Thus, an effective connection between brains should consist of an element-to-element, neuro-to-neuron, hook-up, no? Compared to that, music seems pretty diffuse, though there’s no doubt that, somehow, it works.
So, let’s take a ploy from science fiction, direct neural coupling. I’ve seen this ploy used for man-machine communication (by e.g. Samuel Delaney) and surely someone has used it for human-to-human communication (perhaps mediated by a machine hub). Let’s try to imagine how this might work.
The first problem is simply one of physical technique. Neurons are very small and very many. How do we build a connector that can hook up with 10,000,000 distinctly different neurons without destroying the brain? We use Magic, that’s what we do. Let’s just assume it’s possible: Shazzaayum! It’s done.
Given our Magic-Mega-Point-to-Point (MMPTP) coupling, how do we match the neurons in one brain to those in another? For each strand in this cable is going to run from one neuron to another. If our nervous system were like that of C. elegans (an all but microscopic worm), there would be no problem. For that nervous system is very small (302 neurons, I believe) and each neuron has a unique identity. It would thus be easy to match neurons in different individuals of C. elegans. But human brains are not like that. Individual neurons do not have individual identities. There is no way to create a one-to-one match between the neurons in one brain and those in another; two brains don’t even have the same number of neurons, much less a scheme allowing for matching identities. In this respect, neurons are like hairs, and unlike fingers and toes, where it’s easy to match big toe to big toe, index finger to index finger, and so forth.
So, that’s one problem, how to match the neurons in two brains. About all I can see to do is to match neurons on the basis of location at, say, the millimeter level of granularity. Perhaps we choose 10M or 100M neurons in the corpus callosum and just link them up. There’s another problem: How does a brain tell whether or not a given neural impulse comes from it or from the other brain? If it can’t make the distinction, how can communication take place?
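The location-based matching just described can be sketched in a few lines. The coordinates and counts here are invented toy data (a real brain has on the order of 86 billion neurons, not a few dozen); the point is structural, that nearest-location pairing between unequal, identity-free sets is inherently many-to-one:

```python
import random

random.seed(1)

def random_neurons(n):
    """Stand-in neuron positions in millimetres (invented data)."""
    return [(random.uniform(0, 10), random.uniform(0, 10), random.uniform(0, 10))
            for _ in range(n)]

def match_by_location(brain_a, brain_b):
    """Pair each neuron in brain A with the nearest neuron in brain B.
    Nothing forces this to be one-to-one: several A-neurons may land on
    the same B-neuron, and the two brains need not even have the same
    number of neurons."""
    def dist2(p, q):
        return sum((a - b) ** 2 for a, b in zip(p, q))
    return {i: min(range(len(brain_b)), key=lambda j: dist2(brain_a[i], brain_b[j]))
            for i in range(len(brain_a))}

a, b = random_neurons(40), random_neurons(35)   # unequal counts, as with real brains
pairing = match_by_location(a, b)

# With 40 neurons mapped onto 35, the pigeonhole principle guarantees
# that some B-neurons receive more than one A-neuron.
collisions = len(pairing) - len(set(pairing.values()))
```

So even before worrying about signal interpretation, the wiring diagram itself is a smear, not a correspondence.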
What, then, happens when we finally couple two people through our wonderful future-tech MMPTP? The neurons are not going to correspond in a rigorous way and they’re not going to know what’s coming from within vs. outside. In that situation I would imagine that, at best, each person would just hear a bunch of noise.
Friday, May 17, 2013
The movie business is notoriously fickle. As screenwriter William Goldman has said, "nobody knows anything" about the economics of making films. Every film is an economic adventure, and most of them don't make money no matter how hard studios and producers try. Getting name stars, directors, screenwriters, none of that guarantees success. You can open big and flop four weeks later; and you can open small and build a following week by week that finally puts you over the top 10 weeks out. But you can't predict any of this.
The fabulous success of Jaws got studios thinking about blockbusters, expensive films with jazzy effects and wide appeal that rake in megabucks. Ever since then Hollywood's been chasing the blockbuster, lining them up in the summer, and betting the farm on them. It seems to have worked. Sorta. Will the magic run out this summer?
That's what James Stewart of The New York Times is wondering. He writes:
According to Doug Creutz, the senior media and entertainment analyst for Cowen & Company: “Of the expensive action and animated movies, we’ve never had a summer where more than nine did well, and often it’s fewer. This summer you’ve got 17 blockbusters coming out between May and July, 19 if you add August,” which he said is the most crowded release slate in recent memory. “Is this going to be by far the biggest summer box office in history? Maybe, if they’re all great movies, but it’s not likely.”
Studios have been shifting their resources toward what are variously called blockbuster, event, or tent pole movies for years — the big-budget movies intended to help studios make up for their less profitable films. By and large, the strategy seems to have paid off. “We haven’t seen many tent poles blow up,” said Michael Nathanson, a media analyst and managing director at Nomura. “But this summer could be the breaking point. There may be some big write-offs on some of these films.”
The summer knows. But it's not telling, not yet. Stay tuned.
I've just discovered (via Dan Dennett) the work of Peter Godfrey-Smith, a philosopher of science with a particular interest in biology. I've been reading around in some of his essays and this one, "On the Relation Between Philosophy and Science", has a nice account of what philosophy is about. Godfrey-Smith offers three roles: 1) intellectual integration, 2) conceptual incubation, and 3) critical-thinking skills. He regards the first as fundamental and as the most important of the three. I agree.
Here's his basic statement of that role:
The best one-sentence account of what philosophy is up to was given by Wilfrid Sellars in 1963: philosophy is concerned with "how things in the broadest possible sense of the term hang together in the broadest possible sense of the term." Philosophy aims at an overall picture of what the world is like and how we fit into it.
A lot of people say they like the Sellars formulation but do not really take it on board. It expresses a view of philosophy in which the field is not self-contained, and makes extensive contact with what goes on outside it. That contact is inevitable if we want to work out how the picture of our minds we get from first-person experience relates to the picture in scientific psychology, how the biological world relates to the physical sciences, how moral judgments relate to our factual knowledge. [emphasis mine, BB] Philosophy can make contact with other fields without being swallowed up by them, though, and it makes this contact while keeping an eye on philosophy's distinctive role, which I will call an integrative role.
Thursday, May 16, 2013
According to the Wikipedia, that exact phrase never appeared in the original Star Trek, though a similar one did: "Scotty, beam us up."
Which is too bad, because I was all ready to write a brief note on why that phrase sticks in the mind in the way Picard's "Make it so, Number One" really doesn't. Here's what I was going to say.
For one thing, it's short, only four words. Picard's phrase is short as well. But it's not particularly suggestive of anything while "beam me up" evokes perhaps the most interesting bit of futuristic technology in the Star Trek world, the transporter. On the one hand, it's miraculous; but then so are replicators, warp drive, and the universal translator. But it's visually more interesting than any of them.
And so it becomes emblematic of the show and of the Star Trek universe. "Make it so" just doesn't have that power.
* * * * *
FWIW, Google's Ngram viewer first picks up "beam me up Scotty" (note that it's case sensitive) in 1982.
Labels: pop culture
Matt Yglesias, who has watched every episode and movie in the Star Trek franchise, has written an appreciative essay about the whole lot of them. He regards The Next Generation as the best series of the bunch.
...Trek has a very particular take on what it means to be human. Part of what it means, the franchise teaches us, is participating in an ongoing progressive project of building a utopian society. Even though the bulk of Trek comes from the ’90s, the franchise launched in the mid-’60s, and the now-anachronistic spirit of midcentury optimism has remained at the heart of the franchise throughout.
Fair enough. He also argues that Star Trek helped usher in the era of long-arc series TV. While discussing DS9 and Voyager he observes:
And both shows suffer for having been filmed during the awkward teenage years of television drama. Modern TV features a fairly sharp divide between shows structured around long plot arcs (Breaking Bad, Game of Thrones) and those built as a series of one-offs (CSI). But in the late ’90s, things were different. DS9, like Buffy and The X-Files, flits back and forth between a big-picture story and alien-of-the-week one-shots. This makes for disconcerting binge-watching. The sustained 10-episode narrative that concludes the series is the best run of Trek that’s ever been made. But it comes after years’ worth of television in which the grand clash between the Federation and the Dominion is regularly interrupted.
Tyler Cowen thinks Yglesias underestimates the original series, shrewdly noting:
Tuesday, May 14, 2013
The Nation has an article on "neurohumanities" which is more or less about neuro-, cognitive, and evolutionary psychology in the humanities. Here's an excerpt:
Neuroscience appears to be filling a vacuum where a single dominant mode of thought and criticism once existed. That plinth has been held in the American academy by critical theory, neo-Marxism and psychoanalysis. Alva Noë, a University of California, Berkeley, philosopher who might be called a “neuro doubter,” sees neurohumanities as a reaction to the previous postmodern moment. “The pre-eminence of neuroscience” has legitimated an “anti-theory stance” within the humanities, says Noë, the author of Out of Our Heads.
Noë argues that neurohumanities is the ultimate response to—and rejection of—critical theory, a mixture of literary theory, linguistics and anthropology that dominated the American humanities through the 1990s. Critical theory’s current decline was somewhat inevitable, as all intellectual movements erode over time. ... But as critical theory’s power—along with that of Marxism and Freudianism—fades within the humanities, neurohumanities and literary Darwinism are stepping up, ready to explain how we live, love art and read a novel (or rather, how the cortex absorbs text). And while much was gained as “the brain” replaced “individual psychology” or social class readings, much has also been lost.
Critical theory offered us the fantasy that we have no control, making a fetish of haze and ambiguity and exhibiting what Noë terms “an allergy to anything essentialist.” In neurohumanities, by contrast, we do have mastery and concrete, empirical ends, which has proved more appealing, even as (or perhaps because) it is highly reductive.
Well, yes, no, maybe.
Monday, May 13, 2013
In live action films, the actors get to interact with one another directly. When Capt. Willard talks with Col. Kurtz in Apocalypse Now, you have actors Martin Sheen and Marlon Brando in the same place at the same time acting and reacting to one another. Animation can't be done like that. But how IS it done? How do animators handle the interaction of the characters they animate?
Until I started thinking seriously about animation several years ago, such issues hadn't occurred to me. Nor had it occurred to me that a given character might be animated by different artists in the same film. Because animation is such a time-consuming process, it may not be practical for one animator to do all the scenes for a given character in a feature-length film; so different scenes are given to different animators. What about scenes where there are two or three or four characters interacting with one another? How is that done? One animator for all of them, or different animators for each one? If the latter, how is the work of all the animators coordinated?
These sorts of questions still aren't central to my own interests, but I'm now aware of them as real issues. It's through the perceptive writing of such people as Michael Barrier, a historian and critic of animation, and Michael Sporn, an animator and director, that I've become aware of these issues. In this post (HERE) Michael Sporn takes up those issues in a consideration of Disney Animation: The Illusion of Life, by Disney animators Frank Thomas and Ollie Johnston. Here's Michael Barrier on the same subject, prompted by two books, Richard Williams, The Animator's Survival Kit, and John Canemaker, Walt Disney's Nine Old Men and the Art of Animation.
Sunday, May 12, 2013
In just a few years, these and other young musicians have created a new genre of youth-driven, socially conscious music and forced it on the Egyptian soundscape.
Their music predated the political revolution that ousted President Hosni Mubarak in February 2011, and most of the musicians did not join the uprising in Tahrir Square. But the turmoil since has left Egypt’s huge youth population searching for voices that address issues they care about.
Half of Egypt’s 85 million people are under 25, and many found what they were looking for in the raucous, defiant new music known as “mahraganat,” Arabic for “festivals.” The songs’ addictive beats helped, too.
“We made music that would make people dance but would also talk about their worries,” said Alaa al-Din Abdel-Rahman, 23, better known as Alaa 50 Cent. “That way everyone would listen and hear what was on their minds.”
The music is a rowdy blend of traditional Egyptian wedding music, American hip-hop and whatever else its creators can download for free online.
The Egyptian revolution, such as it is, created a new cultural landscape in which an already existing form of music called “mahraganat” could thrive and animate a much larger audience.
The music’s swift rise from the alleys of neglected Cairo neighborhoods to car stereos and high-class weddings and even television commercials reflects the profound shifts in Egyptian society since the revolution. More people are looking for open discussion of social issues and willing to reach across class lines to find it. Like the revolution, the music came from young people who looked at their lives and did not see much to look forward to. So they made noise, spread their ideas through social media and were surprised by the results. ... Arab popular music has long been dominated by beautiful stars who croon about love and heartbreak and market themselves with music videos shot in luxurious settings that many Egyptians will never visit. Mina Girgis, an Egyptian ethnomusicologist, said that left a wide opening after the revolution for music more in touch with its audience.
By way of comparison, I note that manga existed in Japan before World War II, but it grew tremendously after the war, as though Japan's defeat changed the cultural landscape in a way that allowed this heretofore minor form of popular culture to grow and diversify to the point where it now constitutes 30% to 40% of Japan's annual print output.
Will these new grooves change Egyptian society as profoundly as African-American grooves have changed American society?
Friday, May 10, 2013
The 19th century thinkers who saw evolutionary processes in the biological world also saw them in culture. While evolutionary thinking about culture persisted well into the 20th century, it had become a minor theme, or less, in most of the humanities and social sciences by the second half of the century. Now, of course, there are concerted efforts to once again think of culture as an evolutionary phenomenon.
There is thus a fair amount of biologically inspired work on language. But there are pitfalls there. Criticisms of the recent article on "ultraconserved" words are a case in point. More generally, it is useful to know, as Martin Lewis has written, that Linguistic Phylogenies Are Not the Same as Biological Phylogenies.
Linguistic evolution is only vaguely analogous to organic evolution for a variety of reasons, but a crucial factor is the fact that vastly less sharing occurs across biological lineages. We now know that genes can jump from one species to another, but the process is relatively rare; in this realm, change generally occurs as a result of random mutations acted upon by natural selection, not from the borrowing of elements from other species. When it comes to languages, however, sharing is ubiquitous. Languages are almost always borrowing words, and sometimes they adopt grammatical properties of other languages as well. At times, two completely unrelated languages essentially merge to create a hybrid tongue. To be sure, linguists are almost always able to determine which language contributed more elements and more basic structures, and hence should count as the parent tongue. (It should be noted that the use of the terms “parent” and “daughter” in relation to languages is misleading since, unlike in the biological realm, where individual organisms are discrete, the transition from “parent” to “daughter” language is always gradual.) When it comes to creole languages, however, such determinations are not always easy. In regard to grammar, different creoles of completely different parentage are often more similar to each other than they are to any of their source languages. In some instances of mixed languages, admixtures of vocabulary, grammar, and phonology run so deep that linguists abandon the quest for unambiguous classification. Cappadocian Greek, for example, is slotted by the Wikipedia into the seemingly impossible “Greek-Turkish” language family. Does Indo-European therefore encompass this language? 
Other sources, such as the Ethnologue, place this language in the Greek branch of the Indo-European family, but Turkish influences on Cappadocian Greek are pronounced: it has certain sounds that have been borrowed from Turkish, as well as vowel harmony; it has developed agglutinative inflectional morphology and lost (some) grammatical gender distinctions; and its basic word order is SOV. And Cappadocian Greek is by no means the only example of such a thoroughly “mixed language.” In the biological realm, in contrast, such mixtures are so obviously impossible that they have generated their own nonsense genre, as exemplified by Sara Ball’s delightful flip-book, Crocguphant.
Wednesday, May 8, 2013
Still thinking about Dan Dennett's conception of memetics. He's got an article in the Encyclopedia of Evolution (Oxford 2005), "New Replicators, The" that's worth looking at.
Some bits. From the beginning:
...evolution will occur whenever and wherever three conditions are met: replication, variation (mutation), and differential fitness (competition).
In Darwin's own terms, if there is “descent [i.e., replication] with modification [variation]” and “a severe struggle for life” [competition], better-equipped descendants will prosper at the expense of their competitors. We know that a single material substrate, DNA (with its surrounding systems of gene expression and development), secures the first two conditions for life on earth; the third condition is secured by the finitude of the planet as well as more directly by uncounted environmental challenges.
The first question, then, is whether or not these conditions are met by human culture. Dennett thinks they are and so do I.
From the end, however:
Do any of these candidates for Darwinian replicator actually fulfill the three requirements in ways that permit evolutionary theory to explain phenomena not already explicable by the methods and theories of the traditional social sciences? Or does this Darwinian perspective provide only a relatively trivial unification?
We do not yet know. But are the prospects for non-triviality good enough to warrant considerable investment of conceptual time and energy? And so
We should also remind ourselves that, just as population genetics is no substitute for ecology—which investigates the complex interactions between phenotypes and environments that ultimately yield the fitness differences presupposed by genetics—no one should anticipate that a new science of memetics would overturn or replace all the existing models and explanations of cultural phenomena developed by the social sciences. It might, however, recast them in significant ways and provoke new inquiries in much the way genetics has inspired a flood of investigations in ecology.
The New York Times gives notice of a new glossy science magazine, Nautilus: Science Connected, and reflects on the boom and bust of science journalism in the last quarter of the 20th Century. Nautilus is funded by the Templeton Foundation. Its inaugural issue, fully online and open to all, is about human uniqueness. I haven't read any of the articles, but it certainly looks smashing.
Another recent entrant: Simons Science News.
From the Washington Post:
The traditional view is that words can’t survive for more than 8,000 to 9,000 years. Evolution, linguistic “weathering” and the adoption of replacements from other languages eventually drive ancient words to extinction, just like the dinosaurs of the Jurassic era.
A new study, however, suggests that’s not always true.
A team of researchers has come up with a list of two dozen “ultraconserved words” that have survived 150 centuries. It includes some predictable entries: “mother,” “not,” “what,” “to hear” and “man.” It also contains surprises: “to flow,” “ashes” and “worm.”
The existence of the long-lived words suggests there was a “proto-Eurasiatic” language that was the common ancestor to about 700 contemporary languages that are the native tongues of more than half the world’s people.
Abstract of the research article at PNAS, Ultraconserved words point to deep language ancestry across Eurasia:
The search for ever deeper relationships among the World’s languages is bedeviled by the fact that most words evolve too rapidly to preserve evidence of their ancestry beyond 5,000 to 9,000 y. On the other hand, quantitative modeling indicates that some “ultraconserved” words exist that might be used to find evidence for deep linguistic relationships beyond that time barrier. Here we use a statistical model, which takes into account the frequency with which words are used in common everyday speech, to predict the existence of a set of such highly conserved words among seven language families of Eurasia postulated to form a linguistic superfamily that evolved from a common ancestor around 15,000 y ago. We derive a dated phylogenetic tree of this proposed superfamily with a time-depth of ∼14,450 y, implying that some frequently used words have been retained in related forms since the end of the last ice age. Words used more than once per 1,000 in everyday speech were 7- to 10-times more likely to show deep ancestry on this tree. Our results suggest a remarkable fidelity in the transmission of some words and give theoretical justification to the search for features of language that might be preserved across wide spans of time and geography.
The full article is ungated.
Scientific and technical work has been producing aesthetically pleasing images for hundreds of years. The New York Times has noticed some recent cases in a brief note: "A Princeton University art contest, soliciting “images produced during the course of scientific research that have aesthetic merit,” mustered some pretty cool stuff: an oblique photograph of an architectural structure built of chocolate, a highly intimate look at cells of the fruit fly ovary, and so forth."
Princeton announcement HERE; project Facebook page HERE; an extended article HERE.
Sunday, May 5, 2013
I've been reading around in Dan Dennett's papers and found this one, The Cultural Evolution of Words and Other Thinking Tools (Cold Spring Harbor Symp Quant Biol, Vol. LXXIV, August, 2009). To be sure, I disagree with his use of the meme concept; his use is pretty standard, and Dennett, in the standard way, claims more for it than can be justified by the current state of our knowledge and theorizing. But this paper is excellent despite that problem.
As the title indicates, Dennett focuses his attention on words and does so in a way that usefully brings out their mystery, if you will, though mystery is rather low on Dennett's intellectual agenda.
What then are words? Do they even exist? This might seem to be a fatuous philosophical question, composed as it is of the very items it asks about, but it is, in fact, exactly as serious and contentious as the claim that genes do or do not really exist. Yes, of course, there are sequences of nucleotides on DNA molecules, but does the concept of a gene actually succeed (in any of its rival formulations) in finding a perspicuous rendering of the important patterns amidst all that molecular complexity? If so, there are genes; if not, then genes will in due course get thrown on the trash heap of science along with phlogiston and the ether, no matter how robust and obviously existing they seem to us today.
For what it's worth, I have it on good authority that there are languages which lack a word corresponding to our concept of word, though they generally have a word roughly corresponding to our concept of utterance (you can find this observation in, e.g., Albert Lord, The Singer of Tales). That doesn't bear directly on the point Dennett is making in those words, as lacking a word for a really existing phenomenon is common enough, but it does indicate that words have a rather diffuse or abstract character that makes it difficult to understand what they are and how they operate.
A bit later Dennett continues:
A promise or a libel or a poem is identified by the words that compose it, not by the trails of ink or bursts of sound that secure the occurrence of those words. Words themselves have physical “tokens” (composed of uttered or heard phonemes, seen in trails of ink or glass tubes of excited neon or grooves carved in marble), and so do genes, but these tokens are a relatively superficial part or aspect of these remarkable information structures, capable of being replicated, combined into elaborate semantic complexes known as sentences, and capable in turn of provoking cognitive, emotional, and behavioral responses of tremendous power and subtlety.
I particularly like his phrase in that first sentence, "the trails of ink or bursts of sound that secure the occurrence of those words." That secure the occurrence, that's nice. "Anchor" might also work, that anchor the occurrence of those words in an utterance or a written text, as though the ink or sound were a tether holding the airy nothings of meaning and syntax to the ground.
Kanon Mori, Yuki Sakura, Hinako Kuroki and Jun Amaki have been following the Nikkei 225 stock average obsessively since Prime Minister Shinzo Abe took office in December. The oldest of the foursome is Mori, but she is still only 23. The youngest is Kuroki, 16 and still in high school.
None of them are studying for a degree in economics, let alone playing the stock market. Instead, the four are members of a new idol group, Machikado Keiki Japan, and stocks play an important part in their performances.
“We base our costumes on the price of the Nikkei average of the day. For example, when the index falls below 10,000 points, we go on stage with really long skirts,” Mori explained.
The higher stocks rise, the shorter their dresses get. With the Nikkei index ending above 13,000, the four went without skirts altogether on the day of their interview with The Japan Times, instead wearing only lacy shorts.
More at The Japan Times. H/t Tyler Cowen.