Sunday, January 30, 2022

Neuroscience and control theory

Friday, January 28, 2022

Training physical neural nets with backpropagation

Abstract of the linked article:

Deep-learning models have become pervasive tools in science and engineering. However, their energy requirements now increasingly limit their scalability1. Deep-learning accelerators aim to perform deep learning energy-efficiently, usually targeting the inference phase and often by exploiting physical substrates beyond conventional electronics. Approaches so far have been unable to apply the backpropagation algorithm to train unconventional novel hardware in situ. The advantages of backpropagation have made it the de facto training method for large-scale neural networks, so this deficiency constitutes a major impediment. Here we introduce a hybrid in situ–in silico algorithm, called physics-aware training, that applies backpropagation to train controllable physical systems. Just as deep learning realizes computations with deep neural networks made from layers of mathematical functions, our approach allows us to train deep physical neural networks made from layers of controllable physical systems, even when the physical layers lack any mathematical isomorphism to conventional artificial neural network layers. To demonstrate the universality of our approach, we train diverse physical neural networks based on optics, mechanics and electronics to experimentally perform audio and image classification tasks. Physics-aware training combines the scalability of backpropagation with the automatic mitigation of imperfections and noise achievable with in situ algorithms. Physical neural networks have the potential to perform machine learning faster and more energy-efficiently than conventional electronic processors and, more broadly, can endow physical systems with automatically designed physical functionalities, for example, for robotics materials and smart sensors.

Follow the spheres [Chinese lion]

Neurons learn by predicting future activity

Thursday, January 27, 2022

Training GPT-3 to be more responsive

Update on Color Terms: Nature or Nurture?

More on color terms [1.27.22]:
* * * * *
Thinking about this again so I'm bumping this to the top. It's from December 2012.

Color vision, from genetics through neuropsychology to color terms, is one of the most intensely investigated topics in cognitive science. The subject is interesting for two reasons:
  1. as psychological phenomena go, it's relatively simple, and accessible to investigation through a variety of methods, and
  2. in particular, it can be studied cross-culturally and thus shed light on the nature-nurture question.
Mark Changizi has recently argued that color vision evolved for the purpose of allowing us to obtain clues about a person's health and state of mind from changes in skin color, blushing, bruising, and the like. He's stated this idea at some length in the first chapter of The Vision Revolution (BenBella Books, 2009, pp. 5-48). Folks have been discussing this idea at some length over at Crooked Timber.

In thinking through that discussion I formulated a question and sent it to Changizi: Lots of languages have rather impoverished systems of color terms. Would folks speaking a language that lacked a term for green thereby have more fine-grained perception of greens? He didn't have an answer but indicated that people are working on that kind of issue. He sent me reprints of two papers that are indeed relevant.

Color terminology seems subject to constraints that are universal but, at the same time, differences in color naming across cultures do seem to cause differences in color perception. How do you like them apples? Both universal and different at the same time.

* * * * *

Paul Kay and Terry Regier. Language, thought and color: recent developments. TRENDS in Cognitive Sciences Vol.10 No.2 February 2006, pp. 51-54

Here's how Kay and Regier state matters as they existed, say, a quarter of a century ago:
Color naming varies across languages; however, it has long been held that this variation is constrained. Berlin and Kay [1] found that color categories in 20 languages were organized around universal ‘focal colors’ – those colors corresponding principally to the best examples of English ‘black’, ‘white’, ‘red’, ‘yellow’, ‘green’ and ‘blue’. Moreover, a classic set of studies by Eleanor Rosch found that these focal colors were also remembered more accurately than other colors, across speakers of languages with different color naming systems (e.g. [2]). Focal colors seemed to constitute a universal cognitive basis for both color language and color memory.
Research conducted in the last decade or so has called those conclusions into question. Kay and Regier present and discuss this work and offer this summary:
The debate over color naming and cognition can be clarified by discarding the traditional ‘universals versus relativity’ framing, which collapses important distinctions. There are universal constraints on color naming, but at the same time, differences in color naming across languages cause differences in color cognition and/or perception. The source of the universal constraints is not firmly established. However, it appears that it can be said that nature proposes and nurture disposes. Finally, ‘categorical perception’ of color might well be perception sensu stricto, but the jury is still out.
The key proposition is that "differences in color naming across languages cause differences in color cognition and/or perception."

* * * * *

Paul Kay and Terry Regier, Resolving the question of color naming universals, PNAS, vol. 100, no. 15, July 22, 2003, pp. 9085-9089.

The existence of cross-linguistic universals in color naming is currently contested. Early empirical studies, based principally on languages of industrialized societies, suggested that all languages may draw on a universally shared repertoire of color categories. Recent work, in contrast, based on languages from nonindustrialized societies, has suggested that color categories may not be universal. No comprehensive objective tests have yet been conducted to resolve this issue. We conduct such tests on color naming data from languages of both industrialized and nonindustrialized societies and show that strong universal tendencies in color naming exist across both sorts of language.
From the methodology discussion:
The central empirical focus of our study was the color naming data of the World Color Survey (WCS). The WCS was undertaken in response to the above-mentioned shortcomings of the BK [Berlin and Kay] data (1): it has collected color naming data in situ from 110 unwritten languages spoken in small-scale, nonindustrialized societies, from an average of 24 native speakers per language (mode: 25 speakers), insofar as possible monolinguals. Speakers were asked to name each of 330 color chips produced by the Munsell Color Company (New Windsor, NY), representing 40 gradations of hue at eight levels of value (lightness) and maximal available chroma (saturation), plus 10 neutral (black-gray-white) chips at 10 levels of value. Chips were presented in a fixed random order for naming. The array of all color chips is shown in Fig. 1. (The actual stimulus colors may not be faithfully represented there.) In addition, each speaker was asked to indicate the best example(s) of each of his or her basic color terms. The original BK study used a color array that was nearly identical to this, except that it lacked the lightest neutral chip. The languages investigated in the WCS and BK are listed in Tables 1 and 2.
The concluding paragraph:
The application of statistical tests to the color naming data of the WCS has established three points: (i) there are clear cross-linguistic statistical tendencies for named color categories to cluster at certain privileged points in perceptual color space; (ii) these privileged points are similar for the unwritten languages of nonindustrialized communities and the written languages of industrialized societies; and (iii) these privileged points tend to lie near, although not always at, those colors named red, yellow, green, blue, purple, brown, orange, pink, black, white, and gray in English.

Wednesday, January 26, 2022

Back in the day...what choices for an evening out in NYC

Saturday, January 22, 2022

The "two-sleep" system (two periods of sleep at night, separated by wakefulness)

Zaria Gorvett, The forgotten medieval habit of 'two sleeps', BBC Future, Jan 9, 2022. Much of the article is apparently based on Roger Ekirch, At Day's Close: A History of Nighttime.

In the 17th Century, a night of sleep went something like this.

From as early as 21:00 to 23:00, those fortunate enough to afford them would begin flopping onto mattresses stuffed with straw or rags – alternatively it might have contained feathers, if they were wealthy – ready to sleep for a couple of hours. (At the bottom of the social ladder, people would have to make do with nestling down on a scattering of heather or, worse, a bare earth floor – possibly even without a blanket.)

At the time, most people slept communally, and often found themselves snuggled up with a cosy assortment of bedbugs, fleas, lice, family members, friends, servants and – if they were travelling – total strangers.

To minimise any awkwardness, sleep involved a number of strict social conventions, such as avoiding physical contact or too much fidgeting, and there were designated sleeping positions. For example, female children would typically lie at one side of the bed, with the oldest nearest the wall, followed by the mother and father, then male children – again arranged by age – then non-family members.

A couple of hours later, people would begin rousing from this initial slumber. The night-time wakefulness usually lasted from around 23:00 to about 01:00, depending on what time they went to bed. It was not generally caused by noise or other disturbances in the night – and neither was it initiated by any kind of alarm (these were only invented in 1787, by an American man who – somewhat ironically – needed to wake up on time to sell clocks). Instead, the waking happened entirely naturally, just as it does in the morning.

The period of wakefulness that followed was known as "the watch" – and it was a surprisingly useful window in which to get things done. "[The records] describe how people did just about anything and everything after they awakened from their first sleep," says Ekirch.

Under the weak glow of the Moon, stars, and oil lamps or "rush lights" – a kind of candle for ordinary households, made from the waxed stems of rushes – people would tend to ordinary tasks, such as adding wood to the fire, taking remedies, or going to urinate (often into the fire itself).

For peasants, waking up meant getting back down to more serious work – whether this involved venturing out to check on farm animals or carrying out household chores, such as patching cloth, combing wool or peeling the rushes to be burned. One servant Ekirch came across even brewed a batch of beer for her Westmorland employer one night, between midnight and 02:00. Naturally, criminals took the opportunity to skulk around and make trouble – like the murderer in Yorkshire.

But the watch was also a time for religion.

For Christians, there were elaborate prayers to be completed, with specific ones prescribed for this exact parcel of time. One father called it the most "profitable" hour, when – after digesting your dinner and casting off the labours of the world – "no one will look for you except for God".

Those of a philosophical disposition, meanwhile, might use the watch as a peaceful moment to ruminate on life and ponder new ideas. In the late 18th Century, a London tradesman even invented a special device for remembering all your most searing nightly insights – a "nocturnal remembrancer", which consisted of an enclosed pad of parchment with a horizontal opening that could be used as a writing guide.

But most of all, the watch was useful for socialising – and for sex.

As Ekirch explains in his book, At Day's Close: A History of Nighttime, people would often just stay in bed and chat. And during those strange twilight hours, bedfellows could share a level of informality and casual conversation that was hard to achieve during the day.

For husbands and wives who managed to navigate the logistics of sharing a bed with others, it was also a convenient interval for physical intimacy – if they'd had a long day of manual labour, the first sleep took the edge off their exhaustion and the period afterwards was thought to be an excellent time to conceive copious numbers of children.

Once people had been awake for a couple of hours, they'd usually head back to bed. This next step was considered a "morning" sleep and might last until dawn, or later. Just as today, when people finally woke up for good depended on what time they went to bed.

Ekirch also references biphasic sleep in the classical era.

Biphasic sleep is common among animals as well.

"There are broad swaths of variability among primates, in terms of how they distribute their activity throughout the 24-hour period," says David Samson, director of the sleep and human evolution laboratory at the University of Toronto Mississauga, Canada. And if double-sleeping is natural for some lemurs, he wondered: might it be the way we evolved to sleep too?

The move away from biphasic sleep happened during the Industrial Revolution:

"Artificial illumination became more prevalent, and more powerful – first there was gas [lighting], which was introduced for the first time ever in London," says Ekirch, "and then, of course, electric lighting toward the end of the century. And in addition to altering people's circadian rhythms, artificial illumination also naturally allowed people to stay up later."

However, though people weren't going to bed at 21:00 anymore, they still had to wake up at the same time in the morning – so their rest was truncated. Ekirch believes that this made their sleep deeper, because it was compressed.

As well as altering the population's circadian rhythms, the artificial lighting lengthened the first sleep, and shortened the second. "And I was able to trace [this], almost decade by decade, over the course of the 19th Century," says Ekirch.

I mention multiphasic sleep in other posts.

Thursday, January 20, 2022

Winter in the Arches, a close-up

Prospective Learning: Back to the Future

Abstract for the linked article:

Research on both natural intelligence (NI) and artificial intelligence (AI) generally assumes that the future resembles the past: intelligent agents or systems (what we call ‘intelligence’) observe and act on the world, then use this experience to act on future experiences of the same kind. We call this ‘retrospective learning’. For example, an intelligence may see a set of pictures of objects, along with their names, and learn to name them. A retrospective learning intelligence would merely be able to name more pictures of the same objects. We argue that this is not what true intelligence is about. In many real world problems, both NIs and AIs will have to learn for an uncertain future. Both must update their internal models to be useful for future tasks, such as naming fundamentally new objects and using these objects effectively in a new context or to achieve previously unencountered goals. This ability to learn for the future we call ‘prospective learning’. We articulate four relevant factors that jointly define prospective learning. Continual learning enables intelligences to remember those aspects of the past which it believes will be most useful in the future. Prospective constraints (including biases and priors) facilitate the intelligence finding general solutions that will be applicable to future problems. Curiosity motivates taking actions that inform future decision making, including in previously unmet situations. Causal estimation enables learning the structure of relations that guide choosing actions for specific outcomes, even when the specific action-outcome contingencies have never been observed before. We argue that a paradigm shift from retrospective to prospective learning will enable the communities that study intelligence to unite and overcome existing bottlenecks to more effectively explain, augment, and engineer intelligences.

Public representation of the humanities as revealed in a corpus of 147K recent news articles

Abstract of the linked article:

Academic defenses of the humanities often make two assumptions: first, that the overwhelming public perception of the humanities is one of crisis, and second, that our understanding of what the humanities mean is best traced through a lineage of famous reference points, from Matthew Arnold to the Harvard Redbook. We challenge these assumptions by reconsidering the humanities from the perspective of a corpus of over 147,000 relatively recent national and campus newspaper articles. Building from the work of the WhatEvery1Says project (WE1S), we employ computational methods to analyze how the humanities resonate in the daily language of communities, campuses, and cities across the US. We compare humanities discourse to science discourse, exploring the distinct ways that each type of discourse communicates research, situates itself institutionally, and discusses its value. Doing so shifts our understanding of both terms in the phrase “public humanities.” We turn from the sweeping and singular conception of “the public” often invoked by calls for a more public humanities to the multiple overlapping publics instantiated through the journalistic discourse we examine. And “the humanities” becomes not only the concept named by articles explicitly “about” the humanities, but also the accreted meaning of wide-ranging mentions of the term in building names, job titles, and announcements. We argue that such seemingly inconsequential uses of the term index diffuse yet vital connections between individuals, communities, and institutions including, but not limited to, colleges and universities. Ultimately, we aim to show that a robust understanding of how humanities discourse already interacts with and conceives of the publics it addresses should play a crucial role in informing ongoing and future public humanities efforts.

Wednesday, January 19, 2022

Topics in music change: Old music strangles new, new grows over old, and a bunch of pianists

Two rather general sets of remarks, one set prompted by Ted Gioia, the other by Tyler Cowen, followed by a bunch of specifics about classical music and jazz piano.

Early in the 21st Century old music seems to be pushing out new

Ted Gioia has an interesting piece, Is Old Music Killing New Music? (1.19.22). He opens:

I had a hunch that old songs were taking over music streaming platforms—but even I was shocked when I saw the most recent numbers. According to MRC Data, old songs now represent 70% of the US music market.

Those who make a living from new music—especially that endangered species known as the working musician — have to look on these figures with fear and trembling.

But the news gets worse.

The new music market is actually shrinking. All the growth in the market is coming from old songs.

Just consider these facts: the 200 most popular tracks now account for less than 5% of total streams. It was twice that rate just three years ago. And the mix of songs actually purchased by consumers is even more tilted to older music—the current list of most downloaded tracks on iTunes is filled with the names of bands from the last century, such as Creedence Clearwater and The Police.

And so on.

After offering a fair bit of discussion he considers and rejects the idea “that this decline in music is simply the result of lousy new songs. Music used to be better, or so they say.” Rather, “I listen to 2-3 hours of new music every day, and I know that there are plenty of outstanding young musicians out there. The problem isn’t that they don’t exist, but that the music industry has lost its ability to discover and nurture their talents.” He goes on to note:

In fact, nothing is less interesting to music executives than a completely radical new kind of music. [...] Anything that genuinely breaks the mold is excluded from consideration almost as a rule. That’s actually how the current system has been designed to work.

Even the music genres famous for shaking up the world—rock or jazz or hip-hop—face this same deadening industry mindset. I love jazz, but many of the radio stations focused on that genre play songs that sound almost the same as what they featured ten or twenty years ago. In many instances, they actually are the same songs.

He goes on to say a bit more and ends with the observation:

New music always arises in the least expected place, and when the power brokers aren’t even paying attention. And it will happen again just like that. It certainly needs to. Because the decision-makers controlling our music institutions have lost the thread. We’re lucky that the music is too powerful for them to kill.

OK, so change has to happen from the bottom up.

Set that aside. 

* * * * *

Rick Beato discusses Gioia's thesis.

What happened to classical music?

A couple of weeks ago Tyler Cowen asked “Why has classical music declined?” He posed it in response to a request that one of his readers, Rahul, had made on an earlier post (quoting Rahul’s remarks):

In general perception, why are there no achievements in classical music that rival a Mozart, Bach, Beethoven etc. that were created in say the last 50 years?

Is it an exhaustion of what's possible? Are all great motifs already discovered?

Or will we in another 50 or 100 years admire a 1900's composer at the same level as a Mozart or Beethoven?

Or was it something unique in that era (say 1800's) which was conducive to the discovery of great compositions? Patronage? Lack of distraction?

Cowen offers six numbered observations of his own, leading to a long discussion.

Here’s Cowen’s first comment:

The advent of musical recording favored musical forms that allow for the direct communication of personality. Mozart is mediated by sheet music, but the Rolling Stones are on record and the radio and now streaming. You actually get “Mick Jagger,” and most listeners prefer this to a bunch of quarter notes. So a lot of energy left the forms of music that are communicated through more abstract means, such as musical notation, and leapt into personality-specific musics.

From Mozart to the Rolling Stones, that’s quite a lot of musical territory to cover. Yikes!

* * * * *

I’ve been haunted by that conversation. While the quality of the responses is all over the place – no surprise there – what’s interesting is how many of them there are, 210 at the moment. The subject matters to a lot of people.

So I’ve been thinking about it and making notes, and have lots of thoughts.

Scattered thoughts. 

One of those thoughts is that this decline of classical music, if you want to call it that, has been followed by the ascendance of American music. I’d hesitate to go so far as to say that classical music was ‘killed’ by American pop, but there you have it in Chuck Berry, "Roll Over Beethoven" (1956).

The idea is implicit in the juxtaposition of Mozart and The Rolling Stones in Cowen’s first comment. To be sure, the Stones are British, but they perform in a genre that arose in America, rock and roll. A lot of those “personality-specific” musics came out of America.

In the case of rock and roll, we can trace it back through jump bands and swing combos to early jazz and blues, which in turn reach back into the 19th century, where minstrelsy emerged as a major form of mass entertainment. The first quarter of the 20th century saw the migration of Blacks out of the South to the North, Midwest, and West. At the same time recording and radio allowed music to spread beyond the geographical locus of musical performers. And Europe was engulfed in the First World War.

That’s when “creators struck out in new directions” en masse. And so you had jazz/swing/blues vs. the long-hair and high-brow classics, a conflict that showed up, among other places, in films and cartoons (where it lingered for a while, e.g. Bugs Bunny in “Long-Haired Hare”, 1949). We had Gershwin and Broadway and Armstrong, Crosby, and Sinatra figuring out how to adapt singing style to microphones and amplification.

What jazz pianists learned from the classical tradition

One thing that came up in the discussion of Tyler’s comments is the fact that, once movies acquired sound, the compositional strategies and techniques developed in 18th and (mostly) 19th century classical music showed up on the sound tracks of those movies (John Williams is one composer mentioned by name).

More generally, however, those techniques became the common inheritance of any musicians who would listen and learn. Consider the jazz musicians listed in a piece by Ethan Iverson, Theory and European Classical Music (NEC Missive #2).

Red, green, and white (leaves and flowers)

It seems that AI has solved poker

Keith Romer, How A.I. Conquered Poker, NYTimes Magazine, 1.18.22.

Von Neumann:

Using his own simplified version of the game, in which two players were randomly “dealt” secret numbers and then asked to make bets of a predetermined size on whose number was higher, von Neumann derived the basis for an optimal strategy. Players should bet large both with their very best hands and, as bluffs, with some definable percentage of their very worst hands. (The percentage changed depending on the size of the bet relative to the size of the pot.) Von Neumann was able to demonstrate that by bluffing and calling at mathematically precise frequencies, players would do no worse than break even in the long run, even if they provided their opponents with an exact description of their strategy. And, if their opponents deployed any strategy against them other than the perfect one von Neumann had described, those opponents were guaranteed to lose, given a large enough sample.
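Von Neumann's toy game is simple enough to simulate directly. Here is a minimal Monte Carlo sketch of its structure; the threshold values are illustrative placeholders, not the optimal frequencies von Neumann derived. Each player draws a secret number, player 1 bets with his best hands and bluffs with a slice of his worst, and player 2 calls above a fixed threshold.

```python
import random

def simulate(hands=100_000, pot=1.0, bet=1.0,
             value_threshold=0.7, bluff_threshold=0.1,
             call_threshold=0.5, seed=0):
    """Average profit per hand for player 1 in a von Neumann-style toy
    poker game. Both players ante half the pot and draw a secret number
    in [0, 1); player 1 checks or bets a fixed amount, then player 2
    folds or calls. The higher number wins any showdown."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(hands):
        x, y = rng.random(), rng.random()
        p1_bets = x >= value_threshold or x < bluff_threshold
        if not p1_bets:                      # check: showdown for the antes
            total += pot / 2 if x > y else -pot / 2
        elif y < call_threshold:             # player 2 folds to the bet
            total += pot / 2
        else:                                # player 2 calls: bigger showdown
            total += (pot / 2 + bet) if x > y else -(pot / 2 + bet)
    return total / hands

avg = simulate()
```

Pitting different threshold triples against one another is a quick way to see the equilibrium logic the passage describes: a strategy that never bluffs can be exploited by a caller who folds everything but premium hands.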

The early days:

Unlike in chess or backgammon, in which both players’ moves are clearly legible on the board, in poker a computer has to interpret its opponents’ bets despite never being certain what cards they hold. Neil Burch, a computer scientist who spent nearly two decades working on poker as a graduate student and researcher at Alberta before joining an artificial intelligence company called DeepMind, characterizes the team’s early attempts as pretty unsuccessful. “What we found was if you put a knowledgeable poker player in front of the computer and let them poke at it,” he says, the program got “crushed, absolutely smashed.”

Partly this was just a function of the difficulty of modeling all the decisions involved in playing a hand of poker. Game theorists use a diagram of a branching tree to represent the different ways a game can play out. [...] For even a simplified version of Texas Hold ’em, played “heads up” (i.e., between just two players) and with bets fixed at a predetermined size, a full game tree contains 316,000,000,000,000,000 branches. The tree for no-limit hold ’em, in which players can bet any amount, has even more than that. “It really does get truly enormous,” Burch says. “Like, larger than the number of atoms in the universe.”


At first, the Alberta group’s approach was to try to shrink the game to a more manageable scale — crudely bucketing hands together that were more or less alike, treating a pair of nines and a pair of tens, say, as if they were identical. But as the field of artificial intelligence grew more robust, and as the team’s algorithms became better tuned to the intricacies of poker, its programs began to improve. Crucial to this development was an algorithm called counterfactual regret minimization. Computer scientists tasked their machines with identifying poker’s optimal strategy by having the programs play against themselves billions of times and take note of which decisions in the game tree had been least profitable (the “regrets,” which the A.I. would learn to minimize in future iterations by making other, better choices). In 2015, the Alberta team announced its success by publishing an article in Science titled “Heads-Up Limit Hold’em Poker Is Solved.” [...]
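The core update inside counterfactual regret minimization is "regret matching": after each playout, credit every action with how much better it would have done than the action actually taken, then play future actions in proportion to accumulated positive regret. Poker's game tree is far too large for a blog-post sketch, so here is the same update applied to rock-paper-scissors self-play — a standard toy demonstration, not the Alberta team's code. The average strategy drifts toward the game's mixed equilibrium of one third each.

```python
import random

ACTIONS = 3  # 0 = rock, 1 = paper, 2 = scissors

def payoff(a, b):
    """Payoff to the first player: +1 win, 0 tie, -1 loss."""
    if a == b:
        return 0
    return 1 if (a - b) % 3 == 1 else -1

def regret_matching(regrets):
    """Play in proportion to positive accumulated regret;
    fall back to uniform when no regret is positive."""
    pos = [max(r, 0.0) for r in regrets]
    total = sum(pos)
    return [p / total for p in pos] if total > 0 else [1.0 / ACTIONS] * ACTIONS

def train(iterations=100_000, seed=0):
    rng = random.Random(seed)
    regrets = [0.0] * ACTIONS
    strategy_sum = [0.0] * ACTIONS
    for _ in range(iterations):
        strategy = regret_matching(regrets)
        me = rng.choices(range(ACTIONS), weights=strategy)[0]
        opp = rng.choices(range(ACTIONS), weights=strategy)[0]
        # Regret: how much better each alternative would have done.
        for a in range(ACTIONS):
            regrets[a] += payoff(a, opp) - payoff(me, opp)
        for a in range(ACTIONS):
            strategy_sum[a] += strategy[a]
    total = sum(strategy_sum)
    return [s / total for s in strategy_sum]

average_strategy = train()
```

The same bookkeeping, applied at every decision node of a game tree over billions of self-play iterations, is what let the Alberta programs converge on near-optimal poker.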

It quickly became clear that academics were not the only ones interested in computers’ ability to discover optimal strategy. One former member of the Alberta team, who asked me not to name him, citing confidentiality agreements with the software company that currently employs him, told me that he had been paid hundreds of thousands of dollars to help poker players develop software that would identify perfect play and to consult with programmers building bots that would be capable of defeating humans in online games. Players unable to front that kind of money didn’t have to wait long before gaining more affordable access to A.I.-based strategies. The same year that Science published the limit hold ’em article, a Polish computer programmer and former online poker player named Piotrek Lopusiewicz began selling the first version of his application PioSOLVER. For $249, players could download a program that approximated the solutions for the far more complicated no-limit version of the game. As of 2015, a practical actualization of John von Neumann’s mathematical proof was available to anyone with a powerful enough personal computer.


Koon is quick to point out that even with access to the solvers’ perfect strategy, poker remains an incredibly difficult game to play well. The emotional swings that come from winning or losing giant pots and the fatigue of 12-hour sessions remain the same challenges as always, but now top players have to put in significant work away from the tables to succeed. Like most top pros, Koon spends a good part of each week studying different situations that might arise, trying to understand the logic behind the programs’ choices. “Solvers can’t tell you why they do what they do — they just do it,” he says. “So now it’s on the poker player to figure out why.”

The best players are able to reverse-engineer the A.I.’s strategy and create heuristics that apply to hands and situations similar to the one they’re studying. Even so, they are working with immense amounts of information. When I suggested to Koon that it was like endlessly rereading a 10,000-page book in order to keep as much of it in his head as possible, he immediately corrected me: “100,000-page book. The game is so damn hard.”

And so:

Not every player I spoke to is happy about the way A.I.-based approaches have changed the poker landscape. For one thing, while the tactics employed in most lower-stakes games today look pretty similar to those in use before the advent of solvers, higher-stakes competition has become much tougher. As optimal strategy has become more widely understood, the advantage in skill the very best players once held over the merely quite good players has narrowed considerably. But for Doug Polk, who largely retired from poker in 2017 after winning tens of millions of dollars, the change solvers have wrought is more existential. “I feel like it kind of killed the soul of the game,” Polk says, changing poker “from who can be the most creative problem-solver to who can memorize the most stuff and apply it.”

Piotrek Lopusiewicz, the programmer behind PioSOLVER, counters by arguing that the new generation of A.I. tools is merely a continuation of a longer pattern of technological innovation in poker. Before the advent of solvers, top online players like Polk used software to collect data about their opponents’ past play and analyze it for potential weaknesses. “So now someone brought a bigger firearm to the arms race,” Lopusiewicz says, “and suddenly those guys who weren’t in a position to profit were like: ‘Oh, yeah, but we don’t really mean that arms race. We just want our tools, not the better tools.’”

Tuesday, January 18, 2022

Three Against Two: Splitting the Mind [augmented with new material]

I'm bumping this to the top of the queue with the addition of a tweet linking to a study of hemispheric timing (see the last two sections below, Two Hemispheres and Splitting the Mind?). You can download a slightly different version of the original post at THIS LINK.

* * * * *

Abstract from the article: Anja Pflug, Florian Gompf, et al., Differential contributions of the two human cerebral hemispheres to action timing, eLife 2019;8:e48404 DOI: 10.7554/eLife.48404:
Abstract: Rhythmic actions benefit from synchronization with external events. Auditory-paced finger tapping studies indicate the two cerebral hemispheres preferentially control different rhythms. It is unclear whether left-lateralized processing of faster rhythms and right-lateralized processing of slower rhythms bases upon hemispheric timing differences that arise in the motor or sensory system or whether asymmetry results from lateralized sensorimotor interactions. We measured fMRI and MEG during symmetric finger tapping, in which fast tapping was defined as auditory-motor synchronization at 2.5 Hz. Slow tapping corresponded to tapping to every fourth auditory beat (0.625 Hz). We demonstrate that the left auditory cortex preferentially represents the relative fast rhythm in an amplitude modulation of low beta oscillations while the right auditory cortex additionally represents the internally generated slower rhythm. We show coupling of auditory-motor beta oscillations supports building a metric structure. Our findings reveal a strong contribution of sensory cortices to hemispheric specialization in action control.
* * * * *

Three against two is one of the most important rhythm ‘cells’ in all of music. What do I mean, three against two? You play three evenly spaced beats in one ‘stream’ in the same period of time you play two evenly spaced beats in another ‘stream.’ It sounds simple enough, but the problem is that three and two have no common divisor, making the ‘evenly spaced’ part of the formula a bit tricky. The two patterns coincide on the first beat, but the second and third beats of the three-beat stream happen at different times from the second beat of the two-beat stream. And if you think that’s a lot of verbiage for something that ought to be simple, then you’re beginning to get the idea.

In some cultures, including many in Africa, young children are taught 3 against 2 at a very young age. For them it IS easy. That’s not the case, however, in European-derived musical traditions. Three against two is not part of basic toddler pedagogy and, as a consequence, learning to do it is a bit more difficult when, and if, the time comes – for some, it never comes. Thus, within the context of the Western classical tradition, three against two is considered moderately difficult rather than fundamental. Such rhythms are exceptional in classical music, but they are common enough that any moderately skilled keyboard player must know how to execute them.

Go Slow, Count it in Six

In playing percussion or a keyboard instrument it is easy to alternate notes from one hand to the other and it is easy to play notes simultaneously with both hands. It is also easy to play two or four or eight notes with one hand against one note with the other hand or, for that matter, three or six notes with one hand against one note with the other. And, of course, it is easy to repeat such figures time after time after time. It is, however, distinctly more difficult to play three notes with one hand against two with the other. The problem is that, once the first note is struck by both hands, none of the successive notes in the pattern line up nor are they equidistant from one another. The patterns are incommensurate, as we see in the following diagram:

Three Against Two
Let’s consider the advice Josef Hofmann offered on this problem. Hofmann was a piano virtuoso whose career spanned the late nineteenth and early twentieth centuries. In 1909 he published a book of Piano Questions, which consisted of “direct answers to two hundred fifty questions asked by piano students” which he had originally published in the Ladies’ Home Journal over a period of two years. One of the questions he answered was “How must I execute triplets played against two-eighths?” — a typical 2-against-3 pattern. Here is the first part of Hofmann’s answer:
In a slow tempo it may serve you to think of the second eighth-note of the triplet as being subdivided into two sixteenths. After both hands have played the first note of their respective groups simultaneously, the place of the aforesaid sixteenth note is to be filled by the second note of the couplet.
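The subdivision advice above works because six is the least common multiple of three and two: on a grid of six subdivisions per cycle, both streams land on whole ticks, so the whole pattern can be counted evenly. A minimal sketch of the arithmetic (the function name and grid representation are my own, for illustration):

```python
import math

def polyrhythm_grid(a: int, b: int):
    """Place two evenly spaced streams (a notes vs. b notes per cycle)
    on a common grid of lcm(a, b) subdivisions per cycle."""
    ticks = math.lcm(a, b)                        # Python 3.9+
    stream_a = [i * (ticks // a) for i in range(a)]
    stream_b = [i * (ticks // b) for i in range(b)]
    return ticks, stream_a, stream_b

# Three against two on a six-tick grid:
ticks, triplet, couplet = polyrhythm_grid(3, 2)
print(ticks, triplet, couplet)                    # 6 [0, 2, 4] [0, 3]
# Only tick 0 coincides; the couplet's second note (tick 3) falls
# exactly halfway between the triplet's second and third notes.
```

Counted in six, the composite pattern is: both-rest-right-left-right-rest, which is exactly the slow-tempo recipe Hofmann describes.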

Spike Lee and Miles Davis

Sunday, January 16, 2022

Dick Macksey's library made the NYTimes today (1.16.22)

Photo: Will Kirk/Johns Hopkins University

Dick Macksey’s library made the NYTimes today. Macksey died in July of 2019, but his library, or rather a simulacrum of it, lives on. That particular photo appeared on the web some time before Macksey’s death, two, three, I don’t know, maybe four years before. I don’t know when I first saw it, but I recognized it immediately. That library is legendary in the circle of Macksey’s colleagues, students and former students, and who knows who else. We’ve all seen it and marveled.

I took my first course with Dick Macksey in the Spring of 1966. It was on the autobiographical novel and when we read Tristram Shandy, he brought in a first edition for the class to see, nine small (quarto-sized) volumes. I likely visited his house that semester with the class. I remember seeing the three volumes of The Principia Mathematica on a chair in the entryway. But he’d not yet converted the garage into a library – that was a year later. After that I visited the library many times, sometimes with a class, sometimes after one of the films shown on Wednesday and Friday evenings in Levering Hall. Macksey would host discussions of the films, and further viewings – he had a screen and a 16mm projector in the library. The library seemed full at the time, but not nearly so full as it is in that photo.

If one looks, one can easily find photos of libraries on the web, public and institutional libraries, but home libraries as well. Many of them are magnificent, grander and more luxurious than Dick’s. They show well. But they don’t look used.

Dick’s library was used, of course, by Dick himself, but by others as well. That’s how the library looks, well used. And that despite the fact that there are no people in it. It appears inhabited, alive. It’s those lights and their relationship to the books and shelves. That photo reveals the library as the living embodiment of the inquiring mind.

I wonder how long it will keep spinning through the web?

Wednesday, January 12, 2022

Leaves and light

"Don't Look Up" is a hit [Media Notes 65d]

Cara Buckley, “Don’t Just Watch: Team Behind ‘Don’t Look Up’ Urges Climate Action,” NYTimes, 1.11.22:

After the film premiered in December, climate scientists took to social media and penned opinion essays, saying they felt seen at last. Neil deGrasse Tyson tweeted that it seemed like a documentary. Several admirers likened the film to “A Modest Proposal,” the 18th-century satirical essay by Jonathan Swift.

Naysayers, meanwhile, said the comet allegory was lost on those who took it literally, and questioned why Mr. McKay hadn’t been more straightforward about global warming. Writing in The New Yorker, Richard Brody said if scientists didn’t like what film critics had to say about science, “the scientists should stop meddling with art.”

Either way, at a time when leaders are failing to take the necessary measures to tackle the planetary emergency, and the volume and ferocity of so-called “natural” disasters reach ever graver peaks, there is little question that the movie has struck a pretty big nerve. According to Netflix, which self-reports its own figures and was the studio behind the film and its distributor, the movie is one of its most popular films ever, amassing an unprecedented 152 million hours viewed in one week.

“The goal of the movie was to raise awareness about the terrifying urgency of the climate crisis, and in that, it succeeded spectacularly,” said Genevieve Guenther, the founder and director of End Climate Silence, an organization that promotes media coverage of climate change.

Tuesday, January 11, 2022

"Powers of Ten" Updated

Powers of Ten, the short 1968 documentary by Ray and Charles Eames, has been updated to reflect current knowledge:

Note, however, that while the original version zooms in to the micro-scale world after having zoomed out to the macro-scale, this new version does only the zoom-out.

Here's a post that presents the original film along with a bit of commentary.

H/t 3QD.

Monday, January 10, 2022

Beethoven in Memphis [on the limits of civilization and sexuality in music]

I'm bumping this to the top of the queue in part on general principle and in part in response to Tyler Cowen's strategic juxtaposition of Mozart and Mick Jagger in this recent post on the decline of classical music (see also this reply to Cowen on Lisztomania).

* * * * *

In 1838 Ole Bull, the Norwegian violin virtuoso, gave the first classical concert ever heard in Memphis, Tennessee. I don’t know what he played on that occasion, but that’s beside the point. What could he have played? That was the year that Felix Mendelssohn thought of writing a concerto in E minor—which would come to be known simply as the Mendelssohn Violin Concerto, a staple of the classical repertoire—for his friend, Ferdinand David. Obviously Bull could not have performed this work. But he could have performed Bach, Haydn, Mozart, or Beethoven. Classical music was in full flower, and the blues, jazz, and rock and roll were still in the distant and unforeseeable future.

I would like to think Ole Bull performed some Beethoven, who had been dead for eleven years. Even more, I would like to think that Ole Bull was a pianist, not a violinist, and that he performed Beethoven’s last piano sonata, Opus 111, the one with rocking and rolling passages in the second movement (starting at roughly 14:20):

That passage certainly marked the remotest outpost of the Western musical imagination, which didn’t become comfortable with that kind of expressive material for another three-quarters of a century. Even then the comfort was strictly circumscribed. And Memphis in 1838 would certainly have impressed Ole Bull, or any other civilized European, as being pretty near the dropping-off point of Western civilization.

The sexuality which Beethoven had evoked and expressed so directly in the second and third variations of the “Arietta” was quickly sublimated, urging composers to ever more subtle and complex chromatic games, stretching movements over longer and longer time periods, from ten minutes to twenty, to half an hour or more for a single movement of a Mahler symphony. Haydn and Mozart wrote complete symphonies that weren’t that long. It did the same thing to opera, most particularly, to Wagner’s opera. Wagner would stretch it to four, five, or six hours, flowing and ebbing, building and collapsing, and ultimately exhausting. But never really fulfilling.

At roughly the same time when Stravinsky’s Rite of Spring scandalized Paris society (1913) with its driving rhythms, the blues, ragtime, and jazz would emerge in still-barbarous America. W.C. Handy wrote his “Memphis Blues” in 1912 and “Beale Street Blues” in 1917. But back in 1838, Ole Bull could not have imagined music like Stravinsky’s nor could he have heard music like Handy’s, though the music he played contained the roots of one and the music he heard on the street was the roots of the other. In point of sophistication and complexity, the music Bull heard would have been less so than the music he played, just as Handy’s blues was less sophisticated than Stravinsky’s ballet. Yet with all these differences and distances these musics did meet, and that miscegenating rhythm has been a driving force in twentieth-century culture: popular and high, American, Western, and world. Through these rhythms the mind of man, and woman, has been seeking a more generous and fulfilling interaction with the body.

Friday, January 7, 2022

Willard Quine on Limits to Knowing

It seems to me, off-hand, that this discussion of Quine's has some bearing on John Horgan's concerns about the end of science. It may also have some bearing on the notion of superintelligent machines. After all, presumably those machines can do things that humans cannot. Could they know things that we cannot?

After a fair amount of discussion, Quine concludes (c. 25:00):

Questions, let us remember, are in language. Language is learned by people from people only in relation ultimately to observable circumstances of utterance. The relation of language to observation is often very devious, but observation is finally all there is for language to be anchored to. If a question could, in principle, never be answered, then one feels that language has gone wrong. Language has parted its mooring and the question has no meaning.

On this philosophy, of course, our question has a sweeping answer. The question was whether there are things man could never know. The question was whether there are questions, meaningful questions, that man could in principle never answer. On this philosophy the answer to this question of questions is no.

That's the end of Quine's remarks. The rest of the video is given over to an interview with an anthropologist.

H/t 3 Quarks Daily.

Tuesday, January 4, 2022

Don’t read "Don't Look Up" too narrowly [Matt Yglesias] [Media Notes 65c]

The film’s creators say it’s a satire about climate change. But what do they know? Yglesias says:

If you insist on listening to the creators and seeing it as about climate, then while you might appreciate a few moments, I think you’ll mostly be annoyed and then start saying “but it’s not even funny” blah blah blah.

But that’s not the only way to read a text.

In policy terms, there’s not some sharp tradeoff between taking steps to minimize the risks of climate catastrophe and taking steps to minimize other kinds of catastrophes, and I don’t love framings that put it that way. But the use of a story about a comet collision as a metaphor for climate change — which I actually think works really well as a direct lesson about the risk of a comet hitting the planet as depicted in the film — struck me as funny. And I really encourage people to watch it with an open mind and see it as part of the cinema of existential risk and not just quibble about climate change.

I agree. It works as a story about a collision with a comet, but it works best if we read it more broadly, much more broadly.

On climate change:

The fundamental problem of climate change is that it involves asking people to make changes now for the sake of preventing harms that occur largely in the future to people living in other countries. It’s a genuine problem from hell, and it’s not actually solved by understanding the science or believing the factual information. This is exactly why ideas like McKay’s Manhattan Project [on carbon capture] are so important. While there is a lot we can do to improve the situation with more aggressive deployment of the technology we have, we also really do need more technological breakthroughs that will make lots of tradeoffs less painful and make progress easier.

Yglesias goes on to talk about the existential risk actually posed by comets, by supervolcanoes, and by future pandemics.

Back to the film:

... in this case the message is much bigger than climate change. There is a range of often goofy-sounding threats to humanity that don’t track well onto our partisan cleavages or culture war battles other than that addressing them invariably involves some form of concerted action of the sort that conservatives tend to disparage. And this isn’t a coincidence. If existential threats were materializing all the time, we’d be dead and not streaming satirical films on Netflix. So the threats tend to sound “weird,” and if you talk a lot about them you’ll be “weird.” They don’t fit well into the grooves of ordinary political conflict because ordinary political conflict is about stuff that happens all the time.

So read Ord’s book “The Precipice” and find out all about it. Did you know that the Biological Weapons Convention has just four employees? I do because it’s in the book. Let’s maybe give them six?

For all that, though, I am genuinely shocked that the actual real-world emergence of SARS-Cov-2 has not caused more people to care about pandemic risk. The havoc that this pandemic has wreaked just in terms of economic harm and annoyance has been devastating.

There’s more.

Red branches, lion's head, and the peeping sun

Fantasia and Transnational Film Culture

I'm bumping this 2012 post to the top of the queue primarily for the argument (starting halfway through) it makes about film culture being fundamentally transnational. 

* * * * *

I have argued that Disney’s Fantasia is a signal work in a film culture that is fundamentally transnational. By transnational I don’t mean universal, or anything like it. I mean only that film culture at that time, the mid-20th Century, operated across national borders and, indeed, had been doing so since its inception.

This argument goes back to a piece I’d originally published in The Valve in 2006 and republished here in 2010: Disney’s Fantasia as Masterpiece. First I want to reprise aspects of that argument and then I’ll flesh out some things I didn’t get to back then.
Fantasia as a Singular Work

In order to argue that Fantasia is a masterpiece I had to argue that it is not, as it is so often considered, an unordered collection of autonomous episodes. That the film is not a narrative is obvious. What is not so obvious is that it is, I argue, an encyclopedia. The episodes have been carefully, if unconsciously, chosen to indicate the whole of the cosmos. In making that argument I called on two literary critics, Edward Mendelson and Franco Moretti:

In 1976 Edward Mendelson published an article on “Encyclopedic Narrative: From Dante to Pynchon” (MLN 91, 1267-1275). It was an attempt to define a genre whose members include Dante's Divine Comedy, Rabelais's Gargantua and Pantagruel, Cervantes' Don Quixote, Goethe's Faust, Melville's Moby Dick, Joyce's Ulysses, and Pynchon's Gravity's Rainbow. Somewhat later Franco Moretti was thinking about “monuments,” “sacred texts,” “world texts,” texts he wrote about in Modern Epic (Verso 1996). He came upon Mendelson's article, saw a kinship with his project, and so added Wagner's Ring of the Nibelung (note, a musical as well as a narrative work), Márquez's One Hundred Years of Solitude, and a few others to the list. It is in this company that I propose to place Fantasia.

And so I did.

You’ll have to consult that old post if you want the full argument where I consider the contribution of each of the film’s eight episodes. In this post I’m concerned with only one aspect of the argument, that these encyclopedic works are special kinds of cultural markers:

One other characteristic looms large in Mendelson's formulation. These works are identified with particular national cultures and arise when these nations become aware of themselves as distinct entities. This creates a problem for his nomination of Gravity's Rainbow as an encyclopedic work because Moby Dick already has the encyclopedic slot in American letters. He deals with the problem by suggesting that Pynchon is “the encyclopedist of the newly-forming international culture whose character his book explicitly labors to identify” (pp. 1271-1272).

Fantasia presents the same problem, for, like Moby Dick before and Gravity's Rainbow after, it is nominally an American work. But there is no specifically American reference in the film. None of the music is American, none of the segments are set in America nor refer to American history or culture. It is not, in any ordinary sense, a nationalist work, an expression of national identity. Rather, it is an expression of a naïve middle-brow universalism, unaware of the cultural specificities on which it depends.

That seems right to me, a naïve middle-brow universalism, one that circulated transnationally. That’s what Disney was after and the company still pursues it, I suppose, long after the founding genius, Walter Elias Disney, has died.
Nationalism Gets in the Way

First of all, we need to recognize that national boundaries are not good markers of cultural kinds, though nationalist ideologies insist otherwise. I’ve argued this point at some length in a working paper, Culture, Plurality, and Identity in the 21st Century. The physics conducted behind Japanese borders is not Japanese physics in any culturally significant sense. It is just physics that is done on Japanese soil in Japanese-built structures and is readily intelligible to anyone who knows contemporary physics.

Nor is the golf played on American golf courses an essentially American game merely because American citizens play it on American soil. That the game originated in Scotland is one thing; that it is now played the world over is another. What that implies about cultural identity, I don’t quite know. But we’ve got to step back from the automatic practice of hanging national labels on cultural formations.

Culture doesn’t work that way, and never has. Cultural practices have circulated freely among human groups long before the nation state was invented. Upon its invention, though, which happened relatively recently according to Eric Hobsbawm, Nations and Nationalism Since 1780 (Cambridge 1990), it set out to create a myth of cultural homogeneity within its borders. Of national languages Hobsbawm writes (p. 54):

National languages are therefore almost always semi-artificial constructs and occasionally, like modern Hebrew, virtually invented. They are the opposite of what nationalist mythology supposes them to be, namely the primordial foundations of national culture and the matrices of the national mind. They are usually attempts to devise a standardized idiom out of a multiplicity of actually spoken idioms, which are thereafter downgraded to dialects, the main problem in their construction being usually, which dialect to choose as the base of the standardized and homogenized language.

Of France, for example (p. 60):

However, given that the dialect which forms the basis of a national language is actually spoken, it does not matter that those who speak it are a minority, so long as it is a minority of sufficient political weight. In this sense French was essential to the concept of France, even though in 1789 50% of Frenchmen did not speak it at all, only 12-13% spoke it ‘correctly’—and indeed outside a central region it was not usually habitually spoken even in the area of the langue d’oïl, except in towns, and then not always in their suburbs. In northern and southern France virtually nobody talked French.

Italy was even more problematic, with only 2½ % speaking Italian in 1860 when the country was unified.

One aspect of this nationalizing process that shows up in university curricula is the teaching of national literature and history to undergraduates as an aspect of training them to be good national citizens. Ironically enough, in American universities it was, until relatively recently, English literature that was taught to undergraduates, not American. There were no courses in American literature at Johns Hopkins when I was a student there in the 1960s, nor were there any Americanists on the faculty of the English Department. That situation was not unusual for the time, though that time has since passed.

American national mythology has it that America is a melting pot, implying that many cultures went into it but we’ve all come out the same: American. But we haven’t. America wasn’t culturally homogeneous when Disney made Fantasia and it isn’t homogeneous now.

National boundaries and national institutions, political, economic and military, certainly have cultural consequences. But they do not constitute cultural essences. Cultural formations cross national borders all the time.
Film Culture as Transnational

In this perspective, the assertion that film culture is transnational would seem almost a trivial truism. By the late nineteenth century, when motion picture technology was invented and put to practical use, European conquest had linked the world with lines of trade and exploitation, which necessarily involved global circulation of cultural materials and practices of all kinds. The World’s Columbian Exposition, held in Chicago in 1893—where Elias Disney, Walt’s father, had worked as a carpenter—had exhibits from 46 nations around the world and was attended by 27 million people. Eadweard Muybridge exhibited moving pictures of animals at the fair.

The early film industry was, of course, built on silent films. So language was no barrier to circulation of films. According to Robert Sklar, Movie-Made America (1994, p. 22), American film-makers copied the comedies of Georges Méliès as soon as they landed—copyright was not established for films until 1912. Until World War I the French company Pathé Frères was the world’s largest film producer (p. 29). The Europeans were the first to make long films, a market the American producer Adolph Zukor entered in 1912 (pp. 42-44).

Zukor, like many of the early movie moguls, was a Jewish immigrant from Eastern Europe. This fact prompted Neal Gabler to write An Empire of Their Own: How the Jews Invented Hollywood, in which he argues that the men who built the Hollywood film industry used it as a vehicle to make themselves into Americans, but also to make an imagined America in their image, as Christopher Lehmann-Haupt notes in this review:

But above all things, these moguls ''wanted to be regarded as Americans, not Jews; they wanted to reinvent themselves here as new men.'' And in doing so, ''the Hollywood Jews created a powerful cluster of images and ideas - so powerful that, in a sense, they colonized the American imagination.''

By the time we get to Disney—the studio was founded in the early 1920s—the American film industry may have been the largest, but the industry was itself a world industry in which it was routine for films to circulate outside their countries of origin.

That was certainly true of Disney’s cartoons, which were known around the world. Mickey Mouse, and Mickey Mouse merchandise, was not exclusively American. By the time Fantasia came out, 1940, Mickey was known around the world. By then Disney had become so dependent on overseas exhibition that the advent of World War II hurt him badly, forcing him to stop making feature films and to make propaganda and training films for the Federal Government in order to keep the studio solvent.

Disney himself may have been an example of the rags-to-riches story so central to American mythology, and he may even have thought of his films as pure Americana. But pure they were not. He and his artists borrowed freely from many cultures in making their cartoons.
Disney Culture

Consider the first five feature length films, the ones completed before Disney shut down feature production at the beginning of World War II. Two of them, Snow White and the Seven Dwarfs and Pinocchio, are based on European sources and take place in European settings. Bambi is set entirely in the woods and so could take place anywhere with the appropriate flora and fauna, which is a rather large part of the world. The many settings of the episodes in Fantasia defy easy geographical reference, though The Rite of Spring takes nothing less than the solar system as its setting. That leaves us with Dumbo, which is set in America, starting in Florida with the winter quarters for the circus. But the circus itself is a European cultural form that was brought to America in the 19th Century.

Yes, these films are American in the sense that they were made in America and it is probably the case that most of the men and women involved in the making were born in America. But not all of them. Danish illustrator Kay Nielsen joined Disney in 1939 and did concept art for the Ave Maria and Night on Bald Mountain sequences of Fantasia. More to the point, Robin Allan (Walt Disney and Europe) has documented a wide range of European influences in Disney’s cartoon imagery in general. In the particular case of Fantasia, Kendall O’Connor, an art director on the Dance of the Hours, asserted African and Japanese influences (quoted in John Culhane, Walt Disney's Fantasia, pp. 168, 170).

Finally, all of Fantasia’s music is European; none of it is American. This was generally true of soundtrack music for feature films. It may have been composed by American composers, some of them European immigrants, but the idiom was 19th Century European romanticism. Cartoon music was different. While much of it was from the 19th Century European classics—which were, of course, not in copyright—swing and jazz did show up in soundtracks, but not on the soundtracks of any of these old Disney features.

So, where does that leave the cultural identity of Disney’s cartoons, but most particularly, of Fantasia? If we insist on assigning them an identity, then the obvious identity is Disney. They’re an expression of Disney culture, but not in the contemporary sense of corporate culture, where, for example, the culture of Apple is said to be different from that of Monsanto or Goldman Sachs. The cultural identity of those films is Disney in the more interesting sense that it is an amalgam of diverse cultural influences that Disney himself—for he was very much a hands-on micro-manager for at least the first two decades of his studio—authorized and, to some extent, sought out.

In this process Disney and his artists did not confine themselves to American sources. That this Disney amalgam has something of a middle-brow universalism about it is interesting. And it is not too difficult to see how that would have a broad transnational appeal. Not universal, though. Just transnational. A lot of people in a lot of nations like Disney films, some more than others. And many, especially intellectuals, feel that they are trite and kitschy as well, not at all authentic. That’s OK as well, though not compatible with my belief that Fantasia is one of the great works. But THAT’s a different argument.

THIS argument is simply that, in the large, film culture is inherently transnational in that, from the beginning, films have circulated across national borders and film-makers in one nation have been influenced by film-makers in other nations. Yes, there ARE national cinemas, and there is more to that than the mere location where the film is made. Any number of Hollywood Westerns, for example, can plausibly be said to be somehow culturally American in a way that Fantasia is not. And that is my argument in the small, that Fantasia in particular is not culturally American, despite having been made in America by people who were, for the most part, American citizens.
American Culture?

Is there such a thing as American culture? I believe so, though I’m not sure just what it is. I’ve already mentioned the Hollywood Western. I think a case can be made that it is authentically American. But it hasn’t stayed in America—think of the spaghetti Westerns from Italy. There is a national political mythology that encompasses the Revolutionary War, George Washington and the cherry tree, Lincoln’s Gettysburg Address, not to mention the Constitution and any number of such documents. And there’s Thanksgiving, a specifically American holiday. And so forth.

Such things constitute the cultural formation of American identity and, as such, of course they are American. And no doubt much else besides. All nation states have cultural apparatus specifically devoted to validating national identity and thus obscuring what actually takes place, on the ground, in modern nations with international trade, travel, and communications.

It is precisely on account of that nationalist mystification that I think we must be wary of the reflexive identification of cultural formations with nation states. And these large identities, such as Western, Eastern, and African culture, are even more doubtful. Such reflexive and pervasive identifications beg too many questions and get in the way of understanding cultural dynamics.

* * * * *

Note: Wikipedia has a short entry on transnational cinema:
A key argument of Transnational cinema is the necessity for a redefinition, or even refutation, of the concept of a national cinema. National identity has been posited as an 'imaginary community' that in reality is formed of many separate and fragmented communities defined more by social class, economic class, sexuality, gender, generation, religion, ethnicity, political belief and fashion, than nationality.
The references are quite recent.