I am really digging this mashup of network neuroscience and control theory https://t.co/d60Ly8cizC— Patrick Mineault (@patrickmineault) January 30, 2022
Sunday, January 30, 2022
Friday, January 28, 2022
Abstract of the linked article:
Deep-learning models have become pervasive tools in science and engineering. However, their energy requirements now increasingly limit their scalability [1]. Deep-learning accelerators aim to perform deep learning energy-efficiently, usually targeting the inference phase and often by exploiting physical substrates beyond conventional electronics. Approaches so far have been unable to apply the backpropagation algorithm to train unconventional novel hardware in situ. The advantages of backpropagation have made it the de facto training method for large-scale neural networks, so this deficiency constitutes a major impediment. Here we introduce a hybrid in situ–in silico algorithm, called physics-aware training, that applies backpropagation to train controllable physical systems. Just as deep learning realizes computations with deep neural networks made from layers of mathematical functions, our approach allows us to train deep physical neural networks made from layers of controllable physical systems, even when the physical layers lack any mathematical isomorphism to conventional artificial neural network layers. To demonstrate the universality of our approach, we train diverse physical neural networks based on optics, mechanics and electronics to experimentally perform audio and image classification tasks. Physics-aware training combines the scalability of backpropagation with the automatic mitigation of imperfections and noise achievable with in situ algorithms. Physical neural networks have the potential to perform machine learning faster and more energy-efficiently than conventional electronic processors and, more broadly, can endow physical systems with automatically designed physical functionalities, for example, for robotic materials and smart sensors.
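The core idea is easy to caricature in a few lines. Here's a toy sketch of my own (not the paper's code; every function and number here is invented for illustration): the forward pass runs through the "physics", simulated as a noisy function with a systematic imperfection, while the backward pass uses the gradient of an idealized differentiable model of that physics.

```python
import numpy as np

rng = np.random.default_rng(0)
theta_true = 1.7   # hypothetical target behaviour we want the "device" to learn

def physical_layer(x, theta):
    # Stand-in for a real physical system: the nominal transformation plus
    # noise and a systematic imperfection the digital model doesn't know about.
    return np.tanh(theta * x) + 0.02 * x**2 + 0.01 * rng.standard_normal(x.shape)

def digital_grad(x, theta):
    # d/dtheta of the idealized in-silico model tanh(theta * x),
    # used only for the backward pass.
    return x * (1.0 - np.tanh(theta * x) ** 2)

x = rng.uniform(-1.0, 1.0, size=256)
target = physical_layer(x, theta_true)   # behaviour we want to reproduce
theta, lr = 0.1, 0.05
for _ in range(2000):
    y = physical_layer(x, theta)                        # in situ forward pass
    grad = np.mean(2.0 * (y - target) * digital_grad(x, theta))
    theta -= lr * grad                                  # in silico update
```

Because the error signal is measured on the physical outputs, the fitted parameter absorbs the device's quirks even though the backward model is only an approximation; that, as I read it, is the "automatic mitigation of imperfections" the abstract claims.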
Thursday, January 27, 2022
We've trained GPT-3 to be more aligned with what humans want: The new InstructGPT models are better at following human intent than a 100x larger model, while also improving safety and truthfulness. https://t.co/rKNpCDAMb2— OpenAI (@OpenAI) January 27, 2022
We’ve used basically the same technique (which we call RLHF) in the past for text summarization (https://t.co/nrJjX62SsV). “All we’re doing” here is applying it to a much broader range of language tasks that people use GPT-3 for in the API— Ryan Lowe (@ryan_t_lowe) January 27, 2022
Color vision, from genetics through neuropsychology to color terms, is one of the most intensely investigated topics in cognitive science. The subject is interesting for two reasons:
- as psychological phenomena go, it's relatively simple, and accessible to investigation through a variety of methods, and
- in particular, it can be studied cross-culturally and thus shed light on the nature-nurture question.
Color naming varies across languages; however, it has long been held that this variation is constrained. Berlin and Kay found that color categories in 20 languages were organized around universal ‘focal colors’ – those colors corresponding principally to the best examples of English ‘black’, ‘white’, ‘red’, ‘yellow’, ‘green’ and ‘blue’. Moreover, a classic set of studies by Eleanor Rosch found that these focal colors were also remembered more accurately than other colors, across speakers of languages with different color naming systems. Focal colors seemed to constitute a universal cognitive basis for both color language and color memory.
The debate over color naming and cognition can be clarified by discarding the traditional ‘universals versus relativity’ framing, which collapses important distinctions. There are universal constraints on color naming, but at the same time, differences in color naming across languages cause differences in color cognition and/or perception. The source of the universal constraints is not firmly established. However, it can fairly be said that nature proposes and nurture disposes. Finally, ‘categorical perception’ of color might well be perception sensu stricto, but the jury is still out.
The existence of cross-linguistic universals in color naming is currently contested. Early empirical studies, based principally on languages of industrialized societies, suggested that all languages may draw on a universally shared repertoire of color categories. Recent work, in contrast, based on languages from nonindustrialized societies, has suggested that color categories may not be universal. No comprehensive objective tests have yet been conducted to resolve this issue. We conduct such tests on color naming data from languages of both industrialized and nonindustrialized societies and show that strong universal tendencies in color naming exist across both sorts of language.
The central empirical focus of our study was the color naming data of the World Color Survey (WCS). The WCS was undertaken in response to the above-mentioned shortcomings of the BK [Berlin and Kay] data (1): it has collected color naming data in situ from 110 unwritten languages spoken in small-scale, nonindustrialized societies, from an average of 24 native speakers per language (mode: 25 speakers), insofar as possible monolinguals. Speakers were asked to name each of 330 color chips produced by the Munsell Color Company (New Windsor, NY), representing 40 gradations of hue at eight levels of value (lightness) and maximal available chroma (saturation), plus 10 neutral (black-gray-white) chips at 10 levels of value. Chips were presented in a fixed random order for naming. The array of all color chips is shown in Fig. 1. (The actual stimulus colors may not be faithfully represented there.) In addition, each speaker was asked to indicate the best example(s) of each of his or her basic color terms. The original BK study used a color array that was nearly identical to this, except that it lacked the lightest neutral chip. The languages investigated in the WCS and BK are listed in Tables 1 and 2.
The application of statistical tests to the color naming data of the WCS has established three points: (i) there are clear cross-linguistic statistical tendencies for named color categories to cluster at certain privileged points in perceptual color space; (ii) these privileged points are similar for the unwritten languages of nonindustrialized communities and the written languages of industrialized societies; and (iii) these privileged points tend to lie near, although not always at, those colors named red, yellow, green, blue, purple, brown, orange, pink, black, white, and gray in English.
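As a quick sanity check on the stimulus description quoted above, the chip counts work out:

```python
hues, lightness_levels = 40, 8       # 40 gradations of hue at 8 levels of value
neutral = 10                         # black-gray-white chips at 10 levels
chromatic = hues * lightness_levels  # 320 maximal-chroma chips
total = chromatic + neutral          # 330 chips in the WCS naming array
```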
Wednesday, January 26, 2022
Looking over my entertainment options for this evening. Sam Cooke at the @ApolloTheater? Brubeck at @carnegiehall? Miles Davis at the Village Vanguard? Dylan in... Newark?? I think @STILLERandMEARA at the Blue Angel sounds best. Hey, @RedHourBen want to join me? pic.twitter.com/SsXgTYAB8G— Dan Pasternack (@DanPasternack) January 27, 2022
Saturday, January 22, 2022
Zaria Gorvett, The forgotten medieval habit of 'two sleeps', BBC Future, Jan 9, 2022. Much of the article is apparently based on Roger Ekirch, At Day's Close: A History of Nighttime.
In the 17th Century, a night of sleep went something like this.
From as early as 21:00 to 23:00, those fortunate enough to afford them would begin flopping onto mattresses stuffed with straw or rags – or feathers, if they were wealthy – ready to sleep for a couple of hours. (At the bottom of the social ladder, people would have to make do with nestling down on a scattering of heather or, worse, a bare earth floor – possibly even without a blanket.)
At the time, most people slept communally, and often found themselves snuggled up with a cosy assortment of bedbugs, fleas, lice, family members, friends, servants and – if they were travelling – total strangers.
To minimise any awkwardness, sleep involved a number of strict social conventions, such as avoiding physical contact or too much fidgeting, and there were designated sleeping positions. For example, female children would typically lie at one side of the bed, with the oldest nearest the wall, followed by the mother and father, then male children – again arranged by age – then non-family members.
A couple of hours later, people would begin rousing from this initial slumber. The night-time wakefulness usually lasted from around 23:00 to about 01:00, depending on what time they went to bed. It was not generally caused by noise or other disturbances in the night – and neither was it initiated by any kind of alarm (these were only invented in 1787, by an American man who – somewhat ironically – needed to wake up on time to sell clocks). Instead, the waking happened entirely naturally, just as it does in the morning.
The period of wakefulness that followed was known as "the watch" – and it was a surprisingly useful window in which to get things done. "[The records] describe how people did just about anything and everything after they awakened from their first sleep," says Ekirch.
Under the weak glow of the Moon, stars, and oil lamps or "rush lights" – a kind of candle for ordinary households, made from the waxed stems of rushes – people would tend to ordinary tasks, such as adding wood to the fire, taking remedies, or going to urinate (often into the fire itself).
For peasants, waking up meant getting back down to more serious work – whether this involved venturing out to check on farm animals or carrying out household chores, such as patching cloth, combing wool or peeling the rushes to be burned. One servant Ekirch came across even brewed a batch of beer for her Westmorland employer one night, between midnight and 02:00. Naturally, criminals took the opportunity to skulk around and make trouble – like the murderer in Yorkshire.
But the watch was also a time for religion.
For Christians, there were elaborate prayers to be completed, with specific ones prescribed for this exact parcel of time. One father called it the most "profitable" hour, when – after digesting your dinner and casting off the labours of the world – "no one will look for you except for God”.
Those of a philosophical disposition, meanwhile, might use the watch as a peaceful moment to ruminate on life and ponder new ideas. In the late 18th Century, a London tradesman even invented a special device for remembering all your most searing nightly insights – a "nocturnal remembrancer", which consisted of an enclosed pad of parchment with a horizontal opening that could be used as a writing guide.
But most of all, the watch was useful for socialising – and for sex.
As Ekirch explains in his book, At Day's Close: A History of Nighttime, people would often just stay in bed and chat. And during those strange twilight hours, bedfellows could share a level of informality and casual conversation that was hard to achieve during the day.
For husbands and wives who managed to navigate the logistics of sharing a bed with others, it was also a convenient interval for physical intimacy – if they'd had a long day of manual labour, the first sleep took the edge off their exhaustion and the period afterwards was thought to be an excellent time to conceive copious numbers of children.
Once people had been awake for a couple of hours, they'd usually head back to bed. This next step was considered a "morning" sleep and might last until dawn, or later. Just as today, when people finally woke up for good depended on what time they went to bed.
Ekirch also references bi-phasic sleep in the classical era.
Biphasic sleep is common among animals as well.
"There are broad swaths of variability among primates, in terms of how they distribute their activity throughout the 24-hour period," says David Samson, director of the sleep and human evolution laboratory at the University of Toronto Mississauga, Canada. And if double-sleeping is natural for some lemurs, he wondered: might it be the way we evolved to sleep too?
The move away from biphasic sleep happened during the Industrial Revolution:
"Artificial illumination became more prevalent, and more powerful – first there was gas [lighting], which was introduced for the first time ever in London," says Ekirch, "and then, of course, electric lighting toward the end of the century. And in addition to altering people's circadian rhythms, artificial illumination also naturally allowed people to stay up later."
However, though people weren't going to bed at 21:00 anymore, they still had to wake up at the same time in the morning – so their rest was truncated. Ekirch believes that this made their sleep deeper, because it was compressed.
As well as altering the population's circadian rhythms, the artificial lighting lengthened the first sleep, and shortened the second. "And I was able to trace [this], almost decade by decade, over the course of the 19th Century," says Ekirch.
Thursday, January 20, 2022
Abstract for the linked article:
Research on both natural intelligence (NI) and artificial intelligence (AI) generally assumes that the future resembles the past: intelligent agents or systems (what we call ‘intelligence’) observe and act on the world, then use this experience to act on future experiences of the same kind. We call this ‘retrospective learning’. For example, an intelligence may see a set of pictures of objects, along with their names, and learn to name them. A retrospective learning intelligence would merely be able to name more pictures of the same objects. We argue that this is not what true intelligence is about. In many real world problems, both NIs and AIs will have to learn for an uncertain future. Both must update their internal models to be useful for future tasks, such as naming fundamentally new objects and using these objects effectively in a new context or to achieve previously unencountered goals. This ability to learn for the future we call ‘prospective learning’. We articulate four relevant factors that jointly define prospective learning. Continual learning enables intelligences to remember those aspects of the past which they believe will be most useful in the future. Prospective constraints (including biases and priors) facilitate the intelligence finding general solutions that will be applicable to future problems. Curiosity motivates taking actions that inform future decision making, including in previously unmet situations. Causal estimation enables learning the structure of relations that guide choosing actions for specific outcomes, even when the specific action-outcome contingencies have never been observed before. We argue that a paradigm shift from retrospective to prospective learning will enable the communities that study intelligence to unite and overcome existing bottlenecks to more effectively explain, augment, and engineer intelligences.
Abstract of the linked article:
Academic defenses of the humanities often make two assumptions: first, that the overwhelming public perception of the humanities is one of crisis, and second, that our understanding of what the humanities mean is best traced through a lineage of famous reference points, from Matthew Arnold to the Harvard Redbook. We challenge these assumptions by reconsidering the humanities from the perspective of a corpus of over 147,000 relatively recent national and campus newspaper articles. Building from the work of the WhatEvery1Says project (WE1S), we employ computational methods to analyze how the humanities resonate in the daily language of communities, campuses, and cities across the US. We compare humanities discourse to science discourse, exploring the distinct ways that each type of discourse communicates research, situates itself institutionally, and discusses its value. Doing so shifts our understanding of both terms in the phrase “public humanities.” We turn from the sweeping and singular conception of “the public” often invoked by calls for a more public humanities to the multiple overlapping publics instantiated through the journalistic discourse we examine. And “the humanities” becomes not only the concept named by articles explicitly “about” the humanities, but also the accreted meaning of wide-ranging mentions of the term in building names, job titles, and announcements. We argue that such seemingly inconsequential uses of the term index diffuse yet vital connections between individuals, communities, and institutions including, but not limited to, colleges and universities. Ultimately, we aim to show that a robust understanding of how humanities discourse already interacts with and conceives of the publics it addresses should play a crucial role in informing ongoing and future public humanities efforts.
Wednesday, January 19, 2022
Two rather general sets of remarks, one set prompted by Ted Gioia, the other by Tyler Cowen, followed by a bunch of specifics about classical music and jazz piano.
Early in the 21st Century old music seems to be pushing out new
Ted Gioia has an interesting piece, Is Old Music Killing New Music? (1.19.22). He opens:
I had a hunch that old songs were taking over music streaming platforms—but even I was shocked when I saw the most recent numbers. According to MRC Data, old songs now represent 70% of the US music market.
Those who make a living from new music—especially that endangered species known as the working musician — have to look on these figures with fear and trembling.
But the news gets worse.
The new music market is actually shrinking. All the growth in the market is coming from old songs.
Just consider these facts: the 200 most popular tracks now account for less than 5% of total streams. It was twice that rate just three years ago. And the mix of songs actually purchased by consumers is even more tilted to older music—the current list of most downloaded tracks on iTunes is filled with the names of bands from the last century, such as Creedence Clearwater and The Police.
And so on.
After offering a fair bit of discussion he considers and rejects the idea “that this decline in music is simply the result of lousy new songs. Music used to be better, or so they say.” Rather, “I listen to 2-3 hours of new music every day, and I know that there are plenty of outstanding young musicians out there. The problem isn’t that they don’t exist, but that the music industry has lost its ability to discover and nurture their talents.” He goes on to note:
In fact, nothing is less interesting to music executives than a completely radical new kind of music. [...] Anything that genuinely breaks the mold is excluded from consideration almost as a rule. That’s actually how the current system has been designed to work.
Even the music genres famous for shaking up the world—rock or jazz or hip-hop—face this same deadening industry mindset. I love jazz, but many of the radio stations focused on that genre play songs that sound almost the same as what they featured ten or twenty years ago. In many instances, they actually are the same songs.
He goes on to say a bit more and ends with the observation:
New music always arises in the least expected place, and when the power brokers aren’t even paying attention. And it will happen again just like that. It certainly needs to. Because the decision-makers controlling our music institutions have lost the thread. We’re lucky that the music is too powerful for them to kill.
OK, so change has to happen from the bottom up.
Set that aside.
* * * * *
Rick Beato discusses Gioia's thesis:
What happened to classical music?
A couple of weeks ago Tyler Cowen asked “Why has classical music declined?” He posed it in response to a request that one of his readers, Rahul, had made in a comment on an earlier post (quoting Rahul’s remarks):
In general perception, why are there no achievements in classical music that rival a Mozart, Bach, Beethoven etc. that were created in say the last 50 years?
Is it an exhaustion of what's possible? Are all great motifs already discovered?
Or will we in another 50 or 100 years admire a 1900's composer at the same level as a Mozart or Beethoven?
Or was it something unique in that era (say 1800's) which was conducive to the discovery of great compositions? Patronage? Lack of distraction?
Cowen offers six numbered observations of his own, leading to a long discussion.
Here’s Cowen’s first comment:
The advent of musical recording favored musical forms that allow for the direct communication of personality. Mozart is mediated by sheet music, but the Rolling Stones are on record and the radio and now streaming. You actually get “Mick Jagger,” and most listeners prefer this to a bunch of quarter notes. So a lot of energy left the forms of music that are communicated through more abstract means, such as musical notation, and leapt into personality-specific musics.
From Mozart to the Rolling Stones, that’s quite a lot of musical territory to cover. Yikes!
* * * * *
I’ve been haunted by that conversation. While the quality of the responses is all over the place – no surprise there – what’s interesting is how many of them there are, 210 at the moment. The subject matters to a lot of people.
So I’ve been thinking about it and making notes, and have lots of thoughts.
One of those thoughts is that this decline of classical music, if you want to call it that, has been followed by the ascendance of American music. I’d hesitate to go so far as to say that classical music was ‘killed’ by American pop, but there you have it in Chuck Berry, “Roll Over, Beethoven” (1956).
The idea is implicit in the juxtaposition of Mozart and The Rolling Stones in Cowen’s first comment. To be sure, the Stones are British, but they perform in a genre that arose in America, rock and roll. A lot of those “personality-specific” musics came out of America.
In the case of rock and roll, we can trace it back through jump bands and swing combos to early jazz and blues, which then disperse into the 19th century, where minstrelsy emerged as a major form of mass entertainment. The first quarter of the 20th century saw the migration of Blacks out of the South to the North, Midwest, and West. At the same time recording and radio allowed music to spread beyond the geographical locus of musical performers. And Europe was engulfed in the First World War.
That’s when “creators struck out in new directions” en masse. And so you had jazz/swing/blues vs. the long-hair and high-brow classics, a conflict that showed up, among other places, in films and cartoons (where it lingered for a while, e.g. Bugs Bunny in “Long-Haired Hare”, 1949). We had Gershwin and Broadway and Armstrong, Crosby, and Sinatra figuring out how to adapt singing style to microphones and amplification.
What jazz pianists learned from the classical tradition
One thing that came up in the discussion of Tyler’s comments is the fact that, once movies acquired sound, the compositional strategies and techniques developed in 18th and (mostly) 19th century classical music showed up on the sound tracks of those movies (John Williams is one composer mentioned by name).
More generally, however, those techniques became the common inheritance of any musicians who would listen and learn. Consider these jazz musicians listed in a piece by Ethan Iverson, Theory and European Classical Music (NEC Missive #2):
Keith Romer, How A.I. Conquered Poker, NYTimes Magazine, 1.18.22.
Using his own simplified version of the game, in which two players were randomly “dealt” secret numbers and then asked to make bets of a predetermined size on whose number was higher, von Neumann derived the basis for an optimal strategy. Players should bet large both with their very best hands and, as bluffs, with some definable percentage of their very worst hands. (The percentage changed depending on the size of the bet relative to the size of the pot.) Von Neumann was able to demonstrate that by bluffing and calling at mathematically precise frequencies, players would do no worse than break even in the long run, even if they provided their opponents with an exact description of their strategy. And, if their opponents deployed any strategy against them other than the perfect one von Neumann had described, those opponents were guaranteed to lose, given a large enough sample.
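The shape of von Neumann's result can be made concrete by brute force. Here's a hedged sketch of my own (not from the article): a tiny discrete version of his game, in which the opponent always picks the most punishing calling threshold, and we compare the bettor's best strategy with bluffs against the best value-bets-only strategy.

```python
def bettor_ev(n, bet, value_t, bluff_t, call_t):
    """Average payoff to the bettor. Hands are distinct integers 0..n-1;
    both players ante 1. The bettor bets with hands >= value_t (value bets)
    or hands < bluff_t (bluffs), otherwise checks to showdown. The
    opponent calls a bet only with hands >= call_t."""
    total, count = 0, 0
    for h1 in range(n):
        for h2 in range(n):
            if h1 == h2:
                continue
            count += 1
            if not (h1 >= value_t or h1 < bluff_t):
                total += 1 if h1 > h2 else -1   # check: showdown for the antes
            elif h2 < call_t:
                total += 1                      # opponent folds; bettor takes pot
            else:
                total += (1 + bet) if h1 > h2 else -(1 + bet)  # bet is called
    return total / count

def vs_best_response(n, bet, value_t, bluff_t):
    # Opponent picks the calling threshold that hurts the bettor most.
    return min(bettor_ev(n, bet, value_t, bluff_t, c) for c in range(n + 1))

n, bet = 20, 2
no_bluff = max(vs_best_response(n, bet, v, 0) for v in range(n + 1))
with_bluff = max(vs_best_response(n, bet, v, b)
                 for v in range(n + 1) for b in range(v + 1))
```

Since the never-bluff strategies are a subset of the bluffing ones, `with_bluff` can never come out worse than `no_bluff`; the point of running the enumeration is to see how much betting some of the very worst hands adds, which is exactly von Neumann's observation.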
The early days:
Unlike in chess or backgammon, in which both players’ moves are clearly legible on the board, in poker a computer has to interpret its opponents’ bets despite never being certain what cards they hold. Neil Burch, a computer scientist who spent nearly two decades working on poker as a graduate student and researcher at Alberta before joining an artificial intelligence company called DeepMind, characterizes the team’s early attempts as pretty unsuccessful. “What we found was if you put a knowledgeable poker player in front of the computer and let them poke at it,” he says, the program got “crushed, absolutely smashed.”
Partly this was just a function of the difficulty of modeling all the decisions involved in playing a hand of poker. Game theorists use a diagram of a branching tree to represent the different ways a game can play out. [...] For even a simplified version of Texas Hold ’em, played “heads up” (i.e., between just two players) and with bets fixed at a predetermined size, a full game tree contains 316,000,000,000,000,000 branches. The tree for no-limit hold ’em, in which players can bet any amount, has even more than that. “It really does get truly enormous,” Burch says. “Like, larger than the number of atoms in the universe.”
At first, the Alberta group’s approach was to try to shrink the game to a more manageable scale — crudely bucketing hands together that were more or less alike, treating a pair of nines and a pair of tens, say, as if they were identical. But as the field of artificial intelligence grew more robust, and as the team’s algorithms became better tuned to the intricacies of poker, its programs began to improve. Crucial to this development was an algorithm called counterfactual regret minimization. Computer scientists tasked their machines with identifying poker’s optimal strategy by having the programs play against themselves billions of times and take note of which decisions in the game tree had been least profitable (the “regrets,” which the A.I. would learn to minimize in future iterations by making other, better choices). In 2015, the Alberta team announced its success by publishing an article in Science titled “Heads-Up Limit Hold’em Poker Is Solved.” [...]
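CFR itself works over the full game tree, but its engine is regret matching, which fits in a few lines. As a toy stand-in for the Alberta programs (this is the standard textbook demo, not their code), here is regret matching in self-play on rock-paper-scissors; the time-averaged strategies approach the Nash equilibrium of mixing each action a third of the time.

```python
import random

ACTIONS = 3                 # 0 = rock, 1 = paper, 2 = scissors
PAYOFF = [[0, -1, 1],       # payoff to the row player
          [1, 0, -1],
          [-1, 1, 0]]

def strategy_from(regrets):
    # Regret matching: play each action in proportion to its positive regret.
    pos = [max(r, 0.0) for r in regrets]
    total = sum(pos)
    return [p / total for p in pos] if total > 0 else [1.0 / ACTIONS] * ACTIONS

def train(iters):
    regrets = [[0.0] * ACTIONS for _ in range(2)]
    strat_sum = [[0.0] * ACTIONS for _ in range(2)]
    for _ in range(iters):
        strats = [strategy_from(r) for r in regrets]
        acts = [random.choices(range(ACTIONS), weights=s)[0] for s in strats]
        for p in range(2):
            me, opp = acts[p], acts[1 - p]
            got = PAYOFF[me][opp]
            for a in range(ACTIONS):
                # Regret = how much better action a would have done than
                # the action actually taken, against the opponent's action.
                regrets[p][a] += PAYOFF[a][opp] - got
                strat_sum[p][a] += strats[p][a]
    # The *average* strategy over all iterations is what converges.
    return [[s / iters for s in row] for row in strat_sum]

random.seed(7)
avg0, avg1 = train(20_000)
```

Full CFR applies this same update at every decision point of the game tree, weighting each regret by the probability of reaching that point, which is why it scales to poker's enormous trees where naive enumeration cannot.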
It quickly became clear that academics were not the only ones interested in computers’ ability to discover optimal strategy. One former member of the Alberta team, who asked me not to name him, citing confidentiality agreements with the software company that currently employs him, told me that he had been paid hundreds of thousands of dollars to help poker players develop software that would identify perfect play and to consult with programmers building bots that would be capable of defeating humans in online games. Players unable to front that kind of money didn’t have to wait long before gaining more affordable access to A.I.-based strategies. The same year that Science published the limit hold ’em article, a Polish computer programmer and former online poker player named Piotrek Lopusiewicz began selling the first version of his application PioSOLVER. For $249, players could download a program that approximated the solutions for the far more complicated no-limit version of the game. As of 2015, a practical actualization of John von Neumann’s mathematical proof was available to anyone with a powerful enough personal computer.
Koon is quick to point out that even with access to the solvers’ perfect strategy, poker remains an incredibly difficult game to play well. The emotional swings that come from winning or losing giant pots and the fatigue of 12-hour sessions remain the same challenges as always, but now top players have to put in significant work away from the tables to succeed. Like most top pros, Koon spends a good part of each week studying different situations that might arise, trying to understand the logic behind the programs’ choices. “Solvers can’t tell you why they do what they do — they just do it,” he says. “So now it’s on the poker player to figure out why.”
The best players are able to reverse-engineer the A.I.’s strategy and create heuristics that apply to hands and situations similar to the one they’re studying. Even so, they are working with immense amounts of information. When I suggested to Koon that it was like endlessly rereading a 10,000-page book in order to keep as much of it in his head as possible, he immediately corrected me: “100,000-page book. The game is so damn hard.”
Not every player I spoke to is happy about the way A.I.-based approaches have changed the poker landscape. For one thing, while the tactics employed in most lower-stakes games today look pretty similar to those in use before the advent of solvers, higher-stakes competition has become much tougher. As optimal strategy has become more widely understood, the advantage in skill the very best players once held over the merely quite good players has narrowed considerably. But for Doug Polk, who largely retired from poker in 2017 after winning tens of millions of dollars, the change solvers have wrought is more existential. “I feel like it kind of killed the soul of the game,” Polk says, changing poker “from who can be the most creative problem-solver to who can memorize the most stuff and apply it.”
Piotrek Lopusiewicz, the programmer behind PioSOLVER, counters by arguing that the new generation of A.I. tools is merely a continuation of a longer pattern of technological innovation in poker. Before the advent of solvers, top online players like Polk used software to collect data about their opponents’ past play and analyze it for potential weaknesses. “So now someone brought a bigger firearm to the arms race,” Lopusiewicz says, “and suddenly those guys who weren’t in a position to profit were like: ‘Oh, yeah, but we don’t really mean that arms race. We just want our tools, not the better tools.’”
Tuesday, January 18, 2022
Interesting new paper about the perennial question of lateralization and timing, from Chris Kell's lab in Frankfurt https://t.co/AeoVe5x9Zo Differential contributions of the two human cerebral hemispheres to action timing— David Poeppel (@davidpoeppel) November 21, 2019
Abstract from the article: Anja Pflug, Florian Gompf, et al., Differential contributions of the two human cerebral hemispheres to action timing, eLife 2019;8:e48404 DOI: 10.7554/eLife.48404:
Rhythmic actions benefit from synchronization with external events. Auditory-paced finger tapping studies indicate the two cerebral hemispheres preferentially control different rhythms. It is unclear whether left-lateralized processing of faster rhythms and right-lateralized processing of slower rhythms bases upon hemispheric timing differences that arise in the motor or sensory system or whether asymmetry results from lateralized sensorimotor interactions. We measured fMRI and MEG during symmetric finger tapping, in which fast tapping was defined as auditory-motor synchronization at 2.5 Hz. Slow tapping corresponded to tapping to every fourth auditory beat (0.625 Hz). We demonstrate that the left auditory cortex preferentially represents the relatively fast rhythm in an amplitude modulation of low beta oscillations while the right auditory cortex additionally represents the internally generated slower rhythm. We show coupling of auditory-motor beta oscillations supports building a metric structure. Our findings reveal a strong contribution of sensory cortices to hemispheric specialization in action control.
Three against two is one of the most important rhythm ‘cells’ in all of music. What do I mean by three against two? You play three evenly spaced beats in one ‘stream’ in the same period of time that you play two evenly spaced beats in another ‘stream.’ It sounds simple enough, but the problem is that three and two have no common divisor (other than one), making the ‘evenly spaced’ part of the formula a bit tricky. The two patterns coincide on the first beat, but the second and third beats of the three-beat stream happen at different times from the second beat of the two-beat stream. And if you think that’s a lot of verbiage for something that ought to be simple, well then you’re beginning to get the idea.
[Figure: Three Against Two]
In a slow tempo it may serve you to think of the second eighth-note of the triplet as being subdivided into two sixteenths. After both hands have played the first note of their respective groups simultaneously, the place of the aforesaid sixteenth note is to be filled by the second note of the couplet.
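One way to see the arithmetic behind all that verbiage is to place both streams on a common grid of fractions of the measure. Here is a minimal sketch (mine, not anything from a music text), assuming a hypothetical `polyrhythm` helper:

```python
from fractions import Fraction

def polyrhythm(a, b):
    """Onset times, as fractions of one measure, for an a-against-b polyrhythm."""
    stream_a = [Fraction(i, a) for i in range(a)]   # a evenly spaced beats
    stream_b = [Fraction(i, b) for i in range(b)]   # b evenly spaced beats
    shared = sorted(set(stream_a) & set(stream_b))  # where the streams coincide
    return stream_a, stream_b, shared

threes, twos, together = polyrhythm(3, 2)
print(threes)    # beats of the three-stream: 0, 1/3, 2/3
print(twos)      # beats of the two-stream: 0, 1/2
print(together)  # only the downbeat coincides: 0
```

The smallest grid holding both streams has six slots per measure (the least common multiple of two and three), which is why the standard trick is to count in six: the three-stream falls on counts 1, 3, and 5, the two-stream on counts 1 and 4.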
Sunday, January 16, 2022
Dick Macksey’s library made the NYTimes today. Macksey died in July of 2019, but his library, or rather a simulacrum of it, lives on. That particular photo appeared on the web some time before Macksey’s death, two, three, I don’t know, maybe four years before. I don’t know when I first saw it, but I recognized it immediately. That library is legendary in the circle of Macksey’s colleagues, students and former students, and who knows who else. We’ve all seen it and marveled.
I took my first course with Dick Macksey in the Spring of 1966. It was on the autobiographical novel, and when we read Tristram Shandy he brought in a first edition for the class to see, nine small (quarto-sized) volumes. I likely visited his house that semester with the class. I remember seeing the three volumes of The Principia Mathematica on a chair in the entryway. But he’d not yet converted the garage into a library – that was a year later. After that I visited the library many times, sometimes with a class, sometimes after one of the films shown on Wednesday and Friday evenings in Levering Hall. Macksey would host discussions of the films, and further viewings – he had a screen and a 16mm projector in the library. The library seemed full at the time, but not nearly so full as it is in that photo.
If one looks, one can easily find photos of libraries on the web, public and institutional libraries, but home libraries as well. Many of them are magnificent, grander and more luxurious than Dick’s. They show well. But they don’t look used.
Dick’s library was used, of course, by Dick himself, but by others as well. That’s how the library looks, well used. And that despite the fact that there are no people in it. It appears inhabited, alive. It’s those lights and their relationship to the books and shelves. That photo reveals the library as the living embodiment of the inquiring mind.
I wonder how long it will keep spinning through the web?
Wednesday, January 12, 2022
Cara Buckley, “Don’t Just Watch: Team Behind ‘Don’t Look Up’ Urges Climate Action,” NYTimes, 1.11.22:
After the film premiered in December, climate scientists took to social media and penned opinion essays, saying they felt seen at last. Neil deGrasse Tyson tweeted that it seemed like a documentary. Several admirers likened the film to “A Modest Proposal,” the 18th-century satirical essay by Jonathan Swift.
Naysayers, meanwhile, said the comet allegory was lost on those who took it literally, and questioned why Mr. McKay hadn’t been more straightforward about global warming. Writing in The New Yorker, Richard Brody said if scientists didn’t like what film critics had to say about science, “the scientists should stop meddling with art.”
Either way, at a time when leaders are failing to take the necessary measures to tackle the planetary emergency, and the volume and ferocity of so-called “natural” disasters reach ever graver peaks, there is little question that the movie has struck a pretty big nerve. According to Netflix, which self-reports its own figures and was the studio behind the film and its distributor, the movie is one of its most popular films ever, amassing an unprecedented 152 million hours viewed in one week.
“The goal of the movie was to raise awareness about the terrifying urgency of the climate crisis, and in that, it succeeded spectacularly,” said Genevieve Guenther, the founder and director of End Climate Silence, an organization that promotes media coverage of climate change.
Tuesday, January 11, 2022
Powers of Ten, the short 1968 documentary by Ray and Charles Eames, has been updated to reflect current knowledge:
Note, however, that while the original version zooms in to the micro-scale world after having zoomed out to the macro-scale, this new version does only the zoom-out.
Here's a post that presents the original film along with a bit of commentary. H/t 3QD.
Monday, January 10, 2022
The sexuality which Beethoven had evoked and expressed so directly in the second and third variations of the “Arietta” was quickly sublimated, urging composers to ever more subtle and complex chromatic games, stretching movements over longer and longer time periods, from ten minutes to twenty, to half an hour or more for a single movement of a Mahler symphony. Haydn and Mozart wrote complete symphonies that weren’t that long. It did the same thing to opera, most particularly to Wagner’s operas. Wagner would stretch opera to four, five, or six hours, flowing and ebbing, building and collapsing, and ultimately exhausting. But never really fulfilling.
Friday, January 7, 2022
It seems to me, off-hand, that Quine's discussion has some bearing on John Horgan's concerns about the end of science. It may also have some bearing on the notion of superintelligent machines. After all, presumably those machines can do things that humans cannot. Could they know things that we cannot?
After a fair amount of discussion, Quine concludes (c. 25:00):
Questions, let us remember, are in language. Language is learned by people from people only in relation ultimately to observable circumstances of utterance. The relation of language to observation is often very devious, but observation is finally all there is for language to be anchored to. If a question could, in principle, never be answered, then one feels that language has gone wrong. Language has parted its mooring and the question has no meaning.
On this philosophy, of course, our question has a sweeping answer. The question was whether there are things man could never know. The question was whether there are questions, meaningful questions, that man could in principle never answer. On this philosophy the answer to this question of questions is no.
That's the end of Quine's remarks. The rest of the video is given over to an interview with an anthropologist.
H/t 3 Quarks Daily.
Tuesday, January 4, 2022
The film’s creators say it’s a satire about climate change. But what do they know? Yglesias says:
If you insist on listening to the creators and seeing it as about climate, then while you might appreciate a few moments, I think you’ll mostly be annoyed and then start saying “but it’s not even funny” blah blah blah.
But that’s not the only way to read a text.
In policy terms, there’s not some sharp tradeoff between taking steps to minimize the risks of climate catastrophe and taking steps to minimize other kinds of catastrophes, and I don’t love framings that put it that way. But the use of a story about a comet collision as a metaphor for climate change — which I actually think works really well as a direct lesson about the risk of a comet hitting the planet as depicted in the film — struck me as funny. And I really encourage people to watch it with an open mind and see it as part of the cinema of existential risk and not just quibble about climate change.
I agree. It works as a story about a collision with a comet, but it works best if we read it more broadly, much more broadly.
On climate change:
The fundamental problem of climate change is that it involves asking people to make changes now for the sake of preventing harms that occur largely in the future to people living in other countries. It’s a genuine problem from hell, and it’s not actually solved by understanding the science or believing the factual information. This is exactly why ideas like McKay’s Manhattan Project [on carbon capture] are so important. While there is a lot we can do to improve the situation with more aggressive deployment of the technology we have, we also really do need more technological breakthroughs that will make lots of tradeoffs less painful and make progress easier.
Yglesias goes on to talk about the existential risk actually posed by comets, by supervolcanoes, and by future pandemics.
Back to the film:
... in this case the message is much bigger than climate change. There is a range of often goofy-sounding threats to humanity that don’t track well onto our partisan cleavages or culture war battles other than that addressing them invariably involves some form of concerted action of the sort that conservatives tend to disparage. And this isn’t a coincidence. If existential threats were materializing all the time, we’d be dead and not streaming satirical films on Netflix. So the threats tend to sound “weird,” and if you talk a lot about them you’ll be “weird.” They don’t fit well into the grooves of ordinary political conflict because ordinary political conflict is about stuff that happens all the time.
So read Ord’s book “The Precipice” and find out all about it. Did you know that the Biological Weapons Convention has just four employees? I do because it’s in the book. Let’s maybe give them six?
For all that, though, I am genuinely shocked that the actual real-world emergence of SARS-Cov-2 has not caused more people to care about pandemic risk. The havoc that this pandemic has wreaked just in terms of economic harm and annoyance has been devastating.
I have argued that Disney’s Fantasia is a signal work in a film culture that is fundamentally transnational. By transnational I don’t mean universal, or anything like it. I mean only that film culture at that time, the mid-20th Century, operated across national borders and, indeed, had been doing so since its inception.
Note: The Wikipedia has a short entry on transnational cinema:
A key argument of Transnational cinema is the necessity for a redefinition, or even refutation, of the concept of a national cinema. National identity has been posited as an 'imaginary community' that in reality is formed of many separate and fragmented communities defined more by social class, economic class, sexuality, gender, generation, religion, ethnicity, political belief and fashion, than nationality.