Tuesday, September 22, 2020

High on a low hill in Hoboken on the banks of the Hudson

Why, in the course of an intellectual life, can it take years to see the obvious?

I’m thinking of my own intellectual life, of course. And I have two examples in mind, 1) my realization that literary form was at the center of my interest in literature, and 2) my recent realization about the impossibility of direct brain-to-brain thought transmission.

Literary form

My first major piece of intellectual work was my 1972 MA thesis on “Kubla Khan.” That thesis focused on the poem’s form; in a sense it set the direction for my career and led me to focus on computation and the cognitive sciences. In graduate school at SUNY Buffalo I wrote papers that were concerned with form, on Sir Gawain and the Green Knight, Much Ado About Nothing, Othello, “The Cat and the Moon”, and Wuthering Heights. I revised the Sir Gawain paper and published it at the time [1], and some years later material from the two Shakespeare papers was the basis of a publication [2]. I also published a paper about narrative form that was based, in part, on my 1978 dissertation [3]. So I was examining form from the beginning, and yet I didn’t realize that it was the center of my focus.

It wasn’t until the mid-1990s that I realized that it was form I had been looking at all along [4]. And that realization came about indirectly. In cruising the web I discovered that the Stanford Humanities Review had devoted an issue to cognitive science and literature. Herbert Simon had written an article setting forth his views [5] and 33 critics had responded. Obviously there was now a group of literary scholars interested in cognitive science. I read their stuff, went to a couple of conferences some of these people attended (Haj Ross’s Languaging conferences at North Texas), and realized that these people were not at all interested in the things I was.

First and foremost, their version of cognitive science did not include computation, which was central to my interest. It was in the course of thinking that through that I realized that they weren’t interested in form either, but I was. And now that I thought about it, form was central to my work in literature, wasn’t it? That puts us into the late 1990s, when I took a detour from literature to write a book on music. So it wasn’t until the early 2000s that I was able to focus on form, when I returned to “Kubla Khan” and then “This Lime-Tree Bower My Prison” with articles in PsyArt: An Online Journal for the Psychological Study of the Arts, culminating in a 2006 article on literary morphology, where I put form front and center [6].

If form was there from the beginning, why did it take me so long to see it?

Brain-to-brain thought transmission

The second case is brain-to-brain thought transmission. I first took the matter up in January 2002, when I posted a thought experiment to Brainstorms, an online community established by Howard Rheingold. In that experiment I imagined we had the technology to link brains without harming brain tissue and asked: how do we determine which neurons to link together in the respective brains? Since brains are unique, there is no inherent mapping between the neurons of one brain and those of another. This is unlike the situation with gross body parts, where one person’s right thumb corresponds to another person’s right thumb, and so forth.

It wasn’t until a decade later, in 2013, when I put the thought experiment online [7], that I hit on a much simpler and more direct counterargument: How does a brain tell where a given impulse comes from? Brains have no mechanism for distinguishing between endogenous and exogenous impulses.

Why did it take me a decade to move from the complex argument to the simple one?

What’s going on?

I don’t really know. But it must be a function of how one’s mind is set up when one first approaches a problem. In the case of literary texts, literary criticism is about meaning; that’s what you’re taught in school. More than that, when we read any text for whatever purpose, we’re reading it for meaning. That’s the orientation. In literary study one learns about form, of course, but you don’t focus on it in an analytic or descriptive way.

So, to focus on form I had to break away from my prior training, but also from my natural inclination toward texts. And, come to think of it, this is not simply a matter of will, but of method as well. By the time I made the break I had several examples of close formal analysis from my own work. Those gave me a standpoint from which to make the break.

Is it the same with brain-to-brain thought transmission? That’s not a topic that normally comes up in the study of neuroscience. No one is examining what happens when you link two brains together, so we don’t have any specific framework at all. What do we do? We apply a default framework of some kind. Where do we get that framework? Most likely from our experience with electrical and electronic circuits, especially in computers.

And that’s not a useful framework at all. In fact it gets in the way, because brain circuitry is not at all like electronic circuitry. Anyone who studies neuroscience knows this, of course, but they may not have that knowledge linked strongly to this kind of problem. In my case I had my conversations with Walter Freeman on the uniqueness of brains, and that focused my attention on neurons and on comparing brains at the single-neuron level. That was enough to bring me to the realization that we have no coherent and consistent way to link two brains together on the scale of tens of millions of neurons.

But that wasn’t enough to take me the whole way to the realization that there is no way for a brain to identify foreign spikes. I did do something like that in my review of Aunger’s The Electric Meme, which I also wrote in 2002 [8]. So why did it take me a decade to connect the two? What was my new standpoint?

References

[1] William Benzon, Sir Gawain and the Green Knight and the Semiotics of Ontology, Semiotica, 3/4, 1977, 267-293, https://www.academia.edu/238607/Sir_Gawain_and_the_Green_Knight_and_the_Semiotics_of_Ontology/.

[2] William Benzon, At the Edge of the Modern, or Why is Prospero Shakespeare's Greatest Creation? Journal of Social and Evolutionary Systems, 21 (3): 259-279, 1998, https://www.academia.edu/235334/At_the_Edge_of_the_Modern_or_Why_is_Prospero_Shakespeares_Greatest_Creation.

[3] William Benzon, The Evolution of Narrative and the Self, Journal of Social and Evolutionary Systems, 16( 2): 129-155, 1993, https://www.academia.edu/235114/The_Evolution_of_Narrative_and_the_Self.

[4] See these two blog posts, William Benzon, How I discovered the structure of “Kubla Khan” & came to realize the importance of description, blog post, New Savanna, October 9, 2017, http://new-savanna.blogspot.com/2017/10/how-i-discovered-structure-of-kubla.html; Things change, but sometimes they don’t: On the difference between learning about and living through [revising your priors and the way of the world], blog post, New Savanna, July 26, 2020, http://new-savanna.blogspot.com/2020/07/things-change-but-sometimes-they-dont.html.

[5] Herbert Simon, “Literary Criticism: A Cognitive Approach.” Stanford Humanities Review 4, No. 1, (1994) 1-26.

[6] William Benzon, Literary Morphology: Nine Propositions in a Naturalist Theory of Form, PsyArt: An Online Journal for the Psychological Study of the Arts, August 2006, Article 060608, https://www.academia.edu/235110/Literary_Morphology_Nine_Propositions_in_a_Naturalist_Theory_of_Form.

[7] William Benzon, Why we'll never be able to build technology for Direct Brain-to-Brain Communication, blog post, New Savanna, May 2013, http://new-savanna.blogspot.com/2013/05/why-well-never-be-able-to-build.html.

[8] William L. Benzon, Colorless Green Homunculi, Human Nature Review 2 (2002) 454-462, https://www.academia.edu/41181169/Colorless_Green_Homunculi.

Monday, September 21, 2020

Why we have four limbs and five digits on each

Some of the locals know this area as Narnia [graffiti]




Measuring culture worldwide with a sample of two billion humans using data from Facebook

Nick Obradovich, Ömer Özak, Ignacio Martín, et al., Expanding the Measurement of Culture with a Sample of Two Billion Humans, NBER Working Paper No. 27827, Issued in September 2020.
Abstract: Culture has played a pivotal role in human evolution. Yet, the ability of social scientists to study culture is limited by the currently available measurement instruments. Scholars of culture must regularly choose between scalable but sparse survey-based methods or restricted but rich ethnographic methods. Here, we demonstrate that massive online social networks can advance the study of human culture by providing quantitative, scalable, and high-resolution measurement of behaviorally revealed cultural values and preferences. We employ publicly available data across nearly 60,000 topic dimensions drawn from two billion Facebook users across 225 countries and territories. We first validate that cultural distances calculated from this measurement instrument correspond to traditional survey-based and objective measures of cross-national cultural differences. We then demonstrate that this expanded measure enables rich insight into the cultural landscape globally at previously impossible resolution. We analyze the importance of national borders in shaping culture, explore unique cultural markers that identify subnational population groups, and compare subnational divisiveness to gender divisiveness across countries. The global collection of massive data on human behavior provides a high-dimensional complement to traditional cultural metrics. Further, the granularity of the measure presents enormous promise to advance scholars' understanding of additional fundamental questions in the social sciences. The measure enables detailed investigation into the geopolitical stability of countries, social cleavages within both small and large-scale human groups, the integration of migrant populations, and the disaffection of certain population groups from the political process, among myriad other potential future applications.
On the use of data from Facebook:
Facebook places particular importance in classifying the interests of its users. As a result, the company has inadvertently built the largest platform for the measurement of culture in existence (see Figure 1B). Fortunately for scholars, Facebook makes this information accessible to prospective marketers via a marketing Application Programming Interface (API). Using information drawn from users’ self-reported interests, clicking behaviors on Facebook, likes on Facebook, software downloads, GPS location, behavior on other sites that employ Facebook ads (Figure 1B), this API provides the ability to create and analyze social groups of interest along hundreds of thousands of interest dimensions and down to very fine spatial and temporal resolution (the zip code-by-day level in the US). Table S4 illustrates examples of cultural categories along with corresponding Facebook interests both for traditional and non-traditional cultural elements. By making its platform open to those interested in marketing to its users, Facebook has enabled scholars to interrogate its measures of global human interests and construct freely available measures of culture.

We use data gleaned from scraping the Facebook Marketing API to construct a high-dimensional measure of culture. We gathered nearly 60,000 diverse interests by sequentially interrogating Facebook’s platform and then constructed – for each administrative unit in our analysis – a vector of the share of individuals in that unit that held each interest (see Methods for added detail). Importantly, each interest on the platform is indexed by a unique identifier, allowing for consistency across languages globally. We use these data to investigate culture at the country, subnational, and local levels.
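To make the construction concrete, here is a minimal sketch, in Python, of the kind of measure the authors describe: an interest-share vector for each administrative unit, with units compared by cosine distance. The interest identifiers, audience counts, and population figures below are invented for illustration; the paper's actual pipeline draws audience sizes from the Facebook Marketing API across nearly 60,000 interests.

```python
# Minimal sketch (hypothetical data): represent each administrative unit as a
# vector of interest shares, then compare units with cosine distance, roughly
# the kind of measure the paper builds from Marketing API audience counts.
import numpy as np

# Hypothetical audience counts: {unit: {interest_id: users holding that interest}}
audience = {
    "unit_A": {"6003107902433": 120_000, "6003139266461": 45_000, "6003020834693": 30_000},
    "unit_B": {"6003107902433": 80_000,  "6003139266461": 90_000, "6003020834693": 10_000},
}
population = {"unit_A": 500_000, "unit_B": 400_000}  # reachable users per unit

interests = sorted({i for counts in audience.values() for i in counts})

def share_vector(unit):
    """Share of the unit's users holding each interest (one dimension per interest)."""
    return np.array([audience[unit].get(i, 0) / population[unit] for i in interests])

def cosine_distance(u, v):
    """1 minus the cosine similarity of two interest-share vectors."""
    return 1.0 - np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

a, b = share_vector("unit_A"), share_vector("unit_B")
print(f"cultural distance (cosine) between unit_A and unit_B: {cosine_distance(a, b):.3f}")
```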
H/t Tyler Cowen.

Sunday, September 20, 2020

Pulling up to the dock

Hardware path dependence and the future of AI

Sara Hooker, The Hardware Lottery, arXiv:2009.06489v1 [cs.CY] 14 Sep 2020
Abstract: Hardware, systems and algorithms research communities have historically had different incentive structures and fluctuating motivation to engage with each other explicitly. This historical treatment is odd given that hardware and software have frequently determined which research ideas succeed (and fail). This essay introduces the term hardware lottery to describe when a research idea wins because it is suited to the available software and hardware and not because the idea is superior to alternative research directions. Examples from early computer science history illustrate how hardware lotteries can delay research progress by casting successful ideas as failures. These lessons are particularly salient given the advent of domain specialized hardware which makes it increasingly costly to stray off of the beaten path of research ideas. This essay posits that the gains from progress in computing are likely to become even more uneven, with certain research directions moving into the fast-lane while progress on others is further obstructed.
From the article (pp. 2-3):
Today, in contrast to the necessary specialization in the very early days of computing, machine learning researchers tend to think of hardware, software and algorithm as three separate choices. This is largely due to a period in computer science history that radically changed the type of hardware that was made and incentivized hardware, software and machine learning research communities to evolve in isolation.
Hooker goes on to talk about “the general purpose era”, based on the so-called von Neumann architecture of a single processor accessing large banks of memory:
The predictable increases in compute and memory every two years meant hardware design became risk-adverse. Even for tasks which demanded higher performance, the benefits of moving to specialized hardware could be quickly eclipsed by the next generation of general purpose hardware with ever growing compute. (p. 3)
She goes on to point out that the algorithmic concepts underlying neural networks were invented between 1963 and 1989, but implementation was hampered by the single-processor architecture, which was not well suited to the kind of processing neural networks require. Then in the early 2000s graphics processing units (GPUs), which had been introduced in the 1970s to accelerate graphics for video games, movies, and animation, were repurposed for processing neural networks. Rapid graphics processing requires the parallel execution of relatively simple instruction sequences, and so do neural networks. Thus (p. 6): “A striking example of this jump in efficiency is a comparison of the now famous 2012 Google paper which used 16,000 CPU cores to classify cats (Le et al., 2012) to a paper published a mere year later that solved the same task with only two CPU cores and four GPUs (Coates et al., 2013).”

Languages built for single-processor architectures were not well suited for use on GPUs, which required the development of new languages. Meanwhile, most mainstream research was focused on symbolic approaches, which were compatible with single-processor architectures and existing languages such as LISP and PROLOG.

Back on the hardware side, tensor processing units (TPUs) have more recently been developed for deep learning. And so it goes.
In many ways, hardware is catching up to the present state of machine learning research. Hardware is only economically viable if the lifetime of the use case lasts more than 3 years (Dean, 2020). Betting on ideas which have longevity is a key consideration for hardware developers. Thus, Co-design effort has focused almost entirely on optimizing an older generation of models with known commercial use cases. For example, matrix multiplies are a safe target to optimize for because they are here to stay — anchored by the widespread use and adoption of deep neural networks in production systems. (p. 7)
But (p. 7), “There is still a separate question of whether hardware innovation is versatile enough to unlock or keep pace with entirely new machine learning research directions.”

You see the problem. We are well past the era when one hardware model more or less suits all applications. Specialized hardware is better for various purposes, and it requires suitably specialized software. But hardware development is more expensive and takes more time, and so it is biased toward potential commercial markets where development costs can be recouped through sales.

Hooker goes on to point out that, however successful artificial neural networks have been so far, they are quite different from biological networks (pp. 8-9):
... rather that there are clearly other models of intelligence which suggest it may not be the only way. It is possible that the next breakthrough will require a fundamentally different way of modelling the world with a different combination of hardware, software and algorithm. We may very well be in the midst of a present day hardware lottery.
That is, current hardware may well stand in the way of further advances. After further discussion of both hardware and software issues, Hooker concludes (p. 11):
Today the hardware landscape is increasingly heterogeneous. This essay posits that the hardware lottery has not gone away, and the gap between the winners and losers will grow increasingly larger. In order to avoid future hardware lotteries, we need to make it easier to quantify the opportunity cost of settling for the hardware and software we have.

Before Photoshop

Saturday, September 19, 2020

Why we sleep


Abstract from the linked article (which is not behind a paywall):
Sleep serves disparate functions, most notably neural repair, metabolite clearance and circuit reorganization. Yet the relative importance remains hotly debated. Here, we create a novel mechanistic framework for understanding and predicting how sleep changes during ontogeny and across phylogeny. We use this theory to quantitatively distinguish between sleep used for neural reorganization versus repair. Our findings reveal an abrupt transition, between 2 and 3 years of age in humans. Specifically, our results show that differences in sleep across phylogeny and during late ontogeny (after 2 or 3 years in humans) are primarily due to sleep functioning for repair or clearance, while changes in sleep during early ontogeny (before 2 or 3 years) primarily support neural reorganization and learning. Moreover, our analysis shows that neuroplastic reorganization occurs primarily in REM sleep but not in NREM. This developmental transition suggests a complex interplay between developmental and evolutionary constraints on sleep.

Let's get lost



Beyond "AI" – toward a new engineering discipline

Another bump to the top, this time because I'm thinking about Facebook, the future of social media, and the need for new institutional actors to counteract both for-profit social media companies and the government. AI in the form of Intelligent Infrastructure (see below) surely has a role to play here.

* * * * *
 
I'm bumping this to the top of the queue in response to remarks by Ted Underwood on Twitter and by Willard McCarty in the Humanist Discussion Group.

* * * * *

Mark Liberman at Language Log posted a link to an excellent article by Michael Jordan, "Artificial Intelligence — The Revolution Hasn’t Happened Yet", Medium 4/19/2018. Here are some passages.
Whether or not we come to understand “intelligence” any time soon, we do have a major challenge on our hands in bringing together computers and humans in ways that enhance human life. While this challenge is viewed by some as subservient to the creation of “artificial intelligence,” it can also be viewed more prosaically — but with no less reverence — as the creation of a new branch of engineering. Much like civil engineering and chemical engineering in decades past, this new discipline aims to corral the power of a few key ideas, bringing new resources and capabilities to people, and doing so safely. Whereas civil engineering and chemical engineering were built on physics and chemistry, this new engineering discipline will be built on ideas that the preceding century gave substance to — ideas such as “information,” “algorithm,” “data,” “uncertainty,” “computing,” “inference,” and “optimization.” Moreover, since much of the focus of the new discipline will be on data from and about humans, its development will require perspectives from the social sciences and humanities.

While the building blocks have begun to emerge, the principles for putting these blocks together have not yet emerged, and so the blocks are currently being put together in ad-hoc ways.
He goes on to observe that the issues involved are too often discussed under the rubric of "AI", which has meant various things at various times. The phrase was coined in the 1950s to denote the creation of computing technology possessing a human-like mind. Jordan calls this "human-imitative AI" and notes that it was largely an academic enterprise whose objective, the creation of "high-level reasoning and thought", remains elusive. In contrast:
The developments which are now being called “AI” arose mostly in the engineering fields associated with low-level pattern recognition and movement control, and in the field of statistics — the discipline focused on finding patterns in data and on making well-founded predictions, tests of hypotheses and decisions.
This work is often packaged as machine learning (ML).
Since the 1960s much progress has been made, but it has arguably not come about from the pursuit of human-imitative AI. [...] Although not visible to the general public, research and systems-building in areas such as document retrieval, text classification, fraud detection, recommendation systems, personalized search, social network analysis, planning, diagnostics and A/B testing have been a major success — these are the advances that have powered companies such as Google, Netflix, Facebook and Amazon.

One could simply agree to refer to all of this as “AI,” and indeed that is what appears to have happened. Such labeling may come as a surprise to optimization or statistics researchers, who wake up to find themselves suddenly referred to as “AI researchers.” But labeling of researchers aside, the bigger problem is that the use of this single, ill-defined acronym prevents a clear understanding of the range of intellectual and commercial issues at play.
He then goes on to coin two more terms, "Intelligence Augmentation" (IA) and "Intelligent Infrastructure" (II). In the first
...computation and data are used to create services that augment human intelligence and creativity. A search engine can be viewed as an example of IA (it augments human memory and factual knowledge), as can natural language translation (it augments the ability of a human to communicate). Computing-based generation of sounds and images serves as a palette and creativity enhancer for artists.
The second involves
...a web of computation, data and physical entities exists that makes human environments more supportive, interesting and safe. Such infrastructure is beginning to make its appearance in domains such as transportation, medicine, commerce and finance, with vast implications for individual humans and societies.
And now we get to his central question:
Is working on classical human-imitative AI the best or only way to focus on these larger challenges? Some of the most heralded recent success stories of ML have in fact been in areas associated with human-imitative AI — areas such as computer vision, speech recognition, game-playing and robotics. So perhaps we should simply await further progress in domains such as these. There are two points to make here. First, although one would not know it from reading the newspapers, success in human-imitative AI has in fact been limited — we are very far from realizing human-imitative AI aspirations. Unfortunately the thrill (and fear) of making even limited progress on human-imitative AI gives rise to levels of over-exuberance and media attention that is not present in other areas of engineering.

Second, and more importantly, success in these domains is neither sufficient nor necessary to solve important IA and II problems.
And he goes on to explore that theme.

Moreover,
...the current focus on doing AI research via the gathering of data, the deployment of “deep learning” infrastructure, and the demonstration of systems that mimic certain narrowly-defined human skills — with little in the way of emerging explanatory principles — tends to deflect attention from major open problems in classical AI. These problems include the need to bring meaning and reasoning into systems that perform natural language processing, the need to infer and represent causality, the need to develop computationally-tractable representations of uncertainty and the need to develop systems that formulate and pursue long-term goals. These are classical goals in human-imitative AI, but in the current hubbub over the “AI revolution,” it is easy to forget that they are not yet solved.
Coming to the end, he makes an interesting historical observation:
It was John McCarthy (while a professor at Dartmouth, and soon to take a position at MIT) who coined the term “AI,” apparently to distinguish his budding research agenda from that of Norbert Wiener (then an older professor at MIT). Wiener had coined “cybernetics” to refer to his own vision of intelligent systems — a vision that was closely tied to operations research, statistics, pattern recognition, information theory and control theory. McCarthy, on the other hand, emphasized the ties to logic. In an interesting reversal, it is Wiener’s intellectual agenda that has come to dominate in the current era, under the banner of McCarthy’s terminology. (This state of affairs is surely, however, only temporary; the pendulum swings more in AI than in most fields.)

We need to realize that the current public dialog on AI — which focuses on a narrow subset of industry and a narrow subset of academia — risks blinding us to the challenges and opportunities that are presented by the full scope of AI, IA and II.

This scope is less about the realization of science-fiction dreams or nightmares of super-human machines, and more about the need for humans to understand and shape technology as it becomes ever more present and influential in their daily lives.
His concluding paragraphs:
Moreover, we should embrace the fact that what we are witnessing is the creation of a new branch of engineering. The term “engineering” is often invoked in a narrow sense — in academia and beyond — with overtones of cold, affectless machinery, and negative connotations of loss of control by humans. But an engineering discipline can be what we want it to be.

In the current era, we have a real opportunity to conceive of something historically new — a human-centric engineering discipline.

I will resist giving this emerging discipline a name, but if the acronym “AI” continues to be used as placeholder nomenclature going forward, let’s be aware of the very real limitations of this placeholder. Let’s broaden our scope, tone down the hype and recognize the serious challenges ahead.

Jersey City, on Cornilson Avenue looking East in the early morning sun

Facebook or freedom, Part 3: The game goes on [Media Notes, special edition #1]

It’s been 12 days since my last Facebook post, on September 7, and I’ve had at least four 48-hour reprieves since then, maybe five – I’ve not been keeping a strict count. What do I mean by a reprieve? I switch to a new page on FB and Wham! they hit me with the new interface without warning. But so far they’ve also given me the option of switching back, which I’ve done. When I do so, however, they always ask why I’m doing it: is it because you don’t like something or want something else? No, I reply, it’s because I don’t like you arbitrarily messing with my mind.

I assume that one of these days I’m not going to get that option. When, though, and why do they keep giving me 48 hours and then not following through? Do they want to keep on collecting my comment – I’ve only got one – on why I’m going back? There’s got to be some logic here.

I’ve been watching The Social Dilemma, a new docudrama on Netflix that I don’t much like (more on that later). It’s about social media, Facebook included, and how the tech wizards behind the screen do everything they can to control us, including running little experiments where they make some change in what we see and note how we respond. That makes sense. It follows that my behavior is being closely monitored, not necessarily by a person – I can’t imagine that I’m important enough – but more likely by some program module.

Is that module primed for each individual user? That is, FB has been tracking our behavior on FB and across the web, right? So it’s built up some statistical profile of each person’s activity. I’d assume it’s making guesses about how we’re going to behave in various circumstances. We’ll either confirm the guess, thus confirming its (Bayesian) priors, or not, in which case it updates those priors. Either way the algorithm ‘learns’ something.
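For what it’s worth, here is a minimal sketch, in Python, of the sort of Bayesian updating I’m imagining: a Beta-Bernoulli model that starts with a prior about whether a user will revert to the old interface and sharpens it after each observed choice. Everything here (the prior, the observations, the event being predicted) is an assumption made up for illustration; it says nothing about Facebook’s actual systems.

```python
# Toy Beta-Bernoulli update (hypothetical): start with a prior belief about how
# likely this user is to revert to the old interface, then update that belief
# after each observed choice. Illustration only, not Facebook's actual code.

def update_beta(alpha: float, beta: float, reverted: bool):
    """Return the updated Beta(alpha, beta) parameters after one observation."""
    return (alpha + 1, beta) if reverted else (alpha, beta + 1)

alpha, beta = 2.0, 2.0  # weak prior: reverting and staying look equally likely
observations = [True, True, True, False, True]  # user reverted 4 times out of 5

for reverted in observations:
    alpha, beta = update_beta(alpha, beta, reverted)
    print(f"P(reverts next time) = {alpha / (alpha + beta):.2f}")  # posterior mean
```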

So, what kinds of guesses is FB making about my behavior with respect to this interface change? When people make a comment about why they want to keep the old interface, how does FB deal with those comments? Are they looking for something in particular? I’d assume that very few – none? – get read by a human. What’s their AI module checking for?

What’s their plan for the next time they switch me over? Are they going to give me the option of switching back? If so, what’s their prediction about whether or not I’ll accept the offer? If they predict that, yes, I’ll switch back, do they already know how many more chances they’ll give me to stay with the old interface?

I don’t know. Just how sophisticated is their manipulation software? I don’t know that either. In a sense, neither do they.

* * * * *

And that brings me to The Social Dilemma. I’m about two-thirds of the way through, and it took me two, maybe three, sittings to get that far. Will I finish it? I don’t know, but maybe Facebook, Google, and of course Netflix know, or at least have their guesses. Maybe they’re even betting on my behavior. Now there’s a concept: have the different social networks bet on user behavior. How would the bets be paid off? Shares of stock? User information?

The Social Dilemma switches back and forth between scenes where various experts from industry and academia tell us how social media – Facebook, Twitter, Google, Instagram, etc. – tracks and manipulates our behavior, and dramatized scenes where we see people respond to the manipulations of a nefarious AI, personified by Vincent Kartheiser. I don’t feel that I’m learning anything new. Yes, yes, they’re doing all this stuff and yes, yes, they’re really good at it. But there’s no real detail, no substantial argumentation, just all this assertion sandwiched in between this cheesy dramatization, which is useless.

Coming into this program I KNEW there was a problem: tracking, manipulation, fragmentation and polarization in the civic sphere, and so forth. I was looking for something to give form and substance to all these things I already know in a loose and baggy way. The Social Dilemma isn’t doing that.

Three useful reviews:
The Oremus piece reminded me of Marcuse's notion of repressive desublimation, where an oppressive regime allows the expression of some dissent (desublimation) but not so much as to threaten the regime in any material way (repressive). It's a kind of tax the oppressors levy on the oppressed for the privilege of speaking truth to power, but nothing more than speech.

For example, and from Oremus' article:
But if The Social Dilemma largely succeeds in answering its opening question (“What’s the problem?”), there’s a second, crucial, stage-setting scene that the film seems to forget about as it goes on. It’s when the film’s central real-life figure, the “humane tech” advocate Tristan Harris, recounts how he grew disillusioned with his work as a young designer at Google, and eventually wrote an explosive internal presentation that rocked the company to its core — or so it seemed. The presentation argued that Google’s products had grown addictive and harmful, and that the company had a moral responsibility to address that. It was passed around frantically within the company’s ranks, quickly reached the CEO’s desk, and Harris’ colleagues inundated him with enthusiastic agreement. “And then… nothing,” Harris recalls. Having acknowledged ruefully that their work was hurting people, and promising to do better, Googlers simply went back to their work as cogs in the profit machine. What Harris thought had served as a “call to arms” was really just a call to likes and faves — workplace slacktivism.
But also, as an example of the film's narrow tech-bro-centric POV:
As the activist Evan Greer of Fight for the Future points out, the film almost entirely ignores social media’s power to connect marginalized young people, or to build social movements such as Black Lives Matter that challenge the status quo. This omission is important, not because it leaves the documentary “one-sided,” as some have complained, but because understanding social media’s upsides is critical to the project of addressing its failures. (I wrote in an earlier newsletter that the BLM protests should remind us why social media is worth fixing.)
* * * * *

Meanwhile, why not let people have the interface they want? In fact, why not offer several interfaces? After all, Facebook is a huge company; they could afford to do this, no?

All Facebook needs is to be able to pass ads to users, make suggestions, and track usage. Does everyone have to have the same interface to make this happen?

But then there’s that general principle: We’ve got you under our thumb. That makes sense. I don’t like it at all – which is why I’m putting up (ultimately futile) resistance – but I understand it.

Somewhere out on the web there’s got to be images of Zuckerberg tricked out in Borg cyber gear.

Thursday, September 17, 2020

Harry James plays the bejesus out of an insane arrangement of "St. Louis Blues" (1946)



It's hard to describe what's going on here, but it's obviously from a movie (Do You Love Me? 1946). The introduction is silly and pretentious, but James shows his blues chops at 0:33. From there to the end it's up and down, and James is always so down he's up. Or is it so up he's down?

Let's take a closer look, for this is a performance that has layers within layers, worlds within worlds. The tune itself is one of the best-known tunes in jazz and pop, "St. Louis Blues," by W. C. Handy. Handy was a trumpeter, composer and band leader who was born in 1873 and died in 1958. "St. Louis Blues" was published in 1914 and was no ordinary blues. It had three distinct melodies, two based on the 12-bar blues form and a 16-bar melody based on a habanera rhythm, giving the song what Jelly Roll Morton called that "Latin tinge."

The audience for this movie would surely have been familiar with James and with "St. Louis Blues," which had been recorded many times by many artists, including W. C. Handy himself, the Original Dixieland Jazz Band, Bessie Smith with Louis Armstrong, Fats Waller, Cab Calloway, Rudy Vallee (the sheet music with his face on the front includes ukulele tablature), Bing Crosby with Duke Ellington, Benny Goodman, Django Reinhardt, Guy Lombardo, and on and on – you get the idea. So the audience knows that "St. Louis Blues" is not the kind of song that goes with a symphony orchestra sitting in front of classical pillars. 

Moreover, my Spidey sense tells me that a movie with that song in that setting probably features a conflict between "straight" music and "hot" swing, a theme played out in many movies and cartoons from the 1930s through the 1940s. Note that this theme is not simply about music. It's also about social class and about America vs. Europe.

The Wikipedia plot summary for the movie doesn't say much, but it says enough to confirm my intuition:
Jimmy Hale (Dick Haymes) is a successful singer. He chases Katharine "Kitten" Hilliard (Maureen O'Hara), a prim, bespectacled music school dean who, after traveling to the big city, transforms herself into a desirable, sophisticated lady. Jimmy isn't the only one eager to win Katharine's affections: it turns out that the smooth-as-silk trumpeter and bandleader Barry Clayton (Harry James) has designs on Katharine as well.
The plot summary provided by the American Film Institute is considerably more detailed and describes this scene early in the film:
Katherine then leaves for New York to consult with Dr. Herbert Benham, a composer and critic, who is to be guest conductor at the Festival. On the crowded train into New York, Katherine is forced to stand in the vestibule of a car reserved for Barry Clayton's band, and he invites her in. Barry and the band attempt to entertain her with a swing number but, much to Barry's chagrin, she announces that she doesn't like it, prompting Barry to declare that she has "ice water in her veins."
There you have it, the squares vs. the hep cats all tangled up in romance. This performance of "St. Louis Blues" takes place late in the film, so the audience has been watching the battle of the musical styles play out for over an hour by this time. They know what they're going to hear, a highly polished pastiche featuring some superb trumpet playing by Harry James.

This is not great music. But it is well-crafted and highly polished. That's what makes it worthy of our attention. Such arrangements were served up in movie after movie by staff arrangers. This is routine musical craftsmanship of a very high order. It's a craft that takes conceptions and techniques from different musical traditions and remixes them, thereby laying a foundation for the emergence of new forms and styles.

The arrangement begins with a legit introduction where the strings and brass play two phrases from one of the two blues themes in a straight, almost march-like, style, not a hint of swing. This quickly gives way to a clarinet melody (c. 0:17) that drops down low and works its way up in a series of arpeggios – I can't help but think this is a sly reference to, and revision of, the clarinet line that opens "Rhapsody in Blue". That in turn (0:27) gives way to a fast, punchy orchestral passage culminating in a brass fanfare (0:32) that builds to a climax, then stops (0:37) and drops you over the edge of a cliff – you didn't see it coming.

We've been set up.

James turns around (he'd been conducting the orchestra up to this point), picks up his trumpet, and digs deep into nasty, bluesy jazz trumpet on the Latin theme (0:40). Notice how he bends his notes, plays notes with valves only half depressed, and throws in "shakes," jazz ornaments that are a bit like trills, but utterly different in that they are rough while trills are delicate. This is jazz trumpet technique that wasn't taught in conservatories, at least not in those days. The band, for that's what the orchestra has become, a jazz band, plays a slinky vamp to back him up. That goes on until 1:34.

At this point we've been presented with two (opposing) musical worlds, the classical world (played for laughs), and jazz (played with utter sincerity). The rest of the arrangement plays in the space opened up between them.

James lowers his trumpet and returns to conduct the orchestra, which goes into a straight rendition of the blues theme that opened the piece, played in sweeping romantic fashion by the strings. We get through 12 bars of that and have another change of pace. Starting at roughly 2:16 the strings take up the other blues theme, straight time, played pizzicato, as unjazzy a sound as you could imagine. They finish the 12 bars and the brass comes crashing in at 2:33. Now the violins switch to their bows, playing rapid figures of the sort I associate with bustling street scenes where people are rushing about doing things. James comes in playing rapid trumpet figures in a complementary fashion. This is 19th-century virtuoso showpiece material with slight swing inflections. We're poised between the legit and jazz musical worlds – a very American bit of stylistic juggling.

Trumpeting brass figures signal a change (2:51) and we move to a more or less standard big-band shout chorus where the ensemble, sans strings, plays a thick texture of backing riffs while the soloist wails over the top.  James stops playing and the strings join in (3:29) and, once things get moving, James returns and puts a cherry on top (3:42), riding it for a few bars, pausing for a pair of simple two-note trumpet figures, and closing it out with an ascending triplet run (3:53) leading to the final chords.

Applause. And well deserved.

Crossover music wasn't invented yesterday. It's as American as sweet potato pie.

* * * * *

Two other posts about Harry James: Terry Teachout appreciates Harry James; A Monday Morning Music Lesson: Harry James, concerto for Trumpet, 1941.

The importance of context