Monday, July 29, 2024

What do people enjoy?

Sunday, July 28, 2024

Bill Maher and William Shatner talk about deep fundamental things while also being silly

 

This clip is two years old. I believe that Maher was 66 at the time and Shatner was 90. Here's the description that came with the clip: "Bill and William Shatner comically riff on the last acceptable prejudice, the spectrum of human sexuality, bad crowds in comedy, the making of Religulous, and the origin of the universe." What kind of conversation is this? It seems to me that it's both real and an act. These guys are performers and they're certainly performing here, but I don't think they've got a script. They're just making it up as they go along.

At the time I'm posting this, almost 800,000 people have watched this clip. What does it say or imply about the world that 800,000 people have found this entertaining?

Paul Fry on Influence: Eliot and Bloom (Theory of Literature)

Sep 1, 2009
Introduction to Theory of Literature (ENGL 300)

In this lecture on the psyche in literary theory, Professor Paul Fry explores the work of T. S. Eliot and Harold Bloom, specifically their studies of tradition and individualism. Related and divergent perspectives on tradition, innovation, conservatism, and self-effacement are traced throughout Eliot's "Tradition and the Individual Talent" and Bloom's "Meditation upon Priority." Particular emphasis is placed on the process by which poets struggle with the literary legacies of their precursors. The relationship of Bloom's thinking, in particular, to Freud's Oedipus complex is duly noted. The lecture draws heavily from the works of Pope, Borges, Joyce, Homer, Wordsworth, Longinus, and Milton.

00:00 - Chapter 1. Introduction to Harold Bloom
06:31 - Chapter 2. Mimesis and Imitatio
11:51 - Chapter 3. Bloom "Misreads" Eliot
29:34 - Chapter 4. Literary History: the Always Already Written "Strong Poem"
48:09 - Chapter 5. Lacan and Bloom on Tony the Tow Truck

Complete course materials are available at the Open Yale Courses website: http://open.yale.edu/courses

This course was recorded in Spring 2009.

Pop Tarts on a pier along the Hudson

Laughter on the campaign trail: Harris (yes) vs. Trump (no)

Jason Zinoman, Kamala Harris’s Laugh Is a Campaign Issue. Our Comedy Critic Weighs in. NYTimes, July 28, 2024.

Plato warned against a love of laughter, suggesting it indicates a loss of control. Ever alert to the theater of power, Trump rarely laughs, dating back to long before he was in politics. Recounting his season appearing on “The Apprentice,” the magician Penn Jillette marveled at how he would regularly spend hours watching Trump talk and never notice the slightest chuckle.

Watching interviews with both candidates, it’s clear that there’s a sizable disparity in laughter. Trump scoffs and occasionally smirks, which can be a crowd-pleaser. But chuckling isn’t his thing. On talk shows, Harris does it to deflect and connect, to establish intimacy, but also to underline the absurdity of something. At its most effusive, her laugh can draw attention to itself, and taken out of context, it can seem as if she’s the only one in on the joke.

Harris has said she got her guffaw from her mother. But that’s not her only laugh. In her debut campaign speech, she even got a big response from muffling a snicker after saying that in the past, “I have taken on perpetrators of all kinds.”

This hint of a laugh sets up her most successful stump line so far: “So hear me when I say, I know Donald Trump’s type.”

The case against laughing is that it makes a leader come off as less serious. This rests on a common misunderstanding that laughter is primarily a response to something funny. Research over the past few decades has backed up what philosophers have said for over a century, which is that laughter is inherently social, more about relationships and communication than jokes.

Later:

Bill Clinton connected with people by biting his lip and making eye contact. He famously felt your pain. With her biggest laughs, the ones that she does with her whole body, Harris projects something else: joy.

There's more at the link.

Friday, July 26, 2024

The art of misdirection – Giving shots to infants and toddlers

One day my YouTube feed presented me with this short video. I was curious, so I watched it. It was quite remarkable.

This one is similar, but instead of giving a shot to a toddler, we see a doctor giving a shot to an infant.

Magicians and pickpockets also employ misdirection, though to achieve somewhat different ends.

Thursday, July 25, 2024

Whisper Not - Benny Golson and McCoy Tyner

Whisper Not (Golson): Benny Golson (born 1929), tenor saxophone; McCoy Tyner (1938-2020), piano; Avery Sharpe, bass and Aaron Scott on drums. Jazz in Marciac 1997.

Hot pink petals

Communication stripped to the barest minimum?

Hays, D. G. (1973). "Language and Interpersonal Relationships." Daedalus 102(3): 203-216. The following passage is from pp. 204-205:
The experiment strips conversation down to its barest essentials by depriving the subject of all language except for two pushbuttons and two lights, and by suggesting to him that he is attempting to reach an accord with a mere machine. We brought two students into our building through different doors and led them separately to adjoining rooms. We told each that he was working with a machine, and showed him lights and pushbuttons. Over and over again, at a signal, he would press one or the other of the two buttons, and then one of two lights would come on. If the light that appeared corresponded to the button he pressed, he was right; otherwise, wrong. The students faced identical displays, but their feedback was reversed: if student A pressed the red button, then a moment later student B would see the red light go on, and if student B pressed the red button, then student A would see the red light. On any trial, therefore, if the two students pressed matching buttons they would both be correct, and if they chose opposite buttons they would both be wrong.

We used a few pairs of RAND mathematicians; but they would quickly settle on one color, say red, and choose it every time. Always correct, they soon grew bored. The students began with difficulty, but after enough experience they would generally hit on something. Some, like the mathematicians, chose one color and stuck with it. Some chose simple alternations (red-green-red-green). Some chose double alternations (red-red-green-green). Some adopted more complex patterns (four red, four green, four red, four green, sixteen mixed and mostly incorrect, then repeat). The students, although they were sometimes wrong, were rarely bored. They were busy figuring out the complex patterns of the machine.

But where did the patterns come from? Although neither student knew it, they arose out of the interaction of two students.
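The crossed-feedback setup can be sketched in a few lines of code. This is a minimal simulation of the experiment as described above; the two concrete strategies (one player sticking with a color, like the RAND mathematicians, the other playing win-stay-lose-shift) are illustrative assumptions, not taken from the paper.

```python
def play(strategy_a, strategy_b, rounds=20):
    """Simulate the crossed-feedback button experiment.

    Each player presses 0 (red) or 1 (green); each then sees the OTHER
    player's press as their light, so a trial is 'correct' for both
    exactly when the presses match.  A strategy maps (last_press,
    was_correct) -> next press.
    """
    a, b = 0, 1  # start the pair mismatched on purpose
    history = []
    for _ in range(rounds):
        correct = (a == b)
        history.append(correct)
        a = strategy_a(a, correct)
        b = strategy_b(b, correct)
    return history

# "Pick one color and stick with it" (the RAND mathematicians):
stick = lambda press, correct: press
# Keep a winning button, switch after a miss:
win_stay_lose_shift = lambda press, correct: press if correct else 1 - press

h = play(stick, win_stay_lose_shift)
# After the first miss the shifting player falls in line with the
# sticking one, and the pair is correct on every later trial.
assert h == [False] + [True] * 19
```

The point of the sketch is the one Hays makes: the run of lights that each subject sees is produced by neither strategy alone but by the interaction of the two.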

Monday, July 22, 2024

Double reflection of the sun

The risks of proliferating AI agents

Malcolm Murray, The Shifting Nature Of AI Risk, 3 Quarks Daily, July 22, 2024.

Second and more importantly, much of the focus in AI is currently on building AI agents. As can be seen in comments from Sam Altman and Andrew Ng, this is seen as the next frontier for AI. When intelligent AI agents are available at scale, this is likely to change the risk landscape dramatically. [...]

Introducing agentic intelligences could mean that we lose the ability to analyze AI risks and chart their path through the risk space. The analysis can become too complex since there are too many unknown variables. Note that this is not a question of “AGI” (Artificial General Intelligence), just agentic intelligences. We already have AI models that are much more capable than humans in many domains (playing games, producing images, optical recognition). However, the world is still recognizable and analyzable because these models are not agentic. The question of AGI and when we will “get it”, a common discussion topic, is a red herring and the wrong question. There is no such thing as one type of “general” intelligence. The question should be when will we have a plethora of agentic intelligences operating in the world trying to act according to preferences that may be largely unknown or incomprehensible to us.

For now, what this means is that we need to increase our efforts on AI risk assessment, to be as prepared as possible as AI risk continues to get more complex. However, it also means we should start planning for a world where we won’t be able to understand the risks. The focus in that world needs to instead be on resilience.

There's more at the link.

Sunday, July 21, 2024

A touch of yellow

Nine-year old chess prodigy hopes to become the youngest grandmaster ever

Isabella Kwai, At 5, She Picked Up Chess as a Pandemic Hobby. At 9, She’s a Prodigy. NYTimes, July 21, 2024.

Since learning chess during a pandemic lockdown, Bodhana Sivanandan has won a European title in the game, qualified for this year’s prestigious Chess Olympiad tournament, and established herself as one of England’s best players.

She also turned 9 in March. That makes Bodhana, a prodigy from the London borough of Harrow, the youngest player to represent England at such an elite level in chess, and quite possibly the youngest in any international sporting competition.

“I was happy and I was ready to play,” Bodhana said in a phone interview, two days after she learned that she had been selected for this year’s Olympiad, an international competition considered to be the game’s version of the Olympics.

The fourth-grader, who learned chess four years ago when she stumbled across a board her father was planning to discard, knows exactly what she wants to accomplish next. “I’m trying to become the youngest grandmaster in the world,” she said, “and also one of the greatest players of all time.”

Chess is one of the arenas in which prodigies emerge. Music and math are others. Why? I assume it has something to do with their brains. What?

I note also that playing chess has been central to the development of AI and that it is the first arena in which computers equaled and then surpassed the best human performance. What can we make of that? I don’t know, but surely there’s something to be discovered here.

* * * * *

Note the final paragraph of this post, On the significance of human language to the problem of intelligence (& superintelligence).

The question of machine superintelligence would then become:

Will there ever come a time when we have problem-solving networks where there exists at least one node that is assigned to a non-routine task, a creative task, if you will, that only a computer can perform?

That’s an interesting question. I specify non-routine task because we have all kinds of computing systems that are more effective at various tasks than humans are, from simple arithmetic calculations to such things as solving the structure of a protein string. I fully expect that more and more systems will evolve that are capable of solving such sophisticated, but ultimately routine, problems. But it’s not at all obvious to me that computational systems will eventually usurp all problem-solving tasks.

Human Go players learn from superhuman AIs

There are more links in the thread.

* * * * *

So: "Last year, we found superhuman Go AIs are vulnerable to “cyclic attacks”. This adversarial strategy was discovered by AI but replicable by humans."

Superhuman Go AIs discover a new region of the Go search-space. That's one thing. The fact that, once it was discovered, humans were able to exploit this region against a superhuman Go AI is just as interesting.

One question we can ask about superintelligence is whether or not so-called superintelligent AIs can do things that are inherently and forever beyond human capacity. In this particular case, we have humans learning things initially discovered by AIs.

Saturday, July 20, 2024

Coffee and cream

Masha Gessen: Are we on the edge of an autocratic breakthrough?

Masha Gessen, Biden and Trump Have Succeeded in Breaking Reality, NYTimes, July 20, 2024.

The last three paragraphs:

As for Trump, despite the gestures he made in his speech on Thursday night toward national reconciliation, tolerance and unity, the convention reflected the ultimate consolidation of his power. If he is elected, a second Trump administration seems likely to bring what the Hungarian sociologist Balint Magyar has termed an “autocratic breakthrough” — structural political change that is impossible to reverse by electoral means. But if we are in an environment in which nothing is believable, in which imagined secrets inspire more trust than the public statements of any authority, then we are already living in an autocratic reality, described by another of Arendt’s famous phrases: “Nothing is true and everything is possible.”

It’s tempting to say that Trump’s autocratic movement has spread like an infection. The truth is, the seeds of this disaster have been sprouting in American politics for decades: the dumbing down of conversation, the ever-growing role of money in political campaigns, the disappearance of local news media and local civic engagement and the consequent transformation of national politics into a set of abstracted images and stories, the inescapable understanding of presidential races as personality contests.

None of this made the Trump presidency inevitable, but it made it possible — and then the Trump presidency pushed us over the edge into the uncanny valley of politics. If Trump loses this year — if we are lucky, that is — it will not end this period; it will merely bring an opportunity to undertake the hard work of recovery.

Nahre Sol talks with Tigran Hamasyan

From the Wikipedia entry for Hamasyan:

Tigran Hamasyan (Armenian: Տիգրան Համասյան; born July 17, 1987) is an Armenian jazz pianist and composer. He plays mostly original compositions, strongly influenced by the Armenian folk tradition, often using its scales and modalities. In addition to this folk influence, Hamasyan is influenced by American jazz traditions and, to some extent, as on his album Red Hail, by progressive rock. His solo album A Fable is most strongly influenced by Armenian folk music. Even in his most overt jazz compositions and renditions of well-known jazz pieces, his improvisations often contain embellishments based on scales from Middle Eastern/Southwest Asian traditions.

Friday, July 19, 2024

Friday Fotos: Ever more flowers

Refactoring training data to make smaller LLMs

Tentatively, the most interesting thing I've seen coming out of the AI world in a year or so.

A tweet by Andrej Karpathy:

LLM model size competition is intensifying… backwards!

My bet is that we'll see models that "think" very well and reliably that are very very small. There is most likely a setting even of GPT-2 parameters for which most people will consider GPT-2 "smart". The reason current models are so large is because we're still being very wasteful during training - we're asking them to memorize the internet and, remarkably, they do and can e.g. recite SHA hashes of common numbers, or recall really esoteric facts. (Actually LLMs are really good at memorization, qualitatively a lot better than humans, sometimes needing just a single update to remember a lot of detail for a long time). But imagine if you were going to be tested, closed book, on reciting arbitrary passages of the internet given the first few words. This is the standard (pre)training objective for models today. The reason doing better is hard is because demonstrations of thinking are "entangled" with knowledge, in the training data.

Therefore, the models have to first get larger before they can get smaller, because we need their (automated) help to refactor and mold the training data into ideal, synthetic formats.

It's a staircase of improvement - of one model helping to generate the training data for next, until we're left with "perfect training set". When you train GPT-2 on it, it will be a really strong / smart model by today's standards. Maybe the MMLU will be a bit lower because it won't remember all of its chemistry perfectly. Maybe it needs to look something up once in a while to make sure.

That tweet is linked to this:

Wednesday, July 17, 2024

AI made up half of VC investment last quarter

A view from the window

What’s it mean to understand how LLMs work?

I don’t think we know. What bothers me is that people in machine learning seem to think of word meanings as Platonic ideals. No, that’s not what they’d say, but some such belief seems implicit in what they’re doing. Let me explain.

I’ve been looking through two Anthropic papers on interpretability: Towards Monosemanticity: Decomposing Language Models With Dictionary Learning, and Scaling Monosemanticity: Extracting Interpretable Features from Claude 3 Sonnet. They’re quite interesting. In some respects they involve technical issues that are a bit beyond me. But, setting that aside, they also involve masses of detail that you just have to slog through in order to get a sense of what’s going on.

As you may know, the work centers on things that they call features, a common term in this business. I gather that:

  • features are not to be identified with individual neurons or even well-defined groups of neurons, which is fine with me,
  • nor are features to be closely identified with particular tokens. A wide range of tokens can be associated with any given feature.

There is a proposal that these features are some kind of computational intermediate.

We’ve got neurons, features, and tokens. I believe that the number of token types is on the order of 50K or so. The number of neurons varies depending on the size of the model, but will be 3 or 4 orders of magnitude larger. The weights on those neurons characterize all possible texts that can be constructed with those tokens. Features are some kind of intermediate between neurons and texts.
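For what it's worth, the dictionary-learning method in those papers amounts to training a sparse autoencoder on model activations, so that each activation vector is rewritten as a sparse sum of learned "feature" directions. Here is a toy sketch of just the forward pass; every size and all the data are invented, and the real work (training the encoder and decoder against this loss) is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)
d_model, d_dict = 16, 64  # residual-stream width, dictionary size (toy numbers)

# Toy stand-in for model activations: each one is a sparse mix of a
# few ground-truth directions, the situation dictionary learning hopes to recover.
true_dirs = rng.normal(size=(d_dict, d_model))
coeffs = rng.random(size=(1000, d_dict)) * (rng.random(size=(1000, d_dict)) < 0.05)
acts = coeffs @ true_dirs  # shape (1000, d_model)

# Untrained encoder/decoder weights; training would tune these.
W_enc = rng.normal(size=(d_model, d_dict)) * 0.1
W_dec = W_enc.T.copy()

features = np.maximum(acts @ W_enc, 0.0)  # encoder + ReLU: nonnegative feature activations
recon = features @ W_dec                  # decoder: rebuild the activation from features
# Reconstruction error plus an L1 penalty that pushes features toward sparsity:
loss = np.mean((acts - recon) ** 2) + 1e-3 * np.abs(features).mean()
```

The features the papers talk about are the columns of the (trained) dictionary, not individual neurons, which is consistent with the two bullet points above.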

The question that keeps posing itself to me is this: What are we looking for here? What would an account of model mechanics, if you will, look like?

A month or so ago Lex Fridman posted a discussion with Ted Gibson, an MIT psycholinguist, which I’ve excerpted here at New Savanna. Here’s an excerpt:

LEX FRIDMAN: (01:30:35) Well, let’s take a stroll there. You wrote that the best current theories of human language are arguably large language models, so this has to do with form.

EDWARD GIBSON: (01:30:43) It’s a kind of a big theory, but the reason it’s arguably the best is that it does the best at predicting what’s English, for instance. It’s incredibly good, better than any other theory, but there’s not enough detail.

LEX FRIDMAN: (01:31:01) Well, it’s opaque. You don’t know what’s going on.

EDWARD GIBSON: (01:31:03) You don’t know what’s going on.

LEX FRIDMAN: (01:31:05) Black box.

EDWARD GIBSON: (01:31:06) It’s in a black box. But I think it is a theory.

LEX FRIDMAN: (01:31:08) What’s your definition of a theory? Because it’s a gigantic black box with a very large number of parameters controlling it. To me, theory usually requires a simplicity, right?

EDWARD GIBSON: (01:31:20) Well, I don’t know, maybe I’m just being loose there. I think it’s not a great theory, but it’s a theory. It’s a good theory in one sense in that it covers all the data. Anything you want to say in English, it does. And so that’s how it’s arguably the best, is that no other theory is as good as a large language model in predicting exactly what’s good and what’s bad in English. Now, you’re saying is it a good theory? Well, probably not because I want a smaller theory than that. It’s too big, I agree.

It's that smaller theory that interests me. Do we even know what such a theory would look like?

Classically, linguists have been looking for grammars, a finite set of rules that characterizes all the sentences in a language. When I was working with David Hays back in the 1970s, we were looking for a model of natural language semantics. We chose to express that model as a directed graph. Others were doing that as well. Perhaps the central question we faced was this: what collection of node types and what collection of arc types did we need to express all of natural language semantics? Even more crudely, what collection of basic building blocks did we need in order to construct all possible texts?
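To give a concrete sense of the kind of representation at issue, here is a toy semantic network as a directed graph. The node labels and the four-relation arc inventory are invented for illustration; they are not the inventory Hays and I actually worked with, which is precisely what was at stake in the question above.

```python
# A directed graph whose arcs are drawn from a small, fixed inventory
# of relation types -- the "basic building blocks" question is: what
# inventory suffices for all of natural language semantics?
ARC_TYPES = {"isa", "agent", "patient", "attribute"}

# "A big dog chased a cat", as node-arc-node triples:
edges = [
    ("chase-1", "isa", "chase"),    # a particular chasing event
    ("chase-1", "agent", "dog-1"),
    ("chase-1", "patient", "cat-1"),
    ("dog-1", "isa", "dog"),
    ("cat-1", "isa", "cat"),
    ("dog-1", "attribute", "big"),
]
assert all(rel in ARC_TYPES for _, rel, _ in edges)

def neighbors(node, rel):
    """Follow arcs of one type out of a node."""
    return [t for s, r, t in edges if s == node and r == rel]
```

So `neighbors("chase-1", "agent")` returns `["dog-1"]`: who did the chasing. The empirical bet was that a fixed, small set of node and arc types could support every text; the interpretability work described above has no comparable inventory on the table.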

These machine learning people seem to be operating under the assumption that they can figure it out by an empirical bottom-up procedure. That strikes me as being a bit like trying to understand the principles governing the construction of temples by examining the materials from which they’re constructed, the properties of rocks and mortar, etc. You can’t get there from here. Now, I’ve some ideas about how natural language semantics works, which puts me a step ahead of them. But I’m not sure how far that gets us.

What if the operating principles of these models can’t be stated in any existing conceptual framework? The implicit assumption behind all this work is that, if we keep at it with the proper tools, sooner or later the model is going to turn out to be an example of something we already understand. To be sure, it may be an extreme, obscure, and extraordinarily complicated example, but in the end, it’s something we already understand.

Imagine that some UFO crashes in a field somewhere and we are able to recover it, more or less intact. Let us imagine, for the sake of argument, that the pilots have disappeared, so all we’ve got is the machine. Would we be able to figure out how it works? Imagine that somehow a modern digital computer were transported back in time and ended up in the laboratory of, say, Nikola Tesla. Would he have been able to figure out what it is and how it works?

Let’s run another variation on the problem. Imagine that some superintelligent, but benevolent aliens were to land, examine our LLMs, and present us with documents explaining how they work. We would be able to read and understand those documents. Remember, these are benevolent aliens, so they’re doing their best to help us. I can imagine three possibilities:

  1. Yes, perhaps with a bit of study, we can understand the documents.
  2. We can’t understand them right away, but the aliens establish a learning program that teaches us what we need to know to understand those documents.
  3. The documents are forever beyond us.

I don’t believe three. Why not? Because I don’t believe our brains limit us to current modes of thought. In the past we’ve invented new ways of thinking; there’s no reason why we couldn’t continue doing so, or learn new methods under the tutelage of benevolent aliens.

That leaves us with 1 and 2. Which is it? At the moment I’m leaning toward 2. But of course those superintelligent aliens don’t exist. We’re going to have to figure it out for ourselves.

Sunday, July 14, 2024

Mood-congruent memory revisited

Faul, L., & LaBar, K. S. (2023). Mood-congruent memory revisited. Psychological Review, 130(6), 1421–1456. https://doi.org/10.1037/rev0000394 (ungated version: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10076454/)

Abstract: Affective experiences are commonly represented by either transient emotional reactions to discrete events or longer term, sustained mood states that are characterized by a more diffuse and global nature. While both have considerable influence in shaping memory, their interaction can produce mood-congruent memory (MCM), a psychological phenomenon where emotional memory is biased toward content affectively congruent with a past or current mood. The study of MCM has direct implications for understanding how memory biases form in daily life, as well as debilitating negative memory schemas that contribute to mood disorders such as depression. To elucidate the factors that influence the presence and strength of MCM, here we systematically review the literature for studies that assessed MCM by inducing mood in healthy participants. We observe that MCM is often reported as enhanced accuracy for previously encoded mood-congruent content or preferential recall for mood-congruent autobiographical events, but may also manifest as false memory for mood-congruent lures. We discuss the relevant conditions that shape these effects, as well as instances of mood-incongruent recall that facilitate mood repair. Further, we provide guiding methodological and theoretical considerations, emphasizing the limited neuroimaging research in this area and the need for a renewed focus on memory consolidation. Accordingly, we propose a theoretical framework for studying the neural basis of MCM based on the neurobiological underpinnings of mood and emotion. In doing so, we review evidence for associative network models of spreading activation, while also considering alternative models informed by the cognitive neuroscience literature of emotional memory bias. (PsycInfo Database Record (c) 2024 APA, all rights reserved)

Clouds over Jersey City, Hoboken, and the Hudson River

Economic growth in Roman Britain over four centuries

Scott G. Ortman, José Lobo, Lisa Lodwick, Rob Wiseman, Olivia Bulik, Victoria Harbison, and Luís M. A. Bettencourt, Identification and measurement of intensive economic growth in a Roman imperial province, Science Advances, 5 Jul 2024, Vol 10, Issue 27, DOI: 10.1126/sciadv.adk5517

Abstract: A key question in economic history is the degree to which preindustrial economies could generate sustained increases in per capita productivity. Previous studies suggest that, in many preindustrial contexts, growth was primarily a consequence of agglomeration. Here, we examine evidence for three different socioeconomic rates that are available from the archaeological record for Roman Britain. We find that all three measures show increasing returns to scale with settlement population, with a common elasticity that is consistent with the expectation from settlement scaling theory. We also identify a pattern of increase in baseline rates, similar to that observed in contemporary societies, suggesting that this economy did generate modest levels of per capita productivity growth over a four-century period. Last, we suggest that the observed growth is attributable to changes in transportation costs and to institutions and technologies related to socioeconomic interchange. These findings reinforce the view that differences between ancient and contemporary economies are more a matter of degree than kind.

H/t Tyler Cowen.

Rebuilding a 70-year old engine [a bit of the physical world]

I bought an UNUSED 70-year old ROLLS-ROYCE crate engine! How bad could it be?
Pacific Northwest Hillbilly

0:00 unload and unbox
11:08 getting engine unstuck
32:17 lubrication
54:54 electrical stuff
1:01:31 starter
1:12:35 priming oil and spark
1:17:56 carburetor
1:23:56 finishing touches
1:28:59 start attempt
1:35:43 end

I didn't watch the whole thing. Rather, I skipped around until I got to the end. Probably watched, say, 45 minutes' worth. I wonder when we'll have a robot capable of doing this kind of work. You might want to waltz over to YouTube and look at some of the comments.

Friday, July 12, 2024

The Birth of "Jaws"

Brian Raftery, 50 Years Ago, ‘Jaws’ Hit Bookstores, Capturing the Angst of a Generation, NYTimes, July 12, 2024. The article opens:

In 1973, the first chapter of an unpublished novel was photocopied and passed around the Manhattan offices of Doubleday & Co. with a note. “Read this,” it dared, “without reading the rest of the book.”

Those who accepted the challenge were treated to a swift-moving tale of terror, one that begins with a young woman taking a postcoital dip in the waters off Long Island. As her lover dozes on the beach, she’s ravaged by a great white shark.

“The great conical head struck her like a locomotive, knocking her up out of the water,” the passage read. “The jaws snapped shut around her torso, crushing bones and flesh and organs into a jelly.”

Tom Congdon, an editor at Doubleday, had circulated the bloody, soapy excerpt to drum up excitement for his latest project: a thriller about a massive fish stalking a small island town, written by a young author named Peter Benchley.

Congdon’s gambit worked. No one who read the opening could put the novel down. All it needed was a grabby title. Benchley had spent months kicking around potential names (“Dark White”? “The Edge of Gloom”?). Finally, just hours before deadline, he found it.

“Jaws,” he wrote on the manuscript’s cover page.

When it was released in early 1974, Benchley’s novel kicked off a feeding frenzy in the publishing industry — and in Hollywood. “Jaws” spent months on the best-seller lists, turned Benchley from an unknown to a literary celebrity and, of course, became the basis for Steven Spielberg’s blockbusting 1975 film adaptation.

While most readers were drawn to the book’s shark-centric story line, “Jaws” rode multiple mid-1970s cultural waves: It was also a novel about a frayed marriage, a financially iffy town and a corrupt local government — released at a time of skyrocketing divorce rates, mass unemployment and a presidential scandal.

At a time of change and uncertainty, “Jaws” functioned as an allegory for whatever scared or angered the reader.

That is, it functioned as a vehicle for giving form to free-floating anxiety.

As you may know, I've written a bit about Spielberg's movie version of the story, sometimes with assistance from ChatGPT. As this paragraph from the article indicates, the movie cut some important elements from the story:

Amity itself is on the brink of ruin, having barely survived the early ’70s recession. Also in decline: Brody’s marriage to his class-conscious wife, Ellen, who has a sexually charged encounter with Hooper at a surf-and-turf spot. Then there’s the town’s mayor, Larry Vaughn, who’s so deeply indebted to the mob, he’ll do whatever it takes to keep the beaches open — even if it means people die.

There's much more at the link.

Friday Fotos: The water around Hoboken

Late Admissions: Lessons from Glenn Loury’s Memoir

From the webpage:

In his stunning new memoir, "Late Admissions: Confessions of a Black Conservative," economist Glenn Loury surveys the stratospheric highs and abysmal lows of a life lived in both the spotlight and the shadows. Loury tells the story of his rise from Chicago’s South Side to Harvard’s Kennedy School, his transformation from working-class factory clerk to elite economic theorist, and his emergence as a fire-breathing conservative public intellectual in Ronald Reagan’s America. Yet Loury’s public positions were often at odds with private behavior—like serial adultery and hard drug use—that threatened to derail his ascent when they were exposed. "Late Admissions" documents a unique man’s struggle to defeat “the enemy within” and explores the entanglements of race and personal identity, reason and religious belief, and tradition and generational conflict—all of which have shaped both his life and American culture since the 1960s.

Please join us for a virtual conversation featuring Glenn Loury, author of "Late Admissions: Confessions of a Black Conservative." Loury, together with Gerald Early of the Washington University of St. Louis and Manhattan Institute Senior Fellow Jason Riley, will unpack the larger themes in his compelling memoir and lessons they hold for today.

Wednesday, July 10, 2024

Purple irises

Burnout Coaches

Martha C. White, Seeing Workplace Misery, Burnout Coaches Offer Company, New York Times, July 9, 2024.

Even before the Covid-19 pandemic disrupted how and where people work, the World Health Organization recognized burnout. In 2019, it defined the hallmarks of this type of chronic workplace stress as exhaustion, cynicism and ineffectuality — all attributes that make it tough for people to bounce back on their own, said Michael P. Leiter, a professor emeritus at Acadia University in Nova Scotia who studies burnout.

“It’s hard, at that point, to pull yourself up by your bootstraps,” he said. “It’s really helpful to have a secondary point of view or some emotional support.”

Enter the burnout coach.

Operating in a gray area between psychotherapy and career coaching, and without formal credentialing and oversight, “burnout coach” can be an easy buzzword to advertise. Basically anybody can hang out a shingle.

As a result, more people are marketing themselves as burnout coaches in recent years, said Chris Bittinger, a clinical assistant professor of leadership and project management at Purdue University who studies burnout. “There’s no barrier to entry,” he said. [...]

This lack of oversight makes it difficult to say how many burnout coaches there are, but researchers who study burnout such as Mr. Leiter say a pressure-cooker corporate culture, a shortage of mental health care resources and the disruption of the pandemic have created a critical mass of burned-out workers searching for ways to cope. [...]

Interest in burnout coaches comes amid shifting views on workplace wellness. William Fleming, a fellow at Oxford University’s Wellbeing Research Center, found that many employer-provided wellness services, like sleep apps and mindfulness seminars, largely don’t live up to claims of improving mental health.

“Those interventions — not only are many of them not working, but they’re backfiring,” said Kandi Wiens, the co-director of the medical education master’s degree program at the University of Pennsylvania and a burnout researcher.

Mr. Fleming said these initiatives were ineffective because they focus on the individual rather than issues like overwork or lack of resources that lead to burnout.

Tuesday, July 9, 2024

Remnants

The Aristocrats [Media Notes 138]

The Aristocrats is a 2005 documentary about a joke of the same name which Wikipedia characterizes as

a taboo-defying, off-color joke that has been told by numerous stand-up comedians and dates back to the vaudeville era. It relates the story of a family trying to get an agent to book their stage act, which is remarkably vulgar and offensive. The punch line reveals that they incongruously bill themselves as “The Aristocrats”. When told to audiences who know the punch line, the joke's humor depends on the described outrageousness of the family act.

Because the objective of the joke is its transgressive content, it is most often told privately, such as by comedians to other comedians.

I saw the film shortly after it was released. I went with a friend. We saw it in the afternoon at a packed house in a theatre off Union Square in Manhattan. It seemed like wall-to-wall laughter for the duration of the film. At the time it was the funniest film I’d ever seen.

It was still funny when I watched it on Netflix yesterday, but I didn’t laugh nearly so hard. I suspect that’s mostly because I watched it alone. Watching films with others intensifies the experience.

The film is simple. For almost an hour and a half you hear over 80 entertainers, mostly comedians, either telling the joke or talking about it. As the Wikipedia entry indicates, the joke is simple. The punchline comes at the end when the booking agent asks, What do you call yourselves? The answer: The Aristocrats. In a few versions the answer is different, “The Sophisticates.” For the first five or ten minutes of the film we never hear a complete version of the joke. All we know is the title, “The Aristocrats,” that the joke itself is extraordinarily vulgar, and that it’s a joke that comedians share among themselves, never including it in their public routine. The first full version we hear is relatively short, and, though it’s quite vulgar, it’s also quite tame in comparison to what comes later in the film.

“The Aristocrats” thus serves as a testing ground for comedians to strut their stuff among their fellows. The opening premise is the same for everyone as is the punchline. The difference lies in how you get from one to the other, the characters – two generations or three, any animals as well? – the acts and configurations, the timing and pacing, the inflections, the dynamics.

I wonder what’s happened to “The Aristocrats” in the two decades since the movie was released. In the “old” days, before the movie, how did a young comic first come to hear it? Upon hearing it the first time, how did they react? What about the first time they told it? How much did they practice first, if at all? But now the joke is available to anyone who sees the film. Do comedians still tell it among themselves? Has anyone put it into their public routine even once, twice, seven times?

And why “the aristocrats”? What does aristocracy have to do with it? The ostensible premise would seem to contrast some conception of aristocratic refinement and high-mindedness with the unremitting earthiness and obscenity of the events being evoked. Yet tropes of aristocratic debauchery and perversion must be as old as aristocracy.

It's a very strange joke, as though that four-letter word were adequate to the phenomenon itself.

Monday, July 8, 2024

America's sources of energy from 1776 to 2020

The cognitive benefits of music

Rafael Román-Caballero, Miguel A. Vadillo, Laurel J. Trainor, Juan Lupiáñez, Please don't stop the music: A meta-analysis of the cognitive and academic benefits of instrumental musical training in childhood and adolescence, Educational Research Review, Volume 35, 2022, 100436, ISSN 1747-938X, https://doi.org/10.1016/j.edurev.2022.100436.

Highlights

  • Benefits of musical training have been examined across disparate musical activities.
  • Instrumental learning is ideal for investigating the causal benefits of musical training.
  • Learning to play an instrument has a positive impact on cognitive skills and academic achievement.
  • Children and adolescents who self-select musical training tend to have better performance at baseline.
  • Cross-sectional results may reveal both preexisting and caused cognitive advantages.

Abstract: An extensive literature has investigated the impact of musical training on cognitive skills and academic achievement in children and adolescents. However, most of the studies have relied on cross-sectional designs, which makes it impossible to elucidate whether the observed differences are a consequence of the engagement in musical activities. Previous meta-analyses with longitudinal studies have also found inconsistent results, possibly due to their reliance on vague definitions of musical training. In addition, more evidence has appeared in recent years. The current meta-analysis investigates the impact of early programs that involve learning to play musical instruments on cognitive skills and academic achievement, as previous meta-analyses have not focused on this form of musical training. Following a systematic search, 34 independent samples of children and adolescents were included, with a total of 176 effect sizes and 5998 participants. All the studies had pre-post designs and, at least, one control group. Overall, we found a small but significant benefit (ḡΔ = 0.26) with short-term programs, regardless of whether they were randomized or not. In addition, a small advantage at baseline was observed in studies with self-selection (ḡpre = 0.28), indicating that participants who had the opportunity to select the activity consistently showed a slightly superior performance prior to the beginning of the intervention. Our findings support a nature and nurture approach to the relationship between instrumental training and cognitive skills. Nevertheless, evidence from well-conducted studies is still scarce and more studies are necessary to reach firmer conclusions.
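For readers unfamiliar with the notation: the abstract's headline numbers are (averaged) Hedges' g values, a standardized mean difference with a small-sample correction. Here's a minimal sketch of how a single such effect size is computed; the group statistics below are hypothetical, chosen only to yield an effect comparable in size to the paper's ḡΔ = 0.26.

```python
import math

def hedges_g(mean_treat, mean_ctrl, sd_treat, sd_ctrl, n_treat, n_ctrl):
    """Hedges' g: Cohen's d scaled by a small-sample correction factor."""
    # Pooled standard deviation across the two groups
    sp = math.sqrt(((n_treat - 1) * sd_treat**2 + (n_ctrl - 1) * sd_ctrl**2)
                   / (n_treat + n_ctrl - 2))
    d = (mean_treat - mean_ctrl) / sp             # Cohen's d
    j = 1 - 3 / (4 * (n_treat + n_ctrl) - 9)      # small-sample correction J
    return d * j

# Hypothetical example: music group scores 2.6 points above controls,
# both groups have SD 10 and n = 50.
g = hedges_g(102.6, 100.0, 10.0, 10.0, 50, 50)
print(round(g, 2))  # prints 0.26 — a "small" effect on conventional benchmarks
```

An effect of this size means the average trained child outperforms roughly 60% of untrained controls, which is why the authors describe the benefit as small but significant.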

Little Island in the late afternoon

I Am: Celine Dion [Media Notes 137]

I’ve been aware of Celine Dion at least since she sang the theme song (“My Heart Will Go On”) for Titanic in 1997, but I’d probably heard something from her before then. I didn’t follow her at all, however. To me she was just another pop singer, very good on so-called “power ballads,” but I can’t follow everything, can I? Still, every now and then, I’d hear something. When she was diagnosed with stiff-person syndrome, I heard about that. Then, a week or two ago the trailer for a documentary showed up on my Netflix feed: I Am: Celine Dion. I watched the trailer. “She looks old.” Not old old, just old. That’s what I thought. A couple of days later I decided to watch the whole film.

According to an interview with the director, Irene Taylor, in The New York Times, they didn’t know about her illness when they started filming. When the illness was finally diagnosed, they decided to continue. Dion had given them permission to film everything, and they did: feeding the dog, light sabers with her sons, a fascinating trip to a warehouse where her many costumes were stored, recording sessions, even therapy. This footage was cut with documentary and concert footage from throughout her career, allowing us to see the past through the present, and the present in view of the past. At several points she talked about how she hated to disappoint her fans. Her sorrow was real; her connection with her fans is real.

The climax of the documentary came during a therapy session:

Tell me about your reaction in the moment toward the end of the documentary, when Dion starts to seize up during physical therapy.

I could just see this stiffness that was not like the flowing, lithe dancer that I had been filming for several months doing her physical therapy. Within a couple of minutes, she was moaning in pain.

I wanted to know if she was breathing, because she was moaning and then she stopped. I put the microphone, which was at the end of a pole you can discreetly put closer to your subject, underneath the table. I couldn’t hear her breathing.

I was very panicked. I was looking around the room, and I saw that her therapist called for her head of security. Her bodyguard immediately came into the room. I could see right away these two men were there to take care of her and they were trained to do it.

Probably within about three minutes, once this human response to want to be helpful and drop everything subsided, Nick [Midwig, the film’s director of photography] and I eased into filming everything as it happened. It was very uncomfortable. I’ve never been in a situation with a camera that has been that touch and go.

Facticity, that’s how it struck me, the raw truth. There is no more.

We saw both the human being and the artist: the artist animating the human, the human being an artist.

Sunday, July 7, 2024

Purple allium

Genes and politics

Saturday, July 6, 2024

Blossoms and the 14th St. viaduct

Your grandma's robot companion

Erin Nolan, For Older People Who Are Lonely, Is the Solution a Robot Friend? NYTimes, July 6, 2024.

ElliQ, a voice-activated robotic companion powered by artificial intelligence, is part of a New York State effort to ease the burdens of loneliness among older residents. Though people can experience feelings of isolation at any age, older adults are especially susceptible as they’re more likely to be divorced or widowed and to experience declines in their cognitive and physical health.

New York, like the rest of the country, is rapidly aging, and state officials have distributed free ElliQ robots to hundreds of older adults over the past two years.

Created by the Israeli start-up Intuition Robotics, ElliQ consists of a small digital screen and a separate device about the size of a table lamp that vaguely resembles a human head but without any facial features. It swivels and lights up when it speaks.

Unlike Apple’s Siri and Amazon’s Alexa, ElliQ can initiate conversations and was designed to create meaningful bonds. Beyond sharing the day’s top news, playing games and reminding users to take their medication, ElliQ can tell jokes and even discuss complicated subjects like religion and the meaning of life.

Many older New Yorkers have embraced the robots, according to Intuition Robotics and the New York State Office for the Aging, the agency that has distributed the devices. In interviews with The New York Times, many users said ElliQ had helped them keep their social skills sharp, stave off boredom and navigate grief.

There's much more at the link.

* * * * *

Some relevant research: De Freitas, Julian, Ahmet K. Uguralp, Zeliha O. Uguralp, and Stefano Puntoni. "AI Companions Reduce Loneliness." Harvard Business School Working Paper, No. 24-078, June 2024.

Abstract: Chatbots are now able to engage in sophisticated conversations with consumers in the domain of relationships, providing a potential coping solution to widescale societal loneliness. Behavioral research provides little insight into whether these applications are effective at alleviating loneliness. We address this question by focusing on “AI companions”: applications designed to provide consumers with synthetic interaction partners. Studies 1 and 2 find suggestive evidence that consumers use AI companions to alleviate loneliness, by employing a novel methodology for fine-tuning large language models (LLMs) to detect loneliness in conversations and reviews. Study 3 finds that AI companions successfully alleviate loneliness on par only with interacting with another person, and more than other activities such as watching YouTube videos. Moreover, consumers underestimate the degree to which AI companions improve their loneliness. Study 4 uses a longitudinal design and finds that an AI companion consistently reduces loneliness over the course of a week. Study 5 provides evidence that both the chatbots’ performance and, especially, whether it makes users feel heard, explain reductions in loneliness. Study 6 provides an additional robustness check for the loneliness-alleviating benefits of AI companions.

Will AIs be able to create new knowledge?

This is a quick and dirty reflection on the question posed in the following tweet:

That question has been on my mind for some time: Will AIs be able to create new knowledge? Just what does that mean, “new knowledge”? It’s one thing to take an existing conceptual language and use it to say something that’s not been said before. It’s something else to come up with fundamentally new concepts. I think the latter is what that tweet’s about. General relativity was something of a fundamentally new kind, not just a complex elaboration of and variation over existing kinds.

In my previous post, On the significance of human language to the problem of intelligence (& superintelligence), I pointed out that animals are more or less biologically “wired” into their world. They can’t conceptualize their way out of it. The emergence of language in humans allowed us to bootstrap our way beyond the limits of our biological equipment.

I figure there are two aspects of that: 1) coming up with the new concept, and 2) verifying it. The tweet focuses on the first, but without the second, the capacity to come up with new concepts won’t get us very far. And when we’re talking about new concepts, I think we’re talking about adding a new element to the conceptual ontology. Verifying requires cooperation among epistemologically independent agents, agents that can make observations and replicate those observations. (See remarks in: Intelligence, A.I. and analogy: Jaws & Girard, kumquats & MiGs, double-entry bookkeeping & supply and demand.)

Now, let’s think about the current regime of deep learning technology, LLMs and the rest. These devices learn their processes and structures from large collections of data. They’re going to acquire the ontology that’s latent in the data. If that is so, how are they going to be able to come up with new items to add to the ontology? It’s not at all obvious to me that they’ll be able to do so. The data on which they learn, that’s their environment. It seems to me that they must be as “locked” into that environment as an animal is. Further, adding a new item to the ontology would require changing the network, which is beyond the capacity of these devices.

And then there’s the requirement of cooperation between independent epistemological agents. The phenomenon of confabulation is evidence for the importance of independent epistemological agents. The only requirement inherent in one such agent is logical consistency: that it emit collections of tokens that are consistent with the existing collection. The only thing that keeps humans from continuous confabulation is the fact that we must communicate with one another. It is the existence of a world independent of our individual awareness that provides us with a way of grounding our statements, of freeing ourselves from the pitfalls of our linguistic fluency.

* * * * *

I’ve been working my way through episodes of House, M.D. Every episode contains segments where House and his team participate in differential diagnosis, which involves rapid conversational interaction among them. In the first episode of season 4, “Alone,” House no longer has a team. He ends up bouncing ideas off of a janitor. That doesn’t go so well.

Friday, July 5, 2024

Life in the billionaire bunkers after the apocalypse

Billionaires Are Prepping for Doomsday | Glenn Loury, Nikita Petrov & Mark Sussman | The Glenn Show

Timestamps:

0:00 Intro
1:28 Biden’s abysmal debate performance
5:01 Billionaire survivalists
12:53 So you made it to your luxury bunker. Now what?
16:22 The postapocalyptic war of all against all
21:29 Is the very existence of billionaires the real doomsday scenario?
26:20 The doomsday tax
32:24 Will the bunker dwellers have a reason to live?
40:39 The problem of government after nuclear holocaust
46:08 The literal bunker mentality
52:08 Was Mark responsible for Sam Bankman-Fried’s downfall? Who can say for sure?

Colt Clark and the Quarantine Kids + COUSINS play Surfin' U.S.A.

Watch the little guy at the lower left.

A Chinese challenge to Amazon

Spencer Soper, Bloomberg Tech Daily Newsletter, July 5, 2024.

Temu has taught Amazon.com Inc. an important lesson: US shoppers can be patient if it saves them money.

News broke last week that Amazon is planning a low-priced store for apparel and home goods shipped directly to US shoppers from China, signaling that the ecommerce giant is taking seriously the threat posed by discounters like Temu and Shein.

Amazon helped change the way people shop by building a vast network of warehouses designed to stockpile products and quickly send them to customers. That model requires products manufactured in China to be shipped by sea in bulk to the US and trucked to warehouses around the country.

The upside is that products are close to shoppers, enabling delivery in just a day or two. Amazon has been investing in sharpening that delivery advantage over rivals with warehouses closer to consumers to increase the number of products they’ll receive quickly.

That’s not what Chinese rivals like Temu have been doing, in part because they can’t afford to compete with such a cost-intensive strategy.

Bulk shipments of products from China are typically subject to tariffs, which were increased to about 25% in 2018. And Amazon pays for its warehouses through fees charged to online merchants and customers via the $139 annual cost of an Amazon Prime subscription.

Temu and Shein are operating what’s often described as the factory-to-consumer model. Products are shipped in smaller batches from Chinese factories to warehouses close to Chinese airports. When US shoppers place orders, the products are put on US-bound cargo planes and sent directly to customers. But it takes a week or two for shoppers to get what they purchased. The lower delivery costs let Temu and Shein offer prices that Amazon has been unable to match with its costly logistics operation.

The consumer packages shipped from China slip into the US without tariffs thanks to a century-old loophole called “de minimis,” which essentially means any shipment with less than $800 in products enters the US tax-free.

There's more at the link.