Friday, September 13, 2024

There are no AI-shaped holes lying around

Friday, September 6, 2024

Mary Spender and Adam Neely talk about being musicians on tour and on YouTube

Rick Beato on the current Spotify top ten

Two points:

  • First time in the last three or four years that the Spotify top ten didn't include any rap or hip hop.
  • First time since ??? that there's been a top-ten tune with a key change: Sabrina Carpenter, "Please, Please, Please."

Thursday, August 29, 2024

The Dimensions of Dimensionality

Wednesday, August 28, 2024

Gavriil is picking up Beethoven's 'Für Elise'

 

Gavriil is 2 years and 11 months old.

Pianist Robert Levin improvises when he plays Mozart

Zachary Woolfe, A Pianist Who’s Not Afraid to Improvise on Mozart, NYTimes, Aug. 27, 2024. The article opens:

Cadenzas are a concerto soloist’s time to shine: the moments when the rest of the orchestra dramatically drops out and a single musician gets the chance to command the stage.

For about half of Mozart’s piano concertos, cadenzas he wrote have been preserved, and those are what you usually hear in concerts and on recordings. Other composers later filled in the gaps with cadenzas that have also become traditional. Some performers write their own.

But 250 years ago, when Mozart was a star pianist, he wouldn’t have performed prewritten cadenzas — even ones he had composed.

“When Mozart wrote his concertos, they were a vehicle for his skills,” the pianist and scholar Robert Levin said by telephone from Salzburg, Austria — Mozart’s hometown — where he teaches at the Mozarteum University. “He was respected as a composer and lionized as a performer, but it was as an improviser that he was on top of the heap.”

Levin, 76, has long argued that Mozart, as a player, made up new cadenzas and ornaments in the moment. And he has sought to revive that spirit of improvisation in a landmark cycle of the concertos on period instruments, a 13-album project begun more than 30 years ago with the Academy of Ancient Music, led by Christopher Hogwood.

After a two-decade gap caused by record company budget cuts, and with the last installment finally released this summer, the cycle takes an invaluable place as the most complete survey of Mozart’s music for keyboard and orchestra.

There's much more at the link.

Monday, August 26, 2024

Everyone dance, from toddlers to grandparents [NYC]

Rachel Sherman, A New Generation of Club Kids Is Born. They’re Younger Than You Think. NYTimes, Aug. 26, 2024.

At a recent dance party in Brooklyn, Berk Sawyer, wearing Nike high-tops and a white bodysuit covered in city icons like pigeons and a hot-dog stand, bopped his head to the heavy bass. Occasionally, he bounced so hard he tumbled to the floor.

Thankfully, at 13 months old and two feet tall, Berk was never too far from the ground.

Berk was running away from his mother, Rena Deitz, at St. James Joy, a lively all-ages block party in the Clinton Hill neighborhood of Brooklyn.

All around him, the brownstone-lined street was filled with grooving toddlers and their parents. Some nursed beers, some nursed. True to the party’s name, joy spread through a conga line, and through the swirls of dancers who paired off to salsa.

“It’s one of the few places you can come to dance with a baby,” said Ms. Deitz, 36, who used to seek out nightlife before becoming a parent, adding: “It starts to scratch the itch.”

St. James Joy is now one of a handful of dance parties around New York City where house music fanatics and babies alike find a dose of social life together in broad, pre-bedtime, daylight. It’s a city tradition that has grown in recent years. At multigenerational dance parties, attendees can listen to music by veteran New York D.J.s played on serious speakers — without any remixes of “Baby Shark.” It’s a small way for weary parents to find a dance floor release, while burning off energy with their children. And this summer there’s been no shortage of options.

There's more at the link.

Friday, August 9, 2024

Cultural group selection and human cooperation: a conceptual and empirical review

Smith D. Cultural group selection and human cooperation: a conceptual and empirical review. Evol Hum Sci. 2020 Feb 7;2:e2. doi: 10.1017/ehs.2020.2. PMID: 37588374; PMCID: PMC10427285.  

Abstract: Cultural group selection has been proposed as an explanation for humans' highly cooperative nature. This theory argues that social learning mechanisms, combined with rewards and punishment, can stabilise any group behaviour, cooperative or not. Equilibrium selection can then operate, resulting in cooperative groups outcompeting less-cooperative groups. This process may explain the widespread cooperation between non-kin observed in humans, which is sometimes claimed to be altruistic. This review explores the assumptions of cultural group selection to assess whether it provides a convincing explanation for human cooperation. Although competition between cultural groups certainly occurs, it is unclear whether this process depends on specific social learning mechanisms (e.g. conformism) or a norm psychology (to indiscriminately punish norm-violators) to stabilise groups at different equilibria as proposed by existing cultural group selection models. Rather than unquestioningly adopt group norms and institutions, individuals and groups appear to evaluate, design and shape them for self-interested reasons (where possible). As individual fitness is frequently tied to group fitness, this often coincides with constructing group-beneficial norms and institutions, especially when groups are in conflict. While culture is a vital component underlying our species' success, the extent to which current conceptions of cultural group selection reflect human cooperative evolution remains unclear.

Thursday, August 8, 2024

Looking back....purple memories....

The difficulties of transplanting chip manufacturing culture from Taiwan to Arizona

John Liu, What Works in Taiwan Doesn’t Always in Arizona, a Chipmaking Giant Learns, NYTimes, Aug. 8, 2024:

Taiwan Semiconductor Manufacturing Company, one of the world’s biggest makers of advanced computer chips, announced plans in May 2020 to build a facility on the outskirts of Phoenix. Four years later, the company has yet to start selling semiconductors made in Arizona. [...]

In Taiwan, TSMC has honed a highly complex manufacturing process: A network of skilled engineers and specialized suppliers, backed by government support, etches microscopic pathways into pieces of silicon known as wafers.

But getting all this to take root in the American desert has been a bigger challenge than the company expected.

“We keep reminding ourselves that just because we are doing quite well in Taiwan doesn’t mean that we can actually bring the Taiwan practice here,” said Richard Liu, the director of employee communications and relations at the site.

In recent interviews, 12 TSMC employees, including executives, said culture clashes between Taiwanese managers and American workers had led to frustration on both sides. TSMC is known for its rigorous working conditions. It’s not uncommon for people to be called into work for emergencies in the middle of the night. In Phoenix, some American employees quit after disagreements over expectations boiled over, according to the employees, some of whom asked not to be named because they were not authorized to speak publicly.

The company, which has pushed back the plant’s start date, now says it expects to begin chip production in Arizona in the first half of 2025.

Cultural expectations about work hours are one thing. But there are other factors involved:

On top of working to address the cultural differences in the workplace, TSMC is gearing up to recruit skilled workers to staff the Arizona plant for years to come. The company faces similar challenges in Japan and Germany, where it is also expanding.

In Taiwan, TSMC is able to draw on thousands of engineers and decades of relationships with suppliers. But in the United States, TSMC must build everything from the ground up.

“Here at this site, a lot of things we actually have to do from scratch,” Mr. Liu said.

The article goes on to discuss worker training and how local colleges and universities are creating programs directed at chip manufacturing.

“We have a generation of students whose parents have never once stepped foot into an advanced manufacturing factory,” said Scott Spurgeon, the center’s superintendent. “Their concept of that is still much like the old mom-and-pop manufacturing where you show up every day and come out with dirty clothes and dirty hands.”

I'm wondering how much culturally transmitted tacit knowledge there is in those relationships that exist in Taiwan, but not Arizona.

There's more at the link.

The ‘Orgasm Gap’ Isn’t Going Away for Straight Women

Amanda N Gesselman, Margaret Bennett-Brown, Simon Dubé, Ellen M Kaufman, Jessica T Campbell, Justin R Garcia, The lifelong orgasm gap: exploring age’s impact on orgasm rates, Sexual Medicine, Volume 12, Issue 3, June 2024, qfae042, https://doi.org/10.1093/sexmed/qfae042

Abstract

Background

Research demonstrates significant gender- and sexual orientation–based differences in orgasm rates from sexual intercourse; however, this “orgasm gap” has not been studied with respect to age.

Aim

The study sought to examine age-related disparities in orgasm rates from sexual intercourse by gender and sexual orientation.

Methods

A survey sample of 24,752 adults from the United States, ranging in age from 18 to 100 years. Data were collected across 8 cross-sectional surveys between 2015 and 2023.

Outcomes

Participants reported their average rate of orgasm during sexual intercourse, from 0% to 100%.

Results

Orgasm rate was associated with age but with minimal effect size. In all age groups, men reported higher rates of orgasm than did women. Men’s orgasm rates ranged from 70% to 85%, while women’s ranged from 46% to 58%. Men reported orgasm rates between 22% and 30% higher than women’s rates. Sexual orientation impacted orgasm rates by gender but not uniformly across age groups.

Clinical Translation

The persistence of the orgasm gap across ages necessitates a tailored approach in clinical practice and education, focusing on inclusive sexual health discussions, addressing the unique challenges of sexual minorities and aging, and emphasizing mutual satisfaction to promote sexual well-being for all.

Strengths and Limitations

This study is the first to examine the orgasm gap with respect to age, and does so in a large, diverse sample. Findings are limited by methodology, including single-item assessments of orgasm and a sample of single adults.

Conclusion

This study revealed enduring disparities in orgasm rates from sexual intercourse, likely resulting from many factors, including sociocultural norms and inadequate sex education.

The New York Times has an article about the study: Catherine Pearson, The ‘Orgasm Gap’ Isn’t Going Away for Straight Women, August 6, 2024.

Wednesday, August 7, 2024

Surprise!

"Inside Out" provides a language for therapy

Melena Ryzik, How ‘Inside Out’ and Its Sequel Changed Therapy, NYTimes, Aug. 7, 2024. Opening paragraphs of the article:

In 2012, when Olivia Carter was just starting out as a school counselor, she employed all sorts of strategies to help her elementary-age students understand and communicate their feelings — drawing, charades, color association, role playing. After 2015, though, starting those conversations became a lot easier, she said. It took just one question: “Who has seen the movie ‘Inside Out’?”

That Pixar hit, about core emotions like joy and sadness, and this summer’s blockbuster sequel, which focuses on anxiety, have been embraced by educators, counselors, therapists and caregivers as an unparalleled tool to help people understand themselves. The story of the moods steering the “control panel” in the head of a girl named Riley has been transformational, many experts said, in day-to-day treatment, in schools and even at home, where the films have given parents a new perspective on how to manage the turmoil of growing up.

“As therapeutic practice, it has become a go-to,” said David A. Langer, president of the American Board of Clinical Child and Adolescent Psychology. In his household, too: “I have 9-year-old twins — we speak about it regularly,” said Langer, who’s also a professor of psychology at Suffolk University. “Inside Out” finger puppets were in frequent rotation when his children were younger, a playful way to examine the family dynamic. “The art of ‘Inside Out’ is explicitly helping us understand our internal worlds,” Langer said.

And it’s not just schoolchildren that it applies to. “I’ve been stealing lines from the movie and quoting them to adults, not telling them that I’m quoting,” said Regine Galanti, a psychologist and author in private practice on Long Island, speaking of the new film.

Anxiety:

And the new movie’s focus on anxiety, which has reached crisis proportions among adolescents, normalizes experiences that for young people could seem isolating or overwhelming, and makes them relatable.

“Almost every day there’s a student who’s struggling or having a panic attack,” Carter said. “I could see this being something that I lean on pretty heavily for a long time.”

Later:

“INSIDE OUT” ARRIVED at a moment when educators and caregivers were paying more attention to what’s known as social-emotional learning, prioritizing connection and communication skills, and recognizing, not tamping down, children’s sensibilities as part of their self-regulation. [...]

Acknowledging feelings “is like a magical thing,” Damour said. “If a person says, ‘I feel sad,’ they suddenly feel less sad.”

That “Inside Out” helps families have those conversations together amplifies one of its messages, to embrace our personalities in all their shades and shadows.

There's much more at the link.

Monday, August 5, 2024

Is OpenAI imploding? More leaders are jumping ship.

Terry Eagleton on the death of criticism

From the YouTube page:

April 9, 2010: One of Britain's most influential literary critics, Terry Eagleton is Distinguished Professor of English Literature at the University of Lancaster, and Visiting Professor at the National University of Ireland, Galway. In addition to his widely known "Literary Theory: An Introduction", Professor Eagleton is the author of over forty books, including "The Ideology of the Aesthetic", and "The Illusions of Postmodernism". Part of the Townsend Center for the Humanities' Forum on the Humanities and the Public World.

Eagleton's remarks on "culture," starting at 45:38, are particularly interesting.

Sunday, August 4, 2024

Breakfast horizons

The Marshmallow Test does not reliably predict adult functioning

Jessica F. Sperber, Deborah Lowe Vandell, Greg J. Duncan, Tyler W. Watts, Delay of gratification and adult outcomes: The Marshmallow Test does not reliably predict adult functioning. Child Development, 00, 1-15. 29 July 2024 https://doi.org/10.1111/cdev.14129

Abstract: This study extends the analytic approach conducted by Watts et al. (2018) to examine the long-term predictive validity of delay of gratification. Participants (n = 702; 83% White, 46% male) completed the Marshmallow Test at 54 months (1995–1996) and survey measures at age 26 (2017–2018). Using a preregistered analysis, Marshmallow Test performance was not strongly predictive of adult achievement, health, or behavior. Although modest bivariate associations were detected with educational attainment (r = .17) and body mass index (r = −.17), almost all regression-adjusted coefficients were nonsignificant. No clear pattern of moderation was detected between delay of gratification and either socioeconomic status or sex. Results indicate that Marshmallow Test performance does not reliably predict adult outcomes. The predictive and construct validity of the ability to delay gratification are discussed.

AI models collapse when trained on recursively generated data

Shumailov, I., Shumaylov, Z., Zhao, Y. et al. AI models collapse when trained on recursively generated data. Nature 631, 755–759 (2024). https://doi.org/10.1038/s41586-024-07566-y

Abstract: Stable diffusion revolutionized image creation from descriptive text. GPT-2 (ref. 1), GPT-3(.5) (ref. 2) and GPT-4 (ref. 3) demonstrated high performance across a variety of language tasks. ChatGPT introduced such language models to the public. It is now clear that generative artificial intelligence (AI) such as large language models (LLMs) is here to stay and will substantially change the ecosystem of online text and images. Here we consider what may happen to GPT-{n} once LLMs contribute much of the text found online. We find that indiscriminate use of model-generated content in training causes irreversible defects in the resulting models, in which tails of the original content distribution disappear. We refer to this effect as ‘model collapse’ and show that it can occur in LLMs as well as in variational autoencoders (VAEs) and Gaussian mixture models (GMMs). We build theoretical intuition behind the phenomenon and portray its ubiquity among all learned generative models. We demonstrate that it must be taken seriously if we are to sustain the benefits of training from large-scale data scraped from the web. Indeed, the value of data collected about genuine human interactions with systems will be increasingly valuable in the presence of LLM-generated content in data crawled from the Internet.
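
To get a feel for the mechanism, here's a toy illustration of my own, not the paper's setup: estimate a distribution from a finite sample, train the next "generation" only on samples from that estimate, and repeat. Token types that draw zero counts in any generation are gone for good, so the tail truncates:

```python
import numpy as np

rng = np.random.default_rng(0)

# "True" generation-0 distribution over 100 token types, with a long tail.
probs = np.arange(1, 101, dtype=float) ** -1.5  # Zipf-like, purely illustrative
probs /= probs.sum()
print(f"generation 0: {np.count_nonzero(probs)} token types")

for gen in range(1, 6):
    # Each generation is trained only on a finite sample from the previous
    # model: its distribution is just the empirical counts of that sample.
    sample = rng.choice(100, size=500, p=probs)
    counts = np.bincount(sample, minlength=100).astype(float)
    probs = counts / counts.sum()
    print(f"generation {gen}: {np.count_nonzero(probs)} token types survive")

# Support only ever shrinks: a rare type that draws zero counts once can
# never be sampled again -- a toy version of the paper's "model collapse".
```

The paper's models are vastly more complicated, but the arithmetic of finite sampling pushes in the same direction.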

Monday, July 29, 2024

What do people enjoy?

Sunday, July 28, 2024

Bill Maher and William Shatner talk about deep fundamental things while also being silly

 

This clip is two years old. I believe that Maher was 66 at the time and Shatner was 90. Here's the description that came with the clip: "Bill and William Shatner comically riff on the last acceptable prejudice, the spectrum of human sexuality, bad crowds in comedy, the making of Religulous, and the origin of the universe." What kind of conversation is this? It seems to me that it's both real and an act. These guys are performers and they're certainly performing here, but I don't think they've got a script. They're just making it up as they go along.

At the time I'm posting this, almost 800,000 people have watched this clip. What does it say or imply about the world that 800,000 people have found this entertaining?

Paul Fry on Influence: Eliot and Bloom (Theory of Literature)

Introduction to Theory of Literature (ENGL 300)

In this lecture on the psyche in literary theory, Professor Paul Fry explores the work of T. S. Eliot and Harold Bloom, specifically their studies of tradition and individualism. Related and divergent perspectives on tradition, innovation, conservatism, and self-effacement are traced throughout Eliot's "Tradition and the Individual Talent" and Bloom's "Meditation upon Priority." Particular emphasis is placed on the process by which poets struggle with the literary legacies of their precursors. The relationship of Bloom's thinking, in particular, to Freud's Oedipus complex is duly noted. The lecture draws heavily from the works of Pope, Borges, Joyce, Homer, Wordsworth, Longinus, and Milton.

00:00 - Chapter 1. Introduction to Harold Bloom
06:31 - Chapter 2. Mimesis and Imitatio
11:51 - Chapter 3. Bloom "Misreads" Eliot
29:34 - Chapter 4. Literary History: the Always Already Written "Strong Poem"
48:09 - Chapter 5. Lacan and Bloom on Tony the Tow Truck

Complete course materials are available at the Open Yale Courses website: http://open.yale.edu/courses

This course was recorded in Spring 2009.

Pop Tarts on a pier along the Hudson

Laughter on the campaign trail: Harris (yes) vs. Trump (no)

Jason Zinoman, Kamala Harris’s Laugh Is a Campaign Issue. Our Comedy Critic Weighs in. NYTimes, July 28, 2024.

Plato warned against a love of laughter, suggesting it indicates a loss of control. Ever alert to the theater of power, Trump rarely laughs, dating back to long before he was in politics. Recounting his season appearing on “The Apprentice,” the magician Penn Jillette marveled at how he would regularly spend hours watching Trump talk and never notice the slightest chuckle.

Watching interviews with both candidates, it’s clear that there’s a sizable disparity in laughter. Trump scoffs and occasionally smirks, which can be a crowd-pleaser. But chuckling isn’t his thing. On talk shows, Harris does it to deflect and connect, to establish intimacy, but also to underline the absurdity of something. At its most effusive, her laugh can draw attention to itself, and taken out of context, it can seem as if she’s the only one in on the joke.

Harris has said she got her guffaw from her mother. But that’s not her only laugh. In her debut campaign speech, she even got a big response from muffling a snicker after saying that in the past, “I have taken on perpetrators of all kinds.”

This hint of a laugh sets up her most successful stump line so far: “So hear me when I say, I know Donald Trump’s type.”

The case against laughing is that it makes a leader come off as less serious. This rests on a common misunderstanding that laughter is primarily a response to something funny. Research over the past few decades has backed up what philosophers have said for over a century, which is that laughter is inherently social, more about relationships and communication than jokes.

Later:

Bill Clinton connected with people by biting his lip and making eye contact. He famously felt your pain. With her biggest laughs, the ones that she does with her whole body, Harris projects something else: joy.

There's more at the link.

Friday, July 26, 2024

The art of misdirection – Giving shots to infants and toddlers

One day my YouTube feed presented me with this short video. I was curious, so I watched it. It was quite remarkable.

This one is similar, but instead of giving a shot to a toddler, we see a doctor giving a shot to an infant.

Magicians and pickpockets also employ misdirection, though to achieve somewhat different ends.

Thursday, July 25, 2024

Whisper Not - Benny Golson and McCoy Tyner

Whisper Not (Golson): Benny Golson (born 1929), tenor saxophone; McCoy Tyner (1938-2020), piano; Avery Sharpe, bass; Aaron Scott, drums. Jazz in Marciac, 1997.

Hot pink petals

Communication stripped to the barest minimum?

Hays, D. G. (1973). "Language and Interpersonal Relationships." Daedalus 102(3): 203-216. The following passage is from pp. 204-205:

The experiment strips conversation down to its barest essentials by depriving the subject of all language except for two pushbuttons and two lights, and by suggesting to him that he is attempting to reach an accord with a mere machine. We brought two students into our building through different doors and led them separately to adjoining rooms. We told each that he was working with a machine, and showed him lights and pushbuttons. Over and over again, at a signal, he would press one or the other of the two buttons, and then one of two lights would come on. If the light that appeared corresponded to the button he pressed, he was right; otherwise, wrong. The students faced identical displays, but their feedback was reversed: if student A pressed the red button, then a moment later student B would see the red light go on, and if student B pressed the red button, then student A would see the red light. On any trial, therefore, if the two students pressed matching buttons they would both be correct, and if they chose opposite buttons they would both be wrong.

We used a few pairs of RAND mathematicians; but they would quickly settle on one color, say red, and choose it every time. Always correct, they soon grew bored. The students began with difficulty, but after enough experience they would generally hit on something. Some, like the mathematicians, chose one color and stuck with it. Some chose simple alternations (red-green-red-green). Some chose double alternations (red-red-green-green). Some adopted more complex patterns (four red, four green, four red, four green, sixteen mixed and mostly incorrect, then repeat). The students, although they were sometimes wrong, were rarely bored. They were busy figuring out the complex patterns of the machine.

But where did the patterns come from? Although neither student knew it, they arose out of the interaction of two students.
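
Out of curiosity, here's the setup as a toy simulation. I'm assuming both players follow a probabilistic win-stay/lose-shift rule — my assumption for illustration, not anything Hays reports about his subjects:

```python
import random

random.seed(42)

def play(rounds=15, shift_prob=0.5):
    """Two players each press R or G. The wiring is crossed: each sees the
    other's choice as a feedback light, so both are right exactly when
    their choices match. Each follows probabilistic win-stay / lose-shift."""
    a, b = random.choice("RG"), random.choice("RG")
    for t in range(1, rounds + 1):
        match = (a == b)
        print(f"round {t:2d}: A={a} B={b} -> {'both right' if match else 'both wrong'}")
        if match:
            continue  # win-stay: both keep their buttons; the accord locks in
        # Lose-shift, but only with some probability: the randomness breaks
        # the symmetry that would otherwise make both players swap forever.
        if random.random() < shift_prob:
            a = "G" if a == "R" else "R"
        if random.random() < shift_prob:
            b = "G" if b == "R" else "R"

play()
```

Once the two happen to match, win-stay freezes the pattern: the "machine" each subject was figuring out was just the other subject.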

Monday, July 22, 2024

Double reflection of the sun

The risks of proliferating AI agents

Malcolm Murray, The Shifting Nature Of AI Risk, 3 Quarks Daily, July 22, 2024.

Second and more importantly, much of the focus in AI is currently on building AI agents. As can be seen in comments from Sam Altman and Andrew Ng, this is seen as the next frontier for AI. When intelligent AI agents are available at scale, this is likely to change the risk landscape dramatically. [...]

Introducing agentic intelligences could mean that we lose the ability to analyze AI risks and chart their path through the risk space. The analysis can become too complex since there are too many unknown variables. Note that this is not a question of “AGI” (Artificial General Intelligence), just agentic intelligences. We already have AI models that are much more capable than humans in many domains (playing games, producing images, optical recognition). However, the world is still recognizable and analyzable because these models are not agentic. The question of AGI and when we will “get it”, a common discussion topic, is a red herring and the wrong question. There is no such thing as one type of “general” intelligence. The question should be when will we have a plethora of agentic intelligences operating in the world trying to act according to preferences that may be largely unknown or incomprehensible to us.

For now, what this means is that we need to increase our efforts on AI risk assessment, to be as prepared as possible as AI risk continues to get more complex. However, it also means we should start planning for a world where we won’t be able to understand the risks. The focus in that world needs to instead be on resilience.

There's more at the link.

Sunday, July 21, 2024

A touch of yellow

Nine-year-old chess prodigy hopes to become the youngest grandmaster ever

Isabella Kwai, At 5, She Picked Up Chess as a Pandemic Hobby. At 9, She’s a Prodigy. NYTimes, July 21, 2024.

Since learning chess during a pandemic lockdown, Bodhana Sivanandan has won a European title in the game, qualified for this year’s prestigious Chess Olympiad tournament, and established herself as one of England’s best players.

She also turned 9 in March. That makes Bodhana, a prodigy from the London borough of Harrow, the youngest player to represent England at such an elite level in chess, and quite possibly the youngest in any international sporting competition.

“I was happy and I was ready to play,” Bodhana said in a phone interview, two days after she learned that she had been selected for this year’s Olympiad, an international competition considered to be the game’s version of the Olympics.

The fourth-grader, who learned chess four years ago when she stumbled across a board her father was planning to discard, knows exactly what she wants to accomplish next. “I’m trying to become the youngest grandmaster in the world,” she said, “and also one of the greatest players of all time.”

Chess is one of the arenas in which prodigies emerge. Music and math are others. Why? I assume it has something to do with their brains. What?

I note also that playing chess has been central to the development of AI and that it is the first arena in which computers equaled and then surpassed the best human performance. What can we make of that? I don’t know, but surely there’s something to be discovered here.

* * * * *

Note the final paragraph of this post, On the significance of human language to the problem of intelligence (& superintelligence).

The question of machine superintelligence would then become:

Will there ever come a time when we have problem-solving networks where there exists at least one node that is assigned to a non-routine task, a creative task, if you will, that only a computer can perform?

That’s an interesting question. I specify non-routine task because we have all kinds of computing systems that are more effective at various tasks than humans are, from simple arithmetic calculations to such things as solving the structure of a protein string. I fully expect that more and more systems will evolve that are capable of solving such sophisticated, but ultimately routine, problems. But it’s not at all obvious to me that computational systems will eventually usurp all problem-solving tasks.

Human Go players learn from superhuman AIs

There are more links in the thread.

* * * * *

So: "Last year, we found superhuman Go AIs are vulnerable to “cyclic attacks”. This adversarial strategy was discovered by AI but replicable by humans."

Superhuman Go AIs discover a new region of the Go search-space. That's one thing. The fact that, once discovered, humans are able to exploit this region against a superhuman Go AI is just as interesting.

One question we can ask about superintelligence is whether or not so-called superintelligent AIs can do things that are inherently and forever beyond human capacity. In this particular case, we have humans learning things initially discovered by AIs.

Saturday, July 20, 2024

Coffee and cream

Masha Gessen: Are we on the edge of an autocratic breakthrough?

Masha Gessen, Biden and Trump Have Succeeded in Breaking Reality, NYTimes, July 20, 2024.

The last three paragraphs:

As for Trump, despite the gestures he made in his speech on Thursday night toward national reconciliation, tolerance and unity, the convention reflected the ultimate consolidation of his power. If he is elected, a second Trump administration seems likely to bring what the Hungarian sociologist Balint Magyar has termed an “autocratic breakthrough” — structural political change that is impossible to reverse by electoral means. But if we are in an environment in which nothing is believable, in which imagined secrets inspire more trust than the public statements of any authority, then we are already living in an autocratic reality, described by another of Arendt’s famous phrases: “Nothing is true and everything is possible.”

It’s tempting to say that Trump’s autocratic movement has spread like an infection. The truth is, the seeds of this disaster have been sprouting in American politics for decades: the dumbing down of conversation, the ever-growing role of money in political campaigns, the disappearance of local news media and local civic engagement and the consequent transformation of national politics into a set of abstracted images and stories, the inescapable understanding of presidential races as personality contests.

None of this made the Trump presidency inevitable, but it made it possible — and then the Trump presidency pushed us over the edge into the uncanny valley of politics. If Trump loses this year — if we are lucky, that is — it will not end this period; it will merely bring an opportunity to undertake the hard work of recovery.

Nahre Sol talks with Tigran Hamasyan

From the Wikipedia entry for Hamasyan:

Tigran Hamasyan (Armenian: Տիգրան Համասյան; born July 17, 1987) is an Armenian jazz pianist and composer. He plays mostly original compositions, strongly influenced by the Armenian folk tradition, often using its scales and modalities. In addition to this folk influence, Hamasyan is influenced by American jazz traditions and, to some extent, as on his album Red Hail, by progressive rock. His solo album A Fable is most strongly influenced by Armenian folk music. Even in his most overt jazz compositions and renditions of well-known jazz pieces, his improvisations often contain embellishments based on scales from Middle Eastern/Southwest Asian traditions.

Friday, July 19, 2024

Friday Fotos: Ever more flowers

Refactoring training data to make smaller LLMs

Tentatively, the most interesting thing I've seen coming out of the AI world in a year or so.

A tweet by Andrej Karpathy:

LLM model size competition is intensifying… backwards!

My bet is that we'll see models that "think" very well and reliably that are very very small. There is most likely a setting even of GPT-2 parameters for which most people will consider GPT-2 "smart". The reason current models are so large is because we're still being very wasteful during training - we're asking them to memorize the internet and, remarkably, they do and can e.g. recite SHA hashes of common numbers, or recall really esoteric facts. (Actually LLMs are really good at memorization, qualitatively a lot better than humans, sometimes needing just a single update to remember a lot of detail for a long time). But imagine if you were going to be tested, closed book, on reciting arbitrary passages of the internet given the first few words. This is the standard (pre)training objective for models today. The reason doing better is hard is because demonstrations of thinking are "entangled" with knowledge, in the training data.

Therefore, the models have to first get larger before they can get smaller, because we need their (automated) help to refactor and mold the training data into ideal, synthetic formats.

It's a staircase of improvement - of one model helping to generate the training data for next, until we're left with "perfect training set". When you train GPT-2 on it, it will be a really strong / smart model by today's standards. Maybe the MMLU will be a bit lower because it won't remember all of its chemistry perfectly. Maybe it needs to look something up once in a while to make sure.

That tweet is linked to this:

Wednesday, July 17, 2024

AI made up half of VC investment last quarter

A view from the window

What’s it mean to understand how LLMs work?

I don’t think we know. What bothers me is that people in machine learning seem to think of word meanings as Platonic ideals. No, that’s not what they’d say, but some such belief seems implicit in what they’re doing. Let me explain.

I’ve been looking through two Anthropic papers on interpretability: Towards Monosemanticity: Decomposing Language Models With Dictionary Learning, and Scaling Monosemanticity: Extracting Interpretable Features from Claude 3 Sonnet. They’re quite interesting. In some respects they involve technical issues that are a bit beyond me. But, setting that aside, they also involve masses of detail that you just have to slog through in order to get a sense of what’s going on.

As you may know, the work centers on things that they call features, a common term in this business. I gather that:

  • features are not to be identified with individual neurons or even well-defined groups of neurons, which is fine with me,
  • nor are features to be closely identified with particular tokens. A wide range of tokens can be associated with any given feature.

There is a proposal that these features are some kind of computational intermediate.

We’ve got neurons, features, and tokens. I believe that the number of token types is on the order of 50K or so. The number of neurons is considerably larger; it varies with the size of the model, but will be 3 or 4 orders of magnitude larger. The weights on those neurons characterize all possible texts that can be constructed with those tokens. Features are some kind of intermediate between neurons and texts.
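
As I understand it, the technique in those papers is dictionary learning via a sparse autoencoder trained on the model's internal activations. Here is a minimal sketch of that general idea; the dimensions are made-up stand-ins and the data is random noise, so this is the shape of the method, not Anthropic's actual setup:

```python
import torch

torch.manual_seed(0)

d_model, d_features, n_samples = 64, 512, 4096  # hypothetical sizes

# Stand-in for activation vectors collected from a language model's interior.
activations = torch.randn(n_samples, d_model)

# Sparse autoencoder: project into an overcomplete feature space, reconstruct,
# and penalize feature activity so each input turns on only a few features.
enc = torch.nn.Linear(d_model, d_features)
dec = torch.nn.Linear(d_features, d_model)
opt = torch.optim.Adam(list(enc.parameters()) + list(dec.parameters()), lr=1e-3)
l1_coeff = 1e-3  # sparsity pressure: higher means fewer active features

for step in range(200):
    f = torch.relu(enc(activations))   # nonnegative feature activations
    recon = dec(f)                     # reconstruction of the input
    loss = ((recon - activations) ** 2).mean() + l1_coeff * f.abs().mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

# Each row of dec.weight.T is a learned "feature": a direction in activation
# space. An activation vector is approximated as a sparse sum of features,
# which is why a feature needn't line up with any single neuron or token.
```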

The question that keeps posing itself to me is this: What are we looking for here? What would an account of model mechanics, if you will, look like?

A month or so ago Lex Fridman posted a discussion with Ted Gibson, an MIT psycholinguist, which I’ve excerpted here at New Savanna. Here’s an excerpt:

LEX FRIDMAN: (01:30:35) Well, let’s take a stroll there. You wrote that the best current theories of human language are arguably large language models, so this has to do with form.

EDWARD GIBSON: (01:30:43) It’s a kind of a big theory, but the reason it’s arguably the best is that it does the best at predicting what’s English, for instance. It’s incredibly good, better than any other theory, but there’s not enough detail.

LEX FRIDMAN: (01:31:01) Well, it’s opaque. You don’t know what’s going on.

EDWARD GIBSON: (01:31:03) You don’t know what’s going on.

LEX FRIDMAN: (01:31:05) Black box.

EDWARD GIBSON: (01:31:06) It’s in a black box. But I think it is a theory.

LEX FRIDMAN: (01:31:08) What’s your definition of a theory? Because it’s a gigantic black box with a very large number of parameters controlling it. To me, theory usually requires a simplicity, right?

EDWARD GIBSON: (01:31:20) Well, I don’t know, maybe I’m just being loose there. I think it’s not a great theory, but it’s a theory. It’s a good theory in one sense in that it covers all the data. Anything you want to say in English, it does. And so that’s how it’s arguably the best, is that no other theory is as good as a large language model in predicting exactly what’s good and what’s bad in English. Now, you’re saying is it a good theory? Well, probably not because I want a smaller theory than that. It’s too big, I agree.

It's that smaller theory that interests me. Do we even know what such a theory would look like?

Classically, linguists have been looking for grammars, a finite set of rules that characterizes all the sentences in a language. When I was working with David Hays back in the 1970s, we were looking for a model of natural language semantics. We chose to express that model as a directed graph. Others were doing that as well. Perhaps the central question we faced was this: what collection of node types and what collection of arc types did we need to express all of natural language semantics? Even more crudely, what collection of basic building blocks did we need in order to construct all possible texts?
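
To make that concrete, a network of the kind we were after is just a directed graph with typed nodes and typed arcs. Here's a toy sketch; the particular node and arc types are hypothetical placeholders, not the inventory Hays and I actually worked with:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Node:
    label: str
    ntype: str  # e.g. "entity", "event" -- illustrative types only

@dataclass(frozen=True)
class Arc:
    head: Node
    tail: Node
    atype: str  # e.g. "agent", "object" -- illustrative types only

# "The dog bit the man" as a tiny semantic network.
dog, man = Node("dog", "entity"), Node("man", "entity")
bite = Node("bite", "event")
graph = [Arc(bite, dog, "agent"), Arc(bite, man, "object")]

for arc in graph:
    print(f"{arc.head.label} --{arc.atype}--> {arc.tail.label}")
```

The research question is then exactly the one stated above: what inventory of node types and arc types suffices for all of natural language semantics?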

These machine learning people seem to be operating under the assumption that they can figure it out by an empirical bottom-up procedure. That strikes me as being a bit like trying to understand the principles governing the construction of temples by examining the materials from which they’re constructed, the properties of rocks and mortar, etc. You can’t get there from here. Now, I’ve some ideas about how natural language semantics works, which puts me a step ahead of them. But I’m not sure how far that gets us.

What if the operating principles of these models can’t be stated in any existing conceptual framework? The implicit assumption behind all this work is that, if we keep at it with the proper tools, sooner or later the model is going to turn out to be an example of something we already understand. To be sure, it may be an extreme, obscure, and extraordinarily complicated example, but in the end, it’s something we already understand.

Imagine that some UFO crashes in a field somewhere and we are able to recover it, more or less intact. Let us imagine, for the sake of argument, that the pilots have disappeared, so all we’ve got is the machine. Would we be able to figure out how it works? Imagine that somehow a modern digital computer were transported back in time and ended up in the laboratory of, say, Nikola Tesla. Would he have been able to figure out what it is and how it works?

Let’s run another variation on the problem. Imagine that some superintelligent but benevolent aliens were to land, examine our LLMs, and present us with documents explaining how they work. Would we be able to read and understand those documents? Remember, these are benevolent aliens, so they’re doing their best to help us. I can imagine three possibilities:

  1. Yes, perhaps with a bit of study, we can understand the documents.
  2. We can’t understand them right away, but the aliens establish a learning program that teaches us what we need to know to understand those documents.
  3. The documents are forever beyond us.

I don’t believe three. Why not? Because I don’t believe our brains limit us to current modes of thought. In the past we’ve invented new ways of thinking; no reason why we couldn’t continue doing so, or learn new methods under the tutelage of benevolent aliens.

That leaves us with 1 and 2. Which is it? At the moment I’m leaning toward 2. But of course those superintelligent aliens don’t exist. We’re going to have to figure it out for ourselves.

Sunday, July 14, 2024

Mood-congruent memory revisited

Faul, L., & LaBar, K. S. (2023). Mood-congruent memory revisited. Psychological Review, 130(6), 1421–1456. https://doi.org/10.1037/rev0000394 (ungated version: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10076454/)

Abstract: Affective experiences are commonly represented by either transient emotional reactions to discrete events or longer term, sustained mood states that are characterized by a more diffuse and global nature. While both have considerable influence in shaping memory, their interaction can produce mood-congruent memory (MCM), a psychological phenomenon where emotional memory is biased toward content affectively congruent with a past or current mood. The study of MCM has direct implications for understanding how memory biases form in daily life, as well as debilitating negative memory schemas that contribute to mood disorders such as depression. To elucidate the factors that influence the presence and strength of MCM, here we systematically review the literature for studies that assessed MCM by inducing mood in healthy participants. We observe that MCM is often reported as enhanced accuracy for previously encoded mood-congruent content or preferential recall for mood-congruent autobiographical events, but may also manifest as false memory for mood-congruent lures. We discuss the relevant conditions that shape these effects, as well as instances of mood-incongruent recall that facilitate mood repair. Further, we provide guiding methodological and theoretical considerations, emphasizing the limited neuroimaging research in this area and the need for a renewed focus on memory consolidation. Accordingly, we propose a theoretical framework for studying the neural basis of MCM based on the neurobiological underpinnings of mood and emotion. In doing so, we review evidence for associative network models of spreading activation, while also considering alternative models informed by the cognitive neuroscience literature of emotional memory bias. (PsycInfo Database Record (c) 2024 APA, all rights reserved)

Clouds over Jersey City, Hoboken, and the Hudson River

Economic growth in Roman Britain over four centuries

Scott G. Ortman, José Lobo, Lisa Lodwick, Rob Wiseman, Olivia Bulik, Victoria Harbison, and Luís M. A. Bettencourt, Identification and measurement of intensive economic growth in a Roman imperial province, Science Advances, 5 Jul 2024, Vol 10, Issue 27, DOI: 10.1126/sciadv.adk5517

Abstract: A key question in economic history is the degree to which preindustrial economies could generate sustained increases in per capita productivity. Previous studies suggest that, in many preindustrial contexts, growth was primarily a consequence of agglomeration. Here, we examine evidence for three different socioeconomic rates that are available from the archaeological record for Roman Britain. We find that all three measures show increasing returns to scale with settlement population, with a common elasticity that is consistent with the expectation from settlement scaling theory. We also identify a pattern of increase in baseline rates, similar to that observed in contemporary societies, suggesting that this economy did generate modest levels of per capita productivity growth over a four-century period. Last, we suggest that the observed growth is attributable to changes in transportation costs and to institutions and technologies related to socioeconomic interchange. These findings reinforce the view that differences between ancient and contemporary economies are more a matter of degree than kind.

H/t Tyler Cowen.
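
The key quantity in settlement scaling work is the elasticity: fit Y = Y0 * N^beta, where N is settlement population and Y is some socioeconomic output, and ask whether beta exceeds 1 (increasing returns). A minimal sketch of that estimation on invented data — nothing here comes from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic settlements: populations N and an output Y generated with a
# superlinear elasticity of 1.15 plus multiplicative noise (all invented).
N = rng.uniform(100, 10_000, size=200)
Y = 2.0 * N ** 1.15 * rng.lognormal(0.0, 0.2, size=200)

# The elasticity is the slope of log Y on log N.
beta_hat, log_y0 = np.polyfit(np.log(N), np.log(Y), 1)
print(f"estimated elasticity: {beta_hat:.3f}")  # close to the true 1.15
```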

Rebuilding a 70-year-old engine [a bit of the physical world]

I bought an UNUSED 70-year old ROLLS-ROYCE crate engine! How bad could it be?
Pacific Northwest Hillbilly

0:00 unload and unbox
11:08 getting engine unstuck
32:17 lubrication
54:54 electrical stuff
1:01:31 starter
1:12:35 priming oil and spark
1:17:56 carburetor
1:23:56 finishing touches
1:28:59 start attempt
1:35:43 end

I didn't watch the whole thing. Rather, I skipped around until I got to the end. Probably watched, say, 45 minutes' worth. I wonder when we'll have a robot capable of doing this kind of work. You might want to waltz over to YouTube and look at some of the comments.

Friday, July 12, 2024

The Birth of "Jaws"

Brian Raftery, 50 Years Ago, ‘Jaws’ Hit Bookstores, Capturing the Angst of a Generation, NYTimes, July 12, 2024. The article opens:

In 1973, the first chapter of an unpublished novel was photocopied and passed around the Manhattan offices of Doubleday & Co. with a note. “Read this,” it dared, “without reading the rest of the book.”

Those who accepted the challenge were treated to a swift-moving tale of terror, one that begins with a young woman taking a postcoital dip in the waters off Long Island. As her lover dozes on the beach, she’s ravaged by a great white shark.

“The great conical head struck her like a locomotive, knocking her up out of the water,” the passage read. “The jaws snapped shut around her torso, crushing bones and flesh and organs into a jelly.”

Tom Congdon, an editor at Doubleday, had circulated the bloody, soapy excerpt to drum up excitement for his latest project: a thriller about a massive fish stalking a small island town, written by a young author named Peter Benchley.

Congdon’s gambit worked. No one who read the opening could put the novel down. All it needed was a grabby title. Benchley had spent months kicking around potential names (“Dark White”? “The Edge of Gloom”?). Finally, just hours before deadline, he found it.

“Jaws,” he wrote on the manuscript’s cover page.

When it was released in early 1974, Benchley’s novel kicked off a feeding frenzy in the publishing industry — and in Hollywood. “Jaws” spent months on the best-seller lists, turned Benchley from an unknown to a literary celebrity and, of course, became the basis for Steven Spielberg’s blockbusting 1975 film adaptation.

While most readers were drawn to the book’s shark-centric story line, “Jaws” rode multiple mid-1970s cultural waves: It was also a novel about a frayed marriage, a financially iffy town and a corrupt local government — released at a time of skyrocketing divorce rates, mass unemployment and a presidential scandal.

At a time of change and uncertainty, “Jaws” functioned as an allegory for whatever scared or angered the reader.

That is, it functioned as a vehicle for giving form to free-floating anxiety.

As you may know, I've written a bit about Spielberg's movie version of the story, sometimes with assistance from ChatGPT. As this paragraph from the article indicates, the movie cut some important elements from the story:

Amity itself is on the brink of ruin, having barely survived the early ’70s recession. Also in decline: Brody’s marriage to his class-conscious wife, Ellen, who has a sexually charged encounter with Hooper at a surf-and-turf spot. Then there’s the town’s mayor, Larry Vaughn, who’s so deeply indebted to the mob, he’ll do whatever it takes to keep the beaches open — even if it means people die.

There's much more at the link.

Friday Fotos: The water around Hoboken

Late Admissions: Lessons from Glenn Loury’s Memoir

From the webpage:

In his stunning new memoir, "Late Admissions: Confessions of a Black Conservative," economist Glenn Loury surveys the stratospheric highs and abysmal lows of a life lived in both the spotlight and the shadows. Loury tells the story of his rise from Chicago’s South Side to Harvard’s Kennedy School, his transformation from working-class factory clerk to elite economic theorist, and his emergence as a fire-breathing conservative public intellectual in Ronald Reagan’s America. Yet Loury’s public positions were often at odds with private behavior—like serial adultery and hard drug use—that threatened to derail his ascent when they were exposed. "Late Admissions" documents a unique man’s struggle to defeat “the enemy within” and explores the entanglements of race and personal identity, reason and religious belief, and tradition and generational conflict—all of which have shaped both his life and American culture since the 1960s.

Please join us for a virtual conversation featuring Glenn Loury, author of "Late Admissions: Confessions of a Black Conservative." Loury, together with Gerald Early of Washington University in St. Louis and Manhattan Institute Senior Fellow Jason Riley, will unpack the larger themes in his compelling memoir and lessons they hold for today.