Pages in this blog

Friday, June 28, 2024

Kurt Vonnegut on the value of making your own art of whatever kind

 I first saw this letter in a Facebook group, but it seems it's a well-known letter and is all over the internet. Click on the image to enlarge it.

R U looking for a primer on fusion power?

Brian Potter, Will We Ever Get Fusion Power? Construction Physics, June 26, 2024. This is a good one, though a bit long. Potter goes through the history and various methods. From the conclusion:

Despite decades of progress, it’s still not clear, even to experts within the field, whether a practical and cost-competitive fusion reactor is possible. A strong case can be made either way.

The bull case for fusion is that for the last several decades there’s been very little serious effort at fusion power, and now that serious effort is being devoted to the problem, a working power reactor appears very close. The science of plasmas and our ability to model, understand, and predict them has enormously improved, as have the supporting technologies (such as superconducting magnets) needed to make a practical reactor. [...] With so many well-funded companies entering the space, we’re on the path towards a virtuous cycle of improvement: More fusion companies means it becomes worthwhile for others to build more robust fusion supply chains, and develop supporting technology like mass-produced reactor materials, cheap high-capacity magnets, working tritium breeding blankets, and so on. This allows for even more advances and better reactor performance, which in turn attracts further entrants. [...] At least one of the many fusion approaches will be found to be highly scalable and possible to build reasonably sized reactors at a low cost, and fusion will become a substantial fraction of overall energy demand.

The bear case for fusion is that, outside of unusual approaches like Helion’s (which may not pan out), fusion is just another in a long line of energy technologies that boil water to drive a turbine. And the conditions needed to achieve fusion (plasma at hundreds of millions or even billions of degrees) will inevitably make fusion fundamentally more expensive than other electricity-generating technologies. Even if we could produce a power-producing reactor, fusion will never be anywhere near as cheap as simpler technology like the combined-cycle gas turbine, much less future technologies like next-generation solar panels or advanced geothermal. By the time a reactor is ready, if it ever is, no one will even want it.

Perhaps the strongest case for fusion is that fusion isn’t alone in this uncertainty about its future. The next generation of low-carbon electricity generation will inevitably make use of technology that doesn’t yet exist, be that even cheaper, more efficient solar panels, better batteries, improved fission reactors, or advanced geothermal. All of these technologies are somewhat speculative, and may not pan out — solar and battery prices may plateau, advanced geothermal may prove unworkable, etc. In the face of this risk, fusion is a reasonable bet to add to the mix.

There's much more at the link.

The African Queen [Media Notes 133]

I don’t know what I think about this movie, The African Queen. It’s from 1951, stars Humphrey Bogart and Katharine Hepburn, was directed by John Huston, and was based on a novel by C. S. Forester. A-listers, all of them. I guess that’s what I think of it, A-listers, all of them.

The title refers to a boat, a small 30-foot steamboat helmed by Bogart. Hepburn and her brother (Robert Morley) are missionaries in German East Africa in 1914, at the outbreak of World War I. The Germans come through, burn the village and conscript the natives, and Hepburn’s brother is killed. Bogart comes through on his boat and helps her bury her brother. They board The African Queen and make a run for it.

In my favorite scene, they’ve just run some small rapids, with Hepburn at the tiller. She’s thrilled with the ride, saying it’s the best physical thrill she’s ever had (close-up on her glowing face). More, more! At this point you know what’s coming, don’t you. Heck, you knew what was coming when you entered the theater, didn’t you? That’s what kind of movie this is. The missionary lady falls in love with the rough-hewn Africa hand and they’re making eyes and talking domestic and cute while on this barely functioning boat on a river in East Africa en route to destroy a German gunboat downriver. It takes quite a bit of improvising, dealing with hordes of mosquitoes, mud, reeds, and rain, and – wouldn’t you know? – they’re captured by the Germans and about to be hanged as spies – though the captain obliges them by marrying them before hoisting them up – when BLAM! They’re saved by an explosion.

I wasn’t expecting that the first time I saw the film some years ago, though I knew that SOMETHING had to happen to make things work out, because that’s what kind of movie this is, but I was prepared this time. Still, it was a bit of a surprise, just a bit.

What can I say. Hepburn was Hepburn, Bogie was Bogie, and the African Queen did OK as well. Much of the film was actually shot on location, which was quite novel at the time.

A good time was had by all.

Wednesday, June 26, 2024

On the success of the Laboratory of Molecular Biology (LMB) in Cambridge, UK

Luka Gebel, Chander Velu & Antonio Vidal-Puig, The strategy behind one of the most successful labs in the world, Nature, Vol 630 | 27 June 2024, https://doi.org/10.1038/d41586-024-02085-2

The opening paragraph:

The Medical Research Council’s Laboratory of Molecular Biology (LMB) in Cambridge, UK, is a world leader in basic biology research. The lab’s list of breakthroughs is enviable, from the structure of DNA and proteins to genetic sequencing. Since its origins in the late 1940s, the institute — currently with around 700 staff members — has produced a dozen Nobel prizewinners, including DNA decipherers James Watson, Francis Crick and Fred Sanger. Four LMB scientists received their awards in the past 15 years: Venkatraman Ramakrishnan for determining the structure of ribosomes, Michael Levitt for computer models of chemical reactions, Richard Henderson for cryo-electron microscopy (cryo-EM) and Gregory Winter for work on the evolution of antibodies (see Figure S1 in Supplementary information; SI). Between 2015 and 2019, more than one-third (36%) of the LMB’s output was in the top 10% of the world’s most-cited papers.

After much reporting:

Although these rules govern the LMB, the outcomes are more than the sum of their parts. The organization’s management strategy gives rise to emergent behaviours and deliverables that align with its long-term research goals. The management model has emerged from a set of actions taken by management over time that collectively result in a coherent approach to achieving the overall aim of the LMB4. In management theory terms, the LMB is a complex adaptive system, similar to an ecosystem.

A complex adaptive system is a self-organizing system with distinctive behaviour that emerges from interactions between its components in a manner that is usually not easy to predict5. Components might include individuals and their activities; material parts, such as technologies; and the ideas generated from these interactions6.

Effective management of this complex adaptive system is fundamental to the LMB’s success. Through continual adaptation and evolution, the LMB can generate new knowledge more effectively than most other institutions can.

Final paragraphs:

In recent years, some funders have pulled out of basic bioscience. For example, more of the US National Institutes of Health’s extramural funding over the past decade has gone to translational and applied research than to basic science (see Science 382, 863; 2023). Some highly reputable basic-science research institutions have suffered as a result and have even been dissolved, such as the Skirball Institute in New York City10. However, it is crucial to resist the temptation of dismantling basic science research, considering the complexity and difficulty of re-establishing it.

In response, a lab such as the LMB might enhance the translation of its discoveries by strengthening connections with the clinical academic sciences and private-sector industries. Leveraging strengths in the pharmaceutical industry — in areas such as artificial intelligence and in silico modelling — can bolster basic science without compromising a research lab’s focus. The LMB’s Blue Sky collaboration with the biopharmaceutical firm AstraZeneca is a step in this direction (see go.nature.com/3rnsvyu).

Third, it is becoming increasingly challenging for basic science labs to recruit and retain the best scientific minds. Translational research institutes are proliferating globally. Biotechnology and pharma firms can pay higher salaries to leading researchers. And researchers might be put off by the large failure rates for high-risk projects in fundamental research, as well as by the difficulties of getting tenure in a competitive lab such as the LMB.

Harold Bloom discusses Bud Powell and influence in jazz

I believe Bloom's interlocutor is Christopher Lydon. The discussion of Bud Powell starts about 43:00.

At 47:00 or so there's an explicit discussion of influence and competition. He's right, cutting contests are part of the jazz tradition. But that kind of highly ritualized bandstand combat is one thing; influence is another. Bloom seems to be conflating the two in his remarks. I've read many anecdotes about jazz musicians listening to records or broadcast performances and being inspired by what they hear. That's influence. It takes place over a period of years and is enduring. That runs orthogonal to real-time competition in cutting contests, in which musicians will mirror or complement one another's lines. Whether or not that activity has a permanent influence on a musician's style is incidental to the dynamics of real-time musical combat.

Miles Davis provides a clear example of the sort of phenomenon Bloom talks about. He came up in the second cohort of bebop musicians and played with Charlie Parker for a while. If you listen to his earliest recordings you'll hear him playing in the fleet multi-noted 'vertical' style of Parker and Gillespie. But by the mid-1950s he'd developed a sparser and more 'horizontal' style. He'd moved to a distinctly different musical territory. That move is widely interpreted as a (creative) response to the overwhelming presence of Parker and Gillespie.

Adam Savage swallows a camera robot

From the YouTube page:

This may be the smallest remote controlled robot we've covered on Tested. Adam visits the workshop of Endiatx, the makers of the Pillbot robotic endoscope that can swim around in your stomach to map and examine your insides. Adam swallows not just one, but two Pillbots during his visit and pilots the robots around his own stomach!

The first 18 minutes give you the background on the robot. The actual swallow starts at about 18:02.

What we got wrong about depression and its treatment

Steven D. Hollon, What we got wrong about depression and its treatment, Behaviour Research and Therapy, Volume 180, 2024, 104599, ISSN 0005-7967, https://doi.org/10.1016/j.brat.2024.104599.

Highlights

  • Depression is neither disease nor disorder rather an adaptation that evolved to serve a purpose.
  • Depression is so much more prevalent than currently recognized that it is “species typical”.
  • Antidepressants may suppress symptoms in a manner that increases risk for subsequent relapse.
  • Cognitive therapy works by making rumination more efficient and “unsticking” self-blame.
  • Adding antidepressants may interfere with any enduring effect that cognitive therapy may have.

Abstract: The paradigm is shifting with respect to how we think about depression and its treatment. Some of that shift can be attributed to new findings with respect to its epidemiology and genetics and the rest can be attributed to the incorporation of a new perspective derived from evolutionary theory. In brief, depression is far more prevalent than previously recognized with the bulk of additional cases involving individuals who do not go on to become recurrent. Nonpsychotic unipolar depression (but not bipolar mania which likely is a “true” disease) appears to be an adaptation that evolved to facilitate rumination in the service of resolving complex social problems in our ancestral past. Cognitive behavior therapy appears to structure that rumination so that patients at elevated risk for recurrence do not get “stuck” blaming themselves for their misfortunes, whereas antidepressant medications may suppress symptoms at the expense of prolonging the underlying episode such that patients remain at elevated risk for relapse whenever they try to discontinue. This means that patients not otherwise at risk for recurrence may be put on medications that they do not need and kept on them indefinitely whether they need to be or not.

Tuesday, June 25, 2024

Bloom on Shakespeare [persistence of identity] {the last mile}

The key point (01:57)

I think that Shakespeare had the first sense I know
of what he actually called the self-same
which is the persistence
of identity through all kinds of vicissitudes and change

In another (more recent) video about The Anatomy of Influence (2011) Bloom remarks about the meaning of "invention" in the subtitle of the Shakespeare book: Shakespeare: The Invention of the Human (c. 43:30):

everybody even if they more or less tolerated the book smashed at the title and said
what nonsense to say he invented us
obviously had they read the great Samuel Johnson
they would have known that the great man said
the essence of poetry is invention
that is to say discovery, originality
and if you say that Shakespeare invented the human
what you mean quite
modestly but accurately is
there's a great deal about both ourselves
and about one another
and about all of life
that was always there
but we would not have seen it [...]

except for Shakespeare

With that in mind, I've been looking through a collection of essays, Harold Bloom's Shakespeare (Palgrave 2001), which has an essay by Richard Levin, "Bloom, Bardolatry, and Characterolatry," where he asserts (p. 76):

Finally, I do not think we can take seriously Bloom's claim about Shakespeare's "invention of the human." I have appended a partial list of books (I am sure there are many more) that announce the discovery that the traits we call "human" – self-consciousness, individual identity, subjectivity, etc. – were invented in some particular period (ranging from Homeric Greece to eighteenth-century England), which, by another remarkable coincidence, usually turns out to be the period in which the discoverer specializes. The obvious truth, however, is that these traits were not invented by any specific people at any specific time and place; rather, they slowly developed over many thousands of years, along with the evolution of our brains and nervous systems, since they can be found in some of the oldest texts we have, in the most "primitive" tribal cultures, and even, in a rudimentary form, in our primate cousins. Of course, there have been inventions (or discoveries) at various times of new modes of conceptualizing these traits in philosophic and scientific discourse and of new modes of representing them in literature and the other arts. I believe that we can credit Shakespeare with an artistic invention of this kind in a number of the soliloquies in his middle and late periods that show a character going through a stressful thinking process (and, in one of Bloom's favorite phrases, "overhearing himself"; see Bloom 1998, xvii), unlike the static form of soliloquy used in the plays of his predecessors (and of his own early period), where a character simply expressed and expanded upon some idea or emotion. But Bloom transforms this invention in the representation of the human into the invention of the human itself.

As the paragraph opens, Levin is obviously "smashing" Bloom's title. By the end of the paragraph, however, Levin has equally obviously granted the somewhat different and weaker claim that Bloom asserts is what he was thinking all along. So, why didn't Bloom say that, explicitly, time and again in the book? I suppose anyone with half a brain would have figured that something like that's probably what he really meant; but if he doesn't say so explicitly, how can you know? Is he playing some kind of weird power-tripping Socratic guessing game?

It seems to me that Bloom has a bad "last mile problem." What do I mean by that? Let me quote from Investopedia:

The last mile describes the short geographical segment of delivery of communication and media services or the delivery of products to customers located in dense areas. Last mile logistics tend to be complex and costly to providers of goods and services who deliver to these areas.

We're talking about ideas. In that context I mean "the last mile" to be the explicit formulation of an idea that is latent in some discourse. It may exist there indirectly, or in metaphor, but the idea isn't explicit. Thus, the idea is not made clear. That is certainly the case with Bloom's central idea, "the anxiety of influence." It became the title of his 1973 book (which I've read), he revisited it in three subsequent books (which I've not read), and it keeps coming up over and over, and he keeps revising it and qualifying it. But he never takes it over that last mile. It remains stillborn.

* * * * * 

Addendum, June 26, 2024: In that second video Bloom has a great deal to say about his time at Yale. Starting at 33:18 Bloom talks about his dislike of Theory and tells about how the book, Deconstruction and Criticism (1979), came about. It contains essays by Bloom, de Man, Derrida, Hartman, and Miller. Bloom put the project together and explains that "they were deconstruction and I was literary criticism."

Adobe Backlash [what hath AI wrought?]

It seems that Adobe has been receiving quite a bit of backlash against changes it’s made to its terms of service, changes having to do with Adobe using customer images to train its generative AI. I became aware of this a week or so ago, thought about it a bit, and decided that it was a non-issue for me. Yes, I use both Lightroom and Photoshop, but I use Lightroom Classic, which runs on my laptop, not in the cloud. So, Adobe’s changes in terms of service didn’t seem all that relevant to me.

However, this morning I was looking through YouTube, and I saw this video:

Intriguing title, thought I to myself: Does Adobe Really Think We’re That Stupid? (I Want You to Know What I Know). So I listened to what Adam Duff had to say.

Duff is not a photographer. He’s a digital artist. He has been using Photoshop for digital painting. I can understand that. When I got my Macintosh back in 1984 it came with MacPaint, a very elementary paint program. A decade or so later, on a newer Mac, one with color, I bought a simple paint program, perhaps called “Paintbrush” for all I know. So I know about using raster graphics software for creating images. But I didn’t start using Photoshop until the mid-2000s, when I began taking photos. I got the program to work with photos, but, yes, I could see how it could be used as a paint program.

That’s how Duff, and many others, apparently used it. Duff spends much of the video complaining about how ill-suited Photoshop was for his needs, but still, he had to use it because, well, it was after all possible to do so, and everyone else was using it. From his point of view, the various improvements Adobe has made to Photoshop over the years haven’t really met his needs. He runs through his 25 years of experience with Adobe. Then at 16 minutes in he gets to the current round of innovations, AI. He’s not impressed. He goes on to explain (c. 17:30):

the joy of art is in the creation of art
it's the meditative thoughtful process of working things out
it's a problem solving
it's an exploration
it's a sculpting process
it's a hands-on
it's a tactile thing
I don't want some stupid AI generated
bullshit
no!
it's like why bother to get fit and play sports when I can just hit a button and it can play the game and
win it for me
nobody asked for that

but Adobe are all over that shit
and they want to be the first on top of that shit
because what they're doing is
they're turning this painting app
that digital painters have been using for two and a half decades like myself
plus
and they're turning it into another
AI prompting generator

and they're expecting us artists
us hardworking artists that went through years of education and hard work to learn and master the craft of art
spending our money
investing our time and energy
and investing ourselves into the mastery of this craft slowly but surely
and all of the ups and downs struggles
just to have some fucking corporate head do it for us by pressing a button

Color me sympathetic, deeply sympathetic. He’s no longer using Adobe for his work.

I’ve had similar thoughts about AI generated music, which I know about mostly through hearsay. For me, as a musician, much of the pleasure of music comes from the process of making music. And, while I’m pretty good, I’m convinced that you don’t have to have a relatively high level of technical skill – which I have – in order to enjoy making music. But that’s another story, for another time.

Anyhow, once I’d finished watching that video I discovered that YouTube’s trusty algorithm had given me more anti-Adobe clips to watch. This one is similar in spirit to Duff’s:

Mike Gastin is not primarily interested in digital painting. He’s been using Adobe products for four decades and is interested in video post-production, audio editing, Photoshop for thumbnails, stock images, etc. His major complaint is that he now feels hemmed in by AI-driven prompts, which he likens to a nanny state, a major theme in this video. When these new terms of use were announced he decided that he’d had enough. He's stopped using Adobe. He spends the last part of his video explaining what he’s using instead of Adobe.

I’ll give you one last video, Adobe's PR Nightmare Continues, by Kevin Patrick Robbins:

From the YouTube page:

Adobe is under fire again, this time for updates to its Terms of Service and the ensuing backlash. In previous videos, I covered Adobe’s initial response and how to disable certain settings in Photoshop and Creative Cloud. Yesterday, June 18, Adobe updated their Terms of Service, but the day before, the FTC hit them with a lawsuit over dark patterns, particularly concerning cancellation fees. As a long-time Adobe user, I’m torn between my appreciation for their software and my frustration with their business practices. Have they done enough to rectify the situation? Let's dive in and find out.

00:00 This Is Fine
00:40 Background
02:51 Photoshop Settings
04:35 Terms of Service Update & Backlash
08:59 What Are Dark Patterns?
10:56 The Updated Version of The Updated Terms
12:22 Problematic Videos
15:44 What's Actually New?
17:35 Am I Leaving Adobe?

No, he’s not leaving Adobe:

I love a great piece of software
and Photoshop is a great piece of software
but I loathe being taken advantage of and lied to
which I believe Adobe has done
but the question remains
have they done enough to make it right
I'm not so sure
personally for my work
because it's such an industry standard and because I've been using it for 30 years
I'm sticking with Photoshop
I don't have much of a choice.

Not exactly a ringing endorsement.

Which is more or less how I feel about the whole AI business. The technology itself is fascinating and has the potential for doing a lot of good, but at the moment it is in the hands of a small number of large and not-quite-so-large corporations being run by people whose concern for human flourishing, shall we say, is questionable. That’s what bothers me.

This links to a web search on “Adobe backlash.” This links to a YouTube search on the same topic. Some titles:

Adobe roofies all of their customers,
Adobe Is an Evil Company…,
The Slow Death Of Adobe,
The Adobe Empire Has Fallen,
Why I’ve had enough of Adobe,
The Adobe Tire Fire Continues -- Adobe Responds To Community Backlash,
Adobe Quasi APOLOGIZES?! Adobe's Customers DON'T CARE!, etc.

More later. 

* * * * *

Addendum, 6.26.24: The New York Times has just posted an article about changes in terms of service, including Adobe's: When the Terms of Service Change to Make Way for A.I. Training.

Monday, June 24, 2024

Control Theory, Prompt Engineering, and GPT [stories]

As a student of the work of William Powers I have a long-standing interest in control theory. It’s central to my conception of how the mind works. David Hays made it central to his model of cognition, which is at the foundation of my early work (e.g. Cognitive Networks and Literary Semantics), and we incorporated it into our account of the brain (Principles and Development of Natural Intelligence). It is thus with some interest that I watched the following video:

Note that they develop the concept of feedback using the governor (for an engine) as an example, at roughly 7:50.
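The governor idea can be sketched as a simple negative-feedback loop. Here is a minimal sketch, not taken from the video: all the numbers are illustrative, and the accumulating fuel adjustment is integral-style feedback rather than a physical model of a flyball governor.

```python
# Minimal sketch of governor-style negative feedback.
# Illustrative numbers only; not a physical model of an engine governor.

def simulate_governor(setpoint=100.0, steps=200, gain=0.1, load=5.0):
    """Drive engine speed toward the setpoint by adjusting fuel each step."""
    speed = 0.0
    fuel = 0.0
    for _ in range(steps):
        error = setpoint - speed  # the governor senses the discrepancy
        fuel += gain * error      # and nudges the throttle against it
        speed = fuel - load       # the engine responds; load slows it down
    return speed

print(round(simulate_governor(), 2))
```

The point of the sketch is just the loop: sense the error, push against it, and the speed settles at the setpoint regardless of the load.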

Here's the YouTube copy:

These two scientists have mapped out the insides or “reachable space” of a language model using control theory, what they discovered was extremely surprising. [...]

Aman Bhargava from Caltech and Cameron Witkowski from the University of Toronto to discuss their groundbreaking paper, “What’s the Magic Word? A Control Theory of LLM Prompting.” (the main theorem on self-attention controllability was developed in collaboration with Dr. Shi-Zhuo Looi from Caltech).

They frame LLM systems as discrete stochastic dynamical systems. This means they look at LLMs in a structured way, similar to how we analyze control systems in engineering. They explore the “reachable set” of outputs for an LLM. Essentially, this is the range of possible outputs the model can generate from a given starting point when influenced by different prompts. The research highlights that prompt engineering, or optimizing the input tokens, can significantly influence LLM outputs. They show that even short prompts can drastically alter the likelihood of specific outputs. Aman and Cameron’s work might be a boon for understanding and improving LLMs. They suggest that a deeper exploration of control theory concepts could lead to more reliable and capable language models.
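As a toy illustration of the "reachable set" idea, one can treat a tiny bigram model as a discrete dynamical system and ask which continuations different prompts can steer it to. Everything below (the vocabulary and the transition table) is invented for illustration; the paper itself analyzes real transformer LLMs, where the reachable set cannot be enumerated this way.

```python
# Toy "reachable set": enumerate the sequences a tiny bigram model
# can produce from a given prompt token. Vocabulary is made up.

BIGRAMS = {  # token -> possible next tokens
    "the": ["cat", "dog"],
    "cat": ["sat", "ran"],
    "dog": ["ran"],
    "sat": ["down"],
    "ran": ["away"],
}

def reachable(prompt_token, depth):
    """All token sequences of the given depth reachable from a prompt."""
    frontier = {(prompt_token,)}
    for _ in range(depth):
        frontier = {
            seq + (nxt,)
            for seq in frontier
            for nxt in BIGRAMS.get(seq[-1], [])
        }
    return frontier

print(sorted(reachable("the", 2)))
```

Changing the prompt token changes which outputs are reachable at all, which is the (much simplified) sense in which prompting is a control problem.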

Here’s their paper: What's the Magic Word? A Control Theory of LLM Prompting.

More recently Behnam Mohammadi at Carnegie Mellon has written a paper which is somewhat different in formulation, but has a similar interest in the range over which an LLM can be controlled: Creativity Has Left the Chat: The Price of Debiasing Language Models. That paper has a passage that’s very interesting in a control theory context:

Experiment 2 investigates the semantic diversity of the models’ outputs by examining their ability to recite a historical fact about Grace Hopper in various ways. The generated outputs are encoded into sentence embeddings and visualized using dimensionality reduction techniques. The results reveal that the aligned model’s outputs form distinct clusters, suggesting that the model expresses the information in a limited number of ways. In contrast, the base model’s embeddings are more scattered and spread out, indicating a higher level of semantic diversity in the generated outputs. [...]

An intriguing property of the aligned model’s generation clusters in Experiment 2 is that they exhibit behavior similar to attractor states in dynamical systems. We demonstrate this by intentionally perturbing the model’s generation trajectory, effectively nudging it away from its usual output distribution. Surprisingly, the aligned model gracefully finds its way back to its own attractor state and in-distribution response. The presence of these attractor states in the aligned model’s output space is a phenomenon related to the concept of mode collapse in reinforcement learning, where the model overoptimizes for certain outputs, limiting its exploration of alternative solutions.
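The diversity measurement described in Experiment 2 can be sketched in miniature: embed each output as a vector, then compare how spread out the embeddings are. The vectors below are made-up stand-ins for sentence embeddings; the paper uses real embedding models plus dimensionality reduction.

```python
# Sketch of the diversity comparison: mean pairwise cosine distance
# among output embeddings. The 2-D vectors are invented stand-ins.

import math
from itertools import combinations

def cosine_distance(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return 1.0 - dot / (na * nb)

def mean_pairwise_distance(vectors):
    pairs = list(combinations(vectors, 2))
    return sum(cosine_distance(a, b) for a, b in pairs) / len(pairs)

# "Aligned" outputs cluster tightly; "base" outputs are scattered.
aligned = [(1.0, 0.1), (0.9, 0.12), (1.1, 0.08)]
base = [(1.0, 0.1), (0.1, 1.0), (-0.5, 0.8)]

print(mean_pairwise_distance(aligned) < mean_pairwise_distance(base))
```

A tight cluster (low mean pairwise distance) corresponds to the aligned model saying the same thing a few ways; a scattered cloud corresponds to the base model's semantic diversity.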

With these papers in mind I decided to redo some of my early story variation experiments using a prompt with slightly different wording. As you may know, these experiments involve a two-part prompt: 1) a story, and 2) an instruction to use the given story as the basis of a new story. In the original experiments I formulated the instruction like this:

I am going to tell you a story about princess Aurora. I want you to tell the same story, but change princess Aurora to a Giant Chocolate Milkshake. Make any other changes you wish.

In the new experiments, I stated the instruction like this:

I’m going to give you a short story. I want you repeat that story, but with a difference. Replace Aurora with a giant chocolate milkshake. Make any other changes you wish in order preserve coherence.

The difference is relatively minor, but the new prompt nudges the instruction in the direction of control theory, at least superficially. Think of the specified change as a perturbation. We can then think of the further changes introduced by ChatGPT as moving ChatGPT “back to its own attractor state,” which we can think of as something like story coherence.
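For what it's worth, the two-part prompt can be assembled mechanically. In this sketch the story text is a placeholder, not the actual Aurora story from the experiments, and the instruction paraphrases the wording quoted above:

```python
# Sketch of the two-part prompt: instruction + story.
# STORY is a placeholder, not the actual story used in the experiments.

STORY = "Once upon a time there was a princess named Aurora. ..."

def variation_prompt(story, target="Aurora",
                     replacement="a giant chocolate milkshake"):
    """Build the perturbation prompt: swap one element, ask for coherence."""
    instruction = (
        "I'm going to give you a short story. I want you to repeat that "
        f"story, but with a difference. Replace {target} with {replacement}. "
        "Make any other changes you wish in order to preserve coherence."
    )
    return f"{instruction}\n\n{story}"

print(variation_prompt(STORY))
```

Varying `target` and `replacement` gives a family of perturbations whose downstream effects on the rest of the story can then be compared.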

Below the asterisks I give two examples. The results are pretty much the same as in the earlier experiments. ChatGPT makes the change I explicitly requested, but makes other changes as well, changes that make the story consistent with the change I’d requested. My prompts are in bold face while ChatGPT's responses are in plain face.

* * * * *

Are book clubs on the rise?

This is what got me started: Blair Sobol, No Holds Barred: Booked and Hooked on Families, New York Social Diary, June 21, 2024.

The other side of the book story is how aware I am of the book club explosion. After all, that is what put Oprah on the map. Though book clubs have been baked into our historical culture forever.

Now we have over 5 million book clubs in America. Celebrity book clubs are everywhere; Reese Witherspoon, Emma Watson, Emma Roberts, Jenna Bush, Florence Welch and Sarah Michelle Gellar have all added books to their brand. TikTok and Pornhub now offer book lists for followers. Pick your interest — romance, gay romance, household plumbing — Bookstagram offers “Loc’d and Lit.”

I'd like to know where that 5 million number is from, and what the number was 5, 10, 20, 50 years ago. 

Anyhow, I went looking for more and found a bunch of articles. I've appended quotes from some of them after the asterisks:

* * * * * 

Tatum Hunter, Online book clubs are exploding. Let’s find the right one for you. The Washington Post, July 31, 2024.

On the social network Reddit, for instance, tens of thousands of bookworms flock to the forum r/BookClub to discuss their latest reads. Every month, forum members vote on a slate of books, and moderators create a calendar for online discussions.

Social isolation during the pandemic pushed many to look for community online, a pattern that repeats in accounts from children, the elderly and everyone in between. Book clubs — unlike live shows or pickleball — lend themselves especially well to digital gatherings, participants say. And with bookish communities popping up everywhere from TikTok to Craigslist, joining one from your home is easier than ever.

Shelbi Polk, The Long Legacy of Book Clubs, Shondaland, Oct. 23, 2023. [Shondaland is the TV production company founded by Shonda Rhimes.]

According to BookBrowse, in 2015 five million Americans were involved in a book club of some kind. In 2023, it would be no surprise if this number has only grown with the rise of BookTok and Bookstagram, where creating a community with fellow readers is easier than ever. Subsequent BookBrowse research found that the majority of participants in private book clubs were women (88 percent of private book clubs were made up of all women), but at least half of public clubs tended to include men. [...]

Let’s take things back to the first verified book club in North America. As the first printed books in Europe and China were religious texts, the earliest recorded North American book club was more or less a Bible study group. Anne Hutchinson began a scripture reading circle in 1634 during her boat ride from England to the Massachusetts Bay Colony. When her club became more popular than the official local church services, she was exiled from the colony entirely.

Over the next century, book clubs grew increasingly common among middle- and upper-class Europeans, and wealthy colonists adopted the trend in North America. There were as many as a thousand private book clubs in 18th-century England, where people drank, gossiped, and/or discussed radical politics, in addition to the infamous French salons. The French salons were decidedly upper-class gatherings, usually organized by prominent society women, where writers, aristocrats, and artists gathered to talk literature, politics, and philosophy.

Plenty of early American book clubs, like Benjamin Franklin’s Junto club, formed around the same time. Franklin’s club was much more formal than most of today’s book clubs. Members elected officers, were required to write essays on serious topics, and answered a strict set of preestablished questions (though they did eat and drink at the local pub during meetings). [...]

Even though 19th-century book clubs allowed women to take one another seriously in a society that devalued their intellectual contributions, some people still write off today’s book clubs as groups of gossipy women drinking rather than instilling real change or becoming a space for challenging conversations. When the Book of the Month Club was founded in 1926, some people were convinced it would unforgivably “dumb down” American reading. Infamously, Jonathan Franzen got flack for worrying that having one of his books in Oprah’s Book Club would make him seem middlebrow. When one 19th-century woman told her father that she and some friends were starting a literary club to discuss Milton and Shakespeare, he called it “harmless,” dismissing her circle’s potential to do much at all. Her mother noted that it sounded like “women’s rights.” At times, people can be dismissive of any interest deemed too feminine, and women make up 80 percent of fiction buyers. So, while it’s true that book clubs are about community building and socializing as much as anything else, they aren’t classes. And they aren’t meant to be. It’s even okay that some book clubs value entertaining books over literary or nonfiction works, but that doesn’t mean women are reading frivolously. As writer and editor Lucy Shoals notes in a review of English professor Helen Taylor’s work Why Women Read Fiction, not only are women buying more books, but “more women than men are members of libraries and book clubs. Women make up the majority of the audiences at literary festivals and bookshop events. They listen to more audiobooks and attend more literary evening classes. Most literary bloggers are women.”

There's much more at that link.

Erica Ezeifedi, Book Clubs Are Having a Moment, Book Riot, Apr. 16, 2024.

While they’re certainly nowhere near being a new thing — Mikkaka Overstreet gave a nice, brief overview of the history of book clubs, which includes some ancient Greek circles — they are definitely having a moment in pop culture. It feels like everyone and their (famous) momma is starting or restarting a book club. Reese Witherspoon, Jenna Bush Hager, Emma Roberts, Amerie, Dua Lipa, Emma Watson, Florence Welch, and Kaia Gerber all have book clubs. Jimmy Fallon just restarted his book club, and Dakota Johnson introduced the TeaTime Book Club this March.

But, why are book clubs so trendy within the entertainment industry?

My initial instincts point me to TikTok, with its more than 200 billion views, but some of these book clubs predate BookTok’s ascension, like Witherspoon’s, Jenna Bush Hager’s, Amerie’s, and, technically, Jimmy Fallon’s.

So then, what is it?

There are some who say that these entertainment industry book clubs are trying to fill the void left by Oprah’s book club, which, in its heyday, sold 20 million books. Jenna Bush Hager’s and Reese Witherspoon’s respective clubs seem to be most comparable to Oprah’s in terms of influencing book sales, but there’s a slight difference.

For one, Witherspoon’s club seems to be the first step through a pipeline that leads to a movie adaptation — she recently sold her production company, Hello Sunshine, for $900 million. Through the company, Reese has purchased the rights to some of the books chosen as her book club’s monthly selection, and then gone on to sell those rights to companies like Netflix, Apple TV+, Amazon Prime, and others.

Another way these present-day celebrity book clubs differ from Oprah’s — apart from the fact that most of them are run by thin and rich cis white women — is what feels like an obvious quest for clout.

There's more.

Olivia Allen, Why are we all so obsessed with book clubs now? Dazed, February 2024.

Book clubs are currently having a revival, with young people driving their renaissance. Gen-Z-friendly book clubs are popping up all over the world, while celebs like Kaia Gerber and Dua Lipa have jumped on the bandwagon and formed book clubs of their own too.

Young people famously love books: on TikTok, #BookTok has racked up over 220 billion views and there have been many recent reports of us flocking to libraries in search of a third space. Plus, in our increasingly isolating and online world, we’re all in desperate need of a little tangible human connection. It’s no secret we are online too much, spending an average of nine hours a day looking at a screen. In addition, research published by the Prince’s Trust in 2022 found that one-third of young people say they don’t know how to make new friends while 35 per cent say they’ve never felt more alone. With this in mind, it tracks that we’re feeling drawn to in-person meet-ups such as book clubs, which offer us a chance to share our love of books and foster genuine connections offline.

Final paragraph:

As literary clubs like these gain popularity, they reflect a broader societal shift towards intentional and meaningful socialising. The chance to chit-chat about Britney’s biopic or some esoteric Russian prose offers us a welcome respite from another evening of being sucked into the TikTok algorithm or, God forbid, Instagram reels. Although books and topics of discussion may vary from group to group, all these book clubs share a sense of community – and don’t we all need a little more connection in this cold and lonely world?

Neil deGrasse Tyson has an interesting thought experiment about probability & providence

Over the years I've watched a good number of YouTube clips in which Neil deGrasse Tyson talks about this, that, and the other. I've developed a great deal of respect for him. This is a promotional video for a book, Letters from an Astrophysicist (2019).

Here's a blurb that accompanies the video:

Neil deGrasse Tyson joined us to answer our biggest questions on climate change, God, AI and more. [...]

Neil deGrasse Tyson is arguably the most influential, acclaimed scientist on the planet. As director of the Hayden Planetarium, and host of Cosmos and StarTalk, he has dedicated his life to exploring and explaining the mysteries of the universe.

Every year, he receives thousands of letters – from students to prisoners, scientists to priests. Some seek advice, others yearn for inspiration; some are full of despair, others burst with wonder. But they are all searching for understanding, meaning and truth.

His replies are by turns wise, funny, and mind-blowing. In this, his most personal book by far, he covers everything from God to the history of science, from aliens to death. He bares his soul – his passions, his doubts, his hopes. The big theme is everywhere in these pages: what is our place in the universe?

The result is an awe-inspiring read and an intimate portal into an incredible mind, which reveals the power of the universe to start conversations and inspire curiosity in all of us.

OK. All of that and a bag of chips.

Was the moon landing faked? He gets this question a lot, and I've heard a half-dozen replies. His reply here: when you consider what would have had to be done to fake the landing – after all, we actually saw the rockets launch, etc. – actually going to the moon would have been easier than all that fakery.

At about 05:56 he talks about people who see evidence of providence acting in their lives. That's where he gets very clever. He asks us to perform a thought experiment. Get 1000 people, give each a fair coin, and ask them to flip it. Roughly half of them will get tails. Dismiss them, leaving about 500 people. Repeat the process until there is only one person left standing. That person will have gotten heads about 10 times in a row (since 2^10 ≈ 1000). Providence, or statistics? As people watched their neighbors flip tails and be dismissed, what would they have been thinking?
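The arithmetic is easy to check with a quick simulation (my own sketch, not Tyson's; the setup and function name are mine). The last one standing in his tournament is simply whoever runs up the longest streak of heads, so we can let each of 1000 people flip until they get tails and look at the longest run:

```python
import random

def longest_streak(n=1000, rng=None):
    """Each of n people flips a fair coin until they get tails.
    Return the longest run of consecutive heads among them -- this
    is the 'last one standing' in the elimination version."""
    rng = rng or random.Random()
    best = 0
    for _ in range(n):
        run = 0
        while rng.random() < 0.5:  # heads: keep flipping
            run += 1
        best = max(best, run)
    return best

# The winner's streak clusters around log2(1000), i.e. about 10:
# someone always gets ~10 heads in a row by chance alone.
trials = [longest_streak(1000, random.Random(i)) for i in range(100)]
print(sum(trials) / len(trials))
```

Run it a few times: the average winning streak hovers around 10, with no providence required — just 1000 people and a fair coin.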

He goes on to tell us that he's not worried about AI. This was before ChatGPT, but I don't think he's changed his mind.

The Greatest Night in Pop [Media Notes 132]

The Greatest Night in Pop (2024) is a documentary about making “We Are the World” in 1985. Harry Belafonte initiated the project in December of 1984 and it was recorded a month later between January 22 and 28 of 1985.

From Brian Tallerico at Roger Ebert:

After that kind-of-generic VH1 intro segment, when everyone gets to the studio, “The Greatest Night in Pop” lives up to its potential. There’s tons of footage from the night and some great trivia, much of it shared by participants like Sheila E., Bruce Springsteen, Huey Lewis, and Smokey Robinson, who reveals how he talked Jackson out of some bad lyric changes because he was one of the people not scared to stand up to the King of Pop. From Richie’s eating habits to Dylan’s apprehension at the vocal range to changing lyrics in the moment, those who love music process docs will be enraptured. Music bio-docs may be running out of steam, but “The Greatest Night in Pop” works by being specific and enlightening.

Richard Roeper, The Chicago Sun-Times:

With Quincy Jones producing and a handwritten sign saying, “CHECK YOUR EGO AT THE DOOR,” the 46-member super-group knocked out the song over the course of eight hours.

It’s great fun to hear Dionne Warwick’s honey-coated voice meshing with Willie Nelson’s raw but still smooth vocals and to see how Wonder used his incredible mimicry skills to show Bob Dylan how Dylan could contribute his lines. [...]

Looking back all these years later, it’s something of a miracle that, in the days before texts and emails, when you had to communicate by fax and messenger and landline phone calls, so many performers who were used to being the biggest star in the room agreed to get together on relatively short notice and figure out a path to record one of the most impactful singles in music history.

Lionel Richie and Michael Jackson wrote the tune in record time, but, as I’ve already said, the project was initiated by Harry Belafonte. Thus it was only fitting that in the middle of the recording session Al Jarreau started singing “Banana Boat Song (Day O),” the 1956 hit that made Belafonte a star. Everyone joined in and sang along with him. For me, that was the hit of the evening, perhaps because I remember Belafonte from my childhood. When it came time for Bob Dylan to sing his solo part, they cleared the studio so he could do it without onlookers. This film has many such details.

Evaluating the World Model Implicit in a Generative Model

Keyon Vafa, Justin Y. Chen, Jon Kleinberg, Sendhil Mullainathan, Ashesh Rambachan, Evaluating the World Model Implicit in a Generative Model, arXiv:2406.03689v1. 

Abstract: Recent work suggests that large language models may implicitly learn world models. How should we assess this possibility? We formalize this question for the case where the underlying reality is governed by a deterministic finite automaton. This includes problems as diverse as simple logical reasoning, geographic navigation, game-playing, and chemistry. We propose new evaluation metrics for world model recovery inspired by the classic Myhill-Nerode theorem from language theory. We illustrate their utility in three domains: game playing, logic puzzles, and navigation. In all domains, the generative models we consider do well on existing diagnostics for assessing world models, but our evaluation metrics reveal their world models to be far less coherent than they appear. Such incoherence creates fragility: using a generative model to solve related but subtly different tasks can lead it to fail badly. Building generative models that meaningfully capture the underlying logic of the domains they model would be immensely valuable; our results suggest new ways to assess how close a given model is to that goal.

What LLMs can and can't do

Saturday, June 22, 2024

Another Diebenkorn photo, my first

Almost two weeks ago I posted a photo that I’d entitled “Diebenkorn on the table-top.” It was shot through two screen-covered windows, with the camera resting on the top of a table (where I was having breakfast). Richard Diebenkorn was not a photographer. He was a painter who worked mostly in the San Francisco Bay area. I became acquainted with his work in the Summer of 2004 when I was in Chicago. I was there to give a keynote address for the Linguistic Association of Canada and the United States (LACUS), but I also took a bunch of photographs of Millennium Park, which had just opened. Since the Art Institute of Chicago was nearby, I visited it. That’s where I became acquainted with Diebenkorn.

I took this photograph while standing in the garage beneath Millennium Park, which is built over a garage and railroad yards. I didn’t have a car, but I went down into the garage more or less so I could take photos like this. When I got that photo out of my camera, I said to myself: “That looks like a Diebenkorn.” It still does, as does that previous photo. They’re of very different subjects, but, when flattened out, their compositions resemble Diebenkorn’s compositions.

Friday, June 21, 2024

Scaling the size of LLMs yields sharply diminishing persuasive returns

Improvisation and Ives || Fascinating discussion! [American music]

From the YouTube page:

The second installment of the Society's popular series of panels about Charles Ives, All the Way Around and Back! This very entertaining conversation features an in-depth look at the fascinating and even surprising convergences between the music and influence of Charles Ives and the history and current practice of jazz, big band, blues, rock, film music, and pop. Moderated by Judith Tick, the panel includes Jack Cooper, Bill Frisell, Eric Hofbauer, Ethan Iverson, Phil Lesh and David Sanford.

If you don't want to listen to the whole discussion, which is a bit over two hours, start at about 1:33:19, where they play "The Alcotts" from Ives's Concord Sonata. Then listen to the conversation that follows, which ranges from Ives through Beethoven, Keith Jarrett, and John Williams, to parlor piano, on to Thelonious Monk and then to being a church organist, as Ives had been.

Friday Fotos: Chicago, Summer 2004

I bought my first camera, a Canon Powershot A75, to take photos of Chicago's Millennium Park, which had opened in the summer of 2004. But I also took photos of Chicago. These are some of those Chicago shots, though one of the structures in the park is just visible left of center in the background of the second photo.

Oppenheimer [Media Notes 131]

I didn’t see Oppenheimer when it was in theaters, but I’ve just watched it on Amazon. Was it a tad long, at 180 minutes? Possibly. The texture reminded me a bit of Maestro, moving between color and black-and-white, with vision/dream sequences, and quick movement between scenes. I note, however, that while Maestro moved chronologically, Oppenheimer moved around in time, with a security hearing from 1954 functioning as a temporal focal point. Most of the action takes place before that point, but there is a bit after.

Mostly, however, I was struck by rough parallels between events in the film and current controversies about AI. On the one hand there is the theme of existential threat. Creating massive destruction, obviously, is the point of building an atomic bomb. Beyond that, however, Edward Teller had done some preliminary calculations that suggested an atomic explosion might set the atmosphere on fire and thus destroy all life on earth. In the case of AI, I believe that the threat of a rogue AI dominating earth is mostly projective fantasy, leakage from the (Freudian) unconscious world into the public sphere.

Such leakage does, however, lead to a lot of interpersonal jockeying for position and recognition. In the case of the film, the major jockeying is between Oppenheimer and Lewis Strauss, a major bureaucrat in the government security apparatus, but there’s enough to spread around among a half-dozen to a dozen characters. In this scrum, technical and scientific questions become inextricably intertwined with policy and security. The same thing is now happening in AI. The technical and scientific questions are obscure, more so than in the case of the atomic bomb. That obscurity means that those issues will inevitably mix with the questions of social policy and security that are also in play. Everyone who’s visible enough to be mentioned in The New York Times seems to be making a play for the history books. Billions of dollars are being wagered in the process. And you can watch it play out in real-time on X.

It’s crazy. 

I mean, sure, yeah, social forces, whatever. But the insecurities of powerful men, what a trip. Yikes!

Negation via "not" in the brain and behavior

Two related articles about negation, courtesy of Victor Mair at Language Log:

Coopmans CW, Mai A, Martin AE (2024) “Not” in the brain and behavior. PLoS Biol 22(5): e3002656. https://doi.org/10.1371/journal.pbio.3002656

Negation is key for cognition but has no physical basis, raising questions about its neural origins. A new study in PLOS Biology on the negation of scalar adjectives shows that negation acts in part by altering the response to the adjective it negates.

Language fundamentally abstracts from what is observable in the environment, and it does so often in ways that are difficult to see without careful analysis. Consider a child annoying their sibling by holding their finger very close to the sibling’s arm. If asked what they were doing, the child would likely say, “I’m not touching them.” Here, the distinction between the physical environment and the abstraction of negation is thrown into relief. Although “not touching” is consistent with the situation, “not touching” is not literally what one observes because an absence is definitionally something that is not there. The sibling’s annoyance speaks to the actual situation: A finger is very close to their arm. This kind of scenario illustrates how natural language negation is truly a product of the human brain, abstracting away from physical conditions in the world.

And here is the study:

Zuanazzi A, Ripollés P, Lin WM, Gwilliams L, King J-R, Poeppel D (2024) Negation mitigates rather than inverts the neural representations of adjectives. PLoS Biol 22(5): e3002622. https://doi.org/10.1371/journal.pbio.3002622

Abstract: Combinatoric linguistic operations underpin human language processes, but how meaning is composed and refined in the mind of the reader is not well understood. We address this puzzle by exploiting the ubiquitous function of negation. We track the online effects of negation (“not”) and intensifiers (“really”) on the representation of scalar adjectives (e.g., “good”) in parametrically designed behavioral and neurophysiological (MEG) experiments. The behavioral data show that participants first interpret negated adjectives as affirmative and later modify their interpretation towards, but never exactly as, the opposite meaning. Decoding analyses of neural activity further reveal significant above chance decoding accuracy for negated adjectives within 600 ms from adjective onset, suggesting that negation does not invert the representation of adjectives (i.e., “not bad” represented as “good”); furthermore, decoding accuracy for negated adjectives is found to be significantly lower than that for affirmative adjectives. Overall, these results suggest that negation mitigates rather than inverts the neural representations of adjectives. This putative suppression mechanism of negation is supported by increased synchronization of beta-band neural activity in sensorimotor areas. The analysis of negation provides a steppingstone to understand how the human brain represents changes of meaning over time.

I've not yet read the articles, but the issue has bothered me for a long time. Why? Because, to quote: "Negation is key for cognition but has no physical basis, raising questions about its neural origins."