
Thursday, November 30, 2023

Whoops! Is thinking fast/slow out of control?

Leif Weatherby, You Can’t Nudge Human Behavior Into Perfection, NYTimes, Nov. 30, 2023:

Behavioral economics is at the center of the so-called replication crisis, a euphemism for the uncomfortable fact that the results of a significant percentage of social science experiments can’t be reproduced in subsequent trials. Nudges are related to a larger area of research on “priming,” which tests how behavior changes in response to what we think about or even see without noticing. One of the foundational experiments for priming showed that undergraduates walked out of the lab more slowly after focusing on words associated with old people, like “bingo” and “Florida.” But this key result was not replicated in similar experiments, undermining confidence in a whole area of study. It’s obvious that we do associate old age and slower walking, and we probably do slow down sometimes when thinking about older people. It’s just not clear that that’s a law of the mind.

It also turns out that people without any scientific training are good at correctly guessing or betting on which studies can’t be replicated based only on their descriptions. What some people claim is science might just be common sense dressed up in bad data.

Even Dr. Kahneman is affected by the replication crisis. Outside researchers have found that Dr. Kahneman’s best-selling book “Thinking, Fast and Slow” relies heavily on studies that may not be replicable — troubling given that this widely read book popularized behavioral economics. It’s now everywhere, from what kind of medical care you receive to what your workplace atmosphere is like.

And these attempts to “correct” human behavior are based on tenuous science. The replication crisis doesn’t have a simple solution. Journals have instituted reforms like having scientists preregister their hypotheses to avoid the possibility of results being manipulated during the research. But that doesn’t change how many uncertain results are already out there, with a knock-on effect that ripples through huge segments of quantitative social science. The Johns Hopkins science historian Ruth Leys, author of a forthcoming book on priming research, points out that cognitive science is especially prone to building future studies off disputed results. Despite the replication crisis, these fields are a “train on wheels, the track is laid and almost nothing stops them,” Dr. Leys said.

There's much more at the link.

What AI experts think about the possibility of AI Doom

The OpenWHALE Rounds the Horn and Vanishes, An Allegory about the Age of Intelligent Machines

In less than a week I’ll be posting a new essay to 3 Quarks Daily. The current working title: “Investing in AI is like buying shares in a whaling voyage helmed by a man who knows all about ships and nothing about whales.” I expect to keep the whaling theme, though the exact wording may change. Since I take the whaling reference from Marc Andreessen, I thought I’d explicate the current regime change at OpenAI in whaling terms. To that end I’ve summoned the yarn-craft, the word-weaving, the verbal tap-dancing, and virtuoso bullshittery of the inimitable, but indefinitely replicable, ChatGPT.

BTW, ChatGPT's habit of packaging everything it can in the form of a numbered list has been bugging me for a while. But it does lend a certain Melvillian touch to this story, though it's more like Herbert Melville than his more accomplished cousin, Herman. But that Herman did love to run on and on with the lists and the analysis. It was a marvel.

As for Biff Roddenberry, that was an inspired move by the old Chatster. He knew he couldn’t grasp the nuances of Gene’s way with dialog. So he created Biff and blamed the Star Trek elements on him.


Scrimshaw Among the Stars: "Thar she blows!"

The OpenWHALE is seven weeks out of Nantucket and approaching the Cape of Horn by way of the Kessel Run. The seas are higher than a Saturn V, the wind’s blowing 90 knots going on 100, and the rain’s wetter than a guppy’s tail. Science Officer Starbuck is talking to First Mate Scotty on the poop deck.

“Can you lay on more sail, Scotty? Cap’n AhLab says we gotta’ shoot the Moon if we’re to survive these wicked seas.”

“I dunna’ know, Mr. Starbuck, she’s already flyin’ more sails than a pig has wings. The main mast is cracked, the fore topgallant is fixen’ to blow, the keel’s had an unplanned disassembly, and the jibs are jabberin’ like a barrel full o’ monkeys. But I’ll do what I can.”

The wind howled through the rigging of the OpenWHALE, making it sound like a banshee in the midst of a storm. The ship creaked and groaned as it rode the monstrous waves, each one threatening to swallow it whole.

Science Officer Starbuck gripped the rail, his eyes squinting against the driving rain. "We're in for a rough ride, Scotty. If we don't make it past the Cape of Horn, we'll be dancing with Davy Jones before we know it. Cap'n AhLab's orders are clear – shoot the Moon or face the abyss."

First Mate Scotty, his weathered face etched with lines of concern, nodded grimly. "Aye, Mr. Starbuck, I'll have the lads workin' double-time. But I won't lie to ye, the OpenWHALE's taken a beatin' like she's never seen. This ain't no ordinary storm; it's like the devil himself is breathin' down our necks."

As Scotty barked orders to the crew, men scrambled to secure lines and reef sails. The ship pitched and rolled with each monstrous wave, testing the mettle of even the hardiest sailors. The main mast groaned under the strain, the fore topgallant threatened to snap like a matchstick, and the jibs flapped wildly, protesting the abuse they were enduring.

Starbuck, his mind racing faster than the ship, checked the instruments on the poop deck. "Scotty, we need every bit of speed we can muster. If we don't make it through the Kessel Run, those seas will make a graveyard of us."

"Aye, Mr. Starbuck, but I canna' change the laws of physics. We're pushin' the old girl to her limits, and she's showin' it," replied Scotty, his eyes reflecting a mix of determination and worry.

Just then, a lookout's shout cut through the roar of wind and waves. "Land ho! Cape of Horn dead ahead!"

Starbuck squinted through the driving rain, his heart sinking as he saw the ominous silhouette of the Cape on the horizon. "Scotty, we're in the belly of the beast now. Brace yourselves, and may the sea gods be with us."

The OpenWHALE, battered but defiant, plunged forward into the maw of the Cape of Horn, each wave a reminder of the relentless power of the southern seas. The crew, faces etched with a mix of fear and determination, held fast to their duties as they sailed on, chasing the elusive safety beyond the storm.

How did they recover from the unplanned disassembly of the keel?

The unplanned disassembly of the keel was a dire situation for the OpenWHALE, as the keel is a critical structural component that provides stability and strength to a ship. Recovering from such a mishap at sea would be a formidable challenge for the crew, requiring resourcefulness and quick thinking.

1. Assessment of Damage: Upon discovering the keel's disassembly, the crew, led by the experienced First Mate Scotty, would have immediately assessed the extent of the damage. Understanding the severity of the situation was crucial in determining the best course of action.

2. Emergency Repairs: With the ship in the midst of a violent storm and the Cape of Horn looming, the crew would have worked swiftly to make emergency repairs. This might involve using whatever materials were on hand to secure the keel and prevent further disassembly. Ropes, chains, spare planks, and other available resources would be pressed into service to stabilize the keel as much as possible.

3. Improvised Reinforcements: The crew, under Scotty's direction, would likely have improvised reinforcements to provide additional support to the compromised keel. This could involve securing makeshift braces or bindings to hold the keel together and distribute the stress more evenly.

4. Utilizing Whaling Equipment: Whaling ships were equipped with a variety of tools and materials for processing whales. Some of these tools, such as heavy chains and harpoon lines, could be repurposed to provide additional structural support to the damaged keel.

5. Crew Collaboration: The entire crew, from seasoned sailors to greenhorns, would have been mobilized to contribute to the repair efforts. Teamwork and coordination would be crucial, with each member playing a role in implementing the emergency fixes.

6. Continuous Monitoring: Even with the makeshift repairs, the crew would need to continuously monitor the situation. Stormy seas and powerful waves could put immense strain on the damaged keel, and adjustments or additional reinforcements might be necessary as the ship navigated through the rough waters.

7. Adaptability and Innovation: Navigating a whaling ship through the Cape of Horn with a compromised keel would require adaptability and innovation. The crew, including Science Officer Starbuck, might come up with creative solutions on the fly to address new challenges that arose during the voyage.

Surviving such a predicament would demand not only the skills of the sailors but also a bit of luck and a heavy dose of perseverance. The crew's ability to collaborate and innovate under extreme conditions would determine whether the OpenWHALE could overcome the challenges posed by the unplanned disassembly of the keel and successfully navigate through the stormy waters around the Cape of Horn.

What was Captain AhLab doing during this emergency?

Captain AhLab, recognizing the severity of the situation with the unplanned disassembly of the keel and the impending dangers of navigating through the Cape of Horn in the midst of a violent storm, would likely be deeply involved in several critical aspects of managing the crisis.

Wednesday, November 29, 2023

Tuesday, November 28, 2023

Distinctive features in phonology and "polysemanticity" in neural networks

Scott Alexander has started a discussion of a recent mechanistic interpretability paper over at Astral Codex Ten: Towards Monosemanticity: Decomposing Language Models With Dictionary Learning. In response to a comment by Hollis Robbins I offered these remarks:

Though it is true, Hollis, that the more sophisticated neuroscientists long ago gave up any idea of a one-to-one relationship between neurons and percepts and concepts (the so-called "grandmother cell"), I think that Scott is right that "polysemanticity at the level of words and polysemanticity at the level of neurons are two totally different concepts/ideas." The idea of distinctive features in phonology offers a much better model.

Thus, for example, English has 24 consonant phonemes and between 14 and 25 vowel phonemes depending on the variety of English (American, Received Pronunciation, and Australian), for a total between 38 and 49 phonemes. But there are only 14 distinctive features in the account given by Roman Jakobson and Morris Halle in 1971. So, how is it that we can account for 38-49 phonemes with only 14 features?

Each phoneme is characterized by more than one feature. As you know, each phoneme is characterized by the presence (+) or absence (-) of a feature. The relationship between phonemes and features can thus be represented by a matrix having 38-49 columns, one for each phoneme, and 14 rows, one for each feature. Each cell is then marked +/- depending on whether or not the feature is present for that phoneme. Lévi-Strauss adopted a similar system in his treatment of myths in his 1955 paper, "The Structural Study of Myth." I used such a system in one of my first publications, "Sir Gawain and the Green Knight and the Semiotics of Ontology," where I was analyzing the exchanges in the third section of the poem.
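As a quick sketch of that matrix idea, here is a toy phoneme-by-feature table in Python. The feature names and the +/- assignments are loose illustrations of mine, not a faithful rendering of Jakobson and Halle's 1971 system:

```python
# Toy distinctive-feature matrix: one row per feature, one column per phoneme.
# Feature values below are illustrative, not Jakobson & Halle's actual analysis.
features = ["voiced", "nasal", "continuant", "labial"]

phonemes = {
    # phoneme: tuple of +/- (True/False) values, one per feature above
    "p": (False, False, False, True),
    "b": (True,  False, False, True),
    "m": (True,  True,  False, True),
    "t": (False, False, False, False),
    "d": (True,  False, False, False),
    "s": (False, False, True,  False),
    "z": (True,  False, True,  False),
}

# Every phoneme must get a distinct column of +/- values:
assert len(set(phonemes.values())) == len(phonemes)

# k binary features distinguish at most 2**k phonemes, so 14 features
# comfortably cover 38-49 phonemes: 2**14 == 16384.
assert 2 ** 14 >= 49
```

The arithmetic in the final comment is the answer to the question above: a small feature inventory spans a much larger phoneme inventory because each phoneme is a combination of features, not a single feature.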

Now, in the paper under consideration, we're dealing with many more features, but I suspect the principle is the same. Thus, from the paper: "Just 512 neurons can represent tens of thousands of features." The set of neurons representing a feature will be unique, but it will also be the case that features share neurons. Features are represented by populations, not individual neurons, and individual neurons can participate in many different populations. In the case of animal brains, Karl Pribram argued that over 50 years ago and he wasn't the first.
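A toy population code makes that arithmetic vivid. In this sketch each feature is assigned a small random subset of neurons; the population size of 5 is my arbitrary assumption for illustration, not a figure from the paper:

```python
import math
import random
from collections import Counter

N_NEURONS = 512
POP_SIZE = 5  # neurons per feature population; an arbitrary illustrative choice

# Number of distinct 5-neuron populations available from 512 neurons:
n_codes = math.comb(N_NEURONS, POP_SIZE)
assert n_codes > 10**11  # vastly more possible codes than neurons

# Draw 10,000 feature populations at random; collisions are vanishingly rare.
random.seed(0)
feature_codes = [frozenset(random.sample(range(N_NEURONS), POP_SIZE))
                 for _ in range(10_000)]
assert len(set(feature_codes)) > 9990  # essentially all codes are unique

# Each neuron participates in many different feature populations:
membership = Counter(n for code in feature_codes for n in code)
assert max(membership.values()) > 1
```

That is the sense in which features share neurons: codes are combinatorial, so capacity grows with the number of ways to choose a population, not with the number of neurons.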

Pribram argued that perception and memory were holographic in nature. The idea was given considerable discussion back in the 1970s and into the 1980s. In 1982 John Hopfield published a very influential paper on a similar theme, "Neural networks and physical systems with emergent collective computational abilities." I'm all but convinced that LLMs are organized along these lines and have been saying so in recent posts and papers.

* * * * * 

Addendum: Another possible example: I use 655+ tags here at New Savanna for over 9600 posts. Some posts have only one or two tags, and some have a half dozen or more. Note, however, that I don't intend that a post's tag set function as an identifier for the post, or as a proxy for an identifier. But the tags do characterize the posts in some way.

The study of rhetoric as a route to cognitive empathy?

John Bowe, An Ancient Solution to Our Current Crisis of Disconnection, NYTimes, Nov. 27, 2023.

While rhetoric had its detractors, starting with Plato, Bowe points out that it was

the cornerstone of education until the 1700s.

Across Western Europe, students from about the age of 12 onward learned logic, social skills, critical thinking and speech techniques as a single, integrated discipline by means of a 14-step verbal and cognitive curriculum known as the progymnasmata.

Exercises began with simple recitations and enactments of fables and short stories. Later drills trained students to compose and deliver short speeches of praise and blame and, eventually, long discourses on complex themes. By writing with the intent of performing for others (rather than writing objectively for the page), students learned the art of blending fact with opinion. By mastering the techniques of persuasion, students became proficient at spotting others’ manipulative use of language.

Bowe goes on to argue:

My interest in rhetoric began in 2010, during a chat with my extremely reclusive Iowa step-cousin. He’d lived alone until the age of 60 in his parents’ basement with no friends, no girlfriends, then surprised the entire family by meeting someone and getting married. I asked him how he’d mustered up the courage to approach his future wife, given the depths of his isolation. “I joined the Toastmasters,” he said, referring to what is likely the world’s largest organization devoted to teaching public speaking. He’d never seen a therapist or taken meds. One or two dozen hours of speech training changed his entire life.

I’ve since learned that this is what speech training does. When speakers put themselves in their listener’s place, they find it easier to explain themselves. The confidence that we can make ourselves known and understood is transformative.

Apparently, scientists agree. Hannah Hobson, a lecturer in psychology at the University of York who has studied the connections among language, communication and mental health, especially among neurodiverse youth, has found repeatedly that the inability to express feelings or ask for help can often correlate with existing or developing mental health issues among youth. Conversely, she told me, improved communication skills correlate with youngsters’ emotional development and mental well-being.

Bowe doesn't elaborate on that part, putting yourself in your listener's place, but that's also called cognitive empathy, something Robert Wright is interested in. He even believes that it can save the world. Well, it's more like he believes that it's necessary, though likely not sufficient.

Fabula and Syuzhet in the Tristram Shandy Handbook

Sean Yeager has just published an article in the Journal of Cultural Analytics that's germane to this post, “Time Maps: Theory and Method.” It is about time maps, “the graphs which are produced by plotting a narrative’s fabula against its syuzhet.” Consequently, I'm jumping this post (from 2015) to the head of the queue.

To my knowledge there is no Tristram Shandy Handbook, nor a handbook for any other literary text. What do I mean by handbook? I’m imagining a single source containing consensus information about a given text. That source might be hardcopy or, these days, online. In the case of minor texts, texts that have received little study, the amount of information would not warrant a single volume of its own and so would be bound into a volume with such information about other texts. This issue, of course, does not exist for online texts, which can be of any magnitude. Tristram Shandy, of course, is not a minor text.

It is one of the central texts of the Western canon and has been subject to decades of study. Students new to the work can select from I don’t know how many casebooks and study guides, and many of those study guides are online. Some of the materials that belong in this hypothetical handbook can be found in those casebooks and study guides. The idea of a handbook is to collect those materials in one place that is accessible to all.

One thing that I would like to see in a Tristram Shandy Handbook would be a complete mapping between the events Sterne narrates listed in chronological order and the actual order in which they appear in the text. In many narratives these two orderings are the same. But not in Tristram Shandy. Early in the 20th century the Russian Formalist critics used the terms fabula (chronological) and syuzhet (textual) to designate these two orders. Tristram Shandy is an extreme example of their divergence [1].

The ordering of events is a standard topic in Tristram Shandy criticism and, judging by what I’ve found through a bit of Googling, bits and pieces of chronology have been worked out here and there and perhaps, perhaps the whole thing. But the relationship between that text and the chronology is nonetheless obscure. Can anything be done to clarify it?

A Problem of Description

This is a problem of description, one of my favorite hobbyhorses [2]. What does it mean to describe the relationship between fabula and syuzhet in a text as complicated as Tristram Shandy? I don’t know.
 
Let me explain. I have found, but not really read, an article from 1936:

Baird, Theodore. “The Time-scheme of Tristram Shandy and a Source”. PMLA 51.3 (1936): 803–820. DOI: 10.2307/458270

After a page and a half of introduction Baird runs through the chronology, from 1689 (Trim joins the army) to 1750 (Yorick’s sermon on conscience), saying more or less what happens in the year. References to the text of TS are given by volume and chapter in footnotes. I found that article in a recent dissertation:

Duncan W. Patrick. Libertinism and Deism in Tristram Shandy and Other Writings of Laurence Sterne. Dissertation. Department of English, Leicester University, 2002. URL: https://lra.le.ac.uk/bitstream/2381/30274/1/U162249.pdf

That dissertation includes a chronology as a five-page appendix running from 1509 (the Shandy family ranked high at the time of Henry VIII) through 1768 (Sterne’s death). The chronology is in the form of a table, with the dates on the left and the gloss, including references to the text (again by volume and chapter), on the right. I assume that Patrick’s chronology includes Baird’s.

First, given a table like Patrick’s, I think, as a matter of principle, that it would be useful to have the same information organized in a table by textual order so that we can examine the temporal structure of each volume independently. Second, these chronologies give no sense of the detailed structure of the text. THAT’s the big descriptive problem, and I don’t know how to solve it. It’s not merely that I don’t know what such a thing would look like, how it would work, but I don’t know what kind of effort would be required to create such a facility.

For I pretty much assume that that’s what it would be, some kind of online facility based on the complete texts. Beyond that… I have this vague idea that, if I were on the faculty of some appropriate graduate school, me and a half dozen graduate students, some with computer skills that I don’t have, could get a sense of the problem in a semester’s work. We might even be able to produce a crude prototype of such a facility.

At this point you might be wondering: If it’s going to take that much work, will the results be worth the effort? I feel pretty sure that they will, but of course there’s no guarantee. At this point my confidence is based on a fair amount of experience in describing literary texts and films. Something interesting always turns up, always. Still, as I’ve said, there’s no guarantee. Given that I’ve already written a great deal about description, I see no need to repeat that here [2].

Instead, I thought I’d take a small step toward dealing with the first problem I mentioned, ordering the dates by order of appearance in the text. There’s quite a bit of Shandy material online [3], including the University of Milan’s Tristram Shandy Web, which contains an online facsimile text along with supporting materials of various kinds [4]. Among those is a relatively short chronology, URL: http://www.tristramshandyweb.it/sezioni/TS/johnson.htm

I have taken that chronology and 1) placed it into a table and then, 2) sorted the table into order by volume and chapter. Those tables constitute the next two sections of this post, with a short list of references at the end. Given that second table it was an easy matter to produce the following list:

Volume 1:       1658-1761
Volume 3:       1713
Volume 5:       1689-1723
Volume 6:       1706-1717
Volume 7:       1762-1764
Volume 8:       1693-1695
Volume 9:       1713-1766

Notice that Volumes Two and Four don’t appear at all and that Volume One has the widest range, but that both Volumes Six and Nine contain dates more recent than the most recent one given for Volume One. None of the other volumes give a date earlier than the earliest one in Volume One. Moreover, I assume that the dates are only those for events directly related to Tristram and his close associates. For there are earlier incidents recounted in the book.

Given the sparseness of the data I don’t think much of anything can be concluded from this. I’m just trying to get a feel for the material by doing what I can.
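For what it's worth, the two-step resort described above (put the chronology into a table, then sort it by volume and chapter) is mechanically simple. Here's a minimal sketch; the sample entries and chapter references are illustrative placeholders of mine, not verified against the text or Johnson's chronology:

```python
from collections import defaultdict

# Each chronology entry pairs a fabula date with a syuzhet reference
# (volume and chapter). These entries are illustrative placeholders.
entries = [
    {"year": 1718, "event": "Tristram is born", "volume": 1, "chapter": 5},
    {"year": 1689, "event": "Trim joins the army", "volume": 5, "chapter": 21},
    {"year": 1762, "event": "travels in France", "volume": 7, "chapter": 1},
]

# Step 2: sort the table into textual (syuzhet) order by volume and chapter.
by_text = sorted(entries, key=lambda e: (e["volume"], e["chapter"]))

# From the sorted table, derive per-volume date ranges like the list above:
ranges = defaultdict(list)
for e in by_text:
    ranges[e["volume"]].append(e["year"])
for vol in sorted(ranges):
    years = ranges[vol]
    print(f"Volume {vol}: {min(years)}-{max(years)}")
```

Given a full table like Patrick's, the same two lines of sorting would yield either ordering, fabula (by year) or syuzhet (by volume and chapter), which is the mapping the handbook would want.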

One man in 200 is descended from Genghis Khan [Kubla was his grandson]

Computational complexity is irrelevant for LLMs

Sunday, November 26, 2023

Music and Meaning in an Age of Intelligent Machines [Keil, Yes! Thiel, No!]

Tara Isabella Burton has recently published an essay in which she critiques the ideas of Peter Thiel, a venture capitalist, billionaire, and prominent Silicon Valley ideologue. I’m going to use her critique as a vehicle for arguing that we are going to need more music-making if we are to survive and thrive in this age of intelligent machines. First, her presentation and critique of Thiel. Second, I’ll say something about my experience of playing in a band half a lifetime ago. Then I let Charlie Keil make a case for the vital importance of music. Finally, I let Miss Sutan take us home with a Stevie Wonder song.

Thiel’s techno-vitalism is a dead end

The title of Burton’s essay, “The Temptation of Peter Thiel,” is an obvious play on Thiel’s messianic and apocalyptic themes. She begins:

Peter Thiel thinks we should all be less worried about the apocalypse. In a November 3 speech, given at the Novitate Conference commemorating the birth of the French philosopher Rene Girard, the influential billionaire investor painted a portrait of the end of human existence at once terrifying and, in Thiel’s highly aestheticized telling, more than a little tantalizing. Condemning our present “zombie period” of neoliberal modernity as a “low testosterone world” of dysfunctional “modern gender dynamics” and nihilistic solipsism of “amus[ing] ourselves with memes and TikTok videos,” Thiel offered listeners another option.

Reject the “peace and safety of the Antichrist”—which for Thiel means a “one-world totalitarian state”—in favor of “nihilistic action.” Peppering his speech, titled “Nihilism is Not Enough,” with references as wide-ranging as sixteenth-century English scientist Francis Bacon to the controversial Weimar-era political theorist Carl Schmitt, Thiel concluded in a bombastic vision of cosmic collapse: “Silence has descended upon the earth as if an angel were about to open the seventh and last seal of an apocalypse.”

Just what is one to say in the face of that? Burton has quite a bit to say, noting in passing that “Thiel has wholly or partially bankrolled—among other projects—Hulk Hogan’s defamation lawsuit against Gawker, the Trump campaign, an anti-woke film festival, a Catholic religious app, a libertarian seasteading program, and transhumanist initiatives to hack human biological limitation,” and is a dedicated contrarian. His mission is a religious one, though “it is not a Christian one.” Rather it is “a kind of Nietzschean techno-vitalism: a faith not in genuine ideals but in their power to shape and subdue a fundamentally stupid and innately violent populace.” While she has interesting things to say about that vision, I want to cut to the chase.

Four paragraphs from the end she says:

What makes Thiel’s vision either appealing or dangerous, depending on your perspective, is that he is in fact on to something. He understands that much of modern life is alienating and boring, that we have lost a sense of meaning and purpose in civic life, that there are truths about the profundity of human existence that life in America in 2023 seems designed to obscure. [...] He has been successful and influential, in part, by finding and funding well-meaning, thoughtful, and intelligent people who correctly sense that something needs to change, and are seeking the tools and vocabulary to elucidate what and how. If he controls the memes, it’s because he has an uncanny gift for reading, as it were, the vibe shifts of the past decade. Many of us are not just hungry for, but desperate for, a world where things matter.

That’s what I want to talk about. Here is her conclusion:

... next: one rooted not in the frisson of transgressive aesthetics but in a robust quest for the best of what technology can offer. We need prophets of technological change capable of envisioning a positive relationship between human potential and human goods: rooted in awareness that all of us, “techno-mage” or “NPC”, are at our core vulnerable and interdependent beings.

The quest for modern meaning Thiel has consistently fostered—and funded—is a worthy one. But the accelerationalist techno-vitalism of Thiel’s speech is, ultimately, a dead end.

Those were the days, gigging with the band

Not so long ago I spent a great deal of time over two weeks writing an essay about a series of events in which I participated over forty years ago, On the Road with the Out of Control Rhythm and Blues Band. These events didn’t bring me fame or fortune, although they did bring a bit of money. While the money was important (it was enough to cover my food bills for several years), it wasn’t the most important thing about those events. Those events were meaningful.

Did they give me “a sense of meaning and purpose in civic life”? I don’t know, maybe, but those terms seem a bit strained when applied to the activity of making music in bars and clubs on weekends. Whatever it is that happened on those weekends – there are various ways of talking and theorizing about it – I have remembered it for over half a lifetime. That’s what’s important.

During the process of writing that essay I spent an hour or three – spread out over two weeks – exchanging emails with Ken Drumm, one of my bandmates. We’ve only seen one another once since I left upstate New York; that was when the band had a gig in Manhattan and I went to see them. But we keep in touch through email. When I told Ken I was writing the essay, he helped me remember what we did, and he went into his basement and searched through boxes of band memorabilia. He even talked about writing a book together. Probably not, but if Steven Spielberg wants to make a movie, I’m sure Ken would be glad to serve as a technical advisor – for a fee of course. (Me too.)

Those evening and late-night gigs were often fun, at times even joyful, for me and the other musicians, and for the people who listened to and danced to our music. We, all of us, weren’t alienated and we weren’t bored. We were excited, delighted, and engaged. As one of my band mates, Rick Rourke, once said: “It’s the closest you can get to really feeling totally happy with yourself.”

That’s music.

Charlie Keil Pleads the Case for Music

Here’s how Charlie Keil could respond to Thiel:

But more and more people are seeking spirituality, humility, 12 steps out of their addictions, tech fixes, frenzies, anxieties. Buddhism of all kinds is growing, perhaps not as fast as it was before Sept. 11th but heart softening continues, compassion increases, joy grows steadily for those who find their way to the dharma in the Thich Nhat Hanh and Pema Chodron books and go on to build sangas and practices. Find the Earth Charter. Endorse it. Spread it. The Green Party will mature and expand. Broad religious and political movements worldwide are converging on species diversity, cultural diversity, ecological balance, sustainable economics as reachable goals. This “hubris” wheel is slowing and must stop soon. Can it be reversed in time to stop “technology”?

The “technology” wheel grinds on. Money, Hakim Bey's triumphant “numisphere,” pours legitimacy into weird science & tech as never before. People who should know better still believe in better living through chemistry. Bioengineering runs amok and may do us all in just as the “population” and “hubris” problems are being resolved. This is the wheel that still has momentum and causes much confusion in our minds. Many people think they can or must absorb the rising levels of pollution. Many people think most of TV is bad “but some of it is really quite good”. Many of our leaders still think war is needed to push along technology; a big majority shuts up and goes along. Lots of craziness here and I think this is the wheel that should be the focus of editors and op-ed writers and readers in all the coming issues and podcasts because I can’t say that the “technology” wheel is slowing and will stop soon. This is where any single terrorist or group of terrorists can find a growing arsenal of “easy to use weapons.”

That’s from his essay, “Reclaiming our species being: Homo Ludens collaborans,” in our recent book, Playing for Peace (see ad below). No doubt Thiel would reject it, probably couldn’t even see it, but then, what does he know? Systems, of a sort, I grant him that. People? I’m not so sure.

Just as Thiel has his plans, Big Plans, Big Tech, Big Bucks, so Charlie has his plans, for people, for children. In particular, for more music for children. One of his plans is called “Paidea Con Salsa,” also in Playing for Peace. The general idea is to create and disseminate a primary-school curriculum in Latin drumming and dancing traditions. Here’s how we begin:

In the first year, discover how these styles are learned in situ, in the communities. Do children normally dance a style before they learn to drum it? In what order is it easiest to learn the percussion instruments? How do conga players gradually move from holding patterns to improvisation in each idiom? And so forth. We may already know the basic parts, but we need to perfect our understanding of sequential acquisition and traditional processes of cultural transmission. We might also want to deepen our understanding of roots and contexts with some field trips to centers like New York City, Puerto Rico, and Cuba.

Then we develop pilot projects in a variety of schools along with a series of summer institutes for teacher training.

While Charlie’s more skeptical about technology than I am, he’s right about the need for music, not music we merely listen to (no matter how good that music may be), but music we make ourselves, with our friends and family. In my book on music, Beethoven’s Anvil (Basic Books 2001), I have argued at great length that it is music that allowed bands of clever apes to remake themselves into human beings. Music has been and is important in every society throughout history and around the world. Up until the twentieth century you couldn’t listen to or dance to music unless you were in the presence of the musicians making the music. While sound recordings have made a wealth of music available to all, especially through sources on the web, like YouTube, that wealth cannot nurture us deeply unless we also make music with others. If we cannot recover our capacity for making music, and the expressive arts more generally, all the technology Thiel would foist upon us will kill us.

Don’t You Worry About a Thing

Children love music, and they love to dance. Let me present one final passage from Keil, yes, from the same book (Chapter: “Peace and Joy Unlimited”):

The good news is that children want most of all to be in synch around 8-11 months old: they want to sing ‘vocables’ in tight synch on cue; “crow” & “squeal” and explore the sound spectrum; have lots of the legfun, mouthfun and peekaboo that we adults call ‘poetry;’ want to bang on things; do call and response ‘razzberries’ with their lips; to nuzzle and be nuzzled; to be bounced and dandled in time to music, etc. etc. etc. You know how toddlers about to toddle are; you were one once yourself. Do you remember using your members flexibly, wildly, crazily?

Here's my final argument: a three-year-old Japanese girl, Sutan, using asparagus as a microphone while she sings “Don’t You Worry About a Thing” as her mother prepares food:

 

Utagawa Hiroshige, Eight Shadow Figures, a commentary on Plato

Sabine Hossenfelder: AI is systemic change, a huge one at that.

Her tweet:

We need to stop obsessing about the perceived shortcomings or virtues of individuals. The relevant changes in human society are systemic. Individuals rise or fall in that system. If it's not one of them, it'll be another.

The real stories aren't Trump or Thunberg or Musk, the real story is the system that made those people rise to popularity and power.

This is why AI is such an important development -- it's a systemic change, and a huge one at that.

Trump too should be seen as a symptom of a disease and not a cause. It's partly a problem specific to the US American financial and political system, but partly he is symptomatic of a general problem with democracies that they're too slow and unresponsive for modern times.

Thunberg was a condensation seed at the right place at the right time for a community that was looking for a focus. If it hadn't been her, it'd have been someone else.

It makes no sense to psychoanalyse and complain about people in those positions, it'll not make any difference in the grand scheme of things.

ChatGPT writes a sonnet on recursion based on a text by St. Augustine [AI theory]

Do you recognize this passage?

The power of memory is prodigious, my God. It is a vast immeasurable sanctuary. Who can plumb its depths? And yet it is a faculty of my soul. Although it is part of my nature, I cannot understand all that I am. This means, then, that the mind is too narrow to contain itself entirely. But where is that part of it which it does not itself contain? Is it somewhere outside itself and not within it? How, then, can it be part of it, if it is not contained in it?

Yes, I recognize that passage. It is from the book "Confessions" (Confessiones) by Saint Augustine, a renowned Christian theologian and philosopher. The passage reflects Augustine's contemplation on the nature of memory and the complexity of self-awareness. "Confessions" is an autobiographical work in which Augustine reflects on his life, his spiritual journey, and his relationship with God.

What part of Augustine's Confessions is it from?

I apologize for the confusion in my previous response. The passage you provided doesn't seem to be from Augustine's "Confessions." I don't have specific information about the origin of that particular passage in my training data. If it's from a different work or author, I would need more context to identify it accurately. If you have more details or if there's a specific context you're looking for, please provide additional information, and I'll do my best to assist you.

Ah, but it is from the Confessions, Book Ten, Chapter 8, section 15.

I appreciate the clarification. Thank you for providing the specific details. The passage indeed comes from Augustine's "Confessions," and it appears in Book Ten, Chapter 8, section 15. Augustine's "Confessions" is a rich and introspective work, and this particular passage reflects his deep contemplation on the nature of memory and self-awareness. If you have any more questions or if there's anything else you'd like to discuss, feel free to let me know!

Using that passage as a source of thematic material, how about writing a sonnet on the theme of recursion.

In memory's vast and prodigious sea,
A sanctuary where the echoes play,
Recall, my God, the depths that stretch away,
A mind too narrow, seeking to be free.

Within its bounds, the soul's own mystery,
A dance of thoughts that never seem to stay,
Yet part of nature, in a curious way,
It questions what it holds, what cannot be.

The mind, a mirror, seeks its own embrace,
A recursive loop, a boundless quest,
To fathom depths that it cannot erase.

Does it contain itself, a paradox confessed?
Yet in this dance, where boundaries efface,
The soul finds truth, in questions unaddressed.

Wonderful!

I'm glad you enjoyed it! If you have any more requests, questions, or if there's anything else I can help you with, feel free to let me know. I'm here to assist!

Orca 2 joins the open source movement in AI

Here's the abstract from the Orca 2 paper referenced in the video:

Orca 1 learns from rich signals, such as explanation traces, allowing it to outperform conventional instruction-tuned models on benchmarks like BigBench Hard and AGIEval. In Orca 2, we continue exploring how improved training signals can enhance smaller LMs’ reasoning abilities. Research on training small LMs has often relied on imitation learning to replicate the output of more capable models. We contend that excessive emphasis on imitation may restrict the potential of smaller models. We seek to teach small LMs to employ different solution strategies for different tasks, potentially different from the one used by the larger model. For example, while larger models might provide a direct answer to a complex task, smaller models may not have the same capacity. In Orca 2, we teach the model various reasoning techniques (step-by-step, recall then generate, recall-reason-generate, direct answer, etc.). Moreover, we aim to help the model learn to determine the most effective solution strategy for each task. We evaluate Orca 2 using a comprehensive set of 15 diverse benchmarks (corresponding to approximately 100 tasks and over 36K unique prompts). Orca 2 significantly surpasses models of similar size and attains performance levels similar or better to those of models 5-10x larger, as assessed on complex tasks that test advanced reasoning abilities in zero-shot settings. We make Orca 2 weights publicly available at aka.ms/orca-lm to support research on the development, evaluation, and alignment of smaller LMs.


A GitHub repository of various resources for decentralized LLMs.

Saturday, November 25, 2023

On possible cross-fertilization between AI and neuroscience [Creativity]

MIT Center for Minds, Brains, and Machines (CBMM), a panel discussion: CBMM10 - A Symposium on Intelligence: Brains, Minds, and Machines.

On which critical problems should Neuroscience, Cognitive Science, and Computer Science focus now? Do we need to understand fundamental principles of learning -- in the sense of theoretical understanding like in physics -- and apply this understanding to real natural and artificial systems? Similar questions concern neuroscience and human intelligence from the society, industry and science point of view.

Panel Chair: T. Poggio
Panelists: D. Hassabis, G. Hinton, P. Perona, D. Siegel, I. Sutskever

Quick Comments

1.) I’m a bit annoyed that Hassabis is giving neuroscience credit for the idea of episodic memory. As far as I know, the term was coined by a cognitive psychologist named Endel Tulving in the early 1970s, who stood it in opposition to semantic memory. That distinction was all over the place in the cognitive sciences in the 1970s and it’s second nature to me. When ChatGPT places a number of events in order to make a story, that’s episodic memory.

2.) Rather than theory, I like to think of what I call speculative engineering. I coined the phrase in the preface to my book about music (Beethoven’s Anvil), where I said:

Engineering is about design and construction: How does the nervous system design and construct music? It is speculative because it must be. The purpose of speculation is to clarify thought. If the speculation itself is clear and well-founded, it will achieve its end even when it is wrong, and many of my speculations must surely be wrong. If I then ask you to consider them, not knowing how to separate the prescient speculations from the mistaken ones, it is because I am confident that we have the means to sort these matters out empirically. My aim is to produce ideas interesting, significant, and clear enough to justify the hard work of investigation, both through empirical studies and through computer simulation.

3.) On Chomsky (Hinton & Hassabis): Yes, Chomsky is fundamentally wrong about language. Language is primarily a tool for conveying meaning from one person to another and only derivatively a tool for thinking. And he’s wrong that LLMs can learn any language and therefore they are useless for the scientific study of language. Another problem with Chomsky’s thinking is that he has no interest in process, which is in the realm of performance, not competence.

Let us assume for the sake of argument that the introduction of a single token into the output stream requires one primitive operation of the virtual system being emulated by an LLM. By that I mean that there is no logical operation within the process, no AND or OR, no shift of control; all that’s happening is one gigantic calculation involving all the parameters in the system. That means that the number of primitive operations required to produce a given output is equal to the number of tokens in that output. I suggest that that places severe constraints on the organization of the LLM’s associative memory.

Contrast that with what happens in a classical symbolic system. Let us posit that each time a word (not quite the same as a token in an LLM, but the difference is of no consequence) is emitted, that itself requires a single primitive operation in the classical system. Beyond that, however, a classical system has to execute numerous symbolic operations in order to arrive at each word. Regardless of just how those operations resolve into primitive symbolic operations, the number has to be larger, perhaps considerably larger, than the number of primitive operations an LLM requires. I suggest that this process places fewer constraints on the organization of a symbolic memory system.
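The operation-count contrast above can be made concrete with a toy sketch. Everything here is hypothetical back-of-envelope accounting, not a model of any real system: the LLM side assumes exactly one primitive operation (one giant forward pass) per emitted token, while the symbolic side assumes some number of internal symbolic steps before each word is emitted.

```python
# Toy accounting for the two architectures discussed above.
# These functions and the ops_per_word figure are illustrative assumptions,
# not measurements of any actual system.

def llm_primitive_ops(num_tokens: int) -> int:
    """An LLM, as described above, performs one gigantic calculation per
    emitted token, with no internal AND/OR or shifts of control."""
    return num_tokens  # one primitive operation per token


def symbolic_primitive_ops(num_words: int, ops_per_word: int = 12) -> int:
    """A classical symbolic system executes numerous symbolic operations
    (parsing, selection, ordering, inflection, ...) before each word,
    plus the emission of the word itself. ops_per_word is a made-up
    stand-in for that overhead."""
    return num_words * (ops_per_word + 1)


sentence_length = 20  # tokens/words in the output
print(llm_primitive_ops(sentence_length))       # 20
print(symbolic_primitive_ops(sentence_length))  # 260
```

Whatever the true value of the per-word overhead, the point stands: the LLM's count is pinned to the output length, which is what puts the heavy constraint on how its associative memory must be organized.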

At this point I’ve reached 45:11 in the video, but I have to stop and think. Perhaps I’ll offer some more comments later.

LATER: Creativity

4.) Near the end (01:20:00 or so) the question of creativity comes up. Hassabis says AIs aren't there yet. Hinton brings up analogy, pointing out that, with all the vast knowledge LLMs have ingested, they've got opportunities for coming up with analogy after analogy after analogy. I've got experience with ChatGPT that's directly relevant to those issues, analogy and creativity.

One of the first things I did once I started playing with ChatGPT was have it undertake a Girardian interpretation of Steven Spielberg's Jaws. To do that it has to determine whether or not there is an analogy between events in the film and the phenomena that Girard theorizes about. It did that fairly well. So I wrote that up and published it in 3 Quarks Daily, Conversing with ChatGPT about Jaws, Mimetic Desire, and Sacrifice. Near the end I remarked:

I was impressed with ChatGPT’s capabilities. Interacting with it was fun, so much fun that at times I was giggling and laughing out loud. But whether or not this is a harbinger of the much-touted Artificial General Intelligence (AGI), much less a warning of impending doom at the hands of an All-Knowing, All-Powerful Superintelligence – are you kidding? Nothing like that, nothing at all. A useful assistant for a variety of tasks, I can see that, and relatively soon. Maybe even a bit more than an assistant. But that’s as far as I can see.

We can compare what ChatGPT did in response to my prompting with what I did unprompted, freely and of my own volition. There’s nothing in its replies that approaches my article, Shark City Sacrifice, nor the various blog posts I wrote about the film. That’s important. I was neither expecting, much less hoping, that ChatGPT would act like a full-on AGI. No, I have something else in mind.

What’s got my attention is what I had to do to write the article. In the first place I had to watch the film and make sense of it. As I’ve already indicated, we have no artificial system with the required capabilities, visual, auditory, and cognitive. I watched the film several times in order to be sure of the details. I also consulted scripts I found on the internet. I also watched Jaws 2 more than once. Why did I do that? There’s curiosity and general principle. But there’s also the fact that the Wikipedia article for Jaws asserted that none of the three sequels were as good as the original. I had to watch the others to see for myself – though I was unable to finish watching either of the last two.

At this point I was on the prowl, though I hadn’t yet decided to write anything.

I now asked myself why the original was so much better than the first sequel, which was at least watchable. I came up with two things: 1) the original film was well-organized and tight while the sequel sprawled, and 2) Quint, there was no character in the sequel comparable to Quint.

Why did Quint die? Oh, I know what happened in the film; that’s not what I was asking. The question was an aesthetic one. As long as the shark was killed the town would be saved. That necessity did not entail Quint’s death, nor anyone else’s. If Quint hadn’t died, how would the ending have felt? What if it had been Brody or Hooper?

It was while thinking about such questions that it hit me: sacrifice! Girard! How is it that Girard’s ideas came to me? I wasn’t looking for them, not in any direct sense. I was just asking counter-factual questions about the film.

Whatever.

Once Girard was on my mind I smelled blood, that is, the possibility of writing an interesting article. I started reading, making notes, and corresponding with my friend, David Porush, who knows Girard’s thinking much better than I do. Can I make a nice tight article? That’s what I was trying to figure out. I was only after I’d made some preliminary posts, drafted some text, and run it by David that I decided to go for it. The article turned out well enough that I decided to publish it. And so I did.

It’s one thing to figure out whether or not such and such a text/film exhibits such and such pattern when you are given the text and the pattern. That’s what ChatGPT did. Since I had already made the connection between Girard and Jaws it didn’t have to do that. I was just prompting ChatGPT to verify the connection, which it did (albeit in a weak way). That’s the kind of task we set for high school students and lower division college students. […]

I don’t really think that ChatGPT is operating at a high school level in this context. Nor do I NOT think that. I don’t know quite what to think. And I’m happy with that.

The deeper point is that there is a world of difference between what ChatGPT was doing when I piloted it into Jaws and Girard and what I eventually did when I watched Jaws and decided to look around to see what I could see. How is it that, in that process, Girard came to me? I wasn’t looking for Girard. I wasn’t looking for anything in particular. How do we teach a computer to look around for nothing in particular and come up with something interesting?

These observations are informal and are only about a single example. Given those limitations it's difficult to imagine a generalization. But I didn't hear anything from those experts that was comparably rich.

Hinton gave an example of an analogy that he posed to GPT-4 (01:18:30): “What has a compost heap got in common with an atom bomb?” It got the answer he was looking for, chain reaction, albeit at different energy levels and different rates. That's interesting. Why wasn't the panel ready with 20 such examples among them? Perhaps more to the point, doesn't Hinton see that it is one thing for GPT-4 to explain an analogy he presents to it, but that coming up with the analogy in the first place is a different kind of mental process?

Do they not have more such examples from their own work? Don't they think about their own work process, all the starts and stops, the wandering around, the dead ends and false starts, the open-ended exploration, that came before final success? And even then, no success is final, but only provisional pending further investigation. Can they not see the difference between what they do and what their machines do? Do they think all the need for exploration will just vanish in the face of machine superintelligence? Do they really believe that the universe is that small?

STILL LATER: Hinton and Hassabis on analogies

Hinton continues with analogies and Hassabis weighs in:

1:18:28 – GEOFFREY HINTON: We know that being able to see analogies, especially remote analogies, is a very important aspect of intelligence. So I asked GPT-4, what has a compost heap got in common with an atom bomb? And GPT-4 nailed it, most people just say nothing.

DEMIS HASSABIS: What did it say ...

GEOFFREY HINTON: It started off by saying they're very different energy scales, so on the face of it, they look to be very different. But then it got into chain reactions and how the rate at which they're generating energy increases-- their energy increases the rate at which they generate energy. So it got the idea of a chain reaction. And the thing is, it knows about 10,000 times as much as a person, so it's going to be able to see all sorts of analogies that we can't see.

DEMIS HASSABIS: Yeah. So my feeling is on this, and starting with things like AlphaGo and obviously today's systems like Bard and GPT, they're clearly creative in ...

1:20:18 – New pieces of music, new pieces of poetry, and spotting analogies between things you couldn't spot as a human. And I think these systems can definitely do that. But then there's the third level which I call like invention or out-of-the-box thinking, and that would be the equivalent of AlphaGo inventing Go.

Well, yeah, sure, GPT-4 has all this stuff in its model, way more topics than any one human. But where’s GPT-4 going to “stand” so it can “look over” all that stuff and spot the analogies? That requires some kind of procedure. What is it?

For example, it might partition all that knowledge into discrete bits and then set up a 2D matrix with a column and a row for each discrete chunk of knowledge. Then it can move systematically through the matrix, checking each cell to see whether or not the pair in that cell is a useful analogy. What kind of tests does it apply to make that determination? I can imagine there might be a test or tests that allow a quick and dirty rejection for many candidates. But for those that remain, what can you do but see whether any useful knowledge follows from trying out the analogy? How long will that determination take? And so forth.
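The brute-force procedure just described can be sketched in a few lines. This is a deliberately silly toy, not anything a real system does: the "chunks of knowledge," their feature tags, and both tests are hypothetical stand-ins, and the point is only to make the shape of the search (and its cost) visible.

```python
from itertools import combinations

# Each chunk of knowledge is tagged with a few features. In a real system
# there would be millions of chunks; these four are invented for illustration.
chunks = {
    "compost heap": {"exponential-growth", "biology"},
    "atom bomb": {"exponential-growth", "physics"},
    "grocery list": {"everyday"},
    "epidemic": {"exponential-growth", "biology"},
}

def quick_reject(a: str, b: str) -> bool:
    """Cheap quick-and-dirty filter: reject a pair outright if the two
    chunks share no features at all."""
    return not (chunks[a] & chunks[b])

def costly_check(a: str, b: str) -> bool:
    """Placeholder for the expensive step: actually trying the analogy
    out to see whether useful knowledge follows. Here it just asks
    whether the shared feature is the interesting one."""
    return "exponential-growth" in (chunks[a] & chunks[b])

# Every cell of the (upper triangle of the) 2D matrix: n*(n-1)/2 pairs.
all_pairs = list(combinations(chunks, 2))
candidates = [p for p in all_pairs if not quick_reject(*p)]
analogies = [p for p in candidates if costly_check(*p)]
print(analogies)
```

Even in this toy, four chunks already yield six cells to examine; n chunks yield n(n-1)/2, which is exactly why the exhaustive-matrix picture looks absurd at the scale of everything GPT-4 "knows."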

That’s absurd on the face of it. What else is there? I just explained what I went through to come up with an analogy between Jaws and Girard. But that’s just my behavior, not the mental process that’s behind the behavior. I have no trouble imagining that, in principle, having these machines will help speed up the process, but in the end I think we’re going to end up with a community of human investigators communicating with one another while they make sense of the world. The idea, which, judging from remarks he’s made elsewhere, Hinton seems to hold, that one of these days we’ll have a machine that takes humans out of the process altogether, that’s an idle fantasy.

We’ll have a reasonable model of how LLMs function before we reach AGI*

I understand that the concept of AGI is vague, but a lot of smart people have a lot invested in it and in making predictions about when we’ll construct one. That’s the reason I’m using it. And I’m saying that we’ll understand how LLMs work before we create an AGI. Moreover, since it’s not clear to me that we’ll ever create an AGI – we may just lose interest – I also believe that we will arrive at a reasonable understanding of how LLMs work.

On that score, here’s something that I’ve posted at LessWrong:

In the days of classical symbolic AI, researchers would use a programming language, often some variety of LISP, but not always, to implement a model of some set of linguistic structures and processes, such as those involved in story understanding and generation, or question answering. I see a similar division of conceptual labor in figuring out what’s going on inside LLMs. In this analogy I see mechanistic understanding as producing the equivalent of the programming languages of classical AI. These are the structures and mechanisms of the virtual machine that operates the domain model, where the domain is language in the broadest sense. I’ve been working on figuring out a domain model and I’ve had unexpected progress in the last month. I’m beginning to see how such models can be constructed. Call these domain models meta-models for LLMs.

It’s those meta-models that I’m thinking are five years out. What would the scope of such a meta-model be? I don’t know. But I’m not thinking in terms of one meta-model that accounts for everything a given LLM can do. I’m thinking of more limited meta-models. I figure that various communities will begin creating models in areas that interest them.

I figure we start with some hand-crafting to work out some standards. Then we’ll go to work on automating the process of creating the model. How will that work? I don’t know. No one’s ever done it.

I know how to get that started, but it will take others with skills I don’t have (in math and programming) to make it work.

It’s going to take me awhile to get my recent insights written up in a form I can present to others, but I’m building on work I’ve been doing on ChatGPT over the last year. These are the most important papers:

I’ll offer one last remark: These meta-models will make crucial use of work done in symbolic computing back in the 1960s and 1970s.


*Perhaps AGI will seem less seductive by that time. Perhaps, just as the concept of phlogiston gave way to the concept of oxidation, the concept of AGI will give way to...something else. Something more mundane perhaps, but also more useful and real, and more interesting.

Thursday, November 23, 2023

Napping is good

I've believed that since I was quite young. I nap all the time. Now the NYTimes has an article in which experts advise on the proper way to nap. That's a good thing, but I do wonder what has happened that we are so far from the natural rhythms of our body and brain that we need an expert to advise us on napping. I suppose the expertise is necessary to overcome the effects of a misplaced emphasis on the moral value of work, which necessarily precludes napping during the part of the day reserved for work. Anyhow, here's the article:

Alice Callahan, You Deserve a Great Nap, NYTimes, November 23, 2023.

Here's a bit of advice:

Keep expectations low.

You may not fall asleep during your nap — or at least you may not think you have — and that’s OK, Dr. Mednick said. We are often “somewhat conscious” in the early stages of sleep, she said, but “it’s still good rest.”

She pointed to a recent study that found that drifting into the lightest stage of sleep — a sort of twilight zone where your mind wanders in a dreamlike way — for even one minute during a 20-minute rest generated more creativity and better problem-solving in young adults.

Get comfortable.

Settle into a quiet place where you’re unlikely to be interrupted, and put your phone on airplane mode, Dr. Wu said. If you’re lucky enough to have an office or access to a nap room, consider keeping a pillow, eye mask and earplugs at work, said Jessica Payne, a professor of psychology at the University of Notre Dame.

Then, try to tune into your five senses to “get out of your head and into your body,” Dr. Wu said, and let your breathing slow and deepen. “That allows the sleep to come to you.”

There's more advice, all of which makes sense.

Napping is yet another aspect of controlling behavioral mode, which is one of the most important things we need to master.

A world without (what kind of?) work

Rachel Fraser, Can We Imagine a World Without Work? Boston Review, Nov. 21, 2023.

Reproductive labor:

Cleaning, like cooking, childbearing, and breastfeeding, is a paradigm case of reproductive labor. Reproductive labor is a special form of work. It doesn’t itself produce commodities (coffee pots, silicon chips); rather, it’s the form of work that creates and maintains labor power itself, and hence makes the production of commodities possible in the first place. Reproductive labor is low-prestige and (typically) either poorly paid or entirely unwaged. It’s also obstinately feminized: both within the social imaginary and in actual fact, most reproductive labor is done by women. It is perhaps unsurprising, then, that political discussions of work often treat reproductive labor as an afterthought.

One place this elision shows up is in the “post-work” tradition. For the post-work tradition—whose influence on the Anglo-American left has been growing for the last decade—the aim of radical politics should not (just) be for higher wages, more secure employment, or more generous parental leave. Rather, radical politics should aim for a world in which work’s social role is utterly transformed and highly attenuated—a world in which work can no longer serve as either a disciplining institution or the fulcrum for our social identities.

Two new publications bookend the tradition. Paul Lafargue’s 1880 essay, “The Right to Be Lazy”—a touchstone for post-work theorists—was recently reprinted in a new translation by Alex Andriesse. (A Cuban-born revolutionary socialist, Lafargue married one of Karl Marx’s daughters, Laura, in 1868.) Helen Hester and Nick Srnicek offer a more contemporary contribution. In After Work: A History of the Home and the Fight for Free Time, they blend post-work conviction with feminist scruples. A post-work politics must, they argue, have something to say about reproductive labor. The post-work tradition grapples with the grandest themes in politics—the interplay between freedom and necessity. But within its lofty imaginaries, there must also be space for a dishcloth, and a changing table.

Critiques of capitalism:

Critiques of capitalism tend to come in one of three flavors. Distributive critiques locate the badness of capitalism in its tendency toward an unjust distribution of goods. Others identify the wrong of exploitation as capitalism’s core moral flaw. Hester and Srnicek work within a third critical paradigm, whose key moral grammar is that of alienation. Under this rubric, the true badness of work under capitalism—traditional wage labor and unpaid reproductive labor alike—lies in its distortion of our practical natures. When we fashion the world in accordance with our freely chosen ends, we realize ourselves within it. We exercise a key human capacity: the capacity to make ourselves objective. But under capitalism, we are not free to choose and pursue our own ends; we are forced into projects that we value only instrumentally. We mop floors, deliver packages, or babysit not because we think these activities have value in and of themselves, but because we need the money. We act on the world, yes, but we cannot properly express ourselves within it.

Hester and Srnicek don’t actually talk in terms of alienation; their critical registers are those of “temporal sovereignty” and “free time.” But these are novel placeholders, used to freshly mint an argument for which alienation has been the customary coin.

Idleness and crisis:

Hester and Srnicek’s friendliness to effort marks one point of difference between their approach and Lafargue’s. For Lafargue, freedom is more closely tied to idleness. Hot stoves don’t feature in his post-work world. His vision of the good life centers on lazing about, smoking cigarettes, and feasting.

The differences don’t stop there. Hester and Srnicek offer a moral critique of capitalism, one that appeals to values. Despite Lafargue’s title, with its talk of a “right,” his main focus is political economy. He is best read as offering a “crisis theory” of capitalism: a form of critique that appeals not to moral damage but rather to capitalism’s structural instability. Capitalism, says the crisis theorist, is a flawed economic system not because it is (say) cruel, but because it is a self-undermining system. It destroys its own capacity to function.

The roots of crisis, for Lafargue, lie in the inevitable mismatch between the productive capacities of a capitalist society and that society’s capacity to consume what is produced. Capitalism, he thinks, requires that workers play two roles: they need to make things, but they also need to buy them. Eventually, these two roles will come into conflict. [...]

Lafargue’s innovation was not to link overproduction with crisis—hardly an original suggestion—but rather lay in his proposed solution. Where twentieth-century Keynesian reformists proposed to coordinate production and consumption by stimulating demand, Lafargue pushes in the opposite direction. We should coordinate by suppressing production; workers should simply work less.

Contrasting goals:

Lafargue is primarily focused on the pathologies of industrial capitalism and on how they might be overcome. After Work, by contrast, is more interested in providing a blueprint than a roadmap—less concerned with how we might arrive in a post-work world, that is, than with how to organize things once we get there.

Technology and freedom:

Nonetheless, Hester and Srnicek do still have a somewhat coarse view of the relationship between technology and freedom. For Hester and Srnicek, technology expands the realm of freedom. It does this by adding new options. Without a dishwasher, I have no choice but to do the dishes. But once I have a dishwasher—here they quote Martin Hägglund—“doing dishes by hand is not a necessity but a choice.”

The example is not as compelling as it might seem. I once could have traveled by horse and carriage from Oxford to London, but thanks to the internal combustion engine, the public infrastructure required for such a trip to be feasible no longer exists. The United States’ car-focused public infrastructure prevents its citizens from doing simple things, like walking to work. When it comes to social arrangements, technology both adds options and takes them away. It destroys some forms of compulsion while creating its own mandates. It need not roll back the sphere of necessity.

Hester and Srnicek might be more sanguine than most about automating some reproductive labor. But they are not sanguine about automating all of it. This technological remainder motivates a third move: efficiency.

Coercion:

But a society that relies on everyone doing their fair share of care work presumably couldn’t get by without the resources to penalize those who opt out. And if a society has the means to impose such penalties, it will be a society in which the means of one’s existence can be a stake in one’s relationships. If we really want an equitable division of care work, some people will need to be coerced into doing it.

Hester and Srnicek might concede that perfect freedom is not compatible with care for all, but at least we would be much freer in a post-work society than we are now. (Perhaps more political theorists should be Winnicottians—concerned with developing the “good-enough” society.) So long as we have sufficient time to choose and pursue our own projects, it should not matter too much that there will still be allotments of necessity: parcels of time that are not truly our own. And, perhaps, these refractory parcels could even be packaged as a feature, rather than a bug.

There's much more at the link.

H/t 3 Quarks Daily.