Nina Paley on sex and life

Nina is a wise woman. I'm bumping this to the top of the queue on that account.
Nina Paley reflects on sex and her complex experience of it in My Sex-Positive Memoirs, 4W, January 14, 2020. It's mostly about her years in San Francisco in the 1990s. Near the beginning:
A horny, childfree, sex-loving, non-monogamous (that’s another story) heterosexual young woman should have had no trouble finding sex partners, yet this was not the case. I did find a few men to have sex with—once. They would have sex once, then I’d never hear from them again. Even finding such men was difficult.

Why was I pursuing non-committal, “empty” sex anyway? Sure I loved sex, but I didn’t understand it.

I had received plenty of sex education: my mother worked for Planned Parenthood, and my childhood was filled with earnest Liberal sex-education books like “How Babies Are Made” and “What’s Happening to Me?” Throughout my teens and young adulthood, I was encouraged to talk about sex, to “communicate,” so I would be spared the repressive hang-ups of my mother’s generation. I was naturally drawn to the Sex-Positive circles of San Francisco, where we talked and talked and talked about sex. But this “sex education” - all the Liberal discourse around sex - unwittingly encouraged dissociation: we could only talk about the body as a thing that does acts. Much of our intellectualism was a defense against vulnerability and what we dreaded most: shame. We separated sex from love and relationships; we thought that was progressive and empowering.

Overall, I wish we had shut up about sex more, and mediated it less. Mechanics aside, sex is a mystery, to be experienced directly and personally. Talking about sex is as useful as talking about God. Mediating spiritual experience does nothing to enhance such experience, but it does allow manipulation of seekers, giving rise to cults.
Toward the middle:
Was I supposed to save myself for Love? I’d already been in love, several times, and my lovers imploded and left me. Men found me “too intense.” No one wanted my love, not even me. The idea of men loving me for who I actually was was long gone. No one wanted my soul, but some wanted my body, which was thin at last, and with makeup, a wig, and high heels was literally a hot commodity.

I was well aware I was supposed to be cautious, and took precautions; I only responded to solicitations specifying “no nudity” and “no sex” (both of which turned out to be laughable, and are tactics still used to this day to recruit young, vulnerable women). I was also aware that I was supposed to feel ashamed. I spent a lot of time considering shame, and rejecting it: I wasn’t harming anyone (ha!), my choices were informed, my eyes were open. Sex was nothing to be ashamed of. Objectifying my own body was nothing to be ashamed of: all the strippers, prostitutes, and porn models/directors who spoke at SFSI made that clear. It was work, it was art, it was expression. No shame in objectification: we are all objects, we live in a material world. Nothing wrong with exchange for money, either; we exchange all kinds of goods and services for money, why are bodies and sex any different?

Now-me knows sex is different, and bodies are not commodities. Then-me simply wouldn’t have believed it. The body is sacred? Nothing is sacred in this world. Was I supposed to just cloister myself, be abstinent until Mr. Right came along? There is no Mr. Right, there was no one who would understand and respect and love me the way I needed to be loved, and time was ticking away while my very temporal body was at its peak of beauty and my hormones were screaming “fuck! fuck! fuck!” [...]

A hot body is often the biggest asset many young women have. We are lucky if we have hot, conventionally attractive bodies. All my years developing my mind and talents meant nothing compared to my brief moment of hot-boddedness. Men who were never impressed by my art would fall over themselves to buy me drinks and otherwise attend to me when I went out in a wig and makeup. I actually felt sorry for these men, so helplessly conditioned they were to respond to stupid gender cues, their feeble minds taken over by mediated programming. Do I pity them still? As much as I pity anyone who surrenders personal responsibility and critical thinking in favor of unexamined social programming. Such people are pathetic—and authoritarian, dangerous enablers.

For about a year, I enabled them myself, by dressing up as the male idea of a sexy woman: drag.
Toward the end:
Life is all about making mistakes. No one has figured out how to get it right. If you keep sex sacred and private, your kids grow up repressed and ashamed (or so my elders have told me; I wouldn’t know). If you make sex casual and public, it becomes a commodity and we lose our souls. Radical feminists rightly criticize porn, but banning porn outright is repressive — and almost certain to backfire, given the willful misapplication of porn regulations thus far. (In the 1990’s, alternative comic books were regularly seized at the Canadian border because they were “mistaken” as porn. This effectively turned me against any and all porn regulation, but only later did I consider the border guards’ frequent “mistakes” may have been on purpose, to manipulate people like me into opposing regulation I would otherwise support).
My 20’s were hard. So was my childhood. So is right now. I’m not entitled to a do-over of childhood, youth, or last week. Do I regret the choices I’ve made? Yes, in the sense I wouldn’t make those same choices again. But no in the sense that all of those choices made me who I am, and I like myself. I did stupid things because I didn’t know any better, and the only way for me to learn was to do the stupid things I did. It’s not like “sex work will hurt you” was any secret. Warnings against it were plentiful but not persuasive, and besides, I’d found my way into a kind of cult. The herd I homed to was all about sex work, porn, objectification, and “non-judgement”; who was I gonna listen to, them or a bunch of repressed prudes?

Now I’m in menopause, and have hardly any libido anyway. Whether that’s due to the permanent scars of my “sex-positive” 20’s, or the natural exhaustion of my ovaries, I do not know. Many or most women slow down a lot sexually in their 50’s, yet sex is still worshiped throughout our culture. Much of our population couldn’t care less about sex, even while it permeates all media as the be-all and end-all of life. Sex in advertising, sex in novels, sex in movies, sex on television, sex, sex, sex — and most women over 50 don’t give a damn. Many women under 50 do, but we have to see sex from the male perspective all the damn time, because men still make most media. We objectify ourselves.

It is a relief to not be horny all the time any more. It’s also unnerving, because in this society we’re supposed to be horny. Except when I was horny, men didn’t like that either. Women are either out-of-control nymphomaniacs, or dried-up prudes.

Or maybe, just maybe, women’s sexuality doesn’t exist to please men.

I just wish it had pleased me.

A note about delayed orgasms in men [mind and body]

David Lodge, Why Some Men Find Orgasms Elusive, NYTimes, Mar. 7, 2025.

The opening paragraphs:

Most men, as many as 70 percent, say they reach orgasm quicker than they’d like. But there is an inverse problem that can set in with age: lasting too long.

For some men, reaching climax can take 30 minutes or more. Sex that lasts this long can be uncomfortable, painful and distressing for men and their partners.

Uh, yes.

Later:

Orgasms are basically a massive reflex, kind of like a “giant sneeze,” Dr. Rubin said. Just one requiring a “perfect genital-brain connection.”

The process starts with visual, mental or physical sexual stimulation, triggering the brain to release dopamine, which increases excitement. After sufficient stimulation, the brain releases a surge of pleasure-inducing chemicals, like oxytocin, creating the sensation we think of as an orgasm.

For most men, ejaculation occurs at the same time, “but they are separate processes,” said Dr. Alan Shindel, a urologist based in San Francisco. While orgasm mostly happens in the brain, ejaculation is a series of involuntary muscle contractions.

Complications affecting the genitals, pelvic floor muscles, spinal cord and brain can make orgasms premature, delayed, muted or even painful, Dr. Rubin said. Given the complexity involved, she said, “it’s actually amazing that more things don’t go wrong.”

I like the sneeze metaphor.

What got my attention is this: “While orgasm mostly happens in the brain, ejaculation is a series of involuntary muscle contractions.” Decades ago Wilhelm Reich argued that ejaculation is not necessarily orgasmic. He had trouble getting that idea accepted, among other things, and in any event his career ended in disaster. But Shindel's statement is consistent with Reich's belief.

On a different note, as it were, I have described how trumpet players sometimes experience something that feels somewhat like an orgasm, but which has nothing to do with the genitals, at least as far as I can tell. The physiological mechanisms are quite different, having to do with blood flow to the brain. Whether or not oxytocin is involved, I have no idea. But we are talking about music, and music may well induce oxytocin secretion; that's what the late Walter Freeman believed.

Lodge’s article goes on to discuss possible causes of delayed orgasm in men, including medications, neurological issues, “partner-related concerns,” by which he means the woman experiencing pain or “differences in libido, preferred types of sex acts and performance expectations,” and “psychological factors,” such as “depression, anxiety and stress.” That last category is the most interesting one from my perhaps peculiar point of view. That’s the sort of thing Reich was interested in as well.

The article ends with some short notes on treatments.

Will Trump be able to keep Musk under control?

Jess Bidgood of the NYTimes writes about the recent cabinet meeting: “From now on, he said, the secretaries would be in charge; the Musk team would only advise.” What are the chances of that? No one knows.

Ryan Mac, a colleague of mine who covers big tech, has reported on Musk for a long time. Today, I asked him if Musk had ever been content with an adviser-style role, one in which he doesn’t run the show.

Ryan’s answer was simple: No.

Musk has never liked being one voice among many, Ryan explained. Vivek Ramaswamy, who was initially going to be Musk’s partner in leading the Department of Government Efficiency, is long gone. Musk doesn’t sit on a lot of boards. And throughout his corporate history, whenever he hasn’t initially had control over a company, he’s tended to seek it.

At Tesla, where he was an early investor, he became the chief executive. Before he bought Twitter and renamed it X, he almost joined the company’s board. Then he decided to acquire the company outright, fire its board of directors and executives and become the chief executive. (He later named a new C.E.O. but retains considerable control over the company.)

Not all of Musk’s bids for control have worked. Decades ago, for example, he was forced out as the chief executive of PayPal. His effort to get control of OpenAI — a nonprofit he co-founded in 2015 — failed, as did his more recent bid to buy it.

I'm laying in a generous supply of popcorn.

Systematic bone tool production at 1.5 million years ago

Tyler Cowen just posted this abstract at Marginal Revolution:

de la Torre, I., Doyon, L., Benito-Calvo, A. et al. Systematic bone tool production at 1.5 million years ago. Nature (2025). https://doi.org/10.1038/s41586-025-08652-5

Abstract: Recent evidence indicates that the emergence of stone tool technology occurred before the appearance of the genus Homo and may potentially be traced back deep into the primate evolutionary line. Conversely, osseous technologies are apparently exclusive of later hominins from approximately 2 million years ago (Ma), whereas the earliest systematic production of bone tools is currently restricted to European Acheulean sites 400–250 thousand years ago. Here we document an assemblage of bone tools shaped by knapping found within a single stratigraphic horizon at Olduvai Gorge dated to 1.5 Ma. Large mammal limb bone fragments, mostly from hippopotamus and elephant, were shaped to produce various tools, including massive elongated implements. Before our discovery, bone artefact production in pre-Middle Stone Age African contexts was widely considered as episodic, expedient and unrepresentative of early Homo toolkits. However, our results demonstrate that at the transition between the Oldowan and the early Acheulean, East African hominins developed an original cultural innovation that entailed a transfer and adaptation of knapping skills from stone to bone. By producing technologically and morphologically standardized bone tools, early Acheulean toolmakers unravelled technological repertoires that were previously thought to have appeared routinely more than 1 million years later.

Here's a comment I posted:

Some years ago I visited Ralph Holloway's lab at Columbia. Holloway is a physical anthropologist and an expert on the evolution of the brain. The lab was, say, 15 by 20 feet, maybe a bit larger but not much larger. It had tables and shelves where you could see the specimens, skull fragments of various sizes. There may have been one or three fairly complete skulls, but not many. Anyhow, this is the kind of evidence we have for brain evolution, fragmentary and indirect.

Holloway said that that lab had, say, 10% of the world's total collection of hominin skull fragments. This was back in the early 2000s. When you do a thumbnail estimate of the population these specimens were drawn from, it becomes clear that the total world-wide collection of specimens is a very small fraction of that population. And we have no reason to think that that fraction is random. It may be, it may not. We don't know.

When that's the kind of evidence you're working from, one new discovery can wreak havoc with existing theories.

Friday, March 7, 2025

Claude and I discuss the idea of progress in the arts, from Hazlitt to Bloom on Shakespeare, and concluding with Arthur Danto and Ernst Gombrich

This is from an ongoing discussion I have been having with Claude. For this discussion I had uploaded the paper that David Hays and I wrote on cognitive evolution, which is our basic paper on cultural ranks, and Max Tabarrok’s post, Romae Industriae, which poses the question of why Rome didn’t have an industrial revolution. I have posted earlier segments of this conversation: 1) Why didn’t Rome have an industrial revolution? and 2) a segment in which I argued that the printing press did not play the role in cultural history that is usually assigned to it.

The general idea of cultural ranks came out of my undergraduate years at Johns Hopkins. I took a course on the Scientific Revolution, where we read Thomas Kuhn’s famous book, The Structure of Scientific Revolutions. In another course I learned that the novel emerged as a literary form in roughly the same time frame. In yet another course I read Centuries of Childhood, by Philippe Aries. He argued that the conception of childhood that is familiar to us, a distinct phase in human development, was not a “natural” concept, but rather emerged, yes, again within roughly the same time period. In a course on the theatre, taught by Dick Macksey, I read Nietzsche’s The Birth of Tragedy, which concocted a story about the historical emergence of tragedy in ancient Greece that made it seem as though the Athenians had “swallowed” an older society and put it on the stage. Ritual celebrants in the older social formation became the principal characters in tragedies while the general populace of that formation became the chorus.

During my sophomore year I’d taken a course in developmental psychology where I was introduced to the work of Jean Piaget. Piaget theorized that children’s mental development progressed through a series of more or less distinct stages, where each stage is characterized by specific cognitive capabilities, with later stages building on earlier ones. He also applied this concept to the history of ideas, e.g. in Genetic Epistemology, which I read while working on an MA thesis. That’s also when I read Walter Wiora’s The Four Ages of Music. It was a slim volume in which Wiora argued that the first age emphasized rhythm, the second added an intense development of melody, then came harmony (emerging in Europe) and the contemporary phase, which involved exploration of new forms and methods. I read that and posed a question: Isn’t everything like that, four stages?

I took that question with me to SUNY Buffalo, where I enrolled for a PhD in the English Department, but also spent a great deal of time discussing computational linguistics, cognitive science, and cultural evolution with David Hays in Linguistics. That’s where we developed the idea of cultural ranks. I included an account of narrative development in my 1978 dissertation, “Cognitive Science and Literary Theory.”

I note, then, that while the scientific revolution certainly played an important role in my thinking about cultural evolution, the arts played perhaps an even more important role in developing the idea. However, the ranks idea only accounts for the overall shape of development. What about development within a given rank? I have some preliminary notes about that in a recent post, Confabulation, Dylan’s epistemic stance, and progress in the arts: “I’ll let you be in my dreams if I can be in yours.”

The rest of this post consists of a conversation I had with Claude 3.7 on March 3, 2025. As always, my comments are set in bold type while Claude’s are plain.

* * * * *

Hazlitt on progress in the arts

While it is common to say that there is progress in the sciences and technology, it’s less common to think of progress in the arts. William Hazlitt published a well-known essay in 1814, Why The Arts Are Not Progressive, and that’s remained the standard view. He observed:

“What is mechanical, reducible to rule, or capable of demonstration, is progressive, and admits of gradual improvement: what is not mechanical or definite, but depends on genius, taste, and feeling, very soon becomes stationary or retrograde, and loses more than it gains by transfusion. [...] The greatest poets, the ablest orators, the best painters, and the finest sculptors that the world ever saw, appeared soon after the birth of these arts, and lived in a state of society which was, in other respects, comparatively barbarous. Those arts, which depend on individual genius and incommunicable power, have always leaped at once from infancy to manhood, from the first rude dawn of invention to their meridian height and dazzling lustre, and have in general declined ever after.”

I see you're interested in exploring the contrast between progress in sciences/technology versus the arts. This is a fascinating philosophical question that bridges cultural evolution, aesthetics, and the nature of human creativity.

Hazlitt's essay presents the classic view that the arts, unlike sciences, don't experience linear progress. His central argument distinguishes between domains that are "mechanical, reducible to rule" (sciences) versus those dependent on "genius, taste, and feeling" (arts). He suggests artistic achievements often peak early in civilizations and then stagnate or decline.

The two documents you've shared offer interesting frameworks to consider this question.

From "Cognitive Evolution," Benzon and Hays propose a four-rank model of cognitive development across cultures:

1. Rank 1: Abstraction through metaphor (preliterate societies)
2. Rank 2: Rationalization through metalingual definition (literate societies)
3. Rank 3: Theory through algorithm (Renaissance onward)
4. Rank 4: Model building through control (20th century)

They argue these cognitive advances shape all cultural domains, not just science. While they don't specifically refute Hazlitt's view on art, their model suggests that artistic expression would also evolve with new cognitive tools.

The "Romae Industriae" document examines why Rome didn't achieve an industrial revolution despite its technological sophistication. The author concludes cultural factors, particularly the lack of a printing press and the disdain for mechanical arts in slave-based societies, were key constraints rather than just technological limitations.

What's interesting is how these perspectives might inform a response to Hazlitt. Could we argue that while individual artistic genius may appear at any time, the evolution of artistic forms and movements does show a pattern of development tied to cognitive and cultural evolution? For example, the Renaissance's perspective techniques in painting required Rank 3 thinking about algorithmic representation of space.

Would you like to explore this tension between Hazlitt's view and the cognitive evolution framework more deeply, or focus on another aspect of these materials? 

Bloom, Shakespeare, the novel

Ah, yes, this ranks business. Benzon has published on both the evolution of narrative and the evolution of music using ranks theory and Hays has published on expressive culture in general. Perhaps one day I’ll upload those papers for you to examine. I note, though, that they don’t address the issue of progressive change within individual ranks. For example, I suspect that there is expressive change in the novel, from the 18th century up through the end of the 19th century, but that’s entirely within Rank 3 culture, and I’d think the progress continues in the 20th century. [...] And then we have Harold Bloom’s assertion that Shakespeare, in some sense, “invented” the human, by which I think he means that Shakespeare was the first writer to give voice to certain aspects of human behavior and mental life. But I don’t think Bloom ever systematically developed that idea, not even in his book on the Western canon or in his Shakespeare book.

[...] You raise an excellent point about progressive change within individual ranks that wasn't fully addressed in the paper. The progression of the novel within Rank 3 culture is a perfect example of this phenomenon. From Defoe and Richardson through Austen, the Brontës, Dickens, Eliot, and on to James and Conrad, we see remarkable development in narrative technique, psychological depth, and formal complexity—all while remaining within the algorithmic thinking characteristic of Rank 3.

This intra-rank progression continues into the 20th century with modernist innovations by Joyce, Woolf, and Faulkner, who pushed the boundaries of what the novel could express while still operating within Rank 3 cognitive structures. Then perhaps writers like Pynchon and DeLillo begin to engage with Rank 4 model-building approaches to fiction.

Regarding Bloom's assertion about Shakespeare "inventing the human," this does seem to suggest a kind of progressive leap in artistic representation. Bloom never fully systematized this idea, but it appears to indicate that Shakespeare's achievement wasn't just artistic excellence but a cognitive breakthrough in representing human interiority. In ranks terminology, one might say Shakespeare was pushing against the upper limits of Rank 2 representation toward something like early Rank 3 thinking about human psychology, even before the scientific revolution had fully taken hold.

Thursday, March 6, 2025

A note on assembly theory from Carl Zimmer [via Tyler Cowen]

Conversations with Tyler, Ep. 235: Carl Zimmer on the Hidden Life in the Air We Breathe

COWEN: Is Lee Cronin right or insane?

[laughter]

ZIMMER: Lee Cronin is a chemist in Scotland at University of Glasgow. He has this idea that you can explain life with a theory that he and others call assembly theory, which is about, basically, how many steps does it take for something to get produced?

The things in our bodies, the molecules that make us up — some of them are very small and simple, but some of them are exquisitely big and complex. Lee and others argue that life is what is able to assemble things beyond a certain threshold. This might be a way to actually identify life on a planet, even if you don’t know what life is made of. We can’t assume that life is just made of DNA; that’s an unreasonable assumption.

Life on Earth already blows our minds in many ways — at least mine. Life on other worlds — maybe that bet is right, and there’s life on Enceladus or some other icy moon. It might be really, really, really strange, but maybe we can recognize it by this assembly index.

Not only could this assembly theory be a way to recognize life, but it might be actually a way, Lee Cronin thinks, to make life. In other words, it guides you in basically creating a set of chemical reactions where you’re creating these . . . right now, he’s got these robots that are basically making droplets with different chemicals in them in these vast numbers of combinations. He’s wondering if they will eventually start to take on some of the hallmarks of life.

In other words, yes, he is trying to make life. He’s actively trying to make life right now. A lot of people think he’s crazy. A lot of people think he’s quite brilliant. Some people think he’s both. [laughs]

COWEN: I like him. I don’t know if he’s right. He’s a lot of fun to talk to.

ZIMMER: Absolutely, yes. It’s been really interesting watching assembly theory come to the fore recently. Some scientists really take badly to it in a very hostile way, but this is often the case. It feels like sometimes people are just talking past each other and they’re not really speaking the same language. Because assembly theory is new and it’s very interdisciplinary, I think it’s going to take a while for the scientific community to really engage with it and decide whether it holds up or not.

As I argue in Life’s Edge, life is a property of matter. Scientists are trying to explain it, and some of them are trying to explain with a theory. Superconductivity is a property of matter, and there were a bunch of theories that were put forward about it, including by Einstein, and they were wrong. It wasn’t until, eventually, some people came up with the right theory that really clicked in and had a powerful explanatory power. We’re not there yet with life. Maybe Lee Cronin is going to be like Einstein and he’s wrong, or maybe he will be one of the people who is right.

It would seem that consciousness is a property of matter as well, hence panpsychism.

Understanding transformers and communication among sperm whales

Mar 5, 2025
Allen School Colloquium Series
Title: Discovering & Engineering the Computation Underlying Large Intelligent Agents
Speaker: Pratyusha Sharma (MIT)
Date: March 3, 2025

Abstract: The richness of language and intelligent behavior has often been attributed to latent compositional structure. Can we build tools for discovering how deep networks learn and represent this latent structure implicitly? And more importantly, can we use this knowledge to improve generalization in largely structure-less general purpose models or refine our understanding of the world they describe? In this talk, I present three perspectives to answer these questions. I will discuss experimental methods to functionally characterize the space of learnt solutions in LLMs and demonstrate how this understanding can be used to improve their empirical generalization in a gradient free manner, sometimes by as much as 30% points on language understanding benchmarks. Following that, I show how to decipher the structure of another (black box) language-like system, the naturally arising communication system of sperm whales in the wild, discovering for the first time a unique combinatorial communication system. Finally, I apply insights from these results to equip embodied agents with a latent language of thought–-hierarchical and compositional — and show how it can enable long-horizon reasoning and planning in these systems.

Bio: Pratyusha Sharma is a Ph.D. student in the Computer Science and Artificial Intelligence Lab at MIT, advised by Antonio Torralba and Jacob Andreas. She studies the interplay between language, sequential decision making and intelligence in natural and AI systems. Before this, she received her Bachelor’s degree from the Indian Institute of Technology, Delhi. Her research is published in interdisciplinary journals like Nature Communications, etc. and in academic conferences across machine learning, natural language processing, robotics, and marine biology. Her research has also been featured in articles in the New York Times, National Geographic Magazine, BBC, etc. She was recently a speaker at TED AI and was selected as a Rising Star in EECS, Data Science, and GenAI.

Her webpage, with links to her papers.

Wednesday, March 5, 2025

Digital Doppelgangers

Vijayalaxmi Methuku, Praveen Kumar Myakala, Digital Doppelgangers: Ethical and Societal Implications of Pre-Mortem AI Clones, arXiv:2502.21248v1 [cs.CY], February 28, 2025.

Abstract: The rapid advancement of generative AI has enabled the creation of pre-mortem digital twins, AI-driven replicas that mimic the behavior, personality, and knowledge of living individuals. These digital doppelgangers serve various functions, including enhancing productivity, enabling creative collaboration, and preserving personal legacies. However, their development raises critical ethical, legal, and societal concerns. Issues such as identity fragmentation, psychological effects on individuals and their social circles, and the risks of unauthorized cloning and data exploitation demand careful examination. Additionally, as these AI clones evolve into more autonomous entities, concerns about consent, ownership, and accountability become increasingly complex.

This paper differentiates pre-mortem AI clones from post-mortem generative ghosts, examining their unique ethical and legal implications. We explore key challenges, including the erosion of personal identity, the implications of AI agency, and the regulatory gaps in digital rights and privacy laws. Through a research-driven approach, we propose a framework for responsible AI governance, emphasizing identity preservation, consent mechanisms, and autonomy safeguards. By aligning technological advancements with societal values, this study contributes to the growing discourse on AI ethics and provides policy recommendations for the ethical deployment of pre-mortem AI clones.

H/t Tyler Cowen.

I'm interested in a somewhat different kind of digital doppelganger. This is from a prompt I gave to Claude 3.5 Sonnet back in December 2024:

I’m imagining that at some time in the future each human will be given an AI companion early in life, perhaps when they are old enough to walk – think of Stephenson’s The Diamond Age. This companion will interact with the human in various ways through various modalities. It will certainly have robotic capabilities. As the child grows older the AI companion will probably have to be given a different physical form. So we port the data to a different physical form and make the switch-over in a special ceremony. We’ll probably have to do this a few times until the child becomes an adult. At some point the AI will also take residence in other physical forms, all linked together. This AI will remain with the person their entire life. They won’t always be physically together, and so forth. Imagine it working however makes sense to you.

The point is that this AI will become attuned to the needs, capacities, and the interests of the human. It will be the artificial complement, image, soul, or doppelgänger, of the human. What happens to this complement when the human dies? Will it continue on in the world? Or will it slowly wither away and die, not so much of grief, but because it has become so closely attuned to its human counterpart that it requires interaction with that human in order to maintain its operational stability?

Computation, Text, and Form in Literary Criticism: A Conversation with Claude 3.7

I've posted a new working paper. Title above, links, abstract, contents, and introduction below.

Academia.edu: https://www.academia.edu/128029987/Computation_Text_and_Form_in_Literary_Criticism_A_Conversation_with_Claude_3_7
SSRN: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5166930
ResearchGate: https://www.researchgate.net/publication/389591524_Computation_Text_and_Form_in_Literary_Criticism_A_Conversation_with_Claude_37

Abstract: Literary criticism operates with contradictory definitions of “text,” rarely meaning simply the marks on a page. This makes it difficult to establish what “form” means. While critics do analyze features like rhyme and meter, or the distinction between story (fabula) and plot (syuzhet), criticism rarely seeks to understand how words are arranged in texts beyond these basics. While literary criticism selectively borrowed from Lévi-Strauss's structural analysis of myth (e.g. the concept of binary oppositions), it ignored a systematic methodology that was essentially computational in nature and about form. Now, Large Language Models present a watershed moment for literary studies: they're unavoidable and demonstrate sophisticated capabilities. A cohort of younger scholars using corpus linguistics and computational methods may represent a bridge between computational and literary approaches. Will these scholars extend computational thinking from method to theory, using computation not just as an analytical tool but as a framework for understanding how literary texts function? That’s a key issue currently before the discipline.

Contents

How Do You Explain Computation to a Literary Critic?

What am I trying to explain to myself?
Why do literary critics need to know that?
The study of literary form

My Conversation about Computation with Claude 3.7

What is a text?
Literary criticism after World War II
What is Literary Form
Lévi-Strauss on Myth
The Current Situation: LLMs
Digital Humanities

How Do You Explain Computation to a Literary Critic?

I have been interested in conceptualizing literary processes as computational ones ever since I wrote an MA thesis on “Kubla Khan” in the early 1970s. Literary process? How texts are created and how they are read and assimilated, that’s what I mean. Ever since then I have been trying to explain that to literary critics. That has not gone well. I’ve also been trying to explain it to myself, which is a different kind of problem. The purpose of this paper is to, once more, attempt to explain computation-as-literary-process both to literary critics and to myself.

In the rest of this introduction I talk about explaining computation, first to myself, and then to literary critics. The rest of this paper consists of a discussion I had with Claude 3.7 about computation and literary criticism. I begin by discussing the text, then discuss the post WWII focus on interpretation as the central activity of literary criticism. Then Claude and I take up form, the influence of Lévi-Strauss, and the current emergence of large language models (LLMs). We conclude with some remarks about digital humanities.

What am I trying to explain to myself?

I know what kind of questions follow from thinking about literary processes as computational. I’ve done practical criticism on that foundation for decades, some quite rigorous, some rather informal, but all based on the belief that literary processes are computational processes. What is it that I’m trying to understand?

Let me explain: My basic conception derives from the use of parentheses in arithmetic or algebraic expressions. Without parentheses or a specific convention concerning operator precedence, expressions like this next one are ambiguous: 5 × 6 – 3. We can resolve the ambiguity by adding parentheses, giving us either of these two expressions:

(5 × 6) – 3 = 27
5 × (6 – 3) = 15

Adding parentheses places boundaries in the expression that affect how we evaluate it.
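
Here's a minimal sketch of that point in Python (my own illustration, not part of the argument above): the same string of symbols yields different values depending on where the boundaries fall.

```python
# The same symbol string, "5 * 6 - 3", evaluated under the two groupings.
grouped_left = (5 * 6) - 3   # do the multiplication first
grouped_right = 5 * (6 - 3)  # do the subtraction first

print(grouped_left)   # 27
print(grouped_right)  # 15
```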

The same applies to texts, any texts, not just literary texts. Texts are strings of alphanumeric characters. Small sets of characters are grouped together into words and words are organized into longer strings by using spaces, punctuation marks, and capital letters. Boundaries divide long strings into substrings to whatever depth is necessary. Linguistics, psycholinguistics, and computational linguistics all proceed on that basis.
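
As a toy illustration of the same idea applied to language (my own sketch, with an invented example sentence and an arbitrary grouping), nested boundaries add structure to the string without changing the string itself:

```python
# One possible grouping of a short sentence, written as nested lists:
# words are grouped into phrases, and phrases into a sentence.
parse = [["the", ["old", "man"]], ["saw", ["the", "boat"]]]

def flatten(node):
    """Walk the nested grouping and recover the original word string."""
    if isinstance(node, str):
        return [node]
    words = []
    for child in node:
        words.extend(flatten(child))
    return words

print(" ".join(flatten(parse)))  # the old man saw the boat
```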

As far as I’m concerned “computation” is the best name for that process, whether we’re talking about language or arithmetic or mathematics in general. In this context arithmetic is, in effect, a very constrained and specialized kind of language.

Since I’ve known that for years, what is it that I’ve only just now realized? It’s not a matter of new information or ideas, it’s a matter of emphasis, of framing. What I’ve realized is simple: That’s all there is. Let me repeat that: Computation is all there is. That is to say, if we are going to treat literary texts as physical objects, as strings of symbols realized in some physical medium, such as vibrations in air or marks on a surface, if that’s what we’re going to do, then computation is the only physical process we know of that gives us ways of thinking about how strings can support the communication of meaning between individuals. That’s it.

Commensurability is the issue. Computation is a process that is commensurate with the physical text. Why? Because computation, real computation, is itself a physical process. That’s obvious when you consider computation with an abacus, for example. The same is true for a mechanical calculator or a slide rule. The physical nature of computation with a digital computer is less visible, much less, and hardly palpable. But it is no less real. Bits and bytes may appear to be abstract things, but they take up physical space in computer memories; that’s why the capacity of computer memories is measured in bytes, kilobytes, megabytes, gigabytes, etc. Computation takes place in time, and time is physical as well, though one can’t see it, touch it, or taste it, much less lift it. Computers and their processes are physical things, as are human brains. As such they are commensurate with physical texts.

That’s what literary critics need to know. Alas, they are incapable of hearing it, of taking it seriously. Why? Computers are machines, and, as such, are suspect at best and thus to be ignored; at worst they are evil, and actively to be opposed. This reflects the historical distinction between the sciences and humanities, a distinction that has been written into the institutional structure of the academic world and so has come to seem inscribed in the foundations of the universe.

Like it or not, that universe is all but finished. We are now living in a world where anyone with an internet connection can engage in sophisticated linguistic activity with a machine, a chatbot. That changes everything. The physical text can no longer be effaced in favor of some vague abstract object conjured into (virtual) existence by the act of interpretation.

Why do literary critics need to know that?

They need to know it because it allows for a much more robust and substantial analysis of literary form than is currently practiced. Literary texts are linguistic strings. The form of strings is a matter of how elements are arranged on the string. That’s obvious in the case of, say, colored beads on a string. And that’s how musicians and musicologists treat musical form, as the arrangement of sonic objects in a line, where the line tracks the passage of time.

Literary critics do this as well. When critics work out the rhyme scheme of a poem, they’re treating it as a physical object. When narratologists distinguish between story (fabula) and plot (syuzhet), they are treating the text as a physical object, one where story episodes are ordered into a narrative string. One can mention other examples as well. While those operations are well-established, they are a minor aspect of practical criticism. Form is treated as a framework in which one can approach the problem of meaning. Its description and analysis are at best a secondary activity.
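
For what it's worth, here is a toy sketch (my own invented episodes, not drawn from any particular narrative) of what it means to treat plot as an ordering of story episodes on a string:

```python
# Episodes in chronological (story) order: the fabula.
fabula = ["childhood", "crime", "investigation", "confession"]

# A detective plot might open with the investigation and fill in the rest
# through flashbacks; the syuzhet is a reordering of the same episodes.
plot_order = [2, 1, 0, 3]
syuzhet = [fabula[i] for i in plot_order]

print(syuzhet)  # ['investigation', 'crime', 'childhood', 'confession']
```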

Interpretation is primary. And when critics start interpreting a text, they tacitly, surreptitiously, efface the physical text in favor of an abstract and poorly defined object to which they then direct their interpretive activities. To complete the mystification, that abstract and poorly defined object is called – you guessed it – the “text.”

But these chatbots – I’ve been working with OpenAI’s ChatGPT and Anthropic’s Claude – handle language in a way that is astonishingly humanlike, and they are undoubtedly machines, albeit in the strange and special way that computers are machines. They’re not made of gears, levers, axles, pulleys and motors, but their operations are mechanistic in kind, as the word “machine” has come to be understood. Yes, there are serious philosophical issues about whether or not they really understand language, whether or not they’re really dealing in meaning. I’ve thought about these issues, and written quite a bit about them, but this is not the place to entertain those discussions. Whatever these machines are doing, to a first approximation it looks like they’re doing language.

While I believe that this technology is a source of useful tools, both for literary critics and others, and will become more so in the future, that’s not why I mention them here. I mention them because they constitute an unmistakable “existence proof” that thinking about language process as computational is fruitful, both intellectually and practically. Once we can do that, we can take literary form as a serious object of critical activity.

The study of literary form

Yes, I know that form is an important concept in literary criticism and that the discipline has seen various formalist critical approaches. But as Claude and I discuss, formalism does not imply close attention to the formal properties of literary texts. Rather, formalism is a philosophical position about the nature of meaning, in which meaning in literary texts is taken to be somehow different from meaning in other texts by virtue of literary form. But form itself is given only cursory attention.

My primary object here is to mention what is perhaps the best-known essay by Susan Sontag, “Against Interpretation.” It originally appeared in 1964 in the Evergreen Review and became the title essay in her first collection, which was published in 1966. In that essay she asserted:

In some cultural contexts, interpretation is a liberating act. It is a means of revising, of transvaluing, of escaping the dead past. In other cultural contexts, it is reactionary, impertinent, cowardly, stifling.

Today is such a time, when the project of interpretation is largely reactionary, stifling. Like the fumes of the automobile and of heavy industry which befoul the urban atmosphere, the effusion of interpretations of art today poisons our sensibilities. In a culture whose already classical dilemma is the hypertrophy of the intellect at the expense of energy and sensual capability, interpretation is the revenge of the intellect upon art.

Several pages later she asserts:

What is needed, first, is more attention to form in art. If excessive stress on content provokes the arrogance of interpretation, more extended and more thorough descriptions of form would silence. What is needed is a vocabulary—a descriptive, rather than prescriptive, vocabulary—for forms.

It seems like everybody read that essay at that time, but as far as I can tell, no one took Sontag seriously. Now’s the time; to quote Charlie Parker, now’s the time.

The proof, as they say, is in the pudding. But this is not the place to serve pudding. If you want to read some examples, here are some examples from my own work that are unlike work readily available in the standard literature:

Articulate Vision: A Structuralist Reading of "Kubla Khan", Language and Style, Vol. 8: 3-29, 1985, https://www.academia.edu/8155602/Articulate_Vision_A_Structuralist_Reading_of_Kubla_Khan_. This is a revised and somewhat shortened version of my 1972 master’s thesis.

Heart of Darkness: Qualitative and Quantitative Analysis on Several Scales, Version 5, Working Paper, October 27, 2019, https://www.academia.edu/8132174/Heart_of_Darkness_Qualitative_and_Quantitative_Analysis_on_Several_Scales_Version_5. This is an eclectic collection of remarks about Conrad’s novel, but there is a formal analysis that is crucial, one that takes paragraph length as its point of departure. The longest paragraph is both structurally and thematically central.

The Gojira Papers, April 15, 2014, https://www.academia.edu/7905287/The_Gojira_Papers. This is about the 1954 Japanese film that started the Godzilla franchise.

Here is a theoretical and methodological account of some length (23,000 words, 11 diagrams):

Literary Morphology: Nine Propositions in a Naturalist Theory of Form, PsyArt: An Online Journal for the Psychological Study of the Arts, August 2006, Article 060608, https://www.academia.edu/235110/Literary_Morphology_Nine_Propositions_in_a_Naturalist_Theory_of_Form.

Empathy is not in our genes

Cecilia Heyes, Empathy is not in our genes, Neuroscience & Biobehavioral Reviews, Volume 95, 2018, Pages 499-507, ISSN 0149-7634, https://doi.org/10.1016/j.neubiorev.2018.11.001.

Highlights

  • Empathy is a cornerstone of healthcare, social justice, and international relations.
  • Empathy depends on automatic (Empathy1) and controlled (Empathy2) mechanisms.
  • The automatic mechanism, Empathy1, is constructed by associative learning.
  • Self-stimulation, synchronous emotion and affect mirroring provide inputs to learning.
  • Empathy can be enhanced by novel experience and broken by social change.

Abstract: In academic and public life empathy is seen as a fundamental force of morality – a psychological phenomenon, rooted in biology, with profound effects in law, policy, and international relations. But the roots of empathy are not as firm as we like to think. The matching mechanism that distinguishes empathy from compassion, envy, schadenfreude, and sadism is a product of learning. Here I present a dual system model that distinguishes Empathy1, an automatic process that catches the feelings of others, from Empathy2, controlled processes that interpret those feelings. Research with animals, infants, adults and robots suggests that the mechanism of Empathy1, emotional contagion, is constructed in the course of development through social interaction. Learned Matching implies that empathy is both agile and fragile. It can be enhanced and redirected by novel experience, and broken by social change.

Keywords: Affect mirroring; Affective empathy; Associative learning; Emotional contagion; Empathy; Empathic understanding; Learned Matching; Mirror neurons; Self-stimulation; Synchronous emotion

Tuesday, March 4, 2025

The Zelensky Meeting: What DID happen, anyhow?

Trump vs. Zelensky: The Morning After | Robert Wright & Andrew Day

0:00 Andrew’s new gig (and gifts)
3:15 Play-by-play of the Oval Office blowup
9:06 Zelensky’s and Trump’s obsessions
16:32 What are Russia’s red lines?
26:13 Guaranteeing Ukraine’s (and Europe’s) security
33:26 Heading to Overtime

Discussed in Overtime:

Is Trump trying to oust Zelensky?
How Trump and Zelensky can reconcile
Riding the Trump roller coaster
Can Europe keep Europe secure?
Andrew airs his anti-boomer grievances

Why the printing press didn’t drive either the Scientific or the Industrial Revolution [McLuhan was wrong • Implications for progress studies]

Soon after ChatGPT hit the web both the progress punditocracy and the high-tech evangelists were busy chattering away about how this is the greatest thing since the printing press. They’re wrong. Was the printing press important? Sure, it was very important. But it was important as a vehicle for disseminating existing ideas, not for catalyzing new ones. Printed books have the same cognitive affordances – to borrow a term from James J. Gibson – as hand-lettered books, discrete letters in a rectangular array on a flat surface. That’s it. Mechanizing the process of impressing the letters on the page does not change the cognitive affordances of the printed page.

Back when David Hays and I began thinking about publishing on cultural evolution he brought to my attention a collection of essays by Eric Havelock, The Literate Revolution in Greece and Its Cultural Consequences (Princeton 1982), in which Havelock pointed out that the adoption of decimal arithmetic was the essential step. Hays and I thought that was correct, and adopted it in the article we eventually published, The Evolution of Cognition (1990):

The role which speech plays in Rank 1 thought, and writing plays in Rank 2 thought, is taken by calculation in Rank 3 thought (cf. Havelock 1982: 341 ff.). Writing appears in Rank 1 cultures and proves to be a medium for Rank 2 thinking. Calculation in a strict sense appears in Rank 2 and proves to be a medium for Rank 3 thinking. Rank 2 thinkers developed a perspicuous notation and algorithms. It remained for Rank 3 thinkers to exploit calculational algorithms effectively. An algorithm is a procedure for computation which is explicit in the sense that all of its steps are specified and effective in the sense that the procedure will produce the correct answer. The procedures of arithmetic calculation which we teach in elementary school are algorithms.

A bit later we remark:

The amazing thing about algorithmic calculation is that it always works. If two, or three, or four, people make the calculation, they all come up with the same answer. This is not true of non-algorithmic calculation, where procedures were developed on a case-by-case basis with no statements of general principles. In this situation some arithmeticians are going to get right answers more often than others, but no one can be sure of hitting on the right answer every time.

This ad hoc intellectual style, moreover, would make it almost impossible to sense the underlying integrity of the arithmetic system, to display its workings independently of the ingenious efforts of the arithmetician. The ancients were as interested in magical properties of numbers as in separating the odd from the even (Marrou 179-181). By interposing explicit procedures between the arithmetician and his numbers, algorithmic systems contribute to the intuition of a firm subject-object distinction. The world of algorithmic calculations is the same for all arithmeticians and is therefore essentially distinct from them. It is a self-contained universe of objects (numbers) and processes (the algorithms). The stage is now set for experimental science. Science presents us with a mechanistic world and adopts the experimental test as its way of maintaining objectivity. A theory is true if its conceptual mechanism (its "algorithm") suggests observations which are subsequently confirmed by different observers. Just as the results of calculation can be checked, so can theories.
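
To make the point about explicit and effective procedures concrete, here is a minimal sketch (mine, not from the 1990 article) of the elementary-school addition algorithm. Because every step is specified, anyone who carries it out on the same numbers gets the same answer:

```python
# Column addition, spelled out digit by digit, the way it is taught in school.
def column_add(a: str, b: str) -> str:
    """Add two base-10 numerals right to left, one digit at a time,
    carrying whenever a column sum reaches ten."""
    width = max(len(a), len(b))
    a, b = a.zfill(width), b.zfill(width)
    carry, digits = 0, []
    for da, db in zip(reversed(a), reversed(b)):
        total = int(da) + int(db) + carry
        digits.append(str(total % 10))
        carry = total // 10
    if carry:
        digits.append(str(carry))
    return "".join(reversed(digits))

print(column_add("476", "385"))  # 861
```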

Given that, I take the wide-spread belief in the catalytic efficacy of the printing press as an indicator that the intellectual classes are not prepared for the revolution made possible through artificial intelligence. Just as generals are said always to fight the last war, so intellectuals are always proclaiming the last intellectual revolution. Our intellectuals are not prepared to think about thinking, about the processes of the mind, in a deep way.

* * * * *

Early in February I had a post on why Rome didn’t have an industrial revolution. The bulk of that post was a discussion I had with Claude 3.5 Sonnet. As the lack of the printing press was one of the hypotheses on offer – in an article by Maxwell Tabarrok – Claude and I had a great deal of discussion about that. Once we’d concluded our discussion of Rome I decided to push on about the printing press. I suspected that Marshall McLuhan was responsible for that mistaken idea and asked Claude about it.

Here's that conversation. As always, my remarks are set in bold face while Claude’s responses are plain.

McLuhan and the printing press view

The idea that the printing press was the main driver of intellectual change during the Renaissance, I believe that idea is widely held. Marshall McLuhan argued that in The Gutenberg Galaxy, and he was quite influential in the 1960s and 1970s and remains so today. Hays and I got clued-in to decimal arithmetic by Eric Havelock. But, as far as I know, that view has not been very influential. In discussions about the impact of AI I’ve heard many people compare it to the printing press. That seems to be what people think of in this kind of context, the printing press. Am I missing something about the range of views?

Your assessment seems accurate. The printing press as a primary driver of Renaissance intellectual change is indeed the dominant narrative in both academic and popular discourse. This can be seen even in the article you've shared - Tabarrok immediately gravitates to the printing press explanation without seriously considering alternatives.

This dominance might stem from several factors:

1. The printing press's effects are more immediately visible and concrete than those of decimal arithmetic. You can literally see books spreading knowledge.

2. The printing press story fits into a broader narrative about democratization of knowledge that appeals to modern sensibilities.

3. McLuhan's influence, as you note, helped cement this view, particularly through his compelling analysis of how media shapes cognition.

In contrast, the decimal arithmetic hypothesis requires a more complex understanding of how mathematical procedures can reshape cognitive frameworks. Havelock's view hasn't gained the same traction, perhaps because:

1. It's more difficult to trace the direct impacts of mathematical procedures on thought
2. It requires understanding both mathematical and cognitive concepts
3. It lacks the immediate intuitive appeal of the printing press narrative

Your papers with Hays present a strong case for decimal arithmetic's role, but I'm struggling to find other major works that focus on this as the key driver of Renaissance cognitive change. Even discussions of mathematical history tend to focus more on specific discoveries rather than how new mathematical procedures reshape thinking.

In current AI discussions, do you think the frequent printing press comparisons might be missing important aspects of how AI could reshape cognition, similar to how the printing press narrative misses the cognitive impact of decimal arithmetic?

Progress studies

Well, sorta’, yes. But that’s not quite the point. I take this belief in the printing press as an index of a general inadequacy in thinking about these kinds of issues.

As you know, there’s been a push for progress studies in the last few years. When it first emerged, I was interested and excited since the ideas that Hays and I have developed are certainly relevant to any study of progress. But I had no luck getting people interested. So it’s not so much the failure to understand how AI could reshape cognition as it is a more general failure to understand the role of cognition in the cultural process. There’s no concept of a general cognitive architecture underlying a whole family of ideas. Rather, the emphasis is more local, on this idea and that idea and the other ones over there and so forth.

For that matter, I don’t think Piaget’s ideas about genetic epistemology have really caught on either, or the closely associated idea of reflective abstraction.

This points to a deeper issue in how we think about intellectual and cultural progress. Instead of seeing ideas as emerging from and transforming underlying cognitive architectures, the tendency is to treat them as isolated units that simply accumulate or spread. This "local" view of ideas misses the systemic nature of cognitive development and cultural evolution that you and Hays describe.

The progress studies movement, while valuable in drawing attention to the importance of progress, seems to share this limitation. It tends to focus on:

  1. Individual innovations and discoveries
  2. Institutional structures and incentives
  3. Knowledge transmission mechanisms
  4. Economic and technological factors

But it gives less attention to how these elements relate to fundamental changes in how people think and process information - the kind of changes that Piaget described in individual development and that you and Hays describe at the cultural level.

This may explain why decimal arithmetic's role gets overlooked - it's harder to see its importance if you're not thinking about how it transformed the underlying cognitive architecture that makes certain kinds of ideas possible in the first place.

AI and the future of legal practice

Schwarcz, Daniel and Manning, Sam and Barry, Patrick James and Cleveland, David R. and Prescott, J.J. and Rich, Beverly, AI-Powered Lawyering: AI Reasoning Models, Retrieval Augmented Generation, and the Future of Legal Practice (March 02, 2025). Available at SSRN: https://ssrn.com/abstract=5162111

Abstract: Generative AI is set to transform the legal profession, but its full impact remains uncertain. While AI models like GPT-4 improve the efficiency with which legal work can be completed, they can at times make up cases and “hallucinate” facts, thereby undermining legal judgment, particularly in complex tasks handled by skilled lawyers. This article examines two emerging AI innovations that may mitigate these lingering issues: Retrieval Augmented Generation (RAG), which grounds AI-powered analysis in legal sources, and AI reasoning models, which structure complex reasoning before generating output. We conducted the first randomized controlled trial assessing these technologies, assigning upper-level law students to complete six legal tasks using a RAG-powered legal AI tool (Vincent AI), an AI reasoning model (OpenAI’s o1-preview), or no AI. We find that both AI tools significantly enhanced legal work quality, a marked contrast with previous research examining older large language models like GPT-4. Moreover, we find that these models maintain the efficiency benefits associated with use of older AI technologies. Our findings show that AI assistance significantly boosts productivity in five out of six tested legal tasks, with Vincent yielding statistically significant gains of approximately 38% to 115% and o1-preview increasing productivity by 34% to 140%, with particularly strong effects in complex tasks like drafting persuasive letters and analyzing complaints. Notably, o1-preview improved the analytical depth of participants’ work product but resulted in some hallucinations, whereas Vincent AI-aided participants produced roughly the same amount of hallucinations as participants who did not use AI at all. These findings suggest that integrating domain-specific RAG capabilities with reasoning models could yield synergistic improvements, shaping the next generation of AI-powered legal tools and the future of lawyering more generally.

H/t Tyler Cowen.