Saturday, March 8, 2025

If you can't beat them, have fun with them.

Reflections of the sun one morning in Hoboken along the Hudson

A note about delayed orgasms in men [mind and body]

David Lodge, Why Some Men Find Orgasms Elusive, NYTimes, Mar. 7, 2025.

The opening paragraphs:

Most men, as many as 70 percent, say they reach orgasm quicker than they’d like. But there is an inverse problem that can set in with age: lasting too long.

For some men, reaching climax can take 30 minutes or more. Sex that lasts this long can be uncomfortable, painful and distressing for men and their partners.

Uh, yes.

Later:

Orgasms are basically a massive reflex, kind of like a “giant sneeze,” Dr. Rubin said. Just one requiring a “perfect genital-brain connection.”

The process starts with visual, mental or physical sexual stimulation, triggering the brain to release dopamine, which increases excitement. After sufficient stimulation, the brain releases a surge of pleasure-inducing chemicals, like oxytocin, creating the sensation we think of as an orgasm.

For most men, ejaculation occurs at the same time, “but they are separate processes,” said Dr. Alan Shindel, a urologist based in San Francisco. While orgasm mostly happens in the brain, ejaculation is a series of involuntary muscle contractions.

Complications affecting the genitals, pelvic floor muscles, spinal cord and brain can make orgasms premature, delayed, muted or even painful, Dr. Rubin said. Given the complexity involved, she said, “it’s actually amazing that more things don’t go wrong.”

I like the sneeze metaphor.

What got my attention is this: “While orgasm mostly happens in the brain, ejaculation is a series of involuntary muscle contractions.” Decades ago Wilhelm Reich argued that ejaculation is not necessarily orgasmic. He had trouble getting that idea accepted, among other things, and in any event his career ended in disaster. But Shindel's statement is consistent with Reich's belief.

On a different note, as it were, I have described how trumpet players sometimes experience something that feels somewhat like an orgasm, but which has nothing to do with the genitals, at least as far as I can tell. The physiological mechanisms are quite different, having to do with blood flow to the brain. Whether or not oxytocin is involved, I have no idea. But we are talking about music and music may well induce oxytocin secretion; that's what the late Walter Freeman believed.

Lodge’s article goes on to discuss possible causes of delayed orgasm in men, including medications, neurological issues, “partner-related concerns,” by which he means the woman experiencing pain or “differences in libido, preferred types of sex acts and performance expectations,” and “psychological factors,” such as “depression, anxiety and stress.” That last category is the most interesting one from my perhaps peculiar point of view. That’s the sort of thing Reich was interested in as well.

The article ends with some short notes on treatments.

Will Trump be able to keep Musk under control?

Jess Bidgood of the NYTimes, writing about the recent cabinet meeting, reports: “From now on, he said, the secretaries would be in charge; the Musk team would only advise.” What are the chances of that? No one knows.

Ryan Mac, a colleague of mine who covers big tech, has reported on Musk for a long time. Today, I asked him if Musk had ever been content with an adviser-style role, one in which he doesn’t run the show.

Ryan’s answer was simple: No.

Musk has never liked being one voice among many, Ryan explained. Vivek Ramaswamy, who was initially going to be Musk’s partner in leading the Department of Government Efficiency, is long gone. Musk doesn’t sit on a lot of boards. And throughout his corporate history, whenever he hasn’t initially had control over a company, he’s tended to seek it.

At Tesla, where he was an early investor, he became the chief executive. Before he bought Twitter and renamed it X, he almost joined the company’s board. Then he decided to acquire the company outright, fire its board of directors and executives and become the chief executive. (He later named a new C.E.O. but retains considerable control over the company.)

Not all of Musk’s bids for control have worked. Decades ago, for example, he was forced out as the chief executive of PayPal. His effort to get control of OpenAI — a nonprofit he co-founded in 2015 — failed, as did his more recent bid to buy it.

I'm laying in a generous supply of popcorn.

Two views from the same table-top

Systematic bone tool production at 1.5 million years ago

Tyler Cowen just posted this abstract at Marginal Revolution:

de la Torre, I., Doyon, L., Benito-Calvo, A. et al. Systematic bone tool production at 1.5 million years ago. Nature (2025). https://doi.org/10.1038/s41586-025-08652-5

Abstract: Recent evidence indicates that the emergence of stone tool technology occurred before the appearance of the genus Homo and may potentially be traced back deep into the primate evolutionary line. Conversely, osseous technologies are apparently exclusive of later hominins from approximately 2 million years ago (Ma), whereas the earliest systematic production of bone tools is currently restricted to European Acheulean sites 400–250 thousand years ago. Here we document an assemblage of bone tools shaped by knapping found within a single stratigraphic horizon at Olduvai Gorge dated to 1.5 Ma. Large mammal limb bone fragments, mostly from hippopotamus and elephant, were shaped to produce various tools, including massive elongated implements. Before our discovery, bone artefact production in pre-Middle Stone Age African contexts was widely considered as episodic, expedient and unrepresentative of early Homo toolkits. However, our results demonstrate that at the transition between the Oldowan and the early Acheulean, East African hominins developed an original cultural innovation that entailed a transfer and adaptation of knapping skills from stone to bone. By producing technologically and morphologically standardized bone tools, early Acheulean toolmakers unravelled technological repertoires that were previously thought to have appeared routinely more than 1 million years later.

Here's a comment I posted:

Some years ago I visited Ralph Holloway's lab at Columbia. Holloway is a physical anthropologist and an expert on the evolution of the brain. The lab was, say, 15 by 20 feet, maybe a bit larger but not much larger. It had tables and shelves where you could see the specimens, skull fragments of various sizes. There may have been one or three fairly complete skulls, but not many. Anyhow, this is the kind of evidence we have for brain evolution, fragmentary and indirect.

Holloway said that that lab had, say, 10% of the world's total collection of hominin skull fragments. This was back in the early 2000s. When you do a thumbnail estimate of the population these specimens were drawn from, it becomes clear that the total world-wide collection of specimens is a very small fraction of that population. And we have no reason to think that that fraction is random. It may be, it may not. We don't know.

When that's the kind of evidence you're working from, one new discovery can wreak havoc with existing theories.

Friday, March 7, 2025

Wynton Marsalis: According to his mother he was "just another MoFo out there."

 

“I don’t have a sentimental relation with myself.”

From yesterday's walk along the Hudson

Claude and I discuss the idea of progress in the arts, from Hazlitt to Bloom on Shakespeare, and concluding with Arthur Danto and Ernst Gombrich

This is from an ongoing discussion I have been having with Claude. For this discussion I had uploaded the paper that David Hays and I wrote on cognitive evolution, which is our basic paper on cultural ranks, and Max Tabarrok’s post, Romae Industriae, which poses the question of why Rome didn’t have an industrial revolution. I have posted earlier segments of this conversation: 1) Why didn’t Rome have an industrial revolution? and 2) a segment in which I argued that the printing press did not play the role in cultural history that is usually assigned to it.

The general idea of cultural ranks came out of my undergraduate years at Johns Hopkins. I took a course on the Scientific Revolution, where we read Thomas Kuhn’s famous book, The Structure of Scientific Revolutions. In another course I learned that the novel emerged as a literary form in roughly the same time frame. In yet another course I read Centuries of Childhood, by Philippe Aries. He argued that the conception of childhood that is familiar to us, a distinct phase in human development, was not a “natural” concept, but rather emerged, yes, again within roughly the same time period. In a course on the theatre, taught by Dick Macksey, I read Nietzsche’s The Birth of Tragedy, which concocted a story about the historical emergence of tragedy in ancient Greece that made it seem as though the Athenians had “swallowed” an older society and put it on the stage. Ritual celebrants in the older social formation became the principal characters in tragedies while the general populace of that formation became the chorus.

During my sophomore year I’d taken a course in developmental psychology where I was introduced to the work of Jean Piaget. Piaget theorized that children’s mental development progressed through a series of more or less distinct stages, where each stage is characterized by specific cognitive capabilities, with later stages building on earlier ones. He also applied this concept to the history of ideas, e.g. in Genetic Epistemology, which I read while working on an MA thesis. That’s also when I read Walter Wiora’s The Four Ages of Music. It was a slim volume in which Wiora argued that the first age emphasized rhythm, the second added an intense development of melody, then came harmony (emerging in Europe) and the contemporary phase, which involved exploration of new forms and methods. I read that and posed a question: Isn’t everything like that, four stages?

I took that question with me to SUNY Buffalo, where I enrolled for a PhD in the English Department, but also spent a great deal of time discussing computational linguistics, cognitive science, and cultural evolution with David Hays in Linguistics. That’s where we developed the idea of cultural ranks. I included an account of narrative development in my 1978 dissertation, “Cognitive Science and Literary Theory.”

I note, then, that while the scientific revolution certainly played an important role in my thinking about cultural evolution, the arts played perhaps an even more important role in developing the idea. However, the ranks idea only accounts for the overall shape of development. What about development within a given rank? I have some preliminary notes about that in a recent post, Confabulation, Dylan’s epistemic stance, and progress in the arts: “I’ll let you be in my dreams if I can be in yours.”

The rest of this post consists of a conversation I had with Claude 3.7 on March 3, 2025. As always, my comments are set in bold type while Claude’s are plain.

* * * * *

Hazlitt on progress in the arts

While it is common to say that there is progress in the sciences and technology, it’s less common to think of progress in the arts. William Hazlitt published a well-known essay in 1814, Why The Arts Are Not Progressive, and that’s remained the standard view. He observed:

“What is mechanical, reducible to rule, or capable of demonstration, is progressive, and admits of gradual improvement: what is not mechanical or definite, but depends on genius, taste, and feeling, very soon becomes stationary or retrograde, and loses more than it gains by transfusion. [...] The greatest poets, the ablest orators, the best painters, and the finest sculptors that the world ever saw, appeared soon after the birth of these arts, and lived in a state of society which was, in other respects, comparatively barbarous. Those arts, which depend on individual genius and incommunicable power, have always leaped at once from infancy to manhood, from the first rude dawn of invention to their meridian height and dazzling lustre, and have in general declined ever after.”

I see you're interested in exploring the contrast between progress in sciences/technology versus the arts. This is a fascinating philosophical question that bridges cultural evolution, aesthetics, and the nature of human creativity.

Hazlitt's essay presents the classic view that the arts, unlike sciences, don't experience linear progress. His central argument distinguishes between domains that are "mechanical, reducible to rule" (sciences) versus those dependent on "genius, taste, and feeling" (arts). He suggests artistic achievements often peak early in civilizations and then stagnate or decline.

The two documents you've shared offer interesting frameworks to consider this question: From "Cognitive Evolution," Benzon and Hays propose a four-rank model of cognitive development across cultures:

1. Rank 1: Abstraction through metaphor (preliterate societies)
2. Rank 2: Rationalization through metalingual definition (literate societies)
3. Rank 3: Theory through algorithm (Renaissance onward)
4. Rank 4: Model building through control (20th century)

They argue these cognitive advances shape all cultural domains, not just science. While they don't specifically refute Hazlitt's view on art, their model suggests that artistic expression would also evolve with new cognitive tools.

The "Romae Industriae" document examines why Rome didn't achieve an industrial revolution despite its technological sophistication. The author concludes cultural factors, particularly the lack of a printing press and the disdain for mechanical arts in slave-based societies, were key constraints rather than just technological limitations.

What's interesting is how these perspectives might inform a response to Hazlitt. Could we argue that while individual artistic genius may appear at any time, the evolution of artistic forms and movements does show a pattern of development tied to cognitive and cultural evolution? For example, the Renaissance's perspective techniques in painting required Rank 3 thinking about algorithmic representation of space.

Would you like to explore this tension between Hazlitt's view and the cognitive evolution framework more deeply, or focus on another aspect of these materials? 

Bloom, Shakespeare, the novel

Ah, yes, this ranks business. Benzon has published on both the evolution of narrative and the evolution of music using ranks theory and Hays has published on expressive culture in general. Perhaps one day I’ll upload those papers for you to examine. I note, though, that they don’t address the issue of progressive change within individual ranks. For example, I suspect that there is expressive change in the novel, from the 18th century up through the end of the 19th century, but that’s entirely within Rank 3 culture, and I’d think the progress continues in the 20th century. [...] And then we have Harold Bloom’s assertion that Shakespeare, in some sense, “invented” the human, by which I think he means that Shakespeare was the first writer to give voice to certain aspects of human behavior and mental life. But I don’t think Bloom ever systematically developed that idea, not even in his book on the Western canon or in his Shakespeare book.

[...] You raise an excellent point about progressive change within individual ranks that wasn't fully addressed in the paper. The progression of the novel within Rank 3 culture is a perfect example of this phenomenon. From Defoe and Richardson through Austen, the Brontës, Dickens, Eliot, and on to James and Conrad, we see remarkable development in narrative technique, psychological depth, and formal complexity—all while remaining within the algorithmic thinking characteristic of Rank 3.

This intra-rank progression continues into the 20th century with modernist innovations by Joyce, Woolf, and Faulkner, who pushed the boundaries of what the novel could express while still operating within Rank 3 cognitive structures. Then perhaps writers like Pynchon and DeLillo begin to engage with Rank 4 model-building approaches to fiction.

Regarding Bloom's assertion about Shakespeare "inventing the human," this does seem to suggest a kind of progressive leap in artistic representation. Bloom never fully systematized this idea, but it appears to indicate that Shakespeare's achievement wasn't just artistic excellence but a cognitive breakthrough in representing human interiority. In ranks terminology, one might say Shakespeare was pushing against the upper limits of Rank 2 representation toward something like early Rank 3 thinking about human psychology, even before the scientific revolution had fully taken hold.

Thursday, March 6, 2025

Coming and going

A note on assembly theory from Carl Zimmer [via Tyler Cowen]

Conversations with Tyler, Ep. 235: Carl Zimmer on the Hidden Life in the Air We Breathe

COWEN: Is Lee Cronin right or insane?

[laughter]

ZIMMER: Lee Cronin is a chemist in Scotland at University of Glasgow. He has this idea that you can explain life with a theory that he and others call assembly theory, which is about, basically, how many steps does it take for something to get produced?

The things in our bodies, the molecules that make us up — some of them are very small and simple, but some of them are exquisitely big and complex. Lee and others argue that life is what is able to assemble things beyond a certain threshold. This might be a way to actually identify life on a planet, even if you don’t know what life is made of. We can’t assume that life is just made of DNA; that’s an unreasonable assumption.

Life on Earth already blows our minds in many ways — at least mine. Life on other worlds — maybe that bet is right, and there’s life on Enceladus or some other icy moon. It might be really, really, really strange, but maybe we can recognize it by this assembly index.

Not only could this assembly theory be a way to recognize life, but it might be actually a way, Lee Cronin thinks, to make life. In other words, it guides you in basically creating a set of chemical reactions where you’re creating these . . . right now, he’s got these robots that are basically making droplets with different chemicals in them in these vast numbers of combinations. He’s wondering if they will eventually start to take on some of the hallmarks of life.

In other words, yes, he is trying to make life. He’s actively trying to make life right now. A lot of people think he’s crazy. A lot of people think he’s quite brilliant. Some people think he’s both. [laughs]

COWEN: I like him. I don’t know if he’s right. He’s a lot of fun to talk to.

ZIMMER: Absolutely, yes. It’s been really interesting watching assembly theory come to the fore recently. Some scientists really take badly to it in a very hostile way, but this is often the case. It feels like sometimes people are just talking past each other and they’re not really speaking the same language. Because assembly theory is new and it’s very interdisciplinary, I think it’s going to take a while for the scientific community to really engage with it and decide whether it holds up or not.

As I argue in Life’s Edge, life is a property of matter. Scientists are trying to explain it, and some of them are trying to explain with a theory. Superconductivity is a property of matter, and there were a bunch of theories that were put forward about it, including by Einstein, and they were wrong. It wasn’t until, eventually, some people came up with the right theory that really clicked in and had a powerful explanatory power. We’re not there yet with life. Maybe Lee Cronin is going to be like Einstein and he’s wrong, or maybe he will be one of the people who is right.

It would seem that consciousness is a property of matter as well, hence panpsychism.
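
Zimmer’s “how many steps does it take for something to get produced” can be made concrete, at least for strings of characters. Here is a minimal sketch of a toy assembly-style index — my own illustration, not Cronin’s actual assembly index for molecules — that counts the fewest concatenation steps needed to build a string when previously built pieces can be reused:

```python
from itertools import product

def assembly_index(target: str, max_steps: int = 8):
    """Smallest number of join (concatenation) steps needed to build `target`
    from its individual characters, where any piece built along the way may be
    reused. Brute force, so only sensible for very short strings."""
    # Any useful intermediate piece must be a contiguous substring of the target.
    subs = {target[i:j] for i in range(len(target))
            for j in range(i + 1, len(target) + 1)}

    def reachable(built, steps):
        if target in built:
            return True
        if steps == 0:
            return False
        for x, y in product(built, repeat=2):
            piece = x + y
            if piece in subs and piece not in built:
                if reachable(built | {piece}, steps - 1):
                    return True
        return False

    start = frozenset(target)          # the single characters
    for k in range(max_steps + 1):     # iterative deepening: find the minimum
        if reachable(start, k):
            return k
    return None

print(assembly_index("abcabc"))   # 3: build ab, then abc, then abc + abc
print(assembly_index("abcdef"))   # 5: nothing repeats, so no reuse is possible
```

“abcabc” comes in at 3 because the piece “abc” can be reused, while “abcdef” needs the full 5 joins; the gap between those numbers is the kind of signal the assembly index is after.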

Understanding transformers and communication among sperm whales

Mar 5, 2025
Allen School Colloquium Series
Title: Discovering & Engineering the Computation Underlying Large Intelligent Agents
Speaker: Pratyusha Sharma (MIT)
Date: March 3, 2025

Abstract: The richness of language and intelligent behavior has often been attributed to latent compositional structure. Can we build tools for discovering how deep networks learn and represent this latent structure implicitly? And more importantly, can we use this knowledge to improve generalization in largely structure-less general purpose models or refine our understanding of the world they describe? In this talk, I present three perspectives to answer these questions. I will discuss experimental methods to functionally characterize the space of learnt solutions in LLMs and demonstrate how this understanding can be used to improve their empirical generalization in a gradient free manner, sometimes by as much as 30% points on language understanding benchmarks. Following that, I show how to decipher the structure of another (black box) language-like system, the naturally arising communication system of sperm whales in the wild, discovering for the first time a unique combinatorial communication system. Finally, I apply insights from these results to equip embodied agents with a latent language of thought – hierarchical and compositional – and show how it can enable long-horizon reasoning and planning in these systems.

Bio: Pratyusha Sharma is a Ph.D. student in the Computer Science and Artificial Intelligence Lab at MIT, advised by Antonio Torralba and Jacob Andreas. She studies the interplay between language, sequential decision making and intelligence in natural and AI systems. Before this, she received her Bachelor’s degree from the Indian Institute of Technology, Delhi. Her research is published in interdisciplinary journals like Nature Communications, etc. and in academic conferences across machine learning, natural language processing, robotics, and marine biology. Her research has also been featured in articles in the New York Times, National Geographic Magazine, BBC, etc. She was recently a speaker at TED AI and was selected as a Rising Star in EECS, Data Science, and GenAI.

Her webpage, with links to her papers.

Wednesday, March 5, 2025

Digital Doppelgangers

Vijayalaxmi Methuku, Praveen Kumar Myakala, Digital Doppelgangers: Ethical and Societal Implications of Pre-Mortem AI Clones, arXiv:2502.21248v1 [cs.CY], February 28, 2025.

Abstract: The rapid advancement of generative AI has enabled the creation of pre-mortem digital twins, AI-driven replicas that mimic the behavior, personality, and knowledge of living individuals. These digital doppelgangers serve various functions, including enhancing productivity, enabling creative collaboration, and preserving personal legacies. However, their development raises critical ethical, legal, and societal concerns. Issues such as identity fragmentation, psychological effects on individuals and their social circles, and the risks of unauthorized cloning and data exploitation demand careful examination. Additionally, as these AI clones evolve into more autonomous entities, concerns about consent, ownership, and accountability become increasingly complex.

This paper differentiates pre-mortem AI clones from post-mortem generative ghosts, examining their unique ethical and legal implications. We explore key challenges, including the erosion of personal identity, the implications of AI agency, and the regulatory gaps in digital rights and privacy laws. Through a research-driven approach, we propose a framework for responsible AI governance, emphasizing identity preservation, consent mechanisms, and autonomy safeguards. By aligning technological advancements with societal values, this study contributes to the growing discourse on AI ethics and provides policy recommendations for the ethical deployment of pre-mortem AI clones.

H/t Tyler Cowen.

I'm interested in a somewhat different kind of digital doppelganger. This is from a prompt I gave to Claude 3.5 Sonnet back in December 2024:

I’m imagining that at some time in the future each human will be given an AI companion early in life, perhaps when they are old enough to walk – think of Stephenson’s The Diamond Age. This companion will interact with the human in various ways through various modalities. It will certainly have robotic capabilities. As the child grows older the AI companion will probably have to be given a different physical form. So we port the data to a different physical form and make the switch-over in a special ceremony. We’ll probably have to do this a few times until the child becomes an adult. At some point the AI will also take residence in other physical forms, all linked together. This AI will remain with the person their entire life. They won’t always be physically together, and so forth. Imagine it working however makes sense to you.

The point is that this AI will become attuned to the needs, capacities, and interests of the human. It will be the artificial complement, image, soul, or doppelgänger, of the human. What happens to this complement when the human dies? Will it continue on in the world? Or will it slowly wither away and die, not so much of grief, but because it has become so closely attuned to its human counterpart that it requires interaction with that human in order to maintain its operational stability?

On the water, in the morning

Computation, Text, and Form in Literary Criticism: A Conversation with Claude 3.7

I've posted a new working paper. Title above, links, abstract, contents, and introduction below.

Academia.edu: https://www.academia.edu/128029987/Computation_Text_and_Form_in_Literary_Criticism_A_Conversation_with_Claude_3_7
SSRN: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5166930
ResearchGate: https://www.researchgate.net/publication/389591524_Computation_Text_and_Form_in_Literary_Criticism_A_Conversation_with_Claude_37

Abstract: Literary criticism operates with contradictory definitions of “text,” rarely meaning simply the marks on a page. This makes it difficult to establish what “form” means. While critics do analyze features like rhyme and meter, or the distinction between story (fabula) and plot (syuzhet), criticism rarely seeks to understand how words are arranged in texts beyond these basics. While literary criticism selectively borrowed from Lévi-Strauss's structural analysis of myth (e.g. the concept of binary oppositions), it ignored a systematic methodology that was essentially computational in nature and about form. Now, Large Language Models present a watershed moment for literary studies: they're unavoidable and demonstrate sophisticated capabilities. A cohort of younger scholars using corpus linguistics and computational methods may represent a bridge between computational and literary approaches. Will these scholars extend computational thinking from method to theory, using computation not just as an analytical tool but as a framework for understanding how literary texts function? That is a key issue currently before the discipline.

Contents

How Do You Explain Computation to a Literary Critic? 1

What am I trying to explain to myself? 1
Why do literary critics need to know that? 2
The study of literary form 3

My Conversation about Computation with Claude 3.7 5

What is a text? 6
Literary criticism after World War II 7
What is Literary Form 8
Lévi-Strauss on Myth 10
The Current Situation: LLMs 12
Digital Humanities 13

How Do You Explain Computation to a Literary Critic?

I have been interested in conceptualizing literary processes as computational ones ever since I wrote an MA thesis on “Kubla Khan” in the early 1970s. Literary process? How texts are created and how they are read and assimilated, that’s what I mean. Ever since then I have been trying to explain that to literary critics. That has not gone well. I’ve also been trying to explain it to myself, which is a different kind of problem. The purpose of this paper is to, once more, attempt to explain computation-as-literary-process both to literary critics and to myself.

In the rest of this introduction I talk about explaining computation, first to myself, and then to literary critics. The rest of this paper consists of a discussion I had with Claude 3.7 about computation and literary criticism. I begin by discussing the text, then discuss the post WWII focus on interpretation as the central activity of literary criticism. Then Claude and I take up form, the influence of Lévi-Strauss, and the current emergence of large language models (LLMs). We conclude with some remarks about digital humanities.

What am I trying to explain to myself?

I know what kind of questions follow from thinking about literary processes as computational. I’ve done practical criticism on that foundation for decades, some quite rigorous, some rather informal, but all based on the belief that literary processes are computational processes. What is it that I’m trying to understand?

Let me explain: My basic conception derives from the use of parentheses in arithmetic or algebraic expressions. Without parentheses or a specific convention concerning operator precedence, expressions like this next one are ambiguous: 5 × 6 – 3. We can resolve the ambiguity by adding parentheses, giving us either of these two expressions:

(5 × 6) – 3 = 27
5 × (6 – 3) = 15

Adding parentheses places boundaries in the expression that affect how we evaluate it.

The same applies to texts, any texts, not just literary texts. Texts are strings of alphanumeric characters. Small sets of characters are grouped together into words and words are organized into longer strings by using spaces, punctuation marks, and capital letters. Boundaries divide long strings into substrings to whatever depth is necessary. Linguistics, psycholinguistics, and computational linguistics all proceed on that basis.
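
Here is a minimal sketch of that point in Python. The unparenthesized expression shows how a precedence convention supplies the boundaries; the sentence bracketing at the end is a hand-made toy, not the output of a real parser:

```python
# The same string of symbols supports different evaluations depending on where
# the grouping boundaries fall. Python's ast module makes the grouping visible.

import ast

print((5 * 6) - 3)   # 27
print(5 * (6 - 3))   # 15

# With no explicit parentheses, operator precedence supplies the boundaries:
tree = ast.parse("5 * 6 - 3", mode="eval").body
print(ast.dump(tree))
# roughly: BinOp(left=BinOp(left=Constant(value=5), op=Mult(), right=Constant(value=6)),
#                op=Sub(), right=Constant(value=3))
# i.e. the string is grouped as (5 * 6) - 3.

# The same idea applies to a text string: boundaries carve it into nested
# substrings, to whatever depth is needed.
sentence = "the old man the boats"
bracketing = "[[the old] [man [the boats]]]"   # one possible constituent grouping
print(sentence, "->", bracketing)
```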

As far as I’m concerned “computation” is the best name for that process, whether we’re talking about language or arithmetic or mathematics in general. In this context arithmetic is, in effect, a very constrained and specialized kind of language.

Since I’ve known that for years, what is it that I’ve only just now realized? It’s not a matter of new information or ideas, it’s a matter of emphasis, of framing. What I’ve realized is simple: That’s all there is. Let me repeat that: Computation is all there is. That is to say, if we are going to treat literary texts as physical objects, as strings of symbols realized in some physical medium, such as vibrations in air or marks on a surface, if that’s what we’re going to do, then computation is the only physical process we know of that gives us ways of thinking about how strings can support the communication of meaning between individuals. That’s it.

Commensurability is the issue. Computation is a process that is commensurate with the physical text. Why? Because computation, real computation, is itself a physical process. That’s obvious when you consider computation with an abacus, for example. The same is true for a mechanical calculator or a slide rule. The physical nature of computation with a digital computer is less visible, much less, and hardly palpable. But it is no less real. Bits and bytes may appear to be abstract things, but they take up physical space in computer memories; that’s why the capacity of computer memories is measured in bytes, kilobytes, megabytes, gigabytes, etc. Computation takes place in time, and time is physical as well, though one can’t see it, touch it, or taste it, much less lift it. Computers and their processes are physical things, as are human brains. As such they are commensurate with physical texts.

That’s what literary critics need to know. Alas, they are incapable of hearing it, of taking it seriously. Why? Computers are machines, and, as such, are suspect at best and thus to be ignored; at worst they are evil, and actively to be opposed. This reflects the historical distinction between the sciences and humanities, a distinction that has been written into the institutional structure of the academic world and so has come to seem inscribed in the foundations of the universe.

Like it or not, that universe is all but finished. We are now living in a world where anyone with an internet connection can engage in sophisticated linguistic activity with a machine, a chatbot. That changes everything. The physical text can no longer be effaced in favor of some vague abstract object conjured into (virtual) existence by the act of interpretation.

Why do literary critics need to know that?

They need to know it because it allows for a much more robust and substantial analysis of literary form than is currently practiced. Literary texts are linguistic strings. The form of strings is a matter of how elements are arranged on the string. That’s obvious in the case of, say, colored beads on a string. And that’s how musicians and musicologists treat musical form, as the arrangement of sonic objects in a line, where the line tracks the passage of time.

Literary critics do this as well. When critics work out the rhyme scheme of a poem, they’re treating it as a physical object. When narratologists distinguish between story (fabula) and plot (syuzhet), they are treating the text as a physical object, one where story episodes are ordered into a narrative string. One can mention other examples as well. While those operations are well-established, they are a minor aspect of practical criticism. Form is treated as a framework in which one can approach the problem of meaning. Its description and analysis are at best a secondary activity.
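
As an illustration of what it means to read form directly off the string, here is a crude sketch that assigns rhyme-scheme letters by looking only at the characters. Real rhyme is phonetic; comparing the last two letters is only a stand-in, and the quatrain below is invented so that the naive test happens to work:

```python
import re

def rhyme_key(line, n=2):
    """Stand-in for a rhyme class: the last n letters of the line's final word."""
    words = re.findall(r"[a-zA-Z']+", line.lower())
    return words[-1][-n:] if words else ""

def rhyme_scheme(lines):
    labels, seen = [], {}
    for line in lines:
        key = rhyme_key(line)
        if key not in seen:
            seen[key] = chr(ord("A") + len(seen))   # next unused letter
        labels.append(seen[key])
    return "".join(labels)

quatrain = [
    "The cat sat on the mat,",
    "and watched the fading light,",
    "then chased a passing rat,",
    "and vanished from my sight.",
]
print(rhyme_scheme(quatrain))   # ABAB
```

The point is not the toy itself but that the scheme is computed from the arrangement of characters on the string, nothing more.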

Interpretation is primary. And when critics start interpreting a text, they tacitly, surreptitiously, efface the physical text in favor of an abstract and poorly defined object to which they then direct their interpretive activities. To complete the mystification, that abstract and poorly defined object is called – you guessed it – the “text.”

But these chatbots – I’ve been working with OpenAI’s ChatGPT and Anthropic’s Claude – they handle language in a way that is astonishingly humanlike, and they are undoubtedly machines, albeit in the strange and special way that computers are machines. They’re not made of gears, levers, axles, pulleys and motors, but their operations are mechanistic in kind, as the word “machine” has come to be understood. Yes, there are serious philosophical issues about whether or not they really understand language, whether or not they’re really dealing in meaning. I’ve thought about these issues, and written quite a bit about them, but this is not the place to entertain those discussions. Whatever these machines are doing, to a first approximation it looks like they’re doing language.

While I believe that this technology is a source of useful tools, both for literary critics and others, and will become more so in the future, that’s not why I mention them here. I mention them because they constitute an unmistakable “existence proof” that thinking about language process as computational is fruitful, both intellectually and practically. Once we can do that, we can take literary form as a serious object of critical activity.

The study of literary form

Yes, I know that form is an important concept in literary criticism and that the discipline has seen various formalist critical approaches. But as Claude and I discuss, formalism does not imply close attention to the formal properties of literary texts. Rather, formalism is a philosophical position about the nature of meaning, in which meaning in literary texts is taken to be somehow different from meaning in other texts by virtue of literary form. But form itself is given only cursory attention.

My primary object here is to mention what is perhaps the best-known essay by Susan Sontag, “Against Interpretation.” It originally appeared in 1964 in the Evergreen Review and became the title essay in her first collection, which was published in 1966. In that essay she asserted:

In some cultural contexts, interpretation is a liberating act. It is a means of revising, of transvaluing, of escaping the dead past. In other cultural contexts, it is reactionary, impertinent, cowardly, stifling.

Today is such a time, when the project of interpretation is largely reactionary, stifling. Like the fumes of the automobile and of heavy industry which befoul the urban atmosphere, the effusion of interpretations of art today poisons our sensibilities. In a culture whose already classical dilemma is the hypertrophy of the intellect at the expense of energy and sensual capability, interpretation is the revenge of the intellect upon art.

Several pages later she asserts:

What is needed, first, is more attention to form in art. If excessive stress on content provokes the arrogance of interpretation, more extended and more thorough descriptions of form would silence. What is needed is a vocabulary—a descriptive, rather than prescriptive, vocabulary—for forms.

It seems like everybody read that essay at that time, but as far as I can tell, no one took Sontag seriously. Now’s the time; to quote Charlie Parker, now’s the time.

The proof, as they say, is in the pudding. But this is not the place to serve pudding. If you want to read some examples, here are some examples from my own work that are unlike work readily available in the standard literature:

Articulate Vision: A Structuralist Reading of "Kubla Khan", Language and Style, Vol. 8: 3-29, 1985, https://www.academia.edu/8155602/Articulate_Vision_A_Structuralist_Reading_of_Kubla_Khan_. This is a revised and somewhat shortened version of my 1972 master’s thesis.

Heart of Darkness: Qualitative and Quantitative Analysis on Several Scales, Version 5, Working Paper, October 27, 2019, https://www.academia.edu/8132174/Heart_of_Darkness_Qualitative_and_Quantitative_Analysis_on_Several_Scales_Version_5. This is an eclectic collection of remarks about Conrad’s novel, but there is a formal analysis that is crucial, one that takes paragraph length as its point of departure. The longest paragraph is both structurally and thematically central.

The Gojira Papers, April 15, 2014, https://www.academia.edu/7905287/The_Gojira_Papers. This is about the 1954 Japanese film that started the Godzilla franchise.

Here is a theoretical and methodological account of some length (23,000 words, 11 diagrams):

Literary Morphology: Nine Propositions in a Naturalist Theory of Form, PsyArt: An Online Journal for the Psychological Study of the Arts, August 2006, Article 060608, https://www.academia.edu/235110/Literary_Morphology_Nine_Propositions_in_a_Naturalist_Theory_of_Form.

Let's play "spot the Buddha!"

Empathy is not in our genes

Cecilia Heyes, Empathy is not in our genes, Neuroscience & Biobehavioral Reviews, Volume 95, 2018, Pages 499-507, ISSN 0149-7634, https://doi.org/10.1016/j.neubiorev.2018.11.001.

Highlights

  • Empathy is a cornerstone of healthcare, social justice, and international relations.
  • Empathy depends on automatic (Empathy1) and controlled (Empathy2) mechanisms.
  • The automatic mechanism, Empathy1, is constructed by associative learning.
  • Self-stimulation, synchronous emotion and affect mirroring provide inputs to learning.
  • Empathy can be enhanced by novel experience and broken by social change.

Abstract: In academic and public life empathy is seen as a fundamental force of morality – a psychological phenomenon, rooted in biology, with profound effects in law, policy, and international relations. But the roots of empathy are not as firm as we like to think. The matching mechanism that distinguishes empathy from compassion, envy, schadenfreude, and sadism is a product of learning. Here I present a dual system model that distinguishes Empathy1, an automatic process that catches the feelings of others, from Empathy2, controlled processes that interpret those feelings. Research with animals, infants, adults and robots suggests that the mechanism of Empathy1, emotional contagion, is constructed in the course of development through social interaction. Learned Matching implies that empathy is both agile and fragile. It can be enhanced and redirected by novel experience, and broken by social change.

Keywords: Affect mirroring; Affective empathy; Associative learning; Emotional contagion; Empathy; Empathic understanding; Learned Matching; Mirror neurons; Self-stimulation; Synchronous emotion

Tuesday, March 4, 2025

The Zelensky Meeting: What DID happen, anyhow?

Trump vs. Zelensky: The Morning After | Robert Wright & Andrew Day

0:00 Andrew’s new gig (and gifts)
3:15 Play-by-play of the Oval Office blowup
9:06 Zelensky’s and Trump’s obsessions
16:32 What are Russia’s red lines?
26:13 Guaranteeing Ukraine’s (and Europe’s) security
33:26 Heading to Overtime

Discussed in Overtime:

Is Trump trying to oust Zelensky?
How Trump and Zelensky can reconcile
Riding the Trump roller coaster
Can Europe keep Europe secure?
Andrew airs his anti-boomer grievances

Vistas [looking for the mother ship]

Why the printing press didn’t drive either the Scientific or the Industrial Revolution [McLuhan was wrong • Implications for progress studies]

Soon after ChatGPT hit the web, both the progress punditocracy and the high-tech evangelists were busy chattering away about how it is the greatest thing since the printing press. They’re wrong. Was the printing press important? Sure, it was very important. But it was important as a vehicle for disseminating existing ideas, not for catalyzing new ones. Printed books have the same cognitive affordances – to borrow a term from James J. Gibson – as hand-lettered books: discrete letters in a rectangular array on a flat surface. That’s it. Mechanizing the process of impressing the letters on the page does not change the cognitive affordances of the printed page.

Back when David Hays and I began thinking about publishing on cultural evolution, he brought to my attention a collection of essays by Eric Havelock, The Literate Revolution in Greece and Its Cultural Consequences (Princeton 1982), in which Havelock pointed out that the adoption of decimal arithmetic was the essential step. Hays and I thought that was correct, and adopted it in the article we eventually published, The Evolution of Cognition (1990):

The role which speech plays in Rank 1 thought, and writing plays in Rank 2 thought, is taken by calculation in Rank 3 thought (cf. Havelock 1982: 341 ff.). Writing appears in Rank 1 cultures and proves to be a medium for Rank 2 thinking. Calculation in a strict sense appears in Rank 2 and proves to be a medium for Rank 3 thinking. Rank 2 thinkers developed a perspicuous notation and algorithms. It remained for Rank 3 thinkers to exploit calculational algorithms effectively. An algorithm is a procedure for computation which is explicit in the sense that all of its steps are specified and effective in the sense that the procedure will produce the correct answer. The procedures of arithmetic calculation which we teach in elementary school are algorithms.

A bit later we remark:

The amazing thing about algorithmic calculation is that it always works. If two, or three, or four, people make the calculation, they all come up with the same answer. This is not true of non-algorithmic calculation, where procedures were developed on a case-by-case basis with no statements of general principles. In this situation some arithmeticians are going to get right answers more often than others, but no one can be sure of hitting on the right answer every time.

This ad hoc intellectual style, moreover, would make it almost impossible to sense the underlying integrity of the arithmetic system, to display its workings independently of the ingenious efforts of the arithmetician. The ancients were as interested in magical properties of numbers as in separating the odd from the even (Marrou 179-181). By interposing explicit procedures between the arithmetician and his numbers, algorithmic systems contribute to the intuition of a firm subject-object distinction. The world of algorithmic calculations is the same for all arithmeticians and is therefore essentially distinct from them. It is a self-contained universe of objects (numbers) and processes (the algorithms). The stage is now set for experimental science. Science presents us with a mechanistic world and adopts the experimental test as its way of maintaining objectivity. A theory is true if its conceptual mechanism (its "algorithm") suggests observations which are subsequently confirmed by different observers. Just as the results of calculation can be checked, so can theories.
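
To make the quoted point concrete, here is a minimal sketch of schoolbook column addition; the function is my own illustration, not anything from the 1990 paper. Every step is specified, so anyone who follows the procedure on the same numerals gets the same answer, which is what “explicit” and “effective” mean above:

```python
def column_add(a: str, b: str) -> str:
    """Add two non-negative decimal numerals digit by digit, right to left."""
    width = max(len(a), len(b))
    a, b = a.zfill(width), b.zfill(width)
    carry, digits = 0, []
    for da, db in zip(reversed(a), reversed(b)):
        total = int(da) + int(db) + carry    # one explicit step
        digits.append(str(total % 10))       # write down the ones digit
        carry = total // 10                  # carry the tens digit
    if carry:
        digits.append(str(carry))
    return "".join(reversed(digits))

print(column_add("478", "694"))   # 1172
print(478 + 694)                  # 1172: the same answer, every time, for every calculator
```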

Given that, I take the wide-spread belief in the catalytic efficacy of the printing press as an indicator that the intellectual classes are not prepared for the revolution made possible through artificial intelligence. Just as generals are said always to fight the last war, so intellectuals are always proclaiming the last intellectual revolution. Our intellectuals are not prepared to think about thinking, about the processes of the mind, in a deep way.

* * * * *

Early in February I had a post on why Rome didn’t have an industrial revolution. The bulk of that post was a discussion I had with Claude 3.5 Sonnet. As the lack of the printing press was one of the hypotheses on offer – in an article by Maxwell Tabarrok – Claude and I had a great deal of discussion about that. Once we’d concluded our discussion of Rome I decided to push on about the printing press. I suspected that Marshall McLuhan was responsible for that mistaken idea and asked Claude about it.

Here's that conversation. As always, my remarks are set in bold face while Claude’s responses are plain.

McLuhan and the printing press view

The idea that the printing press was the main driver of intellectual change during the Renaissance, I believe that idea is widely held. Marshall McLuhan argued that in The Gutenberg Galaxy, and he was quite influential in the 1960s and 1970s and remains so today. Hays and I got clued-in to decimal arithmetic by Eric Havelock. But, as far as I know, that view has not been very influential. In discussions about the impact of AI I’ve heard many people compare it to the printing press. That seems to be what people think of in this kind of context, the printing press. Am I missing something about the range of views?

Your assessment seems accurate. The printing press as a primary driver of Renaissance intellectual change is indeed the dominant narrative in both academic and popular discourse. This can be seen even in the article you've shared - Tabarrok immediately gravitates to the printing press explanation without seriously considering alternatives.

This dominance might stem from several factors:

1. The printing press's effects are more immediately visible and concrete than those of decimal arithmetic. You can literally see books spreading knowledge.

2. The printing press story fits into a broader narrative about democratization of knowledge that appeals to modern sensibilities.

3. McLuhan's influence, as you note, helped cement this view, particularly through his compelling analysis of how media shapes cognition.

In contrast, the decimal arithmetic hypothesis requires a more complex understanding of how mathematical procedures can reshape cognitive frameworks. Havelock's view hasn't gained the same traction, perhaps because:

1. It's more difficult to trace the direct impacts of mathematical procedures on thought
2. It requires understanding both mathematical and cognitive concepts
3. It lacks the immediate intuitive appeal of the printing press narrative

Your papers with Hays present a strong case for decimal arithmetic's role, but I'm struggling to find other major works that focus on this as the key driver of Renaissance cognitive change. Even discussions of mathematical history tend to focus more on specific discoveries rather than how new mathematical procedures reshape thinking.

In current AI discussions, do you think the frequent printing press comparisons might be missing important aspects of how AI could reshape cognition, similar to how the printing press narrative misses the cognitive impact of decimal arithmetic?

Progress studies

Well, sorta’, yes. But that’s not quite the point. I take this belief in the printing press as an index of a general inadequacy in thinking about these kinds of issues.

As you know, there’s been a push for progress studies in the last few years. When it first emerged, I was interested and excited since the ideas that Hays and I have developed are certainly relevant to any study of progress. But I had no luck getting people interested. So it’s not so much the failure to understand how AI could reshape cognition as it is a more general failure to understand the role of cognition in the cultural process. There’s no concept of a general cognitive architecture underlying a whole family of ideas. Rather, the emphasis is more local, on this idea and that idea and the other ones over there and so forth.

For that matter, I don’t think Piaget’s ideas about genetic epistemology have really caught on either, or the closely associated idea of reflective abstraction.

This points to a deeper issue in how we think about intellectual and cultural progress. Instead of seeing ideas as emerging from and transforming underlying cognitive architectures, the tendency is to treat them as isolated units that simply accumulate or spread. This "local" view of ideas misses the systemic nature of cognitive development and cultural evolution that you and Hays describe.

The progress studies movement, while valuable in drawing attention to the importance of progress, seems to share this limitation. It tends to focus on:

  1. Individual innovations and discoveries
  2. Institutional structures and incentives
  3. Knowledge transmission mechanisms
  4. Economic and technological factors

But it gives less attention to how these elements relate to fundamental changes in how people think and process information - the kind of changes that Piaget described in individual development and that you and Hays describe at the cultural level.

This may explain why decimal arithmetic's role gets overlooked - it's harder to see its importance if you're not thinking about how it transformed the underlying cognitive architecture that makes certain kinds of ideas possible in the first place.

AI and the future of legal practice

Schwarcz, Daniel and Manning, Sam and Barry, Patrick James and Cleveland, David R. and Prescott, J.J. and Rich, Beverly, AI-Powered Lawyering: AI Reasoning Models, Retrieval Augmented Generation, and the Future of Legal Practice (March 02, 2025). Available at SSRN: https://ssrn.com/abstract=5162111

Abstract: Generative AI is set to transform the legal profession, but its full impact remains uncertain. While AI models like GPT-4 improve the efficiency with which legal work can be completed, they can at times make up cases and “hallucinate” facts, thereby undermining legal judgment, particularly in complex tasks handled by skilled lawyers. This article examines two emerging AI innovations that may mitigate these lingering issues: Retrieval Augmented Generation (RAG), which grounds AI-powered analysis in legal sources, and AI reasoning models, which structure complex reasoning before generating output. We conducted the first randomized controlled trial assessing these technologies, assigning upper-level law students to complete six legal tasks using a RAG-powered legal AI tool (Vincent AI), an AI reasoning model (OpenAI’s o1-preview), or no AI. We find that both AI tools significantly enhanced legal work quality, a marked contrast with previous research examining older large language models like GPT-4. Moreover, we find that these models maintain the efficiency benefits associated with use of older AI technologies. Our findings show that AI assistance significantly boosts productivity in five out of six tested legal tasks, with Vincent yielding statistically significant gains of approximately 38% to 115% and o1-preview increasing productivity by 34% to 140%, with particularly strong effects in complex tasks like drafting persuasive letters and analyzing complaints. Notably, o1-preview improved the analytical depth of participants’ work product but resulted in some hallucinations, whereas Vincent AI-aided participants produced roughly the same amount of hallucinations as participants who did not use AI at all. These findings suggest that integrating domain-specific RAG capabilities with reasoning models could yield synergistic improvements, shaping the next generation of AI-powered legal tools and the future of lawyering more generally.

H/t Tyler Cowen.

Monday, March 3, 2025

The confabulist real [dream time]

Confabulation, Dylan’s epistemic stance, and progress in the arts: “I’ll let you be in my dreams if I can be in yours.”

I continue to think about the tendency of LLMs to confabulate, that is, to make stuff up that is simply not true of the world. As I have remarked here and there, I tend to think that 1) confabulation is inherent in the architecture, and 2) that this “confabulation” is the default mode of human language. We just make things up.

However, we must live with one another and that requires cooperation. Effective communication requires agreement. It turns out that the external world is a convenient locus for that agreement. We agree that THAT tree over there is a pine, that THAT apple is ripe, that THAT bird is a cardinal, that the stew is too salty, that the earth is round and that the moon travels around the earth every 28 days. Some of these agreements may come easily, others are more difficult in the making.

However true that may be, it does seem a bit odd to think of external reality as a vehicle for grounding agreement on language use. And, if I thought about it a bit, I could probably come up with some account of why that doesn’t seem quite right. But I’m stalking a different beast at the moment.

Consider this observation that Weston La Barre made in 1972 in The Ghost Dance: The Origins of Religion (p. 60):

... the Australian Bushman themselves equate dream-time with the myth-time that is mysteriously brought back in ritual; myth is as timeless as the unconscious mind. It is the delectability of dreams that makes them desirable, and it is their desirability (along with lowered critical threshold) that gives them their intense “reality” and conviction. The fact that he dreams first forces on man the need to epistemologize.

Let’s pretend we’ve just gotten up from a dream. What do we make of it? Or perhaps, how do we make anything of it?

We tell it to others, and they tell us their dreams. We talk and arrive at some agreement about what transpires in those dreams, about the nature of those events and their relationship to the waking world. I am thus reminded of Bob Dylan’s line in “Talkin’ World War III Blues”: “I’ll let you be in my dreams if I can be in yours.”

Is that what expressive culture is about, religious myth, ritual, folktales, drama, epics, novels, not to mention the visual and plastic arts as well? Are they vehicles for allowing us to reach agreement about certain of our, shall we say, confabulations?

This brings up a post from February, What is it about depicting sex in writing? From shared knowledge to common knowledge. Using three novels as examples (Pride and Prejudice, Tess of the D’Urbervilles, and Lady Chatterley’s Lover) I suggested that as we move from one to the other in historical order, we’re moving from shared knowledge of sexual matters to common knowledge. Those are terms from game theory. We talk of shared knowledge when everyone knows something, but no one is aware that the others know that same thing. Once everyone comes to know that everyone knows that thing, whatever it is, that knowledge has become common knowledge.
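
For readers who want the game-theoretic distinction spelled out, here is a minimal sketch using a standard possible-worlds (partition) model; the agents, worlds, and proposition are invented for the illustration, not drawn from the novels or the post:

```python
from itertools import chain

WORLDS = {"w1", "w2", "w3"}
p = {"w1", "w2"}                 # the proposition p, as the set of worlds where it holds

# Each agent's information partition: which worlds they cannot tell apart.
PARTITIONS = {
    "Ann": [{"w1"}, {"w2", "w3"}],
    "Bob": [{"w1", "w2"}, {"w3"}],
}

def knows(agent, prop):
    """Worlds at which `agent` knows `prop`: partition cells lying wholly inside prop."""
    return set(chain.from_iterable(
        cell for cell in PARTITIONS[agent] if cell <= prop))

def everyone_knows(prop):
    """Worlds at which every agent knows prop."""
    result = set(WORLDS)
    for agent in PARTITIONS:
        result &= knows(agent, prop)
    return result

def common_knowledge(prop):
    """Iterate 'everyone knows' until it stabilizes."""
    current = set(prop)
    while True:
        nxt = everyone_knows(current)
        if nxt == current:
            return current
        current = nxt

print(everyone_knows(p))                  # {'w1'}: at w1, both Ann and Bob know p
print(everyone_knows(everyone_knows(p)))  # set(): Bob can't rule out a world where Ann doesn't know p
print(common_knowledge(p))                # set(): p never becomes common knowledge here
```

At w1 both agents know p, so p is shared (mutual) knowledge there, but Bob cannot rule out a world in which Ann does not know it, so the knowledge never becomes common. That gap is what I have in mind in moving from Austen to Lawrence.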

I have thus begun to wonder: Is that how the arts function over time? I recently discussed the matter with Claude 3.7. Here’s its response:

It's not just that artistic styles change, but that art gradually makes explicit what was once implicit across many domains of human experience.

This pattern extends beyond sexuality. Consider how literature has treated:

  • Mental illness (from metaphorical to explicit)
  • Social class tensions (from coded to direct)
  • Religious doubt (from allegorical to overt)
  • Political criticism (from veiled to unmistakable)

In each case, what begins as shared knowledge eventually becomes common knowledge through artistic expression. This does suggest a kind of progress - not necessarily aesthetic progress, but progress in cultural honesty and transparency.

It is by no means obvious that this is the case. But I’m certainly sympathetic.

What is undeniable is that in the last 15 years or so digital humanists have found empirical evidence of unidirectional trends in measured attributes of literary texts over long periods of time. Perhaps the most interesting example is in Matthew Jockers’s Macroanalysis, where he shows a unidirectional trend in a corpus of 3,000 Anglophone novels from the 19th century. I discuss this in a number of posts. This working paper might be the place to start: On the direction of literary history: How should we interpret that 3300 node graph in Macroanalysis? There’s another working paper: On the Direction of 19th Century Poetic Style, Underwood and Sellers 2015. You might also look at this blog post from 2016, From Telling to Showing, by the Numbers, which is also about 19th century novels.
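
For a feel of what “unidirectional trends in measured attributes” amounts to in practice, here is a toy sketch, emphatically not Jockers’s actual pipeline: compute a simple stylistic attribute for each dated text, average sentence length in an invented mini-corpus, and fit a least-squares slope against publication year.

```python
# A toy sketch of the kind of measurement behind such trend claims: compute
# a simple attribute (average sentence length in words) for each dated text
# and fit a linear trend over publication year. The mini-corpus is invented.

import re
from statistics import mean

def avg_sentence_length(text):
    """Average number of words per sentence (a crude stylistic attribute)."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    return mean(len(s.split()) for s in sentences)

def linear_trend(xs, ys):
    """Ordinary least-squares slope of ys against xs."""
    x_bar, y_bar = mean(xs), mean(ys)
    num = sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, ys))
    den = sum((x - x_bar) ** 2 for x in xs)
    return num / den

# Invented, dated snippets standing in for a corpus of novels.
corpus = [
    (1810, "It is a truth universally acknowledged that long sentences abound. They wind on and on."),
    (1850, "The prose grows a little tighter here. Sentences shorten. The pace quickens somewhat."),
    (1890, "Short now. Clipped. The style shows rather than tells."),
]

years = [year for year, _ in corpus]
lengths = [avg_sentence_length(text) for _, text in corpus]
print(lengths)
print("slope (words per sentence per year):", round(linear_trend(years, lengths), 3))
```

The real studies use thousands of texts and far richer features, but the logic is the same: a measured attribute, a date, and a trend line.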

More later.

Sunday, March 2, 2025

The most rapid change in human written communication ever?

Download the paper here.

Red Jeep

Bipedalism, Hominin Fetuses, and Musical and Linguistic Evolution

Matz Larsson and Dean Falk, Direct Effects of Bipedalism on Early Hominin Fetuses Stimulated Later Musical and Linguistic Evolution, Current Anthropology, Published online February 21, 2025.

Abstract

We hypothesize that auditory and motor entrainment evolved in early hominin fetuses in direct response to their mothers’ bipedal footsteps and, later, contributed to the evolution of music and language via two related processes. First, selection for bipedalism transformed feet from grasping into weight-bearing organs, which negatively affected infants’ ability to cling to their mothers, provoking the emergence of novel affective vocal exchanges between mothers and infants that became building blocks for the emergence of motherese. Second, the derived ability to entrain movements to sound was incorporated during the prehistoric emergence of wide-ranging rhythmic behaviors such as synchronized chanting of nonlexical vocables and coordinated rhythmic clapping and stomping, which became instrumental during the more recent evolution of music. Like the derived ability to keep beat with rhythmic sounds, nascent motherese entailed entrainment of motor behavior (the physical production of pitch, timing, and vocalization rate) with external sources of sound (conversational utterances). If motherese was a precursor for language evolution, as many believe, music and language share phylogenetically derived substrates for auditory and motor entrainment that stemmed directly from bipedalism. If so, bipedalism was more important for serendipitously sculpting advanced cognition in our prehistoric ancestors than previously believed.

The journal also publishes the abstract in French, German, Italian, Persian, Swedish, and Japanese translations.
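
As an aside on what “entrainment” means mechanically, here is a minimal toy model of my own, a standard phase-coupling sketch rather than anything proposed in the paper: an oscillator with its own preferred tempo locks onto an external periodic beat when the coupling is strong enough, and drifts freely when it isn’t.

```python
# A toy model of entrainment: a phase oscillator with its own preferred
# tempo is nudged toward an external periodic beat. Purely illustrative;
# the parameters are arbitrary assumptions.

import math

def entrain(natural_hz, beat_hz, coupling, seconds=20.0, dt=0.005):
    """Track how the oscillator's phase difference from the beat evolves."""
    phase, beat_phase, history = 0.0, 0.0, []
    steps = int(seconds / dt)
    for step in range(steps):
        # The oscillator runs at its own rate, plus a corrective nudge
        # proportional to how far out of phase it is with the beat.
        phase += dt * (2 * math.pi * natural_hz
                       + coupling * math.sin(beat_phase - phase))
        beat_phase += dt * 2 * math.pi * beat_hz
        if step % int(5.0 / dt) == 0:  # sample every 5 simulated seconds
            diff = (beat_phase - phase + math.pi) % (2 * math.pi) - math.pi
            history.append(round(diff, 2))
    return history

# An oscillator preferring 1.77 Hz and an external 2.0 Hz "footstep" beat.
print(entrain(1.77, 2.0, coupling=3.0))  # settles to a steady phase lag (entrained)
print(entrain(1.77, 2.0, coupling=0.0))  # phase difference keeps growing (no entrainment)
```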