Thursday, March 6, 2025

Coming and going

A note on assembly theory from Carl Zimmer [via Tyler Cowen]

Conversations with Tyler, Ep. 235: Carl Zimmer on the Hidden Life in the Air We Breathe

COWEN: Is Lee Cronin right or insane?

[laughter]

ZIMMER: Lee Cronin is a chemist in Scotland at University of Glasgow. He has this idea that you can explain life with a theory that he and others call assembly theory, which is about, basically, how many steps does it take for something to get produced?

The things in our bodies, the molecules that make us up — some of them are very small and simple, but some of them are exquisitely big and complex. Lee and others argue that life is what is able to assemble things beyond a certain threshold. This might be a way to actually identify life on a planet, even if you don’t know what life is made of. We can’t assume that life is just made of DNA; that’s an unreasonable assumption.

Life on Earth already blows our minds in many ways — at least mine. Life on other worlds — maybe that bet is right, and there’s life on Enceladus or some other icy moon. It might be really, really, really strange, but maybe we can recognize it by this assembly index.

Not only could this assembly theory be a way to recognize life, but it might be actually a way, Lee Cronin thinks, to make life. In other words, it guides you in basically creating a set of chemical reactions where you’re creating these . . . right now, he’s got these robots that are basically making droplets with different chemicals in them in these vast numbers of combinations. He’s wondering if they will eventually start to take on some of the hallmarks of life.

In other words, yes, he is trying to make life. He’s actively trying to make life right now. A lot of people think he’s crazy. A lot of people think he’s quite brilliant. Some people think he’s both. [laughs]

COWEN: I like him. I don’t know if he’s right. He’s a lot of fun to talk to.

ZIMMER: Absolutely, yes. It’s been really interesting watching assembly theory come to the fore recently. Some scientists really take badly to it in a very hostile way, but this is often the case. It feels like sometimes people are just talking past each other and they’re not really speaking the same language. Because assembly theory is new and it’s very interdisciplinary, I think it’s going to take a while for the scientific community to really engage with it and decide whether it holds up or not.

As I argue in Life’s Edge, life is a property of matter. Scientists are trying to explain it, and some of them are trying to explain with a theory. Superconductivity is a property of matter, and there were a bunch of theories that were put forward about it, including by Einstein, and they were wrong. It wasn’t until, eventually, some people came up with the right theory that really clicked in and had a powerful explanatory power. We’re not there yet with life. Maybe Lee Cronin is going to be like Einstein and he’s wrong, or maybe he will be one of the people who is right.
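The assembly index Zimmer describes is easy to illustrate with a toy model. Cronin's index is defined over bond-forming operations on molecules, but the same idea carries over to character strings: count the minimum number of joining steps needed to build a target, where any intermediate product can be reused. Here's a minimal brute-force sketch in Python (my own illustration, not Cronin's algorithm):

```python
from itertools import product

def assembly_index(target):
    """Minimum number of join operations needed to build `target` from
    its individual characters, where any intermediate string can be
    reused. Brute-force search; practical only for very short strings."""
    start = frozenset(target)  # the single characters come for free

    def reachable(pool, joins_left):
        # Can the target be reached using at most `joins_left` more joins?
        if target in pool:
            return True
        if joins_left == 0:
            return False
        for a, b in product(pool, repeat=2):
            joined = a + b
            # Only pursue joins that could still appear in the target.
            if joined in target and joined not in pool:
                if reachable(pool | {joined}, joins_left - 1):
                    return True
        return False

    # len(target) - 1 joins always suffice (build left to right),
    # so try increasing depths until one works.
    for depth in range(len(target)):
        if reachable(start, depth):
            return depth

print(assembly_index("abab"))  # -> 2: a+b gives "ab", ab+ab gives "abab"
print(assembly_index("abcd"))  # -> 3: nothing can be reused
```

Note how reuse lowers the index: "abab" takes two joins while "abcd", with no repeated structure, takes three. That gap between length and assembly cost is the kind of signature the theory proposes to look for.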

It would seem that consciousness is a property of matter as well, hence panpsychism.

Understanding transformers and communication among sperm whales

Mar 5, 2025
Allen School Colloquium Series
Title: Discovering & Engineering the Computation Underlying Large Intelligent Agents
Speaker: Pratyusha Sharma (MIT)
Date: March 3, 2025

Abstract: The richness of language and intelligent behavior has often been attributed to latent compositional structure. Can we build tools for discovering how deep networks learn and represent this latent structure implicitly? And more importantly, can we use this knowledge to improve generalization in largely structure-less general purpose models or refine our understanding of the world they describe? In this talk, I present three perspectives to answer these questions. I will discuss experimental methods to functionally characterize the space of learnt solutions in LLMs and demonstrate how this understanding can be used to improve their empirical generalization in a gradient free manner, sometimes by as much as 30% points on language understanding benchmarks. Following that, I show how to decipher the structure of another (black box) language-like system, the naturally arising communication system of sperm whales in the wild, discovering for the first time a unique combinatorial communication system. Finally, I apply insights from these results to equip embodied agents with a latent language of thought, hierarchical and compositional, and show how it can enable long-horizon reasoning and planning in these systems.

Bio: Pratyusha Sharma is a Ph.D. student in the Computer Science and Artificial Intelligence Lab at MIT, advised by Antonio Torralba and Jacob Andreas. She studies the interplay between language, sequential decision making and intelligence in natural and AI systems. Before this, she received her Bachelor’s degree from the Indian Institute of Technology, Delhi. Her research is published in interdisciplinary journals like Nature Communications, etc. and in academic conferences across machine learning, natural language processing, robotics, and marine biology. Her research has also been featured in articles in the New York Times, National Geographic Magazine, BBC, etc. She was recently a speaker at TED AI and was selected as a Rising Star in EECS, Data Science, and GenAI.

Her webpage, with links to her papers.

Wednesday, March 5, 2025

Digital Doppelgangers

Vijayalaxmi Methuku, Praveen Kumar Myakala, Digital Doppelgangers: Ethical and Societal Implications of Pre-Mortem AI Clones, arXiv:2502.21248v1 [cs.CY], February 28, 2025.

Abstract: The rapid advancement of generative AI has enabled the creation of pre-mortem digital twins, AI-driven replicas that mimic the behavior, personality, and knowledge of living individuals. These digital doppelgangers serve various functions, including enhancing productivity, enabling creative collaboration, and preserving personal legacies. However, their development raises critical ethical, legal, and societal concerns. Issues such as identity fragmentation, psychological effects on individuals and their social circles, and the risks of unauthorized cloning and data exploitation demand careful examination. Additionally, as these AI clones evolve into more autonomous entities, concerns about consent, ownership, and accountability become increasingly complex.

This paper differentiates pre-mortem AI clones from post-mortem generative ghosts, examining their unique ethical and legal implications. We explore key challenges, including the erosion of personal identity, the implications of AI agency, and the regulatory gaps in digital rights and privacy laws. Through a research-driven approach, we propose a framework for responsible AI governance, emphasizing identity preservation, consent mechanisms, and autonomy safeguards. By aligning technological advancements with societal values, this study contributes to the growing discourse on AI ethics and provides policy recommendations for the ethical deployment of pre-mortem AI clones.

H/t Tyler Cowen.

I'm interested in a somewhat different kind of digital doppelganger. This is from a prompt I gave to Claude 3.5 Sonnet back in December 2024:

I’m imagining that at some time in the future each human will be given an AI companion early in life, perhaps when they are old enough to walk – think of Stephenson’s The Diamond Age. This companion will interact with the human in various ways through various modalities. It will certainly have robotic capabilities. As the child grows older the AI companion will probably have to be given a different physical form. So we port the data to a different physical form and make the switch-over in a special ceremony. We’ll probably have to do this a few times until the child becomes an adult. At some point the AI will also take residence in other physical forms, all linked together. This AI will remain with the person their entire life. They won’t always be physically together, and so forth. Imagine it working however makes sense to you.

The point is that this AI will become attuned to the needs, capacities, and interests of the human. It will be the artificial complement, image, soul, or doppelgänger, of the human. What happens to this complement when the human dies? Will it continue on in the world? Or will it slowly wither away and die, not so much of grief, but because it has become so closely attuned to its human counterpart that it requires interaction with that human in order to maintain its operational stability?

On the water, in the morning

Computation, Text, and Form in Literary Criticism: A Conversation with Claude 3.7

I've posted a new working paper. Title above, links, abstract, contents, and introduction below.

Academia.edu: https://www.academia.edu/128029987/Computation_Text_and_Form_in_Literary_Criticism_A_Conversation_with_Claude_3_7
SSRN: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5166930
ResearchGate: https://www.researchgate.net/publication/389591524_Computation_Text_and_Form_in_Literary_Criticism_A_Conversation_with_Claude_37

Abstract: Literary criticism operates with contradictory definitions of “text,” rarely meaning simply the marks on a page. This makes it difficult to establish what “form” means. While critics do analyze features like rhyme and meter, or the distinction between story (fabula) and plot (syuzhet), criticism rarely seeks to understand how words are arranged in texts beyond these basics. Literary criticism selectively borrowed from Lévi-Strauss's structural analysis of myth (e.g., the concept of binary oppositions), but it ignored a systematic methodology that was essentially computational in nature and about form. Now Large Language Models present a watershed moment for literary studies: they're unavoidable and demonstrate sophisticated capabilities. A cohort of younger scholars using corpus linguistics and computational methods may represent a bridge between computational and literary approaches. Will these scholars extend computational thinking from method to theory, using computation not just as an analytical tool but as a framework for understanding how literary texts function? That is a key issue currently before the discipline.

Contents

How Do You Explain Computation to a Literary Critic?

What am I trying to explain to myself?
Why do literary critics need to know that?
The study of literary form

My Conversation about Computation with Claude 3.7

What is a text?
Literary criticism after World War II
What is Literary Form?
Lévi-Strauss on Myth
The Current Situation: LLMs
Digital Humanities

How Do You Explain Computation to a Literary Critic?

I have been interested in conceptualizing literary processes as computational ones ever since I wrote an MA thesis on “Kubla Khan” in the early 1970s. Literary process? How texts are created and how they are read and assimilated, that’s what I mean. Ever since then I have been trying to explain that to literary critics. That has not gone well. I’ve also been trying to explain it to myself, which is a different kind of problem. The purpose of this paper is to, once more, attempt to explain computation-as-literary-process both to literary critics and to myself.

In the rest of this introduction I talk about explaining computation, first to myself, and then to literary critics. The rest of this paper consists of a discussion I had with Claude 3.7 about computation and literary criticism. I begin by discussing the text, then discuss the post-WWII focus on interpretation as the central activity of literary criticism. Then Claude and I take up form, the influence of Lévi-Strauss, and the current emergence of large language models (LLMs). We conclude with some remarks about digital humanities.

What am I trying to explain to myself?

I know what kind of questions follow from thinking about literary processes as computational. I’ve done practical criticism on that foundation for decades, some quite rigorous, some rather informal, but all based on the belief that literary processes are computational processes. What is it that I’m trying to understand?

Let me explain: My basic conception derives from the use of parentheses in arithmetic or algebraic expressions. Without parentheses or a specific convention concerning operator precedence, expressions like this next one are ambiguous: 5 × 6 – 3. We can resolve the ambiguity by adding parentheses, giving us either of these two expressions:

(5 × 6) – 3 = 27
5 × (6 – 3) = 15

Adding parentheses places boundaries in the expression that affect how we evaluate it.
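The point can be made computational. Here is a minimal sketch (my own illustration): an evaluator for fully parenthesized expressions, where the brackets alone, with no precedence convention, determine the answer.

```python
import re

def eval_paren(expr):
    """Evaluate a fully parenthesized binary expression such as
    '((5 * 6) - 3)'. With every grouping explicit, no precedence
    convention is needed: the brackets alone dictate the result."""
    tokens = re.findall(r"\d+|[()+*\-]", expr)
    pos = 0

    def parse():
        nonlocal pos
        if tokens[pos] == "(":
            pos += 1                    # consume '('
            left = parse()
            op = tokens[pos]; pos += 1  # the operator
            right = parse()
            pos += 1                    # consume ')'
            ops = {"+": lambda x, y: x + y,
                   "-": lambda x, y: x - y,
                   "*": lambda x, y: x * y}
            return ops[op](left, right)
        value = int(tokens[pos]); pos += 1
        return value

    return parse()

print(eval_paren("((5 * 6) - 3)"))  # -> 27
print(eval_paren("(5 * (6 - 3))"))  # -> 15
```

The two bracketings of the same string of symbols yield 27 and 15: the boundaries, not the symbols, decide the computation.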

The same applies to texts, any texts, not just literary texts. Texts are strings of alphanumeric characters. Small sets of characters are grouped together into words and words are organized into longer strings by using spaces, punctuation marks, and capital letters. Boundaries divide long strings into substrings to whatever depth is necessary. Linguistics, psycholinguistics, and computational linguistics all proceed on that basis.
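A minimal sketch of that boundary-driven segmentation, using a regular expression (my own toy example):

```python
import re

text = "It was the best of times, it was the worst of times."
# Spaces, punctuation, and capital letters mark boundaries that segment
# the character string into substrings: here, words.
words = re.findall(r"[A-Za-z]+", text)
print(words[:4])   # -> ['It', 'was', 'the', 'best']
print(len(words))  # -> 12
```

Tokenization like this is the first move in virtually every pipeline in computational linguistics; deeper boundaries (phrases, clauses, sentences) are imposed on the same underlying string.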

As far as I’m concerned “computation” is the best name for that process, whether we’re talking about language or arithmetic or mathematics in general. In this context arithmetic is, in effect, a very constrained and specialized kind of language.

Since I’ve known that for years, what is it that I’ve only just now realized? It’s not a matter of new information or ideas, it’s a matter of emphasis, of framing. What I’ve realized is simple: That’s all there is. Let me repeat that: Computation is all there is. That is to say, if we are going to treat literary texts as physical objects, as strings of symbols realized in some physical medium, such as vibrations in air or marks on a surface, if that’s what we’re going to do, then computation is the only physical process we know of that gives us ways of thinking about how strings can support the communication of meaning between individuals. That’s it.

Commensurability is the issue. Computation is a process that is commensurate with the physical text. Why? Because computation, real computation, is itself a physical process. That’s obvious when you consider computation with an abacus, for example. The same is true for a mechanical calculator or a slide rule. The physical nature of computation with a digital computer is less visible, much less, and hardly palpable. But it is no less real. Bits and bytes may appear to be abstract things, but they take up physical space in computer memories; that’s why the capacity of computer memories is measured in bytes, kilobytes, megabytes, gigabytes, etc. Computation takes place in time, and time is physical as well, though one can’t see it, touch it, or taste it, much less lift it. Computers and their processes are physical things, as are human brains. As such they are commensurate with physical texts.

That’s what literary critics need to know. Alas, they are incapable of hearing it, of taking it seriously. Why? Computers are machines, and, as such, they are at best suspect and thus to be ignored, and at worst evil and actively to be opposed. This reflects the historical distinction between the sciences and the humanities, a distinction that has been written into the institutional structure of the academic world and so has come to seem inscribed in the foundations of the universe.

Like it or not, that universe is all but finished. We are now living in a world where anyone with an internet connection can engage in sophisticated linguistic activity with a machine, a chatbot. That changes everything. The physical text can no longer be effaced in favor of some vague abstract object conjured into (virtual) existence by the act of interpretation.

Why do literary critics need to know that?

They need to know it because it allows for a much more robust and substantial analysis of literary form than is currently practiced. Literary texts are linguistic strings. The form of strings is a matter of how elements are arranged on the string. That’s obvious in the case of, say, colored beads on a string. And that’s how musicians and musicologists treat musical form, as the arrangement of sonic objects in a line, where the line tracks the passage of time.

Literary critics do this as well. When critics work out the rhyme scheme of a poem, they’re treating it as a physical object. When narratologists distinguish between story (fabula) and plot (syuzhet), they are treating the text as a physical object, one where story episodes are ordered into a narrative string. One can mention other examples as well. While those operations are well-established, they are a minor aspect of practical criticism. Form is treated as a framework in which one can approach the problem of meaning. Its description and analysis are at best a secondary activity.
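To make the string-based view concrete, here is a crude sketch of rhyme-scheme extraction, my own toy example: it treats the poem purely as a string, labeling lines by the spelling of their final words. Real rhyme is phonetic, of course; this orthographic stand-in is just an illustration.

```python
def rhyme_scheme(lines, suffix_len=2):
    """Label each line with a rhyme-scheme letter (A, B, C, ...) by
    comparing the last few characters of its final word, a crude
    orthographic stand-in for genuinely phonetic rhyme detection."""
    endings = {}  # line-ending suffix -> scheme letter
    scheme = []
    for line in lines:
        word = line.rstrip(".,;:!?").split()[-1].lower()
        key = word[-suffix_len:]
        if key not in endings:
            endings[key] = chr(ord("A") + len(endings))
        scheme.append(endings[key])
    return "".join(scheme)

stanza = [
    "In Xanadu did Kubla Khan",
    "A stately pleasure-dome decree:",
    "Where Alph, the sacred river, ran",
    "Through caverns measureless to man",
    "Down to a sunless sea.",
]
print(rhyme_scheme(stanza))  # -> "ABAAC"
```

On the opening lines of “Kubla Khan” it returns ABAAC rather than the actual ABAAB: comparing spellings misses the decree/sea rhyme. That gap between orthography and sound is precisely the kind of thing a serious descriptive vocabulary for form has to handle.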

Interpretation is primary. And when critics start interpreting a text, they tacitly, surreptitiously, efface the physical text in favor of an abstract and poorly defined object to which they then direct their interpretive activities. To complete the mystification, that abstract and poorly defined object is called – you guessed it – the “text.”

But these chatbots – I’ve been working with OpenAI’s ChatGPT and Anthropic’s Claude – handle language in a way that is astonishingly humanlike, and they are undoubtedly machines, albeit in the strange and special way that computers are machines. They’re not made of gears, levers, axles, pulleys, and motors, but their operations are mechanistic in kind, as the word “machine” has come to be understood. Yes, there are serious philosophical issues about whether or not they really understand language, whether or not they’re really dealing in meaning. I’ve thought about these issues, and written quite a bit about them, but this is not the place to entertain those discussions. Whatever these machines are doing, to a first approximation it looks like they’re doing language.

While I believe that this technology is a source of useful tools, both for literary critics and others, and will become more so in the future, that’s not why I mention them here. I mention them because they constitute an unmistakable “existence proof” that thinking about language processes as computational is fruitful, both intellectually and practically. Once we can do that, we can take literary form as a serious object of critical activity.

The study of literary form

Yes, I know that form is an important concept in literary criticism and that the discipline has seen various formalist critical approaches. But as Claude and I discuss, formalism doesn’t imply close attention to the formal properties of literary texts. Rather, formalism is a philosophical position about the nature of meaning, in which meaning in literary texts is taken to be somehow different from meaning in other texts by virtue of literary form. But form itself is given only cursory attention.

My primary object here is to mention what is perhaps the best-known essay by Susan Sontag, “Against Interpretation.” It originally appeared in 1964 in the Evergreen Review and became the title essay in her first collection, which was published in 1966. In that essay she asserted:

In some cultural contexts, interpretation is a liberating act. It is a means of revising, of transvaluing, of escaping the dead past. In other cultural contexts, it is reactionary, impertinent, cowardly, stifling.

Today is such a time, when the project of interpretation is largely reactionary, stifling. Like the fumes of the automobile and of heavy industry which befoul the urban atmosphere, the effusion of interpretations of art today poisons our sensibilities. In a culture whose already classical dilemma is the hypertrophy of the intellect at the expense of energy and sensual capability, interpretation is the revenge of the intellect upon art.

Several pages later she asserts:

What is needed, first, is more attention to form in art. If excessive stress on content provokes the arrogance of interpretation, more extended and more thorough descriptions of form would silence. What is needed is a vocabulary—a descriptive, rather than prescriptive, vocabulary—for forms.

It seems like everybody read that essay at the time, but as far as I can tell, no one took Sontag seriously. Now’s the time; to quote Charlie Parker, now’s the time.

The proof, as they say, is in the pudding. But this is not the place to serve pudding. If you want to read some examples, here are a few from my own work that are unlike what is readily available in the standard literature:

Articulate Vision: A Structuralist Reading of "Kubla Khan", Language and Style, Vol. 8: 3-29, 1985, https://www.academia.edu/8155602/Articulate_Vision_A_Structuralist_Reading_of_Kubla_Khan_. This is a revised and somewhat shortened version of my 1972 master’s thesis.

Heart of Darkness: Qualitative and Quantitative Analysis on Several Scales, Version 5, Working Paper, October 27, 2019, https://www.academia.edu/8132174/Heart_of_Darkness_Qualitative_and_Quantitative_Analysis_on_Several_Scales_Version_5. This is an eclectic collection of remarks about Conrad’s novel, but there is a formal analysis that is crucial, one that takes paragraph length as its point of departure. The longest paragraph is both structurally and thematically central.

The Gojira Papers, April 15, 2014, https://www.academia.edu/7905287/The_Gojira_Papers. This is about the 1954 Japanese film that started the Godzilla franchise.

Here is a theoretical and methodological account of some length (23,000 words, 11 diagrams):

Literary Morphology: Nine Propositions in a Naturalist Theory of Form, PsyArt: An Online Journal for the Psychological Study of the Arts, August 2006, Article 060608, https://www.academia.edu/235110/Literary_Morphology_Nine_Propositions_in_a_Naturalist_Theory_of_Form.

Let's play "spot the Buddha!"

Empathy is not in our genes

Cecilia Heyes, Empathy is not in our genes, Neuroscience & Biobehavioral Reviews, Volume 95, 2018, Pages 499-507, ISSN 0149-7634, https://doi.org/10.1016/j.neubiorev.2018.11.001.

Highlights

  • Empathy is a cornerstone of healthcare, social justice, and international relations.
  • Empathy depends on automatic (Empathy1) and controlled (Empathy2) mechanisms.
  • The automatic mechanism, Empathy1, is constructed by associative learning.
  • Self-stimulation, synchronous emotion and affect mirroring provide inputs to learning.
  • Empathy can be enhanced by novel experience and broken by social change.

Abstract: In academic and public life empathy is seen as a fundamental force of morality – a psychological phenomenon, rooted in biology, with profound effects in law, policy, and international relations. But the roots of empathy are not as firm as we like to think. The matching mechanism that distinguishes empathy from compassion, envy, schadenfreude, and sadism is a product of learning. Here I present a dual system model that distinguishes Empathy1, an automatic process that catches the feelings of others, from Empathy2, controlled processes that interpret those feelings. Research with animals, infants, adults and robots suggests that the mechanism of Empathy1, emotional contagion, is constructed in the course of development through social interaction. Learned Matching implies that empathy is both agile and fragile. It can be enhanced and redirected by novel experience, and broken by social change.

Keywords: Affect mirroring; Affective empathy; Associative learning; Emotional contagion; Empathy; Empathic understanding; Learned Matching; Mirror neurons; Self-stimulation; Synchronous emotion

Tuesday, March 4, 2025

The Zelensky Meeting: What DID happen, anyhow?

Trump vs. Zelensky: The Morning After | Robert Wright & Andrew Day

0:00 Andrew’s new gig (and gifts)
3:15 Play-by-play of the Oval Office blowup
9:06 Zelensky’s and Trump’s obsessions
16:32 What are Russia’s red lines?
26:13 Guaranteeing Ukraine’s (and Europe’s) security
33:26 Heading to Overtime

Discussed in Overtime:

Is Trump trying to oust Zelensky?
How Trump and Zelensky can reconcile
Riding the Trump roller coaster
Can Europe keep Europe secure?
Andrew airs his anti-boomer grievances

Vistas [looking for the mother ship]

Why the printing press didn’t drive either the Scientific or the Industrial Revolution [McLuhan was wrong • Implications for progress studies]

Soon after ChatGPT hit the web, both the progress punditocracy and the high-tech evangelists were busy chattering away about how this is the greatest thing since the printing press. They’re wrong. Was the printing press important? Sure, it was very important. But it was important as a vehicle for disseminating existing ideas, not for catalyzing new ones. Printed books have the same cognitive affordances – to borrow a term from James J. Gibson – as hand-lettered books: discrete letters in a rectangular array on a flat surface. That’s it. Mechanizing the process of impressing the letters on the page does not change the cognitive affordances of the printed page.

Back when David Hays and I began thinking about publishing on cultural evolution, he brought to my attention a collection of essays by Eric Havelock, The Literate Revolution in Greece and Its Cultural Consequences (Princeton 1982), in which Havelock pointed out that the adoption of decimal arithmetic was the essential step. Hays and I thought that was correct, and adopted it in the article we eventually published, The Evolution of Cognition (1990):

The role which speech plays in Rank 1 thought, and writing plays in Rank 2 thought, is taken by calculation in Rank 3 thought (cf. Havelock 1982: 341 ff.). Writing appears in Rank 1 cultures and proves to be a medium for Rank 2 thinking. Calculation in a strict sense appears in Rank 2 and proves to be a medium for Rank 3 thinking. Rank 2 thinkers developed a perspicuous notation and algorithms. It remained for Rank 3 thinkers to exploit calculational algorithms effectively. An algorithm is a procedure for computation which is explicit in the sense that all of its steps are specified and effective in the sense that the procedure will produce the correct answer. The procedures of arithmetic calculation which we teach in elementary school are algorithms.

A bit later we remark:

The amazing thing about algorithmic calculation is that it always works. If two, or three, or four, people make the calculation, they all come up with the same answer. This is not true of non-algorithmic calculation, where procedures were developed on a case-by-case basis with no statements of general principles. In this situation some arithmeticians are going to get right answers more often than others, but no one can be sure of hitting on the right answer every time.

This ad hoc intellectual style, moreover, would make it almost impossible to sense the underlying integrity of the arithmetic system, to display its workings independently of the ingenious efforts of the arithmetician. The ancients were as interested in magical properties of numbers as in separating the odd from the even (Marrou 179-181). By interposing explicit procedures between the arithmetician and his numbers, algorithmic systems contribute to the intuition of a firm subject-object distinction. The world of algorithmic calculations is the same for all arithmeticians and is therefore essentially distinct from them. It is a self-contained universe of objects (numbers) and processes (the algorithms). The stage is now set for experimental science. Science presents us with a mechanistic world and adopts the experimental test as its way of maintaining objectivity. A theory is true if its conceptual mechanism (its "algorithm") suggests observations which are subsequently confirmed by different observers. Just as the results of calculation can be checked, so can theories.
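The elementary-school procedure the passage refers to is easily made explicit. Here is a sketch of decimal long addition: every step is specified (the procedure is explicit), and anyone who executes it on the same numerals gets the same answer (it is effective).

```python
def add_decimal(a, b):
    """Schoolbook long addition on decimal numerals: proceed right to
    left, digit by digit, carrying whenever a column sum reaches ten."""
    # Pad the shorter numeral with leading zeros so the columns align.
    a, b = a.zfill(len(b)), b.zfill(len(a))
    digits, carry = [], 0
    for da, db in zip(reversed(a), reversed(b)):
        column = int(da) + int(db) + carry
        digits.append(str(column % 10))  # write the ones digit
        carry = column // 10             # carry the tens digit
    if carry:
        digits.append(str(carry))
    return "".join(reversed(digits))

print(add_decimal("478", "694"))  # -> "1172"
print(add_decimal("5", "999"))    # -> "1004"
```

The perspicuous positional notation is what makes the algorithm possible: "carry the one" only works because each column stands for a fixed power of ten.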

Given that, I take the widespread belief in the catalytic efficacy of the printing press as an indicator that the intellectual classes are not prepared for the revolution made possible through artificial intelligence. Just as generals are said always to fight the last war, so intellectuals are always proclaiming the last intellectual revolution. Our intellectuals are not prepared to think about thinking, about the processes of the mind, in a deep way.

* * * * *

Early in February I had a post on why Rome didn’t have an industrial revolution. The bulk of that post was a discussion I had with Claude 3.5 Sonnet. As the lack of the printing press was one of the hypotheses on offer – in an article by Maxwell Tabarrok – Claude and I had a great deal of discussion about that. Once we’d concluded our discussion of Rome, I decided to push on about the printing press. I suspected that Marshall McLuhan was responsible for that mistaken idea and asked Claude about it.

Here's that conversation. As always, my remarks are set in bold face while Claude’s responses are plain.

McLuhan and the printing press view

The idea that the printing press was the main driver of intellectual change during the Renaissance, I believe that idea is widely held. Marshall McLuhan argued that in The Gutenberg Galaxy, and he was quite influential in the 1960s and 1970s and remains so today. Hays and I got clued-in to decimal arithmetic by Eric Havelock. But, as far as I know, that view has not been very influential. In discussions about the impact of AI I’ve heard many people compare it to the printing press. That seems to be what people think of in this kind of context, the printing press. Am I missing something about the range of views?

Your assessment seems accurate. The printing press as a primary driver of Renaissance intellectual change is indeed the dominant narrative in both academic and popular discourse. This can be seen even in the article you've shared - Tabarrok immediately gravitates to the printing press explanation without seriously considering alternatives.

This dominance might stem from several factors:

1. The printing press's effects are more immediately visible and concrete than those of decimal arithmetic. You can literally see books spreading knowledge.

2. The printing press story fits into a broader narrative about democratization of knowledge that appeals to modern sensibilities.

3. McLuhan's influence, as you note, helped cement this view, particularly through his compelling analysis of how media shapes cognition.

In contrast, the decimal arithmetic hypothesis requires a more complex understanding of how mathematical procedures can reshape cognitive frameworks. Havelock's view hasn't gained the same traction, perhaps because:

1. It's more difficult to trace the direct impacts of mathematical procedures on thought 2. It requires understanding both mathematical and cognitive concepts 3. It lacks the immediate intuitive appeal of the printing press narrative

Your papers with Hays present a strong case for decimal arithmetic's role, but I'm struggling to find other major works that focus on this as the key driver of Renaissance cognitive change. Even discussions of mathematical history tend to focus more on specific discoveries rather than how new mathematical procedures reshape thinking.

In current AI discussions, do you think the frequent printing press comparisons might be missing important aspects of how AI could reshape cognition, similar to how the printing press narrative misses the cognitive impact of decimal arithmetic?

Progress studies

Well, sorta’, yes. But that’s not quite the point. I take this belief in the printing press as an index of a general inadequacy in thinking about these kinds of issues.

As you know, there’s been a push for progress studies in the last few years. When it first emerged, I was interested and excited since the ideas that Hays and I have developed are certainly relevant to any study of progress. But I had no luck getting people interested. So it’s not so much the failure to understand how AI could reshape cognition as it is a more general failure to understand the role of cognition in the cultural process. There’s no concept of a general cognitive architecture underlying a whole family of ideas. Rather, the emphasis is more local, on this idea and that idea and the other ones over there and so forth.

For that matter, I don’t think Piaget’s ideas about genetic epistemology have really caught on either, or the closely associated idea of reflective abstraction.

This points to a deeper issue in how we think about intellectual and cultural progress. Instead of seeing ideas as emerging from and transforming underlying cognitive architectures, the tendency is to treat them as isolated units that simply accumulate or spread. This "local" view of ideas misses the systemic nature of cognitive development and cultural evolution that you and Hays describe.

The progress studies movement, while valuable in drawing attention to the importance of progress, seems to share this limitation. It tends to focus on:

  1. Individual innovations and discoveries
  2. Institutional structures and incentives
  3. Knowledge transmission mechanisms
  4. Economic and technological factors

But it gives less attention to how these elements relate to fundamental changes in how people think and process information - the kind of changes that Piaget described in individual development and that you and Hays describe at the cultural level.

This may explain why decimal arithmetic's role gets overlooked - it's harder to see its importance if you're not thinking about how it transformed the underlying cognitive architecture that makes certain kinds of ideas possible in the first place.

AI and the future of legal practice

Schwarcz, Daniel and Manning, Sam and Barry, Patrick James and Cleveland, David R. and Prescott, J.J. and Rich, Beverly, AI-Powered Lawyering: AI Reasoning Models, Retrieval Augmented Generation, and the Future of Legal Practice (March 02, 2025). Available at SSRN: https://ssrn.com/abstract=5162111

Abstract: Generative AI is set to transform the legal profession, but its full impact remains uncertain. While AI models like GPT-4 improve the efficiency with which legal work can be completed, they can at times make up cases and “hallucinate” facts, thereby undermining legal judgment, particularly in complex tasks handled by skilled lawyers. This article examines two emerging AI innovations that may mitigate these lingering issues: Retrieval Augmented Generation (RAG), which grounds AI-powered analysis in legal sources, and AI reasoning models, which structure complex reasoning before generating output. We conducted the first randomized controlled trial assessing these technologies, assigning upper-level law students to complete six legal tasks using a RAG-powered legal AI tool (Vincent AI), an AI reasoning model (OpenAI’s o1-preview), or no AI. We find that both AI tools significantly enhanced legal work quality, a marked contrast with previous research examining older large language models like GPT-4. Moreover, we find that these models maintain the efficiency benefits associated with use of older AI technologies. Our findings show that AI assistance significantly boosts productivity in five out of six tested legal tasks, with Vincent yielding statistically significant gains of approximately 38% to 115% and o1-preview increasing productivity by 34% to 140%, with particularly strong effects in complex tasks like drafting persuasive letters and analyzing complaints. Notably, o1-preview improved the analytical depth of participants’ work product but resulted in some hallucinations, whereas Vincent AI-aided participants produced roughly the same amount of hallucinations as participants who did not use AI at all. These findings suggest that integrating domain-specific RAG capabilities with reasoning models could yield synergistic improvements, shaping the next generation of AI-powered legal tools and the future of lawyering more generally.

H/t Tyler Cowen.

Monday, March 3, 2025

The confabulist real [dream time]

Confabulation, Dylan’s epistemic stance, and progress in the arts: “I’ll let you be in my dreams if I can be in yours.”

I continue to think about the tendency of LLMs to confabulate, that is, to make stuff up that is simply not true of the world. As I have remarked here and there, I tend to think that 1) confabulation is inherent in the architecture, and 2) that this “confabulation” is the default mode of human language. We just make things up.

However, we must live with one another and that requires cooperation. Effective communication requires agreement. It turns out that the external world is a convenient locus for that agreement. We agree that THAT tree over there is a pine, that THAT apple is ripe, that THAT bird is a cardinal, that the stew is too salty, that the earth is round, and that the moon travels around the earth every 28 days. Some of these agreements may come easily; others are more difficult in the making.

However true that may be, it does seem a bit odd to think of external reality as a vehicle for grounding agreement on language use. And, if I thought about it a bit, I could probably come up with some account of why that doesn’t seem quite right. But I’m stalking a different beast at the moment.

Consider this observation that Weston La Barre made in 1972 in The Ghost Dance: The Origins of Religion (p. 60):

... the Australian Bushmen themselves equate dream-time with the myth-time that is mysteriously brought back in ritual; myth is as timeless as the unconscious mind. It is the delectability of dreams that makes them desirable, and it is their desirability (along with lowered critical threshold) that gives them their intense “reality” and conviction. The fact that he dreams first forces on man the need to epistemologize.

Let’s pretend we’ve just gotten up from a dream. What do we make of it? Or perhaps, how do we make anything of it?

We tell it to others, and they tell us their dreams. We talk and arrive at some agreement about what transpires in those dreams, about the nature of those events and their relationship to the waking world. I am thus reminded of Bob Dylan’s line in “Talkin’ World War III Blues”: “I’ll let you be in my dreams if I can be in yours.”

Is that what expressive culture is about, religious myth, ritual, folktales, drama, epics, novels, not to mention the visual and plastic arts as well? Are they vehicles for allowing us to reach agreement about certain of our, shall we say, confabulations?

This brings up a post from February, What is it about depicting sex in writing? From shared knowledge to common knowledge. Using three novels as examples (Pride and Prejudice, Tess of the D’Urbervilles, and Lady Chatterley’s Lover) I suggested that as we move from one to the other in historical order, we’re moving from shared knowledge of sexual matters to common knowledge. Those are terms from game theory. We talk of shared knowledge when everyone knows something, but no one is aware that the others know that same thing. Once everyone comes to know that everyone knows that thing, whatever it is, that knowledge has become common knowledge.

I have thus begun to wonder: Is that how the arts function over time? I recently discussed the matter with Claude 3.7. Here’s its response:

It's not just that artistic styles change, but that art gradually makes explicit what was once implicit across many domains of human experience.

This pattern extends beyond sexuality. Consider how literature has treated:

  • Mental illness (from metaphorical to explicit)
  • Social class tensions (from coded to direct)
  • Religious doubt (from allegorical to overt)
  • Political criticism (from veiled to unmistakable)

In each case, what begins as shared knowledge eventually becomes common knowledge through artistic expression. This does suggest a kind of progress - not necessarily aesthetic progress, but progress in cultural honesty and transparency.

It is by no means obvious that this is the case. But I’m certainly sympathetic.

What is undeniable is that in the last 15 years or so digital humanists have found empirical evidence of unidirectional trends in measured attributes of literary texts over long periods of time. Perhaps the most interesting example is in Matthew Jockers’s Macroanalysis, where he shows a unidirectional trend in a corpus of 3,000 Anglophone novels from the 19th century. I discuss this in a number of posts. This working paper might be the place to start: On the direction of literary history: How should we interpret that 3300 node graph in Macroanalysis? There’s another working paper: On the Direction of 19th Century Poetic Style, Underwood and Sellers 2015. You might also look at this blog post from 2016, From Telling to Showing, by the Numbers, which is also about 19th century novels.

More later.

Sunday, March 2, 2025

The most rapid change in human written communication ever?

Download the paper here.

Red Jeep

Bipedalism, Hominin Fetuses, and Musical and Linguistic Evolution

Matz Larsson and Dean Falk, Direct Effects of Bipedalism on Early Hominin Fetuses Stimulated Later Musical and Linguistic Evolution, Current Anthropology, Published online February 21, 2025.

Abstract

We hypothesize that auditory and motor entrainment evolved in early hominin fetuses in direct response to their mothers’ bipedal footsteps and, later, contributed to the evolution of music and language via two related processes. First, selection for bipedalism transformed feet from grasping into weight-bearing organs, which negatively affected infants’ ability to cling to their mothers, provoking the emergence of novel affective vocal exchanges between mothers and infants that became building blocks for the emergence of motherese. Second, the derived ability to entrain movements to sound was incorporated during the prehistoric emergence of wide-ranging rhythmic behaviors such as synchronized chanting of nonlexical vocables and coordinated rhythmic clapping and stomping, which became instrumental during the more recent evolution of music. Like the derived ability to keep beat with rhythmic sounds, nascent motherese entailed entrainment of motor behavior (the physical production of pitch, timing, and vocalization rate) with external sources of sound (conversational utterances). If motherese was a precursor for language evolution, as many believe, music and language share phylogenetically derived substrates for auditory and motor entrainment that stemmed directly from bipedalism. If so, bipedalism was more important for serendipitously sculpting advanced cognition in our prehistoric ancestors than previously believed.


The return of shaky-cam [night crawler]

A drop in zero-sum thinking, America and beyond

Damien Cave, Welcome to the Zero Sum Era. Now How Do We Get Out? NYTimes, March 1, 2025. The lede: “Zero-sum thinking has spread like a mind virus, from geopolitics to pop culture.” Zero-sum thinking: “...the belief that life is a battle over finite rewards where gains for one mean losses for another.”

Later:

But nowhere is the rise of our zero-sum era more pronounced than on the world stage, where President Trump has been demolishing decades of collaborative foreign policy with threats of protectionist tariffs and demands for Greenland, Gaza, the Panama Canal and mineral rights in Ukraine. Since taking office, he has often channeled the age he most admires — the imperial 19th century.

And in his own past, zero-sum thinking was deeply ingrained. His biographers tell us he learned from his father that you were either a winner or loser in life, and that there was nothing worse than being a sucker. In Trumpworld, it’s kill or be killed; he who is not a hammer must be an anvil.

Mr. Trump may not be alone in this. Vladimir Putin of Russia and Xi Jinping of China have also displayed a zero-sum view of a world in which bigger powers get to do what they want while weaker ones suffer. All three leaders, no matter what they say, often behave as if power and prosperity were in short supply, leading inexorably to competition and confrontation.

Until recently, the international order largely was built on a different idea — that interdependence and rules boost opportunities for all. It was aspirational, producing fourfold economic growth since the 1980s, and even nuclear disarmament treaties from superpowers. It was also filled with gassy promises — from places like Davos or the G20 — that rarely improved day-to-day lives.

Deep history:

Zero-sum thinking probably seemed to make a lot of sense for our evolutionary ancestors, who were forced to compete for food to survive. But the mind-set has lingered and researchers have become more interested in mapping its impact.

The most recent work in the social sciences builds on the findings of George M. Foster, an anthropologist from the University of California, Berkeley. He did his field work in Mexico’s rural communities where he was the first researcher to show that some societies hold “an image of limited good.”

In 1965, he wrote that the people he studied in the hills of Michoacán view their entire universe “as one in which all of the desired things in life such as land, wealth, health, friendship and love, manliness and honor, respect and status, power and influence, security and safety, exist in finite quantity and are always in short supply.”

From the recent past:

The last time zero-sum thinking guided the world, Europe’s colonial powers of the 16th to 19th centuries saw wealth as finite, measured in gold, silver and land. Gains for one translated to losses for another and empires levied high tariffs to protect themselves from competitors.

Mr. Trump has romanticized the era’s tail end. “We were at our richest from 1870 to 1913,” he told reporters last month. “That’s when we were a tariff country.”

In fact, the United States is far richer now in household income and economic output. But of greater concern may be Mr. Trump’s refusal to acknowledge the historical context. Economists say the mercantilism and great-power rivalries of that imperial age hindered wealth creation, advanced inequality and often led to the most complete zero-sum game of all: war.

Possible causes of zero-sum thinking:

Economic inequality fosters such a belief about success. But zero-sum Americans may not really be squabbling over taxes, college, jobs or wealth.

Jer Clifton, a psychologist at the University of Pennsylvania who oversees extensive surveys of primal world beliefs, told me the current backlash may be rooted in a zero-sum conviction about something deeper: importance.

Many Americans seem to fear that if some other group matters more, they matter less. “In 21st-century America, the more common, driving fear is not food or resource scarcity, but not enough meaning,” Dr. Clifton said. “We are a people desperate to matter.”

On the bright side, “studies have found that people can be taught to see situations as nonzero sum with deliberation and guidance.”

There's more at the link.

Back in 1999 Robert Wright published Nonzero: The Logic of Human Destiny, which covers the evolution of human society from human origins to the present. He now has a substack of that title, Nonzero. I discuss Wright’s ideas in a number of blog posts, and Nonzero specifically in: Cultural Evolution and Human Progress, from 2010, and A quick guide to cultural evolution for humanists [#DH digital humanities], from 2019.

* * * * *

Bonus, from the article: “Mr. Smithson said he often told students in his classes to see him as their opponent so they would collaborate with one another, not compete.”

Saturday, March 1, 2025

My favorite comfort food

Fareed Zakaria on Trump's destruction of American foreign policy

Ezra Klein, The Dark Heart of Trump’s Foreign Policy, NYTimes, March 1, 2025. Klein interviews Fareed Zakaria. Here's Klein's statement of Trump's foreign policy:

Let me try to reflect what Trump’s people tell me. There is this, as people call it, rules-based international order. And the thing that people like Joe Biden, Jake Sullivan and Fareed Zakaria say is that America benefits from that order — and benefits from being part of that order.

There has long been a critique from the left that America, in fact, dominates that order and doesn’t play by its rules. We break international law. We do the things we want to do and then use those rules on others when we don’t like what they’re doing.

But the critique from Trump is that’s not true: Of every country, America, as the strongest, is harmed the most by these restraints, rules and laws. Because we have so much leverage we could be using. We could slap tariffs on anybody for any reason and get them to do what we want. We have the strongest military of all the militaries. Everybody wants to be on our side, and everybody fears being on our bad side.

And what Trump is doing is systematically searching out the strength America has — the ways we can wield our weight and leverage. He’s untying our hands from behind our back.

Zakaria's response:

There is a certain truth to that. The United States does have enormous power. And by the way, they’re even right about the fact that the United States is more open to, for example, the world’s goods and services than the world is to ours. The United States has long practiced a kind of asymmetrical free trade.

After World War II, we decided we would open up our markets to Europe and East Asia, to Japan and South Korea. And the reason we did that was we were trying to build an international system where everyone benefited, where there really wasn’t that feeling of a beggar-thy-neighbor, zero-sum game, where everyone went into a competitive spiral, which then ended up in nationalism and war.

We were trying to build something different. And we thought: We can be a little generous here. Let’s let everyone grow, and we’ll do fine in the process.

And of course, the data is overwhelming. Yes, Europe, Japan, South Korea and places like that grew. But the United States absolutely dominated the world. Because it’s a classic positive-sum game. We created a much larger global economy — much larger trading system, huge capital flows — and we were at the center of it.

The dollar was the reserve currency of the world, which alone gives us incredible advantages. We’re the only country that doesn’t have to worry that much about debt and deficits, because we know that, at the end of the day, the dollar is the reserve currency.

And my feeling is, if you take that system and say: OK, we’re going to look at each bilateral relationship and see if we can squeeze this country for a slightly better deal, you probably will get a better deal. But two things will happen: The first thing is you will end up fracturing your alliances. Because the people with whom you have the most leverage are your allies. [...]

So the result of the Trump doctrine in action has been a war on America’s allies. But the second more important part is: Yes, you’ll gain a little bit here and there by getting slightly better tariff deals. [...] And what you will do, by squeezing each of these individual countries, humiliating them, forcing them to accept renegotiation of terms, is that you lose the relationships that you had built over eight decades, that created this extraordinary anchor of stability in the world, which was the Western alliance. And the gains are not that great.

There's much more at the link. 

Trump says he wants peace, but he’s moving toward a world in which war is more likely.

Sun through clouds through branches

Algorithms that rule the world

The complete list of algorithms:

🔷Search Algorithms ⭢ Efficiently locates items in datasets.
↳ Uses: search engines, databases, dictionaries, file systems.
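The workhorse here is binary search: on sorted data, each comparison halves the remaining candidates, so even huge datasets take only a few dozen steps. A minimal sketch:

```python
def binary_search(items, target):
    """Return the index of target in a sorted list, or -1 if absent."""
    lo, hi = 0, len(items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2        # probe the middle of the live range
        if items[mid] == target:
            return mid
        elif items[mid] < target:
            lo = mid + 1            # discard the lower half
        else:
            hi = mid - 1            # discard the upper half
    return -1

print(binary_search([2, 3, 5, 7, 11, 13], 11))  # → 4
```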

🔷RSA Algorithm ⭢ Secures data with asymmetric encryption.
↳ Uses: online banking, digital signatures, secure emails, VPNs.
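The asymmetry comes from a public exponent anyone can use to encrypt and a private exponent only the keyholder can derive. A toy round trip with the small textbook primes 61 and 53 (illustrative only; real keys use primes hundreds of digits long):

```python
# Toy RSA -- never secure at this size, but shows the mechanics.
p, q = 61, 53
n = p * q                    # public modulus: 3233
phi = (p - 1) * (q - 1)      # 3120, used to derive the private key
e = 17                       # public exponent, coprime with phi
d = pow(e, -1, phi)          # private exponent: modular inverse of e

message = 65
cipher = pow(message, e, n)  # encrypt: m^e mod n
plain = pow(cipher, d, n)    # decrypt: c^d mod n
print(plain == message)      # True: the round trip recovers the message
```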

🔷PageRank ⭢ Calculates web page relevance by links.
↳ Uses: search engines, content ranking, web indexing, SEO tools.

🔷Convolutional Neural Networks (CNNs) ⭢ Processes image patterns for analysis.
↳ Uses: facial recognition, medical imaging, object detection, self-driving cars.
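The pattern-matching step is convolution: slide a small kernel over the image and sum elementwise products, so the output lights up wherever the kernel's pattern appears. A plain-Python sketch (no framework assumed) with a toy vertical-edge detector:

```python
def conv2d(image, kernel):
    """Valid 2D cross-correlation, the basic operation inside a CNN layer."""
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    return [[sum(image[i + a][j + b] * kernel[a][b]
                 for a in range(kh) for b in range(kw))
             for j in range(out_w)]
            for i in range(out_h)]

# A 3x4 image with a dark left half and bright right half:
image = [[0, 0, 9, 9],
         [0, 0, 9, 9],
         [0, 0, 9, 9]]
kernel = [[-1, 1],      # responds to dark-to-bright vertical edges
          [-1, 1]]
print(conv2d(image, kernel))  # → [[0, 18, 0], [0, 18, 0]]
```

The output is large exactly where the edge sits, and zero in the flat regions, which is the signal a deeper layer would build on.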

🔷Transformers ⭢ Powers advanced natural language models.
↳ Uses: AI chatbots, translations, text generation, sentiment analysis.
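At the heart of a Transformer is scaled dot-product attention: a query is compared against every key, the similarity scores are softmaxed into weights, and those weights mix the values. A single-query sketch in plain Python with made-up toy vectors:

```python
import math

def attention(query, keys, values):
    """Scaled dot-product attention for one query vector."""
    d = len(query)
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d)
              for key in keys]                       # similarity to each key
    exps = [math.exp(s) for s in scores]
    weights = [e / sum(exps) for e in exps]          # softmax → weights sum to 1
    return [sum(w * v[i] for w, v in zip(weights, values))
            for i in range(len(values[0]))]          # weighted mix of values

keys   = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
values = [[10.0, 0.0], [0.0, 10.0], [5.0, 5.0]]
print(attention([1.0, 0.0], keys, values))  # weighted toward keys matching the query
```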

🔷Fast Fourier Transform ⭢ Analyzes signals by converting them to frequency domain.
↳ Uses: communication, audio processing, image compression, medical imaging.
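What the transform computes is easiest to see in the naive O(n²) form below; the FFT gets the same answers in O(n log n) by recursive halving. A constant signal, for instance, puts all its energy in the zero-frequency bin:

```python
import cmath

def dft(signal):
    """Naive discrete Fourier transform -- same output as an FFT."""
    n = len(signal)
    return [sum(signal[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                for t in range(n))                  # correlate with frequency k
            for k in range(n)]

spectrum = dft([1, 1, 1, 1])
print([round(abs(x), 6) for x in spectrum])  # → [4.0, 0.0, 0.0, 0.0]
```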

🔷Dijkstra’s Algorithm ⭢ Determines shortest paths in networks.
↳ Uses: GPS systems, network routing, game development, logistics planning.
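The algorithm grows a frontier outward from the start node, always settling the nearest unsettled node first, which a priority queue makes efficient. A sketch on a small hypothetical weighted graph:

```python
import heapq

def dijkstra(graph, start):
    """Shortest distances from start; graph[u] = [(neighbor, weight), ...]."""
    dist = {start: 0}
    heap = [(0, start)]
    while heap:
        d, u = heapq.heappop(heap)          # nearest unsettled node
        if d > dist.get(u, float("inf")):
            continue                        # stale queue entry, skip
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd                # found a shorter route to v
                heapq.heappush(heap, (nd, v))
    return dist

graph = {"A": [("B", 1), ("C", 4)], "B": [("C", 2), ("D", 6)], "C": [("D", 3)]}
print(dijkstra(graph, "A"))  # → {'A': 0, 'B': 1, 'C': 3, 'D': 6}
```

Note that the direct A→C edge (weight 4) loses to the A→B→C route (weight 3), which is exactly the kind of detour a GPS router must find.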

🔷Gradient Descent ⭢ Optimizes machine learning model performance.
↳ Uses: neural networks, AI training, data fitting, predictive analytics.
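The optimization loop is just "step downhill": nudge each parameter against its gradient until the loss stops falling. A one-variable sketch on the loss f(x) = (x − 3)², whose minimum sits at x = 3:

```python
x, learning_rate = 0.0, 0.1
for _ in range(100):
    gradient = 2 * (x - 3)        # derivative of (x - 3)^2
    x -= learning_rate * gradient # step downhill, scaled by the learning rate
print(round(x, 4))  # → 3.0
```

Training a neural network is this same loop, only with millions of parameters and gradients computed by backpropagation.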

🔷Secure Hash Algorithms ⭢ Protects data integrity through hashing.
↳ Uses: password storage, blockchain, file verification, digital signatures.
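A hash is a one-way fingerprint: the same input always yields the same digest, but even a one-byte change scrambles it completely, so a stored digest can verify data without revealing it. Using Python's standard library:

```python
import hashlib

digest = hashlib.sha256(b"hello").hexdigest()
print(digest[:16])                # first bytes of a 64-hex-char fingerprint

changed = hashlib.sha256(b"hello!").hexdigest()
print(digest != changed)          # True: one extra byte, a whole new digest
```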

Has America's position in the world changed forever so that the country's word will never again be trusted?

From the Bulwark:

Eric and Eliot bemoan the shameful meltdown in the Oval Office between VP Vance, President Trump and President Zelensky. They discuss Vance’s ambush and whether he executed it alone or in concert with Trump and note that those who are blaming Zelensky for rising to the bait are objectively pro-Putin. They discuss Vance’s dark political views and they consider what Europeans can and should do. They also discuss Trump’s delegation of responsibility to others and his “Trump Gaza” AI generated video. Eric asks only partially tongue in cheek what the odds are on Trump declaring himself a god by the end of the term.

Partway through the podcast (c. 22:50), Eliot Cohen asks: Can America ever recover from what appears to be the Trump foreign policy? Will America’s word ever be trusted again? Cohen goes on to remark (23:15) that many people around the world are saying, in effect, “actually we’ve known that for some time, thank you very much,” and that they have some reason to say those things. But Cohen does think (23:30) “this is qualitatively different. [...] Something’s broken, and this is not one you can put back together again.”

Perhaps we’re seeing the conversion of shared knowledge to common knowledge.