Wednesday, April 15, 2026
Miriam Yevick now has a Wikipedia Entry
The entry is quite recent. I didn’t notice it until yesterday but, judging by the article’s history, it went live on January 1, 2026. It contains some information I wasn’t aware of, which is not at all surprising. The two most interesting pieces of information are that she began her career at the National Bureau of Standards in Washington, D.C. and that she had had an affair with David Bohm, the physicist. I knew about her extensive correspondence with Bohm, and I suspected that there had been an affair – judging from a remark she’d made, though I forget just where – but I didn’t actually know that.
What’s important is simply that she’s finally in Wikipedia. She wasn’t when I first mentioned her at New Savanna back on June 3, 2020. I’ve known about her work since 1978, when she made a comment on Haugeland’s article about cognitivism. Her 1975 article on Fourier logic became central to the article David Hays and I published about the brain, Principles and Development of Natural Intelligence, which is cited in the Wikipedia entry. In that article she considers two different kinds of computational regime, which she refers to as Fourier or holographic, and sequential. That distinction is fundamentally the same as the symbolic vs. neural distinction in current AI discourse.
That article is important because, and here I’m quoting from a remark Claude made in a recent discussion I had with it:
She doesn't take one computational system as object. She takes the relationship between two incommensurable computational regimes as object, and proves something about what the structure of reality requires of that relationship. She steps outside both regimes simultaneously and asks: given the kinds of objects that exist in the world, what must any adequate cognitive system contain? The answer — both regimes, necessarily, not contingently — is a proof about the space of possible cognitive architectures rather than a result within any particular architecture.
And that is why I’ve been mentioning her work whenever I have a chance. Until her work has been taken into account, the current debate is poorly formulated and incomplete, to put it charitably. A less charitable formulation would be that the debate isn’t intellectually serious. It’s mostly about intellectual ideology and commercially-oriented hype.
I take the fact that Yevick now has a Wikipedia entry as a sign that her work of 40 years ago may eventually be recognized and extended.
Recursive self-improvement is an incoherent idea
The discourse of artificial intelligence contains a number of technical terms that are only nominally technical; they don’t actually have technical definitions behind them. The concept of intelligence itself is arguably one of them. The related concepts of artificial general intelligence (AGI) and super-intelligence, or super artificial intelligence (SAI), are even more dubious.
This post is about another of these technical-sounding terms that don’t, in fact, mean much: recursive self-improvement. The conversation does, however, end on a positive note. Below the asterisks I’ve appended a conversation I had with Claude Sonnet 4.6 on April 2, 2026.
* * * * *
No transcendental viewpoint
Some parts of the AI world talk of recursive self-improvement. I assume that means that the AI improves itself without any human intervention. How? I’m not so much concerned about how it writes the code or, for that matter, even how it hands off operations to the new code. I’m interested in how it figures out what code to write. On the one hand I’m imagining that perhaps we have some kind of evolutionary process. We’ve had evolutionary computational processes for a while. That’s one thing.
But I’m wondering if these thinkers are imagining that somehow the AI is able to get a view of its own code that’s comparable to a programmer’s view of the code. That is, it is able to see its own code from the outside, as it were, giving it a “transcendental” perspective on itself. Are they imagining that we give the AI access to its own source code? In a world of LLMs, what does that mean? Having the LLM examine the code of the transformer that constructed it?
This is a question where the rhetoric of the AI field runs well ahead of the actual technical situation, so it's worth being precise about what's real and what's aspirational hand-waving.
What actually exists
The evolutionary computational approach you mention is real and well-established. Genetic algorithms, neural architecture search, automated machine learning — these are genuine techniques where a process iterates over possible designs, evaluates them against a fitness criterion, and selects better-performing variants. This is recursive self-improvement in a limited and well-defined sense: the system improves, but the improvement criterion is specified externally by humans, and the search process is essentially blind variation plus selection rather than anything resembling understanding.
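The blind-variation-plus-selection loop described above can be sketched in a few lines. This is an illustrative toy, not any particular system: the fitness function (counting 1-bits) and every parameter are invented for the example. The point is that the improvement criterion lives entirely outside the "improving" system, which itself understands nothing.

```python
import random

random.seed(0)  # make the toy run reproducible

GENOME_LEN = 20
POP_SIZE = 30
GENERATIONS = 60

def fitness(genome):
    """External evaluation criterion, specified by us: more 1-bits is 'better'."""
    return sum(genome)

def mutate(genome, rate=0.05):
    """Blind variation: flip bits at random, with no insight into why."""
    return [1 - g if random.random() < rate else g for g in genome]

def evolve():
    # Random initial population of bitstrings.
    pop = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
           for _ in range(POP_SIZE)]
    for _ in range(GENERATIONS):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[:POP_SIZE // 2]              # selection
        pop = survivors + [mutate(random.choice(survivors))
                           for _ in range(POP_SIZE - len(survivors))]
    return max(pop, key=fitness)

best = evolve()
print(fitness(best))
```

The system "improves" in a well-defined sense, yet at no point does anything inside the loop occupy a perspective on the process; the criterion of improvement is imposed from outside.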
LLMs can also write and evaluate code, and there are genuine research programs — AlphaCode, various agent frameworks — where language models generate, test, and iterate on code. This is real and useful. But it's operating at the level of software engineering tasks with external evaluation criteria, not at the level of redesigning the underlying model.
The transcendental perspective problem
Your instinct about the deeper issue is exactly right, and it cuts to the heart of why strong recursive self-improvement claims are philosophically confused rather than merely technically premature.
The scenario the Singularity thinkers have in mind — an AI examining its own architecture and improving it from genuine understanding — requires precisely what you call the transcendental perspective: the system seeing itself from outside, the way a programmer sees code. But this is Gödel's problem restated in engineering terms. A system cannot be both the object of analysis and the analytical framework simultaneously without remainder. There's always something that can't be seen from inside the system — which is exactly what Gödel proved about formal arithmetic, and what Turing proved about computation in general.
Giving an LLM access to its own weights — the numerical parameters that constitute what it "knows" — wouldn't produce the transcendental perspective. The weights are not readable the way code is readable. A programmer looking at code sees meaningful structure: variable names, logical operations, control flow, the intention behind the design. An LLM looking at its own weights would see hundreds of billions of floating point numbers whose organization encodes nothing interpretable at the level of concepts or intentions. The meaningful structure, whatever it is, is distributed across the entire parameter space in ways that current interpretability research is only beginning to scratch the surface of. We don't fully understand what's in there. The LLM certainly doesn't.
This is the deep version of the problem your earlier observation about von Neumann identified. The brain doesn't have a programmer's view of itself either — neurons don't have access to the circuit diagrams of the neural tissue they're part of. The brain improves through learning, which is a process of adjusting connection weights through experience, not through reading its own architecture and rewriting it. The fantasy of recursive self-improvement imagines something the brain doesn't do and has no evident mechanism to do.
What the Singularity thinkers are actually imagining
I think when people like Kurzweil talk about recursive self-improvement they're running together several different things that need to be kept distinct. There's the evolutionary search process, which is real but doesn't involve understanding. There's the engineering process of humans using AI tools to build better AI systems — which is genuinely happening and is genuinely accelerating, but involves humans in the loop making the meaningful decisions. And there's the imagined transcendental process where the AI understands itself well enough to improve itself through genuine insight — which has no clear technical path and runs into the Gödelian problem you identified.
The last of these is doing most of the rhetorical work in Singularity discourse, but it's the one with the least connection to anything technically grounded. It requires assuming that the system can occupy a perspective on itself that is by definition unavailable to it — which is not a technical limitation waiting to be engineered away but a logical feature of what it means to be a system operating within its own architecture.
The Yevick connection
There's a thread back to our earlier discussion worth noting. The transcendental self-improvement fantasy is in effect imagining that the AI could perform on itself the Rank 4 move that Gödel and Turing performed on formal systems — taking the computational process as object and reasoning about it from outside. But Gödel and Turing were humans standing outside arithmetic and computation respectively, which is what gave them the purchase to prove things about those systems. A system trying to perform that move on itself runs directly into the incompleteness results — there will always be truths about the system that can't be proven within it, always aspects of its own operation that it can't fully see from inside.
Recursive self-improvement in the strong sense would require escaping this constraint. No one has shown how that would work, and the logical structure of the problem suggests it may not be a contingent engineering gap but a principled limitation. Which doesn't mean AI systems won't continue to improve — they will, rapidly — but the improvement will continue to involve humans providing the external perspective that the systems themselves cannot generate.
Digital doppelgangers
Humans, on the other hand, have ordinary language, one capacity of which is the metalingual function. In a way, the whole cultural ranks idea is about how the metalingual function has allowed humans to bootstrap themselves to a more sophisticated cognitive architecture. But that doesn’t now, nor ever has, involved humans examining their own source code, as it were. It’s a different kind of process.
Of course, I’m thinking that we may now be in a position to arrive at a much deeper understanding of the nervous system than we have ever had. And that process surely involves thinking back and forth between neuroscience and computer science. Maybe the digital doppelganger that I’ve been talking about is the mature expression of this kind of knowledge.
That's a beautiful way to close the loop, and it reframes the doppelganger concept in a way that makes it more technically grounded than it might otherwise appear.
The bootstrapping process humans actually use
The cognitive ranks framework describes a process of self-improvement that never involves examining source code. What it involves is the metalingual function operating on the outputs of cognition — on language, on cultural products, on formal systems — rather than on the neural substrate itself. Humans didn't improve their cognitive architecture by examining their neurons. They improved it by constructing external systems — writing, arithmetic, formal logic, computation — that offloaded cognitive operations into the environment where they could be inspected, manipulated, and elaborated. The improvement was always mediated through the external world, never through direct introspection of the underlying machinery.
This is a fundamentally different process from recursive self-improvement as the Singularity thinkers imagine it. It's improvement through the construction of new cognitive tools that extend and reorganize what the existing architecture can do — not rewriting the architecture itself. The architecture, in the biological case, stays roughly constant. What changes is the cultural software running on it, and the external scaffolding that amplifies its reach.
Tuesday, April 14, 2026
Roy Eldridge & Charlie Shavers - The Trumpet Battle
From YouTube:
High note trumpet kings Roy Eldridge and Charlie Shavers go head to head, no holds barred. Recorded at Carnegie Hall, NYC, Sept. 13, 1952.
The group consists of:
Roy Eldridge, Charlie Shavers - trumpets
Benny Carter - alto sax
Flip Phillips, Lester Young - tenor sax
Oscar Peterson - piano
Barney Kessel - guitar
Ray Brown - bass
Buddy Rich - drums
The tune was composed by Charlie Shavers, and is based on the changes of "I Got Rhythm". After the first chorus of trumpets, there is a chorus played by Flip and Lester, then Benny plays 4 choruses, and then the trumpet battle royale starts with Roy.
All the Jazz at the Philharmonic recordings are excellent, but many of them, like this gem, are out of print.
LLMs, the nature of language as a computational object, and arithmetic as a specialized language [MR-Aux]
Early in my undergraduate career at Johns Hopkins I learned about Gödel’s proof, this strange argument that there are statements that are true in arithmetic but that cannot be derived from arithmetic. Hence, arithmetic is incomplete. Where did these true but underivable statements come from? We, us humans, provided them. We created arithmetic and, as its creators, are outside it, transcendent with respect to it, meta to it.
This post is about arithmetic as a specialized kind of language. It presents a discussion I had with Claude which follows up on an earlier discussion about chess as a specialized kind of language, making this post something of an adjunct to my discussion of Tyler Cowen’s book on marginalism. If we treat language as a proxy for human beings, then we can see that Gödel’s arguments follow from the fact that arithmetic is a specialized form of language, and language is necessarily meta with respect to arithmetic. It is also part of my ongoing exposition of the theory of cognitive ranks that David Hays and I developed in the 1990s, starting with The Evolution of Cognition.
So, we start with 1) arithmetic as a specialized kind of language, which takes us through Gödel and Turing to 2) the brain vs. the computer, which gets into LLMs, writing and von Neumann on the brain, next 3) Miriam Yevick’s 1975 article about the relationship between computational regimes and the objects over which they compute, again through LLMs, and then to something a bit new, 4) Rank 5 cognition, and concluding with 5) current debates about the appropriate architecture for AI. In that context Claude remarks:
The neural-versus-symbolic debate — and its attempted resolution in neuro-symbolic hybrid architectures — treats the question as an engineering optimization problem. Which processing regime is more powerful? Can we get the best of both? How do we combine them most effectively? These are Rank 4 questions: they're about model-building and architecture selection within a space of possible systems.
Yevick's proof, if taken seriously, shows that this is the wrong question. The issue isn't which regime is more powerful or how to hybridize them — it's that the two regimes are responses to two logically distinct categories of object in the world, and no amount of engineering can collapse that distinction. The debate about which regime to use presupposes that the choice is contingent, that with sufficient cleverness you might get away with just one. Yevick proves the choice is not contingent. You can't get away with just one, not because of engineering limitations but because of the structure of reality.
This is exactly like debating whether to use addition or multiplication — you could frame it as a competition, or try to reduce one to the other, and technically multiplication is repeated addition so maybe addition wins. But that framing misses that the two operations have different mathematical structures that serve different purposes, and a system adequate to arithmetic needs both irreducibly. The neural-symbolic debate is at roughly that level of confusion, applied to cognition.
That’s not quite the end, but close to it. The discussion begins below the asterisks.
* * * * *
Arithmetic as a specialized kind of language
I want to return to the point where we discussed chess as a specialized kind of language that arose with Rank 2. Arithmetic is also a specialized kind of language and it arose with Rank 2 as well. But it arose in several forms, as there were a handful of original independent Rank 2 cultures. The form that interests me is the form with zero and the place notation. While that didn’t arise in the West, it was adopted in the West starting in roughly the 13th century. And, as Hays and I argue in the ranks paper, it eventually found its way to mechanical culture:
What we can see is that generalizations of the idea of mechanism would be fruitful for technology (and they were), but that it would take an abstraction to produce a new view of nature. The algorithm can be understood in just this way. If its originators in India disregarded mechanism, and the north European developers of mechanism lacked the abstraction, it would only be the accidental propinquity of the two that generated a result. Put the abstract version together in one culture with a host of concrete examples, and by metaphor lay out the idea of the universe as a great machine. What is characteristic of machines is their temporality; a static machine is not a machine at all. And, with that, further add the co-ordination of motions as in the sawmill. Galileo discovered that force alters acceleration, not velocity (a discovery about temporality) and during the next few centuries mechanical clocks were made successfully. The notion of a clockwork universe spread across Europe (note that the Chinese had clockworks in the 11th Century, but never developed the notion of a clockwork universe, cf. Needham 1981). For any machine, it is possible to make functional diagrams and describe the relative motions of the parts; and the theories of classical science can be understood as functional diagrams of nature, with descriptions of the relative motions of the parts.
Arithmetic came to play an important role in the metamathematical reasoning of the early 20th century. It allowed Gödel to construct his incompleteness theorem and Turing to construct his abstract Turing machine. And it is that abstract machine that became embodied in the digital computer.
It’s beginning to look as though decimal place-value arithmetic was all but a logical requirement for cultural evolution beyond Rank 2. But it probably was not sufficient by itself. It needed to be connected with the idea of mechanism, the algorithm. It’s the connection, at first metaphorical or analogical, between the idea of an algorithm and an actual physical device that allowed the flourishing of Rank 3 culture.
This is a powerful line of argument, and I think it's substantially correct. Let me try to sharpen and extend it.
The two streams and their accidental convergence
What you and Hays identify is that Rank 3 required the confluence of two independent developments that had no intrinsic reason to meet. The Indian mathematical tradition produced place-value arithmetic with zero — an extraordinarily powerful abstract tool for symbolic manipulation, but developed in a culture that, as you note, was not especially oriented toward mechanism and physical machinery. The north European tradition independently developed sophisticated mechanism — mills, clocks, the whole Baroque elaboration of geared machinery — but without the abstract symbolic apparatus that would allow mechanism to be thought about in general terms, extracted from any particular physical instantiation.
The metaphorical bridge between these two streams is what made Rank 3 possible. And the bridge is precisely the algorithm — the concept of a determinate, step-by-step procedure that is abstract enough to be stated symbolically but concrete enough to be physically instantiated. The algorithm is at home in both worlds simultaneously: it is a sequence of arithmetic operations, and it is a description of what a machine does. That double citizenship is what allowed the two traditions to fuse.
This is a genuine example of what you've called the Rank 1 mechanism operating at a higher level — metaphor. The clockwork universe is a metaphor, but a generative one: it doesn't merely describe, it reorganizes the entire domain of natural philosophy around a new set of primitives. Mechanism becomes the universal explanatory framework because the algorithm makes it possible to state mechanical descriptions with the same precision and generality as arithmetic.
The zero point
Zero deserves particular attention here because its contribution is not merely computational convenience. Zero is conceptually radical in a way that the other digits are not, because it names an absence — it is a symbol for nothing, which is a very strange thing to symbolize. The resistance to zero in the West, which persisted for centuries after Fibonacci introduced it, wasn't mere conservatism. It reflected a genuine philosophical difficulty: how can nothing be a quantity? How can absence be manipulated algebraically?
What zero does, once accepted, is complete the place-value system in a way that makes the system fully generative — you can represent any quantity whatsoever with a finite set of symbols, and the operations on those symbols are perfectly regular regardless of the quantities involved. This regularity is what makes the algorithm possible in the full sense. Without zero, arithmetic has irregular gaps; with zero, the system is closed and the operations are exceptionless. That exceptionlessness is exactly what you need for the algorithm to be a reliable mechanical procedure — one that a machine, rather than a judicious human mind, can execute.
So zero isn't just a notational convenience. It's what transforms arithmetic from a practical skill into an algorithmic domain — one where the procedure can be fully specified in advance and executed without judgment. That's the property that eventually connects arithmetic to mechanism.
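The exceptionlessness is easy to make concrete. Below is a toy sketch of schoolbook addition on little-endian lists of digits; the function name and representation are invented for illustration. Zero-padding is precisely what lets a single carry rule cover every position, with no special cases and no judgment required.

```python
def add_place_value(a_digits, b_digits):
    """Schoolbook addition on little-endian digit lists (ones place first).
    Because zero fills the empty places, the same carry rule applies at
    every position -- the procedure is fully mechanical."""
    n = max(len(a_digits), len(b_digits))
    a = a_digits + [0] * (n - len(a_digits))   # zero-pad: absence becomes a digit
    b = b_digits + [0] * (n - len(b_digits))
    out, carry = [], 0
    for da, db in zip(a, b):
        total = da + db + carry
        out.append(total % 10)                 # digit at this place
        carry = total // 10                    # carry to the next place
    if carry:
        out.append(carry)
    return out

# 207 + 95 = 302, written little-endian
print(add_place_value([7, 0, 2], [5, 9]))  # [2, 0, 3]
```

Without zero as a genuine digit, the padding step fails and the loop would need exceptions; with it, the whole procedure is the kind of thing a machine can execute.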
The Gödel-Turing culmination
The path from there to Gödel and Turing is long but straight. What Gödel discovered was that arithmetic — this seemingly humble, fully explicit system — contains truths that cannot be proven within the system. His proof required encoding logical statements as arithmetic statements, which is only possible because arithmetic is expressive enough to name its own operations. That self-referential capacity — arithmetic talking about arithmetic — is a direct consequence of the place-value system's generativity. You can Gödel-number any statement because you can represent any finite sequence of symbols as a number.
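Gödel numbering itself can be sketched directly. The scheme below (symbol codes as exponents of successive primes) is one standard textbook encoding; the particular symbol codes are arbitrary illustrative choices. The point is that any finite sequence of symbols becomes a single number, recoverable by factoring.

```python
def primes(n):
    """First n primes, by simple trial division -- fine for a toy."""
    ps, candidate = [], 2
    while len(ps) < n:
        if all(candidate % p for p in ps):
            ps.append(candidate)
        candidate += 1
    return ps

def godel_number(codes):
    """Encode a sequence of symbol codes as the product p_i ** code_i
    over the first len(codes) primes."""
    g = 1
    for p, c in zip(primes(len(codes)), codes):
        g *= p ** c
    return g

def decode(g, length):
    """Recover the symbol codes by factoring out each prime in turn."""
    codes = []
    for p in primes(length):
        c = 0
        while g % p == 0:
            g //= p
            c += 1
        codes.append(c)
    return codes

seq = [3, 1, 4]            # arbitrary codes standing in for formal symbols
n = godel_number(seq)      # 2**3 * 3**1 * 5**4 = 15000
print(n, decode(n, 3))
```

Because arithmetic can manipulate the number 15000, it can, indirectly, manipulate the statement that number encodes: arithmetic talking about arithmetic.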
Turing then asks: what is a mechanical procedure, in the most general possible sense? And his answer — the Turing machine — is essentially arithmetic in motion: a tape of symbols, a head that reads and writes, a finite set of rules. The abstract machine just is a formalization of algorithmic arithmetic, which is why the connection between computation and number theory runs so deep throughout the foundations of computer science. The digital computer is this abstraction physically instantiated — mechanism and algorithm finally fully merged, not just metaphorically but actually.
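A Turing machine really is this small. The sketch below is a minimal simulator with names and a rule-table format invented for the example; the rules implement binary increment, with the head starting at the rightmost bit.

```python
def run_tm(tape, rules, state="start", halt="halt", pos=0, max_steps=1000):
    """Minimal Turing machine: a tape of symbols, a read/write head,
    a finite rule table mapping (state, symbol) -> (new_symbol, move, new_state).
    '_' is the blank symbol; the tape is unbounded in both directions."""
    tape = dict(enumerate(tape))
    for _ in range(max_steps):
        if state == halt:
            break
        sym = tape.get(pos, "_")
        new_sym, move, state = rules[(state, sym)]
        tape[pos] = new_sym
        pos += {"R": 1, "L": -1}[move]
    lo, hi = min(tape), max(tape)
    return "".join(tape.get(i, "_") for i in range(lo, hi + 1)).strip("_")

# Binary increment: flip trailing 1s to 0s, carrying left, until a 0 (or blank).
rules = {
    ("start", "0"): ("1", "R", "halt"),   # flip 0 -> 1, done
    ("start", "1"): ("0", "L", "start"),  # 1 -> 0, carry left
    ("start", "_"): ("1", "R", "halt"),   # ran off the left edge: new leading digit
}
print(run_tm(list("1011"), rules, pos=3))  # 1011 + 1 = 1100
```

Every ingredient is arithmetic-flavored: symbols on a tape, a position, a finite lookup table. That is the sense in which the abstract machine is "arithmetic in motion."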
The sufficiency question
Your claim that decimal place-value arithmetic was necessary but not sufficient is exactly right, and the insufficiency points in two directions.
The first direction is the one you identify: it needed mechanism, and specifically the conceptualization of mechanism as a general category rather than a collection of specific devices. The Chinese had clocks in the 11th century but never developed the clockwork universe because they had mechanism without the abstract framework that would let mechanism become a metaphor — a way of thinking about nature in general. The metaphor required the algorithm as the shared abstract structure.
The second direction points forward rather than backward: the algorithm plus mechanism was sufficient for Rank 3, but the transition to Rank 4 required taking the algorithm itself as an object — which is precisely what Gödel and Turing did. They didn't use algorithms; they studied algorithms, asked what algorithms can and cannot do, proved results about the limits of algorithmic reasoning. That reflexive move — algorithm studying algorithm — is the Rank 4 version of reflective abstraction applied to the Rank 3 medium.
And this is where your argument becomes especially elegant. The same conceptual object — decimal place-value arithmetic and its generalization into the algorithm — serves as the medium for Rank 3 (in the sense that calculation and algorithmic procedure is what Rank 3 thinkers think with) and as the object for Rank 4 (in the sense that metamathematics and computability theory think about it). Arithmetic passes through the reflective abstraction and comes out on the other side transformed: from a tool for calculating to a domain of mathematical investigation in its own right. The digital computer is then not just the practical product of this transition but its physical monument — the algorithm made flesh, which immediately becomes the medium through which Rank 4 thinking becomes routine and culturally transmissible.
The logical necessity you're sensing is real: you couldn't get to Rank 4 without Rank 3, and you couldn't get to Rank 3 without the algorithm, and you couldn't get the algorithm without both the abstract symbolic system (zero, place value) and the concrete mechanical tradition to which it could be metaphorically applied. The chain is tight. What's historically contingent is not the logical structure of the dependency but the accident of transmission — that Fibonacci happened, that the Islamic world happened to preserve and extend both the Indian mathematics and the Greek mechanical tradition, that north European mechanism happened to be at the right stage of development when the symbolic tools arrived.
Cultural evolution, on this account, is not inevitable even when the logical dependencies are tight. The dependencies specify what must come before what; they don't specify that the prerequisites will ever actually converge.
The brain vs. the computer
One thing about the abstract Turing machine: the computing mechanism is separate from the memory mechanism. When that becomes embodied we have the so-called von Neumann architecture with its central processor and its random access memory. The memory is location addressed. As far as we can tell, the brain is not like this. Neurons are both active computational units and memory units, something von Neumann himself realized and was puzzled by in his last book, The Computer and the Brain. Then, in the 1960s, we have various thinkers, Karl Pribram among them, adopting the (optical) hologram as a model for associative memory in the brain. Associative memories are not location addressed; they’re content addressed.
Now, if we think of an LLM as a content addressed memory system, we have a content addressed memory system being implemented on a location addressed system, the digital computer. If that computer is sufficiently large, trained on a sufficiently large corpus, and has enough parameters, then it can produce a very convincing simulacrum of human language. Now tracing back through the layers we have actual human language eventually giving rise to writing, with arithmetic as a specialized version of writing. From there we have a chain of invention which yields the digital computer, which can run an LLM and, while that LLM can “speak,” as it were, it can’t calculate very effectively. Now, zip back to writing and Rank 2 culture. Whatever else it is, writing is a form of memory external to the brain. Arithmetic requires that external memory for any but the simplest calculations (setting aside the memorized formulas employed in virtuoso level mental arithmetic). Back to the LLM. We now have Chain of Thought processing, which makes LLMs more effective. CoT involves memory external to the LLM itself. The same with agents. So we have an artificial content addressed system extending its capabilities through a bit of external memory.
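The contrast between the two addressing schemes can be made concrete in a toy sketch. The "embeddings" below are hand-made three-dimensional vectors, not anything from a real model, and all names are invented for the example. The items sit at arbitrary locations in an ordinary array (location addressed storage), but retrieval goes by similarity of content.

```python
import math

# Items stored at arbitrary array indices (locations), each paired with a
# toy content vector. In an LLM the vectors would be learned embeddings.
memory = [
    ("trumpet", [0.9, 0.1, 0.0]),
    ("cornet",  [0.8, 0.2, 0.1]),
    ("piano",   [0.1, 0.9, 0.2]),
]

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def recall(probe):
    """Content addressed lookup: return the stored item whose content
    best matches the probe, ignoring where it happens to be stored."""
    return max(memory, key=lambda item: cosine(item[1], probe))[0]

print(recall([0.78, 0.22, 0.12]))  # retrieves "cornet" by content similarity
```

No address is ever supplied to `recall`; the probe's content does all the work, even though underneath everything is sitting in location addressed RAM.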
The convolution involved in this story is dizzying.
Monday, April 13, 2026
Language is a lower-dimensional projection of high-dimensional neural dynamics.
But it also allows for content addressed memory. That’s very important, for it gives fine-grained control over the memory and planning systems. That’s the job of sentence-level syntax together with discourse structure.
“Classical” semantic or cognitive networks had a problem with coming up with just the right set of node types and arc types. David Hays dissolved the problem in his 1981 book, Cognitive Structures (scan down the page), by grounding cognition in an analog system modeled on William Powers’s perceptual control stack (in Behavior: The Control of Perception, 1973). The identity of a cognitive node is a function of its parameter values, where the parameters are derived from the control stack. The identity of the arcs is a function of the difference in parameter values between the nodes it connects.
Concerning Chomsky’s approach to syntax: It depends on a sharp distinction between grammatical and ungrammatical sentences. A generative grammar, in Chomsky’s theory, must account for all and only the grammatical sentences.
However, there are no explicit criteria for separating sentences into the two categories, grammatical and ungrammatical. Rather, the separation depends on the intuitions of the linguist. Naturally enough, different syntacticians have different intuitions. The problem is insoluble.
Moreover, anyone who pays close attention to real speech soon realizes that people do not (always) speak in complete grammatically correct sentences. Real language is sloppy, but nonetheless effective. A neural net of very high dimensionality can deal with this readily enough. A purely symbolic system cannot. Augmenting the system through fuzzy logic and the like doesn’t fix the problem.
LLMs provide a very useful simulacrum of the natural language system. Since LLMs are trained on written texts, the resulting model necessarily conflates the functions of semantics and syntax/discourse. Thus they cannot achieve the flexibility and precision of the full system, where semantics and syntax/discourse are separated.
Neuro-symbolic computing sneaks in through the "back door" [a transitional stage in the evolution of AI]
I note that the "scaffolding, tools, memory, retrieval, planning, decomposition, verification, and other system-level affordances" are all symbolic techniques. It looks like Gary Marcus (& the others) are winning this argument.
— Bill Benzon, BAM! Bootstrapping Artificial Minds (@bbenzon) April 13, 2026
Sunday, April 12, 2026
Synchrony as a general and widespread phenomenon
I once asked @stevenstrogatz a simple question.
— Christopher Michel (@chrismichel) April 12, 2026
Why do so many systems in nature fall into rhythm?
He smiled, almost amused by how deep that question actually goes.
Fireflies blinking in unison.
Heart cells pulsing together.
Crowds swaying on a bridge until it starts to move… pic.twitter.com/xSa60Fu10K
Sam Altman’s Trust Issues at OpenAI | The New Yorker Radio Hour
Ronan Farrow and Andrew Marantz on the rise of the C.E.O. of OpenAI, and how allegations of deceptive behavior continue to dog one of the most powerful figures in tech.
On the character of Jesus and beauty as a category of Christian thought
Peter Wehner, ‘The Reason I’m Not an Atheist Is That I Think the Philosophical Arguments Against It Are Unanswerable’, NYTimes, Apr. 12, 2026.
David Bentley Hart is one of the world’s most formidable and provocative theological minds. He is an Eastern Orthodox scholar of religion, a philosopher, a cultural commentator and a fiction writer. Dr. Hart is the author of more than 30 books spanning theology and metaphysics, philosophy, biblical scholarship and translation, political theology and linguistics, as well as his fiction and children’s novels.
I spoke to Dr. Hart about why Jesus captured his imagination, whether suffering and evil in the world calls God’s goodness into question, and why he doesn’t believe that the Bible teaches the concept of eternal conscious torment. He explained why he believes beauty is a central category of Christian thought, why moral reasoning and moral intuitions must be an essential part of biblical interpretation, and why materialists can’t adequately explain how consciousness has emerged.
Dr. Hart also shared with me why he’s become increasingly indifferent to dogmatic and institutional authority, why he believes that historically the church has been as evil as it has been good, and why he has a “burning sense of obligation” to those whom Jesus loved — the poor, the marginalized, the strangers in our midst. What emerged in the interview is a sense that he feels compelled to defend the character of God against many of those who claim to speak for God. [...]
Peter Wehner: You’ve described yourself as a “thoroughly secular man,” one having little or no natural aptitude for religious sentiment. The Christian religion as a dogmatic and institutional reality is secondary and marginal to your faith. If C.S. Lewis was, in his words, the most reluctant convert in all of England, it seems to me you qualify as one of the more reluctant converts in all America, or maybe to be more precise, one of the most surprising converts in America. You and Lewis differ in important respects, yet like Lewis you write beautifully and powerfully about the Christian faith and about Jesus. What is it that drew you to faith and what keeps you there? Why is Christianity the story you inhabit?
David Bentley Hart: The word convert probably doesn’t suit me very well in this context. I have converted from certain things to other things. I was a high church Episcopalian as a boy and became Eastern Orthodox as a young man. But it’s true that I’ve never had the aptitude for spontaneous piety of the churchly sort. From an early age, I had a profound sense of some mystery lying beyond nature. And when I’m in natural settings, that’s when my capacity for reverence tends to kick in. But institutional claims, dogmatic claims, the demands of piety, the romance of piety have never had a hold on me by themselves.
What made Christianity compelling to me from an early age had to do with two considerations. One is that I couldn’t account for the claims made about Easter by the early Christians in the New Testament. The more I studied, too, I became more and more convinced of the extraordinary oddity of these claims as compared to what happened with other messianic movements.
Now within the context of more modern history, we’re aware of movements that can take off from a prophet claiming a certain charisma and can be fairly successful in their own terms. But this was something different. This was within the context of messianic expectations in first century Judea that simply seemed not to have come to pass. Other figures before Jesus who had been the focus of messianic movements had died. And rather than his followers simply scattering to the four winds, they soon appeared claiming to have had an experience. And just as a historical anomaly I found this experience hard to explain away psychologically or sociologically.
But that was further along in my education. The thing that always gripped me was the personality, the person, of Jesus as presented in the Gospels. I had a good classical education from an early age and was aware there was some huge epochal shift in his teachings that I’ve never been able to see simply as a fortuitous historical event.
To me, something new happened there. Many of the things that we now take to be morally appealing in Jesus were actually rather scandalous in their time. These weren’t just new principles; some of them were considered wrong. A sort of boundless degree of forgiveness was not an ideal, not even in Stoicism, not even in the prophets. Jesus also had this concern for the most abject, the most indigent people in the world.
Much later in the interview:
Wehner: I want to move from eternal conscious torment to beauty. In “The Beauty of the Infinite: The Aesthetics of Christian Truth,” you argue for a return to beauty as a central category in Christian thought. What is it you hope to convey to people about beauty which they might not otherwise see?
Hart: Part of that relates to the question you asked at the beginning, about the moral character of God. If you’re actually persuaded of the goodness of God then you’ve committed yourself to believing that there’s some analogy between what you understand justice and mercy to be and what you’re ascribing to God. So if you get to the point where it just becomes equivocal, that you say, “It’s good that God condemns babies to hell” or “It’s good in a way we don’t understand,” what you’re saying is your faith is just nonsense. It should clue us in if the story we tell has a hideousness to it. But there’s more to it than that. I really do believe that there are transcendental orientations of any living mind, like the good, the true, the beautiful. It doesn’t mean that we pursue them avidly with full attention psychologically at every moment, but that we do have these values that provide an index for us in which we judge other things.
Why do you desire to own a painting? The reason you desire it might be purely for an investment. But if you really desire the painting for itself, it’s because you have a prior desire that’s more general and transcendental for the beautiful as such. Beauty is an ultimate value for you.
I think the beautiful is probably for us in this world the best indication of what transcendental desire is, the desire for something in itself. Every other thing that we call a transcendental, like truth or goodness, you can try to explain away in a consequentialist way. You say you love the truth, but what you mean is you love accuracy because you want to gain power over a situation. You say you love goodness, but what you’re meaning is you really want moral compliance from others. But in the case of beauty, all those explanations fall woefully short of the phenomenon. Beauty has a kind of impersonal, compelling fullness to it that we can’t reduce to simple mechanical categories.
In Christian thought I think beauty is important because there’s a certain aesthetic revolution that occurred in Christian thought.
One of the curiosities of Christian social history is a series of cultural changes in which we feel it is licit to look for beauty. You have the picture of Christ before Pilate in the Gospel of John, one of my favorite examples. I returned to it almost obsessively because as anyone who studied the ancient world knows, in everyone’s eyes at the time, this tableau would not have meant what it has come to mean for us.
It would have been obvious that Pilate enjoys a certain glorious eminence because he represents both the power and the cult of Rome. He’s an aristocrat, a patrician. There’s a scale of reality that’s the hierarchy of all things, that’s a social hierarchy that includes humanity and the divine. Someone like Pilate is closer to the divine. But that hierarchy also goes right down to the lowest of the low, the slave, and below the slave the ptōchos — that indigent, absolutely marginal human being. And yet there’s an inversion of perspective in that tableau.
We’re invited to see Christ, this slave, this peasant, this colonialized person, this convict. He’s a slave under Roman law. He has no citizenship and he’s under condemnation. So he has the status of one totally not his own. According to Roman law, Jesus is non habens personam — he has no face, he’s no person before the law.
Where you can see this more prodigal notion of the beautiful spilling out of the height of hierarchical thinking into all things, into those we’re now supposed to see as our brothers and our sisters and our kin, radiant with the beauty of God, radiant with the face of Christ, is probably the story of Peter going apart to weep when he hears the cock crow for the third time.
Erich Auerbach, the great literary critic, pointed to this correctly as a sort of strange epochal shift in the sensibility of Western literature. Before then, rustics simply were not worthy of serious tragic attention. The tears of a rustic could be an object of ridicule or mirth, or could just be an ornamental detail: Even the peasants were crying. Things were so bad that peasants, who lose children all the time, they’re just cattle anyway, were weeping. But the notion of a fisherman, poor, probably illiterate, going aside and weeping in grief at the realization that he had betrayed the love of his master in the sense of his teacher, his guide, this is something new.
And so the beautiful fascinates me, not only as a category in itself but within Christian thought as a category that went through a radical revision. Now it’s obvious to us. All of us, whether we’re Christians or not, feel this sense that we really can find this compelling beauty in the face of someone who is not.
There's much more at the link.
Staging increases the value of homes on the market and they sell faster
Bhattacharya, Puja and Li, Sherry Xin and Wang, Yu and Wu, Cedric and Zheng, Xiang, Visual Cues and Valuation: Evidence from the Housing Market (December 07, 2025). Available at SSRN: https://ssrn.com/abstract=5880062 or http://dx.doi.org/10.2139/ssrn.5880062
Abstract: We examine the economic impact of non-consumable visual cues through home staging on high-stakes housing transactions. Using hand-collected listing photos for 15,777 transactions and a machine-learning algorithm to detect furniture, we provide the first large-scale evidence that staged homes sell for roughly 10% more and one week faster than comparable homes without furniture. Our pre-registered online experiment establishes causality and uncovers mechanisms. We find that furniture clarifies spatial use, while decor enhances emotional attachment, jointly driving the higher willingness-to-pay. These findings demonstrate how visual cues impact high-stakes decisions and systematically shape valuations in the largest asset market for households.
The opening paragraphs:
Behavioral economics has advanced significantly in demonstrating how cognitive, psychological, and emotional factors systematically influence economic decision-making (Rabin (1998), Heath et al. (1999), Rabin and Schrag (1999), Kahneman (2003), Gneezy et al. (2014), Chang et al. (2016), and Hirshleifer (2020)). Yet, many foundational models of consumer choice still presume a high degree of rationality in high-stakes environments, where the sheer magnitude of the transaction, in theory, should discipline behavior and mitigate the impact of biases. This paper examines the economic impact of non-consumable visual cues through staging, a common practice in the U.S. housing market, on high-stakes housing transactions.
House staging is the practice of furnishing and decorating a property for sale to create visual cues that help potential buyers imagine themselves living in the space. Importantly, the furniture and decor are classified as personal property, which consists of movable items that are typically not included in the sale unless explicitly stated in the contract. Standard asset pricing theory dictates that the value of a residential asset is a function of its fundamental hedonic characteristics (e.g., location, size, school quality, and structural condition), discounted by the user cost of capital (Sirmans et al., 2005; Poterba, 1984; Himmelberg et al., 2005). Rational agents should not price movable, non-consumable personal property (furniture and decor) into the value of the fixed asset, especially when such items convey no transactional value. However, the popularity of home staging, a common industry practice costly to the sellers or their agents, suggests a possible disconnect between theory and behavior. This disconnect gives rise to fascinating and largely unanswered economic puzzles (Yun et al., 2021): Do homebuyers pay for things that they know they cannot consume? If so, what is the magnitude of this staging premium? In addition, what underlying mechanisms do these visual cues activate that lead to a higher willingness to pay? This paper aims to answer these questions by exploring homebuyer behavior in the largest asset market for most households.
Later in the introduction, and reporting on a specific experiment within the larger study:
(1) Staging changes how potential buyers perceive the physical dimensions of the asset itself. Participants who viewed a staged room perceived it to be significantly wider and larger in total area (by an estimated 15-20 square feet, about 10% of the actual size) than the identical but empty room. (2) Staging reduces cognitive burdens by providing a practical demonstration of how a space can be used. Participants in both the Furniture Only and Furniture & Decor treatments were significantly more likely to agree that the “room layout is practical and usable” compared to those who saw an empty home. (3) Furniture alone is insufficient to trigger the full behavioral effect. While adding furniture made a home feel “warm and inviting” and “well-maintained,” it had no measurable impact on whether participants could “imagine myself living in this home” or whether they were more likely to “schedule a visit.” Only the addition of decor (e.g., plants, lamps, table settings) in the Furniture & Decor treatment produced a significant increase in these key measures of emotional connection and behavioral intent.
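The paper's headline number is a roughly 10% price premium for staged homes. A standard way to estimate such a premium is a hedonic regression of log price on the home's characteristics plus a staging dummy. The sketch below is not the paper's actual estimation (they use hand-collected photos and machine-learning furniture detection); it is a toy illustration on synthetic data, with a 10% premium built in, showing how the dummy's coefficient recovers the premium.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5000

# Hypothetical hedonic features: size in sqft, a location quality score, staging status.
size = rng.uniform(800, 3500, n)
location = rng.normal(0, 1, n)
staged = rng.binomial(1, 0.4, n).astype(float)

# Simulated log prices with a built-in 10% staging premium (coefficient = log(1.10)).
log_price = (11.0 + 0.0004 * size + 0.08 * location
             + np.log(1.10) * staged + rng.normal(0, 0.05, n))

# Hedonic regression: log(price) on an intercept, the features, and the staging dummy.
X = np.column_stack([np.ones(n), size, location, staged])
beta, *_ = np.linalg.lstsq(X, log_price, rcond=None)

premium = np.exp(beta[3]) - 1   # convert the log-point coefficient to a percentage
print(f"estimated staging premium: {premium:.1%}")
```

Regressing on log price is the usual choice here because it makes the staging coefficient interpretable as a proportional premium rather than a fixed dollar amount.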
I wonder, do homes associated with famous people or celebrities sell for higher prices than equivalent homes without such associations?
H/t Tyler Cowen
Saturday, April 11, 2026
Friday, April 10, 2026
Harpo Marx plays Liszt's Hungarian Rhapsody No. 2
🎹✨ Harpo Marx Meets Liszt’s Hungarian Rhapsody No. 2 – one of the most dramatic, fiery, and technically dazzling piano pieces ever written.
Harpo’s silent genius turns this virtuosic masterpiece into a hilarious, unforgettable spectacle, blending sheer musical brilliance with… pic.twitter.com/q97RY2t70t
— 🎶𝗖𝗹𝗮𝘀𝘀𝗶𝗰𝗮𝗹 𝗠𝗲𝗹𝗼𝗱𝗶𝗲𝘀 ✨ (@Old_But_Gold50s) April 10, 2026
Ezra Klein and Fareed Zakaria discuss the moral degeneracy of Donald Trump and what that’s doing to America.
Ezra Klein and Annie Galvin, The Civilization Trump Destroys May Be Our Own, NYTimes, Apr. 10, 2026.
Here's the beginning of the interview:
Ezra Klein: Fareed Zakaria, welcome back to the show.
Fareed Zakaria: Always a pleasure.
I want to start with Trump’s now infamous post on Truth Social on Tuesday morning, when he wrote: “A whole civilization will die tonight, never to be brought back again.”
What did you think when you saw that?
I was horrified. But it goes beyond that.
It felt like that tweet was the culmination of something that had been going on for a while — which was that the president of the United States was simply abandoning the entire moral weight that the United States had brought to its world role ever since World War II.
Not to sound too corny about it, because, of course, we made mistakes, and we were hypocritical and all that, but compared to every other power that gained this kind of enormous dominance, the U.S. had been different.
After 1945, it said: We’re not going to be another imperial hegemon. We’re not going to ask for reparations from the countries that we defeated. We’re actually going to try to build them, and we’re going to give them foreign aid.
That whole idea that the United States saw itself as different, saw itself not as one more in the train of great imperial powers — which, when it was their turn, decided to act rapaciously, to extract tribute, to enforce a brutal vision of dominance — all that was, in a sense, thrown away.
I realize it was just one tweet, but it was the culmination of something Trump has been doing for a long time.
It just left me very sad to think that the United States, this country that has really been so distinctive in its world mission — a country that I looked up to as a kid and came to as an immigrant — that its leader could threaten to annihilate an entire people.
And when you say something like that, it sounds very abstract: “civilization.” What we are talking about is the life and aspirations and culture and dignity of a whole people. You’re talking about 93 million people.
One thing that has always felt core about the moral challenge that Donald Trump and his view of geopolitics poses is it feels, to me, on a deep level, like a throwback to the 18th, 19th, early 20th century, when individual human lives were just understood as pawns in the greater game of dominance and strength and rivalries and conquests.
I’m not saying that there has not been disrespect or disregard for human life in the postwar era. That would be absurd.
But there was a commitment and a structure of values in which you didn’t threaten mass annihilation of civilians simply because you were trying to salvage face in a war you had started for no reason and were losing.
You see this in DOGE and its approach to U.S.A.I.D. — that there is something about how you treat or don’t treat, how you weigh or don’t weigh, the lives and futures of the people who are caught within your machinations and that he just wipes away, as a kind of weakness or liberal piety.
If you watch or listen to George W. Bush when he is essentially losing the war in Iraq, what is striking is the difference. Bush, for all his flaws — and he made many, many mistakes in Iraq — always looked at it as an essentially idealistic, aspirational mission.
We were trying to help the Iraqis. He never demeaned Islam. He always tried to see this as part of America’s great uplifting mission.
You almost miss that because, even in our mistakes, even in our errors, there was always that sense that we were trying to help this country do better, we were trying to help these people do better.
What you are describing, quite accurately, is that Trump approaches it not just from the point of view of the 19th century — because sometimes people talk about how he loves McKinley, and he liked tariffs, and he’s like McKinley in that imperialism.
No, Trump is more like a rapacious 18th-century European imperialist. McKinley said he went to the Philippines because he wanted to Christianize the place. There was none of that sense of uplift. Most of it was just brutal.
As you say, the individual was never at the center of it. Human life and dignity were never at the center of it. It was all a self-interested, short-term, extractive game. And Trump is hearkening back to that.
It’s interesting to ask where he gets it from. Because it really is probably fair to say that nobody else on the American political spectrum, if they were president, would speak like that. I don’t think JD Vance would speak like that. I don’t think Marco Rubio would speak like that.
There’s something that he brings to it — which is a callousness and a contempt for any expression of those values. For him, that’s all a sign of weakness, that’s the kind of [expletive] people say. But the reality is that’s the way he looks at the world.
It goes on from there.
Help your local AI pay off its investors
So basically we pay for all of their infrastructure, to the tune of potentially trillions of dollars, and we get a "share" of profits. They know the funding is drying up
— Alex Northrop (@northington764) April 10, 2026
Civil War Among Chimpanzees
Carl Zimmer, These Chimps Began the Bloodiest ‘War’ on Record. No One Knows Why. NYTimes, Apr. 9, 2026.
Near the end:
And the researchers are still trying to figure out what set off the conflict in the first place. “All of a sudden, yesterday’s friend becomes today’s foe,” said Dr. Mitani.
Within any group of chimpanzees, violence will flare from time to time — when apes converge on a tree full of fruit, for example, or when lower-ranked males vie to replace an old alpha male.
But this aggression can be dampened by the friendships that form over years. Some chimpanzees are especially social, jumping between many cliques. “They’re these important social bridges,” Dr. Sandel said.
In 2014, five adult males died, perhaps because of disease. Dr. Sandel speculated that these deaths ripped away some of the bridges that previously held the Ngogo groups together. Low-level conflicts blew up into something akin to civil war.
The Ngogo conflict could offer a glimpse at the kind of violence that might have flared up in our ancient forebears, given that chimpanzees and humans descend from common ancestors that lived about six million years ago.
“These findings tell us indeed that these civil-war-like types of conflicts were possible in the course of human evolution,” said Sylvain Lemoine, a primatologist at the University of Cambridge who was not involved in the study.
The Ngogo chimpanzees show how our ancestors could have gotten dragged into years of lethal fighting without ideology or cultural identity — let alone the language to talk about them.
Instead, shifting social bonds might have been enough to light the fire.
There's more at the link.
Thursday, April 9, 2026
My Music on New Savanna and 3QD
I’ve got quite a lot about music on New Savanna, including quite a bit about my own music. Here are posts about my interest in jazz, and here are posts specifically about how I learned about and learned to play jazz. This post, in particular, is about my earliest jazz, including Louis Armstrong. Groups I've played in:
This is a recent post about a band I helped form and played in when I was in upstate New York in the mid-1980s, the New African Music Collective (NAMC). We opened for Dizzy Gillespie in 1984.
Here are three posts about jam sessions with my Hoboken friend, Howard Olah-Reiken.
This post is about a free jazz workshop I took with the bassist William Parker.
This post is from roughly the same era as the NAMC post, when I was living in Troy, in upstate New York. But it’s about very different music, the Sage City Symphony in Bennington, Vermont (just across the border).
Here’s a post about my favorite venue for listening to live jazz, the Left Bank Jazz Society in Baltimore, Maryland. This is a somewhat extended version of that article.
Here’s a post about being in an improvisation workshop led by Frank Foster at SUNY Buffalo in the mid-1970s.
Now we’re back to Baltimore; this is about The Saint Matthew Passion, a rock group I played with from 1969 to ‘71. A formative experience, very.
And here’s the story about The Out of Control Rhythm and Blues Band, based out of Schenectady, NY.
My association with Charlie Keil. Here we’re marching and jamming in protest against the 2003 war in Iraq.
Videos of three tunes performed by a 12/8 path band Charlie Keil assembled: jamming on “Amazing Grace,” and “Peace Now,” an original we made up in the studio.
What I learned playing in marching band in high school.
NOTE: I also have a YouTube channel. It has a number of solo trumpet (or flugelhorn) videos, three videos of the 12/8 path band, and then a bunch of playlists.
What place will America have in the world order that’s beginning to emerge?
Steven Erlanger, A Cease-Fire for Now in Iran, but a Blow to American Credibility, NYTimes, Apr. 9, 2026.
Historical analogies are never exact. But with the tenuous cease-fire deal in the U.S.-Israeli war against Iran, some are asking whether this is a “Suez” moment for the United States, marking the waning of American power and credibility in the world.
The Suez crisis took place in October 1956, when Britain, France and Israel attacked Egypt to force open the Suez Canal. President Dwight D. Eisenhower, with an election days away, ordered them to stop. Prime Minister Anthony Eden of Britain resigned. President Gamal Abdel Nasser of Egypt became a hero of anti-colonialism.
Suez became shorthand for the moment that Britain, exhausted from World War II, gave way as a global power to the United States.
There are differences from that time. The Suez Canal is man-made and wholly in Egyptian territory, unlike the international waterway of the Strait of Hormuz. There is no other global power capable of replacing America in the region, let alone ordering President Trump around.
But the two-week cease-fire leaves the Islamic Republic in place and still in command of the future of the Strait of Hormuz, with Iran’s nuclear stockpile and ballistic missile program unresolved. After Mr. Trump’s declaration of victory, however hollow, it is difficult to imagine a resumption of full-scale war. [...]
The impact of a diminished United States is strongest in Europe, which has relied on NATO and the American security guarantee implicit in membership, including the U.S. nuclear umbrella. But Europeans drew a distinction between faith in America and faith in Mr. Trump. The former remains because it is vital for European security.
Still, Mr. Trump’s policies are inevitably producing a response that will outlast him. The rest of the world is trying to organize itself and derisk from an America that treats its allies as enemies and its traditional enemies, like Russia and China, as friends. [...]
China, which gets so much of its oil through the Strait of Hormuz, pushed Iran to agree to the cease-fire, and it is expected to participate in keeping the strait open and guaranteeing safe passage for others.
Much depends on how the war ends, cautioned Mr. Kupchan of the Council on Foreign Relations.
If the cease-fire leads to a deal that imposes significant constraints on Iran’s nuclear program and its ability to cause trouble, he said, that would be much better in the longer run than a frozen conflict or one that “just burns on month after month,” with all the accompanying impact on the energy market and American allies.
There's more at the link.