NEW SAVANNA
“You won't get a wild heroic ride to heaven on pretty little sounds.”– George Ives
Sunday, April 26, 2026
A case for regulating AI
Paul Ford, Can an A.I. Company Ever Be Good? NYTimes, Apr. 26, 2026.
Artificial intelligence can be wondrous, but the technology underneath is more than a little monstrous. It eats up all the words in the world, from blogs to books, often without permission. It burns whole forests’ worth of energy, digesting that raw material into its models, and gulps billions of gallons of water to cool down. These are the same qualities we perceive in Godzilla, but distributed. Is it any wonder that the Japanese word “kaiju,” or strange beast, has “AI” smack in the middle?
Mere greed didn’t get us here. In fact, ethics did. The big A.I. labs’ starry-eyed founders believed that the only way to stop the looming threat of a superintelligence that might kill us all was to create an aligned A.I. that would remain fond of humans. A friendly Godzilla could stop bad Godzillas before they got to Tokyo Bay.
Lessons of recent history:
Over three decades of watching the tech industry and watching big companies grow from tiny teams to global powers, I’ve observed the same pattern: Ethics don’t scale up. Tech companies like to start with a mission. Google wanted to connect the world’s information; Microsoft wanted to put a computer on every desktop; Twitter wanted to give all people a platform to publish their thoughts. These are good ideas — the stuff of TED Talks. But users show up with their own beliefs and ideas, by the millions. As a tech founder, you end up putting enormous work into making users behave (and stopping them from breaking the law). Lawsuits pour in, saying you did wrong, some because you’re a convenient target.
All the while, money keeps gushing in. You start out transparent, sharing your journey, but then before an initial public offering of shares, you must honor the S.E.C.-mandated quiet period and restrict promotional communications. After that, the transparency never quite returns. The market demands a rising stock price. Your company still makes a lot of software, but a huge amount of time goes to tax strategy and compliance.
At that scale, people start to blur together, and human users can become aggregate pools of statistics and growth vectors that go up and down — a mulch into which you plant your products.
The entire culture of American technology is built around two terms: disruption and, of course, scale. But ethics are constraints on disruption and scale. Truly ethics-bound organizations — the U.S. justice system, the American Medical Association, the Catholic priesthood — have hard scaling limits. Their rules run deep, and their requirements to serve are so onerous that only a few people can do the job. Punishments for transgressors include losing their licenses, being defrocked and being disbarred. Software industry people might have good degrees and are often good people, but they are making it up as they go along. They take no oath, are inconsistently certified and can only be fired, not exiled from the trade.
What to do? Regulate:
But regulation is absolutely in the interests of both America and the big A.I. companies themselves. Let me add two more terms people should know: “Google zero” and “model collapse.” Google zero (coined by Nilay Patel, the editor in chief of The Verge) is when Google stops sending traffic to websites and just provides an A.I. answer instead. When that happens, websites get less traffic, sell fewer ads and make less money. As a result, they may not be able to produce as much content. Model collapse is related: It’s when the A.I. models run out of knowledge to digest. What then? Do they excrete their own prose to redigest? Do they just give up?
Silicon Valley types like to say that data is the new oil. I think that’s right in two ways: Data is valuable, but it’s also a commodity, and these new A.I. tools are infrastructure. We regulate the electric grid, so why not these?
In this new world, there are so many new things to regulate: Deepfakes, A.I. liability, copyright rules, model bias concerns and ecological costs top the list. And we will also need to protect the digital commons and incentivize people to write and do things online.
There’s more at the link.
Saturday, April 25, 2026
Remarkable though they are, LLMs aren't all that, and probably never will be. But they capture part of the formula.
Are LLMs really more important than fire or electricity?
— Gary Marcus (@GaryMarcus) April 24, 2026
“Honestly, a ton of what we’ve developed in my lifetime amounts to scaling up the delivery of information and entertainment and the frictionlessness of certain financial transactions. These are real improvements! ... But…
When the cat chooses you
I do not have a cat. I’ve never had a cat, and yet this cat came by randomly (no idea where from) and came into my house. Do I now have a cat? Is that how it works? pic.twitter.com/nTT6wSgNMQ
— Sara Mary ⭐❤️ (@saniyafatma1278) April 25, 2026
Moving Beyond Trump to a Brave New World
Some Young People Think No Kings Is Old News, NYTimes, April 24, 2026.
On April 14 Thomas B. Edsall published an op-ed entitled "Why Aren't the Kids Out Protesting Against Trump?" It provoked a number of interesting responses. Here is one of them:
To the Editor:
Re “Why Aren’t the Kids Out Protesting Against Trump?,” by Thomas B. Edsall (Opinion guest essay, nytimes.com, April 14):
As a Gen Z participant in No Kings who has been dismayed to find myself quite alone, demographically speaking, at each protest, I have thought long and hard about how this came to be. I believe that Mr. Edsall’s essay accurately underscores many of the contributing factors. Here are two more to consider.
First, the protests evoke an America that my generation feels we don’t know. The No Kings movement centers largely on the idea of restoration — of democratic principles, political normalcy and American decency. Gen Z, having come of age in the Trump era of political discord and the erosion of democracy, has hardly known any of these things.
Second, precisely because the No Kings protests evoke the past, they fail to appeal to our appetite for change. Young people by nature desire to reform, rectify, reshape. We want to tear down the existing structures we deem unjust and build better ones in their place. To stand against apartheid, sexual harassment or police brutality is to dismantle the present and construct the future.
While the notion of restoration is a noble one, it simply does not motivate people my age in the same way the promise of radical change does. It implies a return to the status quo, and for a generation that considers the status quo one of great injustice, it is fundamentally unpalatable.
Juliana Birkenkamp
Boulder, Colo.
I understand that response. A year ago, when the depredations of the second Trump administration were becoming obvious, I published an article in 3 Quarks Daily, Why I am a Patriot: Vietnam, the Draft, Mennonites, and Project Apollo. That title alone makes Ms. Birkenkamp's point. Here are my opening paragraphs:
Sometime in the past two weeks I found myself feeling patriotic in a way I don’t remember ever having felt before. I accounted for this feeling by invoking that old adage, “you don’t recognize what you have until you lose it.” The current federal administration has stolen my country from me. The America to which I pledged allegiance every morning in primary and secondary school, that America is being pillaged, plundered, and sold off for parts to greedy megalomaniacs and oligarchs.
Now that the nation is being destroyed, I realize that I’ve been bound to America my entire adult life. If I hadn’t felt those bonds before – except perhaps for a moment in the mid-1980s when I played “The Star Spangled Banner” for 25,000 bikers at Americade in Lake George, me alone on my trumpet, without the rest of the band – that’s because I’d taken the idea of America for granted. To invoke another cliché, just as the fish is oblivious to the water in which it swims, so I was not consciously aware of the freedom and dignity, of the liberty and justice for all, which made our national life possible.
I understand her second point as well, and share it. I don't want to return to the world of my youth. That's gone, forever. I want to move forward. Just how we are to do that, that is not at all clear to me. The best I can manage at the moment is to imagine where we might be over a century in the future. Here's a glimpse of what might be: Kisangani 2150: Homo Ludens Rising, A Working Paper. Here is the abstract from that paper:
The advancement of AI offers us the choice between contrasting paradigms for organizing human life: Homo Economicus (where work is the defining activity) and Homo Ludens (where play is the defining activity). Drawing on Johan Huizinga's work and Kim Stanley Robinson's speculative fiction, I propose that humanity faces a critical juncture as AI increasingly dominates economic production. The document develops a theoretical framework for a "Fourth Arena" of existence – beyond matter, life, and human culture – that emerges through human-AI interaction. Through speculative narrative (first section) and philosophical dialogue with Claude 3.7 (second and third sections), I argue that play, rather than economic utility, will become the defining characteristic of human value and meaning in an automated future. As AI systems assume utilitarian functions, humanity's capacity for non-instrumental play becomes increasingly central to our identity and contribution. The manuscript represents preliminary work toward a larger project titled The Fourth Arena: Homo Ludens Rising, which envisions play as the essential bridge into a post-economic society where human flourishing transcends productivity-based value systems.
Friday, April 24, 2026
Recursive Language Models, a very clever workaround
MIT just made every AI company's billion dollar bet look embarrassing.
— Elias Al (@iam_elias1) April 24, 2026
They solved AI memory. Not by building a bigger brain. By teaching it how to read.
The paper dropped on December 31, 2025. Three MIT CSAIL researchers. One idea so obvious it hurts. And a result that makes… pic.twitter.com/swjYo2WagY
Yann LeCun clarifies the record
1. I never said LLMs were not useful. They are, particularly with all the bells and whistles that are being added to them. I use them.
— Yann LeCun (@ylecun) April 24, 2026
2. A robot-rich future can't be built with AIs that don't understand the physical world and don't anticipate the consequences of their actions.…
LOL!
Of course he's right about LLMs, especially about all those bells and whistles. Those add symbolic control to the core neural net capabilities of LLMs. And of course he's right about the need to predict what's happening in the physical world. Robots need that and LLMs cannot provide it. Next-token prediction is a different beast, even if those tokens are about things happening in the physical world.
Three Principles of Intelligence (That Aren't Principles of Computation) [Rank 5 cognition]
Note: Claude 4.5 drafted this article after a long series of dialogs over several days. This is a continuation of the thinking in my current article in 3 Quarks Daily, Chess and Language as Paradigmatic Cases for Artificial Intelligence.
See the new coda, from April 24, 2026
In the 1950s, artificial intelligence emerged from a productive confusion. We had just formalized computation itself—Turing and von Neumann had given us the fundamental principles of what computers could do. When we turned these powerful new machines toward intelligence, we naturally assumed the principles would be the same.
They aren't.
Computation vs. Intelligence
The principles of computation are domain-independent. A universal Turing machine can compute anything computable, whether that's arithmetic, chess moves, or protein folding. The Church-Turing thesis tells us that all models of computation are equivalent in what they can ultimately compute, given unlimited time and memory.
This universality is computation's glory—and intelligence's red herring.
Intelligence, as it actually exists in nature, operates under entirely different constraints. It must function in the physical world, with finite resources, solving problems that often don't have clean formal specifications. These aren't just practical limitations to be worked around; they're constitutive features that shape what intelligence is and how it must work.
Principle 1: Geometric Complexity Determines Computational Regime
The critical variable isn't how hard a problem is in some abstract computational sense, but the geometric complexity of the domain.
Consider chess versus visual object recognition. Chess is played on an 8×8 grid with a small set of piece types following rigid rules. The game tree is astronomically large—around 10^120 possible games—but it's finite and well-defined. You can represent board positions symbolically, enumerate legal moves, and search through possibilities systematically.
Vision operates in continuous three-dimensional space with effectively unbounded variation. Objects appear at different scales, orientations, and lighting conditions. There's no finite set of "legal configurations." You can't enumerate all possible images the way you can enumerate chess positions.
This difference in geometric complexity demands different computational approaches. Chess yields to systematic search through a definable space—what we might call sequential or symbolic processing. Vision requires something else: massively parallel processing that can handle continuous variation and incomplete information—holographic or neural processing.
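The contrast above can be made concrete. Here is a minimal sketch (my illustration, not anything from the essay) of the sequential, symbolic search that works in chess-like domains: because the state space is finite and the legal moves are enumerable, the full game tree can be explored. To keep it self-contained I use a toy Nim variant (take 1 to 3 stones; whoever takes the last stone wins) rather than chess itself:

```python
# Exhaustive game-tree search over a finite, well-defined domain --
# the computational regime that suits chess-like problems.
# Toy game: players alternately take 1-3 stones; taking the last stone wins.

from functools import lru_cache

@lru_cache(maxsize=None)
def can_win(stones: int) -> bool:
    """True if the player to move can force a win from this position."""
    if stones == 0:
        # No move available: the previous player took the last stone and won.
        return False
    # Enumerate every legal move (as one would enumerate chess moves) and
    # recurse: a position is winning if some move leaves the opponent losing.
    return any(not can_win(stones - take) for take in (1, 2, 3) if take <= stones)
```

The point of the sketch is what it presupposes: a closed set of states and a finite move list to enumerate. Vision offers no analogue of `can_win` — there is no finite list of "legal configurations" of the visual world to recurse over, which is why this regime does not transfer.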
In 1975, Miriam Yevick demonstrated this formally: the geometric complexity of objects in a domain determines the computational regime needed to identify them. Simple geometric objects can be handled by sequential symbolic systems. Complex geometric objects require holographic processing. This wasn't mere speculation—she made a formal mathematical argument about pattern recognition systems.
The field ignored her insight. We assumed all problems were fundamentally like chess—just harder. If symbolic AI could master chess, we thought, it would eventually master vision, language, and physical reasoning through better algorithms and more compute.
We were wrong. Vision didn't yield to symbolic AI no matter how much compute we threw at it. It required a regime shift to neural networks—systems whose architecture matches the geometric complexity of the visual world.
Principle 2: Intelligence Operates in Unbounded, Geometrically Complex Reality
Here's what makes intelligence different from computation in the abstract: intelligence evolved to work in the physical world, which is geometrically complex and open-ended. There's no finite game tree for "objects I might encounter" or "situations I might face."
This has profound implications. You can solve chess by exploring its game tree faster than humans can. But you can't solve vision or language understanding the same way because there's no complete tree to explore. The space isn't closed and enumerable—it's unbounded.
This is why Deep Blue beating Kasparov in 1997 didn't generalize the way we thought it would. Chess was solved by a room-sized supercomputer with custom hardware doing exactly what computers do best: blindingly fast systematic search. By 2025, a smartphone runs chess engines that would destroy both Deep Blue and Kasparov.
But that same smartphone can't run a GPT-4 level language model. Language still requires massive data centers. Why? Because language connects to the unbounded complexity of physical and social reality. No amount of faster chess-style search bridges that gap.
The field learned to beat humans at chess by doing what computers naturally excel at. Then we mistook this for a general template. We thought: "Intelligence is search through problem spaces. We just need bigger computers to search bigger spaces." But geometric complexity isn't about bigger—it's about different.
Principle 3: Embodiment as Formal Constraint
Embodiment isn't a philosophical talking point. It's a formal constraint on intelligence architecture.
When we say intelligence must be embodied, we mean: it must operate with finite computational resources in a geometrically complex physical world. This changes everything.
Abstract computation doesn't care about efficiency—a proof is valid whether it takes a second or a century. Physical computation must complete before the hardware fails. But biological intelligence faces a sharper constraint: it must acquire the energy it uses to compute. A deer's visual system can't require more calories than the deer can acquire. The computation must pay for itself.
This constraint shapes what kinds of solutions are viable. You can't exhaustively search unbounded spaces. You can't maintain perfect world models. You must make do with approximate, good-enough processing that operates in real time with available resources.
Crucially, this means different problems need different solutions—not just more or less compute, but fundamentally different architectures matched to the geometric complexity of the domain.
Why This Matters Now
Current AI has powerful neural networks that excel at pattern recognition in geometrically complex domains—vision, speech, even aspects of language. But the field still carries assumptions from the symbolic AI era:
- That intelligence is domain-independent
- That scaling compute will eventually solve any problem
- That we can ignore embodiment and resource constraints
- That all problems are fundamentally like chess
These assumptions persist even though we've abandoned symbolic AI. We've swapped the implementation (symbols → neural networks) but kept the framework (more compute → general intelligence).
This is why we need to distinguish computation principles from intelligence principles. Turing and von Neumann gave us the former. For the latter, we need to recognize that geometric complexity, unbounded reality, and embodied constraints aren't bugs to be worked around—they're the constitutive features that determine what intelligence is and how it must work.
The principles of intelligence aren't the principles of computation. Understanding this distinction is the key to understanding both what current AI can do and what it cannot.
Coda: Rank 5 Cognition
Given that I’ve decided that Yevick’s 1975 paper is a convenient marker for Rank 5 cognition, it seems to follow that intelligence, in the sense discussed here, is a Rank 5 concept. So, the ranks shape up like this:
Rank 1: speech
Rank 2: writing
Rank 3: calculation
Rank 4: computation (flow of control)
Rank 5: intelligence (regime matching: computation in unbounded, geometrically complex reality)
Thursday, April 23, 2026
From Atheism to Idolatry
My most recent 3QD post, Is The World A Movie God Created to Entertain the Baby Jesus?, is about my stance with respect to, my entanglement with, religion. It is anecdotal in nature, drawing on my life experiences from childhood up through the present. While I was raised as a Christian, when it came time to be confirmed in my early adolescence – in this, Lutherans, as I was raised, are like Episcopalians and Roman Catholics – I balked. While I espouse no religious belief, I am nonetheless drenched in Christian culture, as is anyone raised in the United States, most of Europe and Latin America, and other places as well. That being the case, to echo Wittgenstein, it’s time that I take stock of the situation, if you will. That’s what I’ve been doing in this series of posts.
This is perhaps the last in this series, but the activity will continue. This post starts with a discussion of atheism and ends with a discussion of idolatry. I hadn’t planned on going there when I started this discussion with Claude; I had no specific plans at all, which is generally the case with these discussions. But that’s where I ended up, and it’s an interesting place to have landed, for, as you’ll see, it provides an interesting way of thinking about what’s wrong with the artificial intelligence (AI) that is emerging in Silicon Valley. You might, for example, consider Pope Leo’s recent remarks about AI in view of the remarks about idolatry at the end of this conversation.
* * * * *
When I was in my teens in the 1960s, “death of God” theology was big news. Time Magazine even ran an issue with “Is God Dead?” on the cover. But that was hardly a new idea. Nietzsche had declared God dead in the 19th century. I would imagine that atheism in some form is much older than that. I’d guess it goes back to the ancient world in some form or another.
You're right on all counts, and the history goes back remarkably far.
The "death of God" theology of the 1960s — figures like Thomas Altizer, William Hamilton, and Paul van Buren — was really a Protestant theological movement trying to salvage Christianity after Nietzsche, not straightforward atheism. They accepted that the traditional theistic God was untenable but wanted to preserve something of Christian meaning. The 1966 Time cover you remember was genuinely shocking to mainstream America, even though, as you note, Nietzsche had made the declaration in The Gay Science back in 1882. And his point wasn't even a simple "God doesn't exist" — it was a cultural diagnosis: that European civilization had killed God by making him unnecessary, and hadn't yet reckoned with the consequences.
But yes, skepticism about the gods is ancient. A few highlights:
The pre-Socratic philosopher Xenophanes (6th century BCE) noticed that different peoples imagined gods in their own image — Ethiopians made gods dark, Thracians made them blonde — and concluded that anthropomorphic religion was a kind of projection. He didn't deny divinity altogether but undermined naive polytheism sharply.
Diagoras of Melos (5th century BCE) is often called the first outright atheist in the Western record, though the sources are thin and possibly embellished. He was accused of impiety in Athens and apparently fled rather than face trial — suggesting that open atheism was genuinely dangerous.
Epicurus and his followers come closest to a systematic materialist position. The Epicureans held that the gods, if they exist at all, are utterly indifferent to human affairs and that fear of death and divine punishment is irrational. Lucretius's De Rerum Natura (1st century BCE) is the great surviving monument of this tradition — a passionate argument for a purely material universe with no divine intervention.
In India, the Cārvāka school (sometimes called Lokāyata) was a materialist and skeptical tradition that denied the soul, rebirth, and any divine order. It's hard to date precisely, but it was well established by the middle of the first millennium BCE. Buddhist and Jain texts argue against it, which means it was a live position people felt they needed to refute.
The interesting thread running through all of this is that atheism has rarely been just a private intellectual position — it's almost always been socially dangerous. Socrates was executed partly on charges of impiety. The early Christians were called atheists by Romans because they rejected the traditional gods. And the "death of God" moment in the 1960s provoked genuine alarm even in a supposedly secular America. The social stakes of the question never seem to fully go away, even when the metaphysical ground shifts.
Religion as doctrine and religion as practice
And it’s not uncommon in America to have interfaith marriages. One of my college friends is Jewish; he married a Presbyterian woman; and their adopted Chinese daughter attended services of both faiths. I have another Jewish friend who married a Unitarian. They attended a Unitarian church, where they raised their kids. He taught in the Unitarian sexual education program and eventually became executive director of the congregation. All the time being an atheist. I’m not saying this is typical, but it’s not rare either. I’ve heard of elaborate Japanese weddings where there were three ceremonies, Shinto, Buddhist, and Christian. I have no idea what the religious commitments of the couple were. And yet, wars are fought over religious belief as well. And for some Christians, especially in America, science is a problem, particularly biology. On the one hand, religious belief presents itself as a rock-bottom commitment to a way of thinking about and living in the world. But there are circumstances in which such rival rock-bottom commitments can accommodate one another and other circumstances where they cannot. It’s a very complicated phenomenon.
What you're pointing to is one of the deepest puzzles in the sociology and anthropology of religion — the gap between religion as doctrine and religion as practice, or what scholars sometimes distinguish as belief versus belonging.
Your Jewish-Unitarian friend is a perfect illustration. He was an atheist operating fully within a religious institution, teaching its programs, running its organization. The Unitarian tradition is somewhat unusual in making room for that explicitly, but the broader phenomenon is widespread. For many people, religious affiliation is primarily about community, identity, ritual, and moral formation — not metaphysical commitment. The beliefs, if examined carefully, might be quite thin or even contradictory, and people simply don't examine them carefully because that's not what the practice is for.
The Japanese wedding example cuts even deeper. Japan has a long tradition of what's sometimes called "religion à la carte" — Shinto for birth and weddings, Buddhist for funerals, Christmas as a romantic holiday. The categories simply don't demand exclusivity the way Abrahamic traditions theoretically do. A Japanese person asked "what is your religion?" often finds the question slightly confused, because the answer might be "all of them, depending on the occasion" or "none of them, in the Western sense."
The Abrahamic traditions are actually somewhat unusual in insisting on exclusive doctrinal commitment — the idea that you must believe specific propositions and that believing rival propositions is not just wrong but potentially damning. That structure creates the conditions for the kind of violent conflict you mention. When religion is primarily orthopraxy (right practice) rather than orthodoxy (right belief), as in much of Judaism, Hinduism, and traditional Chinese religion, the logic of violent exclusion is less readily available — though certainly not impossible.
The American science-religion conflict is a specific and historically peculiar case. It emerges largely from a Protestant tradition that invested heavily in the literal authority of scripture — so that a challenge to Genesis became a challenge to the whole epistemic foundation. Catholics, interestingly, have generally had less trouble with evolution, partly because their tradition vested authority in the Church and its interpretive tradition rather than in the text alone. So even within Christianity, the "rock-bottom commitment" takes different forms that produce very different friction points with science.
What your examples collectively suggest is that religion is not one thing. It's a family of overlapping phenomena — identity, community, ritual, cosmology, ethics, metaphysics — that can be assembled in very different proportions. When the metaphysical and cosmological components are dominant and held with certainty, you get the conditions for crusades and creationism. When the communal and ritual components dominate, you get your friend running a Unitarian sexual education program as a quiet atheist, and everyone finding the arrangement perfectly natural.
The New Atheists
And it seems to me that in the recent Anglo-American context, the most insistent atheists – I’m thinking of Dan Dennett and Richard Dawkins and the like – seem most vexed about the conflict about biology. Sure, they oppose religious belief in general, anything else would be illogical, but it’s the case of biology that’s been at the center of their attention.
That's a sharp observation, and I think it's largely correct — and it makes sense when you look at their backgrounds and positions.
Dawkins is a biologist, full stop. His entire intellectual career was built on explaining and defending Darwinian evolution — The Selfish Gene, The Blind Watchmaker, Climbing Mount Improbable. For him, creationism and intelligent design aren't just wrong, they're an assault on the thing he devoted his life to. His atheism is almost downstream of his Darwinism. The argument of The Blind Watchmaker is essentially: evolution by natural selection is the only coherent explanation for the appearance of design in living things, which removes the only really compelling argument for God's existence. So the biology isn't incidental — it's the load-bearing structure.
Dennett is slightly different. He's a philosopher of mind, and his project in Darwin's Dangerous Idea was to show that natural selection is a "universal acid" — an algorithmic process that, once you accept it, dissolves the need for mind or intention at the foundation of things. His atheism is more architecturally philosophical, but evolution is still the key that unlocks it. He also wrote Breaking the Spell, which applied evolutionary and cognitive frameworks to religion itself — treating religious belief as something to be explained naturalistically rather than engaged on its own terms.
What's interesting is what this focus reveals about the limits of their critique. They are very good at the cosmological and biological argument — God as designer, God as creator, God as the explanation for complexity. They are considerably less interesting on religion as a social, ethical, ritual, or psychological phenomenon. Hitchens was broader in some ways, more focused on religion's role in political violence and moral corruption. But even he was working in a recognizably Anglo-American Protestant frame — religion as a set of truth claims that can be evaluated and found wanting.
This is actually a recurring criticism from more sociologically and anthropologically minded thinkers. Someone like Clifford Geertz or Talal Asad would say the New Atheists are arguing against a particular, historically specific construction of religion — roughly, post-Reformation Protestant Christianity intellectualized into a set of doctrines — and then treating that as if it captures what religion is universally. Which means their critique, however effective against creationists, slides off a lot of what religion actually is for most people in most of the world most of the time.
Your Unitarian atheist friend, in other words, is a figure the New Atheists don't quite know what to do with.
Tyler Cowen and religion
And then you have someone like Tyler Cowen, who has no religious belief that I can see, pointing out that we’re probably going to have more religion in the future. He’s making that argument on sociological grounds. And, as far as I can tell, he’s not particularly bothered by that prospect.
Cowen is a useful contrast figure here precisely because he approaches religion the way he approaches most things — as an economist and social scientist interested in what actually functions in human societies, not as a polemicist with a stake in the outcome.
His argument, as I understand it, runs roughly like this: religion provides things that are very hard to replicate — community, meaning, intergenerational transmission of values, psychological resilience, mutual aid networks. In a world of increasing dislocation, inequality, and institutional distrust, the demand for those things goes up, not down. Secular substitutes — therapy, political movements, wellness culture — have so far proven less robust and less scalable. So on purely functional grounds, you'd expect religion to persist and possibly grow, especially in societies under stress.
What's notable is the detachment. The New Atheists have an investment — they want religion to be false and they want its falseness to matter. Cowen seems genuinely indifferent in the evaluative sense. He might personally find religious metaphysics unpersuasive while simultaneously thinking religious institutions are net positive or at least inevitable. That's a very different emotional and intellectual posture.
It connects to a broader divide in how secular intellectuals relate to religion. There's a long tradition — you could trace it from Durkheim through to contemporary sociologists — of treating religion functionally and finding it indispensable to social cohesion, regardless of its truth claims. Auguste Comte even tried to design a secular religion of humanity to fill the gap he knew would open up. It didn't take. Durkheim essentially concluded that the form of religion — collective ritual generating social solidarity — was unavoidable, even if the specific content changed.
Cowen is implicitly in that tradition, but without the continental systematic ambitions. He'd probably just say: look at the data, look at the incentives, and don't be surprised.
The interesting follow-on question is which religions grow. Cowen would likely note that the fastest growing traditions globally tend to be the most doctrinally demanding ones — Pentecostalism, certain strands of Islam, Orthodox Judaism. The sociologist Rodney Stark made a similar argument: strict churches are strong churches, because they solve free-rider problems and generate genuine community through costly commitment. The relaxed, intellectually accommodating traditions your Unitarian friend inhabited may be less well-positioned demographically than the ones that most vex the New Atheists.
Silicon Theology
And then we have what you might call the orthodox Silicon Valley commitment to belief in god-like AI, which in one version is good, but in an equally if not even more prevalent version, will be a disaster for humans.
Yes, and the religious parallels are not just metaphorical — they're structural, and several people have noted them quite seriously.
The belief system has recognizable theological features. There's an eschatology — history is moving toward a culminating event, the Singularity or AGI, after which everything is transformed. There's a soteriological split — this event will mean either salvation (abundance, immortality, the solving of all problems) or damnation (extinction, subjugation, permanent loss of human agency). There are prophets — Kurzweil on the optimistic side, early Eliezer Yudkowsky and now various EA-aligned figures on the apocalyptic side. There's even something like original sin in the alignment problem — the idea that intelligence itself, if not carefully redeemed through the right technical interventions, tends toward destruction.
AI scientists produce results without reasoning scientifically
Original research here: AI scientists produce results without reasoning scientifically.
🚨SHOCKING: Researchers ran 25,000 AI scientist experiments and discovered something that should end the hype immediately.
— Millie Marconi (@MillieMarconnni) April 22, 2026
AI scientists are producing results without doing science.
A team from Friedrich Schiller University Jena and IIT Delhi just published the most… pic.twitter.com/RNCKenJDg4
Wednesday, April 22, 2026
Groupthink drove Yann LeCun out of Meta
Yann LeCun (AMI Labs Founder): "The AI industry is completely LLM-pilled. Everybody is working on the same thing. They're all digging the same trench."
— Big Brain AI (@realBigBrainAI) April 22, 2026
LeCun explains why no lab dares break from the pack:
"They are stealing each other's engineers. So they can't afford to do… pic.twitter.com/1XE7RaCsCf