Saturday, April 25, 2026

Pink cherry blossoms

Remarkable though they are, LLMs aren't all that, and probably never will be. But they capture part of the formula.

When the cat chooses you

Moving Beyond Trump to a Brave New World

Some Young People Think No Kings Is Old News, NYTimes, April 24, 2026.

On April 14 Thomas B. Edsall published an op-ed entitled, Why Aren’t the Kids Out Protesting Against Trump?. It provoked a number of interesting responses. Here is one of them:

To the Editor:

Re “Why Aren’t the Kids Out Protesting Against Trump?,” by Thomas B. Edsall (Opinion guest essay, nytimes.com, April 14):

As a Gen Z participant in No Kings who has been dismayed to find myself quite alone, demographically speaking, at each protest, I have thought long and hard about how this came to be. I believe that Mr. Edsall’s essay accurately underscores many of the contributing factors. Here are two more to consider.

First, the protests evoke an America that my generation feels we don’t know. The No Kings movement centers largely on the idea of restoration — of democratic principles, political normalcy and American decency. Gen Z, having come of age in the Trump era of political discord and the erosion of democracy, has hardly known any of these things.

Second, precisely because the No Kings protests evoke the past, they fail to appeal to our appetite for change. Young people by nature desire to reform, rectify, reshape. We want to tear down the existing structures we deem unjust and build better ones in their place. To stand against apartheid, sexual harassment or police brutality is to dismantle the present and construct the future.

While the notion of restoration is a noble one, it simply does not motivate people my age in the same way the promise of radical change does. It implies a return to the status quo, and for a generation that considers the status quo one of great injustice, it is fundamentally unpalatable.

Juliana Birkenkamp Boulder, Colo.

I understand that response. A year ago, when the depredations of the second Trump administration were becoming obvious, I published an article in 3 Quarks Daily, Why I am a Patriot: Vietnam, the Draft, Mennonites, and Project Apollo. That title alone makes Ms. Birkenkamp's point. Here are my opening paragraphs:

Sometime in the past two weeks I found myself feeling patriotic in a way I don’t remember ever having felt before. I accounted for this feeling by invoking that old adage, “you don’t recognize what you have until you lose it.” The current federal administration has stolen my country from me. The America to which I pledged allegiance every morning in primary and secondary school, that America is being pillaged, plundered, and sold off for parts to greedy megalomaniacs and oligarchs.

Now that the nation is being destroyed, I realize that I’ve been bound to America my entire adult life. If I hadn’t felt those bonds before – except perhaps for a moment in the mid-1980s when I played “The Star Spangled Banner” for 25,000 bikers at Americade in Lake George, me alone on my trumpet, without the rest of the band – that’s because I’d taken the idea of America for granted. To invoke another cliché, just as the fish is oblivious to the water in which it swims, so I was not consciously aware of the freedom and dignity, of the liberty and justice for all, which made our national life possible.

I understand her second point as well, and share it. I don't want to return to the world of my youth. That's gone, forever. I want to move forward. Just how we are to do that, that is not at all clear to me. The best I can manage at the moment is to imagine where we might be over a century in the future. Here's a glimpse of what might be: Kisangani 2150: Homo Ludens Rising, A Working Paper. Here is the abstract from that paper:

The advancement of AI offers us the choice between contrasting paradigms for organizing human life: Homo Economicus (where work is the defining activity) and Homo Ludens (where play is the defining activity). Drawing on Johan Huizinga's work and Kim Stanley Robinson's speculative fiction, I propose that humanity faces a critical juncture as AI increasingly dominates economic production. The document develops a theoretical framework for a "Fourth Arena" of existence-beyond matter, life, and human culture-that emerges through human-AI interaction. Through speculative narrative (first section) and philosophical dialogue with Claude 3.7 (second and third sections), I argue that play, rather than economic utility, will become the defining characteristic of human value and meaning in an automated future. As AI systems assume utilitarian functions, humanity's capacity for non-instrumental play becomes increasingly central to our identity and contribution. The manuscript represents preliminary work toward a larger project titled The Fourth Arena: Homo Ludens Rising, which envisions play as the essential bridge into a post-economic society where human flourishing transcends productivity-based value systems.

Friday, April 24, 2026

Recursive Language Models, a very clever workaround

Friday Fotos: To and from

Yann LeCun Clarifies the record

LOL! 

Of course he's right about LLMs, especially about all those bells and whistles. Those add symbolic control to the core neural net capabilities of LLMs. And of course he's right about the need to predict what's happening in the physical world. Robots need that and LLMs cannot provide it. Next-token prediction is a different beast, even if those tokens are about things happening in the physical world. 

Three Principles of Intelligence (That Aren't Principles of Computation) [Rank 5 cognition]

Note: Claude 4.5 drafted this article after a long series of dialogs over several days. This is a continuation of the thinking in my current article in 3 Quarks Daily, Chess and Language as Paradigmatic Cases for Artificial Intelligence.

See the new coda, from April 24, 2026 


In the 1950s, artificial intelligence emerged from a productive confusion. We had just formalized computation itself—Turing and von Neumann had given us the fundamental principles of what computers could do. When we turned these powerful new machines toward intelligence, we naturally assumed the principles would be the same.

They aren't.

Computation vs. Intelligence

The principles of computation are domain-independent. A universal Turing machine can compute anything computable, whether that's arithmetic, chess moves, or protein folding. The Church-Turing thesis tells us that all models of computation are equivalent in what they can ultimately compute, given unlimited time and memory.

This universality is computation's glory—and intelligence's red herring.

Intelligence, as it actually exists in nature, operates under entirely different constraints. It must function in the physical world, with finite resources, solving problems that often don't have clean formal specifications. These aren't just practical limitations to be worked around; they're constitutive features that shape what intelligence is and how it must work.

Principle 1: Geometric Complexity Determines Computational Regime

The critical variable isn't how hard a problem is in some abstract computational sense, but the geometric complexity of the domain.

Consider chess versus visual object recognition. Chess is played on an 8×8 grid with a small set of piece types following rigid rules. The game tree is astronomically large—around 10^120 possible games—but it's finite and well-defined. You can represent board positions symbolically, enumerate legal moves, and search through possibilities systematically.

Vision operates in continuous three-dimensional space with effectively unbounded variation. Objects appear at different scales, orientations, and lighting conditions. There's no finite set of "legal configurations." You can't enumerate all possible images the way you can enumerate chess positions.

This difference in geometric complexity demands different computational approaches. Chess yields to systematic search through a definable space—what we might call sequential or symbolic processing. Vision requires something else: massively parallel processing that can handle continuous variation and incomplete information—holographic or neural processing.
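The contrast can be made concrete with a toy sketch (my illustration, not from the post; the function and numbers are chosen only to make the point): a chess piece's legal moves form a small, closed set that can be listed exhaustively, while even a minuscule image space dwarfs the chess game tree.

```python
# Toy illustration of the enumerable vs. unbounded contrast.
# A lone rook on an empty 8x8 board: every legal destination can be listed.

def rook_moves(file, rank):
    """Enumerate every square a rook on (file, rank) can reach."""
    moves = [(f, rank) for f in range(8) if f != file]   # horizontal moves
    moves += [(file, r) for r in range(8) if r != rank]  # vertical moves
    return moves

print(len(rook_moves(3, 3)))  # 14 -- a small, closed, searchable set

# Contrast: even a tiny 8x8 image with 256 gray levels per pixel admits
# 256**64 configurations, more than the ~10^120 chess game tree itself.
image_space = 256 ** 64
print(image_space > 10 ** 120)  # True
```

The point of the sketch is only that the symbolic regime's basic move, "enumerate and search," is well defined for the rook and undefined for the image: there is no analogue of `rook_moves` that lists the configurations a visual system must handle.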

In 1975, Miriam Yevick demonstrated this formally: the geometric complexity of objects in a domain determines the computational regime needed to identify them. Simple geometric objects can be handled by sequential symbolic systems. Complex geometric objects require holographic processing. This wasn't mere speculation—she made a formal mathematical argument about pattern recognition systems.

The field ignored her insight. We assumed all problems were fundamentally like chess—just harder. If symbolic AI could master chess, we thought, it would eventually master vision, language, and physical reasoning through better algorithms and more compute.

We were wrong. Vision didn't yield to symbolic AI no matter how much compute we threw at it. It required a regime shift to neural networks—systems whose architecture matches the geometric complexity of the visual world.

Principle 2: Intelligence Operates in Unbounded, Geometrically Complex Reality

Here's what makes intelligence different from computation in the abstract: intelligence evolved to work in the physical world, which is geometrically complex and open-ended. There's no finite game tree for "objects I might encounter" or "situations I might face."

This has profound implications. You can solve chess by exploring its game tree faster than humans can. But you can't solve vision or language understanding the same way because there's no complete tree to explore. The space isn't closed and enumerable—it's unbounded.

This is why Deep Blue beating Kasparov in 1997 didn't generalize the way we thought it would. Chess was mastered by a room-sized supercomputer with custom hardware doing exactly what computers do best: blindingly fast systematic search. By 2025, a smartphone runs chess engines that would destroy both Deep Blue and Kasparov.

But that same smartphone can't run a GPT-4 level language model. Language still requires massive data centers. Why? Because language connects to the unbounded complexity of physical and social reality. No amount of faster chess-style search bridges that gap.

The field learned to beat humans at chess by doing what computers naturally excel at. Then we mistook this for a general template. We thought: "Intelligence is search through problem spaces. We just need bigger computers to search bigger spaces." But geometric complexity isn't about bigger—it's about different.

Principle 3: Embodiment as Formal Constraint

Embodiment isn't a philosophical talking point. It's a formal constraint on intelligence architecture.

When we say intelligence must be embodied, we mean: it must operate with finite computational resources in a geometrically complex physical world. This changes everything.

Abstract computation doesn't care about efficiency—a proof is valid whether it takes a second or a century. Physical computation must complete before the hardware fails. But biological intelligence faces a sharper constraint: it must acquire the energy it uses to compute. A deer's visual system can't require more calories than the deer can acquire. The computation must pay for itself.

This constraint shapes what kinds of solutions are viable. You can't exhaustively search unbounded spaces. You can't maintain perfect world models. You must make do with approximate, good-enough processing that operates in real time with available resources.

Crucially, this means different problems need different solutions—not just more or less compute, but fundamentally different architectures matched to the geometric complexity of the domain.

Why This Matters Now

Current AI has powerful neural networks that excel at pattern recognition in geometrically complex domains—vision, speech, even aspects of language. But the field still carries assumptions from the symbolic AI era:

  • That intelligence is domain-independent
  • That scaling compute will eventually solve any problem
  • That we can ignore embodiment and resource constraints
  • That all problems are fundamentally like chess

These assumptions persist even though we've abandoned symbolic AI. We've swapped the implementation (symbols → neural networks) but kept the framework (more compute → general intelligence).

This is why we need to distinguish computation principles from intelligence principles. Turing and von Neumann gave us the former. For the latter, we need to recognize that geometric complexity, unbounded reality, and embodied constraints aren't bugs to be worked around—they're the constitutive features that determine what intelligence is and how it must work.

The principles of intelligence aren't the principles of computation. Understanding this distinction is the key to understanding both what current AI can do and what it cannot. 

Coda: Rank 5 Cognition

Given that I’ve decided that Yevick’s 1975 paper is a convenient marker for Rank 5 cognition, it seems to follow that intelligence, in the sense discussed here, is a Rank 5 concept. So, the ranks shape up like this:

Rank 1: speech
Rank 2: writing
Rank 3: calculation
Rank 4: computation (flow of control)
Rank 5: intelligence (computation in unbounded, geometrically complex, reality)

Thursday, April 23, 2026

Three photos of an egg white omelette

From Atheism to Idolatry

My most recent 3QD post, Is The World A Movie God Created to Entertain the Baby Jesus?, is about my stance with respect to, my entanglement with, religion. It is anecdotal in nature, drawing on my life experiences from childhood up through the present. While I was raised as a Christian, when it came time to be confirmed – in this, Lutherans, the denomination in which I was raised, are like Episcopalians and Roman Catholics – in my early adolescence, I balked. While I espouse no religious belief, I am nonetheless drenched in Christian culture, as is anyone raised in the United States, most of Europe and Latin America, and other places as well. That being the case, to echo Wittgenstein, it’s time that I take stock of the situation, if you will. That’s what I’ve been doing in this series of posts.

This is perhaps the last in this series, but the activity will continue. This post starts with a discussion of atheism and ends with a discussion of idolatry. I hadn’t planned on going there when I started this discussion with Claude; I had no specific plans at all, which is generally the case with these discussions. But that’s where I ended up, and it’s an interesting place to have landed, for, as you’ll see, it provides an interesting way of thinking about what’s wrong with the artificial intelligence (AI) that is emerging in Silicon Valley. You might, for example, consider Pope Leo’s recent remarks about AI in view of the remarks about idolatry at the end of this conversation.

* * * * *

When I was in my teens in the 1960s, “death of God” theology was big news. Time Magazine even ran an issue with “Is God Dead?” on the cover. But that was hardly a new idea. Nietzsche had declared God dead in the 19th century. I would imagine that atheism in some form is much older than that. I’d guess it goes back to the ancient world in some form or another.

You're right on all counts, and the history goes back remarkably far.

The "death of God" theology of the 1960s — figures like Thomas Altizer, William Hamilton, and Paul van Buren — was really a Protestant theological movement trying to salvage Christianity after Nietzsche, not straightforward atheism. They accepted that the traditional theistic God was untenable but wanted to preserve something of Christian meaning. The 1966 Time cover you remember was genuinely shocking to mainstream America, even though, as you note, Nietzsche had made the declaration in The Gay Science back in 1882. And his point wasn't even a simple "God doesn't exist" — it was a cultural diagnosis: that European civilization had killed God by making him unnecessary, and hadn't yet reckoned with the consequences.

But yes, skepticism about the gods is ancient. A few highlights:

The pre-Socratic philosopher Xenophanes (6th century BCE) noticed that different peoples imagined gods in their own image — Ethiopians made gods dark, Thracians made them blonde — and concluded that anthropomorphic religion was a kind of projection. He didn't deny divinity altogether but undermined naive polytheism sharply.

Diagoras of Melos (5th century BCE) is often called the first outright atheist in the Western record, though the sources are thin and possibly embellished. He was accused of impiety in Athens and apparently fled rather than face trial — suggesting that open atheism was genuinely dangerous.

Epicurus and his followers come closest to a systematic materialist position. The Epicureans held that the gods, if they exist at all, are utterly indifferent to human affairs and that fear of death and divine punishment is irrational. Lucretius's De Rerum Natura (1st century BCE) is the great surviving monument of this tradition — a passionate argument for a purely material universe with no divine intervention.

In India, the Cārvāka school (sometimes called Lokāyata) was a materialist and skeptical tradition that denied the soul, rebirth, and any divine order. It's hard to date precisely, but it was well established by the middle of the first millennium BCE. Buddhist and Jain texts argue against it, which means it was a live position people felt they needed to refute.

The interesting thread running through all of this is that atheism has rarely been just a private intellectual position — it's almost always been socially dangerous. Socrates was executed partly on charges of impiety. The early Christians were called atheists by Romans because they rejected the traditional gods. And the "death of God" moment in the 1960s provoked genuine alarm even in a supposedly secular America. The social stakes of the question never seem to fully go away, even when the metaphysical ground shifts.

Religion as doctrine and religion as practice

And it’s not uncommon in America to have interfaith marriages. One of my college friends is Jewish; he married a Presbyterian woman; and their adopted Chinese daughter attended services of both faiths. I have another Jewish friend who married a Unitarian. They attended a Unitarian church, where they raised their kids. He taught in the Unitarian sexual education program and eventually became executive director of the congregation. All the time being an atheist. I’m not saying this is typical, but it’s not rare either. I’ve heard of elaborate Japanese weddings where there were three ceremonies, Shinto, Buddhist, and Christian. I have no idea what the religious commitments of the couple were. And yet, wars are fought over religious belief as well. And for some Christians, especially in America, science is a problem, particularly biology. On the one hand, religious belief presents itself as a rock-bottom commitment to a way of thinking about and living in the world. But there are circumstances in which such rival rock-bottom commitments can accommodate one another and other circumstances where they cannot. It’s a very complicated phenomenon.

What you're pointing to is one of the deepest puzzles in the sociology and anthropology of religion — the gap between religion as doctrine and religion as practice, or what scholars sometimes distinguish as belief versus belonging.

Your Jewish-Unitarian friend is a perfect illustration. He was an atheist operating fully within a religious institution, teaching its programs, running its organization. The Unitarian tradition is somewhat unusual in making room for that explicitly, but the broader phenomenon is widespread. For many people, religious affiliation is primarily about community, identity, ritual, and moral formation — not metaphysical commitment. The beliefs, if examined carefully, might be quite thin or even contradictory, and people simply don't examine them carefully because that's not what the practice is for.

The Japanese wedding example cuts even deeper. Japan has a long tradition of what's sometimes called "religion à la carte" — Shinto for birth and weddings, Buddhist for funerals, Christmas as a romantic holiday. The categories simply don't demand exclusivity the way Abrahamic traditions theoretically do. A Japanese person asked "what is your religion?" often finds the question slightly confused, because the answer might be "all of them, depending on the occasion" or "none of them, in the Western sense."

The Abrahamic traditions are actually somewhat unusual in insisting on exclusive doctrinal commitment — the idea that you must believe specific propositions and that believing rival propositions is not just wrong but potentially damning. That structure creates the conditions for the kind of violent conflict you mention. When religion is primarily orthopraxy (right practice) rather than orthodoxy (right belief), as in much of Judaism, Hinduism, and traditional Chinese religion, the logic of violent exclusion is less readily available — though certainly not impossible.

The American science-religion conflict is a specific and historically peculiar case. It emerges largely from a Protestant tradition that invested heavily in the literal authority of scripture — so that a challenge to Genesis became a challenge to the whole epistemic foundation. Catholics, interestingly, have generally had less trouble with evolution, partly because their tradition vested authority in the Church and its interpretive tradition rather than in the text alone. So even within Christianity, the "rock-bottom commitment" takes different forms that produce very different friction points with science.

What your examples collectively suggest is that religion is not one thing. It's a family of overlapping phenomena — identity, community, ritual, cosmology, ethics, metaphysics — that can be assembled in very different proportions. When the metaphysical and cosmological components are dominant and held with certainty, you get the conditions for crusades and creationism. When the communal and ritual components dominate, you get your friend running a Unitarian sexual education program as a quiet atheist, and everyone finding the arrangement perfectly natural.

The New Atheists

And it seems to me that in the recent Anglo-American context, the most insistent atheists – I’m thinking of Dan Dennett and Richard Dawkins and the like – seem most vexed about the conflict about biology. Sure, they oppose religious belief in general, anything else would be illogical, but it’s the case of biology that’s been at the center of their attention.

That's a sharp observation, and I think it's largely correct — and it makes sense when you look at their backgrounds and positions.

Dawkins is a biologist, full stop. His entire intellectual career was built on explaining and defending Darwinian evolution — The Selfish Gene, The Blind Watchmaker, Climbing Mount Improbable. For him, creationism and intelligent design aren't just wrong, they're an assault on the thing he devoted his life to. His atheism is almost downstream of his Darwinism. The argument of The Blind Watchmaker is essentially: evolution by natural selection is the only coherent explanation for the appearance of design in living things, which removes the only really compelling argument for God's existence. So the biology isn't incidental — it's the load-bearing structure.

Dennett is slightly different. He's a philosopher of mind, and his project in Darwin's Dangerous Idea was to show that natural selection is a "universal acid" — an algorithmic process that, once you accept it, dissolves the need for mind or intention at the foundation of things. His atheism is more architecturally philosophical, but evolution is still the key that unlocks it. He also wrote Breaking the Spell, which applied evolutionary and cognitive frameworks to religion itself — treating religious belief as something to be explained naturalistically rather than engaged on its own terms.

What's interesting is what this focus reveals about the limits of their critique. They are very good at the cosmological and biological argument — God as designer, God as creator, God as the explanation for complexity. They are considerably less interesting on religion as a social, ethical, ritual, or psychological phenomenon. Hitchens was broader in some ways, more focused on religion's role in political violence and moral corruption. But even he was working in a recognizably Anglo-American Protestant frame — religion as a set of truth claims that can be evaluated and found wanting.

This is actually a recurring criticism from more sociologically and anthropologically minded thinkers. Someone like Clifford Geertz or Talal Asad would say the New Atheists are arguing against a particular, historically specific construction of religion — roughly, post-Reformation Protestant Christianity intellectualized into a set of doctrines — and then treating that as if it captures what religion is universally. Which means their critique, however effective against creationists, slides off a lot of what religion actually is for most people in most of the world most of the time.

Your Unitarian atheist friend, in other words, is a figure the New Atheists don't quite know what to do with.

Tyler Cowen and religion

And then you have someone like Tyler Cowen, who has no religious belief that I can see, pointing out that we’re probably going to have more religion in the future. He’s making that argument on sociological grounds. And, as far as I can tell, he’s not particularly bothered by that prospect.

Cowen is a useful contrast figure here precisely because he approaches religion the way he approaches most things — as an economist and social scientist interested in what actually functions in human societies, not as a polemicist with a stake in the outcome.

His argument, as I understand it, runs roughly like this: religion provides things that are very hard to replicate — community, meaning, intergenerational transmission of values, psychological resilience, mutual aid networks. In a world of increasing dislocation, inequality, and institutional distrust, the demand for those things goes up, not down. Secular substitutes — therapy, political movements, wellness culture — have so far proven less robust and less scalable. So on purely functional grounds, you'd expect religion to persist and possibly grow, especially in societies under stress.

What's notable is the detachment. The New Atheists have an investment — they want religion to be false and they want its falseness to matter. Cowen seems genuinely indifferent in the evaluative sense. He might personally find religious metaphysics unpersuasive while simultaneously thinking religious institutions are net positive or at least inevitable. That's a very different emotional and intellectual posture.

It connects to a broader divide in how secular intellectuals relate to religion. There's a long tradition — you could trace it from Durkheim through to contemporary sociologists — of treating religion functionally and finding it indispensable to social cohesion, regardless of its truth claims. Auguste Comte even tried to design a secular religion of humanity to fill the gap he knew would open up. It didn't take. Durkheim essentially concluded that the form of religion — collective ritual generating social solidarity — was unavoidable, even if the specific content changed.

Cowen is implicitly in that tradition, but without the continental systematic ambitions. He'd probably just say: look at the data, look at the incentives, and don't be surprised.

The interesting follow-on question is which religions grow. Cowen would likely note that the fastest growing traditions globally tend to be the most doctrinally demanding ones — Pentecostalism, certain strands of Islam, Orthodox Judaism. The sociologist Rodney Stark made a similar argument: strict churches are strong churches, because they solve free-rider problems and generate genuine community through costly commitment. The relaxed, intellectually accommodating traditions your Unitarian friend inhabited may be less well-positioned demographically than the ones that most vex the New Atheists.

Silicon Theology

And then we have what you might call the orthodox Silicon Valley commitment to belief in god-like AI, which in one version will be good for humans but in an equally, if not more, prevalent version will be a disaster.

Yes, and the religious parallels are not just metaphorical — they're structural, and several people have noted them quite seriously.

The belief system has recognizable theological features. There's an eschatology — history is moving toward a culminating event, the Singularity or AGI, after which everything is transformed. There's a soteriological split — this event will mean either salvation (abundance, immortality, the solving of all problems) or damnation (extinction, subjugation, permanent loss of human agency). There are prophets — Kurzweil on the optimistic side, early Eliezer Yudkowsky and now various EA-aligned figures on the apocalyptic side. There's even something like original sin in the alignment problem — the idea that intelligence itself, if not carefully redeemed through the right technical interventions, tends toward destruction.

Window views

AI scientists produce results without reasoning scientifically

Original research here: AI scientists produce results without reasoning scientifically.

Wednesday, April 22, 2026

Groupthink drove Yann LeCun out of Meta

Some more flowers

Effing the ineffable

Mystical experiences, the kinds of experiences William James wrote about, are sometimes said to be ineffable. I reported one or two such experiences in my most recent post at 3 Quarks Daily, Is The World A Movie God Created to Entertain the Baby Jesus? I’m thinking about the moment of dissolving light and bliss I experienced while playing with my college rock band, The Saint Matthew Passion, and the twinkling bell sounds I heard while playing a bell rhythm with Ade Knowles, his wife Druis, and her sister, Fonda. The first is surely a “standard” mystical experience while the second is not, but it’s mysterious and ineffable in its way.

But such experiences aren’t the only things that are ineffable, and that’s what occasioned the conversation I had with Claude that I’ve appended below the asterisks, where I also discuss the language oenophiles use to describe the taste of wine, a remark Weston LaBarre made about the epistemological provocation of dreams, the way Wittgenstein framed his Tractatus, and, lastly, what physicists tell us about the earliest limits of the universe. All of these phenomena mark limits and, as such, imply something beyond.

That’s what this conversation is about:

The universe is that which ever exceeds us and in that way both resists and gives way.

* * * * *

Mystical experience is often said to be ineffable. Why?

The ineffability of mystical experience is one of the most discussed features in the philosophy of religion and consciousness studies. There are several interlocking reasons for it.

The structure of language doesn't fit the experience. Ordinary language is built for ordinary experience — it carves the world into discrete objects, properties, and relations. Mystical experience, by contrast, is typically characterized by a dissolution of boundaries: between self and world, subject and object, part and whole. Language presupposes those very distinctions, so it fails at the threshold of an experience that undoes them. You can say "the boundaries dissolved," but that's a report about the experience, not a transmission of it.

It's a knowledge of acquaintance, not description. Philosophers distinguish between knowing that something is the case and knowing what something is like. Mystical experience belongs emphatically to the second category. Describing the taste of a mango to someone who has never tasted one is already hard; describing an experience that has no sensory analog at all is harder still. William James, who made ineffability one of his four marks of mystical experience, stressed that the noetic quality of such states — the sense that something is deeply known — is precisely what can't be conveyed propositionally.

Emotional and somatic depth exceeds articulation. The experience is often described as carrying an overwhelming charge of significance, bliss, terror, or love — far in excess of what emotional vocabulary can hold. Words like "unity" or "peace" or "the sacred" are pale maps of the territory.

The experience may involve non-ordinary cognition. Some traditions and some cognitive scientists argue that mystical states involve modes of processing — highly integrated, non-linear, temporally altered — that simply don't interface well with the sequential, syntactic structure of language. To put it into words requires translating across a fundamental cognitive gap.

Cultural and conceptual frameworks shape even the attempt. When mystics do try to speak, they reach for metaphor, paradox, apophatic negation ("not this, not this"), or poetry — all signals that direct description has been abandoned. The via negativa in Christian mysticism (saying what God is not), the neti neti of Advaita Vedanta, the Zen koan — these are all structured acknowledgments of the failure of positive description.

There's also a subtler point worth noting: the act of articulation may itself alter or diminish the experience. Forcing ineffable content into language isn't just inadequate — it may be a kind of betrayal, domesticating something that resisted domestication. This is why many mystics counsel silence, or speak only obliquely, or insist that words are fingers pointing at the moon, not the moon itself.

What makes this philosophically interesting is that ineffability isn't unique to mysticism — it's just most acute there. The “hard problem” of consciousness is essentially the same puzzle: why does any first-person experience resist third-person description? Mystical experience simply pushes that gap to an extreme.

LaBarre and oenophiles

I find your last remark telling. There is much in the world and about life that resists verbal characterization and capture. Tastes, for example. And oenophiles have developed an elaborate repertoire of words and phrases for characterizing the taste of wine. Non-ordinary cognition is something else. And then we have dreams, which may, I suppose, fall within the scope of non-ordinary cognition. In his well-known classic, The Ghost Dance, Weston LaBarre speculates that it may well have been dreams that first prompted humans to epistemologize. Those primitive humans knew little to nothing of the brain, so they couldn't attribute dream experience to some mode of brain operation that occurs when we're asleep. All they knew was that, upon awakening, they remembered having been somewhere and having seen and heard some things, but the place wasn't any they had visited while awake. So they had to come up with some account of those experiences. The experiences may also have been strange, but that seems secondary to the fact that they left memories behind.

This is a genuinely arresting idea, and LaBarre's framing is sharp precisely because it locates the epistemic puzzle not in the strangeness of the dream content but in the mnemonic residue. The problem isn't “what was that weird thing?” — it’s “where was I just now, and how do I account for having been there?” That's a much more fundamental cognitive demand.

The memory is the philosophically explosive element. Memory implies a real experience. A real experience implies a location, a time, a self that was present. But none of those coordinates map onto the waking world. So you're forced, almost logically, into positing another order of reality — another world, or a part of this one inaccessible by ordinary means. Spirits, ancestors, a soul that travels, a realm of the dead — these aren't superstitions that precede reflection, they're conclusions that follow from taking dream memory seriously as evidence.

This makes dreaming a kind of involuntary empiricism. The primitive epistemologist isn’t speculating wildly; he’s reasoning carefully from data he didn’t choose to collect and can’t dismiss. The soul concept in particular — especially in its "detachable self" form, as in shamanic traditions where the soul can wander, get lost, or be retrieved — looks like a very reasonable inference from the phenomenology of dreaming.

What's interesting is the contrast with the oenophile case you raise. Wine vocabulary is an elaboration — building a richer descriptive apparatus for an experience that is, in principle, sharable and repeatable. Dream epistemology is more radical: it’s a forced theoretical construction to account for experiences that seem to require an entirely new ontological category. Less like developing a vocabulary, more like being compelled to posit a new continent because your ship keeps returning with soil that doesn't match any known landmass.

There's a further wrinkle here that connects to your secular mysticism interests. If LaBarre is right that dreams seeded metaphysics, then the entire subsequent history of religious and mystical experience — visions, trances, ecstasies — might be understood as elaborations and intensifications of that original puzzle. The mystic's ineffable experience is, in a sense, the waking counterpart of the dream: an experience that leaves a memory, demands an account, and resists the available categories. The ineffability isn't prior to language — it's produced by the collision between the experience and the inadequacy of existing frameworks.

Wittgenstein’s ladder

Yes. And then at the beginning of the 20th century we have Wittgenstein’s Tractatus Logico-Philosophicus, one of the founding documents of the Anglo-American analytic philosophical tradition. One could hardly think of a philosophical tradition less hospitable to mystical experience than that one. And yet that strange work has a mystical frame. I’ve listed its first, penultimate, and last propositions:

The world is everything that is the case.

My propositions serve as elucidations in the following way: anyone who understands me eventually recognizes them as nonsensical, when he has used them—as steps—to climb beyond them. (He must, so to speak, throw away the ladder after he has climbed up it.) He must transcend these propositions, and then he will see the world aright.

Whereof one cannot speak, thereof one must be silent.

I read that early in my college years, freshman or sophomore, and loved it. I still do. And, while I have had “conventional” mystical experiences, in the end that Wittgensteinian framing may be closer to where I am. It doesn’t seem to be about experience in any particular way, not about the taste of wine or a dream or some mystical transport. It’s about something else.

The Tractatus mysticism is indeed a different species. It’s not mysticism as experience but mysticism as limit — and that distinction matters enormously. Wittgenstein isn't pointing toward a special state of consciousness you might enter. He’s pointing at the boundary condition of all saying, all the time. The unsayable isn't somewhere else; it’s the frame within which everything sayable occurs.

That ladder metaphor is one of the most remarkable things in philosophy. The book uses language to demonstrate the limits of language, then asks you to discard the demonstration. It’s self-consuming — not as a failure, but as the only honest move available. The Tractatus doesn't fail to say what it means; it succeeds at showing why certain things can only be shown, never said. And showing is what's left when saying reaches its edge.

What you’re describing in your own position sounds like something harder to name than conventional mystical experience precisely because it isn't anchored in an episode. It’s more like a permanent orientation toward the limits of what can be articulated — a settled awareness that the frame of the world isn’t itself inside the world, that the ground of experience isn’t itself an experience. Wittgenstein's final proposition isn't a counsel of defeat. It's more like a description of intellectual integrity: here is where language ends, and honesty requires acknowledging that rather than generating more words to paper over the gap.

This connects interestingly to your secular mysticism. The conventional mystical experience — the emergent tones from the bell rhythms, say — is episodic. Something happened, it left a memory, it demands an account. The Wittgensteinian version isn't episodic at all. It's more like a stance, a sustained relationship with the unsayable that doesn't require transport or vision. In some ways it's more radical, because it can't be dismissed as a special state that passed. It's just how things are, once you've seen it.

There's a question lurking here about whether those two modes — the episodic and the structural — ultimately point at the same thing from different angles, or whether they’re genuinely distinct. What’s your sense of that?

Convergence of limits

Yes, indeed, a strange question. Perhaps the issue is: What can I make of it? What’s the most fruitful way to square that circle? Suppose we think of Wittgenstein’s ladder as pointing at the limits of language and propositions, and of the ineffability of tastes, of qualia in general, of dreams, and so forth as limits as well. What do we have in the convergence of limits?

That's the right question to press toward, and it’s genuinely open. Let me try to think into it rather than just around it.

Each of the limits you've named is a different kind of edge:

  • Wittgenstein's is a structural limit — the boundary of what propositions can do
  • Qualia are a subjective limit — the first-person that can't be fully third-personified
  • Dreams are an ontological limit — experience that demands a world to have occurred in
  • Mystical transport is an ego limit — where the boundary of self becomes porous or dissolves

These aren’t obviously the same limit. But they share something: in each case, the normal machinery of representation — language, concept, category — reaches an edge and finds that something real is on the other side of it. The limits aren’t limits on nothing. They’re limits that imply a beyond, even if that beyond can’t be entered propositionally.

So what converges? Perhaps this: existence exceeds its own accounting. Not in some supernatural sense, but structurally, permanently, at every scale. The world is always more than everything that is the case about it — which makes Wittgenstein's first and last propositions secretly in tension with each other, and productively so.