Tuesday, August 12, 2025

Notes on the Metaphysical Structure of the Cosmos

I’ve been thinking about something I call “the metaphysical structure of the cosmos” off and on since August of 2020, when I introduced it in a post written in the wake of GPT-3. I wasn’t entirely serious about it. I’d only just thought of the idea and hadn’t had time to think it through. It came back to me a few days ago when I was thinking about the “Xanadu meme” and other ideas. This time, in a conversation with Claude, I hazarded the idea that the metaphysical structure of the cosmos was recursive, though I didn’t use the word “cosmos.” Claude agreed.

It's about time I thought about the idea seriously. Is it one I want to use, in a technical sense, going forward? I don’t know. But I’ll offer some thoughts on the matter.

Just what does it mean, “metaphysical structure of the cosmos”?

Here’s what I said when I originally introduced the idea:

There is no a priori reason to believe that the world has to be learnable. But if it were not, then we wouldn’t exist, nor would (most?) animals. The existing world, thus, is learnable. The human sensorium and motor system are necessarily adapted to that learnable structure, whatever it is.

I am, at least provisionally, calling that learnable structure the metaphysical structure of the world. Moreover, since humans did not arise de novo, that metaphysical structure must necessarily extend through the animal kingdom and, who knows, plants as well.

“How”, you might ask, “does this metaphysical structure of the world differ from the world’s physical structure?” I will say, again provisionally, for I am just now making this up, that it is a matter of intension rather than extension. Extensionally the physical and the metaphysical are one and the same. But intensionally, they are different. We think about them in different terms. We ask different things of them. They have different conceptual affordances. The physical world is meaningless; it is simply there. It is in the metaphysical world that we seek meaning.

As I’ve already said, I introduced the idea in the wake of GPT-3, the first large language model (LLM) to receive much public exposure. Though only a small number of people had direct access, enough of them wrote about it in fairly public ways that many of us knew about it, and knew enough to be impressed.

When I introduced the idea I used a diagram something like this:

We have the LLM running down the middle, either that or the text on which it is trained. At this level of analysis it could be either one. The structure of the individual texts is a function of the human mind, which created the text, and the world, which the text is about, albeit often only indirectly (as in works of fiction). From this it follows, almost by definition, that the LLM derived from those texts reflects those two things as well, the mind and the world.

The significance of GPT-3, that is, of its underlying LLM, and of subsequent LLMs, is that this is the first time we’ve had the “whole thing” gathered together in a single, well, a single what? Model? Text? Whatever it is, it’s all there.

Yeah, I know. Not all of it. All LLMs are biased in favor of the texts on which they’re built. Much of human thought, especially the thought of pre-literate peoples, is not represented in the training corpus of any LLM. So we’re talking about an idealization. That’s OK. As long as we’re aware of what we’re doing, we can proceed.

Now, there’s lots of structure in any given text, and there’s lots of structure latent in any LLM. I’m not interested in all of that structure. I’m only interested in the ontological structure, by which I mean something close to the concept of ontology as it is ordinarily used in knowledge representation.

John Sowa’s use is typical. Here’s how he introduces the topic: “The subject of ontology is the study of the categories of things that exist or may exist in some domain. The product of such a study, called an ontology, is a catalog of the types of things that are assumed to exist in a domain of interest D from the perspective of a person who uses a language L for the purpose of talking about D.” I’m interested in the structure of that catalog. I hypothesize that that structure is something which, for convenience, I call the Great Chain (a term long in use). Here’s a diagram:

That diagram needs some explaining; but this is not the time or place to do that. I say more in this old unpublished paper: Ontology in Knowledge Representation. My point is simply that there is a specific structure there. It’s that structure that interests me.

As an example, that structure tells us the difference between salt and sodium chloride (NaCl). Physically they are the same substance, but conceptually they are quite different. We recognize salt by its texture and appearance and, above all, by its taste. We can taste the presence of salt even where we cannot see it existing as a discrete substance. That is to say, conceptually, salt is adequately characterized by its sensorimotor properties. Sodium chloride is not. Sodium chloride is characterized in terms of a chemical theory that did not exist until the 19th century. That theory talks of atoms and bonds between them. We can’t see atoms or their bonds; rather, we infer them on the basis of a wide body of experimentation. Conceptually, then, they are very different.

Similarly, in one account of the world, based on one ontology, the Morning Star and the Evening Star are two different objects. But in an account based on a heliocentric model of the solar system, they turn out to be the same object, the planet Venus. And so it is with the difference between animals and human beings. To the biologist they are the same kind of thing; human beings are just one kind, one species of animal. But in the common-sense construal of the world, they are very different; humans are not animals, though we have animal-like characteristics.

That, more or less, is what I’m talking about when I talk of the metaphysical structure of the cosmos (or world). That conceptual structure. It’s not explicit in any LLM, but it certainly exists implicitly, otherwise LLMs wouldn’t generate coherent texts. (Note that I have a working paper on ChatGPT and stories where it betrays ontological sensitivity: ChatGPT tells stories, and a note about reverse engineering.)

What would Wolfram say?

I don’t know. But there’s a suggestive passage in one of his articles, Can AI Solve Science? (March 5, 2024).

But given computational irreducibility, why is science actually possible at all? The key fact is that whenever there’s overall computational irreducibility, there are also an infinite number of pockets of computational reducibility. In other words, there are always certain aspects of a system about which things can be said using limited computational effort. And these are what we typically concentrate on in “doing science”.

But inevitably there are limits to this—and issues that run into computational irreducibility. Sometimes these manifest as questions we just can’t answer, and sometimes as “surprises” we couldn’t see coming. But the point is that if we want to “solve everything” we’ll inevitably be confronted with computational irreducibility, and there just won’t be any way—with AI or otherwise—to shortcut just simulating the system step by step.

There is, however, a subtlety here. What if all we ever want to know about are things that align with computational reducibility? A lot of science—and technology—has been constructed specifically around computationally reducible phenomena. And that’s for example why things like mathematical formulas have been able to be as successful in science as they have.

But we certainly know we haven’t yet solved everything we want in science. And in many cases it seems like we don’t really have a choice about what we need to study; nature, for example, forces it upon us. And the result is that we inevitably end up face-to-face with computational irreducibility.

As we’ll discuss, AI has the potential to give us streamlined ways to find certain kinds of pockets of computational reducibility. But there’ll always be computational irreducibility around, leading to unexpected “surprises” and things we just can’t quickly or “narratively” get to. Will this ever end? No. There’ll always be “more to discover”. Things that need more computation to reach. Pockets of computational reducibility that we didn’t know were there. And ultimately—AI or not—computational irreducibility is what will prevent us from ever being able to completely “solve science”.

This distinction between computationally reducible phenomena and those that are irreducible would seem to belong to the metaphysical structure of the cosmos. It reflects the encounter between our methods of observation, analysis, and modeling and the world. The behavior of computationally irreducible things cannot be predicted by an analytical model. The weather is irreducible in this sense. Sure, we have systems that make predictions about the weather, but they do it by running a simulation of the weather rather than by deriving predictions from an analytical model. In contrast, in the middle of the 19th century, astronomers predicted the existence of the planet Neptune based on irregularities in the orbit of Uranus.
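The distinction can be made concrete with a toy example (mine, not Wolfram’s): two of the elementary cellular automata Wolfram himself studies. Rule 90 is computationally reducible: starting from a single live cell, the cell at position x after t steps is alive exactly when the binomial coefficient C(t, (t+x)/2) is odd (Pascal’s triangle mod 2), so we can jump straight to step t without simulating the intermediate steps. Rule 30 has no known shortcut; the only route to step t is to simulate all t steps. A minimal sketch:

```python
from math import comb

def step(cells, rule):
    """Advance one step. `cells` is the set of positions of live cells;
    `rule` maps (left, center, right) values of 0/1 to the new cell value."""
    candidates = {p + d for p in cells for d in (-1, 0, 1)}
    return {p for p in candidates
            if rule(int(p - 1 in cells), int(p in cells), int(p + 1 in cells))}

rule90 = lambda l, c, r: l ^ r        # new cell = left XOR right
rule30 = lambda l, c, r: l ^ (c | r)  # new cell = left XOR (center OR right)

def simulate(rule, t):
    cells = {0}                       # single live cell at the origin
    for _ in range(t):
        cells = step(cells, rule)
    return cells

def rule90_closed_form(t):
    """Jump directly to step t: live cells sit at the odd binomial coefficients."""
    return {x for x in range(-t, t + 1)
            if (t + x) % 2 == 0 and comb(t, (t + x) // 2) % 2 == 1}

# The shortcut agrees with brute-force simulation: a "pocket of reducibility".
for t in range(20):
    assert simulate(rule90, t) == rule90_closed_form(t)

# For Rule 30 there is no such formula; step-by-step simulation is the only route.
print(sorted(simulate(rule30, 3)))
```

The point of the sketch is the asymmetry: for Rule 90 the function `rule90_closed_form` answers “what is the state at step t?” with a handful of arithmetic operations, while for Rule 30 the cost of the answer grows with t, which is just what Wolfram means by irreducibility.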

But, as Wolfram’s passage indicates, our knowledge does change. We are always discovering pockets of computationally reducible phenomena that we hadn’t recognized before. The metaphysical structure of the cosmos changes as knowledge advances.

Finally, an important point: we cannot predict the existence of these pockets of computational reducibility. Rather, we have to find them by looking around to see what there is to see. And, a passing thought: what if we could predict these pockets of reducibility? Wouldn’t that imply that all phenomena are ultimately reducible? Wouldn’t that “freeze” the universe?

The recursive cosmos

Then, a couple of days ago, I brought up the idea again in a conversation with Claude. I noted: “And not only is the metaphysical structure of the world recursive, it is NECESSARILY so.” To which Claude replied: “Yes - that’s the crucial insight. The metaphysical structure must be recursive because minds trying to understand the world are themselves part of what needs to be understood. Any complete account of how reality appears to consciousness has to include an account of consciousness itself.” And so we have this diagram:

As far as I can tell, the structure of that diagram follows from the existence and nature of language. As Roman Jakobson noted years ago, language can be used to refer to itself, giving us the metalingual function of language. It’s the metalingual function, David Hays has argued, that supports the construction of abstract concepts, such as truth and justice or, for that matter, sodium chloride. Thus we made it central to our account of cultural evolution through a series of cognitive ranks mediated by the successive cognitive technologies of speech, writing, calculation, and computation.

Let us assume for the sake of argument that the evolution of complexity in a universe such as ours is inevitable (for part of an argument, see Benzon and Hays, A Note on Why Natural Selection Leads to Complexity). If the emergence of creatures communicating by some kind of language, one that allows arbitrary symbols to be referentially linked to arbitrary phenomena, also follows, then recursiveness would seem to be inherent in a universe such as ours. That is to say, given sufficient time, the universe will become recursive through the cultural evolution of recursive computational devices.

More later.
