Sunday, June 21, 2020

Wolfram says it's computation all the way down

One of the questions that comes up when you imagine you might hold in your hand a rule that will generate our whole universe is: how do you then think about that? What's the way of understanding what's going on? One of the most obvious questions is why we got this universe and not another. In particular, if the rule that we find is a comparatively simple rule, how did we get this simple-rule universe?

The lesson since the time of Copernicus has been that our Earth isn't the center of the universe. We're not special in this or that way. If it turns out that the rule that we find for our universe is this rule that, at least to us, seems simple, we get to ask ourselves why we lucked out and got this universe with a simple rule. I have to say, I wasn't expecting that there would be a good scientific answer to that question. One of the surprises from this project to try to find the fundamental theory of physics has been that we have an understanding of how that works.

There are three levels of understanding of how the universe works in this model of ours. It starts from what one can think of as atoms of space, these elements that are knitted together by connectivity to form what ends up behaving like the physical space in which we move. The first level of what's going on involves these elements and rules that describe how elements connected in a particular way should be transformed into elements connected in some other way. When we look at, say, 10^100 or 10^400 of these elements, it's their connectivity that makes up what behaves like the space we're familiar with. And not only space: all of the things that are in space, all the matter and particles, are just features of this underlying structure and its detailed way of connecting these elements together.
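To make that concrete, here is a minimal Python sketch of the general idea: a hypergraph stored as a list of hyperedges over integer atoms, grown by repeatedly applying a single transformation rule. The rule and the function names are my own illustration, not one of the project's actual candidate rules; in the real models the rules rewrite matched subhypergraphs, not just the first edge in a list.

```python
# Toy hypergraph rewriting in the general spirit of the model described above.
# The rule {x,y} -> {x,y},{y,z} (z a brand-new atom) is illustrative only.
from itertools import count

def rewrite_step(edges, fresh):
    """Apply the toy rule once, to the first hyperedge in the list."""
    (x, y), rest = edges[0], edges[1:]
    z = next(fresh)                      # create a new "atom of space"
    return [(x, y), (y, z)] + rest

fresh = count(3)                         # atom ids 0..2 are used in the seed below
edges = [(0, 1), (1, 2)]                 # "space" is nothing but this connectivity

# Time, in this picture, is just the repeated application of the update rule.
for t in range(5):
    edges = rewrite_step(edges, fresh)
    atoms = {a for e in edges for a in e}
    print(f"step {t + 1}: {len(edges)} hyperedges, {len(atoms)} atoms")
```

Even this toy version shows the division of labor: the list of hyperedges is all there is to "space", and the step counter is all there is to "time".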

We've got this set of transformation rules that apply to those underlying elements. In this setup, space is a very different thing from time. One of the wrong turns of 20th-century physics was the idea that space and time should always be packaged together into a four-dimensional spacetime continuum. That's wrong. Time is different from space. Time is the inexorable operation of computation, figuring out what the next state will be from previous states, whereas space corresponds to something more concrete: the extent of, in this particular case, the hypergraph that knits these different elements together.
One rule:
The main point is that at every point in the system, you can have an update event corresponding to every possible rule you might apply. You, as an observer of the universe, could choose a frame in which you only consider one path through this ultra-multiway system of all possible rule applications: the path that corresponds to applying one particular rule. It's like saying: I've got my way of describing the universe, and I'm only going to consider that one. The fact that all these other possible paths are being followed, well, yes, but I'm not interested in those aspects of the universe; I'm just interested in the aspects that correspond to the particular rule I've identified as my reference frame for thinking about the universe. You might say, but there are all these other universes, and they're all doing different things. Because of this property of causal invariance, in the end it doesn't matter, because they're all in some sense doing the same thing.
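A small sketch may help with the multiway idea. The string-rewrite rules below are toy stand-ins of my own choosing (Wolfram often uses such string systems as simplified examples); from each state, every rule is applied at every possible position, and all of the resulting branches are kept.

```python
# Minimal multiway-system sketch: keep every state reachable by applying every
# rule at every possible position. The rules here are illustrative toys.

def successors(state, rules):
    """All states reachable from `state` by one application of one rule."""
    out = set()
    for lhs, rhs in rules:
        start = state.find(lhs)
        while start != -1:
            out.add(state[:start] + rhs + state[start + len(lhs):])
            start = state.find(lhs, start + 1)
    return out

rules = [("A", "AB"), ("B", "A")]
frontier = {"A"}
for step in range(4):
    frontier = {s for state in frontier for s in successors(state, rules)}
    print(f"step {step + 1}: {sorted(frontier)}")
```

An observer who fixes a frame in the sense described above is, roughly, following only the branches generated by one chosen rule and ignoring the rest.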
Universal computation:
This idea of universal computation already tells one something important: once you say the universe is something that can be generated by a universal computer, that fact tells you there's some sort of singleness to the way of describing the universe. In this rulial space of all possible rules, everything that's there is representable by a universal computer. You can translate between reference frames by essentially having one universal-computer description emulate another, by giving it the appropriate programming to emulate that other thing.
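As a reminder of what emulation means in practice, here is a sketch in which one universal system (Python, standing in for any universal computer) runs an interpreter for an arbitrary Turing machine given nothing but that machine's rule table. The particular machine below, which just zeroes out a block of 1s, is my own toy example.

```python
# One universal computer emulating another: an interpreter that runs any Turing
# machine from its rule table. The example machine is an illustrative toy.

def run_turing_machine(table, tape, state="start", head=0, blank="_", max_steps=1000):
    cells = dict(enumerate(tape))                      # sparse tape
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(head, blank)
        write, move, state = table[(state, symbol)]    # look up the emulated machine's rule
        cells[head] = write
        head += move
    return "".join(cells.get(i, blank) for i in range(min(cells), max(cells) + 1)), state

table = {
    ("start", "1"): ("0", +1, "start"),   # overwrite a 1, move right, keep scanning
    ("start", "_"): ("_", 0, "halt"),     # hit the blank marker: stop
}
print(run_turing_machine(table, "111_"))  # -> ('000_', 'halt')
```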

The one ultimate fact would be that the universe is computational, that the universe can be represented by a universal computer. That fact is not self-evident. It might not be true. It might be that there are hypercomputers that go beyond anything we can build out of Turing machines, and that our universe is a hypercomputer. I don't think it is, though. What we're learning from this adventure in studying fundamental physics is that there is a description of the universe in terms of ordinary universal computation, and we're finding the details of how that works.
Of Gödel and God:
As I said, I have this potential mathematical-logic way of trying to understand that. Gödel himself attempted a proof of the existence of God, a proof in mathematical logic. Proofs of things like that are fraught with issues. The question is, can you cut through all of that once you have an understanding of how our universe works? I don't know the answer, but that's the thing I'm curious about.

Of the things that we talk about today, what will look as primitive in its description as saying that there's an immortal soul? We might now say: this is an abstract computation that is immortal in the sense that the abstraction has nothing to do with the specifics of brain tissue; it's just an abstract, almost mathematical computation. What is it that we talk about today that will seem similarly naive? One of the main things that I see is this idea that so much of the universe isn't worth describing. What do I mean by that?
Computational irreducibility:
My claim, with what I call the Principle of Computational Equivalence, is that computational irreducibility is ubiquitous among computational systems. In particular, it's what makes nature seem complex to us: the computations nature is running are of the same sophistication as the computations running in our brains. We can't readily predict what's going to happen, because it's an irreducible computation; we just have to follow each step to see what happens. But if the universe follows some computational rule, why can we say anything about what it does? Why isn't it all mired in computational irreducibility?
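Wolfram's standard illustration of computational irreducibility is the rule 30 cellular automaton: as far as anyone knows, there is no shortcut for predicting, say, its center column, so you learn what it does by running it. A minimal Python sketch (my own encoding of the standard rule):

```python
# Rule 30: the new cell is 1 exactly for the neighborhoods 100, 011, 010, 001.
# No known formula predicts its behavior; you have to run it step by step.

def rule30_step(cells):
    cells = [0, 0] + cells + [0, 0]           # pad so the pattern can grow
    on = {(1, 0, 0), (0, 1, 1), (0, 1, 0), (0, 0, 1)}
    return [1 if (cells[i - 1], cells[i], cells[i + 1]) in on else 0
            for i in range(1, len(cells) - 1)]

row = [1]                                     # start from a single black cell
for _ in range(8):
    print("".join("#" if c else "." for c in row).center(40))
    row = rule30_step(row)
```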

I began this project to try to figure out fundamental physics thirty years ago. I've stopped many times, and at one point I stopped for a long period, for reasons that might be interesting to talk about. When I restarted it, I expected that we might be able to say something about the first 10^-1000 seconds after the beginning of the universe, but that after that we would be so mired in computational irreducibility that we wouldn't be able to make big statements about the universe. It turns out I was wrong.

What we learned is that there is a layer of computational reducibility. We already knew that within any computationally irreducible system there are always pockets of computational reducibility. What we realized is that basically most of physics, as we know it, lives in a layer of computational reducibility that sits on top of the computational irreducibility that corresponds to the underlying stuff of the universe. [...]
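To illustrate what a pocket of reducibility looks like, contrast rule 30 above with rule 254, where the pattern simply fills outward and the number of black cells after t steps has the closed form 2t + 1; no simulation is needed, though the sketch below checks the formula by running the system anyway.

```python
# Rule 254: every neighborhood except 000 produces a black cell, so the pattern
# just fills outward and admits the shortcut formula 2*t + 1 for its black cells.

def rule254_step(cells):
    cells = [0, 0] + cells + [0, 0]
    return [0 if (cells[i - 1], cells[i], cells[i + 1]) == (0, 0, 0) else 1
            for i in range(1, len(cells) - 1)]

row = [1]
for t in range(6):
    assert sum(row) == 2 * t + 1              # the reducible shortcut matches the run
    row = rule254_step(row)
print("closed form 2t + 1 verified for the first 6 steps")
```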

If we can firmly establish this fundamental theory of physics, we know it's computation all the way down. Once we know it's computation all the way down, we're forced to think about it computationally. One of the consequences of thinking about things computationally is this phenomenon of computational irreducibility. You can't get around it. We have always had the point of view that science will eventually figure out everything, but computational irreducibility says that can't work. It says that even if we know the rules for a system, it may be that we can't work out what the system will do any more efficiently than by just running the system and seeing what happens, just doing the experiment, so to speak. We can't have a predictive theoretical science of what's going to happen.
Three mistakes:
As I look back, there were basically three big mistakes in the history of physics that made it more difficult to see what we're now seeing. One is Euclid's mistake: the idea that space is continuous, that a point is indivisible, that there's a continuum of space, that space isn't made from discrete atoms. Another wrong turn came in the early 1900s with the idea of spacetime. Einstein didn't really talk about that; he talked about space and he talked about time. Then Minkowski came along and said, mathematically it's convenient to package space and time together and think of them as the same thing. That was, in the end, a mistake. The same results emerge, but you think about them differently. The third, more recently realized, is in the description of how things work quantum mechanically: the idea that quantum amplitudes are simply complex numbers. That's a mistake. In mathematical terms, a complex number can be described as a magnitude and a phase, and those really have to be separated, because they come from different places.
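For reference, the decomposition being appealed to is the standard polar form of a complex number, z = r e^(i theta); the point above is that the magnitude r and the phase theta should be tracked as separate quantities rather than bundled into one complex amplitude. A trivial sketch:

```python
# Polar decomposition of a complex amplitude into magnitude and phase.
import cmath

z = 0.6 + 0.8j                        # an illustrative amplitude
r, theta = abs(z), cmath.phase(z)     # magnitude and phase (radians)
print(r, theta)                       # -> ~1.0 and ~0.927
assert cmath.isclose(z, r * cmath.exp(1j * theta))
```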
Scale:
There are phenomena where you have to know the scale factor, you have to know the value of Planck's constant, and until you do, it's very hard to tell people to go out and try to observe these things. There are predictions of these models, for example, that there will be a maximum entanglement speed in quantum mechanics; we just don't know how big it is. And that's a kind of prediction that hasn't existed in other parts of physics.
