Thursday, April 1, 2021

Wolfram on consciousness as narrativity [though he doesn’t use the word]

Stephen Wolfram has some recent remarks on consciousness, “What Is Consciousness? Some New Perspectives from Our Physics Project.” Naturally it is embedded in his computational view of the world, which he laid out in A New Kind of Science and has been elaborating ever since. I have read large chunks of the book, but don’t claim technical mastery, and have only blitzed through his remarks on consciousness. But this passage struck me because it emphasizes the temporal nature of consciousness:

Operationally, there’s potentially a rather straightforward way to think about this, though it depends on our recent understanding of the concept of time. In the past, time in fundamental physics was usually viewed as being another dimension, much like space. But in our models of fundamental physics, time is something quite different from space. Space corresponds to the hypergraph of connections between the elements that we can consider as “atoms of space”. But time is instead associated with the inexorable and irreducible computational process of repeatedly updating these connections in all possible ways.

There are definite causal relationships between these updating events (ultimately defined by the multiway causal graph), but one can think of many of the events as happening “in parallel” in different parts of space or on different threads of history. But this kind of parallelism is in a sense antithetical to the concept of a coherent thread of experience.

And as we’ve discussed above, the formalism of physics—whether reference frames in relativity or quantum mechanics—is specifically set up to conflate things to the point where there is a single thread of evolution in time.

So one way to think about this is that we’re setting things up so we only have to do sequential computation, like a Turing machine. We don’t have multiple elements getting updated in parallel like in a cellular automaton, and we don’t have multiple threads of history like in a multiway (or nondeterministic) Turing machine.

The operation of the universe may be fundamentally parallel, but our “parsing” and “experience” of it is somehow sequential. As we’ve discussed above, it’s not obvious that such a “sequentialization” would be consistent. But if it’s done with frames and so on, the interplay between causal invariance and underlying computational irreducibility ensures that it will be—and that the behavior of the universe that we’ll perceive will follow the core features of twentieth-century physics, namely general relativity and quantum mechanics.

But do we really “sequentialize” everything? Experience with artificial neural networks seems to give us a fairly good sense of the basic operation of brains. And, yes, something like initial processing of visual scenes is definitely handled in parallel. But the closer we get to things we might realistically describe as “thoughts” the more sequential things seem to get. And a notable feature is that what seems to be our richest way to communicate thoughts, namely language, is decidedly sequential.

When people talk about consciousness, something often mentioned is “self-awareness” or the ability to “think about one’s own processes of thinking”. Without the conceptual framework of computation, this might seem quite mysterious. But the idea of universal computation instead makes it seem almost inevitable. The whole point of a universal computer is that it can be made to emulate any computational system—even itself. And that is why, for example, we can write the evaluator for Wolfram Language in Wolfram Language itself.

The Principle of Computational Equivalence implies that universal computation is ubiquitous, and that both brains and minds, as well as the universe at large, have it. Yes, the emulated version of something will usually take more time to execute than the original. But the point is that the emulation is possible.

But consider a mind in effect thinking about itself. When a mind thinks about the world at large, its process of perception involves essentially making a model of what’s out there (and, as we’ve discussed, typically a sequentialized one). So when the mind thinks about itself, it will again make a model. Our experiences may start by making models of the “outside world”. But then we’ll recursively make models of the models we make, perhaps barely distinguishing between “raw material” that comes from “inside” and “outside”.

The connection between sequentialization and consciousness gives one a way to understand why there can be different consciousnesses, say associated with different people, that have different “experiences”. Essentially it’s just that one can pick different frames and so on that lead to different “sequentialized” accounts of what’s going on.

Why should they end up eventually being consistent, and eventually agreeing on an objective reality? Essentially for the same reason that relativity works, namely that causal invariance implies that whatever frame one picks, the causal graph that’s eventually traced out is always the same.

If it wasn’t for all the interactions continually going on in the universe, there’d be no reason for the experience of different consciousnesses to get aligned. But the interactions—with their underlying computational irreducibility and overall causal invariance—lead to the consistency that’s needed, and, as we’ve discussed, something else too: particular effective laws of physics, that turn out to be just the relativity and quantum mechanics we know.
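Two of Wolfram’s points here lend themselves to quick illustration in code. First, the claim that different “sequentializations” of the same events can all be consistent. Here is a minimal Python sketch – my own toy, not Wolfram’s models – in which the update events commute, so that every sequential ordering of them reaches the same final state, a crude stand-in for causal invariance:

```python
import random

# A toy analog of causal invariance (my illustration, not Wolfram's models):
# each event just adds an amount to a named counter, so the events commute,
# and every sequential ordering of them reaches the same final state.

def apply_events(state, events, order):
    """Apply the indexed events to a copy of state, in the given order."""
    s = dict(state)
    for idx in order:
        cell, delta = events[idx]
        s[cell] = s.get(cell, 0) + delta
    return s

events = [("a", 1), ("b", 2), ("a", 3), ("c", 5), ("b", 7)]

order_1 = list(range(len(events)))              # one observer's "thread of experience"
order_2 = random.sample(order_1, len(order_1))  # a different sequentialization

final_1 = apply_events({}, events, order_1)
final_2 = apply_events({}, events, order_2)

assert final_1 == final_2   # different narratives, same objective outcome
print(final_1)              # {'a': 4, 'b': 9, 'c': 5}
```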
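Second, the point that a universal computer can emulate any computational system, including itself. Python offers the same trick Wolfram mentions for Wolfram Language – the language can run its own evaluator – via the built-in exec():

```python
# Python running Python: a small analog of "writing the evaluator for
# Wolfram Language in Wolfram Language itself".
program = "result = sum(i * i for i in range(10))"
namespace = {}
exec(program, namespace)      # Python evaluating Python source
print(namespace["result"])    # 285

# One level deeper: the evaluated program itself evaluates a program.
nested = 'exec("inner = 2 + 3", globals()); result = inner * 10'
namespace = {}
exec(nested, namespace)
print(namespace["result"])    # 50
```

As Wolfram notes, an emulation typically runs slower than the original; the point is only that it is possible at all.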

I offer three thoughts:

  • Narrative has been of particular interest in some regions of the humanities and behavioral sciences over the last two or three decades, and it is a general category in popular discourse. What’s the narrative?
  • I’m reminded of Walter Freeman’s conception of consciousness as arising through discontinuous whole-hemisphere states of coherence succeeding one another at a “frame rate” of 6 Hz to 10 Hz – something I discuss in “Ayahuasca Variations” (2003); the implied frame periods are worked out just after this list. It’s the whole-hemisphere aspect that’s striking (and somewhat mysterious given the complex connectivity across many scales and the relatively slow speed of neural conduction).
  • Finally, as a narrative technique, stream of consciousness often orders events non-sequentially, if not chaotically. But this is not a “basic” narrative technique. Rather, it requires quite a bit of sophistication.
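As promised, the frame periods implied by Freeman’s numbers (my arithmetic, not his):

```python
# Convert Freeman's 6-10 Hz "frame rate" into a period per frame.
for hz in (6, 10):
    print(f"{hz} Hz -> {1000 / hz:.0f} ms per frame")
# 6 Hz -> 167 ms per frame
# 10 Hz -> 100 ms per frame
```

A hundred-odd milliseconds per hemisphere-wide state is leisurely compared to the millisecond timescales of individual neural events, which is part of what makes the whole-hemisphere coherence puzzling.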

Wolfram continues, bringing up the issue of spatial scale:

The view of consciousness that we’ve discussed is in a sense focused on the primacy of time: it’s about reducing the “parallelism” associated with space—and branchial space—to allow the formation of a coherent thread of experience, that in effect occurs sequentially in time.

And it’s undoubtedly no coincidence that we humans are in effect well placed in the universe to be able to do this. In large part this has to do with the physical sizes of things—and with the (undoubtedly not coincidental) fact that human scales are intermediate between those at which the effects of either relativity or quantum mechanics become extreme.

Why can we “ignore space” to the point where we can just discuss things happening “wherever” at a sequence of moments in time? Basically it’s because the speed of light is large compared to human scales. In our everyday lives the important parts of our visual environment tend to be at most tens of meters away—so it takes light only tens of nanoseconds to reach us. Yet our brains process information on timescales measured in milliseconds. And this means that as far as our experience is concerned, we can just “combine together” things at different places in space, and consider a sequence of instantaneous states in time.

If we were the size of planets, though, this would no longer work. Because—assuming our brains still ran at the same speed—we’d inevitably end up with a fragmented visual experience, that we wouldn’t be able to think about as a single thread about which we can say “this happened, then that happened”.
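Wolfram’s orders of magnitude are easy to check. A back-of-the-envelope calculation in Python – the 30-meter scene and Earth’s diameter are my illustrative choices:

```python
C = 3.0e8  # speed of light, m/s

def light_time_ms(distance_m):
    """Milliseconds for light to cross the given distance."""
    return distance_m / C * 1000

# Everyday visual scene: call it 30 meters away.
print(light_time_ms(30))        # 0.0001 ms, i.e. ~100 nanoseconds

# A planet-sized brain: Earth's diameter is about 1.27e7 meters.
print(light_time_ms(1.27e7))    # ~42 ms, already slow compared to the
                                # millisecond timescales of neural processing
```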

Going on:

Even at standard human scale, we’d have somewhat the same experience if we used for example smell as our source of information about the world (as, say, dogs to a large extent do). Because in effect the “speed of smell” is quite slow compared to brain processing. And this would make it much less useful to identify our usual notion of “space” as a coherent concept. So instead we might invent some “other physics”, perhaps labeling things in terms of the paths of air currents that deliver smells to us, then inventing some elaborate gauge-field-like construct to talk about the relations between different paths.

In thinking about our “place in the universe” there’s also another important effect: our brains are small and slow enough that they’re not limited by the speed of light, which is why it’s possible for them to “form coherent thoughts” in the first place. If our brains were the size of planets, it would necessarily take far longer than milliseconds to “come to equilibrium”, so if we insisted on operating on those timescales there’d be no way—at least “from the outside”—to ensure a consistent thread of experience. [...]

What if we and our brains were much smaller than they actually are? [...] What about our extent in branchial space? In effect, our perception that “definite things happen even despite quantum mechanics” implies a conflation of the different threads of history that exist in the region of branchial space that we occupy. But how much effect does this have on the rest of the universe? It’s much like the story with the speed of light, except now what’s relevant is a new quantity that appears in our models: the maximum entanglement speed. And somehow this is large enough that over “everyday scales” in branchial space it’s adequate for us just to pick a quantum frame and treat it as something that can be considered to have a definite state at any given instant in time—so that we can indeed consistently maintain a “single thread of experience”.

This discussion of scale reminds me of remarks that Ilya Prigogine has made about life: that living molecules are poised between the micro-realm, where quantum effects reign, and the macro-realm, where gravity rules (see Benzon and Hays, “A Note on Why Natural Selection Leads to Complexity,” 1990).

And so on.

H/t Tyler Cowen.

3 comments:

  1. 'Cellular automaton' – I rather like that term; never heard it before, just glanced at its definition.

    I often invert things in math and see them the 'wrong' way round. But curiosity as to why it's rejected, first sense.

  2. Thanks! I read it last night, having also read it before. I sort of sleepwalk through this subject: don't understand a lot, some small parts, uncanny sense of familiarity.
