The nature of consciousness is one of the big mysteries of contemporary thought. The best account of consciousness I know of is the one William Powers offers in Behavior: The Control of Perception (1973). That’s what this post is about. My objective is simple: to link Powers’s account of consciousness to the concept of polyviscosity that I offered about a week ago in The structured physical system hypothesis (SPSH), Polyviscous connectivity [The brain as a physical system]. Unfortunately, Powers’s concept, while basically simple, makes sense only in the context of his overall model of mind, and that model is not something that can readily be conveyed in a single blog post. Thus this post is mostly for my own benefit.
Powers’s model consists of two components: 1) a stack of servomechanisms – see the post In Memory of Bill Powers – regulating both perception and movement, and 2) a reorganizing system. The reorganizing system is external to the stack but operates on it to achieve adaptive control, an idea Powers took from Norbert Wiener. Reorganization is the mechanism through which the model achieves learning; Powers devoted “Chapter 14, Learning” to the subject (pp. 177-204).
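To fix ideas, here is a minimal sketch of the two-component architecture in Python. The class names, the proportional control law, and the random-perturbation rule are my own illustrative choices, not Powers’s equations; he specifies the model in terms of neural signals, not code.

```python
import random

class ControlUnit:
    """One servomechanism in the stack: it compares a perceptual signal
    with a reference signal and outputs an error-driven correction,
    which serves as a reference for the level below."""
    def __init__(self, gain=1.0):
        self.gain = gain

    def step(self, perception, reference):
        error = reference - perception
        return self.gain * error  # passed down as a lower-level reference

class Reorganizer:
    """The reorganizing system, external to the stack: when intrinsic
    error (hunger, pain, and the like) is high, it makes random changes
    to the stack's parameters, keeping whatever reduces that error."""
    def __init__(self, stack):
        self.stack = stack

    def step(self, intrinsic_error, threshold=0.5):
        if intrinsic_error > threshold:
            unit = random.choice(self.stack)     # an arbitrary target
            unit.gain += random.gauss(0.0, 0.1)  # an arbitrary change
```

The architectural point is simply that the Reorganizer sits outside the stack and alters it, rather than being one more level within it.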
Here’s an extensive passage that gets at the heart of the present matter (pp. 199-201):
To the reorganizing system, under these new hypotheses, the hierarchy of perceptual signals is itself the object of perception, and the recipient of arbitrary actions. This new arrangement, originally intended only as a means of keeping reorganization closer to the point, gives the model as a whole two completely different types of perceptions: one which is a representation of the external world, and the other which is a perception of perceiving. And we have given the system as a whole the ability to produce spontaneous acts apparently unrelated to external events or control considerations: truly arbitrary but still organized acts.
As nearly as I can tell short of satori, we are now talking about awareness and volition.
Awareness seems to have the same character whether one is being aware of his finger or of his faults, his present automobile or the one he wishes Detroit would build, the automobile’s hubcap or its environmental impact. Perception changes like a kaleidoscope, while that sense of being aware remains quite unchanged. Similarly, crooking a finger requires the same act of will as varying one’s bowling delivery “to see what will happen.” Volition has the arbitrary nature required of a test stimulus (or seems to) and seems the same whatever is being willed. But awareness is more interesting, somehow.
The mobility of awareness is striking. While one is carrying out a complex behavior like driving a car through to work, one’s awareness can focus on efforts or sensations or configurations of all sorts, the ones being controlled or the ones passing by in short skirts, or even turn to some system idling in the background, working over some other problem or musing over some past event or future plan. It seems that the behavioral hierarchy can proceed quite automatically, controlling its own perceptual signals at many orders, while awareness moves here and there inspecting the machinery but making no comments of its own. It merely experiences in a mute and contentless way, judging everything with respect to intrinsic reference levels, not learned goals.
This leads to a working definition of consciousness. Consciousness consists of perception (presence of neural currents in a perceptual pathway) and awareness (reception by the reorganizing system of duplicates of those signals, which are all alike wherever they come from). In effect, conscious experience always has a point of view which is determined partly by the nature of the learned perceptual functions involved, and partly by built-in, experience-independent criteria. Those systems whose perceptual signals are being monitored by the reorganizing system are operating in the conscious mode. Those which are operating without their perceptual signals being monitored are in the unconscious mode (or preconscious, a fine distinction of Freud’s which I think unnecessary).
This speculative picture has, I believe, some logical implications that are borne out by experience. One implication is that only systems in the conscious mode are subject either to volitional disturbance or reorganization. The first condition seems experientially self-evident: can you imagine willing an arbitrary act unconsciously? The second is less self-evident, but still intuitively right. Learning seems to require consciousness (at least learning anything of much consequence). Therapy almost certainly does. If there is anything on which most psychotherapists would agree, I think it would be the principle that change demands consciousness from the point of view that needs changing. Furthermore, I think that anyone who has acquired a skill to the point of automaticity would agree that being conscious of the details tends to disrupt (that is, begin reorganization of) the behavior. In how many applications have we heard that the way to interrupt a habit like a typing error is to execute the behavior “on purpose”—that is, consciously identifying with the behaving system instead of sitting off in another system worrying about the terrible effects of having the habit? And does not “on purpose” mean in this case arbitrarily not for some higher goals but just to inspect the act, itself?
That, then, is consciousness as Powers conceives it. It is correlated with reorganization. If we are to reorganize a perception or action, we must be aware of it. The fact that we spend most of our lives in some state of consciousness implies that we are always learning or, perhaps, maintaining ourselves in a state of readiness to learn.
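Powers’s working definition suggests a simple gating rule: only systems whose perceptual signals are duplicated to the reorganizing system (the conscious mode) are eligible for reorganization. Continuing the hypothetical sketch above, with a `monitored` set that is my own device:

```python
def reorganize_step(stack, monitored, intrinsic_error, threshold=0.5):
    """Reorganize only conscious-mode units, i.e. those whose perceptual
    signals the reorganizing system is currently monitoring;
    unconscious-mode units keep running but are left unchanged."""
    conscious = [unit for unit in stack if unit in monitored]
    if intrinsic_error > threshold and conscious:
        unit = random.choice(conscious)
        unit.gain += random.gauss(0.0, 0.1)
```

The gate just makes the correlation explicit: no monitoring, no reorganization.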
What has this to do with polyviscosity? Here I am thinking of neural connectivity, which is polyviscous in that some connections are highly resistant to change while others change readily. Reorganization, that is to say learning, requires that neural connectivity change. Connections of various degrees of viscosity are likely to be intermingled in any given volume of cortical tissue.
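A toy way to picture this: give every connection its own viscosity and scale its updates accordingly. The arrays and the update rule below are mine, purely for illustration; no claim is made that cortex computes anything like a gradient.

```python
import numpy as np

rng = np.random.default_rng(0)
weights = rng.normal(size=(8, 8))                      # one patch of connectivity
viscosity = rng.uniform(0.0, 1.0, size=weights.shape)  # stiff and labile intermingled

def polyviscous_update(weights, gradient, lr=0.1):
    # High-viscosity connections barely move; low-viscosity ones adapt freely.
    plasticity = 1.0 - viscosity
    return weights - lr * plasticity * gradient
```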
Now, consider this passage from a 1988 paper by Fodor and Pylyshyn, Connectionism and Cognitive Architecture: A Critical Analysis (pp. 22-23):
Classical theories are able to accommodate these sorts of considerations because they assume architectures in which there is a functional distinction between memory and program. In a system such as a Turing machine, where the length of the tape is not fixed in advance, changes in the amount of available memory can be affected without changing the computational structure of the machine; viz by making more tape available. By contrast, in a finite state automaton or a Connectionist machine, adding to the memory (e.g. by adding units to a network) alters the connectivity relations among nodes and thus does affect the machine’s computational structure. Connectionist cognitive architectures cannot, by their very nature, support an expandable memory, so they cannot support productive cognitive capacities. The long and short is that if productivity arguments are sound, then they show that the architecture of the mind can’t be Connectionist. Connectionists have, by and large, acknowledged this; so they are forced to reject productivity arguments.
Physically, the nervous system appears to be connectionist in character, and so adding new items to the system is physically problematic. That is the problem polyviscous connectivity solves – see my post, Physical constraints on computing, process and memory, Part 1 [LeCun] (note: in that post I use the term “hyperviscous” rather than “polyviscous”). Some connections must remain stable while others change: the stable connections maintain the overall structural integrity of the network, while the changing connections introduce new items into that structure.
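Continuing the toy example from above: a new item can be written into the labile connections while the stiff ones, which carry the network’s standing structure, stay frozen. The Hebbian outer-product rule and the 0.8 cutoff are arbitrary choices for illustration.

```python
def store_new_item(weights, pattern, lr=0.5, stiff=0.8):
    """Write a new item (a toy Hebbian outer-product) using only the
    labile connections; connections with viscosity above `stiff` are
    frozen, so the existing structure is untouched."""
    labile = viscosity < stiff                # mask of changeable connections
    delta = lr * np.outer(pattern, pattern)   # toy storage rule
    return np.where(labile, weights + delta, weights)
```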
Here’s a recent article that’s relevant, though it doesn’t use the term “polyviscosity”: Poonam Mishra and Rishikesh Narayanan, Stable continual learning through structured multiscale plasticity manifolds, Current Opinion in Neurobiology 2021, 70:51–63, https://doi.org/10.1016/j.conb.2021.07.009
Abstract: Biological plasticity is ubiquitous. How does the brain navigate this complex plasticity space, where any component can seemingly change, in adapting to an ever-changing environment? We build a systematic case that stable continual learning is achieved by structured rules that enforce multiple, but not all, components to change together in specific directions. This rule-based low-dimensional plasticity manifold of permitted plasticity combinations emerges from cell type–specific molecular signaling and triggers cascading impacts that span multiple scales. These multiscale plasticity manifolds form the basis for behavioral learning and are dynamic entities that are altered by neuromodulation, metaplasticity, and pathology. We explore the strong links between heterogeneities, degeneracy, and plasticity manifolds and emphasize the need to incorporate plasticity manifolds into learning-theoretical frameworks and experimental designs.
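The abstract’s low-dimensional plasticity manifold can be pictured in the same toy terms: rather than each component changing independently, proposed changes are projected onto a few permitted directions, so that components change together. The three-dimensional subspace below is an arbitrary stand-in, not anything from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
n_params = 12
# Orthonormal basis for three permitted directions of joint change.
manifold = np.linalg.qr(rng.normal(size=(n_params, 3)))[0]

def constrained_update(raw_delta):
    # Keep only the component of the proposed change that lies on the manifold.
    return manifold @ (manifold.T @ raw_delta)
```

Read this way, the manifold plays a role much like the stiff connections above: it constrains which changes are possible without forbidding change altogether, which is perhaps why the article strikes me as relevant to polyviscosity.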