Of course I mean intentionality in the philosophical sense, a notion that Franz Brentano imported from medieval thought to modern. As Harman puts it in The Quadruple Object, “what distinguishes the mental from the physical for Brentano is that mental acts are always directed toward an object” (p. 21). When Fido sees and smells chopped liver in his food bowl he intends them—assuming, of course, that you are willing to grant a mind to Fido, a dog. His visual and olfactory perceptions are intentional objects, though Harman, not liking the “antiseptic sterility” of the term, prefers to speak of sensual objects.
In Harman’s philosophy sensual objects stand in contrast to real objects, such as the bowl and the chopped liver. Fido’s brain, body, and sense organs are also real objects in this sense. As I understand Harman’s usage, he could even talk of a larger object inside of which we would find Fido, the bowl, and the chopped liver as proper parts.
It follows that while Fido is contemplating the chopped liver, there is a real process in his brain that is his perception of that chopped liver. Let us say that that process too is an object. What is the relationship between that real neural process, that evanescent and fluctuating object in Fido, and those other real objects, the bowl and the chopped liver, that participate in, but do not dominate, that neural process?
Tricky Questions
I find that to be a very tricky question, and tricky in the nasty way that involves matters of mere definition and matters of substance that must be teased apart. I want to locate intentionality somewhere in that relationship, but just where I’m not sure. Is the sensual object another name for the intentionality that exists between a nervous system and the world? Or do we say that intentionality is the relationship between the nervous system and the intentional object? Or something else entirely? How do we talk of the relationship between a perception considered as a sensual object and the real nervous system without which that sensual object would not exist?
While thinking about THAT situation, contrast it with the relationship between a digital camera and the scene on which it’s focused. The digital camera has sensors that transduce light photons into electrical impulses, crudely similar in function to the rods and cones in mammalian retinas. It also has fairly complex circuitry to set aperture, focus, and sensitivity and to then convey a captured image to some memory device. One would not, however, for all that complexity, say that the camera had an intentional relationship with the scene in front of it. The photographer, yes; but the camera itself, no.
The difference is that the photographer, and Fido as well, are living creatures. The camera is not. The camera does not construct itself. Living beings do. The camera is passive in the face of incoming photons. Fido is not.
In some sense I want to say that the intentional relationship between Fido’s nervous system and the bowl-and-liver is a global one. It’s not something that can be located in this or that bit of neural circuitry. It’s a relationship that exists because the world is what it is, with Fido, the bowl, and the chopped liver alike in it.
From Brain States to Intentionality
But that’s enough of that. I want to turn to a different way of looking at the same question, that of the brain, intentionality, and the world. This is a passage from Beethoven’s Anvil (pp. 53-56). It doesn’t get around to mentioning intentionality until very near the end. Nowhere does it talk of anything so specific as a food bowl filled with liver. It’s about the general relationship between a living brain and an infinite world and how the microstructure of the brain is sculpted by that interaction.
* * * * *
Given that we are committed to thinking about the brain as a physical system, we need to adopt a general way of thinking about physical systems. We must think about the system’s states: what are they, how many are there, and how do you get from one state to another? We describe the state of a volume of gas, for example, by giving the position, direction of motion, and velocity of each molecule at a given moment in time. Since the number of molecules is likely to be quite large, the specification of the exact state of the substance could be quite complex. For most purposes, however, this is more detail than we need. So we work with average values, expressed as temperature and pressure. These are far from complete descriptions, but they let us mark the gas’s transition to liquid and solid states and back again.
It is not at all obvious how many states a nervous system could have, although the number is clearly very large. Assume, for a moment, that we take the neuron as the fundamental unit. The human neocortex has been estimated to have approximately 2.74 x 10^10 neurons (i.e. 27,400,000,000). Let us assume that at any moment, each neuron is either “on” or “off,” where “on” means that it is generating an output impulse and “off” means that it is not. In this case the number of possible states of the neocortex would be 2 (the number of states each element can have) raised to the 27,400,000,000th power (the number of elements in the system).
As large as that number is, it must be too low. We probably want to know the state of each synapse—a juncture where neurons meet—rather than each neuron. Each cortical neuron has between 1,000 and 10,000 synapses, giving us between 2.74 x 10^13 and 2.74 x 10^14 synapses. So now we are raising 2 to some power between the 27,400,000,000,000th and the 274,000,000,000,000th. Yet, as large as this resulting number is, it is surely too low, for each synapse has more than two possible states. The brain is a complex electro-chemical machine with over 100 different neuroactive chemicals. As synapses are subject to the influence of several of these chemicals at a time, we probably want to know the concentration of each chemical, at each synapse, for each neuron in the cortex. Taking these into account will make the number of possible states even larger.
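The arithmetic above can be sketched directly. These numbers are far too large to compute outright, but their size can be characterized by counting decimal digits via logarithms; the function name and the synapse range are illustrative, taken from the passage's own estimates.

```python
import math

# Combinatorics from the passage: treat each unit as binary ("on"/"off"),
# so a system of n independent units has 2**n possible states.
NEURONS = 2.74e10             # estimated neocortical neurons
SYN_LOW, SYN_HIGH = 1e3, 1e4  # assumed synapses per cortical neuron

def decimal_digits_of_2_pow(n):
    """Count the decimal digits of 2**n without computing 2**n itself."""
    return math.floor(n * math.log10(2)) + 1

# Even the *digit counts* of these state counts run into the billions.
print(decimal_digits_of_2_pow(NEURONS))             # neuron-level estimate
print(decimal_digits_of_2_pow(NEURONS * SYN_LOW))   # synapse-level, low end
print(decimal_digits_of_2_pow(NEURONS * SYN_HIGH))  # synapse-level, high end
```

The point of the sketch is only scale: 2 raised to the 27.4-billionth power is a number with roughly eight billion digits, and the synapse-level estimates dwarf even that.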
And the neocortex is not the entire nervous system. The cerebellum, a large subcortical structure, probably has ten times as many neurons as the neocortex. Clearly the nervous system can assume an enormous number of different states.
Actually, the number of possible states is smaller than the above analysis implies. We assumed that the states of our basic units are independent of one another, but this is not true. Neurons are connected in complex and far-ranging networks and circuits. Whether we assume that the neuron or the synapse is the basic unit, neural connectivity implies considerable dependency between the units. A neuron will affect the states of neurons to which it is connected; the state of one synapse will affect the states of neighboring synapses.
Still, the nervous system has such a very large number of elements that the number of possible states it can assume remains enormous. In the most widely accepted account of the nervous system’s ability to learn and adapt, that ability depends on constructing dependencies between elements. Back in 1949 the Canadian psychologist Donald Hebb speculated:
When an axon of cell A . . . excite[s] cell B and repeatedly or persistently takes part in firing it, some growth process or metabolic change takes place in one or both cells so that A’s efficiency as one of the cells firing B is increased.
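Hebb's speculation can be rendered as a toy update rule. This is a minimal sketch, not a model of real synapses: the function name, the boolean activity values, and the learning rate are all illustrative assumptions.

```python
# Hebb's rule in miniature: a connection strengthens only when the
# presynaptic and postsynaptic cells are active together.
def hebb_update(weight, pre_active, post_active, rate=0.1):
    """Return the new weight after one moment of (in)activity."""
    if pre_active and post_active:
        weight += rate  # "A's efficiency as one of the cells firing B is increased"
    return weight

w = 0.0
# Repeated co-activation strengthens the connection ...
for _ in range(5):
    w = hebb_update(w, pre_active=True, post_active=True)
# ... while activity on one side alone leaves it unchanged.
w_after = hebb_update(w, pre_active=True, post_active=False)
```

The design choice that matters is the conjunction: the weight changes only when both cells fire, which is what introduces dependencies between units.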
Thus as the nervous system learns about the world, the number of distinctly different states it can assume decreases. But the transitions between the remaining states track transitions between world states more and more accurately. The state dependencies that are introduced into the nervous system as it moves through the world reflect the system’s involvement in the world.
It is as though the nervous system’s state space were a block of marble sculpted into a likeness of the world through interaction with it. We do, in fact, see something like this during the maturation of the nervous system. Very early in life the neurons in the cerebral cortex develop a large number of dendrites—small branches that terminate in synapses. Then, as the nervous system continues to mature, always interacting with the world, synapses that are not reinforced die away, and perhaps even dendrites and neurons as well. Thus the dependencies that become wired-in to the nervous system reflect patterns of dependencies between states in the external world. Many of those external states are, of course, created by the rhythmic actions of one’s fellows. The remaining synapses are, of course, still subject to the kind of learning Hebb described.
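The sculpting idea can be put in toy form: start with every possible connection among a few units, reinforce the ones that the "world's" patterns activate together, and prune the rest. The patterns, the unit count, and the survival criterion here are all illustrative assumptions, not claims about real cortex.

```python
import itertools

# Repeated "world states": which of three units each state activates.
world_patterns = [(1, 1, 0), (1, 1, 0), (0, 0, 1)]

# Begin, like the immature cortex, with all candidate connections.
units = range(3)
counts = {pair: 0 for pair in itertools.combinations(units, 2)}

for pattern in world_patterns:
    for (a, b) in counts:
        if pattern[a] and pattern[b]:
            counts[(a, b)] += 1  # Hebbian-style reinforcement

# Prune every connection that experience never reinforced.
surviving = [pair for pair, c in counts.items() if c > 0]
print(surviving)
```

In this toy world only units 0 and 1 ever fire together, so only that connection survives: the wiring that remains mirrors the dependency structure of the world the system moved through.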
To use a word that Walter Freeman adopted from the medieval philosopher and divine St. Thomas Aquinas, these dependencies reflect the nervous system’s intentionality. The basic requirement for intentional action is that the nervous system have its own means of generating a large state space which it can then “offer” to the world through interaction. That state space is, in effect, the system’s “well” of potential intention, upon which it draws in learning the world. Given this well, Freeman explains perception thus:
A stimulus excites the sensory receptors, so that they send a message to the brain. That input triggers a reaction in the brain, by which the brain constructs a pattern of neural activity. The sensory activity that triggered the construction is then washed away, leaving only the construct. That pattern does not "represent" the stimulus. It constitutes the meaning of the stimulus for the person receiving it.
Freeman is one of the pioneers in applying work in chaos, complex systems, and nonlinear dynamics to the study of the nervous system. This body of work arose in the physical sciences toward the end of the 19th and beginning of the 20th centuries, but did not flower until it became possible to study such systems through computer simulation. One key point of this work is that, under certain conditions, physical systems exhibit self-organizing properties. For such order to emerge, the system must have a very large number of parts as well as an abundant source of external energy.