Monday, May 11, 2015

Follow-up on Dennett and Mental Software

This is a follow-up to a previous post, Dennett’s WRONG: the Mind is NOT Software for the Brain. In that post I agreed with Tecumseh Fitch [1] that the hardware/software distinction for digital computers is not valid for the mind/brain. Dennett wants to retain the distinction [2], however, and I argued against that. Here are some further clarifications and considerations.

1. Technical Usage vs. Redescription

I asserted that Dennett’s desire to talk of mental software (or whatever) has no technical justification. All he wants is a different way of describing the same mental/neural processes that we’re investigating.

What did I mean?

Dennett used the term “virtual machine”, which has a technical, if somewhat diffuse, meaning in computing. But little or none of that technical meaning carries over to Dennett’s use when he talks of, for example, “the long-division virtual machine [or] the French-speaking virtual machine”. There’s no suggestion in Dennett that technical knowledge of the digital technique would give us insight into neural processes. So his usage is a technical label without technical content.
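
To pin down that technical meaning: in computing, a virtual machine is a program that executes instructions for a machine realized only in software, layered on top of the physical hardware. Here is a minimal sketch of the idea, a toy stack-based VM in Python; the instruction set and the code are my own illustration, not anything drawn from Dennett or from any production system:

```python
# A minimal "virtual machine": a program that interprets instructions
# for a machine that exists only in software. Purely illustrative.

def run(program):
    """Execute a list of (opcode, argument) pairs on a stack machine."""
    stack = []
    for op, arg in program:
        if op == "PUSH":
            stack.append(arg)
        elif op == "ADD":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "MUL":
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
        else:
            raise ValueError(f"unknown opcode: {op}")
    return stack

# (2 + 3) * 4 == 20, computed by the virtual machine rather than
# directly by the host processor.
print(run([("PUSH", 2), ("PUSH", 3), ("ADD", None),
           ("PUSH", 4), ("MUL", None)]))  # [20]
```

The computing usage, in other words, comes with real technical content: instruction sets, execution semantics, layering. It is precisely that content that does not carry over to “the French-speaking virtual machine”.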

2. Substrate Neutrality

Dennett has emphasized the substrate neutrality of computational and informatic processes. Practical issues of fabrication and operation aside, a computational process will produce the same result regardless of whether it is implemented in silicon, vacuum tubes, or gears and levers. I have no problem with this.
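
A toy illustration of the point, entirely my own construction: the same function, 4-bit addition, realized in two different “substrates”, once with Python’s native arithmetic and once built up from nothing but simulated NAND gates. The results agree everywhere:

```python
# Substrate neutrality in miniature: 4-bit addition realized two ways.
# Illustration mine, not Dennett's.

def nand(a, b):
    """A single simulated NAND gate."""
    return 0 if (a and b) else 1

def full_adder(a, b, carry):
    """One-bit addition with carry, built from nine NAND gates."""
    t = nand(a, b)
    s1 = nand(nand(a, t), nand(b, t))        # a XOR b
    t2 = nand(s1, carry)
    s = nand(nand(s1, t2), nand(carry, t2))  # sum bit: s1 XOR carry
    c_out = nand(t, t2)                      # carry out
    return s, c_out

def add4(x, y):
    """Add two 4-bit numbers using only NAND gates."""
    result, carry = 0, 0
    for i in range(4):
        s, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
        result |= s << i
    return result

# The gate-level "substrate" agrees with native addition on all inputs.
assert all(add4(x, y) == (x + y) % 16 for x in range(16) for y in range(16))
print(add4(5, 7))  # 12 either way
```

Practical issues aside, the function computed is identical, and that is all substrate neutrality claims.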

As I see it, taken only this far, we’re talking about humans designing and fabricating devices and systems. The human designers and fabricators have a “transcendental” relationship to their devices. They can see and manipulate them whole, top to bottom, inside and out.

But of course, Dennett wants this to extend to neural tissue as well. Once we know the proper computational processes to implement, we should be able to realize a conscious, intelligent mind in digital technology that will not be meaningfully different from a human mind/brain. The question here, it seems to me, is whether that is possible in principle.

Dennett has recently come to the view that living neural tissue has properties lacking in digital technology [3, 4, 5]. What does that do to substrate neutrality?

I’m not sure. Dennett’s thinking centers on neurons. On the one hand, real neurons are considerably more complex than the McCulloch-Pitts logic gates that captured his imagination early in his career. That’s one thing. And if that’s all there is to it, then one could still argue that substrate neutrality extends to neural tissue. We just have to recognize that neurons aren’t the simple, primitive computational devices we’d thought them to be.
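
For readers who don’t know the reference, a McCulloch-Pitts unit is about as simple as a computational element gets: binary inputs, fixed weights, a threshold. A minimal sketch, mine for illustration:

```python
# A McCulloch-Pitts neuron: the simple threshold device behind the
# early picture of neural computation. Illustration mine.

def mp_neuron(inputs, weights, threshold):
    """Fire (1) iff the weighted sum of binary inputs meets the threshold."""
    return 1 if sum(i * w for i, w in zip(inputs, weights)) >= threshold else 0

# With suitable weights and thresholds, such units act as logic gates:
AND = lambda a, b: mp_neuron([a, b], [1, 1], 2)
OR  = lambda a, b: mp_neuron([a, b], [1, 1], 1)
NOT = lambda a:    mp_neuron([a],    [-1],   0)

assert AND(1, 1) == 1 and AND(1, 0) == 0
assert OR(0, 1) == 1 and OR(0, 0) == 0
assert NOT(0) == 1 and NOT(1) == 0
```

Real neurons, with their dendritic processing, neuromodulators, and metabolic lives, are far richer than this.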

But there’s more going on. Following others (Terrence Deacon and Tecumseh Fitch), Dennett stresses the fact that neurons are agents with their own agendas. As agents go, they may be relatively simple, and their agendas simple; but still, they have a measure of causal autonomy.

How does that causal autonomy impact the notion of substrate neutrality? The answer to that is not at all obvious to me. To be sure, one can assert that function is function; but it is one thing to assert that in the abstract. It is quite another thing to prove it out in the fabrication of actual devices. Perhaps living neural tissue has functional capabilities that simply aren’t available to inanimate devices.

Maybe the best we can do is to simulate such tissue?

3. What’s Computation, Anyhow?

What’s the difference between simulating a mind, for example, and actually building one? In the case of an atomic explosion, to consider a rather different example, the difference is obvious. Real atomic explosions are violent and destructive; simulations of such explosions are not. You don’t bury the simulation a mile underground and operate it from a distance. There’s no need to do so. The simulation runs on the computer just like any other computation, whether it be the monthly payroll or the rendering of an animated segment of a movie.

And, of course, we can simulate neurons and networks of neurons as well. Neuroscientists have been doing this for decades. The simulation will involve whatever computational processes the model requires. But that doesn’t necessarily imply that the electro-chemical processes of neurons are computational in kind, any more than simulating atomic explosions implies that those explosions are computational in kind. The simulations are computational; the real processes are not.
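
To make that concrete, here is a sketch of one standard modeling idiom, the leaky integrate-and-fire neuron. The parameter values are illustrative defaults of my choosing, not drawn from any particular study:

```python
# A leaky integrate-and-fire neuron: a decades-old style of neural
# simulation. The program is computational; the membrane dynamics it
# approximates are electro-chemical. Parameters illustrative only.

def simulate_lif(input_current, dt=0.1, tau=10.0, v_rest=-65.0,
                 v_thresh=-50.0, v_reset=-70.0, resistance=10.0):
    """Integrate membrane voltage over time; record spikes at threshold."""
    v, spikes, trace = v_rest, [], []
    for step, i_ext in enumerate(input_current):
        # Discretized membrane equation: tau * dV/dt = -(V - V_rest) + R*I
        v += dt * (-(v - v_rest) + resistance * i_ext) / tau
        if v >= v_thresh:          # threshold crossed: emit a spike
            spikes.append(step * dt)
            v = v_reset            # and reset the membrane
        trace.append(v)
    return spikes, trace

# 100 ms of constant input current produces a regular spike train.
spike_times, _ = simulate_lif([2.0] * 1000)
print(spike_times[:5])
```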

The assertion behind so-called “strong” AI, however, is that a sufficiently powerful process running on a digital computer will be a mind. Not a simulation of a mind, but an actual mind.

Is that true? I don’t know. In particular, if that process takes the form of a simulation of a real brain, will the running of that process thereby become, not merely a simulation of a mind, but an actual mind? I don’t know.

One could argue that a sufficiently rich and powerful simulation of a real mind must itself be a real mind. But if you argue that, then what becomes of the argument that real neurons are quasi-autonomous agents? What becomes of the realization that living tissue not only has different properties from inanimate contrivances, but that some of those differences are essential to the functions it can perform?

Dennett wants to argue that, well, sure, but the mind is STILL computational; it’s just computational in a different way from what we had been pursuing [5]. His specific response is to talk of rogue neurons: neurons recovering some of the capacities their ancestors gave up when they entered into cooperative relations with other somatic cells. It’s not at all clear just what that means, nor how it salvages the idea that the brain is still just a computational device.

Finally, consider this passage from Peter Gärdenfors (Conceptual Spaces, 2000, p. 253), where he asserts that a computational understanding of the mind will involve at least three broad classes of computational processes:
On the symbolic level, searching, matching of symbol strings, and rule following are central. On the subconceptual level, pattern recognition, pattern transformation, and dynamic adaptation of values are some examples of typical computational processes. And on the intermediate conceptual level, vector calculations, coordinate transformations, as well as other geometrical operations are in focus. Of course, one type of calculation can be simulated by one of the others (for example, by symbolic methods on a Turing machine). A point that is often forgotten, however, is that the simulations will, in general, be computationally more complex than the process that is simulated.
Notice that he talks of one type of calculation being simulated by another.
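
That last point is easy to demonstrate with a toy example of my own devising: multiplication performed natively, and the same multiplication simulated by the more primitive operation of repeated addition, with a step count to show the overhead:

```python
# Gärdenfors's complexity point in miniature: simulating one kind of
# calculation with another generally costs more. Toy example, mine.

def multiply_native(a, b):
    """One multiplication step on a machine that has multiplication."""
    return a * b, 1  # (result, steps)

def multiply_by_addition(a, b):
    """The same result, simulated with repeated addition."""
    total, steps = 0, 0
    for _ in range(b):
        total += a
        steps += 1
    return total, steps

result, steps = multiply_native(1234, 5678)
sim_result, sim_steps = multiply_by_addition(1234, 5678)
assert result == sim_result
print(steps, "step natively vs.", sim_steps, "steps in simulation")
```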

Computation, then, is a strange beast. Dennett knows that. But it may well be even stranger than his rogue neurons.

References

[1] W. Tecumseh Fitch, “Toward a computational framework for cognitive biology: Unifying approaches from cognitive neuroscience and comparative cognition,” Physics of Life Reviews 11 (2014): 329–364. http://dx.doi.org/10.1016/j.plrev.2014.04.005

[2] Daniel C. Dennett, “The software/wetware distinction. Comment on ‘Toward a computational framework for cognitive biology: unifying approaches from cognitive neuroscience and comparative cognition’ by W. Tecumseh Fitch,” Physics of Life Reviews 11 (2014): 367–368.

[3] Daniel Dennett, “Aching Voids and Making Voids,” The Quarterly Review of Biology 88, no. 4 (December 2013): 321–324.

[4] Daniel Dennett, “The Normal Well-Tempered Mind,” talk at Edge, 2013. https://edge.org/conversation/the-normal-well-tempered-mind

[5] Daniel Dennett, “If brains are computers, what kind of computers are they?” Keynote at PT-AI 2013 – Philosophy and Theory of Artificial Intelligence, October 2013. https://www.youtube.com/watch?v=OlRHd-r2LOw
