Sunday, June 26, 2022

More Post-Publication Thoughts on the RNA Primer

This is a follow-up to a previous post: Some Post-Publication Thoughts on the RNA Primer [Design for a Mind]. Expect more follow-up posts. 

I’m talking about:

Relational Nets Over Attractors, A Primer: Part 1, Design for a Mind, https://www.academia.edu/81911617/Relational_Nets_Over_Attractors_A_Primer_Part_1_Design_for_a_Mind

I offer two sets of thoughts: calibration and paths ahead.

Calibration

By calibration I mean assessing, as well as I can, the significance of the primer’s arguments and speculations in the current intellectual environment.

Gärdenfors’ levels of computation: In his 2000 book, Conceptual Spaces, Gärdenfors asserted that we need different kinds of computational processes for different aspects of neural processing:

On the symbolic level, searching, matching of symbol strings, and rule following are central. On the subconceptual level, pattern recognition, pattern transformation, and dynamic adaptation of values are some examples of typical computational processes. And on the intermediate conceptual level, vector calculations, coordinate transformations, as well as other geometrical operations are in focus. Of course, one type of calculation can be simulated by one of the others (for example, by symbolic methods on a Turing machine). A point that is often forgotten, however, is that the simulations will, in general, be computationally more complex than the process that is simulated.

The primer outlines a scheme that involves all three levels: dynamical systems at the subconceptual level, Gärdenfors’ conceptual spaces at the conceptual level, and a relational network (over attractors) at the symbolic level. As far as I know, this is the only more or less comprehensive scheme that does so, though for all I know others may have offered similar proposals.
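To make that division of labor concrete, here is a toy sketch in Python. Nothing in it comes from the primer; the stored patterns, the “quality dimensions,” and the relation are all invented for illustration. But it puts the three kinds of computation side by side: dynamic settling into an attractor, geometric operations in a small conceptual space, and matching over symbols.

```python
# Toy illustration of Gärdenfors' three levels. All data are invented;
# this is not code from the primer, just a sketch of the division of labor.
import numpy as np

# --- Subconceptual level: a tiny Hopfield-style attractor network. ---
patterns = np.array([[1, -1, 1, -1, 1, -1],   # stored pattern "A"
                     [1, 1, -1, -1, 1, 1]])   # stored pattern "B"
W = patterns.T @ patterns                     # Hebbian weight matrix
np.fill_diagonal(W, 0)

def settle(state, sweeps=10):
    """Dynamic settling: asynchronous updates fall into a stored attractor."""
    state = state.copy()
    for _ in range(sweeps):
        for i in range(len(state)):
            state[i] = 1 if W[i] @ state >= 0 else -1
    return state

# --- Conceptual level: points on (invented) quality dimensions, say ---
# --- "size" and "brightness"; the operations here are geometric.    ---
concept_points = {"A": np.array([0.2, 0.9]), "B": np.array([0.8, 0.1])}

def nearest_concept(point):
    """Geometric operation: find the closest prototype in the space."""
    return min(concept_points,
               key=lambda k: np.linalg.norm(concept_points[k] - point))

# --- Symbolic level: a little relational net over the attractor labels. ---
relations = {("A", "brighter-than", "B")}

def holds(subj, rel, obj):
    """Matching over symbol strings / rule following."""
    return (subj, rel, obj) in relations

# A noisy input settles to an attractor (subconceptual), is located in the
# conceptual space (conceptual), and then participates in symbolic relations.
noisy = np.array([-1, -1, 1, -1, 1, -1])      # pattern "A" with one bit flipped
final = settle(noisy)
label = "A" if np.array_equal(final, patterns[0]) else "B"
print(label, nearest_concept(np.array([0.3, 0.8])), holds("A", "brighter-than", "B"))
```

Gärdenfors’ point about simulation costs shows up even here: you could emulate the settling or the distance calculation with symbolic rules, but it would be far clumsier than doing each kind of computation in its native form.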

I note as well that the arguments in the primer are quite different from those that Grace Lindsay considers in the final chapter of Models of the Mind, where she reviews three proposals for “grand unified theories” of the brain: Friston’s free energy principle, Hawkins’ Thousand Brains Theory, and Tononi’s integrated information approach to consciousness. For what it’s worth, I make no proposal about consciousness at all, though I do have thoughts about it, which derive from a book published in 1973, Behavior: The Control of Perception by William Powers. Friston offers no specific proposals about how symbolic computation is implemented in the brain, nor, as far as I know, does Hawkins – I should note that I will be looking into his ideas about grid cells in the future.

Lindsay notes (pp. 360-361):

GUTs can be a slippery thing. To be grand and unifying, they must make simple claims about an incredibly complex object. Almost any statement about ‘the brain’ is guaranteed to have exceptions lurking somewhere. Therefore, making a GUT too grand means it won’t actually be able to explain much specific data. But, tie it too much to specific data and it’s no longer grand. Whether untestable, untested, or tested and failed, GUTs of the brain, in trying to explain too much, risk explaining nothing at all.

While this presents an uphill battle for GUT-seeking neuroscientists, it’s less of a challenge in physics. The reason for this difference may be simple: evolution. Nervous systems evolved over eons to suit the needs of a series of specific animals in specific locations facing specific challenges. When studying such a product of natural selection, scientists aren’t entitled to simplicity. Biology took whatever route it needed to create functioning organisms, without regard to how understandable any part of them would be. It should be no surprise, then, to find that the brain is a mere hodgepodge of different components and mechanisms. That’s all it needs to be to function. In total, there is no guarantee – and maybe not even any compelling reasons to expect – that the brain can be described by simple laws.

I agree. Whatever I’m proposing, it is not a simple law. It presupposes all the messiness of a brain that is “a mere hodgepodge of different components and mechanisms.” It is a technique for constructing another mechanism.

Christmas Tree Lights Analogy: Here I want to emphasize how very difficult understanding the brain has proven to be. It will remain so for the foreseeable future. I’m drawing on a post I did in 2017, A Useful Metaphor: 1000 lights on a string, and a handful are busted.

In line with that post, imagine that the problem of fully understanding the brain, whatever that means, takes the form of a string of series-wired Christmas tree lights, 10,000 of them. To consider the problem solved, all the lights have to be good and the string lit. Let us say that in 1900 the string is dark. Since then, say, 3472 bad bulbs have been replaced with good ones. Since we don’t know how many bad lights were in the string in 1900, we don’t know how many lights have yet to be replaced.

Let us say that, in the course of writing that primer, I’ve replaced 10 bad bulbs with 10 good ones. If 6518 had been good in 1900, then we’d have had 9990 good bulbs before I wrote the primer. With the primer the last 10 bad bulbs would have been replaced and SHAZAM! we now understand the brain.

That obviously didn’t happen. I take it as obvious that some of the bad bulbs had been replaced by 1900 since the study of the brain goes back farther than that. If there had been, say, 1519 good bulbs in 1900, then there would have been 4991 good bulbs before my paper (1519 in 1900 + 3472 since then). My 10 puts us past 5000 to 5001. We’re now more than halfway to understanding the brain. 
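The arithmetic is trivial, but here is a small Python sketch of the analogy, with all the numbers made up just as they are in the post, so you can vary the assumptions yourself:

```python
# The Christmas-tree-lights analogy as a little calculation.
# Every number here is invented for illustration, as in the post.
TOTAL_BULBS = 10_000            # size of the string (try 100_000 for the extra credit below)
REPLACED_SINCE_1900 = 3_472     # bulbs fixed between 1900 and the primer
MY_CONTRIBUTION = 10            # bulbs the primer claims to have fixed

def progress(good_in_1900):
    """Return (good bulbs before the primer, good bulbs after, fraction lit)."""
    before = good_in_1900 + REPLACED_SINCE_1900
    after = before + MY_CONTRIBUTION
    return before, after, after / TOTAL_BULBS

print(progress(6_518))   # (9990, 10000, 1.0)   -- the string lights up: SHAZAM!
print(progress(1_519))   # (4991, 5001, 0.5001) -- barely past the halfway mark
```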

Play around with the numbers as you will; my point is that we have a lot more to do to understand the “hodgepodge of different components and mechanisms” that is the brain.

Will we be all the way there in another century? Who knows.

For extra credit: What if the number of bulbs in the string isn’t 10,000, but 100,000? All analogies have their limitations. In what way is this one limited by the need to posit a specific number of light bulbs in the string?

“Kubla Khan”: This is tricky and, come to think of it, deserves a post of its own. But, yes, I do think that work on the primer advanced my thinking about “Kubla Khan” by putting Gärdenfors’ idea of conceptual spaces at the forefront of my mind, which hadn’t been the case before in thinking about the poem. I’m now in a position to think about the poem as a structure of temporary conceptual spaces, in Gärdenfors’ sense, where the various sections of the poem are characterized by various dimensions. [As an exercise, you might want to plug this into my most recent paper on “Kubla Khan,” Symbols and Nets: Calculating Meaning in “Kubla Khan.”]

But let’s save that for another post. For this one I just note that, as I have explained in various places (e.g. Into Lévi-Strauss and Out Through “Kubla Khan”), my interest in “Kubla Khan” has been a major component of my interest in the computational view of mind and its realization in the brain. “Kubla Khan” is the touchstone by which I judge all else...sorta’. We’re making progress.

Paths Ahead

How do I understand some of the implications of the primer? Here are some quick and dirty notes.

Understanding the brain: Here the issue is: How do we go from my speculative account to empirical evidence? One route, but certainly not the only one, is to start looking at recent evidence for the semantic specialization of the neocortex. I cited some of this work in the primer, but did not attempt to relate it to the relational network notation in a detailed way. That must be done, but I’m not the one to do it. Or, rather, I cannot do it alone. For one thing, my knowledge of neocortical anatomy isn’t up to the job. While that can be remedied, that’s not enough. The task needs the participation of bench scientists, investigators who’ve done the kind of work that I’ve cited, as they’re the ones with a sensitive understanding of the empirical results.

Long-Term “pure” AI: The primer says clearly that symbolic computation is central to human cognition. But it also says that it is derived from, and implemented in, neural nets. That is the position that LeCun argued in his recent paper with Jacob Browning, What AI Can Tell Us About Intelligence. What does that imply about future research?

I think it means new architectures, new architectures for learning and for inference. What those might be....

Near- and Mid-Term Applied AI: I think that’s as it has always been: If you have to solve a problem now, use the best tool you can find. AI systems built on the kind of model suggested by the primer are not currently available, to my knowledge. If you need symbolic computation as well as a neural network, pick the best hybrid architecture you can find.
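To be a little more concrete about what “hybrid” means here, below is a deliberately crude Python sketch of one generic division of labor: a stand-in “neural” front end that emits a symbol, and a rule-follower that reasons over it. Every name in it is my own invention; it is not a proposal from the primer, nor from Browning and LeCun.

```python
# One generic shape for a neural/symbolic hybrid. The "neural" part is a
# hard-coded stub standing in for a trained network; the symbolic part is a
# tiny rule base. Purely illustrative.
from dataclasses import dataclass

@dataclass
class Percept:
    label: str      # symbol produced by the front end
    score: float    # its confidence

def neural_front_end(raw_input):
    """Stand-in for a trained network mapping raw input to a symbol."""
    return Percept(label="cat", score=0.93)

# Symbolic side: a small rule base and a chain of inferences over it.
RULES = {("cat", "is-a"): "animal", ("animal", "needs"): "food"}

def infer(start):
    """Follow is-a and needs links from a perceived symbol."""
    facts, current = [], start
    for rel in ("is-a", "needs"):
        nxt = RULES.get((current, rel))
        if nxt is None:
            break
        facts.append((current, rel, nxt))
        current = nxt
    return facts

percept = neural_front_end(raw_input=None)
if percept.score > 0.5:            # only assert confident percepts
    print(infer(percept.label))    # [('cat', 'is-a', 'animal'), ('animal', 'needs', 'food')]
```

The interesting engineering questions, of course, are the ones this sketch dodges: how the symbols and relations themselves are learned, and how the two sides share credit and blame during training.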
