
Thursday, December 12, 2024

Consciousness, Intelligence, and AI – Some Quick Notes [call it a mini-ramble]

The subject of consciousness keeps turning up in current discussions of AI and LLMs. Can AIs be conscious? Are current AIs conscious? Maybe a little?

What do consciousness and intelligence have to do with one another? I see no reason to think that dogs, rats, and cats are not conscious, though I have no idea how far down the phylogenetic chain consciousness exists. No one would argue that dogs, rats, and cats are as intelligent as we are. Intelligence is something different from consciousness, no?

And yet the issue gets raised. One line of (implicit) reasoning seems to go like this: It converses with me in an intelligent way; things that converse with me in an intelligent way (or even at all!) are conscious; therefore it must be conscious. And then there’s the fact that you can ask a chatbot about itself and it will say something, though just what it says depends on what it has been RLHFed to say. But, still, these LLMs have been “trained” on tons of text using the word “consciousness” and all its cognates, so, sure, they can use the word in human-seeming ways. That doesn’t make them conscious.

An extra stage of information processing?

Fact is, philosophers often argue about consciousness as though it were a further or extra stage in...in what? Human information processing? Thinking? Whatever. It adds something extra, something beyond what was there before. Let’s say it adds an extra bit of intelligence. Yeah, let’s say that.

So, a conscious being is more intelligent than its non-conscious simulacrum, to which it is otherwise identical. But then we have those philosophical zombies, which by definition behave exactly as we do, and they, presumably, are as intelligent as non-zombies. That undercuts the idea that consciousness adds intelligence.

Reorganization

This strikes me as being wrong-headed. I take my conception of consciousness from Wm Powers (Behavior: The Control of Perception). Consciousness enables reorganization. I explain this in a post from 2022: Consciousness, reorganization and polyviscosity, Part 1: The link to Powers. (It’s complicated, so I’m not going to try to summarize it here.) In that conception, consciousness really isn’t a further step in reasoning, though it may facilitate “moving around” in one’s mind (in particular, think of the default-mode network). Nor is reorganization a further step in reasoning, though it may seem like one. (I should say more about this, but later.)

Turing and non-Turing computing

And then we have the difference between Turing computation and, shall we say, neural computation. Turing computation requires a strict separation of processing from memory (e.g., see the article linked in this recent post). That’s not how the brain works. As I recall, von Neumann wondered about that in his little book on the brain: if neurons are memory, as they surely are, then where’s the processing? In a Turing device, learning means adding new blocks of memory. In a neural device, where memory and computation are not separate, learning means, well, it means reorganization, to use Powers’s term. Things have to change all over the place, more in some places than others, perhaps a lot in a few places and very damn little in most places.
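
To make that contrast concrete, here’s a minimal toy sketch in Python (my own illustration, not anything from Powers or von Neumann; every name in it is invented). In the first half, memory is a passive store and the program never changes; in the second, the weights are the memory, so learning has to change the processing machinery itself, a lot in a few places and very little in most:

import numpy as np

# Turing/von Neumann style: processing and memory are kept separate.
memory = {"facts": ["cats are mammals"]}              # passive storage

def lookup(query, memory):                            # a fixed program
    return [f for f in memory["facts"] if query in f]

memory["facts"].append("rats are mammals")            # learning = append a record;
                                                      # the program is untouched

# Network style: the weights ARE the memory.
rng = np.random.default_rng(0)
W = rng.normal(size=(4, 4))                           # everything learned lives here

def respond(x, W):
    return np.tanh(W @ x)                             # processing runs through the memory

# Learning nudges weights all over the network: reorganization, in Powers's term.
x, target = rng.normal(size=4), rng.normal(size=4)
for _ in range(10):
    y = respond(x, W)
    W -= 0.1 * np.outer((y - target) * (1 - y**2), x)  # every weight is free to move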

Well, consciousness is what allows that to happen. Consciousness mediates reorganization. Consciousness modulates and “distributes” change throughout the system. A Turing system doesn’t need consciousness in order to learn, to change. A network (or neural) system does. Consciousness is the mechanism that solves the learning problem for neural systems. I further hypothesize that the glial cells are crucial here.

I’m not sure how to formulate that, but that seems to be my key thought for the morning. It’s why I got out of bed at 5:41 AM.

Artificial neural nets

So, what of artificial neural nets and consciousness? Well, the machines themselves are Turing-type machines. No need for consciousness. Does that also imply no possibility of consciousness? Skip that for now. The neural net, however, is not a Turing-type machine. It’s, well, it is a network and, as such, does not distinguish between memory and processing. But it’s a network that’s stored in the memory of a Turing-type machine. And we don’t yet know how to reorganize such a network, that is, how to add new information to it without disturbing what’s already there. But we may well solve that problem one of these days. I don’t see why not.
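
Machine learning already has a name for one face of that problem: catastrophic forgetting. Here’s a toy sketch of it (a made-up minimal example of mine, not a claim about any real system): train a small linear model on one task, train it further on a second, and its error on the first climbs back up.

import numpy as np

rng = np.random.default_rng(1)
W = np.zeros((2, 8))                                  # the model's entire "memory"

def train(W, X, Y, steps=500, lr=0.05):
    for _ in range(steps):
        W -= lr * (W @ X - Y) @ X.T / X.shape[1]      # gradient step on squared error
    return W

# Two unrelated tasks, each a random input-output mapping.
XA, XB = rng.normal(size=(8, 50)), rng.normal(size=(8, 50))
YA, YB = rng.normal(size=(2, 50)), rng.normal(size=(2, 50))

W = train(W, XA, YA)
err_before = np.mean((W @ XA - YA) ** 2)              # task A learned

W = train(W, XB, YB)                                  # "add new information"
err_after = np.mean((W @ XA - YA) ** 2)               # task A largely overwritten

print(err_before, err_after)

There are partial remedies (replay buffers, regularization penalties, and the like), but nothing yet that looks like graceful, distributed reorganization.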

Would that make the artificial neural net conscious? Or will it only be simulating consciousness? Remember, it’s running on a non-conscious Turing-type machine. I think it’s only a simulation, not the real thing.

Another item – What chatbots can’t seem to do

For examples of what I have in mind, see these posts:

If that’s the case – and here I’m taking a leap – then I do believe it’s consciousness that connects us to the world. Not a surprising thought on the face of it, but in THIS context, some explaining is required. LATER.

And Superintelligence?

Well, if superintelligence is a species of intelligence, then it doesn’t imply consciousness. And if consciousness is our connection to the world, then a superintelligence is a zombie (yeah, I know, another leap). I mean, these SOTA LLMs already “know,” in some meaningful sense, more than any individual human. That’s some kind of superintelligence. Whatever kind of superintelligence that is, it doesn’t bother us.

Nor does the superintelligence of AlphaZero bother us. Superintelligence? But that’s narrow intelligence. Does the fact that it’s narrow mean it can’t be super? Does the emergence of the concept of AGI, its differentiation from AI, mark the breakdown of the classical concept of intelligence and its pursuit by artificial means?

What about the AIs that do protein folding and now 15-day weather prediction? We can’t do either of those things ourselves. Provisionally, why not?

At the moment I’m thinking: Superintelligence? Bring it on. 

Let them be super.

Consciousness is ours, but it also belongs to dogs, cats, rats, and other animals.

During the Day [addendum]

We are awake, mostly. When we’re awake, we’re conscious, mostly. Once we’re out of infancy, and perhaps toddlerhood, we are conscious most of the time. As I said, it is consciousness that connects us to the world.

Note, however, that consciousness is very mobile. Our attention can flit from one thing to another, quite freely. [Hence the literary technique, stream of consciousness.] I suppose we could analogize it to time-sharing in computers, but...

Consciousness HAS to be flexible and mobile. We live in an unpredictable world. We may get around by predicting the next thing or three, but sometimes the world intervenes, drastically. We’ve got to be able to disconnect from the prediction and attend to the real.

[I recall an email exchange with Walter Freeman about this. I asked him whether or not a state of global coherence was necessary for us to be able to make a quick change from one thing to another. He said yes.]

More later.

Saturday, August 20, 2022

Consciousness, reorganization and polyviscosity, Part 4: Glia

Early on in my reading and study of neuroscience I read about the glia, the brain cells between the neurons. Not much was known about them at the time, and they seem not to have been much studied. With my recent interest in polyviscosity I decided to check up on the glia.

Things have changed. Quite a bit is now known about them. They seem to be crucial. One recent article:

Robertson JM. The Gliocentric Brain. Int J Mol Sci. 2018 Oct 5;19(10):3033. doi: 10.3390/ijms19103033. PMID: 30301132; PMCID: PMC6212929.

Abstract: The Neuron Doctrine, the cornerstone of research on normal and abnormal brain functions for over a century, has failed to discern the basis of complex cognitive functions. The location and mechanisms of memory storage and recall, consciousness, and learning, remain enigmatic. The purpose of this article is to critically review the Neuron Doctrine in light of empirical data over the past three decades. Similarly, the central role of the synapse and associated neural networks, as well as ancillary hypotheses, such as gamma synchrony and cortical minicolumns, are critically examined. It is concluded that each is fundamentally flawed and that, over the past three decades, the study of non-neuronal cells, particularly astrocytes, has shown that virtually all functions ascribed to neurons are largely the result of direct or indirect actions of glia continuously interacting with neurons and neural networks. Recognition of non-neural cells in higher brain functions is extremely important. The strict adherence of purely neurocentric ideas, deeply ingrained in the great majority of neuroscientists, remains a detriment to understanding normal and abnormal brain functions. By broadening brain information processing beyond neurons, progress in understanding higher level brain functions, as well as neurodegenerative and neurodevelopmental disorders, will progress beyond the impasse that has been evident for decades.

I take it, then, that the glia are central to consciousness, reorganization, and polyviscosity. And that’s only one article. 

It seems to me that one effect of the computational view of neural function has been implicitly to encourage treating networks of neurons as passive switching networks that just happen to be constituted of living cells. But the fact that the cells are living has been treated as contingent, not essential to their switching functions. A great deal of neuroscience and cognitive science reads like this, and AI even more so. For that matter, that’s more or less how I thought about these matters for years. That had begun to change by September of 2014, when I wrote a post, What’s it mean, minds are built from the inside? Here are three paragraphs:

If we want a computer to hold vast intellectual resources at its command, it’s going to have to learn them, and learn them from the inside, just like we do. And we’re not going to know, in detail, how it does it, any more than we know, in detail, what goes on in one another’s minds.

How do we do it? It starts in utero. When neurons first differentiate they are, of course, living cells and further differentiation is determined in part by the neurons themselves. Each neuron “seeks” nutrients and generates outputs to that end. When we analyze neural activity we tend to treat it, and its activities, as components of a complicated circuit in service of the whole organism. But that’s not how neurons “see” the world. Each neuron is just trying to survive.

Think of ants in a colony or bees in a swarm. There may be some mysterious coherence to the whole, but that coherence is the result of each individual pursuing its own purposes, however limited those purposes may be. So it is with brains and neurons.

So, I’ve been moving away from the passive-switching-network view for a while.

But it took a passage from that 1988 paper by Fodor and Pylyshyn (pp. 34-45) to sharpen the issue for me:

Classical theories are able to accommodate these sorts of considerations because they assume architectures in which there is a functional distinction between memory and program. In a system such as a Turing machine, where the length of the tape is not fixed in advance, changes in the amount of available memory can be affected without changing the computational structure of the machine; viz by making more tape available. By contrast, in a finite state automaton or a Connectionist machine, adding to the memory (e.g. by adding units to a network) alters the connectivity relations among nodes and thus does affect the machine’s computational structure. Connectionist cognitive architectures cannot, by their very nature, support an expandable memory, so they cannot support productive cognitive capacities. The long and short is that if productivity arguments are sound, then they show that the architecture of the mind can’t be Connectionist. Connectionists have, by and large, acknowledged this; so they are forced to reject productivity arguments.

Jerry A. Fodor; Zenon W. Pylyshyn (1988). Connectionism and cognitive architecture: A critical analysis. Cognition, 28(1-2), 3–71. doi:10.1016/0010-0277(88)90031-5.
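
Their point can be put in a few lines of Python (my own toy sketch, not theirs; the numbers are arbitrary). Extending a tape leaves the machine’s transition table untouched, while adding a unit to a network changes its connectivity, and with it the outputs it computes, even on the old inputs:

import numpy as np

# Turing style: grow the memory without touching the program.
tape = [0] * 10
program = {(0, 0): (1, 1, +1)}       # (state, symbol) -> (new state, write, move)
tape.extend([0] * 10)                # more tape; `program` is exactly as before

# Network style: "more memory" means more units, which rewires the computation.
W = np.ones((3, 3))
x = np.ones(3)
y_before = W @ x                     # [3., 3., 3.]

W = np.pad(W, ((0, 1), (0, 1)), constant_values=0.5)  # add one unit
x = np.append(x, 1.0)
y_after = (W @ x)[:3]                # [3.5, 3.5, 3.5]: the old outputs have shifted

print(y_before, y_after)             # adding memory changed the computation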

THAT passage focused my attention on the problem of memory as a physical process. And that, in turn, led me back to Walter Freeman, which I discuss in this post, Physical constraints on computing, process and memory, Part 1 [LeCun]. And that, in turn, led me to my thoughts about polyviscosity – though I’d initially used the term “hyperviscosity,” I abandoned it because that term is already in use.

So, memory presents a physical problem (Fodor and Pylyshyn). That problem thus requires a physical solution: polyviscosity. And polyviscosity, it would seem, requires living tissue. Perhaps we’ll figure out how to implement it in inanimate materials, but at the moment living tissue is what we’ve got. And glial cells are central to the mechanisms of polyviscosity.

I’m tempted to say something like – and here I’m rambling again – that the glia implement consciousness. And the function of consciousness is to ‘operate’ the neuromolecular mechanisms of reorganization. And if that seems a bit circular, well, that’s the best I can do at the moment. The important point is that consciousness, reorganization, polyviscosity, and the glia are involved in the same phenomena.

More later.

* * * * *

Earlier posts in this series: