Thursday, July 4, 2024

On the significance of human language to the problem of intelligence (& superintelligence)

Back in May I did a post entitled How smart could an A.I. be? Intelligence in a network of human and machine agents. Toward the end I said this:

The question of machine superintelligence would then become:

Will there ever come a time when we have problem-solving networks where there exists at least one node that is assigned to a non-routine task, a creative task, if you will, that only a computer can perform?

That’s an interesting question. I specify non-routine task because we have all kinds of computing systems that are more effective at various tasks than humans are, from simple arithmetic calculations to such things as solving the structure of a protein string. I fully expect that more and more systems will evolve that are capable of solving such sophisticated, but ultimately routine, problems. But it’s not at all obvious to me that computational systems will eventually usurp all problem-solving tasks.

Remember that even as we’re developing ever more capable AI systems, we are also developing more sophisticated modes of human problem solving.

Earlier in the post I observed: “Human intelligence is not fixed in the way that animal intelligence is.” That’s what I want to comment on.

Animal intelligence is fixed by biology. Animals have capacities for sensation and movement that are fixed by biology, and those capacities bind them to a particular environment. Take them from that environment and they will perish.

Humans are not quite like that. We developed the capacity to communicate through language, and that capacity allowed us to develop new modes of thought. Just how that happened needs to be thought through in some detail, but I’m just going to move through it quickly for now. We notice patterns in the world and capture them in language by talking them through with our fellows. We become curious about those patterns, we ask why? and make up stories in explanation. In this process we work ourselves free of the limits of our biological capacities for sensing and acting. We abstract over the world and act in it in ways that no other animal can. From speech we developed writing, then calculation, and moved on to computation over the last hundred years or so, a progression David Hays and I sketched out in The Evolution of Cognition, which we published in 1990. With recent developments in artificial intelligence, we’re pushing that process one step farther, leading me to write about the Fourth Arena (beyond Matter, Life, and Culture).

Is there anything beyond this? That’s the question I’m trying to formulate. Is there a “superintelligence” beyond this? We are “free” of our biological embedding in a specific sensory-motor world, free in the sense that we can move beyond it. Tens of thousands of years ago we became the only (higher) primate that moved out of the tropics to inhabit every land-based environment. We’ve sent people to the moon and back, have had others living in orbit around the earth for months at a time, and can at least imagine establishing permanent colonies on the moon, Mars, and other bodies. This last round of achievements is inextricably interwoven with various kinds of computing technology. Further advance will require more computation, of various kinds.

The difference between, say, the intelligence of a fish and the intelligence of a rat is of a certain kind. The difference between the intelligence of a rat and that of a monkey is of the same kind. But the difference between the intelligence of an ape and that of a human is of a different kind. That difference comes about through language and collective culture. As far as I can tell, typical (Silicon Valley) speculation about superintelligence assumes that it is a kind of intelligence that is beyond human intelligence in the same way that human intelligence is beyond animal intelligence. The question I’m asking goes something like this:

In view of the fact that human intelligence is free of biological ‘binding’ to a specific environment, and in view of the fact that this freedom has allowed us to move through a succession of foundational architectures (speech, writing, calculation, computation, {whatever is happening now}), is there a fundamental capacity beyond THAT?

I have two responses: 1) It’s not obvious to me that there is. 2) I don’t know.

Computers are faster than brains, and can be built to have more capacity. What else is there? In a series of posts on AI, chess, and language, I’ve been looking at fundamental architectures: in effect, a family that is chess-like and a different family that is language-like. What else is there?

This brings me back to that earlier post that I referenced at the beginning of this one, and to the question I posed there:

Will there ever come a time when we have problem-solving networks where there exists at least one node that is assigned to a non-routine task, a creative task, if you will, that only a computer can perform?

I’m inching toward a way of suggesting that, if the answer to that question is “yes,” then that computer-based node must have some fundamental capacity that is beyond human capacity in the way that human capacity is beyond animal capacity. What could that (possibly) be? If such a thing were possible, if such a thing existed, then we could never know it, could we?

Note: In thinking about that question, you might want to review the remarks I made about epistemological independence of autonomous agents in Intelligence, A.I. and analogy: Jaws & Girard, kumquats & MiGs, double-entry bookkeeping & supply and demand.
