Saturday, July 6, 2024

Will AIs be able to create new knowledge?

This is a quick and dirty reflection on the question posed in the following tweet:

That question has been on my mind for some time: Will AIs be able to create new knowledge? Just what does that mean, “new knowledge”? It’s one thing to take an existing conceptual language and use it to say something that’s not been said before. It’s something else to come up with fundamentally new words. I think the latter is what that tweet is about. General relativity was something fundamentally new in kind, not just a complex elaboration of and variation on existing ideas.

In my previous post, On the significance of human language to the problem of intelligence (& superintelligence), I pointed out that animals are more or less biologically “wired” into their world. They can’t conceptualize their way out of it. The emergence of language in humans allowed us to bootstrap our way beyond the limits of our biological equipment.

I figure there are two aspects of that: 1) coming up with the new concept, and 2) verifying it. The tweet focuses on the first, but without the second, the capacity to come up with new concepts won’t get us very far. And when we’re talking about new concepts, I think we’re talking about adding a new element to the conceptual ontology. Verifying requires cooperation among epistemologically independent agents, agents that can make observations and replicate those observations. (See remarks in: Intelligence, A.I. and analogy: Jaws & Girard, kumquats & MiGs, double-entry bookkeeping & supply and demand.)

Now, let’s think about the current regime of deep learning technology, LLMs and the rest. These devices learn their processes and structures from large collections of data. They’re going to acquire the ontology that’s latent in the data. If that is so, how are they going to be able to come up with new items to add to the ontology? It’s not at all obvious to me that they’ll be able to do so. The data on which they learn, that’s their environment. It seems to me that they must be as “locked” into that environment as an animal is. Further, adding a new item to the ontology would require changing the network itself, and once training is complete the network is fixed; that is beyond the capacity of these devices.

And then there’s the requirement of cooperation among independent epistemological agents. The phenomenon of confabulation is evidence for the importance of such agents. The only constraint inherent in a single agent operating on its own is logical consistency: that it emit collections of tokens that are consistent with the existing collection. The only thing that keeps humans from continuous confabulation is the fact that we must communicate with one another. It is the existence of a world independent of our individual awareness that provides us with a way of grounding our statements, of freeing ourselves from the pitfalls of our linguistic fluency.

* * * * *

I’ve been working my way through episodes of House, M.D. Every episode contains segments where House and his team participate in differential diagnosis, which involves rapid conversational interaction among them. In the first episode of season 4, “Alone,” House no longer has a team. He ends up bouncing ideas off of a janitor. That doesn’t go so well.
