I have a new article up at 3 Quarks Daily:
Western Metaphysics is Imploding. Will We Raise a Phoenix from The Ashes? [Catalytic AI], https://3quarksdaily.com/3quarksdaily/2024/02/western-metaphysics-is-imploding-will-we-raise-a-phoenix-from-the-ashes-catalytic-ai.html
It is about philosophy, though not philosophy as it is currently practiced as an academic discipline. I like it. In fact I like it a lot.
Why? Because it’s built on a number of articles I’d previously published in 3QD as well as other work I’d published about AI over the past year, ChatGPT in particular. When I finally posted it on Sunday afternoon, it felt good, really good. “Man, I’ve got something here,” said I to myself.
When I got up early Monday morning, so early one might call it late Sunday night, I looked at the article and started glancing through it. “Holy crap!” thought I to myself, “when people start reading this they’re going to think they’ve landed in one of those classic New Yorker essays that wander all over the place before getting to the point, if there is one. What happened?”
Those are two very different reactions: “I’ve got something here” vs. “Holy crap!” Conclusion: I’ve got some work to do.
I might as well begin here and now.
Meaning, intention, and AI
One of my friends remarked, “you are too smart for me.” I took that to be a polite and diplomatic way of saying that he figured there must be something there but he sure couldn’t find it. How’d I get from his remark to that interpretation? I can tell you what it didn’t involve: conscious, deliberate thought. I simply knew that’s what he was saying. I intuited the intention behind my friend’s words, an intention that I’ve subsequently verified.
Intentionality – closely related to but not quite the same as intention – is at the heart of the classic argument against AI. As far as I know that argument was first articulated by Hubert Dreyfus back in 1969 or ’71, in that time frame, but it is probably best known from John Searle’s Chinese Room argument, which first appeared in 1980 in Behavioral and Brain Sciences. That argument has been refitted for the current era, perhaps most visibly by Emily Bender, who coined the phrase “stochastic parrot” to characterize the behavior of Large Language Models (LLMs).
I accept that argument. The problem, however, is that it’s one thing to have made that argument at a time when AI systems responded to human input in a relatively simple and straightforward way, which was the case when Dreyfus and Searle made their arguments. Back then the argument supplied a fairly satisfying – at least to some people – account of why AI won’t work. Now, in the face of ChatGPT’s much more impressive performance, you are asking a lot more from that argument, more, I’ve argued elsewhere, than it can reasonably deliver.
The issue here is the gap between our first-person experience of the machine and what the machine is actually doing. Back in Searle’s time the philosophical concept of intentionality was able to account for that gap, at least for some of those familiar with the concept. In the case of ChatGPT the nature of that gap is quite different. To a first approximation, our first-person experience is that we’re conversing with a person that has a strange name, ChatGPT. Some people have strange names and stilted discourse is not uncommon. If ChatGPT is in fact a person, then there is no gap to account for. We know, however, that ChatGPT is NOT a person. It’s a machine.
We are now faced with a HUGE gap. What’s the machine doing? We don’t know. The people who built these systems can’t tell us what they’re doing – a point I make in the first section of the article after the introduction, “Views about Machine Learning and Large Language Models.” They can’t even tell themselves what the machine is doing, much less craft a simplified account, based on metaphors and analogies, for the rest of us. They know how the system builds an LLM and how it accesses the LLM, but they don’t know what’s going on inside the LLM itself, with its billions and billions of parameters.
That’s one thing. This business about bridging the gap between first-person experience and what’s really going on, that’s a second thing. That’s a view of philosophy articulated by Peter Godfrey-Smith, which I discuss in the second part of the article, “Philosophy’s integrative role.” “Integrative” is the word he uses for the function philosophy plays in the larger intellectual discourse. His argument is that philosophy has largely abandoned that role and that it needs to get back to it. My argument is that nowhere is that more important than in the case of artificial intelligence.
I spend the rest of the article making that point. First, I digress into a section entitled “Tyler Cowen, Virtuoso Infovore,” where I also discuss Richard Macksey. Cowen has recently argued, in effect, that the very greatest economists, in addition to their specialized work within economics, have also performed that integrative role on behalf of the larger intellectual public. Then I get to the argument I’ve been chasing all along, “Artificial Intelligence as a catalyst for intellectual integration,” which you are welcome to read.
But I want to get back to my friend’s response to my article and say a few words about that.
Intention, intuition, and deduction in “intelligence”
How did my friend arrive at that statement he made to me? I don’t know. But I’m guessing it was mostly by intuition rather than explicit deductive reasoning. He’d read the article and was puzzled, conjured up our relationship, and, voilà! out comes the statement, “you are too smart for me.” Simple as pie.
Could he have arrived at that statement through a process of rational deduction? Possibly. How might that have gone?
ONE: FACT: The article doesn’t make sense to me.
TWO: There are three possibilities: 1) It’s nonsense, or at least deeply flawed. 2) It’s fine but too abstract for me. 3) Some combination of the first two.
THREE: PREMISE: Bill’s a smart guy. CONCLUSION: It’s probably 2 or 3. What do I say?
FOUR: FACT: Bill’s a friend. THEREFORE: I’ll give him the benefit of the doubt and base my response on 2.
FIVE: PREMISE: The article is too abstract for me. PREMISE: I’m smart. FACT: Bill made the argument. THEREFORE: Bill must be very smart...
SIX: Here’s what I’ll say: “...you are too smart for me.”
As logical arguments go, it’s rather rickety. I would hate to have to formulate it in terms of formal logic. But you get the idea. Logically, it’s a tangled mess.
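Just to give the flavor of that tangle, here’s one way the core of it might be roughed out in quasi-logical notation. The letters and predicates are my own stand-ins – S for “the article doesn’t make sense to me,” N for “it’s nonsense,” A for “it’s fine but too abstract for me” – nothing rigorous:

\[
\begin{aligned}
& S && \text{fact: the article does not make sense to me} \\
& S \rightarrow (N \lor A \lor (N \land A)) && \text{the three possibilities} \\
& \mathrm{Smart}(\mathrm{Bill}) \Rightarrow P(A \mid S) > P(N \mid S) && \text{premise: Bill is a smart guy} \\
& \mathrm{Friend}(\mathrm{Bill}) \Rightarrow \text{respond as if } A && \text{benefit of the doubt} \\
& A \land \mathrm{Smart}(\mathrm{me}) \Rightarrow \mathrm{VerySmart}(\mathrm{Bill}) && \text{so: Bill must be very smart}
\end{aligned}
\]

Notice that even this toy version has to mix bare propositions, a probability judgment, and a pragmatic rule about friendship just to limp from the first line to the last. That is the tangle.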
In the annoying manner of textbooks, I leave it as an exercise for the reader to make a similar argument about how I knew what my diplomatic friend was telling me.
I do not believe that ChatGPT is capable of anything like this, though, given that there’s been tons of fiction in its training corpus, containing millions and millions of lines of dialog, it might provide a passable simulacrum in this or that case. The situation will not change when the underlying LLM has more parameters and has been trained on a larger dataset, assuming there’s one to be had. The limitation is inherent in the technology.
Critics like Gary Marcus argue that LLMs need to be augmented by the capacity for symbolic reasoning if they are to be truly intelligent, whatever that is. I agree. Symbolic reasoning will get you a lot, but not a whole hell of a lot in the situation I’ve been discussing here. That pseudo-deduction I just went through? Symbolic reasoning will give you the capacity to do that, only in even more detail.
On that basis I don’t expect that AI and ML systems will be able to handle the nuances of human interaction in the foreseeable future, if ever. We’ve come a long way, and we have a long way to go.