Friday, March 4, 2022

Rodney Brooks has been making predictions: Concerning AI, “We’re still back in phlogiston land…”

Back on January 1, 2018, Rodney Brooks issued fairly specific predictions in three areas: 1) self-driving cars, 2) Artificial Intelligence, machine learning, and robotics, and 3) progress in the space industry. There are over a dozen predictions in each of those three areas. Brooks has updated those predictions each year since and plans to do so until 2050. You can find the most recent update, for January 1, 2022, here: https://rodneybrooks.com/predictions-scorecard-2022-january-01/.

I’m not going to reprise any of those specific updates here, but I’d like to copy over some of his commentary for that second area, Artificial Intelligence, machine learning, and robotics.

Where’s the next big thing?

Back in 2018 I predicted that “the next big thing”, to replace Deep Learning as the go-to hot topic in AI, would arrive somewhere between 2023 and 2027. I was convinced of this as there has always been a next big thing in AI. Neural networks have been the next big thing three times already. But others have had their shot at that title too, including (in no particular order) Bayesian inference, reinforcement learning, the primal sketch, shape from shading, frames, constraint programming, heuristic search, etc.

We are starting to get close to my window for the next big thing. Are there any candidates? I must admit that so far they all seem to be derivatives of deep learning in one way or another. If that is all we get I will be terribly disappointed, and probably have to give myself a bad grade on this prediction.

So far the things that I see bubbling around and getting people excited are transformers, foundation models, and unsupervised learning.

Concerning transformers:

These language models are over-interpreted by people as understanding what they are spitting out, especially when the press writes stories where they have cherry-picked responses. But they come with incredible problems, including copyright violations, intellectual theft of code, and even outright life-threatening danger when they find their way into consumer products. Tech companies have a real problem in rushing some of these systems to market.

Continuing on:

Foundation models are large trained models that start out as a basis for tuning particular applications. There have been some self-important announcements with a sort of me-too feel (“Hey, I produced a foundation model too!!”), which don’t amount to much of an intellectual contribution. If this turns out to be the next big thing I am going to have to rip off my mask of equanimity and revert to my natural state of being a grumpy old man.

Unsupervised learning is an idea that has been around for a long time. Not a big intellectual jump to want to get it into deep learning; it may be a hard technical problem, but not an intellectual breakthrough this time around.
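
For what it’s worth, all three of those candidates ride on the same pretrain-then-adapt recipe. Here’s a minimal sketch of that workflow in Python, assuming the Hugging Face transformers library and the public bert-base-uncased checkpoint; the two-label classification task and the example sentence are purely illustrative:

    # Sketch of the "foundation model" workflow: take a large pretrained
    # transformer and tune it for one downstream application.
    # Assumes the Hugging Face `transformers` library; the checkpoint name
    # and the two-label task are illustrative, not anyone's actual system.
    import torch
    from transformers import AutoModelForSequenceClassification, AutoTokenizer

    # Start from a large pretrained model -- the "foundation."
    tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
    model = AutoModelForSequenceClassification.from_pretrained(
        "bert-base-uncased", num_labels=2
    )

    # Freeze the pretrained weights; only the small new task head will train.
    for param in model.base_model.parameters():
        param.requires_grad = False

    # One illustrative fine-tuning step on a made-up labeled example.
    batch = tokenizer(["an example sentence"], return_tensors="pt")
    labels = torch.tensor([1])
    loss = model(**batch, labels=labels).loss
    loss.backward()  # gradients flow only into the task head

The unsupervised (strictly, self-supervised) pretraining that produced those frozen weights is the expensive part; the application-specific tuning is comparatively cheap, which is presumably why announcements of new foundation models have proliferated.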

The problem with AI

I have often stated that I think the field of AI, despite the great practical successes recently of Deep Learning, is probably a few hundred years away from where most people think it is. We’re still back in phlogiston land, not having yet figured out the elements, including oxygen.

Read that again and think about it. Does he really mean that? Why would he say such a thing? Is he nuts?

Let us assume that he’s correct. Given how impressive some current AI demonstrations are, can we not take Brooks’s view as implying that we stand to learn, or at least have the potential to learn, a great deal about ourselves and our own capacities? [Yeah, I know, that needs some unpacking. Maybe later.]

After he goes through his 14 specific predictions, Brooks reminds us of his bona fides:

AI, Robotics, and Machine Learning are areas that I have a real personal investment in. I wrote a terrible Masters thesis on ML back in 1977. I joined the Stanford AI Lab later that year, then the MIT AI Lab four years later, and became director of that lab in 1997, merging it with LCS (Lab for Computer Science) to form MIT CSAIL in 2003, the largest lab at MIT, still today. I have founded six AI and robotics companies. After 45 years in the academic and industry trenches can I be unbiased? Probably not.

I know that many who disagree with me will dismiss me for all that experience that I have. Perhaps those who agree with me should also dismiss me for the same reason!!

That last paragraph is interesting. Why would someone dismiss him for all his experience? He really knows this stuff, no? How can anyone look at this area without being biased in some way? Doesn’t naivete impose its own biases?

As you know, I’m of the belief that we’re in transition from one intellectual era to another. To which era do AI, robotics, and machine learning belong, the old one or the new? Maybe they straddle both. Maybe AI and robotics are old, machine learning new. Or maybe the perceptron is old, transformers new? Are we talking phlogiston or oxygen? How do you tell?

He goes on to state:

My current belief is that it all gets back to the symbol grounding problem, and even more deeply to adopting a computational approach to AI, Robotics, and ML (and I expect almost no one will agree with that latter claim).

Color me sympathetic to that last claim, that the computational approach is problematic. I’ve written a post on Brooks’s views: “Has the computer metaphor for the mind run out of steam?”, New Savanna, June 19, 2019, https://new-savanna.blogspot.com/2019/06/has-computer-metaphor-for-mind-run-out.html.

He concludes by mentioning Brian Cantwell Smith, The Promise of Artificial Intelligence.

In this book Smith introduces the idea of registration, as a maintained relationship between an object outside of us and what goes on inside our head (and he would have it also in a classical computer) despite changes in perception and even context.

I’ve not read the book, but I’ve read reviews. I believe Smith introduces a distinction between reckoning and judgement. Reckoning is what computers do, but only humans are capable of judgement, at least so far. Intelligence requires judgement. I think we do need a fairly specific term for what it is that AI systems do. I kind of like “reckoning”. Note: Smith talks about registration in the video I've embedded here.
