Rodney Brooks, 17 May 2019: Brooks starts by discussing various predictions that various experts (such as Elon Musk) have made about self-driving automobiles, making the point that they have had to revise over-optimistic predictions time and again. Thus Chris Urmson had once predicted that driverless cars would be plentiful by 2020. Brooks goes on to note:
Now let’s take note of this. Chris Urmson was the leader of Google’s self-driving car project, which became Waymo around the time he left, and is the CEO of a very well funded self-driving start up. He says “30 to 50 years”. Chris Urmson has been a leader in the autonomous car world since before it entered mainstream consciousness. He has lived and breathed autonomous vehicles for over ten years. No grumpy old professor is he. He is a doer and a striver. If he says it is hard then we know that it is hard.
I happen to agree, but I want to use this reality check for another thread.
If we were to have AGI, Artificial General Intelligence, with human level capabilities, then certainly it ought to be able to drive a car, just like a person, if not better. Now a self driving car does not need to have general human level intelligence, but a self driving car is certainly a lower bound on human level intelligence. Urmson, a strong proponent of self driving cars says 30 to 50 years.
So what does that say about predictions that AGI is just around the corner? And what does it say about it being an existential threat to humanity any time soon? We have plenty of existential threats to humanity lining up to bash us in the short term, including climate change, plastics in the oceans, and a demographic inversion. If AGI is a long way off then we can not say anything sensible today about what promises or threats it might provide as we need to completely re-engineer our world long before it shows up, and when it does show up it will be in a world that we can not yet predict.
Do people really say that AGI is just around the corner? Yes, they do…
Ray Kurzweil is the Energizer Bunny of AI, the Timex Watch ("It takes a licking and keeps on ticking."):
Ray Kurzweil still maintains, in Martin Ford’s recent book, that we will see a human level intelligence by 2029–in the past he has claimed that we will have a singularity by then as the intelligent machines will be so superior to human level intelligence that they will exponentially improve themselves...But that [2050] is the low end of when Urmson thinks we will have autonomous cars deployed. Suppose he is right about his range. And suppose I am right that autonomous driving is a lower bound on AGI, and I believe it is a very low bound. With these very defensible assumptions then the seemingly sober experts in Martin Ford’s new book are on average wildly optimistic about when AGI is going to show up.

In the comments, Brooks notes:
AGI has been delayed.
In the fullness of time we will have capable self driving cars. But not next year, and likely not without all sorts of changes to infrastructure and communications between vehicles (whether peer to peer or through a centralized broker). It is going to take a long time.

Note what Brooks is doing here (highlighted in yellow above). 1) The early part of Brooks's post is devoted, not to AGI, but to driverless cars. Time and again experts have made predictions that have proven wrong. Do they know what they're talking about? 2) He then asserts that the capacity for driving an automobile without human help is the lower bound for AGI. 3) His final move is to point out that, if one expert has revised his arrival date from 2020 to a range, 2050-2070, then someone else's prediction about AGI (Kurzweil, in Ford's book, putting it at 2029) is likely nonsense.
I like that. Brooks is in effect calibrating the AGI space by using a specific task, autonomous driving, as a marker. But what if AGI isn't single dimensional, so that a single lower bound (and, by implication, a single upper bound, above which, I assume, we find superintelligence) doesn't make sense? What if AGI is multidimensional, with a lower and an upper bound on each dimension?