Saturday, April 2, 2022

Chess as THE prototypical problem for AI [w/ a Rodney Brooks postscript on prediction sins]

Back in August of 2020 I had some thoughts about chess as a prototypical domain for AI. My point was that chess is a very specialized conceptual domain and is not at all characteristic of human thought. I’m now looking at Luke Muehlhauser’s useful survey, What should we learn from past AI forecasts? (2016). In discussing “The Peak of AI Hype” he observes:

For example, Moravec (1988) claims that John McCarthy founded the Stanford AI project in 1963 “with the then-plausible goal of building a fully intelligent machine in a decade” (p. 20).

In some cases, this optimism may have been partly encouraged by the hypothesis that solving computer chess might be roughly equivalent to solving AI in full generality. Feigenbaum & McCorduck (1983), p. 38, report:

These young [AI scientists of the 1950s and 60s] were explicit in their faith that if you could penetrate to the essence of great chess playing, you would have penetrated to the core of human intellectual behavior. No use to say from here that somebody should have paid attention to all the brilliant chess players who are otherwise not exceptional, or all the brilliant people who play mediocre chess. This first group of artificial intelligence researchers… was persuaded that certain great, underlying principles characterized all intelligent behavior and could be isolated in chess as easily as anyplace else, and then applied to other endeavors that required intelligence.

Another reason for early optimism might have been that some AI scientists thought it might be relatively easy to learn how the human mind worked.

Whoops! We’ve learned a lot since then, haven’t we? But not so much as to inoculate us against hype.

I note that Muehlhauser says nothing about the hype attending machine translation (MT) in the late 1950s and early 1960s and the subsequent collapse of funding when those rosy predictions failed to materialize. On the one hand that’s understandable, since the group of researchers working on MT was distinctly different from that working on AI. However, MT has been within the purview of AI for the last quarter of a century or more and has many conceptual and technical issues in common with AI.

Rodney Brooks: Seven Deadly Sins of Predicting AI

In September of 2017 Rodney Brooks posted an essay, [FoR&AI] The Seven Deadly Sins of Predicting the Future of AI.

They are:

  1. Over and underestimating
  2. Imagining magic
  3. Performance versus competence
  4. Suitcase words
  5. Exponentials
  6. Hollywood scenarios
  7. Speed of deployment

From the first, over and underestimating:

Roy Amara was a futurist and the co-founder and President of the Institute For The Future in Palo Alto, home of Stanford University, countless venture capitalists, and the intellectual heart of Silicon Valley. He is best known for his adage, now referred to as Amara’s law:

We tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run.

There is actually a lot wrapped up in these 21 words which can easily fit into a tweet and allow room for attribution. An optimist can read it one way, and a pessimist can read it another. It should make the optimist somewhat pessimistic, and the pessimist somewhat optimistic, for a while at least, before each reverting to their norm.

A great example of the two sides of Amara’s law that we have seen unfold over the last thirty years concerns the US Global Positioning System. Starting in 1978 a constellation of 24 satellites (30 including spares) were placed in orbit. A ground station that can see 4 of them at once can compute the latitude, longitude, and height above a version of sea level. An operations center at Schriever Air Force Base in Colorado constantly monitors the precise orbits of the satellites and the accuracy of their onboard atomic clocks and uploads minor and continuous adjustments to them. If those updates were to stop GPS would fail to have you on the correct road as you drive around town after only a week or two, and would have you in the wrong town after a couple of months.

The goal of GPS was to allow precise placement of bombs by the US military. That was the expectation for it. The first operational use in that regard was in 1991 during Desert Storm, and it was promising. But during the nineties there was still much distrust of GPS as it was not delivering on its early promise, and it was not until the early 2000’s that its utility was generally accepted in the US military. It had a hard time delivering on its early expectations and the whole program was nearly cancelled again and again.

Today GPS is in the long term, and the ways it is used were unimagined when it was first placed in orbit. My Series 2 Apple Watch uses GPS while I am out running to record my location accurately enough to see which side of the street I ran along. The tiny size and tiny price of the receiver would have been incomprehensible to the early GPS engineers. GPS is now used for so many things that the designers never considered. It synchronizes physics experiments across the globe and is now an intimate component of synchronizing the US electrical grid and keeping it running, and it even allows the high frequency traders who really control the stock market to mostly not fall into disastrous timing errors. It is used by all our airplanes, large and small to navigate, it is used to track people out of jail on parole, and it determines which seed variant will be planted in which part of many fields across the globe. It tracks our fleets of trucks and reports on driver performance, and the bouncing signals on the ground are used to determine how much moisture there is in the ground, and so determine irrigation schedules.

GPS started out with one goal but it was a hard slog to get it working as well as was originally expected. Now it has seeped into so many aspects of our lives that we would not just be lost if it went away, but we would be cold, hungry, and quite possibly dead.
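Brooks’s aside that a receiver seeing four satellites can fix its latitude, longitude, and height is worth unpacking: each satellite’s pseudorange measurement mixes true distance with a common receiver-clock error, giving four equations in four unknowns (three coordinates plus the clock term), which a standard Newton/Gauss-Newton iteration solves. The sketch below illustrates this with invented satellite positions and a synthetic receiver; the numbers and the solver are my illustration, not anything from Brooks’s post.

```python
# Minimal sketch of the 4-satellite GPS fix. A pseudorange is
# ||receiver - satellite|| plus a receiver-clock bias common to all
# measurements, so 4 satellites give 4 equations in 4 unknowns.
# All positions and ranges here are invented for illustration.
import numpy as np

def solve_position(sats, pseudoranges, iters=50):
    """Newton iteration for receiver position and clock bias (meters)."""
    est = np.zeros(4)  # [x, y, z, clock bias * c], start at Earth's center
    for _ in range(iters):
        diffs = est[:3] - sats                      # receiver-to-satellite offsets
        ranges = np.linalg.norm(diffs, axis=1)      # geometric distances
        residuals = pseudoranges - (ranges + est[3])
        # Jacobian: d(range)/d(position) is the unit vector, d/d(bias) is 1
        J = np.hstack([diffs / ranges[:, None], np.ones((len(sats), 1))])
        est += np.linalg.lstsq(J, residuals, rcond=None)[0]
    return est

# Synthetic scenario: satellites roughly at GPS altitude, receiver on the surface.
sats = np.array([
    [20.2e6,      0.0, 10.0e6],
    [-15.0e6,  15.0e6, 12.0e6],
    [0.0,     -20.2e6,  9.0e6],
    [10.0e6,   10.0e6, 22.0e6],
])
truth = np.array([1.1e6, -2.2e6, 6.0e6])  # true receiver position (m)
clock = 3.0e5                             # clock bias expressed as meters
pr = np.linalg.norm(sats - truth, axis=1) + clock
est = solve_position(sats, pr)
```

With exact measurements the recovered position and clock term match the synthetic truth; real receivers add weighting for measurement noise and more satellites, but the core solve is this same small system.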

There's much more in the post.
