Wednesday, April 6, 2022

Superhuman AI, a 21st century Philosopher's Stone?

I’ve been reading perhaps more than is healthy about such things as superintelligence and AI takeoff (gradual or FOOM!) and am wondering whether the idea of superintelligence is a 21st Century equivalent of the Philosopher’s Stone (Arabic: ḥajar al-falāsifa, Latin: lapis philosophorum) of Olde. Superintelligence is the idea of a being, generally thought of as an AI of some kind, that is more intelligent than humans are. The Philosopher’s Stone is a legendary alchemical substance believed to transmute base metals into gold.

I dropped this concern into the Twitterverse last evening and Ted Underwood observed:

Good question, thought I to myself. What are those other parts of thinking?

This morning Ted came back with:

So I took a look at that article, “The Myth of a Superhuman AI,” which is from 2017 and is by Kevin Kelly. Very interesting. Kelly sets the stage:

Yet buried in this scenario of a takeover of superhuman artificial intelligence are five assumptions which, when examined closely, are not based on any evidence. These claims might be true in the future, but there is no evidence to date to support them. The assumptions behind a superhuman intelligence arising soon are:

  1. Artificial intelligence is already getting smarter than us, at an exponential rate.
  2. We’ll make AIs into a general purpose intelligence, like our own.
  3. We can make human intelligence in silicon.
  4. Intelligence can be expanded without limit.
  5. Once we have exploding superintelligence it can solve most of our problems.

In contradistinction to this orthodoxy, I find the following five heresies to have more evidence to support them.

  1. Intelligence is not a single dimension, so “smarter than humans” is a meaningless concept.
  2. Humans do not have general purpose minds, and neither will AIs.
  3. Emulation of human thinking in other media will be constrained by cost.
  4. Dimensions of intelligence are not infinite.
  5. Intelligences are only one factor in progress.

If the expectation of a superhuman AI takeover is built on five key assumptions that have no basis in evidence, then this idea is more akin to a religious belief — a myth.

I like that set of parallels a lot, a whole lot. It got me so excited that, rather than finish reading the article, I decided to make this post.

What’s in question is the nature of the world: What kinds of things and processes exist now or could exist in the future? The alchemical idea of a philosopher’s stone is embedded in a network of ideas about the nature of physical reality, its objects, processes, and actions. The same for superintelligence. Superintelligence is about minds, brains, computers, and about the future.

I know very little about the history of alchemy, but I do know that no less a thinker than Isaac Newton took it quite seriously:

Of an estimated ten million words of writing in Newton's papers, about one million deal with alchemy. Many of Newton's writings on alchemy are copies of other manuscripts, with his own annotations. Alchemical texts mix artisanal knowledge with philosophical speculation, often hidden behind layers of wordplay, allegory, and imagery to protect craft secrets. Some of the content contained in Newton's papers could have been considered heretical by the church.

By the 19th Century, however, alchemy no longer held the attention of the most serious and venturesome thinkers. But it persists in popular culture, e.g. the Harry Potter universe, or Fullmetal Alchemist.

That is to say, the idea of the philosopher’s stone didn’t disappear overnight. It was a gradual process, taking place over centuries, as the (so-called) scientific revolution radiated out from its earliest footholds in 16th Century astronomy and physics. Will the idea of artificial superintelligence undergo a similar process?

* * * * *

Question: Why are the prophets of Superintelligence more worried about the danger it might present to humanity than interested in the possibility that it will reveal to us the Secrets of the Universe? See my post from March 5, These bleeding-edge AI thinkers have little faith in human progress and seem to fear their own shadows.

* * * * *

A couple of hours later: I’ve now read Kevin Kelly’s article. Very good. Some passages:

Likewise, there is no ladder of intelligence. Intelligence is not a single dimension. It is a complex of many types and modes of cognition, each one a continuum. Let’s take the very simple task of measuring animal intelligence.

Temperature is not infinite — there is finite cold and finite heat. There is finite space and time. Finite speed. Perhaps the mathematical number line is infinite, but all other physical attributes are finite. It stands to reason that reason itself is finite, and not infinite. So the question is, where is the limit of intelligence? We tend to believe that the limit is way beyond us, way “above” us, as we are “above” an ant. Setting aside the recurring problem of a single dimension, what evidence do we have that the limit is not us? Why can’t we be at the maximum?

Many proponents of an explosion of intelligence expect it will produce an explosion of progress. I call this mythical belief “thinkism.” It’s the fallacy that future levels of progress are only hindered by a lack of thinking power, or intelligence.[…] No super AI can simply think about all the current and past nuclear fission experiments and then come up with working nuclear fusion in a day. A lot more than just thinking is needed to move between not knowing how things work and knowing how they work.

Likewise, the evidence so far suggests AIs most likely won’t be superhuman but will be many hundreds of extra-human new species of thinking, most different from humans, none that will be general purpose, and none that will be an instant god solving major problems in a flash. Instead there will be a galaxy of finite intelligences, working in unfamiliar dimensions, exceeding our thinking in many of them, working together with us in time to solve existing problems and create new problems.

* * * * *

An article by Melanie Mitchell led me to this statement in the New York Times:

Eric Horvitz, who oversees much of the A.I. work at Microsoft, argued that neural networks and related techniques were small advances compared with technologies that would arrive in the years to come.

“Right now, what we are doing is not a science but a kind of alchemy,” he said.
