Saturday, November 12, 2022

Dreams of artificial people, through history [ontological ambiguity]

From an article by Stephen Marche, “The Imitation of Consciousness: On the Present and Future of Natural Language Processing,” Literary Hub, June 23, 2021.

The article is a general discussion of the title topic. This post is NOT a precis of that discussion. I simply want to highlight a few passages.

AI as existential threat

Engineers and scientists fear artificial intelligence in a way they have not feared any technology since the atomic bomb. Stephen Hawking has declared that “AI could be the worst event in the history of civilization.” Elon Musk, not exactly a technophobe, calls AI “our greatest existential threat.” Outside of AI specialists, people tend to fear artificial intelligence because they’ve seen it at the movies, and it’s mostly artificial general intelligence they fear, machine sentience. It’s Skynet from Terminator. It’s Data from Star Trek. It’s Ex Machina. It’s Her. But artificial general intelligence is as remote as interstellar travel; its existence is imaginable but not presently conceivable. Nobody has any idea what it might look like. Meanwhile, the artificial intelligence of natural language processing is arriving. In January, 2021, Microsoft filed a patent to reincarnate people digitally through distinct voice fonts appended to lingual identities garnered from their social media accounts. I don’t see any reason why it can’t work. I believe that, if my grandchildren want to ask me a question after I’m dead, they will have access to a machine that will give them an answer and in my voice. That’s not a “new soul.” It is a mechanical tongue, an artificial person, a virtual being. The application of machine learning to natural language processing achieves the imitation of consciousness, not consciousness itself, and it is not science fiction. It is now.

The Turing Test [1950]

The Turing test appeared in the October 1950 issue of Mind. It posed two questions. The first was simple and grand: “Can machines think?” The second took the form of the famous imitation game. An interrogator is faced with two beings, one human and the other artificial. The interrogator asks a series of questions and has to decide who is human and what is artificial. The questions Turing originally imagined for the imitation game have all been solved: “Please write me a sonnet on the subject of the Forth Bridge.” “Add 34957 to 70764.” “Do you play chess?” These are all triflingly easy by this point, even the composition of the sonnet. For Turing, “these questions replace our original, ‘Can machines think?’” But the replacement has only been temporary. The Turing test has been solved but the question it was created to answer has not. At the time of writing, this is a curious intellectual conundrum. In the near future, it will be a social crisis.

The actual text of Turing’s paper, after the initial proposition of the imitation game and a general introduction to the universality of digital computation, consists mainly in overcoming a series of objections to the premise that machines can think. These range from objections Turing dismisses out of hand—the theological objection that “thinking is a function of man’s immortal soul”—to those that are more serious, such as the limitations of discrete-state machines implied by Gödel’s theory of sets. (This objection does not survive because the identical limitation applies to all conceivable expressions of human intelligence as well.)

Revised:

What is shocking about the artificial intelligence of natural language processing is not that we’ve created new consciousnesses, but that we’ve created machines we can’t tell apart from consciousnesses. The question isn’t going to be “Can machines think?” The question isn’t even going to be “How can you create a machine that imitates a person?” The question is going to be: “How can you tell a machine from a person?”

Norbert Wiener lists historical moments

This is the passage that originally caught my attention:

Early in Cybernetics, the foundational text of engineered communication, Norbert Wiener offers a brief history of the dream of artificial people. “At every stage of technique since Daedalus or Hero of Alexandria the ability of the artificer to produce a working simulacrum of a living organism has always intrigued people. This desire to produce and to study automata has always been expressed in terms of the living technique of the age,” he writes. “In the days of magic, we have the bizarre and sinister concept of the Golem, that figure of clay into which the Rabbi of Prague breathed life with the blasphemy of the Ineffable Name of God. In the time of Newton, the automaton becomes the clockwork music box, with the little effigies pirouetting stiffly on top. In the nineteenth century, the automaton is a glorified heat engine, burning some combustible fuel instead of the glycogen of the human muscles. Finally, the present automaton opens doors by means of photocells, or points guns to the place at which a radar beam picks up an airplane, or computes the solution of a differential equation.” Ultimately, we’re just not that far from the Rabbi of Prague, breathing the Ineffable over raw material, fearful of the golems conjured to stalk the city. The golems have become digital. The rabbis breathe the ineffable over silicon rather than clay. The crisis is the same.

Mathematics over language

Here’s where we’re at: Capable of imitating consciousness through machines but not capable of understanding what consciousness is. This gap will define the spiritual condition of the near future. Mathematics, the language of nature, is overtaking human language. Two grounding assumptions underlying human existence are about to be shattered, that language is a human property and that language is evidence of consciousness. The era we are entering is not posthuman, as so many hoped and feared, but on the edge of the human, incapable, at least for the present, of going forward and unable to go back. Horror mingled with wonder is an appropriate response.

Hmmmm....
