Sunday, January 18, 2015

Dan Dennett and others on 'thinking machines'

This year's Edge Question is, alas, "What do you think about machines that think?" "Alas" because I have something of a professional obligation to keep up with this kind of stuff, though I don't hold out high hopes for commentary on the topic, not even for John Brockman's Edgers. But I'll excerpt Dan Dennett's reply (scan down the page). Concerning IBM's Watson:
Do you want your doctor to overrule the machine's verdict when it comes to making a life-saving choice of treatment? This may prove to be the best—most provably successful, most immediately useful—application of the technology behind IBM's Watson, and the issue of whether or not Watson can be properly said to think (or be conscious) is beside the point. If Watson turns out to be better than human experts at generating diagnoses from available data it will be morally obligatory to avail ourselves of its results. A doctor who defies it will be asking for a malpractice suit. No area of human endeavor appears to be clearly off-limits to such prosthetic performance-enhancers, and wherever they prove themselves, the forced choice will be reliable results over the human touch, as it always has been. Hand-made law and even science could come to occupy niches adjacent to artisanal pottery and hand-knitted sweaters.
Maybe. His conclusion:
What's wrong with turning over the drudgery of thought to such high-tech marvels? Nothing, so long as (1) we don't delude ourselves, and (2) we somehow manage to keep our own cognitive skills from atrophying.
(1) It is very, very hard to imagine (and keep in mind) the limitations of entities that can be such valued assistants, and the human tendency is always to over-endow them with understanding—as we have known since Joe Weizenbaum's notorious Eliza program of the early 1970s. This is a huge risk, since we will always be tempted to ask more of them than they were designed to accomplish, and to trust the results when we shouldn't.

(2) Use it or lose it. As we become ever more dependent on these cognitive prostheses, we risk becoming helpless if they ever shut down. The Internet is not an intelligent agent (well, in some ways it is) but we have nevertheless become so dependent on it that were it to crash, panic would set in and we could destroy society in a few days. That's an event we should bend our efforts to averting now, because it could happen any day.
The real danger, then, is not machines that are more intelligent than we are usurping our role as captains of our destinies. The real danger is basically clueless machines being ceded authority far beyond their competence.
Addendum: I've been skimming my way through. I've no intention of reading all the entries, but here are some snippets from the ones I've looked at. FWIW, the further I scanned down the list, the more contributions I skipped over. That's mostly a matter of my time and attention. No doubt I've skipped some stuff I would have liked had I read it. If I knew the name or saw a catchy title I took a look; otherwise, probably not.

Here's what Roger Schank says. As some of you may know, he was one of the stars of AI back in the 1970s and into the 1980s:
Machines cannot think. They are not going to think any time soon. They may increasingly do more interesting things, but the idea that we need to worry about them, regulate them, or grant them civil rights, is just plain silly.
He then goes on to explain why Watson isn't doing anything like thinking.

Rodney Brooks on deep learning (he's discussing visual identification, using an image of a human baby as an example):
...If we look inside the neuron layers it might be that one of the higher level learned features is an eye-like patch of image, and another feature is a foot-like patch of image, but the current algorithm would have no capability of relating the constraints of where and what spatial relationships could possibly be valid between eyes and feet in an image, and could be fooled by a grotesque collage of baby body parts, labeling it a baby. In contrast no person would do so, and furthermore would immediately know exactly what it was—a grotesque collage of baby body parts. Furthermore the current algorithm is completely useless at telling a robot where to go in space to pick up that baby, or where to hold a bottle and feed the baby, or where to reach to change its diaper. Today's algorithm has nothing like human level competence on understanding images.

Work is underway to add focus of attention and handling of consistent spatial structure to deep learning. That is the hard work of science and research, and we really have no idea how hard it will be, nor how long it will take, nor whether the whole approach will reach a fatal dead end. It took thirty years to go from backpropagation to deep learning, but along the way many researchers were sure there was no future in backpropagation. They were wrong, but it would not have been surprising if they had been right, as we knew all along that the backpropagation algorithm is not what happens inside people's heads.
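Brooks's point about spatial relations is easy to see in a toy setting. The sketch below is mine, not his, and the "features" and "weights" are random stand-ins: a classifier that pools local patch features before a linear read-out assigns exactly the same score to a coherent image and to a scrambled collage of its parts, because the pooling step throws the layout away.

    import numpy as np

    # Toy version of the limitation Brooks describes: pooled patch features
    # carry no information about where the parts sit relative to one another.
    rng = np.random.default_rng(0)

    def patch_features(image, patch=4):
        """One crude feature (the mean) per patch x patch tile."""
        h, w = image.shape
        tiles = [image[i:i+patch, j:j+patch]
                 for i in range(0, h, patch) for j in range(0, w, patch)]
        return np.array([t.mean() for t in tiles])

    def pooled_score(image, w):
        """Global average pooling, then a linear read-out: layout is discarded."""
        return float(w * patch_features(image).mean())

    baby = rng.random((16, 16))        # stand-in for a baby photo
    w = 2.0                            # stand-in for a learned read-out weight

    # Scramble the patches: same parts, grotesque arrangement.
    tiles = np.array([baby[i:i+4, j:j+4]
                      for i in range(0, 16, 4) for j in range(0, 16, 4)])
    rng.shuffle(tiles)
    collage = np.block([[tiles[4*r + c] for c in range(4)] for r in range(4)])

    print(pooled_score(baby, w), pooled_score(collage, w))  # identical scores

Real convolutional networks keep more spatial structure than this caricature, but the gap Brooks points to, relating parts to one another and to action in space, is of the same kind.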
Jonathan Gottschall, Darwinian literary theorist, proves himself to be clueless:
Of course machines can out-calculate and out-crunch us. And soon they will all be acing their Turing tests. But who cares. Let them do our grunt work. Let them hang out and chat. But when machines can out-paint or out-compose us—when their stories are more gripping and poignant than ours—there will be no denying that we are, ourselves, just thought machines and art machines, and outdated and inferior models at that.
Terrence J. Sejnowski, neuroscientist at the Salk Institute:
Students learn best when an adult teacher interacts with them one-on-one, tailoring lessons for that student. However, education is labor intensive. Few can afford individual instruction, and the assembly-line classroom system found in most schools today is a poor substitute. Computer programs can keep track of a student's performance, and some provide corrective feedback for common errors. But each brain is different and there is no substitute for a human teacher who has a long-term relationship with the student. Is it possible to create an artificial mentor for each student? We already have recommender systems on the Internet that tell us "if you liked X you might also like Y", based on data of many others with similar patterns of preference.

Someday the mind of each student may be tracked from childhood by a personalized deep learning system. To achieve this level of understanding of a human mind is beyond the capabilities of current technology, but there are already efforts at Facebook to use their vast social database of friends, photos and likes to create a theory of mind for every person on the planet. What is created to make a profit from a person could also be used to profit the person.
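The "if you liked X you might also like Y" machinery Sejnowski mentions is easy to sketch. This is a generic item-to-item collaborative filter over a made-up preference matrix, my illustration rather than anything Facebook or an actual tutoring system runs:

    import numpy as np

    # Rows are users, columns are items; 1 means "liked". All data invented.
    likes = np.array([
        [1, 1, 0, 0],
        [1, 1, 1, 0],
        [0, 1, 1, 1],
        [0, 0, 1, 1],
    ])
    items = ["X", "Y", "Z", "W"]

    def cosine(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9)

    def also_like(item):
        """Rank the other items by how similar their like-patterns are to `item`."""
        i = items.index(item)
        sims = [(items[j], cosine(likes[:, i], likes[:, j]))
                for j in range(len(items)) if j != i]
        return sorted(sims, key=lambda s: -s[1])

    print(also_like("X"))   # "Y" ranks first: users who liked X mostly also liked Y

A personalized tutor would need far more than this, a model of what the student knows and why they err, but the pattern-of-preferences core is the same.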
Martin Seligman, University of Pennsylvania maven of positive psychology:
Humans spend between 25% and 50% of our mental life prospecting the future. We imagine a host of possible outcomes, and we imbue most, perhaps each of these prospections with a valence. What comes next is crucial: we choose to enact one of the options. We need not get entangled in the problems of free will for present purposes. All we need to acknowledge is that our thinking in service of doing entails imagining a set of possible futures and assigning an expected value to each. The act of choosing, however it is managed, translates our thinking into doing.

Why is thinking structured this way? Because people have many competing goals (eating, sex, sleeping, tennis, writing articles, complimenting, revenge, childcare, tanning, etc.) and a scarcity of resources for doing them: scarcity of time, scarcity of money, scarcity of effort, and even the prospect of death. So evaluative simulation of possible futures is one of our solutions to this economy; this is a mechanism that prioritizes and selects what we will do….

I don't know much about the workings of our current machines. I do not believe that our current machines do anything in James's sense of voluntary action. I doubt that they prospect possible futures, evaluate them, and choose among them; although perhaps this describes—for only a single, simple goal—what chess playing computers do. Our current machines are somewhat constrained by available space and electricity bills, but they are not primarily creations of scarcity with clamorously competing goals and extremely limited energy. Our current machines are not social: they do not compete or co-operate with each other or with humans, they do not spin, and they do not attempt to persuade.
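Seligman's picture of prospection has an obvious computational skeleton: simulate candidate futures, attach a likelihood and a valence to each, and act on the best expected value. The toy below is my own reduction of that idea to the "single, simple goal" case he concedes a chess program might fit; the options and numbers are invented.

    # Toy "prospection": imagined futures per option as (probability, valence) pairs.
    options = {
        "write the article": [(0.6, +5), (0.4, -1)],
        "play tennis":       [(0.9, +3), (0.1, -2)],
        "nap":               [(1.0, +1)],
    }

    def expected_value(futures):
        return sum(p * v for p, v in futures)

    def choose(options):
        """Translate 'thinking' into 'doing' by picking the best-valued option."""
        return max(options, key=lambda o: expected_value(options[o]))

    print(choose(options))   # "write the article": EV 2.6 just edges out tennis at 2.5

What the sketch leaves out is precisely what Seligman stresses: clamorously competing goals, scarcity, social pressure, and the rest.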
Neil Gershenfeld, physicist at MIT's fab lab:
Notably absent from either side of the debate about AI have been the people making many of the most important contributions to this progress. Advances like random matrix theory for compressed sensing, convex relaxations as heuristics for intractable problems, and kernel methods in high-dimensional function approximation are fundamentally changing our understanding of what it means to understand something.
Daniel Everett, linguist and Dean of Arts and Sciences, Bentley University:
The mind is never more than a placeholder for things we do not understand about how we think. The more we use the solitary term "mind" to refer to human thinking, the more we underscore our lack of understanding. At least this is an emerging view of many researchers in fields as varied as Neuroanthropology, emotions research, Embodied Cognition, Radical Embodied Cognition, Dual Inheritance Theory, Epigenetics, Neurophilosophy, and the theory of culture…. 
We learn to reason in a cultural context, where by culture I mean a system of violable, ranked values, hierarchically structured knowledges, and social roles. We are able to do this not only because we have an amazing ability to perform what appears to be Bayesian inferencing across our experiences, but because of our emotions, our sensations, our proprioception, and our strong social ties. There is no computer with cousins and opinions about them.
Frank Tipler, physicist at Tulane, and clueless:
A simple calculation shows that our supercomputers now have the information processing power of the human brain. We do not yet know how to program human-level intelligence and creativity into these computers, but in twenty years, desktop computers will have the power of today's supercomputers, and the hackers of twenty years hence will solve the AI programming problem, long before any carbon-based space colonies are established on the Moon or Mars. The AI's, not humans, will colonize these planets instead, or perhaps, take the planets apart. No human, carbon-based human, will ever traverse interstellar space.
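For what it's worth, the "simple calculation" Tipler gestures at is presumably the usual back-of-envelope comparison. Here is my version with the standard rough figures, not numbers taken from his piece:

    # Back-of-envelope, order-of-magnitude only; every figure is a textbook estimate.
    neurons   = 8.6e10   # neurons in a human brain (~86 billion)
    synapses  = 1e4      # synapses per neuron, order of magnitude
    firing_hz = 1e2      # generous average firing rate, Hz

    brain_ops = neurons * synapses * firing_hz   # ~1e17 synaptic events per second
    tianhe2   = 3.4e16                           # Tianhe-2 Linpack FLOPS, circa 2014

    print(f"brain ~{brain_ops:.0e} ops/s vs. supercomputer ~{tianhe2:.0e} FLOPS")
    # Same ballpark, which is all the comparison can claim: a synaptic event and a
    # floating-point operation are not the same kind of thing, and having the
    # hardware says nothing about having the program.

That last caveat is where the cluelessness comes in: the hard part was never the FLOPS.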
Nicholas A. Christakis, physician and social scientist, Yale University:
Culture is the earliest sort of intelligence outside our own minds that we humans created. Like the intelligence of a machine, culture can solve problems. Moreover, like the intelligence in a machine, we create culture, interact with it, are affected by it, and can even be destroyed by it. Culture applies its own logic, has a memory, endures after its makers are gone, can be repurposed in supple ways, and can induce action.

So I oxymoronically see culture as a kind of natural artificial intelligence. It is artificial because it is made, manufactured, produced by humans.
Joichi Ito, Director, MIT Media Lab:
Right around the time Western factory workers were smashing robots with sledgehammers, Japanese workers were putting hats on the same robots in factories and giving them names. On April 7, 2003, Astro Boy, the Japanese robot character, was registered as a resident of the city of Niiza, Saitama. 
Andrian Kreye, editor of The Feuilleton (Arts and Essays) at the German daily newspaper Sueddeutsche Zeitung, Munich:
In the folk tale of the late 19th century the mythical steel driving man John Henry dies beating a steam powered hammer during a competition to drill blast holes into a West Virginian mountainside. White collar and knowledge workers now face a race against being outperformed by machines driven by artificial intelligence. In this case AI is mainly a synonym for new levels of mainly digital productivity. Which is of course not quite as exciting as either waiting for the moment of singularity or the advent of doom. At the same time the reality of AI is not quite as comforting as the realization that machines, if properly handled, will always serve their masters.

Dystopian views of AI as popularized by movies and novels are just misleading. Those debates are rarely about science and technology. They tend to be mostly humans debating the nature of themselves….

There is no need for a superior intelligence to turn abstract debates about AI into very real questions of power, values and societal changes. Technology can initiate and advance historical shifts. It will never be the shift itself. The John Henry moment of the 21st century will neither be heroic nor entertaining. There are no grand gestures with which white collar and knowledge workers can go down fighting. There will be no folk heroes dying in the office park. Today's John Henry will merely fade into a sad statistic. Undoubtedly calculated by a skillfully thinking machine.
Steven Pinker, Harvard psychologist:
Nonetheless, recent baby steps toward more intelligent machines have led to a revival of the recurring anxiety that our knowledge will doom us. My own view is that current fears of computers running amok are a waste of emotional energy—that the scenario is closer to the Y2K bug than the Manhattan Project.
