
Wednesday, November 22, 2017

Explain yourself, Siri, or Alexa, or Watson, or any other AI that does interesting/amazing things when we don't know how it does it

From the NYTimes:
It has become commonplace to hear that machines, armed with machine learning, can outperform humans at decidedly human tasks, from playing Go to playing “Jeopardy!” We assume that is because computers simply have more data-crunching power than our soggy three-pound brains. Kosinski’s results suggested something stranger: that artificial intelligences often excel by developing whole new ways of seeing, or even thinking, that are inscrutable to us. It’s a more profound version of what’s often called the “black box” problem — the inability to discern exactly what machines are doing when they’re teaching themselves novel skills — and it has become a central concern in artificial-intelligence research. In many arenas, A.I. methods have advanced with startling speed; deep neural networks can now detect certain kinds of cancer as accurately as a human. But human doctors still have to make the decisions — and they won’t trust an A.I. unless it can explain itself.

This isn’t merely a theoretical concern. In 2018, the European Union will begin enforcing a law requiring that any decision made by a machine be readily explainable, on penalty of fines that could cost companies like Google and Facebook billions of dollars. The law was written to be powerful and broad and fails to define what constitutes a satisfying explanation or how exactly those explanations are to be reached. It represents a rare case in which a law has managed to leap into a future that academics and tech companies are just beginning to devote concentrated effort to understanding. As researchers at Oxford dryly noted, the law “could require a complete overhaul of standard and widely used algorithmic techniques” — techniques already permeating our everyday lives.
And so we have a new research field, explainable A.I., or X.A.I.
Its goal is to make machines able to account for the things they learn, in ways that we can understand. But that goal, of course, raises the fundamental question of whether the world a machine sees can be made to match our own.
One expert, David Gunning, asserts:
“The real secret is finding a way to put labels on the concepts inside a deep neural net,” he says. If the concepts inside can be labeled, then they can be used for reasoning — just like those expert systems were supposed to do in A.I.’s first wave.
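For what it's worth, one common way of trying to attach labels to what is inside a net is to train a simple "probe" classifier on a hidden layer's activations and see whether a human concept can be read off them. Here is a minimal sketch of that idea; the activations, the layer, and the concept are all made up for illustration, not taken from any particular system:

# Sketch: probe a hidden layer for a nameable concept.
# The activations, the layer, and the concept are stand-ins, not real data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Pretend these are hidden-layer activations from a trained network
# (examples x hidden units), plus binary labels for a human concept
# such as "image contains a striped texture".
activations = rng.normal(size=(1000, 256))
concept = (activations[:, :8].sum(axis=1) > 0).astype(int)  # stand-in concept

X_train, X_test, y_train, y_test = train_test_split(
    activations, concept, test_size=0.25, random_state=0)

probe = LogisticRegression(max_iter=1000)
probe.fit(X_train, y_train)

# High probe accuracy suggests the concept is linearly readable from this layer,
# i.e. we can attach that label to part of the net's internal state.
print("probe accuracy:", probe.score(X_test, y_test))

If the probe does well, you have a candidate label for that piece of the network; if it does poorly, the concept, whatever it is, is not sitting there in any simple form.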
And so:
To create a neural net that can reveal its inner workings, the researchers in Gunning’s portfolio are pursuing a number of different paths. Some of these are technically ingenious — for example, designing new kinds of deep neural networks made up of smaller, more easily understood modules, which can fit together like Legos to accomplish complex tasks.
Makes sense. That's what the brain does, isn't it? Except that the network in even a small patch of neural tissue is huge in comparison to deep learning nets.
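To make the Lego picture a bit more concrete, here is a tiny sketch of what "smaller, more easily understood modules" might look like in code, using PyTorch. The module names and sizes are invented for illustration; the point is only that each piece can be run and inspected on its own:

# Sketch of the "Lego" idea: small, individually inspectable modules
# composed into a larger network. Module names are illustrative only.
import torch
import torch.nn as nn

class EdgeDetector(nn.Module):          # one small, nameable piece
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(3, 16, kernel_size=3, padding=1)
    def forward(self, x):
        return torch.relu(self.conv(x))

class ShapeSummarizer(nn.Module):       # another small piece
    def __init__(self):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Linear(16, 10)
    def forward(self, x):
        return self.fc(self.pool(x).flatten(1))

# The composite model is just the modules snapped together;
# each stage can be examined separately.
model = nn.Sequential(EdgeDetector(), ShapeSummarizer())
print(model(torch.randn(2, 3, 32, 32)).shape)  # torch.Size([2, 10])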

Perhaps language will help:
Five years ago, Darrell and some colleagues had a novel idea for letting an A.I. teach itself how to describe the contents of a picture. First, they created two deep neural networks: one dedicated to image recognition and another to translating languages. Then they lashed these two together and fed them thousands of images that had captions attached to them. As the first network learned to recognize the objects in a picture, the second simply watched what was happening in the first, then learned to associate certain words with the activity it saw. Working together, the two networks could identify the features of each picture, then label them. Soon after, Darrell was presenting some different work to a group of computer scientists when someone in the audience raised a hand, complaining that the techniques he was describing would never be explainable. Darrell, without a second thought, said, Sure — but you could make it explainable by once again lashing two deep neural networks together, one to do the task and one to describe it.

Darrell’s previous work had piggybacked on pictures that were already captioned. What he was now proposing was creating a new data set and using it in a novel way. Let’s say you had thousands of videos of baseball highlights. An image-recognition network could be trained to spot the players, the ball and everything happening on the field, but it wouldn’t have the words to label what they were. But you might then create a new data set, in which volunteers had written sentences describing the contents of every video. Once combined, the two networks should then be able to answer queries like “Show me all the double plays involving the Boston Red Sox” — and could potentially show you what cues, like the logos on uniforms, it used to figure out who the Boston Red Sox are.
Sounds promising.
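Here is a toy sketch of the general arrangement as I read it: one (pretend, frozen) network encodes regions of a clip, a second maps the query into the same space, and the attention over regions stands in for the "cues" the system could show you, such as the logos on uniforms. Everything here, the names, the sizes, the region list, is made up; it is not Darrell's actual system:

# Toy version of "lash two networks together": a frozen task net encodes each
# clip into per-region features; a second net maps a text query into the same
# space, scores the clip, and reports which region (the "cue") drove the match.
import torch
import torch.nn as nn

dim = 32
regions = ["infield", "outfield", "scoreboard", "uniform logo", "crowd"]

task_net = nn.Linear(100, dim)          # stand-in for a trained image/video encoder
query_net = nn.EmbeddingBag(1000, dim)  # stand-in for a trained language encoder

def encode_query(words):
    # hash words into a toy vocabulary of 1000 ids
    ids = torch.tensor([[abs(hash(w)) % 1000 for w in words]])
    return query_net(ids)

clip_raw = torch.randn(len(regions), 100)                 # raw features per region
with torch.no_grad():
    region_feats = task_net(clip_raw)                      # (regions, dim)
    q = encode_query("double play Boston Red Sox".split()) # (1, dim)

    scores = region_feats @ q.squeeze(0)          # per-region match to the query
    attn = torch.softmax(scores, dim=0)           # soft attention over regions
    clip_score = float((attn * scores).sum())
    cue = regions[int(attn.argmax())]

print(f"clip score {clip_score:.3f}; strongest cue: {cue}")

With untrained stand-in encoders the numbers are meaningless, of course; the point is only where the explanation would come from, namely the attention over regions.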

I wonder if these people could make sense of some obscure notes I wrote up a decade ago:

Abstract: These notes explore the use of Sydney Lamb’s relational network notion for linguistics to represent the logical structure of a complex collection of attractor landscapes (as in Walter Freeman’s account of neuro-dynamics). Given a sufficiently large system, such as a vertebrate nervous system, one might want to think of the attractor net as itself being a dynamical system, one at a higher order than that of the dynamical systems realized at the neuronal level. A mind is a fluid attractor net of fractional dimensionality over a neural net whose behavior displays complex dynamics in a state space of unbounded dimensionality. The attractor-net moves from one discrete state (frame) to another while the underlying neural net moves continuously through its state space.


Abstract: These diagrams explore the use of Sydney Lamb’s relational network notion for linguistics to represent the logical structure of a complex collection of attractor landscapes (as in Walter Freeman’s account of neuro-dynamics). Given a sufficiently large system, such as a vertebrate nervous system, one might want to think of the attractor net as itself being a dynamical system, one at a higher order than that of the dynamical systems realized at the neuronal level. Constructions include: variety ('is-a' inheritance), simple movements, counting and place notation, orientation in time and space, language, learning.

Introduction: This is a series of diagrams based on the informal ideas presented in Attractor Nets, Series I: Notes Toward a New Theory of Mind, Logic and Dynamics in Relational Networks, which explains the notational conventions and discusses the constructions. These diagrams should be used in conjunction with that document, which contains and discusses many of them. In particular, the diagrams in the first three sections are without annotation, but they are explained in the Attractor Nets paper. The rest of the diagrams are annotated, but depend on ideas developed in that paper.
The discussions of Variety and Fragments of Language compare the current notation, based on the work of Sydney Lamb, with a more conventional notation. In Lamb’s notation, nodes are logical operators (and, or), while in the more conventional notation nodes are concepts. The Lamb-based notation is more complex, but also fuller.
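For anyone who wants something executable rather than diagrams, here is a minimal illustration of the central picture in those notes: a state that moves continuously through a high-dimensional space but settles into one of a few discrete attractor "frames". It uses a plain continuous Hopfield-style net, not Lamb's relational notation, so take it only as an analogy:

# Minimal illustration (not the relational-network notation in the notes):
# a continuous Hopfield-style net whose state moves smoothly through its
# state space but settles into one of a few discrete attractor patterns.
import numpy as np

rng = np.random.default_rng(0)
n = 100
patterns = np.sign(rng.normal(size=(3, n)))   # three discrete "frames" to store
W = (patterns.T @ patterns) / n               # simple Hebbian weights
np.fill_diagonal(W, 0.0)

x = patterns[0] + 0.8 * rng.normal(size=n)    # noisy start near frame 0
dt = 0.1
for _ in range(200):                          # smooth, continuous-looking dynamics
    x = x + dt * (-x + W @ np.tanh(2.0 * x))

recovered = np.sign(x)
overlaps = patterns @ recovered / n           # which discrete frame did it reach?
print("overlap with each stored frame:", np.round(overlaps, 2))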
And we might as well toss in these notes too:
From Associative Nets to the Fluid Mind. Working Paper. October 2013, 16 pp.

Abstract: We can think of the mind as a network that’s fluid on several scales of viscosity. Some things change very slowly, on a scale of months to years. Other things change rapidly, in milliseconds or seconds. And other processes are in between. The microscale dynamic properties of the mind at any time are context dependent. Under some conditions it will function as a highly structured cognitive network; the details of the network will of course depend on the exact conditions, both internal (including chemical) and external (what’s the “load” on the mind?). Under other conditions the mind will function more like a loose associative net. These notes explore these notions in a very informal way.
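A crude way to picture those "several scales of viscosity" is a net with two coupled weight matrices, one that drifts slowly and one that changes fast and decays fast. This little sketch is only an illustration of the timescale idea, not the model in the working paper:

# Loose sketch of fluidity at two viscosities: one associative matrix that
# changes slowly and one that changes fast and decays quickly.
import numpy as np

rng = np.random.default_rng(0)
n = 50
slow_W = np.zeros((n, n))   # the months-to-years end: small, persistent updates
fast_W = np.zeros((n, n))   # the milliseconds-to-seconds end: big, quickly decaying updates

for _ in range(1000):
    a = rng.normal(size=n)                   # current activation pattern
    outer = np.outer(a, a)
    slow_W += 0.001 * outer                  # slow Hebbian drift
    fast_W = 0.9 * fast_W + 0.1 * outer      # fast trace that forgets quickly

# Behaviour at any moment depends on both: the slow matrix carries long-term
# structure, the fast one carries whatever just happened.
effective_W = slow_W + fast_W
print(f"slow-weight norm {np.linalg.norm(slow_W):.1f}, "
      f"fast-weight norm {np.linalg.norm(fast_W):.1f}")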
