Humans can decipher adversarial images! Our new work (out TODAY in @NatureComms) shows that people can do "theory of mind" on machines—predicting how machines will see the bizarre images that "fool" them.
Paper: https://t.co/G7KqkK0QSW
Full data & code: https://t.co/3qYTvxp7kE pic.twitter.com/4gohIv3pBX
— Chaz Firestone (@chazfirestone) March 22, 2019
Here's the article's abstract:
Does the human mind resemble the machine-learning systems that mirror its performance? Convolutional neural networks (CNNs) have achieved human-level benchmarks in classifying novel images. These advances support technologies such as autonomous vehicles and machine diagnosis; but beyond this, they serve as candidate models for human vision itself. However, unlike humans, CNNs are “fooled” by adversarial examples—nonsense patterns that machines recognize as familiar objects, or seemingly irrelevant image perturbations that nevertheless alter the machine’s classification. Such bizarre behaviors challenge the promise of these new advances; but do human and machine judgments fundamentally diverge? Here, we show that human and machine classification of adversarial images are robustly related: In 8 experiments on 5 prominent and diverse adversarial image sets, human subjects correctly anticipated the machine’s preferred label over relevant foils—even for images described as “totally unrecognizable to human eyes”. Human intuition may be a surprisingly reliable guide to machine (mis)classification—with consequences for minds and machines alike.
This is a fascinating and, I believe, important result.
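For readers who haven't encountered them, the "seemingly irrelevant image perturbations" the abstract mentions are typically built by nudging pixels along the gradient of the classifier's loss. Below is a rough, hypothetical sketch of one standard recipe, the fast gradient sign method; it is not the authors' code or the procedure behind the image sets they studied, it assumes PyTorch and torchvision (0.13 or later for the `weights=` API), and it omits the usual ImageNet normalization to stay short.

```python
# Illustrative sketch only (not from the paper): the fast gradient sign
# method (FGSM), one common way to craft a small perturbation that can
# flip a CNN's classification while looking irrelevant to a human viewer.
import torch
import torch.nn.functional as F
from torchvision import models, transforms
from PIL import Image

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),  # pixel values scaled to [0, 1]
])

def fgsm(image_path: str, epsilon: float = 0.01):
    """Return (perturbed image, original top label, new top label)."""
    x = preprocess(Image.open(image_path).convert("RGB")).unsqueeze(0)
    x.requires_grad_(True)

    logits = model(x)
    label = logits.argmax(dim=1)          # the model's current prediction
    F.cross_entropy(logits, label).backward()

    # Step each pixel slightly in the direction that increases the loss.
    x_adv = (x + epsilon * x.grad.sign()).clamp(0.0, 1.0).detach()
    new_label = model(x_adv).argmax(dim=1)
    return x_adv, label.item(), new_label.item()
```

With a small enough epsilon the perturbed photo looks essentially unchanged to a person, yet the model's top label can change; that is the behavior the paper's human subjects turn out to be surprisingly good at anticipating.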
From the tweet stream:
We show that this is indeed the case! We showed human subjects images from many adversarial attacks, and made them guess how machines classified them — a "machine theory-of-mind" task. We found that, more often than not, humans can figure out how machines will see these images!
— Chaz Firestone (@chazfirestone) March 22, 2019
Very interesting! And what are your own thoughts on recognizing these?
Hmmm... Well, if you just looked at the image without the choice between labels and had to say what it was, you'd probably say it wasn't much of anything, just a bunch of dots. Perhaps you'd note that there was something a bit different about the middle area, but that's it. But if forced to choose, you can choose, and you make the same choice a computer does. But the computer in question doesn't have many choices available to it. It's the limited range of choices that's interesting.
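To put that "limited range of choices" point in concrete terms, here is a purely hypothetical sketch (the scores are random stand-ins and the label indices are arbitrary) of a forced choice: even when no class earns a convincing score on its own, restricting the decision to a few candidates still yields a single preferred answer.

```python
# Hypothetical sketch of a forced choice among a few candidate labels.
# The scores are random stand-ins, not real model outputs.
import torch

def forced_choice(scores: torch.Tensor, candidates: list[int]) -> int:
    """Return the candidate class whose score is highest."""
    return max(candidates, key=lambda i: scores[i].item())

scores = torch.randn(1000)                     # pretend scores over 1,000 classes
print(forced_choice(scores, [409, 530, 954]))  # pick among three arbitrary labels
```

The paper's experiments gave human subjects a similarly constrained menu, asking them to pick the machine's preferred label over a small set of foils.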