Monday, June 17, 2019

Humans can "read" a computer's "mind"

Read the whole thread:

Here's the article's abstract:
Does the human mind resemble the machine-learning systems that mirror its performance? Convolutional neural networks (CNNs) have achieved human-level benchmarks in classifying novel images. These advances support technologies such as autonomous vehicles and machine diagnosis; but beyond this, they serve as candidate models for human vision itself. However, unlike humans, CNNs are “fooled” by adversarial examples—nonsense patterns that machines recognize as familiar objects, or seemingly irrelevant image perturbations that nevertheless alter the machine’s classification. Such bizarre behaviors challenge the promise of these new advances; but do human and machine judgments fundamentally diverge? Here, we show that human and machine classification of adversarial images are robustly related: In 8 experiments on 5 prominent and diverse adversarial imagesets, human subjects correctly anticipated the machine’s preferred label over relevant foils—even for images described as “totally unrecognizable to human eyes”. Human intuition may be a surprisingly reliable guide to machine (mis)classification—with consequences for minds and machines alike.
This is a fascinating and, I believe, important result.

From the tweet stream

2 comments:

  1. Very interesting! And your own thoughts on this?

    1. Hmmm... Well, if you just looked at the image without the choice between labels and had to say what it was, you'd probably say it wasn't much of anything, just a bunch of dots. Perhaps you'd note that there was something a bit different about the middle area, but that's it. But if forced to choose, you can choose, and you make the same choice a computer does. And the computer in question doesn't have many choices available to it. It's that limited range of choices that's interesting.

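That "limited range of choices" can be sketched concretely: a classifier's final softmax layer always crowns some label from its fixed set, even when the input is pure noise and no score stands out. A minimal illustration in plain Python (the label set is hypothetical, not the one used in the paper):

```python
import math
import random

def softmax(scores):
    # Exponentiate and normalize so the outputs sum to 1.
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical label set -- the machine can only answer from this list.
labels = ["dog", "cat", "guitar", "pretzel"]

random.seed(0)
# Scores for a "nonsense" input: small, essentially arbitrary logits.
scores = [random.gauss(0, 0.1) for _ in labels]
probs = softmax(scores)

# Argmax always yields *some* label, however weak the evidence for it.
best = labels[max(range(len(labels)), key=lambda i: probs[i])]
print(best)
```

Even though every probability here hovers near chance, the machine still "prefers" one label, and that is the preference human subjects in the study were able to anticipate.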