Thursday, March 24, 2016

State-of-the-Art Robot from Boston Dynamics: Watch It Recover from a Fall

[Embedded video: the new Atlas robot, including its recovery from a fall]

Explanatory copy:
A new version of Atlas, designed to operate outdoors and inside buildings. It is specialized for mobile manipulation. It is electrically powered and hydraulically actuated. It uses sensors in its body and legs to balance and LIDAR and stereo sensors in its head to avoid obstacles, assess the terrain, help with navigation and manipulate objects. This version of Atlas is about 5' 9" tall (about a head shorter than the DRC Atlas) and weighs 180 lbs.
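
That description sketches a familiar division of labor in mobile robots: fast "proprioceptive" sensing in the body and legs keeps the machine upright, while the head's LIDAR and stereo cameras feed slower perception and navigation. Here is a minimal, purely illustrative Python sketch of that split; every name, gain, and number below is invented for illustration, and none of it is Boston Dynamics' actual software.

    import math
    from dataclasses import dataclass

    @dataclass
    class ProprioceptiveState:
        lean_angle: float  # torso lean from vertical, radians (body/leg sensors)
        lean_rate: float   # rate of change of that lean, rad/s

    def balance_torque(state: ProprioceptiveState,
                       kp: float = 60.0, kd: float = 8.0) -> float:
        """PD controller: corrective torque pushing the torso back to vertical."""
        return -(kp * state.lean_angle + kd * state.lean_rate)

    def steer_around_obstacles(lidar_ranges: list[float],
                               safe_distance: float = 1.0) -> float:
        """Pick a heading change (radians) toward the most open LIDAR beam.

        lidar_ranges is a fan of distance readings sweeping right to left;
        returns 0.0 when the path straight ahead is already clear.
        """
        n = len(lidar_ranges)
        center = n // 2
        if lidar_ranges[center] >= safe_distance:
            return 0.0
        best = max(range(n), key=lambda i: lidar_ranges[i])  # clearest beam
        return (best - center) * (math.pi / n)  # crude beam index -> angle

    # One tick of the (hypothetical) control loop:
    state = ProprioceptiveState(lean_angle=0.05, lean_rate=-0.1)
    ranges = [2.0, 1.8, 0.4, 1.5, 2.5]  # an obstacle dead ahead
    print(f"hip torque: {balance_torque(state):+.1f} N*m")
    print(f"heading change: {steer_around_obstacles(ranges):+.2f} rad")

The point of the split is that balance has to run fast and locally even when perception is slow or mistaken, which is exactly what lets a robot like Atlas recover from a shove or a fall.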
From an interesting and wide-ranging article in The Atlantic by Adrienne LaFrance:
When the U.S. military promotes video compilations of robots failing—buckling at the knees, bumping into walls, and tumbling over—at DARPA competitions, it is, several roboticists told me, clearly an attempt to make those robots likeable. (It’s also funny, and therefore disarming, like this absurd voiceover someone added to footage of a robot performing a series of tasks.) The same strategy was used in early publicity campaigns for the first computers. “People who had economic interest in computers had economic interest in making them appear as dumb as possible,” said Atkeson, from Carnegie Mellon. “That became the propaganda—that computers are stupid, that they only do what you tell them.”

But the anthropomorphic charm of a lovable robot is itself a threat, some have argued. In 2013, two professors from Washington University in St. Louis published a paper explaining what they deem “The Android Fallacy.” Neil Richards, a law professor, and William Smart, a computer science professor, wrote that it’s essential for humans to think of robots as tools, not companions—a tendency they say is “seductive but dangerous.” The problem, as they see it, comes with assigning human features and behaviors to robots—describing robots as being “scared” of obstacles in a lab, or saying a robot is “thinking” about its next move. As autonomous systems become more sophisticated, the connection between input (the programmer’s command) and output (how the robot behaves) will become increasingly opaque to people, and may eventually be misinterpreted as free will.
But just what IS a robot?
“I don’t think it really matters if you get the words right,” said Andrew Moore, the dean of the School of Computer Science at Carnegie Mellon. “To me, the most important distinction is whether a technology is designed primarily to be autonomous. To really take care of itself without much guidance from anybody else… The second question—of whether this thing, whatever it is, happens to have legs or eyes or a body—is less important.”

What matters, in other words, is who is in control—and how well humans understand that autonomy occurs along a gradient. Increasingly, people are turning over everyday tasks to machines without necessarily realizing it. “People who are between 20 and 35, basically they’re surrounded by a soup of algorithms telling them everything from where to get Korean barbecue to who to date,” the technology journalist John Markoff told me. “That’s a very subtle form of shifting control. It’s sort of soft fascism in a way, all watched over by these machines of loving grace. Why should we trust them to work in our interest? Are they working in our interest? No one thinks about that.”

“A society-wide discussion about autonomy is essential,” he added.
H/t 3QD.
