Jordan argues that AI, which is mostly machine learning these days, remains dominated by the Dr. Frankenstein notion of creating an artificial human. He regards that as a mistake and makes the case for a more collective approach. (Cf. this post from two years ago, Beyond "AI" – toward a new engineering discipline.)
From the YouTube page:
Artificial intelligence (AI) has focused on a paradigm in which intelligence inheres in a single agent, and in which agents should be autonomous so they can exhibit intelligence independent of human intelligence. Thus, when AI systems are deployed in social contexts, the overall design is often naive. Such a paradigm need not be dominant, however. In a broader framing, agents are active and cooperative, and they wish to obtain value from participation in learning-based systems. Agents may supply data and resources to the system, but only if it is in their interest. Critically, intelligence inheres as much in the system as it does in individual agents. This perspective is familiar to economics researchers, and a first goal in this work is to bring economics into contact with computer science and statistics. The long-term goal is to provide a broader conceptual foundation for emerging real-world AI systems, and to upend received wisdom in the computational, economic, and inferential disciplines.
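The incentive framing in that description can be made concrete with a toy model. Here is a minimal Python sketch, entirely my own illustration rather than anything from the talk: each hypothetical Agent contributes data to a shared system only when the reward on offer covers its private cost, so how "intelligent" the overall system becomes depends on the incentives it extends to participants. All names and numbers are invented.

```python
import random

random.seed(0)

class Agent:
    """A participant who contributes data only when it pays off."""
    def __init__(self, data_quality, cost):
        self.data_quality = data_quality  # value of this agent's data to the system
        self.cost = cost                  # private cost of sharing (privacy, effort)

    def will_participate(self, offered_reward):
        # Rational-participation rule: join only if it is in the agent's interest.
        return offered_reward >= self.cost

def system_accuracy(agents, offered_reward):
    """Toy measure: system 'intelligence' grows with aggregate contributed data."""
    contributed = [a.data_quality for a in agents if a.will_participate(offered_reward)]
    return sum(contributed) / (sum(contributed) + 10.0)  # saturating returns

agents = [Agent(random.uniform(0.5, 2.0), random.uniform(0.1, 1.0))
          for _ in range(100)]

# As the reward rises, more agents find participation worthwhile,
# and the system-level accuracy rises with them.
for reward in (0.0, 0.25, 0.5, 1.0):
    print(f"reward={reward:.2f}  accuracy={system_accuracy(agents, reward):.3f}")
```

The point of the sketch is simply that the system's performance is a function of its mechanism design, not of any single agent's cleverness, which is the sense in which intelligence "inheres in the system."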
Michael I. Jordan is the Pehong Chen Distinguished Professor in the departments of electrical engineering and computer science and of statistics at the University of California, Berkeley. His research interests bridge the computational, statistical, cognitive, biological and social sciences. Jordan is a member of the National Academy of Sciences, the National Academy of Engineering, and the American Academy of Arts and Sciences, and a foreign member of the Royal Society. He was a plenary lecturer at the International Congress of Mathematicians in 2018. He received the Ulf Grenander Prize from the American Mathematical Society in 2021, the IEEE John von Neumann Medal in 2020, the IJCAI Research Excellence Award in 2016, the David E. Rumelhart Prize from the Cognitive Science Society in 2015 and the ACM/AAAI Allen Newell Award in 2009.
[Two slides from the presentation]
Here’s a paper co-authored by Jordan:
Divya Siddarth, Daron Acemoglu, Danielle Allen, Kate Crawford, James Evans, Michael Jordan, and E. Glen Weyl, “How AI Fails Us,” Edmond J. Safra Center for Ethics and Carr Center for Human Rights Policy, Harvard University (December 1, 2021).
Abstract:
The dominant vision of artificial intelligence imagines a future of large-scale autonomous systems outperforming humans in an increasing range of fields. This “actually existing AI” vision misconstrues intelligence as autonomous rather than social and relational. It is both unproductive and dangerous, optimizing for artificial metrics of human replication rather than for systemic augmentation, and tending to concentrate power, resources, and decision-making in an engineering elite. Alternative visions based on participating in and augmenting human creativity and cooperation have a long history and underlie many celebrated digital technologies such as personal computers and the internet. Researchers and funders should redirect focus from centralized autonomous general intelligence to a plurality of established and emerging approaches that extend cooperative and augmentative traditions, as seen in successes ranging from Taiwan’s digital democracy project to collective intelligence platforms like Wikipedia. We conclude with a concrete set of recommendations and a survey of alternative traditions.