
Thursday, April 12, 2018

Common Sense in Artificial Intelligence

Sometime back in the 1970s, I believe it was, David Marr observed something of a paradox (I believe he used that word) in the development of artificial intelligence (AI). Much of the early work, which did meet with some success, involved modeling fairly sophisticated forms of knowledge, such as mathematics and science, but when researchers started working in simpler domains, like ordinary narrative, things got more difficult. That is, it seemed easier to model the specialized knowledge of a highly trained scientist than the general knowledge of a six-year-old. That problem has come to be known in AI as the problem of common sense, and its intractability was one reason that old-school research programs grounded in symbolic reasoning fell apart in the mid-1980s. During the 1990s, and continuing to the present, various machine learning techniques have become quite successful in domains that had eluded symbolic AI. But common sense reasoning has continued to elude researchers.

Earlier this year Microsoft co-founder Paul Allen announced that he was giving $125 million to his nonprofit Allen Institute for Artificial Intelligence (AI2) to study common sense reasoning. Here's a short paper from that lab that gives an overview of the problem.
Niket Tandon, Aparna S. Varde, Gerard de Melo, Commonsense Knowledge in Machine Intelligence, SIGMOD Record, 2018.

Abstract: There is growing conviction that the future of computing depends on our ability to exploit big data on the Web to enhance intelligent systems. This includes encyclopedic knowledge for factual details, common sense for human-like reasoning and natural language generation for smarter communication. With recent chatbots conceivably at the verge of passing the Turing Test, there are calls for more common sense oriented alternatives, e.g., the Winograd Schema Challenge. The Aristo QA system demonstrates the lack of common sense in current systems in answering fourth-grade science exam questions. On the language generation front, despite the progress in deep learning, current models are easily confused by subtle distinctions that may require linguistic common sense, e.g., quick food vs. fast food. These issues bear on tasks such as machine translation and should be addressed using common sense acquired from text. Mining common sense from massive amounts of data and applying it in intelligent systems, in several respects, appears to be the next frontier in computing. Our brief overview of the state of Commonsense Knowledge (CSK) in Machine Intelligence provides insights into CSK acquisition, CSK in natural language, applications of CSK and discussion of open issues. This paper provides a report of a tutorial at a recent conference with a brief survey of topics.
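To make the Winograd Schema Challenge concrete, here's a minimal sketch in Python (my illustration, not code from the paper) of what a schema item looks like, using Levesque's canonical trophy/suitcase pair. Swapping a single word flips the pronoun's correct referent, so surface statistics alone won't resolve it:

```python
# A Winograd schema: a sentence with an ambiguous pronoun whose referent
# flips when one "special" word changes. Resolving it is taken to require
# commonsense knowledge (e.g., big things don't fit in small containers).
from dataclasses import dataclass

@dataclass
class WinogradSchema:
    sentence: str                 # sentence containing the ambiguous pronoun
    pronoun: str                  # the pronoun to resolve
    candidates: tuple[str, str]   # the two possible referents
    answer: str                   # the commonsense-correct referent

# Levesque's canonical pair: changing "big" to "small" flips the answer.
schemas = [
    WinogradSchema(
        sentence="The trophy doesn't fit in the brown suitcase because it is too big.",
        pronoun="it", candidates=("trophy", "suitcase"), answer="trophy"),
    WinogradSchema(
        sentence="The trophy doesn't fit in the brown suitcase because it is too small.",
        pronoun="it", candidates=("trophy", "suitcase"), answer="suitcase"),
]

for s in schemas:
    print(f"{s.sentence}\n  '{s.pronoun}' -> {s.answer}")
```

The two sentences are nearly identical as strings, so n-gram or co-occurrence cues give a system little to go on; that is precisely why the challenge is pitched as a more common-sense-oriented alternative to the Turing Test.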

1 comment:

  1. If my hypothesis about phonosemantic coding in natural languages is correct, then the vast majority of languages present us with seemingly 'arbitrary' form/meaning surface mappings. But many of these go back, historically, to much more iconic etymological reconstructions. The surface forms may be encrypted, much like wartime communications. The underlying iconic forms look for all the world like vector and tensor formulations, at least when dealing with physical materials, textures, forces, etc. This isn't your grandpapa's 'onomatopoeia'. The iconic system is diagrammatic, utilizing the architecture and internal symmetries of the phonology to demarcate meaning ranges, apparently coapted from pre-linguistic oral structure, function, and neural control. Our differentiated teeth specialize on different subtasks during mastication and deglutition, and the surfaces of our tongues have regions specializing in different tastes associated with the material textures processed at these regions. For example, sweet taste is in the front of the tongue, and the labial and dental/alveolar phonemes encode ideas of viscous yielding under pressure, the way very ripe fruits just melt in your mouth, not needing tooth involvement at all. Velars on the other hand encode the idea of very hard materials which require long and effortful processing (grinding, grating, cracking, etc.), which is what molars specialize in. And so on. Lots of 'common sense' built into these systems here. Unexamined by AI workers.
