Sometime back in the 1970s, I believe it was, David Marr observed something of a paradox (I believe he used that word) in the development of artificial intelligence (AI). Much of the early work, which did meet with some success, involved modeling fairly sophisticated forms of knowledge, such as mathematics and science; but when researchers started working in simpler domains, like ordinary narrative, things got more difficult. That is, it seemed easier to model the specialized knowledge of a highly trained scientist than the general knowledge of a six-year-old. That problem has come to be known in AI as the problem of common sense, and its intractability was one reason that old-school research programs grounded in symbolic reasoning fell apart in the mid-1980s. During the 1990s, and continuing on to the present, various machine learning techniques have become quite successful in domains that had eluded symbolic AI. But common sense reasoning has continued to elude researchers.
Earlier this year Microsoft co-founder Paul Allen announced that he was giving $125 million to his nonprofit Allen Institute for Artificial Intelligence (AI2) to study common sense reasoning. Here's a short paper from that lab that gives an overview of the problem.
Niket Tandon, Aparna S. Varde, Gerard de Melo, "Commonsense Knowledge in Machine Intelligence," SIGMOD Record, 2018.
Abstract: There is growing conviction that the future of computing depends on our ability to exploit big data on the Web to enhance intelligent systems. This includes encyclopedic knowledge for factual details, common sense for human-like reasoning and natural language generation for smarter communication. With recent chatbots conceivably at the verge of passing the Turing Test, there are calls for more common sense oriented alternatives, e.g., the Winograd Schema Challenge. The Aristo QA system demonstrates the lack of common sense in current systems in answering fourth-grade science exam questions. On the language generation front, despite the progress in deep learning, current models are easily confused by subtle distinctions that may require linguistic common sense, e.g. quick food vs. fast food. These issues bear on tasks such as machine translation and should be addressed using common sense acquired from text. Mining common sense from massive amounts of data and applying it in intelligent systems, in several respects, appears to be the next frontier in computing. Our brief overview of the state of Commonsense Knowledge (CSK) in Machine Intelligence provides insights into CSK acquisition, CSK in natural language, applications of CSK and discussion of open issues. This paper provides a report of a tutorial at a recent conference with a brief survey of topics.