GPT3 and people make the same gut errors on reasoning tests like "A bat and a ball cost $1.10..." Inspired by how people can override their gut, @Maxwell_Nye shows how to augment a neural "System 1" with a symbolic "System 2", no extra training required https://t.co/TwIbcglzne pic.twitter.com/6XSYgIGFUS
— Brenden Lake (@LakeBrenden) July 12, 2021
Abstract of the linked article, "Improving Coherence and Consistency in Neural Sequence Models with Dual-System, Neuro-Symbolic Reasoning":
Human reasoning can often be understood as an interplay between two systems: the intuitive and associative ("System 1") and the deliberative and logical ("System 2"). Neural sequence models -- which have been increasingly successful at performing complex, structured tasks -- exhibit the advantages and failure modes of System 1: they are fast and learn patterns from data, but are often inconsistent and incoherent. In this work, we seek a lightweight, training-free means of improving existing System 1-like sequence models by adding System 2-inspired logical reasoning. We explore several variations on this theme in which candidate generations from a neural sequence model are examined for logical consistency by a symbolic reasoning module, which can either accept or reject the generations. Our approach uses neural inference to mediate between the neural System 1 and the logical System 2. Results in robust story generation and grounded instruction-following show that this approach can increase the coherence and accuracy of neurally-based generations.
From the article itself:
Numerous studies have shown that engagement of System 2-style effort can help “override or inhibit default responses emanating from System 1” (Evans, 2003), correcting inconsistent or un-systematic intuitive impulses. For example, when System 2 is engaged by asking people to take more time to respond, people’s accuracy improves on the CRT task above (Kahneman, 2013). It has been argued that integrating System 2 processing could similarly improve AI systems (Goyal & Bengio, 2020; Garcez & Lamb, 2020), and here we explore this idea as applied to neural sequence models.
In this work, we take inspiration from dual process theories to explore a neuro-symbolic generation system, wherein predictions from a neural model are treated as System 1 proposals, and a logical, deliberative System 2 filters these proposals for consistency and soundness (see Figure 1). We further take inspiration from the fact that humans often do not need explicit supervision to reason about new problems or domains (e.g., see human evaluation task in Section 4.2) and require that the System 2 module not need additional problem-specific training, especially on example contradictions or commonsense violations. People can handle novelty by reconfiguring, rather than retraining, their internal models (Lake et al., 2017), and we strive to build machine systems capable of the same. We show how a lightweight, easy-to-implement System 2 model can help improve coherence and consistency by adding a small amount of symbolic reasoning.
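The control flow the paper describes is, at heart, a propose-and-filter loop. Here is a minimal sketch of that idea under my own assumptions, not the authors' implementation; sample_candidates and is_consistent are hypothetical stand-ins for the neural sampler and the symbolic checker:

```python
# Minimal sketch of dual-system generation: a neural "System 1" proposes
# candidate continuations, and a symbolic "System 2" accepts or rejects
# them. The callables below are invented stand-ins, not the paper's API.
from typing import Callable, List, Optional

def dual_system_generate(
    context: str,
    sample_candidates: Callable[[str, int], List[str]],  # System 1: neural sampler
    is_consistent: Callable[[str, str], bool],           # System 2: symbolic check
    num_candidates: int = 10,
) -> Optional[str]:
    """Return the first System 1 proposal that System 2 accepts, else None."""
    for candidate in sample_candidates(context, num_candidates):
        # System 2 examines each proposal for logical consistency with the
        # context; rejected proposals are simply discarded.
        if is_consistent(context, candidate):
            return candidate
    return None  # every candidate failed the consistency check
```

In the paper the check itself is mediated by neural inference (translating candidates into a form System 2 can reason over), but the overall shape is essentially this accept/reject filter.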
I wonder if that's in fact how speech and writing work. That is, an underlying System 1 pumps out pre-canned boilerplate language while an overlying System 2 runs a real-time consistency check. As long as the text string passes the test, the words keep flowing; as soon as it fails, System 1 stops and generates different output, and the check runs again. This start-and-stop process is what we consciously experience as thinking.
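In code terms, that speculation might look something like the toy loop below; next_chunk and contradicts are made-up stand-ins for the two systems, not anything from the paper:

```python
# Toy rendering of the start-and-stop loop speculated above: System 1
# keeps emitting chunks of language, and System 2 vetoes any chunk that
# clashes with what has been said so far.
from typing import Callable, List

def speak(
    next_chunk: Callable[[List[str]], str],         # System 1: fluent proposal
    contradicts: Callable[[List[str], str], bool],  # System 2: consistency veto
    target_len: int = 5,
    max_retries: int = 20,
) -> List[str]:
    utterance: List[str] = []
    while len(utterance) < target_len:
        for _ in range(max_retries):
            chunk = next_chunk(utterance)
            if not contradicts(utterance, chunk):
                utterance.append(chunk)  # passes the check: words keep flowing
                break
        else:
            break  # System 1 kept failing: the felt 'stop' in thinking
    return utterance
```

On this picture, the conscious pauses in speaking would correspond to the retries in the inner loop.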