I think people misunderstand GPT in general: to humans, words have meanings, so when we hear GPT speak words we think it's intelligent. I think the biggest GPT-3 model is only intelligent in the same sense as a human's linguistic subsystem, and in that respect it's a superintelligence: so far beyond any human that we mistake it for having general intelligence. But I'm pretty sure GPT-3 doesn't have *mental models*, so there are a great many questions it'll never be able to answer no matter how far it is scaled up (unless it has already seen an answer it can repeat).
Yes. Though I note that if you believe semantics to be an aspect of the linguistic system (as I do), then GPT-3 doesn’t really cover language. But in this context that’s a quibble.
Here’s the important point: when Gary Marcus and others say GPT-3 lacks meaning, that’s what they’re talking about: no mental model. That’s what symbolic AI was doing back in the Jurassic era, constructing a mental model in some propositional form. It’s not at all obvious just how one can couple neural nets with symbolic propositional models, but it has to be done somehow; a toy sketch of the general idea appears after the list of posts below.
For some hints, see these posts:
- Think of GPT-3 as System 1. Now augment it with a symbolic System 2. [Dual-System, Neuro-Symbolic Reasoning]
- Geoffrey Hinton says deep learning will do everything. I’m not sure what he means, but I offer some pointers. Version 2.
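To make that division of labor concrete, here is a toy sketch in Python. It is not taken from either post; the names (`Proposition`, `MentalModel`, `system1_generate`) are my own stand-ins, and the neural “System 1” is faked with a stub. The only point is that the symbolic “System 2” holds explicit propositions about a situation that can actually be queried, independently of any words it happens to emit.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class Proposition:
    """One piece of the model: 'subject predicate object'."""
    subject: str
    predicate: str
    obj: str


class MentalModel:
    """A symbolic 'System 2': an explicit, queryable store of propositions."""

    def __init__(self):
        self.facts = set()

    def assert_fact(self, subject, predicate, obj):
        self.facts.add(Proposition(subject, predicate, obj))

    def entails(self, subject, predicate, obj):
        # A real system would do inference over the propositions;
        # this toy version just checks membership.
        return Proposition(subject, predicate, obj) in self.facts


def system1_generate(prompt):
    """Stand-in for a language model: fluent continuation, no model of the situation."""
    return f"[fluent text about: {prompt}]"


def answer(prompt, subject, predicate, obj, model):
    """System 1 supplies the words; System 2's model settles the factual question."""
    draft = system1_generate(prompt)
    if model.entails(subject, predicate, obj):
        return f"{draft} -- and the model confirms that {subject} {predicate} {obj}."
    return f"{draft} -- but the model has no support for '{subject} {predicate} {obj}'."


if __name__ == "__main__":
    m = MentalModel()
    m.assert_fact("the key", "is in", "the drawer")
    print(answer("Where is the key?", "the key", "is in", "the drawer", m))
    print(answer("Is the key on the table?", "the key", "is on", "the table", m))
```

The sketch dodges the hard part, of course: how the propositions get built and revised from the neural side. But it shows what the statistical model by itself lacks, a store of facts that exists apart from the words.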