Thursday, March 23, 2023

Adam Savage on Intuition [+ my intuitions about symbolic AI]

From the YouTube page:

Adam shares his absolute favorite magic book growing up: Magic with Science by Walter B. Gibson. Picking up this vintage copy is giving Adam memories of the countless times he pored over this book and how its demonstration of practical science experiments informed his approach and aesthetic style as a science communicator. Every illustration is clearcut and charming, and Adam is so happy to be reunited with this book!

Savage talks about reading about how things work in general, and in particular about how magic tricks work, as described and illustrated in this book. He puts a lot of stress on those illustrations.

And he also talks a lot about intuition (and how it is different from explicit knowledge). You get intuition not from reading things, but from trying things out. Here he seems to be talking mostly about building things from ‘stuff’ and about performing those magic tricks. Intuition gives you a feel for things without your being (quite) able to explain what’s going on. You just know that this or that will work, or not.

I agree with this, and I think a lot about intuition. I’m mostly interested in intuitions about literary works, and about thinking about the mind and so forth. In particular, it does seem to me that if you’ve done a lot of work with symbolic accounts of human thought, as I’ve done with cognitive networks, you have intuitions about language and mind that you can’t get from working on large language models (LLMs), such as GPTs. As far as I’m concerned, when advocates of deep learning discount the importance of symbolic thought, they (almost literally) don’t know what they’re talking about. Not only are they unfamiliar with the theories and models, but they lack the all-important intuitions.

More later.

* * * * *

Revised a bit from a note to Steve Pinker:

You’ve spent a lot of time thinking about language mechanisms in detail, and so have I, though a somewhat different set of mechanisms. But I don’t think Mr. X has, nor, for that matter, have most of the people involved in machine learning. Machine learning is about mechanisms for constructing some kind of model over a huge database. But how that model actually works, that’s obscure. That is to say, the mechanisms that actually enact the cognitive labor are opaque. The people who build the models thus do not, cannot, have intuitions about them. In a sense, they’re not real. By extension, the whole world of cognitive science and GOFAI is not real. It is past. The fact that it didn’t work very well is what’s salient. Therefore, the reasoning seems to go, those ideas have no value.

And THAT’s a problem. Every time Mr. X or someone else would talk about machines surpassing Einstein, Planck, etc., I’d wince. I couldn’t figure out why. At first I thought it might be implied disrespect, but I decided that wasn’t it. Rather, it’s a trivialization of human accomplishment in the face of the dissonance between their confidence and the fact that they haven’t got a clue about what would be involved beyond LOTS AND LOTS OF COMPUTE.

There’s no there there.
