Monday, March 2, 2026

Ellie Pavlick, (How) Does AI Think?

c. 40:16 “At various points I’ve like argued really what we’re seeing here is a neural implementation of what is latently a symbolic system like our symbolic AI systems of yore.”

A bit later Pavlick will back off from that statement. However, her first example is arithmetic. Concerning arithmetic, note that it is NOT "native" to the human mind. Preliterate cultures may not even have open-ended counting systems, if any, and don't do numerical calculations. Moreover, while children pick up language readily without specific instruction, arithmetic requires focused instruction, and fluency requires hours of drill over several years. Careful reasoning is like that as well. Much of formal education is about learning how to reason in various domains.

Keep in mind, “symbolic AI systems” covers a LOT of ground. The expert systems, built on production rules, are perhaps the most visible type of symbolic system. But I think that cognitive nets are a better bet for the latent structure of neural nets. That’s what I argued in ChatGPT: Exploring the Digital Wilderness, Findings and Prospects.
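To make "production rules" concrete, here is a toy forward-chaining production system in Python. The facts and rules are invented purely for illustration; real expert systems had thousands of such rules plus conflict-resolution machinery, but the IF-conditions-THEN-assert cycle is the core idea:

```python
# A minimal forward-chaining production system (illustrative sketch).
# Facts are strings; each rule fires when all its conditions are
# present in the fact base, adding its conclusion as a new fact.

facts = {"has_fur", "says_woof"}  # hypothetical starting facts

# Production rules: (set of conditions, conclusion to assert).
rules = [
    ({"has_fur"}, "is_mammal"),
    ({"is_mammal", "says_woof"}, "is_dog"),
]

changed = True
while changed:  # keep cycling until no rule adds a new fact
    changed = False
    for conditions, conclusion in rules:
        if conditions <= facts and conclusion not in facts:
            facts.add(conclusion)
            changed = True

print(sorted(facts))
```

Note that the rules chain: the first rule's conclusion ("is_mammal") satisfies a condition of the second, so "is_dog" is derived in a later cycle. Whether anything like this rule-firing structure is latent in a trained neural net is exactly the open question.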
