This is a long rambling discussion, over three hours. But useful. Pro, con, down the middle, you name it. Dip in and out as you see fit.
Connor Leahy, for example, is a true believer in machine learning etc., but he's right that: 1) GPT-3, after all, does something and we need to take it seriously, and 2) it may even indicate something about the universe. Not sure he can give a useful explanation of the second, but I've taken my own crack at it in GPT-3: Waterloo or Rubicon? Here be Dragons, Version 2, which began as a series of blog posts at New Savanna. A bit later, I think it is, Scarfe and Duggar have a discussion of the distinction between pattern matching and reasoning that's worth a listen. They should take a look at this paper from the Olden Days:
Yevick, Miriam Lipschutz (1975) Holographic or Fourier logic. Pattern Recognition 7: 197-213.
https://sci-hub.tw/10.1016/0031-3203(75)90005-9
Abstract: A tentative model of a system whose objects are patterns on transparencies and whose primitive operations are those of holography is presented. A formalism is developed in which a variety of operations is expressed in terms of two primitives: recording the hologram and filtering. Some elements of a holographic algebra of sets are given. Some distinctive concepts of a holographic logic are examined, such as holographic identity, equality, containment and “association”. It is argued that a logic in which objects are defined by their “associations” is more akin to visual apprehension than description in terms of sequential strings of symbols.
Yes, it was published in 1975, which is ancient times in the world of artificial intelligence. It seems to me that what Yevick called holographic logic is similar in spirit, and even in mathematics in some respects, to current work on neural networks, while, in contrast, ordinary logic is, as the abstract has it, “description in terms of sequential strings of symbols.” That gives us a starting point to think about the contrast between pattern matching and reasoning. I say a bit more about Yevick's work, and what David Hays and I made of it, in this post, Showdown at the AI Corral, or: What kinds of mental structures are constructable by current ML/neural-net methods? [& Miriam Yevick 1975].
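To make that “similar in spirit, and even in mathematics” claim a bit more concrete: the two primitives in Yevick's abstract, recording a hologram and filtering, correspond mathematically to matched filtering, that is, correlation computed in the Fourier domain. Here's a minimal sketch in Python with NumPy. The names and the toy scene/template are mine, for illustration only, not anything from her paper:

```python
import numpy as np

def matched_filter(scene, template):
    """Locate `template` in `scene` by Fourier-domain correlation --
    roughly the mathematics behind 'record a hologram of the pattern,
    then filter the scene through it'."""
    # Convolution theorem: correlation(scene, template)
    #   = IFFT( FFT(scene) * conj(FFT(template)) )
    S = np.fft.fft2(scene)
    T = np.fft.fft2(template, s=scene.shape)  # zero-pad template to scene size
    corr = np.real(np.fft.ifft2(S * np.conj(T)))
    # The correlation peak marks where the pattern sits in the scene.
    return np.unravel_index(np.argmax(corr), corr.shape)

# Toy demo: bury a small random pattern in a noisy scene, then recover it.
rng = np.random.default_rng(0)
scene = rng.normal(0.0, 0.1, (64, 64))
template = rng.normal(0.0, 1.0, (8, 8))
scene[20:28, 40:48] += template
print(matched_filter(scene, template))  # should recover (20, 40)
```

The point of the sketch is the contrast the abstract draws: the whole scene is matched against the pattern in one parallel operation, defined by its “associations,” rather than by scanning a sequential string of symbols the way ordinary logic does.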
Overall the video confirms my belief that we don't have a useful framework in which to think about minds, machines, and intelligence. We're constantly being surprised, and don't really know why anything works. Contrast this, for example, with the framework we have for thinking about manned expeditions to Mars. Elon Musk notwithstanding, we may not venture there for decades, who knows? But we can think about it in a detailed, systematic, and coherent way. We can't do that with machine intelligence.