Saturday, May 15, 2021

Ramble: Machine learning & the brain, word illusion, FOOM, Models of the Mind

Machine learning, the brain and the future of software

Friday’s post on Geoffrey Hinton’s assertion (deep learning is all we need) didn’t quite go the way I’d planned, but that’s OK. I figured it would be relatively straightforward to point out the obvious limitations of that statement – obvious from my POV – and that it would take no more than four or five paragraphs. Then I got into it and realized that he was likely OK on the core assertion, but that it had implications he may not have appreciated. And once I’d worked through that, I realized that I could interpret my old work on attractor nets more or less in Hinton’s terms, and that, in turn, provides a way to think about complex thought in Hinton’s terms. That in turn led to the paper Hays and I did on natural intelligence. It all fits together. Making the connections took more time and effort than I had planned, but the end result is much more interesting.

That leaves me with more work to do. Well, maybe I’ll do it, maybe I won’t, at least not immediately. What sort of work? I’ve sketched out a framework, but what could we do within that framework now and in the near future? That’s something I’d like to think about. Maybe it has implications for Andreessen’s AI-as-a-platform. I note that my post on that subject uses my old PowerPoint Assistant idea, and that is, in turn, based on some of the ideas behind attractor nets.

So much to do!

Word illusion

One problem Hinton has, one that many have, is that he’s subject to the word illusion. In thinking about that I recalled how difficult it was for me to understand the difference between a word’s meaning and its referent. I think it was my sophomore year in college, when I read Roland Barthes’ Elements of Semiology, not much more than a pamphlet. It was easy enough to grasp that the sign consisted of a signifier – something spoken or written – and, well, what? I was used to thinking of words as pointing to things; that’s easy enough. But that the meaning, the signified, is NOT the thing, the referent – I really had trouble absorbing that idea.

By the time I was in graduate school at SUNY Buffalo, working with David Hays on computational semantics, the distinction between signifier and signified was abundantly clear. For the semantic network formalism was a very ‘concrete’ way of thinking about the realm of signifieds. You could draw pictures, or write (quasi)formal statements. But I didn't have anything like that at my disposal when I encountered Barthes. I don’t know just when the signifier/signified distinction became clear to me. Perhaps it really wasn’t clear until I began studying with Hays. But there must have been something before that; after all, I’d been reading in cognitive science and knew about cognitive networks. I’d seen those diagrams (Ross Quillian, Don Norman, Sydney Lamb).
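
If you want a feel for how ‘concrete’ that realm of signifieds can be made, here is a toy sketch in Python (my own illustration, not Hays’s notation, and the relation labels are made up for the example): concepts are nodes, relations are labeled arcs, and word forms merely index into the network from outside.

```python
# A toy semantic network, for illustration only. Concepts (signifieds) are
# nodes; labeled arcs relate them. Word forms (signifiers) attach only at the
# edge, via the lexicon.

semantic_net = {
    ("dog", "ISA", "animal"),
    ("dog", "HAS-PART", "tail"),
    ("animal", "ISA", "living-thing"),
}

# Signifiers map onto concept nodes; the word is not the concept, and two
# different signifiers can point to one and the same signified.
lexicon = {
    "dog": "dog",      # English word form -> concept node
    "chien": "dog",    # a French signifier for the same signified
}

def neighbors(concept, net):
    """Return (relation, target) pairs leaving a concept node."""
    return [(rel, tgt) for (src, rel, tgt) in net if src == concept]

print(neighbors(lexicon["chien"], semantic_net))
# e.g. [('ISA', 'animal'), ('HAS-PART', 'tail')]  (set order may vary)
```

The toy makes the one point that matters here: “dog” the word and dog the concept live in different places, and “chien” points to the very same node.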

That is, it is one thing to assert the distinction, as Saussure did early in the 20th century. But how you understand that distinction depends on the conceptual tools available to you. And those tools weren’t readily available until, well, the so-called cognitive revolution. I suppose symbolic logic is a precursor, but the formalism is so detached from ordinary language that it doesn’t really do the job.

The thing about machine learning is that it doesn’t really force you to deal with the signifier/signified distinction. Sure, they know that the texts in the corpora used by the machine are just word forms, signifiers. The meanings/signifieds aren’t there. But they don’t really have to work with the distinction. And when the resulting engine does interesting things with language, like vector semantics, machine translation, and so forth, well, it must somehow, to some degree, ‘understand’ the language it’s spitting out, no? The fact that it is not at all clear just what the machine is doing only reinforces the illusion. Since we don’t actually know, it’s easy enough to believe, in effect, that the machine has somehow induced signifieds from the signifiers. That’s what those vectors are, no? Not really, but...
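
To make the illusion concrete, here is a minimal sketch (mine, a drastic simplification of what the real systems do) of vector semantics built from nothing but the co-occurrence of word forms in a toy corpus. “Dog” and “cat” come out as similar without any signifieds ever entering the picture.

```python
# A minimal sketch of distributional "vector semantics": vectors built purely
# from co-occurrence counts of word forms (signifiers) in a tiny toy corpus.
from collections import Counter
from math import sqrt

corpus = "the dog barked the cat meowed the dog slept the cat slept".split()
vocab = sorted(set(corpus))

def cooccurrence_vector(target, window=2):
    """Count how often each vocabulary word appears near the target word."""
    counts = Counter()
    for i, w in enumerate(corpus):
        if w == target:
            for j in range(max(0, i - window), min(len(corpus), i + window + 1)):
                if j != i:
                    counts[corpus[j]] += 1
    return [counts[v] for v in vocab]

def cosine(u, v):
    """Cosine similarity between two count vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = sqrt(sum(a * a for a in u)) * sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

# "dog" and "cat" look similar because they occur in similar word contexts,
# not because the program has any grasp of dogs or cats.
print(cosine(cooccurrence_vector("dog"), cooccurrence_vector("cat")))
```

Everything in that computation happens on the signifier side of the ledger; the similarity score falls out of distribution alone.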

You see the problem.

FOOM vs. the real singularity

I’m thinking of using that as the starting point for a 3 Quarks Daily piece. By FOOM, which is a term of art in some circles, I mean the idea that at some point in the (perhaps not too distant) future the machines will suddenly become super-intelligent, bootstrap themselves to even greater levels of intelligence, and then, FOOM! take over the world. That is, FOOM is a term for a very popular version of the so-called Technological Singularity.

I’m using “singularity” in the sense John von Neumann used it, as a point “in the history of the race beyond which human affairs, as we know them, could not continue.” I think we’re already there. It is real. It is not a point event, but an ongoing and ill-defined process. Think of the cyberwar we’ve seen so far, including election interference, ransomware (think of the current attack on the oil pipeline to the NYC region), and this, that, and the other. It’s not going away. Think about the fact that Facebook, Google, and Twitter have (potentially) more control over public discourse than any national government. And so forth.

I was thinking of doing this for May 24, but I think I’ll push it off a month. I want to let it simmer.

Models of the Mind

Instead I’m going to review Grace Lindsay, Models of the Mind: How Physics, Engineering, and Mathematics Have Shaped Our Understanding of the Brain (2021). It’s a very good general introduction to some of the various mathematical ideas that have been and are being used in understanding the brain. I knew some of this stuff, and some I didn’t.

* * * * *

I’ll continue blogging about Seinfeld bits and about kids and music. And then we have the irises.

More later.
