Tuesday, July 12, 2022

Ramble: Waffles and AI [the physical world, algorithms, memory]

I’ve been feeling a need to write a substantial blog post, whatever that is. That’s one thing. But my brain is, if not fried, in no mood for painstaking thought and it’s mid-afternoon, not the best time of day for strenuous intellectual work. Those two things are not compatible.

What to do? Ramble. Here goes.

For years I’ve been going to the Malibu Diner a couple of times a month – sometimes once a week on Saturday mornings for a month, maybe two – for breakfast: mostly, but not always, waffles. The Malibu closed a couple of months ago because the property had been sold; the diner was demolished to make room for an apartment building. I’ve heard that the Malibu will have a place in the new building when it’s completed. But that’s at least two years away and so of no use to me now.

A couple of years ago – when I was living in Jersey City – a friend told me about this place, Turning Point, that had opened up in Hoboken. Well, I’m now living in Hoboken, just around the corner from Turning Point. So I decided to give it a try.

It's on Frank Sinatra Drive, just above 14th Street. Sinatra Drive runs along the Hudson River, just across from Midtown Manhattan. Turning Point has an outdoor space. That’s where I decided to sit.

The sun was beating down on my neck, but I solved that problem by moving to the other side of the table. I looked over the menu and saw that they had a nice selection of pancakes, French toast, and waffles – other stuff, too, but that’s generally what I get when I go out to eat. I decided on a plain Belgian waffle, cranberry juice, and bacon, pretty much what I had at the Malibu. Except that I also had coffee at the Malibu. But I decided that I’d forgo the $3.50 cup of semi-fancy coffee.

A waiter came and took my order. And I started thinking about artificial intelligence. That’s something I think about a lot, but I’ve been thinking about it particularly intensely recently. [Somewhere in here I decide to take a photo of the table before breakfast arrives.] Yann LeCun, a prominent researcher who’s vice president of AI at Meta and on the faculty at NYU, had just published a position paper outlining his vision for the next 10 years or so. I’d read it, written some comments on it, and listened to two YouTube videos about it, one expository, the other critical.

At about this time my waffle and bacon arrived; I’d already gotten the cranberry juice. So I had to interrupt my thinking and take some photos of breakfast. “Why,” you ask, “did you do that?” Because I like to take photos and it’s rather fun to take photos of food.

You realize that I’m just making the order up. I wasn’t taking notes. Yes, I had breakfast at Turning Point. Yes, I took photos. And yes, I thought about AI. But in exactly what order, who knows?

So I spread the butter on my waffle, poured some syrup on it, and began eating. And, in between bites of waffle and bacon and sips of cranberry juice and ice water – a nice pitcher of it with a bit of lemon floating in it – I thought some more about AI. [While slipping in a photo or two as well.]

This is called multi-tasking.

It’s become clear to me that AI is up against the physical world in two ways. On the one hand, it needs very basic concepts about the physical world in order to function. We get such concepts from growing up in and moving about in the physical world. AIs have a much more difficult time doing that. Yes, there are robots, but getting robots to move about easily and to sense the world is difficult. It’s this lack of easy contact with the physical world that cripples AI’s ability to acquire common-sense reasoning. That was a problem for Old School symbolic systems and it’s a problem for today’s new-fangled learning systems.

Meanwhile I’ve eaten a strip of bacon – one of three – about a third of my waffle, and have taken a couple of photos of the meal in progress.

LeCun realizes this. Heck, everyone realizes it, sorta’. And he thinks he can solve the problem by running a bunch of video through his new architecture and inferring the 3-D structure of the world by interpolating between frames. Maybe. But no one really knows. It would be nice if that worked, because it would mean that much of the work of acquiring common-sense knowledge could be done more or less automatically. I have my doubts. Eric Jang, VP for AI at Halodi Robotics, thinks maybe that work will have to be done by robots. Will that work? I don’t know, but the robots are out there in the world, which is where you need to be to pick up that kind of knowledge.
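If you want a feel for what “interpolating between frames” might mean as a training objective, here’s a toy sketch – my own made-up stand-in, not LeCun’s actual architecture (his proposal predicts in a learned representation space, with machinery I’m omitting):

```python
import torch
import torch.nn as nn

# Toy sketch: learn an encoder by predicting a held-out middle frame's
# latent from its two neighbors. All shapes and numbers are invented.

class Encoder(nn.Module):
    def __init__(self, frame_dim=1024, latent_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(frame_dim, 256), nn.ReLU(), nn.Linear(256, latent_dim)
        )

    def forward(self, x):
        return self.net(x)

encoder = Encoder()
predictor = nn.Linear(2 * 64, 64)  # middle latent from (prev, next) latents
opt = torch.optim.Adam(
    list(encoder.parameters()) + list(predictor.parameters()), lr=1e-3
)

# Stand-in "video": random (previous, middle, next) frames as flat vectors.
prev, mid, nxt = (torch.randn(32, 1024) for _ in range(3))

for step in range(100):
    z_pred = predictor(torch.cat([encoder(prev), encoder(nxt)], dim=-1))
    loss = nn.functional.mse_loss(z_pred, encoder(mid).detach())
    opt.zero_grad()
    loss.backward()
    opt.step()

# Real systems need tricks to keep the latents from collapsing to a
# constant (e.g., a slow-moving target encoder); omitted here.
```

Whether anything like that scales up to genuine common-sense knowledge of the world is exactly the open question.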

Two strips of bacon down, the waffle’s mostly eaten, I’ve drunk most of the cranberry juice, more water left. More photos.

And then there’s the fact that computers are physical devices. That bears on the problem of dealing with symbols, which is, in a way, the ‘opposite’ of dealing with the physical world. Symbols are discrete while the physical world is continuous. Learning systems work well on continuous information (when they can get it), but have problems with discrete information. So they don’t handle symbols well. Much of human knowledge is encoded in symbols: natural language, but also mathematics and other formalisms.
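Here’s a toy illustration of why that is – mine, purely illustrative. Gradient-based learning needs a continuous path from the answer back to the parameters, and a hard symbolic choice like argmax cuts that path:

```python
import torch

values = torch.tensor([0.0, 1.0, 2.0])           # payoff for each "symbol"
scores = torch.tensor([1.0, 2.0, 0.5], requires_grad=True)

# Continuous: a soft, differentiable mixture over the symbols.
soft_pick = (torch.softmax(scores, dim=0) * values).sum()
soft_pick.backward()
print(scores.grad)  # nonzero: the learner gets a training signal

# Discrete: argmax returns a bare integer index; the graph is cut.
hard_index = torch.argmax(scores)  # tensor(1), no grad_fn attached
hard_pick = values[hard_index]
# hard_pick.backward() would raise an error: no gradient flows
# through a discrete choice, so nothing upstream can learn from it.
```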

Now, this is tricky [only one bite of waffle left, and a strip of bacon]. When you program symbolic structures, the algorithm can be physically separated from the memory it needs to run. Most computer programs are written that way. Old School symbolic systems were written that way. But these newer learning-based systems don’t work that way at all. The learning architecture creates a system in which memory and algorithm are intimately intertwined. Thus they have a great deal of trouble dealing with symbols. As far as I can tell, LeCun doesn’t really have a strategy for dealing with this problem beyond minimizing the use of symbols. I don’t think he’ll get very far that way.
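A toy contrast, again mine and purely illustrative: in the classical program the knowledge is a data structure you can inspect and edit without touching the algorithm, while in even the smallest network the knowledge just is the weights that do the arithmetic:

```python
import torch
import torch.nn as nn

# Old School: algorithm and memory are separate. The rule table is
# plain data; the lookup code below would work on any other table.
RULES = {("bird", "can_fly"): True, ("penguin", "can_fly"): False}

def lookup(entity, prop):
    return RULES.get((entity, prop))

print(lookup("penguin", "can_fly"))  # False -- edit RULES, not the code

# Learning system: the weight matrix is both the memory and the thing
# that does the computing. There's no line you can point to and say
# "that's the penguin fact"; it's smeared across the parameters.
net = nn.Linear(4, 1)
y = net(torch.randn(4))  # knowledge and computation, entangled
```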

I’m pretty much done with breakfast. The waffle was fine. The bacon was a bit overdone for my taste. Perhaps next time I’ll ask them not to cook it so long. I’ve drunk all the cranberry juice and taken some photos of the empty glasses on the table.

Which is all well and good for breakfast, but what about algorithms, memory, and AI? I don’t know. That’s a tough one.

But the thing is, the problem is a fundamental one about realizing computational structures in matter. If it exists for silicon, it exists for neurons as well. How does the human nervous system deal with this issue?

I’m just making this up. But I think three things are going on. Fundamentally, we have neural plasticity. Nervous systems are not ‘hard-wired’ in the way digital computers are – though with computers it’s not so much the physical wiring as it is fixed memory locations. There’s a fair amount of ‘flex’ in neural connectivity. That’s one thing.

And then we have to remember that for roughly the first two years, humans don’t have much language to speak of. The infant’s cognitive system is non-symbolic. Language develops gradually. Yeah, I know, that’s not the standard line. The standard line is that language develops rapidly. But rapidly over a period of just how many years? Five, six, seven? Just how rapid is that? My point is that symbolic and non-symbolic systems run in parallel, with the non-symbolic systems getting a head start. One result is a differentiation between systemic and episodic cognition. I suspect that eases the problem of shoehorning symbolic and non-symbolic computing into the same physical system, but I’m just guessing about that. Who knows?

By this time I’ve paid my check and have started walking around, taking a few more photos. A good meal. I’ll be back.
