Friday, July 10, 2015

Finite Minds in Communities

Here are some crude notes from my current thinking, and I do mean crude. I keep thinking I’m going to write a well-reasoned post on this stuff. And I will, but I have no idea just when. So I’m putting this out there for now. More later.

* * * * *

There’s a real difference between having N independent minds and having one large mind of roughly equivalent computational power, however it is appropriate to measure that. The interaction among N independent minds has greater flexibility and computational reach. A greater capacity for meaning, if not information.

“The wisdom of crowds.”
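To make the statistical piece of that concrete, here’s a toy sketch in Python. It’s mine, the numbers are made up, and it captures only the error-averaging part of the story, not the flexibility or the meaning: N minds each estimate the same quantity with independent noise, and the crowd’s average error shrinks roughly as 1/√N while any single mind’s error stays fixed.

import random
import statistics

# Toy "wisdom of crowds": N independent minds estimate the same quantity
# with unbiased noise. The crowd mean's error shrinks roughly as 1/sqrt(N);
# a single mind's error does not. All figures here are hypothetical.

TRUE_VALUE = 100.0
NOISE_SD = 15.0  # each mind's independent error (made-up figure)

def crowd_error(n_minds, trials=2000, seed=42):
    rng = random.Random(seed)
    errors = []
    for _ in range(trials):
        estimates = [rng.gauss(TRUE_VALUE, NOISE_SD) for _ in range(n_minds)]
        errors.append(abs(statistics.fmean(estimates) - TRUE_VALUE))
    return statistics.fmean(errors)

for n in (1, 10, 100, 1000):
    print(f"N = {n:4d}   mean |error| = {crowd_error(n):5.2f}")

Averaging is the weakest possible form of interaction, of course; real minds argue, specialize, and correct one another. The point is just that independence buys you something that raw capacity alone doesn’t.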

As far as I can tell, proponents of AI envision basically one large mind that just grows and grows and grows in capacity.

* *

Where minds are constructed from the inside, the first ‘layer’ of cognitive equipment somehow ‘contains within itself’ ultimate limitations. You can’t just keep adding and adding and adding. Not everything will ‘fit’.

Building a real mind in the real world is not like adding propositions and axioms to a logical system.

Thoughts are constructed over a matrix. Different matrices accommodate, have affordances for, different kinds of thoughts.

There’s getting around in the concrete world, whatever that is. And there’s abstractions over that. The power comes in the abstractions. But there’s more than one way to construct abstractions over a given base. Which one is best? How do you tell?
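A toy illustration of what I mean by affordances, assuming nothing beyond grade-school arithmetic: take the same base (a tally of marks) and build two different abstractions over it. Each makes different operations cheap. This is my sketch, not a theory.

# Same base, two abstractions, different affordances.

def unary(n):
    """The 'base': a tally of marks."""
    return "|" * n

# Abstraction 1: tallies. Addition is trivial (just concatenate)...
def add_tallies(a, b):
    return a + b
# ...but comparing two big tallies means walking down both of them.

# Abstraction 2: positional (binary) numerals. Addition now needs the
# carry algorithm, but comparison is cheap: the longer numeral wins,
# and ties are settled digit by digit.
def compare_positional(a, b):
    if len(a) != len(b):
        return len(a) - len(b)
    return (a > b) - (a < b)

print(add_tallies(unary(3), unary(4)))    # |||||||  (7)
print(compare_positional("101", "11"))    # positive: 5 > 3

Which abstraction is “best” depends on which operations the world keeps demanding of you. There’s no answer from the base alone.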

And so we have to think about the world. What does the world HAVE to be like so that a mind can think it? It has to have structure; without structure there’s nothing for the mind to get a purchase on. But there must also be structure to the structure. Objects and events must be ‘clumped’ in perceptual space, with lots of emptiness between the clumps. That is to say, perceptual space must be sparsely populated. (Cf. the material on Yevick’s law and abstraction in Principles and Development of Natural Intelligence.)
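Here’s a crude sketch of the clumpiness point in Python. It’s my illustration of the intuition, not a model of Yevick’s law, and the specific numbers (cluster centers, spreads) are arbitrary: two categories of 2-D ‘percepts’ are generated either as tight clusters with empty space between them or smeared across the same region, and a learner as simple as nearest-centroid sorts the clumpy world nearly perfectly while muddling the smeared one.

import math
import random

# Clumpy vs. smeared perceptual worlds. Hypothetical toy.

def make_world(clumpy, n_per_class=200, seed=0):
    rng = random.Random(seed)
    centers = [(-3.0, 0.0), (3.0, 0.0)]
    spread = 0.5 if clumpy else 5.0  # tight clumps vs. heavy overlap
    data = []
    for label, (cx, cy) in enumerate(centers):
        for _ in range(n_per_class):
            data.append(((rng.gauss(cx, spread), rng.gauss(cy, spread)), label))
    return data

def nearest_centroid_accuracy(data):
    """Learn one centroid per class, then classify each point by the
    nearest centroid. About the dumbest learner there is."""
    sums = {}
    for (x, y), label in data:
        sx, sy, n = sums.get(label, (0.0, 0.0, 0))
        sums[label] = (sx + x, sy + y, n + 1)
    centroids = {k: (sx / n, sy / n) for k, (sx, sy, n) in sums.items()}
    correct = sum(
        label == min(centroids, key=lambda k: math.dist((x, y), centroids[k]))
        for (x, y), label in data
    )
    return correct / len(data)

for clumpy in (True, False):
    acc = nearest_centroid_accuracy(make_world(clumpy))
    print(f"clumpy={clumpy!s:5}  accuracy={acc:.2f}")

With the clumps, the two centroids carry nearly all the information. Smear the same points across the space and no amount of extra data fixes it; the overlap is irreducible.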

* * *

The end is implicit in the beginning.

But in the human case, when a given regime ‘tops out’ there’s always a younger generation waiting. And they have a chance to reset cognition on a different base. That’s how you get out of a conceptual bind.

Human thought has run into dead ends time and again. And so has artificial ‘thought’. What makes anyone think there’s a way around that?

So far, every human system of thought has ‘topped out’. I suspect that’s inherent in the relationship between thought and the world.

Real minds are always finite and limited. That will also be true of artificial minds. It’s not a matter of how many elements you have or how fast you can run them. It’s a matter of structure and of the fact that you’ve got to build on what you’ve got. Sooner or later that’s going to top out.

* *

Sure, Turing machines can compute all computable functions. So what? Do we really know what that means vis-à-vis reasoning about and understanding the world? Yeah, there’s Gödel, too. Our minds are demonstrably finite, taken one at a time. But taken in populations, generation after generation, there’s always room to grow. Always ways to reorganize and start from a new ‘bottom’.

Artificial computing devices with gazillions of elements working at light speed, so what? Sheer capacity won’t do the trick.
