
Friday, January 4, 2019

What is the mind (and what can we know of it)?

In a recent working paper, Toward a Theory of the Corpus [1], I suggested
that we think of the mind as a high-dimensional space of word meanings. That idea is explicit in each of the pieces, though different mathematical models are used. The researchers who use these models do not, however, think of them that way, at least not so far as I can tell. They think of the models as being about texts, and thus about the meanings of words in texts. But how are these texts created? Where do the words come from? Surely the words are placed in texts by minds somehow engaged with the world; whether that world is real or fictional is secondary. If those high-dimensional spaces are about texts, they are also about the minds that created the texts.

Now, that is a strange thing, very strange, to think of the mind as a high-dimensional space of verbal meanings. If you wish, think of it as a metaphor, one we can operationalize through the use of mathematical models. And we don’t have to think of it as a metaphor that somehow exhausts the mind. I certainly don’t. There is more to the mind than words; there are perceptions, motivations, and feelings. We’re dealing with an idealization, a conception of the mind, not the mind itself. (p. 21, see also pp. 24-25, 27)
I found that strange when I first suggested it, and I still find it a bit strange. The mind is a high-dimensional space? What’s that?
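
To make that a bit more concrete, here is a minimal sketch of how such a space can arise from texts. The toy corpus, the window size, and the raw co-occurrence counting are my illustrative assumptions; the models discussed in the working paper are more sophisticated relatives of this. Each word becomes a point in a space with one dimension per vocabulary item, and words used in similar contexts end up near one another:

    # A minimal sketch of a "high-dimensional space of word meanings":
    # word vectors built from co-occurrence counts. The toy corpus and
    # window size are illustrative assumptions.
    import numpy as np

    corpus = [
        "the cat sat on the mat".split(),
        "the dog sat on the rug".split(),
        "the cat chased the dog".split(),
    ]
    vocab = sorted({w for sent in corpus for w in sent})
    index = {w: i for i, w in enumerate(vocab)}

    # Count co-occurrences within a window of two words on either side.
    counts = np.zeros((len(vocab), len(vocab)))
    for sent in corpus:
        for i, w in enumerate(sent):
            for j in range(max(0, i - 2), min(len(sent), i + 3)):
                if j != i:
                    counts[index[w], index[sent[j]]] += 1

    # Each row is now a point in a space with one dimension per word.
    # Words used in similar contexts, like "cat" and "dog", end up close.
    def cosine(u, v):
        return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

    print(cosine(counts[index["cat"]], counts[index["dog"]]))

On a real corpus the space would have tens of thousands of dimensions, which is the sense of “high-dimensional” at issue here.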

Perhaps it would be better to think of the mind as evolving through, or unfolding in, a high-dimensional space. The high-dimensional space is just a system of coordinates, like the latitude and longitude coordinates used to map the earth’s surface. The mind exists in that space. What we’re interested in is the regions in that space and how attention, say, can move from one region to another. Some regions will be adjacent, some will not. And so forth.

But I’m not ready to start proposing regions, though ultimately that’s what I want to do. I want to follow literary texts through this space and see what kinds of paths they take. By analyzing those paths in relation to the texts themselves, it should be possible to begin making sense of the various regions.
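
Still, here is a rough sketch of what following a text through such a space might look like: slide a window along the text, average the word vectors in each window, and watch how far each window’s centroid moves from the last. The fifty-dimensional random vectors are a stand-in of my own; in practice one would use vectors from an actual corpus model:

    # A sketch of tracing a text's path through a semantic space.
    # The random 50-d embedding is a stand-in for corpus-derived vectors.
    import numpy as np

    rng = np.random.default_rng(0)
    text = "the cat sat on the mat and dreamed of chasing the dog".split()
    embedding = {w: rng.normal(size=50) for w in set(text)}

    # Centroid of each 4-word window: one point on the path per window.
    window = 4
    centroids = [
        np.mean([embedding[w] for w in text[i:i + window]], axis=0)
        for i in range(len(text) - window + 1)
    ]

    # Step sizes along the path; a large jump would suggest the text has
    # moved into a different region of the space.
    steps = [np.linalg.norm(b - a) for a, b in zip(centroids, centroids[1:])]
    print([round(float(s), 3) for s in steps])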

Let’s set that aside. I just wanted to get that up there.

Now, here’s the first paragraph of the Wikipedia entry for mind:
The mind is a set of cognitive faculties including consciousness, perception, thinking, judgement, language and memory. It is usually defined as the faculty of an entity's thoughts and consciousness. It holds the power of imagination, recognition, and appreciation, and is responsible for processing feelings and emotions, resulting in attitudes and actions.
Here the mind seems to be a container for various psychological processes. And that’s a very different conception from the one I’ve proposed above.

Or is it? The terms are certainly different. What can you do with this standard conception?

As far as I can tell, about all that is in fact done with this bare-bones conception of mind is to argue about whether or not computers can constitute minds, or whether or not the mind exists in the brain. It’s not clear to me that this conception does much useful intellectual work; I don’t find arguments about whether computers can have minds to be very useful.

As for minds and brains, well, there we have my metaphor of the mind as neural weather. That harks back to Walter Freeman’s work on the complex dynamics of the nervous system. And, of course, Freeman talks of the brain’s high-dimensional state space, which is some kind of relative of the high-dimensional semantic space I talk about above. Just how close a relative, that’s hard to say. Each point in Freeman’s space is a state of the brain. Each point in a corpus model is a word meaning. Those are very different things. Though we might bring them together by thinking of each point in a corpus model as the target of focal attention, or perhaps as a basin of attraction in a dynamical model of neural activity [2]. And now things are beginning to get just a little interesting.
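
To give that last idea a little flesh, here is a toy Hopfield-style sketch in which a few stored patterns, standing in for word meanings, act as basins of attraction, and a noisy cue settles into one of them. This is my illustration only, not Freeman’s model and not the attractor nets of [2]:

    # A toy Hopfield-style network: stored patterns act as basins of
    # attraction, and a corrupted cue settles back into its basin.
    import numpy as np

    rng = np.random.default_rng(1)
    n = 64
    patterns = rng.choice([-1, 1], size=(3, n))  # three stored "meanings"

    # Hebbian weight matrix, zero diagonal.
    W = sum(np.outer(p, p) for p in patterns) / n
    np.fill_diagonal(W, 0)

    # Start from pattern 0 with 15 of its 64 units flipped.
    state = patterns[0].copy()
    flip = rng.choice(n, size=15, replace=False)
    state[flip] *= -1

    for _ in range(10):  # synchronous updates; enough to settle here
        state = np.sign(W @ state)
        state[state == 0] = 1

    print(int(np.all(state == patterns[0])))  # 1 if the cue fell into basin 0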

However, I don’t think I want to start wandering down that path just yet. Let’s just leave it there, hanging in the air as it were.

But we’re now back at corpus semantics. That’s been around for a while and has proven quite useful. All I’m trying to do is provide a motivated way of interpreting corpus semantics as being about the structure of the mind. And if we can do that, I think we can begin to move beyond Martin Kay’s provocative observation [3]:
Symbolic language processing is highly nondeterministic and often delivers large numbers of alternative results because it has no means of resolving the ambiguities that characterize ordinary language. This is for the clear and obvious reason that the resolution of ambiguities is not a linguistic matter. After a responsible job has been done of linguistic analysis, what remain are questions about the world. They are questions of what would be a reasonable thing to say under the given circumstances, what it would be reasonable to believe, suspect, fear or desire in the given situation. If these questions are in the purview of any academic discipline, it is presumably artificial intelligence. But artificial intelligence has a lot on its plate and to attempt to fill the void that it leaves open, in whatever way comes to hand, is entirely reasonable and proper. But it is important to understand what we are doing when we do this and to calibrate our expectations accordingly. What we are doing is to allow statistics over words that occur very close to one another in a string to stand in for the world construed widely, so as to include myths, and beliefs, and cultures, and truths and lies and so forth. As a stop-gap for the time being, this may be as good as we can do, but we should clearly have only the most limited expectations of it because, for the purpose it is intended to serve, it is clearly pathetically inadequate. The statistics are standing in for a vast number of things for which we have no computer model. They are therefore what I call an “ignorance model”.
Kay is right: in the context that most interests him, machine translation, we ARE using “statistics over words ... to stand in for the world construed widely”. What I’m suggesting is that there is a way to use those statistics to tell us something about the mind, perhaps construed loosely.

Loosely is better than nothing. And we can use it to lead us further on.

References

[1] William Benzon, Toward a Theory of the Corpus, Working Paper, December 31, 2018, 46 pp. Academia: https://www.academia.edu/38066424/Toward_a_Theory_of_the_Corpus_Toward_a_Theory_of_the_Corpus_-by; SSRN: https://ssrn.com/abstract=3308601.

[2] That brings us within range of my notes on attractor nets. See William Benzon, Attractor Nets, Series I: Notes Toward a New Theory of Mind, Logic and Dynamics in Relational Networks, Working Paper, 2011, 52 pp. Academia: https://www.academia.edu/9012847/Attractor_Nets_Series_I_Notes_Toward_a_New_Theory_of_Mind_Logic_and_Dynamics_in_Relational_Networks.

[3] Martin Kay, A Life of Language, Computational Linguistics 31(4), 425-438, 2005, http://web.stanford.edu/~mjkay/LifeOfLanguage.pdf.
