Are we looking at an ontological moment in human thought?
On the one hand, we have the rise of Object-Oriented Ontology in the Continental philosophical tradition.
But we also have the fact that “ontology” has become a term of art in the knowledge representation division of cognitive science and AI. An ontology, in this sense, is the list of different kinds of things—animal, vegetable, mineral—one must include in a domain model in order to represent that domain in a computer program. As such, it entails no direct commitment to what’s really out there in the world; rather, it’s just how we think about the world.
Q: But how we think about the world is changing, no?
A: Yes.
Back in 1979 psychologist Frank Keil published Semantic and Conceptual Development, which is about how children develop ontological concepts. How do they distinguish living from non-living, and when do they first do it? Plants from animals? Like all good thinkers, of course, he sets it up with a review of the background. There is, way way long ago, Aristotle’s Categories, which he covers. But it’s the more recent history that interests him: Bertrand Russell and Gilbert Ryle, who “proposed theories that relate predictability to ontological categories,” and some work originating in logic from the early 1960s by Fred Sommers that encompasses the two.
The notion of ‘category mistake’ is key. For example, “The cow was an hour long” (from Keil p. 3) simply doesn’t make sense. One cannot measure the length of cows in hours (unless, perhaps, one were to ask how long it would take, say, for a small slow-moving animal to move from one end of the cow to the other). It’s a mistake in category to make such an assertion.
In this sense, one of the best-known sentences in modern thought is a wonder of category mistakes, Chomsky’s example: “Colorless green ideas sleep furiously.” Chomsky’s point was that the sentence is perfectly grammatical, but semantically hopeless. Hence, grammar is independent of semantics. He was wrong on that, but that’s another argument.
Let’s go to Google’s Ngram viewer and look at the rise of “category mistake”.
The legends are a bit difficult to read (you can re-run the query here), but the up-tick starts around 1960 and peaks a bit before 2000. The rise of object-oriented ontology is on the downslope of the curve.
Q: Why?
A: That’s an exercise for the reader.
Meanwhile, in 1974 Erving Goffman published Frame Analysis, which he introduced by discussing William James’s essay on “The Perception of Reality”:
Instead of asking what reality is, he gave matters a subversive phenomenological twist, italicizing the following question: Under what circumstances do we think things are real? The important thing about reality, he implied, is our sense of its realness in contrast to our feeling that some things lack this quality. One can then ask under what conditions such a feeling is generated, and this question speaks to a small manageable problem having to do with the camera and not what it is the camera takes pictures of.
There we have it, ontology, though ontology as a category of human thought, without commitment to what’s actually out there in the world. But Goffman takes it a step further, as he’s interested in how we frame one another’s experiences—he discusses, for example, Orson Welles’s infamous War of the Worlds broadcast. Human thought is not just an isolated Cartesian subject trying to think itself out of the vat it’s in. It’s social, it’s collective. And it’s cultural as well.
Why? Why’s all this happening now? Do ontological moments come and go in human thought?
I’ll leave that last one alone for a second. As for now, I’m going to blame it on the computer. Why? Because, in terms of our categories of thought, the computer is ontologically anonymous. It’s clearly a thing, an inanimate complex of stuff. But we interact with it through language, a capacity heretofore associated only with our own noble selves.
Whoa! Spooky! The world just shook.
A thinking machine—that’s what they called them back in the 1950s, and before (check out this Ngram). How do we make sense of that, a machine that thinks? But does it, really? We have thinkers, serious thinkers, claiming that the whole world is just a computer. We have other thinkers redoing the Christian apocalyptic rapture as the spontaneous emergence of a TRANSCENDENT SUPER COMPUTER out of, what, interactions in the web of mere computers?
Do ontological moments come and go in human thought?
Well, there was a time when they referred to a steam-driven locomotive as an iron horse (see Ngram). Consider this passage from Henry David Thoreau’s “Sounds” chapter in Walden (1854):
When I meet the engine with its train of cars moving off with planetary motion . . . with its steam cloud like a banner streaming behind in gold and silver wreaths . . . as if this traveling demigod, this cloud-compeller, would ere long take the sunset sky for the livery of his train; when I hear the iron horse make the hills echo with his snort like thunder, shaking the earth with his feet, and breathing fire and smoke from his nostrils, (what kind of winged horse or fiery dragon they will put into the new Mythology I don’t know), it seems as if the earth had got a race now worthy to inhabit it.
There you have it, iron horse and the fire-breathing dragon.
Is this mere metaphor? Or did Thoreau not know, deep in his gut, just what kind of thing this was? After all, this was the first time in history that people saw inanimate beings, mere collocations of things, move over the surface of the earth under their own power.
Whoa! Spooky! The world just shook.
And it’s still shaking.
Whole lotta shakin’ goin’ on.
Love, love me do.
So, are we looking at an ontological moment in human thought?