Back in the ancient days of the 1950s, the intentional fallacy was invoked to separate the text from the author; indeed, it was invoked to separate any work of art from its creator. Agency was thus invested solely in the text itself, the autonomous text. It was the critic’s job to interrogate the text and thus discern its meaning.
As a practical matter, it turned out that texts spoke differently to different critics. For some this was evidence of the richness of texts, that they should support so many meanings. For others it was a problem.
Various solutions to the problem were tried. One line of thinking restored authorial intention, subordinating textual meaning to that intention, thus locating agency in the author. Another line of thinking killed the author and located meaning in codes variously linked to social structure or to the unconscious. Agency was thus denied to author, reader, and text and invested in those codes and the nebulous structures placing them on offer. Yet another line of thinking located agency in the reader.
So: text, author, codes, reader. What else could there be?
Now the speculative realists and object-oriented ontologists are investing the text with agency—see, for example, this Twitter lecture by Eileen Joy and this commentary by Levi Bryant. Is this but a return to an old position albeit encased in new terminology? Or will something new emerge?
Who knows? I note that Bryant ends by suggesting that we “allow the work of art to transform how we sense,” an old idea, tried and true: make it new.
I further note that Joy begins by asking: “First, what happens when we consider that literary characters are not human beings, but more like mathematical compressions of the human?” Indeed, literary characters ARE NOT human beings. Could we perhaps arrive at some understanding of just how they are “mathematical compressions” and of how we understand such compressions?