Saturday, August 1, 2020

What Louis Milic saw back in 1966 [digital humanities]

Willard McCarty just reminded me of an article Louis Milic published in the first issue of Computers and the Humanities:
Milic, L. (1966). The next step. Computers and the Humanities, 1(1), pp. 3-6. doi:10.1007/bf00188010
DOIs didn’t exist at that time, much less our helpful Russian friends who have made the world’s scholarly literature freely available through Sci-Hub. At that time we were, of course, locked in a deadly Cold War with the Soviet Union, a war that lasted until the fall of the Berlin Wall in 1989.

Computers and the Humanities aside, 1966 was an important year in humanistic thought in the United States, for that’s the year the French landed in Baltimore and stormed the stacks of The Johns Hopkins University to discourse on “The Languages of Criticism and the Sciences of Man” [“Les Langages Critiques et les Sciences de l'Homme”]. I was a sophomore at the time and, though I didn’t attend any of the sessions of that (in)famous structuralist symposium, I was in the orbit of Dick Macksey, one of its organizers. I rather doubt that humanities computing was on the minds of any speakers at the symposium, but the informatic and cybernetic culture that informed computing would have been very much in the air, as it also informed structuralism.

As Milic mentions machine translation in his remarks, I observe that MT was one of the founding disciplines of computer science, with its origins in the early 1950s. In America the objective was to translate Russian technical documents into English. The Soviets had broader needs, as they had a multi-cultural, multi-lingual nation to govern. The American enterprise collapsed in the mid-1960s and was deep into re-conceptualizing itself as computational linguistics at the time of the structuralist symposium.

Did Milic attend any of those sessions? I do not know. Was he aware of the symposium? Perhaps, though there would come a time when everyone was aware of it. In any event, he was on the faculty of Teachers College, Columbia University, at the time.

Let us take a look at his (prophetic) essay:
We are still not thinking of the computer as anything but a myriad of clerks or assistants in one convenient console. Most of the results I have just described could have been accomplished with the available means of half a century ago. We do not yet understand the true nature of the computer. And we have not yet begun to think in ways appropriate to the nature of this machine.
I fear that this is more or less true in much of the (so-called) digital humanities to this day, over a half-century later. We cannot hope to come to grips with GPT-3 (and its kin) without learning “to think in ways appropriate to the nature of this machine.”

Milic goes on to point out:
The true nature of the machine is unknown to us, but it is neither a human brain nor a mechanical clerk. The computer has a logic of its own, one which the scholar must master if he is to benefit from his relations with it.
I would underline the last clause of that first sentence, “neither a human brain nor a mechanical clerk.” If neither of those, then what is it? I rather like the phrase “artificial being” (jinzo ningen in Japanese), which Osamu Tezuka used to describe Michi, the central character of his 1949 manga, Metropolis. But it is, I suppose, too general for our purpose.

Milic continues:
Its intelligence and ours must be made complementary, not antagonistic or subservient to each other. For example, understanding in the arts and letters is based on the perception, identification and recognition of patterns. But the patterns must be small and traditional enough to be perceived by the human apparatus.
Such patterns are central to so-called “close reading.” (As you may know, I am deeply suspicious of this metaphor of distance. What kind of distance is this, and with respect to what? Not physical, nose to page as it were. Metaphysical? But I digress.)

He concludes the paragraph:
Perhaps for that reason Aristotle questioned whether a large object could be beautiful. In literature, we sense this when we read a long novel. Unlike the human perceiver, however, the computer can be made to detect the longest and best-concealed pattern, no matter how random an appearance it presents to the human eye. Thus, we must learn to ask it larger questions than we can answer and to detect what escapes our unaided senses. This may involve not only proposing old questions in new ways but even thinking up new questions. The computer can be made an extension of man only if it opens avenues we have not suspected the existence of.
There you have it, a remit for much of the best and most imaginative work being done in computational criticism.

Let me bring this post to a close by quoting Milic’s next paragraph in full – it is by no means the end of his essay. I leave commentary and annotation as an exercise for the reader:
Thinking in a new way is not an easy accomplishment. It means re-orientation of all the coordinates of our existence. Necessarily, therefore, our first motions in that direction are likely to be tentative and fumbling. The most interesting direction, to my mind, for this new work to take is in the imitation of the process of literary composition. For a long time, we have asked ourselves how the mind worked when it tried to articulate its experience with linguistic symbols. Many kinds of analysis (grammatical, statistical, psychological) have provided us with only a fractional insight into this mystery. The notable failure of machine translation has been paradoxically a very instructive development. Computers were instructed to behave like human translators, and they could not. What was learned about the complexity of linguistic structure, however, far exceeds what might have been gained from translating Chinese or Russian political speeches or scientific papers. That use of the computer was constructive, if not creative. It moved in the direction of synthesis rather than analysis.
