Lisa Siraganian, “Against Theory, now with bots! On the Persistent Fallacy of Intentionless Speech,” Nonsite, August 2, 2021.
Opening paragraph:
If Siri responded to your questions with QAnon conspiracy theories, would you want her answers to be legally protected? Would your verdict change if we labeled Siri’s answers either “computer generated” or “meaningful language?” Or as legal scholars Ronald Collins and David Skover ask in their recent monograph, Robotica: Speech Rights and Artificial Intelligence (2018), should the “constitutional conception of speech” be extended “to the semi-autonomous creation and delivery of robotic speech?”1 By “robotic speech,” they don’t mean some imagined language dreamed up in science fiction but the more ordinary phenomenon of “algorithmic output of computers”: the results of Google searches, instructions by GPS navigational devices, tweets by corporate bots, or responses by Amazon’s Alexa to a query about tomorrow’s weather. And by “the constitutional conception of speech” they are invoking the First Amendment’s fundamental prohibition declaring that “Congress shall make no law … abridging the freedom of speech, or of the press.”2 Collins and Skover deliver their verdict: the U.S. Constitution should recognize and protect so-called “robotic expression,” the computer-generated language of your iPhone or like devices (40).
A bit later:
But more unexpected is Collins and Skover’s approach. Rather than justifying their defense of “robotic expression” (free speech rights for algorithms) primarily with legal precedent or theory—both of which other legal scholars have done—their basic premise is literary theoretical and interdisciplinary.7 Specifically, to argue for the First Amendment rights of computer content, Collins and Skover adapt Reader Response literary criticism from the 1970s, as well as related debates about literary meaning from the 1980s, to develop an idea they call “intentionless free speech” (40). As they explain it, the current legal debate over robotic free speech “significantly mirrors yesterday’s debate among schools of literary theory over textual interpretation and the reader’s experience,” yet “the importance of the lessons from reader-response criticism and reception theory” has gone unrecognized in legal scholarship (41–42). They summarize how the decades-past criticism of Stanley Fish, Norman Holland, Wolfgang Iser, and Hans Robert Jauss reveals that the “real existence” of a text is imparted by the reader, not by the intention of the author (38). For them that means that your iPhone’s or Amazon Echo’s lack of an intention should not bar a court from finding its “message” to be meaningful, because the iPhone’s owner makes those messages mean. “Meaning resides in the receiver of information,” they write; thus the receiver’s use of that information is the ultimate determinant of an expression’s value (45). Their “theory of ‘intentionless free speech’ is solidly grounded in those lessons” of reader-response criticism (42). As Collins and Skover write, “the receiver’s experience of speech is perceived as an essential dimension of the constitutional significance of speech, whether human or not, whether intended or intentionless” (45).
Siraganian doesn't buy it, nor do I (do I?). What interests me at the moment is simply that the argument is being made.
Still later:
... Robotica is really part of a broader trend, beginning much earlier in the twentieth century, of extending free speech rights as well as many other rights and privileges to entities like corporations that previously were considered out of bounds for such protections.11 Most notoriously, the U.S. Supreme Court decision Citizens United v. Federal Election Commission (2010) equated money to free corporate speech by relying, in part, on the argument that corporations have the legal status of persons and money is their way of speaking. As many commentators have noted, the stakes of these developments are significant and disturbing.
Do AI engines possess intentionality? No. But how do we know?
There's much more at the link.