I am in the process of revising my post, Golumbia Fails to Understand Chomsky, Computation, and Computational Linguistics, and reposting it as a downloadable PDF. The following is some of the new material for the revision.
Who, you may ask, is David Ferrucci? He’s the man who led IBM’s Watson team to victory over human Jeopardy champions. He subsequently left IBM for a hedge fund, Bridgewater Associates [1]. As such, he works at the heart of corporate neoliberalism. But is he a computationalist in Golumbia’s sense? Does he think the mind is fundamentally a computer?
First, we must realize that Watson was and is a straight-up engineering effort [2]. There is no intention to simulate a human mind. The objective was to achieve a specific practical result by whatever means worked.
With that in mind, let’s look at a snippet from one of the many interviews Ferrucci did after the Jeopardy win. Time Magazine asked him [3]:
Artificial-intelligence pioneer Marvin Minsky once said that consciousness is like a simple memory chip in the brain. Is a conscious computer possible?

Over beers, I could talk for hours and hours about that. But it's a question best left to a philosophical treatment.
I think that reply is worth thinking about. Ferrucci makes a distinction between what he does on the job, which is to craft state-of-the-art AI systems, and what’s best left for philosophical reflection. Of course, for philosophers, what Ferrucci calls “philosophical treatment” IS their job.
It’s also interesting that Ferrucci wasn’t willing to offer philosophical speculation in this interview. Why not? I’m guessing it was a matter of (informal) professional ethics: he was being interviewed about work for which he was professionally responsible, and that set the boundaries on what he was willing to say.
Whatever.
Let us now imagine that we’re in a bar with the entire Watson team, drinking beer and talking philosophy. That’s twenty or so people, likely broken into three or more groups. On general principle, I’m guessing that we’d hear a wide variety of opinions on whether or not the brain is a computer, whether it is digital or analog, whether or not artificial computers can be conscious, and so forth. Moreover, I would imagine that some on the team have given a good deal of thought to the philosophical issues and have explored the professional literature, while others have not. It is even possible that one’s commitment to a philosophical position is somewhat independent of how much one has looked at the relevant literature. For some, the lure of Star Trek is likely to be more imaginatively compelling than arcane arguments in the philosophy of mind. I’m guessing that the conversations would be interesting free-for-alls.
Recall that the Watson project was (and is) a straight-up engineering project. Working on Watson required substantial agreement on a wide range of technical matters, but it wouldn’t have required agreement on the philosophical issues. I can imagine that a person’s philosophical position on some of these issues might bias their technical approach, but I can’t guess as to how that works out in practice.
There’s more. IBM is a profit-making business. Watson was developed as a bet on technology that could be used in the business (and is now being rolled out as such). Consider the executives who made the decisions about Watson technology. What do they think about the philosophical issues? How did that affect their business strategy? I don’t know. But I observe that they wouldn’t have authorized the research if they didn’t think it could be the foundation of salable service offerings. And then we have the people who are actually developing and marketing Watson services and technology. What do they think about the philosophical issues? Medicine is an important application area for Watson. What do the end-user physicians think about the philosophical issues implied by the diagnostic technology they’re using?
I think an ethnographic investigation of these questions – in the best Science and Technology Studies manner – would be fascinating. But I don’t see that Golumbia’s book would be of much value in that work. It is too removed from the situations and institutions through which philosophical and technical ideas are transformed into socio-technical systems.
I am here reminded of some draft material that Alan Liu recently put online. Liu is concerned with conceptualizing the situation of the digital humanities in the world at large [4]:
I borrow in this book another portfolio of thought that to my knowledge has not yet been introduced directly to infrastructure studies. It is also a portfolio largely unknown in the digital humanities and, for that matter, in the humanities as a whole even though it is broadly compatible with humanities cultural criticism. The portfolio consists of the “neoinstitutionalist” approach to organizations in sociology and, highly consonant, also “social constructionist” (especially “adaptive structuration”) approaches to organizational infrastructure in sociology and information science. Taken together, these approaches explore how organizations are structured as social institutions by so-called “carriers” of beliefs and practices (i.e., culture), among which information-technology infrastructure is increasingly crucial. Importantly, these approaches are a social-science version of what I have called lightly-antifoundationalist. Scholars in these areas “see through” the supposed rationality of organizations and their supporting infrastructures to the fact that they are indeed social institutions with all the irrationality that implies. But they are less interested in exposing the ungrounded nature of organizational institutions and infrastructures (as if it were possible to avoid or get outside them) than in illuminating, and pragmatically guiding, the agencies and factors involved in their making and remaking.
To this I would add Robert Merton’s classic essay, “On Sociological Theories of the Middle Range” [5, p. 39]:
Throughout we focus on what I have called theories of the middle range: theories that lie between the minor but necessary working hypotheses that evolve in abundance during day-to-day research and the all-inclusive systematic efforts to develop a unified theory that will explain all the observed uniformities of social behavior, social organization and social change.

Middle-range theory is principally used in sociology to guide empirical inquiry. It is intermediate to general theories of social systems which are too remote from particular classes of social behavior, organization and change to account for what is observed and to those detailed orderly descriptions of particulars that are not generalized at all.
In short, what is missing from Golumbia’s account is a feel for the phenomena, which are empirical in nature. But then he’s not engaged in empirical inquiry. He’s engaged in cultural criticism and, in this instance, it takes on the character of transcendental interpretation, interpretation by no one, for no one. It is just there, floating in the ether.
References
[1] Steve Lohr, "David Ferrucci: Life After Watson," The New York Times, May 6, 2013: http://bits.blogs.nytimes.com/2013/05/06/david-ferrucci-life-after-watson/
[2] David Ferrucci, et al., "Building Watson: An Overview of the DeepQA Project," AI Magazine, Fall 2010, 59-79: http://www.aaai.org/ojs/index.php/aimagazine/article/view/2303
[3] David Ferrucci, "10 Questions for Watson’s Human," Time, Monday, Mar. 7, 2011: http://content.time.com/time/magazine/article/0,9171,2055194,00.html
[4] Alan Liu, “Drafts for Against the Cultural Singularity (book in progress).” 2 May 2016. http://liu.english.ucsb.edu/drafts-for-against-the-cultural-singularity
[5] Robert Merton, “On Sociological Theories of the Middle Range,” in Social Theory and Social Structure, Enlarged Edition. New York: The Free Press, 1968.