Wednesday, November 22, 2023

Some Quick Observations on the OpenAI Upheaval

Given the importance of the technology involved, the reverberations of this event will surely spread far and wide. One thing that is clear is that we’re seeing an ideological rift, though just how to characterize it is not obvious. The most obvious possibility:

AI Optimists vs. AI Doomers: That opposition may be the most widespread at the moment. Sam Altman and his allies are optimists, while those who pushed him out are, if not full-on Doomers, more pessimistic about the immediate prospects for AI. Hence they want to slow things down, while Altman wants to go (more or less, even if perhaps he doesn’t say so publicly) full speed ahead. This NYTimes podcast, featuring Cade Metz, has some interesting observations about Altman’s ambivalence.

This opposition is close to that between e/acc (effective accelerationist) and decel (decelerationist). But the nuance and emphasis are different.

Let me offer two other contrasts:

Management vs. Development: This is a classic conflict within high tech companies, certainly in the software business. Management wants to get product out the door as fast as possible so as to bring in money while development wants to work all the bugs out first. I saw this first-hand when I worked as a tech writer for MapInfo back in the 1980s. Management forced the developers to ship before they felt ready. The product was buggy, the customers were not happy, and the company fell into a funk.

In the case of OpenAI this conflict has become heavily inflected with the Doomers/Optimists conflict. That conflict is not inherent in software development, or in computer technology generally, but it now seems to be inherent in AI. Even if you don’t believe that the current technology is anywhere near the possibility of going rogue, there are serious downsides (e.g., offensive and dangerous content in the LLMs, misuse by bad actors, job loss and economic instability). There is certainly a case to be made for slowing down on those grounds.

Business vs. Research: This is similar to management vs. development, but is nonetheless different. In this context, by research I mean basic research, which is quite different from research oriented toward product development. Fundamental research has no specific product goals in mind and is radically open-ended. Typically it isn’t done in a business environment at all; rather, it is done in universities and dedicated research centers.

Some very large businesses have engaged in fundamental research. Bell Labs is the classic example. But we also have the Xerox Palo Alto Research Center and IBM’s various research centers; Google, Meta, and Microsoft engage in this kind of research as well. These are very large companies and, as such, can afford to undertake some basic research.

OpenAI is not that kind of company. To be sure, its capitalization is unusually large in relation to its employee count, but as far as I know it does little or no basic research. My guess is that, for this reason, this conflict played almost no role in the upheaval.

And yet we most certainly need basic research to really move AI forward. I don’t expect any fundamental breakthroughs out of OpenAI.
