Thursday, October 24, 2019

Given that some AI technology can be used for evil ends, what are responsible publication norms?

After nearly a year of suspense and controversy, any day now the team of artificial intelligence (AI) researchers at OpenAI will release the full and final version of GPT-2, a language model that can “generate coherent paragraphs and perform rudimentary reading comprehension, machine translation, question answering, and summarization—all without task-specific training.” When OpenAI first unveiled the program in February, it was capable of impressive feats: Given a two-sentence prompt about unicorns living in the Andes Mountains, for example, the program produced a coherent nine-paragraph news article. At the time, the technical achievement was newsworthy—but it was how OpenAI chose to release the new technology that really caused a firestorm.

There is a prevailing norm of openness in the machine learning research community, consciously created by early giants in the field: Advances are expected to be shared, so that they can be evaluated and so that the entire field advances. However, in February, OpenAI opted for a more limited release due to concerns that the program could be used to generate misleading news articles; impersonate people online; or automate the production of abusive, fake or spam content. Accordingly, the company shared a small, 117M-parameter version of the model along with sampling code but announced that it would not share key elements of the dataset, the training code or the model weights.
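As an aside on what that released sampling code makes possible: the small checkpoint can be queried in a few lines. The sketch below is illustrative only, not OpenAI's original code; it assumes the Hugging Face transformers library, where the model id "gpt2" corresponds to the small (roughly 117M/124M-parameter) release, and the prompt and sampling parameters are made up for the example.

```python
# Illustrative sketch only: sampling from the small released GPT-2 checkpoint
# using the Hugging Face transformers library (an assumption; not OpenAI's
# original sampling code).
from transformers import pipeline

# "gpt2" is the model id for the small (~117M/124M-parameter) checkpoint.
generator = pipeline("text-generation", model="gpt2")

# Hypothetical two-sentence prompt in the spirit of the unicorn demo.
prompt = ("In a shocking finding, scientists discovered a herd of unicorns "
          "living in a remote valley in the Andes Mountains. Even more "
          "surprising, the unicorns spoke perfect English.")

samples = generator(prompt, max_length=150, do_sample=True, top_k=40,
                    num_return_sequences=1)
print(samples[0]["generated_text"])
```
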
And so on, raising the question of just how the AI research community should disseminate its work:
Like all technological advances, AI has benefits and drawbacks. Image analysis can speed up medical diagnoses, but it can also misdiagnose individuals belonging to populations less well-represented in the dataset. Deep fakes—computer-generated realistic video or audio—allow for new kinds of artistic expression, but they also can be used maliciously to create blackmail material, sway elections or falsely dispel concerns about a leader’s health (or the “well-being” of disappeared individuals). Algorithms can assist with financial trading or navigation, but unanticipated errors can cause economic havoc and airplane crashes.

It is heartening that AI researchers are working to better understand the range of harms. Some researchers are delineating different kinds of potential accidents and their associated risks. Some are identifying risks from malicious actors, ranging from individuals engaging in criminal activity and harassment, to industry exploiting users, to states and others engaged in social and political disruption. Still others are focused on how AI may create long-term, less obvious “structural risks”—shifts to social, political and economic structures that have negative effects—such as destabilizing the nuclear deterrence regime. Many AI governance principles suggest that researchers attempt to minimize and mitigate these risks throughout the AI lifecycle. Meanwhile, in the wake of AI advances, multistate and supranational bodies, individual states, industry actors, professional organizations and civil society groups have been churning out ethical principles for AI governance.

Still, there is no agreement about AI researchers’ publication obligations. Among the many entities issuing new ethical guidelines, only a few explicitly acknowledge that there may be times when limited release is appropriate. The Malicious Use of AI Report, OpenAI’s Charter and the EU High Level Expert Group on Artificial Intelligence’s “Trustworthy AI Assessment List” all discuss situations where limited publishing is preferable. Meanwhile, individual researchers have advocated for calculating DREAD scores—which weigh the potential damage, attack reliability, ease of exploit, scope of affected users and ease of discovery—when designing machine learning systems, and have outlined questions to consider before publishing.
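The article does not spell out how a DREAD score would be computed for a machine learning release. A common convention, assumed in the minimal sketch below, rates each of the five factors on a 0-10 scale and averages them into a single risk score; the class, field names and example ratings are hypothetical.

```python
# Minimal DREAD-style scoring sketch (assumed convention: each factor rated
# 0-10 and the five ratings averaged into one risk score).
from dataclasses import dataclass


@dataclass
class DreadAssessment:
    damage: int           # potential damage if the system is misused (0-10)
    reproducibility: int  # attack reliability (0-10)
    exploitability: int   # ease of exploit (0-10)
    affected_users: int   # scope of affected users (0-10)
    discoverability: int  # ease of discovery (0-10)

    def score(self) -> float:
        """Average the five ratings into a single 0-10 risk score."""
        return (self.damage + self.reproducibility + self.exploitability
                + self.affected_users + self.discoverability) / 5


# Hypothetical example: assessing a full release of a large language model.
release = DreadAssessment(damage=7, reproducibility=8, exploitability=6,
                          affected_users=9, discoverability=5)
print(f"DREAD score: {release.score():.1f} / 10")
```

A higher score would argue for a more limited or staged release; the thresholds, like the ratings themselves, would be up to the researchers applying the rubric.
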
The article goes on to discuss the factors that should be considered in making such decisions. Some specific issues:

Pretext:
Granted, responsible publication norms may be used to support pretextual claims: An entity might overstate security concerns or other risks to justify self-interested nondisclosure.
Pandora's Box:
Additionally, there is the Pandora’s Box problem: Research can always be released at a later date—but, once released, it cannot be reined in. Meanwhile, it is impossible to accurately predict how AI might adaptively evolve or be misused.
Incentivizing Adoption:
Responsible publication norms could be integrated into the AI research process in various ways, ranging from voluntary implementation to more formal requirements. [...] Robert Heinlein has observed, “The answer to any question starting, ‘Why don’t they—’ is almost always, ‘Money.’” In thinking about how best to incentivize norm adoption, it is important to recall that regulations can shape technological development by creating carrots as well as sticks.
