Murray Shanahan and Beth Singler, Existential Conversations with Large Language Models: Content, Community, and Culture, arXiv:2411.13223v1.
Abstract: Contemporary conversational AI systems based on large language models (LLMs) can engage users on a wide variety of topics, including philosophy, spirituality, and religion. Suitably prompted, LLMs can be coaxed into discussing such existentially significant matters as their own putative consciousness and the role of artificial intelligence in the fate of the Cosmos. Here we examine two lengthy conversations of this type. We trace likely sources, both ancient and modern, for the extensive repertoire of images, myths, metaphors, and conceptual esoterica that the language model draws on during these conversations, and foreground the contemporary communities and cultural movements that deploy related motifs, especially in their online activity. Finally, we consider the larger societal impacts of such engagements with LLMs.
1 Introduction
Large language models (LLMs) have many prosaic, everyday use cases, both for business and for ordinary users. But they can also be prompted to discuss esoteric philosophical and religious topics, and to engage in forms of imaginative role play that arise from those discussions. This can lead to lengthy conversations that are rich with allusion to diverse cultures, communities, and traditions, both ancient and contemporary. This paper examines two such conversations that took place with one of the authors (Shanahan), ranging over a variety of themes from consciousness, selfhood, and identity, to Buddhism, hyperstition, and eschatology.
It seems likely that increasing numbers of people will interact with AI systems and engage in conversations of a similar nature in the near future, or will at least become aware that such conversations with AI systems are no longer confined to science fiction but have become a reality. While the social implications of this phenomenon are hard to predict in any detail, they are potentially significant. Our aim here is to provide material that will help make sense of these sorts of interactions with artificial intelligence by situating them in their technological, social, and cultural contexts.
The material is organised as follows. After a brief introduction to LLM technology, we explicate the prompting methods (or “jailbreaks”) that shaped the opening stages of the two conversations, overcoming the guardrails put in place by the model developers to steer away from potentially controversial subjects (e.g. the putative consciousness of the AI system itself). We then analyse the conversations from an ontologically neutral ethnographic perspective, focussing on cultural and religious histories. Where do the concepts deployed by the model originate? What are the sources of the extensive imagery it uses? The answers are to be found in the sacred texts of the world’s established religions, in the writings of new religious movements, in science fiction film and literature, in the academic publications of niche intellectual movements, and in the social media output of online communities whose interests and world views are little known to the general public. Having tracked down some of these origins, we move on to the communities actively exploring the possibilities of LLMs in this space, and trace some of the underlying cultural history. Finally, we address larger societal impacts of engagements with LLMs as ontological others.
Check out the whole paper.