That is the title of my latest working paper. It summarizes and synthesizes much of the work I have done with ChatGPT to date and contains the abstracts and contents of all my working papers on ChatGPT. It also includes the abstracts and contents of a number of papers establishing the intellectual background that informs that research. There is also a section that takes the form of an interaction I had with Claude 3.5 on methodological and theoretical issues. Finally, to produce the abstract I gave the body of the report to Claude 3.5 and asked it to produce two summaries, which I then edited into an abstract.
As always, URLs, abstract, TOC, and introduction are below.
- Academia.edu: https://www.academia.edu/127386640/ChatGPT_Exploring_the_Digital_Wilderness_Findings_and_Prospects
- SSRN: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5119597
- ResearchGate: https://www.researchgate.net/publication/388563205_ChatGPT_Exploring_the_Digital_Wilderness_Findings_and_Prospects
Abstract: The internal structure and capabilities of Large Language Models (LLMs) are examined through systematic investigation of ChatGPT's behavior, with particular focus on its handling of conceptual ontologies, analogical reasoning, and content-addressable memory. Through detailed analysis of ChatGPT's responses to carefully constructed prompts involving story transformation, analogical mapping, and cued recall, the paper demonstrates that LLMs appear to encode rich conceptual ontologies that govern text generation. ChatGPT can maintain ontological consistency when transforming narratives between different domains while preserving abstract story structure, successfully perform multi-step analogical reasoning, and exhibit behavior consistent with associative memory mechanisms similar to holographic storage.
Drawing on theories of reflective abstraction and conceptual development, the paper argues that LLMs inadvertently capture what we might term the “metaphysical structure of our universe” – the organized system of concepts through which humans understand and reason about the world. LLMs like ChatGPT implement a form of relationality – the capacity to represent and manipulate complex networks of semantic relationships – while lacking genuine referential meaning grounded in sensorimotor experience. This architecture enables sophisticated pattern matching and analogical transfer but also explains certain systematic limitations, particularly around truth and confabulation.
The paper concludes by suggesting that making explicit the implicit ontological structure encoded in LLMs’ weights could provide valuable insights into both artificial and human intelligence, while advancing the integration of neural and symbolic approaches to AI. This analysis contributes to ongoing debates about the nature of meaning and understanding in artificial neural systems while offering a novel theoretical framework for conceptualizing how LLMs encode and manipulate knowledge.
Contents:
Introduction: Into the Digital Wilderness 5
Free-floating Attention, Systematic Exploration, and the Anthropomorphic Stance 8
ChatGPT: My Course of Investigation 12
Meaning, Truth and Confabulation, Latent Space 28
Prospects: Explicating the Ontology of Human Thought 42
A Dialogue with Claude 3.5 on Method and Conceptual Underpinnings 45
A Brief Narrative of My ChatGPT Work Based on My Working Papers 56
Working Papers about ChatGPT 62
Background Papers 74
Introduction: Into the Digital Wilderness
The world I entered when I started playing with ChatGPT is a wilderness, strange and uncharted – uncharted by me, uncharted by anyone. By that I simply mean that it was something new, radically new. No one had been there before. Sure, a handful of people within the industry had been messing around in there, even a rather large handful considering how much work it took to make ChatGPT ready for the world at large. But its behavioral capabilities were, for the most part, unknown. In that sense it was a wilderness.
But it was, and remains, a wilderness in another sense: the large language model (LLM) that underlies ChatGPT is a black box. We send a string of words into ChatGPT and it sends a string of words back out, but the process by which the model derives the output from the input remains deeply obscure. That is wilderness in a different sense. Wilderness in the first sense is about our experience of ChatGPT’s behavior. Wilderness in this second sense is about the mechanisms that drive that behavior. It is a digital wilderness. This document reports on how I’ve structured my interaction with ChatGPT to give me clues about the mechanisms driving its behavior.
My methods are more “qualitative” or “naturalistic” than those standard in the literature, where many investigations employ standard batteries of benchmark tasks. While those are essential, there is much they don’t tell you. I have done many things with ChatGPT – asked it to interpret texts, define abstract concepts, and play games of 20 questions, among other things – but perhaps my most characteristic task, and the one I have spent more time on than any other, is simple: Tell me a story. And ChatGPT did so, time and again. Consequently my methods are in some ways more like literary criticism, or, even better, like Lévi-Strauss’ analysis of myths, than conventional cognitive science. Thus you will find many examples of ChatGPT’s dialog in my reports. You have to examine that dialog to see what ChatGPT is doing, what it is capable of doing.
Finally, I realize that the pace of development in this arena is such that ChatGPT is now old. The versions I used to conduct these investigations are no longer available on the web. However, as far as I can tell, none of the results I report depend on features idiosyncratic to those versions.
The rest of this introduction consists of short statements about what the various sections of this report contain.