Tuesday, June 17, 2025

ChatGPT tries to create a semantic network model for Shakespeare’s Sonnet 129

New working paper. Title above; links, abstract, table of contents, and introduction below.

Academia.edu: https://www.academia.edu/129993358/ChatGPT_tries_to_create_a_semantic_network_model_for_Shakespeares_Sonnet_129
SSRN: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5299312
ResearchGate: https://www.researchgate.net/publication/392758464_ChatGPT_tries_to_create_a_semantic_network_model_for_Shakespeare's_Sonnet_129

Abstract: This document explores the capacity of large language models, specifically ChatGPT, to construct semantic network models of complex literary texts, using Shakespeare’s Sonnet 129 as a case study. Drawing on the author’s prior work in cognitive modeling, the analysis reveals that ChatGPT, while capable of producing linguistically coherent commentary, fails to generate a structurally plausible semantic network for the sonnet. The failure is traced not to a lack of exposure to relevant literature, but to the model’s lack of embodied, interactive learning. The process of constructing cognitive network diagrams is shown to be iterative, visual-verbal, and skill-based—comparable to learning a physical craft like playing an instrument or woodworking. It requires extended practice under expert feedback, enabling a form of reasoning that is neither algorithmic nor easily reducible to textual description. The essay argues that this hybrid modeling skill represents a “deep” human capability that is nevertheless teachable and routine. It concludes with reflections on the nature of such skills and their implications for AI, pedagogy, and literary interpretation.

Asking ChatGPT to create a semantic model for a Shakespeare sonnet
Creating a Plausible Model for Sonnet 129
Final Couplet
Hunter simile
Modeling the semantic underpinning of “spirit”
Semantic Network Researchers
Cognitive Networks and Literary Semantics
How I did the model
Implementing cognitive nets in neural nets

Asking ChatGPT to create a semantic model for a Shakespeare sonnet

Out of curiosity I decided to see whether ChatGPT could create a plausible semantic network model for Shakespeare’s famous Sonnet 129, “Th’ expense of spirit,” which I had uploaded to it. While such models are ultimately expressed in computer code, as far as I can tell from having read extensively in the literature, everyone who worked with such models expressed them in the form of diagrams depicting some kind of directed graph. ChatGPT is certainly capable of drawing diagrams, and even of creating complex photo-realistic imagery. That capability, along with its ability to “read” text in some fashion, provides the rock-bottom basic tools for creating such a model.
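To make that concrete, here is a minimal sketch, in Python, of the kind of structure such a diagram encodes: concepts as nodes, relations as labeled directed edges. The node names and relation labels below are hypothetical placeholders of my own, not the notation from the published model.

    # A semantic network as a labeled directed graph, stored as
    # (head, relation, tail) triples. Node names and relation labels
    # here are hypothetical placeholders, not the paper's notation.
    edges = [
        ("lust", "AGT", "taker"),          # hypothetical: agent of the episode
        ("lust", "RSLT", "shame"),         # hypothetical: result of the episode
        ("pursuit", "SEQ", "possession"),  # SEQ: one episode follows another
    ]

    def neighbors(edges, node):
        """Return the (relation, tail) pairs for edges leaving `node`."""
        return [(rel, tail) for head, rel, tail in edges if head == node]

    for rel, tail in neighbors(edges, "lust"):
        print(f"lust --{rel}--> {tail}")

Working modelers drew these structures rather than coding them, of course; the point is only that a diagram of this kind has a precise underlying structure, which is what makes the modeling task demanding.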

Here, for reference purposes, is the text:

Th' expense of spirit in a waste of shame
Is lust in action; and till action, lust
Is perjured, murd'rous, bloody, full of blame,
Savage, extreme, rude, cruel, not to trust,
Enjoyed no sooner but despisèd straight,
Past reason hunted; and, no sooner had
Past reason hated as a swallowed bait
On purpose laid to make the taker mad;
Mad in pursuit and in possession so,
Had, having, and in quest to have, extreme;
A bliss in proof and proved, a very woe;
Before, a joy proposed; behind, a dream.
    All this the world well knows; yet none knows well
    To shun the heaven that leads men to this hell.

I knew the sonnet well, had published my own model for it early in my career, and had made it the central example of my 1978 doctoral dissertation, “Cognitive Science and Literary Theory.” Knowing how difficult it can be to create such models – they’re not rocket science, as the expression goes, but they’re not obvious either – I didn’t expect ChatGPT to do a plausible job, but I was curious to see what it would do.

I have annotated that interaction and appended it to the paper, starting on page 4. In the rest of this introduction I want to offer some informal remarks on ChatGPT’s failure.

ChatGPT’s failure got me to thinking. Just why couldn’t it do the task? After all, it seems to have “read” a lot of the relevant literature. When I asked it to name some important researchers in the field, it produced a list of ten, all of them familiar to me.

The process of starting with a bunch of text, like a Shakespeare sonnet, and producing a semantic network model for that text is not an algorithmic process. I don’t know what kind of process it is. I do know that it took me the better part of a semester to learn how to create semantic network diagrams that modeled small chunks of English. I did that while being tutored by David Hays for one session a week. I’d produce a diagram or three, show them to Hays, and he’d explain why they didn’t really work. Then I would set to work on more adequate diagrams. After three months I began to get the hang of it.

The point is that I didn’t learn how to do it simply by reading papers about semantic networks. I had to go through an interactive process of creating ever more sophisticated models under the tutelage of an expert. What did I pick up through that interactive process that I couldn’t pick up simply by reading finished papers? Whatever it was, I will further observe that it wasn’t until I had acquired it that I actually understood the research in the field.

ChatGPT, or rather the underlying large language model, didn’t do anything like that. It simply read finished work, lots of it. Given that it couldn’t produce even a superficially plausible model, it didn’t really understand what it had “read” during the pre-training process. It picked up enough that it could make plausible comments about models – our dialog is full of those – but it could not translate those comments into plausible diagrams.

As for the process involved in creating the diagrams for a semantic model, I certainly did it well, after that initial period of learning. But it was not a simple process. It was iterative. I would make some diagrams, examine them by tracing paths through them, and then revise them. I probably covered 30, 40, or more sheets of paper with diagrams before I settled on the ones I used for the paper I published. The following diagram depicts a sequence (SEQ) of episodes and is one of 11 diagrams I used in the paper:

[Diagram not reproduced here: a sequence (SEQ) of episodes, one of the 11 diagrams from the paper.]
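To give a flavor of what tracing paths means, here is a toy sketch, in the same triple notation as above, that enumerates the labeled paths leading out of a node. The SEQ label comes from the diagram just mentioned; the other nodes and labels are, again, hypothetical placeholders.

    # Enumerate the labeled paths leading out of `start`, cutting off cycles.
    # The graph fragment is a hypothetical stand-in for an actual model.
    def trace_paths(edges, start, path=None):
        path = path or [start]
        branches = [e for e in edges if e[0] == start and e[2] not in path]
        if not branches:
            yield path
            return
        for head, relation, tail in branches:
            yield from trace_paths(edges, tail, path + [relation, tail])

    edges = [
        ("pursuit", "SEQ", "possession"),    # one episode follows another
        ("possession", "SEQ", "aftermath"),  # hypothetical episode label
        ("pursuit", "AGT", "taker"),         # hypothetical agent relation
    ]

    for p in trace_paths(edges, "pursuit"):
        print(" -> ".join(p))
    # pursuit -> SEQ -> possession -> SEQ -> aftermath
    # pursuit -> AGT -> taker

Checking a diagram by hand amounts to doing this kind of traversal in one’s head, and revising whenever a path reads as nonsense.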

The process involved in creating those diagrams was not formal reasoning. But some kind of reasoning was involved, and one that humans can do. It’s not rocket science. But it’s not easy either.
