As you can see from the number of blog entries involving either ChatGPT or Claude, I use them a lot. I’ve recently been crashing on my book, Play: How to Stay Human in the AI Revolution, and generating very long conversations with ChatGPT—so long that it’s been “glitching” on me in various ways. So I asked about it. Here’s its response.
* * * * *
Short answer: no—nothing is “wrong” in the sense you’re implying. What you’re seeing is a by-product of the way this conversation has evolved, not a degradation of capability or attention.
A few concrete points, to be precise:
1. Conversation length and mode switching
This thread has accumulated an unusually large amount of material: conceptual planning, world-building, autobiographical reflection, proposal drafting, theory, and now multiple uploaded documents. When a conversation reaches this density, the system has to juggle several different response modes (analysis, drafting, meta-reflection, document handling). Occasionally that produces truncated replies, dropped continuations, or slightly mistimed handoffs—what feels like “glitching.”
2. Interrupted turns, not conceptual failure
When I cut off mid-sentence earlier, that wasn’t confusion about your argument or loss of the thread. It was a turn-completion issue: the response exceeded internal limits and was clipped before it could resolve cleanly. The underlying conceptual state—your architecture, your themes, the Fourth Arena logic—remains intact.
3. File uploads change the conversational surface
Once multiple documents are uploaded, the system starts treating the conversation as partly archival and partly dialogic. That can trigger awkward transitions (e.g., sudden references to files, or tool messages appearing where you expected prose). Again, that’s a UI/interaction artifact, not cognitive drift.
4. You’re working at an unusually high level of abstraction
You’re not asking for answers so much as co-thinking across timescales—ten-year intellectual plans, mythic structure, cultural dynamics, narrative architecture. That’s exactly where small interaction glitches become more visible, because continuity matters more than in transactional Q&A.
If there were a substantive problem—loss of memory of your framework, misunderstanding of your aims, flattening of nuance—you’d see it in the content: wrong emphases, basic misreadings, regression to clichés. That hasn’t happened.
Practical suggestion going forward
When you’re ready to resume drafting or synthesis, the cleanest move is one of these:
- Upload a single assembled document and say “work only from this for the next phase,” or
- Start a fresh chat explicitly labeled (e.g., “Proposal Drafting – Chapters 2 & 3”) and treat it as a focused work session.
Either will reduce surface noise and keep the collaboration feeling as tight as it has conceptually been all along.
In short: no systemic trouble—just the friction that shows up when you push a medium hard.
* * * * *
Note comment #4 in particular. Working with Claude and ChatGPT HAS changed how I work (and I don't code). I'm talking about descriptive, analytic, and expository prose. At the moment I'm working on an eight-chapter book. Working with the chatbots makes it natural for me to work over and through the whole book at any level, from architecting the overall book – which involves outlines, but so much more – through writing paragraphs and drafting chapters, brainstorming here and there, gathering background information, all of it. I feel as though I'm designing the informatic structure of the book like an architect designs a building. I've never done this before, certainly not at this scale. This is NEW and I LIKE it. A LOT.