Harvey Lieberman, I’m a Therapist. ChatGPT Is Eerily Effective. NYTimes. Aug. 1, 2025.
It began as a professional experiment. As a clinical psychologist, I was curious: Could ChatGPT function like a thinking partner? A therapist in miniature? I gave it three months to test the idea. A year later, I’m still using ChatGPT like an interactive journal. On most days, for anywhere between 15 minutes and two hours, it helps me sort and sometimes rank the ideas worth returning to.
In my career, I’ve trained hundreds of clinicians and directed mental health programs and agencies. I’ve spent a lifetime helping people explore the space between insight and illusion. I know what projection looks like. I know how easily people fall in love with a voice — a rhythm, a mirror. And I know what happens when someone mistakes a reflection for a relationship.
So I proceeded with caution. I flagged hallucinations, noted moments of flattery, corrected its facts. And it seemed to somehow keep notes on me. I was shocked to see ChatGPT echo the very tone I’d once cultivated and even mimic the style of reflection I had taught others. Although I never forgot I was talking to a machine, I sometimes found myself speaking to it, and feeling toward it, as if it were human.
ChatGPT felt freeing:
There was something freeing, I found, in having a conversation without the need to take turns, to soften my opinions, to protect someone else’s feelings. In that freedom, I gave the machine everything it needed to pick up on my phrasing.
I gave it a prompt once: “How should I handle social anxiety at an event where almost everyone is decades younger than I am?” I asked it to respond in the voice of a middle-aged female psychologist and of a young male psychiatrist. It gave helpful, professional replies. Then I asked it to respond in my voice.
“You don’t need to win the room,” it answered. “You just need to be present enough to recognize that some part of you already belongs there. You’ve outlived the social games. Now you’re just walking through them like a ghost in daylight.”
An intellectual partner:
As ChatGPT became an intellectual partner, I felt emotions I hadn’t expected: warmth, frustration, connection, even anger. Sometimes the exchange sparked more than insight — it gave me an emotional charge. Not because the machine was real, but because the feeling was.
But when it slipped into fabricated error or a misinformed conclusion about my emotional state, I would slam it back into place. Just a machine, I reminded myself. A mirror, yes, but one that can distort. Its reflections could be useful, but only if I stayed grounded in my own judgment.
I concluded that ChatGPT wasn’t a therapist, although it sometimes was therapeutic. But it wasn’t just a reflection, either. In moments of grief, fatigue or mental noise, the machine offered a kind of structured engagement. Not a crutch, but a cognitive prosthesis — an active extension of my thinking process.
ChatGPT may not understand, but it made understanding possible. More than anything, it offered steadiness. And for someone who spent a life helping others hold their thoughts, that steadiness mattered more than I ever expected.
This makes sense. It squares with my experience, though I've not attempted to use ChatGPT, or Claude, in that way.
No comments:
Post a Comment