David Brooks on Audacity, AI, and the American Psyche, Conversations with Tyler, August 20, 2025.
David Brooks returns to the show with a stark diagnosis of American culture. Having evolved from a Democratic socialist to a neoconservative to what he now calls “the rightward edge of the leftward tendency,” Brooks argues that America’s core problems aren’t economic but sociological—rooted in the destruction of our “secure base” of family, community, and moral order that once gave people existential security.
Tyler and David cover why young people are simultaneously the most rejected and most productive generation, smartphones and sex, the persuasiveness of AI vs novels, the loss of audacity, what made William F. Buckley and Milton Friedman great mentors, why academics should embrace the epistemology of the interview, the evolving status of neoconservatism, what Trump gets right, whether only war or mass movements can revive the American psyche, what will end the fertility crisis, the subject of his book, listener questions, and much more.
Here's Brooks on AI:
COWEN: Is it [smartphones] the only kryptonite humans face?
BROOKS: I think AI is also kryptonite in the exact same way, that if you’re a college kid, and you’re not doing any of your papers, you’re just using your GPT to write your papers — that’s kryptonite.
COWEN: Does it ruin you any more than old-style cheating? Which was very common. It was bad for people, but it didn’t ruin generations, right?
BROOKS: Yes. Though, that cheating took a lot of creativity, [laughs] and this cheating is total, it’s a totalistic cheating, where the model is literally doing most of your work. I follow you on AI. I’m 15 yards behind you the whole time. I think it’s generally great, but I think this one thing of robbing people of their own education is truly a serious threat.
COWEN: I’ve heard people in the AI sector from the major companies say they’re very worried that AI can be so persuasive. Do you find it so persuasive? Because I don’t. It will teach me lots of facts, and I will defer to it, but it’s never persuaded me much with arguments, whereas humans have. What’s your view? Is it kryptonite in that way?
BROOKS: See, it depends what you’re writing about. I was with an astrophysicist recently, who worked on a problem with eight of his senior colleagues for months. They put it to AI, and with a little extra prompting, it solved their problems in a couple of hours. I gather from economist friends that it can solve problems that only a few elite economists can solve.
I write about culture, psychology, sociology, politics. It’s pathetic. I cannot use AI, and I use it every day. I make the attempts, and because in the things I care about, which are more humanistic, it’s hoovering up all the crap on the internet about what love is, or do you grow from post-traumatic experiences? It’s just the pabulum that’s out there on random websites.
COWEN: What if you ask it, “What would David Brooks say?”
BROOKS: That’s what I do.
COWEN: Then it rises to the occasion.
BROOKS: Indeed, it’s brilliant in that case.
It’s a fricking moron. What I do is, I assign it voices. What does Jean Piaget say about this? Does he disagree with Erik Erikson about this? If you assign it to a voice, then you can screen out a lot. But I still have found it basically useless for my own research. It’s great as a travel agent. It’s great at a lot of things, but in humanistic inquiry, I find it pretty pathetic.
I do a lot of interviewing, as you do, with AI folks. I was at OpenAI several months ago, and somebody said, “We’re going to create a machine that can think like a human brain.” I call my neuroscientist friends, and they say, “Well, that’ll be a neat trick because we don’t know how human brains think.”
I think AI is a great tool, but I’m unthreatened by it because I don’t think it has understanding, I don’t think it has judgment, I don’t think it has emotion, I don’t think it has motivation. I don’t think it has most of the stuff that the human mind has. It’ll teach us what we’re good at by reminding us what it can’t do.
I like that: Brooks assigning voices to an AI. I'll have to try that. I wonder what would happen if you assigned it a fictional voice, say Spock from Star Trek or Ahab from Moby Dick.
There's more at the link.
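Out of curiosity, here is a minimal sketch of what "assigning voices" might look like in code, using the OpenAI Python client. The model name, personas, and question below are my own placeholders for illustration, not anything from the episode.

# A sketch of persona prompting: the same question is asked twice, once in
# Piaget's voice and once in Erikson's, so the answers can be compared.
# Model name, personas, and question are illustrative assumptions only.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

QUESTION = "Does moral development continue meaningfully after adolescence?"

VOICES = {
    "Jean Piaget": "Answer as the developmental psychologist Jean Piaget, "
                   "drawing on his stage theory of cognitive development.",
    "Erik Erikson": "Answer as Erik Erikson, drawing on his eight-stage "
                    "theory of psychosocial development.",
}

for name, persona in VOICES.items():
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[
            {"role": "system", "content": persona},
            {"role": "user", "content": QUESTION},
        ],
    )
    print(f"--- {name} ---")
    print(response.choices[0].message.content.strip(), "\n")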
Assigning a voice will give only a surface-level expression of the person or character who speaks. Even a fictional voice has the expression of a real human motivating it, and a real human's vocal expression is informed by that person's experiences (subcognitive and somatosensory). An AI voice based on a fictional character will lack the qualities that make real people, and real fictional characters, compelling: the unpredictability, the unexpected, the dips into what was thought to be unknowable or unknown.
Sure, a chatbot pretending to be Piaget isn't going to respond like Piaget would have. But there will be a difference between responding as Piaget and responding as Erik Erikson. I suspect that's all that Brooks is after.
I think both you, Bill, and Brooks will appreciate...
ReplyDelete"BROOKS: See, it depends what you’re writing about."
"What Happened When I Tried to Replace Myself with ChatGPT in My English Classroom"
Piers Gelly on a Semester-Long Dive into the AI Discourse
https://lithub.com/what-happened-when-i-tried-to-replace-myself-with-chatgpt-in-my-english-classroom/