David Marchese, The Interview: Michael Pollan Says Humanity Is About to Undergo a Revolutionary Change, NYTimes, Feb. 7, 2026.
Throughout his work — which includes classic books like “The Omnivore’s Dilemma” (2006), about why we eat the way we do, and “How to Change Your Mind” (2018), about the science and uses of psychedelic drugs — Pollan has waded into ideas about the inner workings of the mind. Now, with his forthcoming book, “A World Appears: A Journey Into Consciousness,” which will arrive this month, he has jumped into the deep end. The book is both a highly personal and expansive multidisciplinary survey of questions around human consciousness — what it is, what causes it, what it’s for and what the possible answers might mean for how we choose to live. And as Pollan explained, with the rise of artificial intelligence as well as the relentless political pressure on our attention (that is, our minds), those questions, already profound, are becoming only more urgent.
Later, in the interview:
Marchese: You are skeptical that A.I. can achieve consciousness. Why?
Pollan: I’m convinced by some of the researchers, including Antonio Damasio and Mark Solms, who made a really compelling case that the origin of consciousness is with feelings, not thoughts. Feelings are the language in which the body talks to the brain. We forget that brains exist to keep bodies alive, and the way the body gets the brain’s attention is with feelings. So if you think feelings are at the center of consciousness, it’s very hard to imagine how a machine could rise to that level to have feelings. The other reason I think we’re not close to it is that everything that machines know, the data set on which they’re trained, is information on the internet. They don’t have friction with nature. They don’t have friction with us. Some of the most important things we know are about person-to-person contact, about contact with nature — this friction that really makes us human. [...]
Marchese: But if an A.I. says: “Michael, I’m conscious. I promise,” how do we know?
Pollan: We don’t, and that is exactly why people are falling deep into these relationships with A.I. We can’t say it’s not conscious when it tells us it is. But we can test it in various ways. It all goes back to this idea of the Turing test — that the test of machine intelligence would be when they can fool us.
Marchese: If the Turing test is the criterion for machine consciousness, then that test has already been passed.
Pollan: Exactly, it has fooled many, many people. Whether it can fool an expert, too, I don’t know, but probably. So we’re in a very weird place where the machines we’re living with are telling us they’re conscious. We can’t dispute it, but we can look at how they’re made and draw the kind of conclusions I’ve drawn. But is that going to persuade everybody? No. We want them to be conscious in some way. Or some of us do. It’s easier to have a relationship with a chatbot than another human. Going back to that friction point, they offer no friction. They just suck up to us and convince us how brilliant we are, and we fall for it.
There's much more at the link.