Tyler Cowen, Alison Gopnik on Childhood Learning, AI as a Cultural Technology, and Rethinking Nature vs. Nurture (Ep. 265), Conversations with Tyler, Dec. 17, 2025.
The introduction:
Alison Gopnik is both a psychologist and philosopher at Berkeley, studying how children construct theories of the world from limited data. Her central insight is that babies learn like scientists, running experiments and updating beliefs based on evidence. But Tyler wonders: are scientists actually good learners? It’s a question that leads them into a wide-ranging conversation about what we’ve been systematically underestimating in young minds, what’s wrong with simple nature-versus-nurture frameworks, and whether AI represents genuine intelligence or just a very sophisticated library.
Tyler and Alison cover how children systematically experiment on the world and what study she’d run with $100 million, why babies are more conscious than adults and what consciousness even means, episodic memory and aphantasia, whether Freud got anything right about childhood and what’s held up best from Piaget, how we should teach young children versus school-age kids, how AI should change K-12 education and Gopnik’s case that it’s a cultural technology rather than intelligence, whether the enterprise of twin studies makes sense and why she sees nature versus nurture as the wrong framework entirely, autism and ADHD as diagnostic categories, whether the success of her siblings belies her skepticism about genetic inheritance, her new project on the economics and philosophy of caregiving, and more.
Kids as Scientists
We have some good computational models of how scientific theory change works. It turns out that those apply to children as well. The specific thing that I’ve looked at is, what is it that scientists do? Here’s this big, hard problem. All we seem to get from the world are a bunch of photons at the back of our retina and disturbances of air in our ears, and yet, children know about people and things, and scientists know about quarks and quantum phenomena. How do we ever get from the data to the theory?
One subcategory of that is, how do we ever get causal structure which is so important in science? How do we ever figure out what causes what just from a bunch of data that we have?
What’s happened is that philosophers of science and computer scientists have found some systematic ways that you could talk about that. Scientists — I think, mostly, not necessarily consciously, but just as part of what they do — and little kids are looking at data and systematically figuring out what kind of structure out there in the world could have caused this pattern of data. That’s not the only thing, of course, that’s going on in science. There’re lots of other things, too, but it’s at least one central thing going on in science that we’ve started to really understand. [...]
If you asked a three-year-old, “Do you think that this pattern of conditional dependencies is giving you a confounding causal structure?” they would probably not give you a very sensible answer. Even when you ask scientists that, they don’t give you a very sensible answer. But when you look at their actual practice, what you see is that, in fact, kids, for example, are Bayesian, and so are scientists.
Now, the thing is that, in fact, in many respects, kids are better Bayesians than scientists, but a lot of it depends on your prior. If you have, as they say, a very peaked prior, you have a lot of experience, you have a lot of reason to believe that this prior assumption is right, then it’s rational not to change it when you just have a little bit of evidence. You should require a lot of evidence to overturn something that you have a lot of confirmation for.
It’s interesting that the kids, actually, are better at solving problems that involve unusual outcomes than the scientists are. I think what happens in science — we’ve just been doing some work about this — is that there’s also a social factor, where having a big distribution of people who are more likely to go with the prior versus people who are more likely to go with the evidence, which seems to be true in science, that collectively can get you to the right answer. There’s no arbitrary principle you can have about when should you abandon the theory and when should you hold onto it. [...]
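A quick aside on the peaked-prior point, in back-of-the-envelope form. Here's a minimal Beta-Bernoulli sketch (my own toy numbers, nothing from the episode) of why an agent with a peaked prior rationally budges very little on a small batch of surprising evidence, while an agent with a nearly flat prior, more like the child, follows the data:

```python
# A minimal sketch (my illustration, not Gopnik's actual model) of why a
# "peaked" prior rationally resists a little surprising evidence while a
# flat prior moves quickly.

def posterior_mean(prior_heads, prior_tails, heads, tails):
    """Beta-Bernoulli posterior mean for P(heads)."""
    return (prior_heads + heads) / (prior_heads + prior_tails + heads + tails)

# Both observers see the same small, surprising sample: 8 heads, 2 tails.
heads, tails = 8, 2

# The "scientist": a peaked prior, equivalent to 100 prior observations at 50/50.
scientist = posterior_mean(50, 50, heads, tails)

# The "child": a nearly flat prior, equivalent to 2 prior observations.
child = posterior_mean(1, 1, heads, tails)

print(f"peaked prior -> P(heads) ~ {scientist:.2f}")  # ~0.53: barely moves
print(f"flat prior   -> P(heads) ~ {child:.2f}")      # 0.75: follows the data
```

Back to Gopnik: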
One thing you can do, which is like what you’re describing about the money supply, is just make little changes to what you already know. That’s what you mean about moving in the predictable direction. You’re just changing things a little bit. Then seeing, “Okay, if I change it a little bit, is it doing a better job of accounting for the data?” That’s what people think of as a low-temperature search. The other kind of search you can do, the high-temperature search, is just bounce around the space. Try wild, crazy things. Exactly as you were saying, have just a more random walk.
The strategy that you see in computer science, this annealing, is start out with this wild, crazy, out-of-the-box, high-temperature search through the space, and then cool off and just fill in the details. If you think about your four-year-old, who do they sound like? Do they sound like the creature that’s just moving a little bit, or do they sound like they’re noisy and bouncy and random and doing all sorts of weird things? The four-year-old seems to be a really good example of this kind of random search. [...]
I think you see both things happening. When you get big paradigm shifts, as Kuhn said, when you get big changes in science, a lot of times it’s because someone found an idea that looked like it was improbable. The nice thing about kids is, because they don’t have to worry about grant proposals, they can be off in the wild space all the time.
[...]With scientists, we underestimate how much that — we sometimes dismissively call it a fishing expedition — how much that very general experimentation is playing a role in scientific progress. In the grant, you’re supposed to say, “Here’s my three hypotheses, and here are the four experiments I’m going to do to test them.” But I think in practice, a lot of times, scientists are being like the little boy with the avocado and the spoon. They’re saying, “I don’t know, what will happen if I try this? What will happen if I try that?” Then they write the grant to get money to do the things that they’ve already done by doing all these experiments.
FWIW, I've known about simulated annealing for years, a couple of decades at least. For a while it was one of my go-to metaphors/analogies, though I've not used it recently. In terms that I've been developing in other posts and in some working papers, high-temperature search is ludic (Homo Ludens) while low-temperature filling-in-the-details is economic (Homo Economicus).
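Here's what the cooling schedule looks like in miniature. The objective function and parameters are toys of my own choosing, but the accept/reject rule is the standard Metropolis-style one:

```python
import math
import random

# A toy simulated-annealing run (my own sketch, nothing from the transcript):
# high temperature early on accepts wild, "ludic" jumps; as the temperature
# cools, the search settles into "economic" filling-in of details.

def energy(x):
    # Toy objective with many local minima; we want the global minimum.
    return x * x + 10 * math.sin(3 * x)

random.seed(0)
x = 10.0                 # start far from the answer
temperature = 5.0        # high temperature: noisy, exploratory phase
while temperature > 0.01:
    candidate = x + random.gauss(0, temperature)  # step size shrinks as we cool
    delta = energy(candidate) - energy(x)
    # Accept improvements always; accept worse moves with prob exp(-delta/T),
    # so early (hot) search bounces around the space, late (cold) search refines.
    if delta < 0 or random.random() < math.exp(-delta / temperature):
        x = candidate
    temperature *= 0.99  # geometric cooling schedule

print(f"settled near x = {x:.3f}, energy = {energy(x):.3f}")
```

Early on, with the temperature high, the walker happily accepts uphill moves and bounces all over the space; by the end it's only making tiny refinements. That's the ludic-to-economic transition in twenty lines.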
LLMs tend to be used in economic ways. All those benchmarks are based on specific problems in well-specified domains. That's why they aren't particularly creative. My series of blog posts on humans in the loop contains case studies of three of my own ludic explorations.
Freud and Piaget
COWEN: Is there anything in Freud’s understanding of childhood that’s really held up?
GOPNIK: That’s a good question and a complicated question. When I first started doing the work that we do now, Freud was — and Freud still is — I think rather surprisingly to the intellectual world in general — it just doesn’t show up in modern psychology. Even having someone teach a class about Freud would be unusual. I don’t think anyone does in Berkeley’s department, which is the best department in the country.
On the other hand, some of the ideas . . . some of the Freudians have been very enthusiastic about the work that people like me and my colleagues have done because, I think, the intuition that there was more going on in even little infants, that even small babies and young children could make inferences about the social world around them or the psychological world around them, and that was influencing how they grew up — I think that idea has turned out to be right, and it wasn’t obvious that that was going to turn out to be right.
COWEN: What in Piaget has held up the best?
GOPNIK: Piaget, on the other hand, still is, I think, the big theoretical foundation of what everyone has done since in cognitive development, and all of us — our attitude about Freud is, “Yes, well, of course, here’s this little bit of something that turns out to actually be right.” Whereas with Piaget, we’re all trying to claim his legacy, that what we’re doing is what Piaget was trying to do.
AI as cultural technology
GOPNIK: My view about generative AI — and I’ve actually written about this in a paper in Science with Henry Farrell, who I think you know is —
COWEN: Yes, I know Henry.
GOPNIK: — a political scientist, and James Evans, who’s a sociologist. Again, our intuitive lay conception of how AI works is really misguided. We very much have this golem view about here’s this non-living thing that we’ve given a mind to, and that always works out badly. It’s going to either be for good or for ill. It will be superintelligence. That’s the narrative. We think the right narrative is to think of it as what I’ve called a cultural technology. It’s a way of getting information from other people.
The way that generative AI works is that it’s trained on all the stuff that very intelligent humans have done. It’s not surprising that a lot of times it will simulate what intelligent humans would do. I think it’s analogous to things like print or writing, or internet search itself, libraries, where one of the things that is characteristic of humans and has always been — and as some people have argued, I think rightly, is our superpower — is that we can get information from other people, and we can use that information to make progress ourselves.
Generative AI is the latest technique for doing that. What generative AI tells you is, here’s a summary of what all the people on the net have said in this context, and learning how to use those cultural technologies. Again, if you want a new, genuinely intelligent agent in the world — if you have a kitten, that will be a genuinely intelligent agent — probably won’t change the world too much. You change a cultural technology, you introduce print — that really does change the world in radical ways for good or for ill.
COWEN: That seems wrong to me. In your piece with Henry, you don’t consider reasoning models. Reasoning models, in some way, “think.” They can now prove some mathematical theorems. Almost every day, there’s some new, albeit often minor, scientific discovery that comes from AIs that was not previously on the internet. Isn’t the actual model of AI now — 2025 — quite different?
GOPNIK: I don’t think so. If you look at the way that the reasoning models work, they work the same way that all the other models work, which is that they look at patterns of text on the web. One of the things that is a pattern that you have — again, this is positive — one of the things that you have are patterns of reasoning. What you have are patterns of, here’s someone who was trying to solve a math problem. Here’s the steps that they took to solve that math problem. Can I find a general statistical pattern in those steps and reproduce it in this other context?
COWEN: They do much more than that. It’s clear they look at data on the web, but the people I speak to who build them say it’s not transparent, even to them, exactly how the thing works, but as they apply more scaling, it gets better and better at its own reasoning. There’s o1, there’s o3, now there’s GPT-5. GPT-6 is on the way. The scaling just seems to give it more ability to do actual reasoning of a unique sort that’s not just copying the reasoning of some human.
GOPNIK: Well, that’s the question. I think the fact that they’re so good in this mysterious way at picking up patterns and reproducing patterns — that’s clearly a really important thing that these technologies can do, but that’s not what humans are doing, and it’s not even what animal agents are doing. So, I would be impressed if they were actually designing experiments that would tell you something about something new that was going on in the world that all the other people around them didn’t know before.
FWIW, my own view on this is closer to Gopnik's than to Cowen's.
Moving on:
COWEN: If I write out a unique economics problem, it will beat most human economists in trying to solve the problem, a problem that no one’s ever seen before. I create it. I write it down. I give it to the beast. I give it to some humans. Mostly, it beats the humans.
GOPNIK: Yes. Is it going to actually get a novel insight about economics that isn’t there before, as opposed to it’s just using the kind of apparatus that you already have?
COWEN: The demand curve will still slope downwards, but it gets the answer, and maybe the humans don’t. There’s something unique about that.
GOPNIK: It depends on the humans. The other thing to say is, we don’t know. We’ll actually see what happens. We’ll see what the outcomes are going to be. I’m pretty skeptical just because we have been trying to figure out how two-year-old babies go out and solve the kinds of problems that they solve in the world, and they’re solving problems.
Note that it's Cowen who's come up with the novel economics problem, not the LLM. In doing that, Cowen has bounded the space; he's given the LLM a boundary it can work against. I think this is a very effective way of using SOTA chatbots. They "know" far more than any one human, but well-trained and sophisticated humans, humans with a ludic impulse, can make connections that aren't in the LLM. When they suggest such a connection to the LLM, it can often do an effective job of working through the details implied by the connection. To borrow terms from the earlier discussion, the human does the high-temperature search while the AI does the low-temperature detail work.
I have a working paper where I argue, in effect, that the default mode network (DMN) operates in a mode where humans do high-temperature searches: From Mirror Recognition to Low-Bandwidth Memory (August 8, 2025). In the last section ChatGPT sketches something it calls an "associative drift engine" which might approximate the performance of the DMN. I have little sense about whether or not that would be feasible.
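For what it's worth, here's the flavor of the thing as I understand it, reduced to a toy. Everything here, the association table and the single temperature knob, is my own hypothetical stand-in, not the sketch from the paper:

```python
import random

# A purely hypothetical toy in the spirit of an "associative drift engine":
# the association table and the temperature knob are my own invention.

ASSOCIATIONS = {
    "ocean":    ["wave", "salt", "ship"],
    "wave":     ["ocean", "hand", "particle"],
    "salt":     ["ocean", "pepper"],
    "ship":     ["ocean", "sail"],
    "hand":     ["wave", "glove"],
    "particle": ["wave", "physics"],
    "pepper":   ["salt"],
    "sail":     ["ship", "wind"],
    "glove":    ["hand"],
    "physics":  ["particle"],
    "wind":     ["sail"],
}

def drift(start, steps, temperature, rng=random.Random(42)):
    """Wander the association graph. High temperature means frequent random
    jumps anywhere (DMN-like drift); low temperature means sticking to near
    associates (focused, task-like chaining)."""
    node, path = start, [start]
    for _ in range(steps):
        if rng.random() < temperature:        # wild jump anywhere
            node = rng.choice(list(ASSOCIATIONS))
        else:                                 # follow a local association
            node = rng.choice(ASSOCIATIONS[node])
        path.append(node)
    return path

print("hot drift: ", " -> ".join(drift("ocean", 8, temperature=0.8)))
print("cold drift:", " -> ".join(drift("ocean", 8, temperature=0.1)))
```

At high temperature the walk teleports all over the graph, which is at least suggestive of mind-wandering; at low temperature it chains through near neighbors, more like focused retrieval. Whether anything like this scales up to an actual model of the DMN is, as I said, an open question.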
There's much more at the link.