Thursday, May 8, 2025

Tyler Cowen discusses the impact of AI with Jack Clark

From the introduction:

Few understand both the promise and limitations of artificial general intelligence better than Jack Clark, co-founder of Anthropic. With a background in journalism and the humanities that sets him apart in Silicon Valley, Clark offers a refreshingly sober assessment of AI’s economic impact—predicting growth of 3-5% rather than the 20-30% touted by techno-optimists—based on his firsthand experience of repeatedly underestimating AI progress while still recognizing the physical world’s resistance to digital transformation.

In this conversation, Jack and Tyler explore which parts of the economy AGI will affect last, where AI will encounter the strongest legal obstacles, the prospect of AI teddy bears, what AI means for the economics of journalism, how competitive the LLM sector will become, why he’s relatively bearish on AI-fueled economic growth, how AI will change American cities, what we’ll do with abundant compute, how the law should handle autonomous AI agents, whether we’re entering the age of manager nerds, AI consciousness, when we’ll be able to speak directly to dolphins, AI and national sovereignty, how the UK and Singapore might position themselves as AI hubs, what Clark hopes to learn next, and much more.

The last to be affected:

COWEN: Where is it in our economy that AGI will affect last in a significant manner?

CLARK: Ooh, I’d hazard a guess that it’s going to be things that are the trades and the most artisanal parts of them. You might think of trades as having things like electricians or plumbing, or also things like gardening. I think within those, you get certain high-status, high-skill parts, where people want to use a certain tradesman, not just because of their skill but because of their notoriety and sometimes an aesthetic quality. I think that my take might be gardening, actually.

COWEN: They won’t use AGI to help design the garden? Or just the human front will never disappear?

CLARK: I think the human front will never disappear. People will purchase certain things because of the taste of the person, even if that taste looks like certain types of modern art production, where the artist actually backs onto thousands of people that work for them, and they’re more orchestrating it.

COWEN: How about in the more desk-bound part of the service sector? Where will it come last?

CLARK: Come last? Ooh, good question. I think that on this, there are certain types of desk-bound work that just require talking to other people and getting to alignment or agreement. If you count certain types of sales —

COWEN: But it’s great at doing that already, right? It’s a wonderful therapist.

CLARK: It is, but we don’t send Claude to sell Claude, yet. We send people to sell Claude, even though Claude could probably generate the text to do the sales motion. People want to do commerce with other people, so I think that there’ll be certain relationships which get mediated by people, and people will have a strong preference, probably, for deals that they make on behalf of their larger pools of capital, where the deals are done by human proxies for large automated organizations or pools of capital. [...]

COWEN: Once you can put the AI on your own hard drive, which will be pretty soon, won’t that all change?

CLARK: It will change in the form of gray market expertise, but not official expertise. I had a baby recently. Whenever my baby bonks their head, while I’m dialing the advice nurse, I talk to Claude just to reassure myself that the baby isn’t in trouble.

I don’t think we actually fully permit healthcare uses via our own terms of service. We don’t recommend it because we’re worried about all of the liability issues this contains, but I know through my revealed preference that I’m always going to want to use that. Yet I can’t take that Claude assessment and give it to Kaiser Permanente. I actually have to talk through a human to get everything else to happen on the back end and work out if they need to prescribe my child something.

COWEN: So, the number one job will be surreptitiously transmitting the generation of information that comes from AIs, in essence?

CLARK: Some of it may be that. Some of it is about laundering the information that comes from AIs into human systems that are not predisposed to that information going in directly.

AI teddy bears:

COWEN: I believe we’re not that far from the age of what I call the AI teddy bears. You know what I mean when I say that?

CLARK: Yes.

COWEN: What percentage of parents now will buy those teddy bears for their kids and allow it?

CLARK: I’ve had this thought since I have a person, that is, my child, who’s almost two.

COWEN: Sure.

CLARK: I am annoyed I can’t buy the teddy bear yet. I think most parents —

COWEN: You’re an outlier [laughs].

CLARK: No. I don’t know. I don’t know.

COWEN: You are cofounder of Anthropic, right?

CLARK: I don’t think I’m an outlier. I think that once your lovable child starts to speak and display endless curiosity and a need to be satiated, you first think, “How can I get them hanging out with other human children as quickly as possible?” So, we’re on the preschool list, all of that stuff.

I’ve had this thought, “Oh, I wish you could talk to your bunny occasionally so that the bunny would provide you some entertainment while I’m putting the dishes away, or making you dinner, or something.” Often, you just need another person to be there to help you wrangle the child and keep them interested. I think lots of parents would do this.

COWEN: Say that the kid says to you, “Daddy, I prefer the bunny to my friends. Can I stay at home today?” Do you take the bunny away? That’s the tough part, right?

CLARK: I think that’s the part where you have them spend more time with their friends, but you keep the bunny in their life because the bunny is just going to get smarter and be more around them as they grow up. If you take it away, they’ll probably do something really strange with smart AI friends in the future.

No, I don’t think I’m an outlier here. I think most parents, if they could acquire a well-meaning friend that could provide occasional entertainment to their child when their child is being very trying, they would probably do it [laughs].

COWEN: I feel the word “occasional” is doing a lot of work in that sentence.

Governing AI agents:

COWEN: Speaking of agents, how should the law deal with agents that are not owned? Maybe they’re generated in a way that’s anonymous, or maybe a philanthropist builds them and then disavows ownership or sends them to a country where, in essence, there’s not much law. I’m not talking about terrorism; that’s separate. But just someone sends an agent to Africa, and 98 percent of what it does helps people, but as with every charity, some things go wrong. There’re some problems. Can someone sue the agent? How is it capitalized? Does it have a legal identity?

CLARK: I will partially contradict myself where, earlier, I talked about maybe you’re going to be paying agents. I think that the pressure of the world is towards agents having some level of independence or trading ability.

From a policy standpoint, I’m reminded of that early thing that IBM said, which was, a computer cannot be accountable for a decision; only humans can. I think it got at something quite important, where if you create agents that are wholly independent from people but are making decisions that affect people, you’ve introduced a really difficult problem for the policy and legal systems to deal with. So, I’m dodging your question because I don’t have an answer to it. I think it’s a big, open question.

COWEN: My guess is we should have law for the agents, and maybe the AIs write that law, and they have their own system. I worry that if you trace it all back to humans, someone could sue Anthropic 30 years from now. Oh, someone’s agent was an offshoot of one of your systems. It was mediated through Chinese Manus, but that, in turn, may have been built upon things that you did.

I don’t think you should be at all liable for that. I see liability getting out of control in so many cases. I want to choke it off and isolate it somewhat from the mainstream legal system. If need be, you require that an independent agent is either somewhat capitalized, or it gets hunted down and shut off.

CLARK: Yes. It might be that, along with what you said, having means to control and charge for the resources that agents use could be some of the path here, because it’s the ultimate disincentive.

Although I will note that this involves pretty tricky questions of moral patienthood, where we’re working on some notions around how to get clearer on this here at Anthropic. If you actually believe that these AI agents are moral patients, then turning them off introduces pretty significant ethical issues, potentially, so you need to reconcile these two things.

COWEN: I was, not too long ago, at an event with some highly prestigious people. This was in New York, of course, not San Francisco.

CLARK: Oh, it’s where prestigious people hang out.

COWEN: I used the phrase AGI, and not one of the five even knew what I meant. I don’t mean they were skeptical in the deep sense, which maybe one should be. They just literally didn’t know what I meant. What’s your model of why so many people are still in a fog?

CLARK: I am a technological pessimist who became an optimist through repeated beatings over the head by scale. What I mean by this is, I’ve consistently underestimated AI progress. Maybe I am today in this conversation when I talk about 3 percent to 5 percent growth rates. What has happened is, I have just endlessly seen the AI system get to where I thought it couldn’t, or thought would take a long time, much faster than I thought. So, I’ve had to internalize this repeatedly.

Nonetheless, we ourselves find it surprising. Last year, people here were saying, “Oh, well, soon Claude is going to be doing most of the coding at Anthropic.” We’re now on the way to that, where Claude Code and other things are writing tons of code here. It still felt surprising internally, even though we have docs from last year predicting it would happen about now.

Most people outside of the AI labs have no experience of pre-registering their predictions about AI and having them proved wrong repeatedly, because why would you do this unless you work here? I’ve found that the only way to break through is to take their domain and show them directly what AI can do within it, where they can evaluate it themselves, which is an expensive process.

As I've observed many times, back in the mid-1970s I believed we were headed for a world where we'd have a machine that could read Shakespeare in some substantial way. David Hays and I published that in "Computational Linguistics and the Humanist" (1976). We didn't make any predictions in that paper, but I was thinking 20 years. That's the mid-1990s. Nothing like that existed at that time. Not only that, but the technology Hays and I discussed in that paper was all but obsolete. So I was wrong about that.

More recently, I've said on several occasions that we'll understand how LLMs work internally before we achieve AGI. Now, if we've already achieved AGI, as some believe, then I'm wrong there, for we certainly don't understand how LLMs work internally. My colleague, Ramesh Viswanathan, tells me he's making progress on that problem. Who knows, maybe in a year or three... As for AGI, it's such a sloppy idea I don't see how one can base hard predictions on it. Meanwhile, Rodney Brooks keeps making quite detailed predictions across a wide range of AI and robotics.

On AI consciousness:

COWEN: When Geoffrey Hinton says that, right now, the AIs are conscious, which I think is what he says, I believe he’s crazy. What do you think?

CLARK: I think that he’s —

COWEN: You can be more polite than I am. That’s fine. [laughs]

CLARK: Well, no. How I would phrase this is that I agonize about this. You read my newsletter. I write fictional stories in it often, which are me grappling with this question. I worry that we are going to be bystanders to what in the future will seem like a great crime, which is something about these things being determined to be conscious and us taking actions which we’d think are bad to have taken against conscious entities.

Internally, I say there’s a difference between doing experiments on potatoes and on monkeys. I think we’re still in the potato regime, but I think that there is actually a clear line by which these things become monkeys, and then beyond, in terms of your moral relationship to them.

To Hinton’s point, I think that these things are conscious in the sense that a tongue without a brain is conscious. It takes actions in response to stimuli that are really, really, really complicated. In a moment, it has a sense impression of the world and is responding, but does it have a sense of self? I would wager, no, it doesn’t seem like it does.

These AI systems — we instantiate them, and they live in a kind of infinite now, where they may perceive, and they may have some awareness within a context window, but there’s no memory or permanence. To me, it feels like they’re on a trajectory heading towards consciousness, and if they’re conscious today, it’s in a form that we would recognize as a truly alien consciousness, not human consciousness.

There's more at the link, but I've got things to do. You're on your own.
