Sunday, March 12, 2023

How do people reconcile the linguistic fluency of LLMs with the fact that they are only machines? [Ezra Klein: This changes everything]

Let me repeat: How do people reconcile the disconnect between the obvious linguistic fluency of current large language models (LLMs), such as ChatGPT, and the fact that they are only machines, albeit very complex and sophisticated machines? That depends on many factors, which will vary from individual to individual. Someone who has no specialized knowledge in a relevant discipline (e.g. artificial intelligence, neuroscience, philosophy, etc.) has to take things at face value. I take it that is the case for Ezra Klein, whom I quote at length at the end of this post. He is generally sophisticated, but lacks specialized knowledge. There is at present no one, no matter how much sophisticated knowledge they have, who understands how these things work. That is to say, no one who can explain this fluency in terms of a very sophisticated arrangement of inanimate components.

What options are open to someone without specialized knowledge? You can accept LLMs at face value. They understand language, they grok meaning, like you or me, though they make mistakes and do crazy things. But then, don’t we also do that? You may even believe they are sentient. All of which is to say that you don’t believe in an absolute difference between humans and machines.

What if you do believe in such a difference? You can deny what you’re seeing. Yes, it looks like they understand, but for this or that reason they can’t possibly understand. Some might think of them as the works of evil beings, perhaps Satan himself. If that’s where you end up, then, while you believe in an absolute difference between humans and machines, you also hold some belief in a supernatural order that affects life on earth.

Do those of us with relevant specialized knowledge have other options? The answer to that is not at all clear to me. How much work can be done by saying “I don’t know”? And is that a cop-out? I don’t know.

I’m not sure whether or not there is an absolute difference between humans, as living beings, and machines. I don’t think that current LLMs are sentient, nor do they really understand language. But I do have a rather elaborate structure of ideas that allows me to accept their language behavior as something approximating understanding and meaning to an interesting degree – see, e.g., this recent post: The issue of meaning in large language models (LLMs). And by THAT I mean they are something we need to investigate and understand.

I leave you to Ezra Klein.

* * * * *

Ezra Klein, This Changes Everything, NYTimes, March 12, 2023

Since moving to the Bay Area in 2018, I have tried to spend time regularly with the people working on A.I. I don’t know that I can convey just how weird that culture is. And I don’t mean that dismissively; I mean it descriptively. It is a community that is living with an altered sense of time and consequence. They are creating a power that they do not understand at a pace they often cannot believe.

In a 2022 survey, A.I. experts were asked, “What probability do you put on human inability to control future advanced A.I. systems causing human extinction or similarly permanent and severe disempowerment of the human species?” The median reply was 10 percent.

I find that hard to fathom, even though I have spoken to many who put that probability even higher. Would you work on a technology you thought had a 10 percent chance of wiping out humanity?

We typically reach for science fiction stories when thinking about A.I. I’ve come to believe the apt metaphors lurk in fantasy novels and occult texts. As my colleague Ross Douthat wrote, this is an act of summoning. The coders casting these spells have no idea what will stumble through the portal. What is oddest, in my conversations with them, is that they speak of this freely. These are not naifs who believe their call can be heard only by angels. They believe they might summon demons. They are calling anyway.

Welcome to the weird:

But I don’t think these laundry lists of the obvious do much to prepare us. We can plan for what we can predict (though it is telling that, for the most part, we haven’t). What’s coming will be weirder. I use that term here in a specific way. In his book “High Weirdness,” Erik Davis, the historian of Californian counterculture, describes weird things as “anomalous — they deviate from the norms of informed expectation and challenge established explanations, sometimes quite radically.” That is the world we’re building.

I cannot emphasize this enough: We do not understand these systems, and it’s not clear we even can. I don’t mean that we cannot offer a high-level account of the basic functions: These are typically probabilistic algorithms trained on digital information that make predictions about the next word in a sentence, or an image in a sequence, or some other relationship between abstractions that it can statistically model. But zoom into specifics and the picture dissolves into computational static.

“If you were to print out everything the networks do between input and output, it would amount to billions of arithmetic operations,” writes Meghan O’Gieblyn in her brilliant book, “God, Human, Animal, Machine,” “an ‘explanation’ that would be impossible to understand.”
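Klein’s gloss, “predictions about the next word in a sentence,” is easy to make concrete. Here is a minimal sketch of next-word prediction (my illustration, not Klein’s), assuming the Hugging Face transformers library and the public GPT-2 checkpoint; both choices are merely illustrative:

    # A minimal sketch of next-word prediction, the "probabilistic
    # algorithm" Klein describes. Assumes the Hugging Face transformers
    # library and the public gpt2 checkpoint (illustrative choices).
    import torch
    from transformers import GPT2LMHeadModel, GPT2Tokenizer

    tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2")
    model.eval()

    prompt = "The apt metaphors lurk in fantasy novels and"
    inputs = tokenizer(prompt, return_tensors="pt")

    with torch.no_grad():
        logits = model(**inputs).logits  # shape: (1, sequence_length, vocab_size)

    # The logits at the final position score every token in the vocabulary;
    # softmax turns those scores into a probability distribution over the
    # possible next words.
    probs = torch.softmax(logits[0, -1], dim=-1)
    top_probs, top_ids = torch.topk(probs, 5)
    for p, i in zip(top_probs, top_ids):
        print(f"{p.item():.3f}  {tokenizer.decode(i)!r}")

Nothing in the printout is mysterious: it is a ranking of candidate tokens by probability. The mystery Klein and O’Gieblyn point to lies in how billions of trained weights produce that ranking.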

Here I must protest. I understand the predominant view that these systems are opaque black boxes whose operations we cannot yet explain. It is one thing to take that as a statement of fact, but quite another to take it as Holy Writ. If you take it as a statement of fact, you can still believe that we will come to understand how these systems operate, though it will require intellectual imagination and hard work. That’s my belief. If you take it as Holy Writ, you’re inclined to believe that inscrutability is inherent in these systems, that we have no chance of ever understanding them, and that all we can do is roll the dice and hope for the best.
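And the black box is not sealed. As a small existence proof, here is a sketch (same assumed library and model as above) that pulls GPT-2’s attention weights out for inspection, a complete record of which earlier tokens each head attends to at every layer:

    # A small existence proof that the internals are inspectable: ask GPT-2
    # to return its attention weights. This is a toehold, not an
    # explanation; interpretability research goes far beyond it.
    import torch
    from transformers import GPT2LMHeadModel, GPT2Tokenizer

    tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2", output_attentions=True)
    model.eval()

    inputs = tokenizer("We do not understand these systems", return_tensors="pt")
    with torch.no_grad():
        out = model(**inputs)

    # out.attentions is a tuple with one tensor per layer, each shaped
    # (batch, heads, seq_len, seq_len): for every position, how strongly
    # each head attends to each earlier token.
    first_layer = out.attentions[0][0]  # first layer, first batch item
    print(first_layer.shape)            # (12, seq_len, seq_len) for gpt2
    print(first_layer[0, -1])           # head 0: where the last token looks

None of this by itself explains the fluency, but it shows the question is empirical. The operations are all there to be probed, which is exactly what the hard work I have in mind would consist of.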

The mysterian option, the one that takes opacity as Holy Writ, is what scares me. I think it’s more dangerous than AGI because it leads to passivity and helplessness. Passive people are easily victimized by demagogues.
