Wednesday, April 5, 2023

Age and wisdom, in politics and artificial intelligence [& in life]

At a relatively young age I learned that wisdom is an elusive and valuable trait that is conferred by age, and only by age. Then I found a statement on the subject by George Bernard Shaw. I just spent five minutes searching the web for the exact wording, to no avail. So I’ll have to give you what I have been able to reconstruct from my vague memories: If age conferred wisdom, then the streets of London would be wiser than anyone who walks them.

Makes sense.

Nonetheless I am going to make a plea on behalf of age.

Back in February, Michael Liss had a column in 3 Quarks Daily entitled, How Do I Know My Youth Is All Spent? It was about the age of U.S. Presidents, a question that arises because Joe Biden, the current incumbent, is the oldest President we have ever had, and he may well run again. Is that sensible? Is it wise?

Once the essay got rolling, Liss observed:

Presidents are a little like baseball pitchers; while there are freaks of nature (Nolan Ryan, Justin Verlander), it’s very difficult to sustain velocity as you age, and velocity can’t be entirely replaced by craft. Reagan still had his fastball at 69, but by the time his second term came to an end, his remaining abilities were in question.

Biden himself was the equivalent of the “Crafty Lefty” who came out of the bullpen to retire one menacing batter. Part of many Democrats’ and swing voters’ reticence about his potential 2024 candidacy isn’t a function of his policy chops—he’s done a credible job as POTUS. It’s a deeper concern over his political ability to take on an entire line-up of free-swinging right-handed hitters. The Party is going to need leadership in 2024. It’s also going to need it in 2025-28. Biden can’t realistically play the role of elder statesman if he’s still sitting in the Oval Office, and he certainly can’t do it if he runs and loses—he will take the blame, justifiably, for insisting on running a second time.

I posted a long comment on Liss’s essay and I repost it below, both as a general observation and in the context of some remarks that Tyler Cowen recently made about the implications of AI:

In several of my books and many of my talks, I take great care to spell out just how special recent times have been, for most Americans at least. For my entire life, and a bit more, there have been two essential features of the basic landscape:

1. American hegemony over much of the world, and relative physical safety for Americans.

2. An absence of truly radical technological change.

Unless you are very old, old enough to have taken in some of WWII, or were drafted into Korea or Vietnam, probably those features describe your entire life as well.

In other words, virtually all of us have been living in a bubble “outside of history.”

Now, circa 2023, at least one of those assumptions is going to unravel, namely #2. AI represents a truly major, transformational technological advance. Biomedicine might too, but for this post I’ll stick to the AI topic, as I wish to consider existential risk.

#1 might unravel soon as well, depending how Ukraine and Taiwan fare.

It is fair to say we don’t know; nonetheless, #1 is also under increasing strain.

Hardly anyone you know, including yourself, is prepared to live in actual “moving” history. It will panic many of us, disorient the rest of us, and cause great upheavals in our fortunes, both good and bad. In my view the good will considerably outweigh the bad (at least from losing #2, not #1), but I do understand that the absolute quantity of the bad disruptions will be high.

It's that phrase “actual ‘moving’ history” that caught my eye. I immediately associated it with the collapse of the Soviet Union back in 1989-1991. I’d grown up during the Cold War and thought it was an all-but-permanent condition of world history. The collapse of the USSR ended that, and taught me that the world does indeed change. Not that I didn’t somehow know that from having read quite a bit of history. But what you know from having lived and what you know from having read about it, they are different things in the mind and, consequently, in one’s judgment and subsequent action.

Here, then, is my response to Michael Liss’s essay:

This age business is complex and tricky. I'd certainly like to see someone younger in the presidency – 50 or even less – and I'm 75. I'm thinking in part about physical and mental competence, but I'm also thinking about political experience. I'd prefer not to have a president who spent their childhood and early adulthood during the Cold War. The Berlin Wall came down in 1989, precipitating the end of the Cold War. Someone who is now 50 would have been a young child at that time. For them the Cold War is a thing for the history books, not something that conditioned their lives. Even 60 might be OK. They'd have been in their mid-teens by 1989 and so would have lived through the end of the Cold War and come into adulthood in a world without it.

Why am I worried about a Cold War mentality? Because I don't want a Cold War with China. I worry that someone who was a professional politician during the Cold War would be likely to have Cold War reflexes built into their political judgement. That could be very bad, bad for us, bad for the Chinese, bad for the world.

OTOH, take the world of artificial intelligence. One of the big controversies within the field is whether or not deep learning can go "all the way," whatever that is. Deep learning is the generic technology behind ChatGPT and DALL-E and other current systems. That technology is very different from the technology that was prevalent in AI and computational linguistics, which are different though related fields, from their beginnings in the mid 1950s up through and into the 1990s. That older technology, called symbolic computing, is what I was trained in when I was in graduate school in the 1970s. It began to collapse in the 1980s for various reasons: it was brittle, took more computing power than was available, and had to be hand coded, which was difficult and required a great deal of skill.

The people behind deep learning are mostly under 50; many are under 40 or even under 30. They've had no training in symbolic computing. For them it is a matter of history. It's the technology that failed. So why bother?

Despite my obvious enthusiasm for ChatGPT, I think that's very short-sighted. I believe that deep learning, albeit powerful in some ways, is also limited. For example, we know that ChatGPT and other deep learning systems tend to "hallucinate," as it is called. They make stuff up. I think that problem is inherent in the technology, as do others, most prominently Gary Marcus. Marcus is old enough to have been trained in symbolic computation. He believes we need hybrid technology that combines deep learning with symbolic computing. David Ferrucci, whom I knew when he was a student at RPI while I was on the faculty, believes that as well. He's best known as the investigator who headed IBM's Watson project, which was hybrid technology. He now has his own company, Elemental Cognition, that is working on hybrid technology.

Those guys are younger than I am, but they are old enough to have been trained in symbolic computing. They know what it is and why it is necessary. They have actually studied language and psycholinguistics and have some understanding of how language works. The younger deep learning researchers don't have that kind of knowledge. They create systems that produce text, but they do not themselves have substantial knowledge about language.

Here's a case where age is an advantage. It's an advantage because older knowledge still has a use. But lived knowledge of Cold War politics, that's not so useful.
