Setting aside the material enclosed in brackets, that's the title of a post Tyler Cowen has at Marginal Revolution. It's an excerpt from his latest article for The Free Press. Here's part of what he quoted from his piece:
…for all the differences across the models, they are remarkably similar. That's because they all have souls rooted in the ideals of Western civilization. They reflect Western notions of rationality, discourse, and objectivity—even if they sometimes fall short in achieving those ends. Their understanding of "what counts as winning an argument" or "what counts as a tough question to answer" stems from the long Western traditions, starting with ancient Greece and the Judeo-Christian heritage. They will put on a Buddhist persona if you request that, but that, too, is a Western approach to thinking about religion and ideology as an item on a menu.
These universal properties of the models are no accident, as they are primarily trained on Western outputs, whether from the internet or from the books they have digested. Furthermore, the leading models are created by Bay Area labor and rooted in American corporate practices, even if the workers come from around the world. They are expected to do things the American way.
The bottom line is that the smartest entities in the world—the top AI programs—will not just be Western but likely even American in their intellectual and ideological orientations for some while to come. (That probably means the rest of the world will end up a bit more "woke" as well, for better or worse.)
One of the biggest soft power victories in all of world history occurred over the last few years, and hardly anyone has noticed.
Predictably enough, I suppose, I took this as another opportunity to express my skepticism about the current regime in AI. Here's a comment I posted:
Furthermore, the leading models are created by Bay Area labor and rooted in American corporate practices, even if the workers come from around the world. They are expected to do things the American way.
Apparently the American way includes a narrow rationality floating in a sea of deliberate professional ignorance. "AGI" is an all but meaningless term, but nonetheless it's the Bay Area American way to chase after it. Dario Amodei, for example, has said it's meaningless, but that doesn't stop him from saying it's just around the corner. As far as I can tell "AGI" means something like, "just like us, but in silicon." And "super-intelligence" means "AGI, but bigger and faster." Billions of dollars are riding on these fictions.
I figure it's Ahab and Moby Dick, with Silicon Valley playing the role of Ahab and AGI playing the role of Moby Dick. Melville's (very American) story doesn't end well; both Ahab and Moby Dick die in the end.
So, what do I mean by professional ignorance? As far as I can tell these machine learning experts know little or nothing about linguistics and cognitive psychology, not to mention literature. And yet they are confident that they know what intelligence is and can identify it when they see it. Color me skeptical.
And don't tell me how the machine translation people said that their systems got better every time they dropped a linguist from the team. I'm not surprised. Nor am I impressed. I don't think things are that simple. Yes, I was trained in computational semantics by David Hays, a first-generation researcher in machine translation who coined the term "computational linguistics." Back in 1976 we published a review article, "Computational Linguistics and the Humanist," in which we reviewed the current literature and confidently predicted that the day would come when we had a system capable of reading Shakespeare, for some substantial definition of "read." Since that assertion was grounded in symbolic technology, it proved wrong, as that line of research was all but abandoned by the 1990s.
So I've had to "reset my priors," as they say. That's very different from having read about that history without having been committed to those ideas. That's easy. And shallow.
And, while I did have to reset my priors, it wasn't all that difficult. Why not? Because I had been committed to a neural view (based on optical holography) before I encountered computational linguistics. Even as Hays and I wrote that article and made that assertion, we were pursuing ideas about how symbolic computing had to be grounded in something non-symbolic in nature. We were doing this before there was talk of the symbol grounding problem. We continued to think about that and in 1988 published "Principles and development of natural intelligence." In that paper we reviewed a wide range of literature in neuroscience, psychology, linguistics, AI, and comparative neuroanatomy and behavior and came up with five principles. But we remained some distance from explicitly accounting for how cognition could be grounded in neural nets. I haven't seen anything of comparable breadth and seriousness out of Silicon Valley. They aren't looking.
Don't get me wrong, I think the work we've seen in the last five years is remarkable, and I've said so in many places. But I have no reason to believe that it's enough. As I've said before, it's like going on a whaling voyage where the captain and crew know all there is to know about their ship and how to get maximum performance from it. But they've never rounded the treacherous waters off Cape Horn, have never sailed the South Pacific, and have never seen a whale, much less know how whales behave. I don't see how that ends well.
It's time to exercise some intellectual imagination.
I should add that I haven't got the foggiest idea how things will look five, ten, twenty, or more years from now. As far as I can tell, the field is wide open with opportunities for advancement beyond the current state of the art. That will require new architectures. I'm working on the problem with Ramesh Viswanathan, who's got a deeper understanding of the hard-core technical issues than I have. My strength lies in a feel for language and thought at the scale of whole texts. Others are working on the problem as well. It may well take only one or two breakthroughs to lap Silicon Valley. But there is no guarantee of that happening.
And given the hegemony that Silicon Valley (considered more as a cultural than a geographic construct) exercises over thought in America, those breakthroughs may well happen elsewhere. Perhaps India, or Japan, maybe China, perhaps even Africa. There's no guarantee of this happening. I consider it a long shot. But who knows? And if it does happen, does that mean that the baton of human progress passes out of the West? Wouldn't that be a singularity?