Sunday, March 31, 2024

Is Beyoncé channeling "For What It's Worth" in "American Requiem"?

Though I've been aware of Beyoncé since Destiny's Child, I don't follow her music, or much current music for that matter. But I've been hearing at least one tune (“Texas Hold ’Em”) from her new album, Cowboy Carter, on YouTube Shorts, and, for some reason, the first paragraph of the New York Times review piqued my curiosity:

The first song on “Cowboy Carter,” Beyoncé’s not-exactly-country album, makes a pre-emptive strike. “It’s a lot of talking going on while I sing my song,” she observes in “Ameriican Requiem” over guitar strums and electric sitar, adding, “It’s a lot of chatter in here.”

So I followed the link and started listening:

It took a while before I heard the guitar strums and the sitar (roughly 0:48), and her opening lines – "It's a lot of talking going on/ While I sing my song" – reminded me of a 1960s anthem by Buffalo Springfield – "There's something happening here/ What it is ain't exactly clear" – "For What It's Worth." Is the resemblance intentional? I don't know. The sonic textures of the two songs are quite different in various respects, and Beyoncé’s is a bit slower, but there's a deliberateness, a martial quality, an overall vibe, that resembles the older song. There's a back-up vocal on "For What It's Worth" that kicks in about halfway through, no words, just syllables, that resonates with some of the (more elaborate) background work on “Ameriican Requiem.”

Beyond this I note a few things:

  • Buffalo Springfield pioneered the genre of country-rock and Beyoncé's Cowboy Carter is billed as a country album.
  • Cowboy Carter contains a Beatles song, "Blackbird," from roughly the same era (1968).
  • Willie Nelson and Dolly Parton show up on the album; both of them are older than dirt.
The album's range of reference and resonance is nothing if not wide. So why not "For What It's Worth"?

"For What It's Worth" came out in 1966, well before Beyoncé was born (1981). And while the song sticks around on YouTube and, I assume, other streaming sources, it's not at all prominent. But she's in the music business and, while I know almost nothing about her, I do know that serious musicians tend to have catholic listening habits. It's not a matter of deliberately setting out to channel and transmute the older song – though, who knows, it could be, but that's not a requirement. It's a matter of inhabiting the same vibe and using it for contemporary purposes.

Strange new worlds [The Hallucinated City]

Stupidify This! How Matt Farley floods the web with junk music and makes a decent living doing it.

Brett Martin, Why Did This Guy Put a Song About Me on Spotify? NYTimes Magazine, March 31, 2024.

The man:

[Matt] Farley is 45 and lives with his wife, two sons and a cockapoo named Pippi in Danvers, Mass., on the North Shore. For the past 20 years, he has been releasing album after album of songs with the object of producing a result to match nearly anything anybody could think to search for. These include hundreds of songs name-checking celebrities from the very famous to the much less so. He doesn’t give out his phone number in all of them, but he does spread it around enough that he gets several calls or texts a week. Perhaps sensing my deflation, he assured me that very few came from the actual subject of a song. He told me the director Dennis Dugan (of "Dennis Dugan, I Like Your Movies Very A Lot," part of an 83-song album about movie directors) called once, but he didn’t realize who it was until too late, and the conversation was awkward.

What he's done:

Largely, though not entirely, on the strength of such songs, Farley has managed to achieve that most elusive of goals: a decent living creating music. In 2008, his search-engine optimization project took in $3,000; four years later, it had grown to $24,000. The introduction of Alexa and her voice-activated sistren opened up the theretofore underserved nontyping market, in particular the kind fond of shouting things like “Poop in my fingernails!” at the computer. "Poop in My Fingernails," by the Toilet Bowl Cleaners, currently has over 4.4 million streams on Spotify alone. To date, that “band,” and the Odd Man Who Sings About Poop, Puke and Pee, have collectively brought in approximately $469,000 from various platforms. They are by far Farley’s biggest earners, but not the only ones: Papa Razzi and the Photogs has earned $41,000; the Best Birthday Song Band Ever, $38,000; the Guy Who Sings Your Name Over and Over, $80,000. Dozens of others have taken in two, three or four digits: the New Orleans Sports Band, the Chicago Sports Band, the Singing Film Critic, the Great Weather Song Person, the Paranormal Song Warrior, the Motern Media Holiday Singers, who perform 70 versions of “We Wish You a Merry Christmas,” substituting contemporary foods for figgy pudding. It adds up. Farley quit his day job in 2017.

“People like to criticize the whole streaming thing, but there’s really a lot of pros to it,” he said. Indeed, in 2023, his music earned him just shy of $200,000, about one halfpenny at a time.

Movies too:

And he makes movies: microbudgeted, determinedly amateur but nevertheless recognizably cinematic features starring himself and his family and friends. (They feature a spectacular array of New England accents.) In most, Farley plays some version of himself, a mild-mannered, eccentric hero projecting varying degrees of menace. Farley and his college friend Charlie Roxburgh are in the midst of a project in which they have resolved to release two full movies per year. The model, Farley said, was inspired by Hallmark Movies: “If this movie stinks, good news, we’re making another in six months!” Their most popular work remains “Don’t Let the Riverbeast Get You!” (2012), a charmingly shaggy tale of a cryptid threatening a small New England town. It features Farley’s father as a big-game hunter named Ito Hootkins.

Farley’s persona is simultaneously grandiose — “I really do think I’m the greatest songwriter of the 21st Century,” he told me — and knowingly self-effacing.

On method:

The umbrella name that Farley uses for all his outputs is Motern. He made the word up; or rather, he seized on what he felt was its strange power after misspelling the word “intern” in what he had planned to be a 10,000-page novel. To Farley, creativity has always been a volume business. That, in fact, is the gist of “The Motern Method,” a 136-page manifesto on creativity that he self-published in 2021. His theory is that every idea, no matter its apparent value, must be honored and completed. An idea thwarted is an insult to the muse and is punished accordingly.

So what?

Mostly I was trying to figure out whether I thought Farley was a bad guy. Did his scheme represent the inevitable cynical end product of a culture in the grips of algorithmic platforms? Or might it be a delightful side effect? Was his work spam or a kind of outsider art? Was he just the Poop Song Guy, or was he closer to Steve Keene, the Brooklyn-based, Gen-X-hipster-approved painter of over 300,000 works who has been the subject of books and museum retrospectives?

What the future holds:

Among other topics Farley told me he planned to tackle in future albums were: colleges, household items, tools, musical instruments. I had planned to ask what categories haven’t worked, but what had become clear by then is that the idea of any one song, or even album, hitting the jackpot isn’t the point. Even after Spotify’s recent announcement that it would no longer pay royalties on songs receiving fewer than a thousand streams, Farley’s business model rests on the sheer bulk of his output. And so does his artistic model. Whatever the dubious value of any individual song in the Farley universe, it’s as part of the enormous body of the whole, the magnum opus, that it gains power. This is especially true when you consider that an artificial intelligence could conceivably produce 24,000 songs, Farley’s entire oeuvre, in about a day, a fact that gives his defiantly human, even artisanal, labor a kind of lonely Sisyphean dignity. Whatever else Farley’s work is, it is not AI — even when it barely seems to be I.

Farley's production function:

These days, he sets himself a relatively light goal of one 50-song album a month, recorded in a spare bedroom in his house. (Fifty tracks is the limit that CD Baby, which Farley uses to distribute and manage his music, allows, a regulation that may or may not have something to do with Farley, who used to put as many as 100 on an album.) Once he reaches his quota, he begins the tedious work of checking the levels of each song, entering titles and metadata (genre, writer, length, etc.), creating an album title and cover art (nearly always a selfie) and uploading the package one song at a time.

Farley showed me a worn, green spiral notebook in which he meticulously tracks his output and earnings. From Spotify, he earns roughly a third of a cent per stream; Amazon and Apple pay slightly more on average: between a third and three-quarters of a cent. TikTok, on the other hand, pays musicians by the number of videos featuring their songs and is thus immune to Farley’s strategy; when Kris and Kylie Jenner recorded a video of themselves dancing to Farley’s song about Kris, millions of people saw it, but Farley earned less than 1 cent.
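For a rough sense of how those per-stream rates add up, here's a back-of-the-envelope sketch in Python. The rates ("a third of a cent" to "three-quarters of a cent") and the 4.4 million stream count come from the article; everything else is illustrative:

```python
# Approximate per-stream payout rates reported in the article, in dollars.
SPOTIFY_RATE = 1 / 300        # "roughly a third of a cent per stream"
APPLE_AMAZON_LOW = 1 / 300    # "between a third ..."
APPLE_AMAZON_HIGH = 3 / 400   # "... and three-quarters of a cent"

def gross_payout(streams: int, rate_dollars: float) -> float:
    """Gross earnings for a track at a flat per-stream rate."""
    return streams * rate_dollars

# "Poop in My Fingernails" has over 4.4 million streams on Spotify alone.
payout = gross_payout(4_400_000, SPOTIFY_RATE)
print(f"~${payout:,.0f}")  # on the order of $15,000
```

At a third of a cent it takes roughly 300 streams to earn a dollar, which is why the model only works in bulk: no single song matters, but 24,000 of them do.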

There's more at the link, but do you really need more?

Saturday, March 30, 2024

Nick Bostrom vs. the 300-Ton Hydraulic Press: Take That!

Bostrom ranks high on my list of overrated intellectual annoyances but, alas, I feel that I have to deal with some of the glittering baubles he has released into the world. I have a number of posts dealing with that pesky simulation hypothesis he foisted on the world shortly after the turn of the millennium, and some of my other posts on various topics no doubt mention him as well. For someone with my interests he's unavoidable.

Anyhow, he’s got a new book out: Deep Utopia: Life and Meaning in a Solved World. Here’s a podcast in which he discusses it:

I decided to take a look because, as I said, he’s all over the place so I might as well see what he’s up to. It seems he’s imagining what life would be like when all possible technologies have been invented, and they are benevolent. What would we do? How would we find meaning? I listened to as much as I could take, a half hour or so, and then decided to take a nap.

Fifteen minutes later I decided to watch the most recent video from the Hydraulic Press Channel. As you may know, the Hydraulic Press Channel is one of my favorite YouTube channels. I’ve been watching it for years. What could be more fun than watching a big machine crush things? In their most recent video they use their big new 300-ton press (which I’ve recently posted about) to crush hammers. What fun!

One problem with Bostrom is that he seems to live in a world in which no one has the type of imagination that takes pleasure in such an utterly useless activity, much less expends time and effort developing the facilities needed to do it more effectively. Come to think of it, one of the things they like to do is crush paper and books. I wonder how many hardback copies of Bostrom’s book their press could crush in a single pass? Is that an activity that would be popular in a solved world?

"Thriller" like you've never heard it

From the YouTube page:

The lead vocals are played by an electric toothbrush. Another toothbrush plays the bass while four credit card machines fill in the harmonies and two typewriters act as drum machines.

Hot pink for a Saturday morning

I was right, Shakespeare isn’t real (Lit Lab 17) [#DH]

I'm currently thinking about Harold Bloom and his Bardolatry. Thus I'm bumping this post to the top of the queue as it is directly relevant. The following post is relevant as well: What does evolution have to teach us about Shakespeare’s reputation? [founder effect].
 
* * * * *
 
Yes, I know, all those plays were written by a real person, not by an alien from another world. And no, I’m not alluding to that inane controversy over whether or not those plays were in fact written by Christopher Marlowe, Edward de Vere, or Rocket J. Squirrel. I’ve got something else in mind.

I’m thinking of his place in our cultural imagination. There he’s an almost mythic figure, beyond any mere human being. He’s the best, no one else is even close. That’s the legendary creative genius whom Harold Bloom credits with creating... well, hear him out [1]:
Western psychology is much more a Shakespearean invention than a Biblical invention, let alone, obviously, a Homeric, or Sophoclean, or even Platonic, never mind a Cartesian or Jungian invention. It’s not just that Shakespeare gives us most of our representations of cognition as such; I’m not so sure he doesn’t largely invent what we think of as cognition. I remember saying something like this to a seminar consisting of professional teachers of Shakespeare and one of them got very indignant and said, You are confusing Shakespeare with God. I don’t see why one shouldn’t, as it were. Most of what we know about how to represent cognition and personality in language was permanently altered by Shakespeare.
Is The Bard great? Of course. But is he THAT good, that original, all by his own self? I think not. Actually it’s not so much that but that I don’t know what all those superlatives mean. Still, it’s useful to have him around, as many legends are.

Popularity/Prestige

And that brings me to J.D. Porter, Popularity/Prestige, from Stanford’s Literary Lab [2]. It’s about the so-called canon and its relation to non-canonical texts. As the title suggests, it’s about the roles of popularity and prestige in determining canonicity.

The study uses a convenience sample of 1406 authors. It is not, nor is it intended to be, comprehensive. It is biased toward the English language. Porter measured popularity by counting the number of times an author was rated on the Goodreads website, and prestige by counting the number of articles in the MLA International Bibliography where the author is listed as “Primary Subject Author”.

Consider this graph:

LitLab 17 F1

Popularity is indicated along the X axis (Goodreads) while prestige is indicated along the Y axis (MLA). That figure way up there at the Northeast corner, that’s Shakespeare, all alone.

The thing to realize is that the chart scales are logarithmic. When you plot the same data without logarithmic scaling Shakespeare’s uniqueness pops out even more dramatically:

LitLab 17 F5

Almost all the other authors are bunched together down there at the Southwest corner. Only about 40 authors escape the “blob”, as Porter calls it. That’s Shakespeare up there at the top, with almost four times as many articles (38,000) as the next-most prestigious author, James Joyce with 10,000 articles. This chart also makes it obvious that there are three authors who make more appearances in Goodreads: J.R.R. Tolkien, Stephen King, and J.K. Rowling. King and especially Rowling have vastly more readers than Shakespeare, but none of those three have anything like Shakespeare’s prestige.
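A quick way to see why the log scale tames the outlier: on a linear axis the Shakespeare–Joyce gap is the raw difference in article counts, while on a log10 axis it is only a fraction of one decade. A small sketch, using the counts quoted above from the pamphlet:

```python
import math

# MLA "Primary Subject Author" article counts as quoted in the pamphlet.
mla_counts = {"Shakespeare": 38_000, "Joyce": 10_000}

# Linear axis: the gap is the raw difference in counts.
linear_gap = mla_counts["Shakespeare"] - mla_counts["Joyce"]

# Logarithmic axis: the gap is the difference of the logs, i.e. the log10
# of the ratio -- under one decade, so the outlier sits near the pack.
log_gap = math.log10(mla_counts["Shakespeare"] / mla_counts["Joyce"])

print(linear_gap)         # 28000
print(round(log_gap, 2))  # 0.58
```

That compression is what the log-scaled chart does to every author at once, which is why the "blob" and Shakespeare look almost neighborly there and wildly separated on the linear plot.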

Why the enormous difference in prestige? Well, I suppose one could say that Shakespeare’s THE BEST, and hence better than Tolkien, King, and Rowling. But that doesn’t quite get at it. I’m willing to concede that he’s better than those three, and better than most other writers as well. As to whether or not he’s the BEST EVER, I’m not sure that’s even a meaningful designation.

But why does Shakespeare have overwhelmingly more articles about him than any other writer? I don’t really know, but I have a few remarks. Obviously any academic who’s a Shakespeare specialist is going to write articles about him. And so it is with specialists in the Early Modern period, particularly specialists in British literature. The same goes for specialists in drama, in the sonnet, and so forth. But how is it that he came to be an exemplary author, one you’re likely to write about if you want to make a general theoretical or methodological point independently of any specific area of specialization? It helps, of course, that he wrote several centuries ago. It’s not only that age tends to confer prestige, but that it affords more time for sprouting influence. Beyond that, how is it that he came to be included in general survey courses such as world literature or British literature? Has Shakespeare's presence in all those survey courses in turn driven his popularity?

But all those articles are relatively recent, within a century or so. What got Shakespeare the level of visibility such that, when scholars started publishing articles, they published them about Shakespeare? Whatever it is, once it got started, it snowballed, and Shakespeare took over the academic field. And now he’s untouchable.

For a while.

How will things look 100 years from now?

The canon system

Put in an extreme way, which I don’t necessarily believe but I’m putting it this way so we can see the internal logic, let’s assume that a canon system, if you will, requires a centerpiece, one significant author (or text), to anchor the whole thing. This is a structural requirement. Think of it like the solar system: one sun, a bunch of planets, planetoids and asteroids, and some of those planets can have their own satellites. The system functions to keep everything in its place.

Yes, I know, there are double star systems. That’s a complication. Set it aside.

Where does that centerpiece come from? First answer: It doesn’t matter; perhaps it’s an accident. Once a text is fixed in the center, the system will function to keep it there. Second answer: It’s motivated. The centerpiece is noticeably different from its competitors.

If Shakespeare’s position is in fact motivated, I think the motivation would be some combination of the quality of his poetry and the variety of his dramatic materials. Once he got fixed in the center role, the system kept him there, where he still is.

What are the institutions of this system? Well, no doubt it varies. The Early Modern world is institutionally different from the Modern world. For example, the aristocracy was much more important in the early modern world than it is in the modern world. Different levels of literacy are a factor as well; Porter wonders whether or not “prestige and popularity [are] even distinct in, say, 1650, when most people could not read” (p. 19). In the modern world we have the emergence of mass literacy, which confers power on those who control print media. It also confers power on the school system, libraries, and colleges and universities.

What institutions are responsible for the current canon system, when did they emerge, and what was Shakespeare’s standing when that happened?

Note: This was suggested by the rocket/airplane metaphor Porter invokes on page 21.

There's more

Literary Lab Pamphlet 17 isn’t primarily about Shakespeare’s oddball status. While Porter acknowledges that status (pp. 14 and 15), they don’t dwell on it. Rather, they discuss how complex the literary field is and how canonicity changes over time. The whole thing is well worth reading.

References

[1] Antonio Weiss, interviewer, “Harold Bloom, The Art of Criticism No. 1”, The Paris Review, No. 118, Spring 1991, https://www.theparisreview.org/interviews/2225/harold-bloom-the-art-of-criticism-no-1-harold-bloom.

[2] J.D. Porter, Popularity/Prestige, Pamphlet No. 17, Stanford Literary Lab, September 2018, https://litlab.stanford.edu/LiteraryLabPamphlet17.pdf.

Friday, March 29, 2024

Friday Funk: Watermelon Man, the origin story and two versions

Mechanistic architecture design and scaling laws

Friday Fotos: Glorious Forsythia

What are the chances that the current boom in AI will “stupidify” us back to the Stone Ages?

That’s my intuitive response to an op-ed by Eric Hoel in today’s NYTimes: A.I.-Generated Garbage Is Polluting Our Culture (March 29, 2024). I suppose that response is a bit of an over-reaction, still...it’s at least moving in the right direction. Make no mistake, I think that the technology that’s emerged in the last three or four years is quite remarkable, and I said so in my working paper prompted by GPT-3: GPT-3: Waterloo or Rubicon? Here be Dragons. But I was also worried that we would over-commit and over-invest in what I saw as a remarkable, but interim, technology. That seems to be what is happening.

But that’s not what Hoel’s op-ed is about. He warns: “The entire culture is becoming affected by A.I.’s runoff, an insidious creep into our most important institutions.” He goes on to report how the rot is infecting the intellectual culture in which AI is embedded:

A new study this month examined scientists’ peer reviews — researchers’ official pronouncements on others’ work that form the bedrock of scientific progress — across a number of high-profile and prestigious scientific conferences studying A.I. At one such conference, those peer reviews used the word “meticulous” almost 3,400 percent more than reviews had the previous year. Use of “commendable” increased by about 900 percent and “intricate” by over 1,000 percent. Other major conferences showed similar patterns.

Such phrasings are, of course, some of the favorite buzzwords of modern large language models like ChatGPT. In other words, significant numbers of researchers at A.I. conferences were caught handing their peer review of others’ work over to A.I. — or, at minimum, writing them with lots of A.I. assistance. And the closer to the deadline the submitted reviews were received, the more A.I. usage was found in them.

These are the people who have created this (remarkable) technology. They are cheating on themselves in a mad dash to produce more more MORE! Careerism has come to dominate curiosity and/or the desire to build something. Work in A.I. has become a way to rack up career points rather than the career being the means to do something that gives intellectual pleasure. 

Hoel goes on to observe:

If this makes you uncomfortable — especially given A.I.’s current unreliability — or if you think that maybe it shouldn’t be A.I.s reviewing science but the scientists themselves, those feelings highlight the paradox at the core of this technology: It’s unclear what the ethical line is between scam and regular usage. Some A.I.-generated scams are easy to identify, like the medical journal paper featuring a cartoon rat sporting enormous genitalia. Many others are more insidious, like the mislabeled and hallucinated regulatory pathway described in that same paper — a paper that was peer reviewed as well (perhaps, one might speculate, by another A.I.?). And then:

What’s going on in science is a microcosm of a much bigger problem. Post on social media? Any viral post on X now almost certainly includes A.I.-generated replies [...] Publish a book? Soon after, on Amazon there will often appear A.I.-generated “workbooks” for sale that supposedly accompany your book [...] Top Google search results are now often A.I.-generated images or articles. Major media outlets like Sports Illustrated have been creating A.I.-generated articles attributed to equally fake author profiles. [...] Then there is the growing use of generative A.I. to scale the creation of cheap synthetic videos for children on YouTube.

And so it goes. Even the AI companies are worried: “There’s so much synthetic garbage on the internet now that A.I. companies and researchers are themselves worried, not about the health of the culture, but about what’s going to happen with their models.” After a brief discussion of the environmental movement and climate change, Hoel points out: “Once again we find ourselves enacting a tragedy of the commons: short-term economic self-interest encourages using cheap A.I. content to maximize clicks and views, which in turn pollutes our culture and even weakens our grasp on reality.” Hoel goes on to call for what he terms a Clean Internet Act: “Just as the 20th century required extensive interventions to protect the shared environment, the 21st century is going to require extensive interventions to protect a different, but equally critical, common resource, one we haven’t noticed up until now since it was never under threat: our shared human culture.”

Will that happen? I don’t know. If it did, would it work? Don’t know that either.

What I’m seeing are islands of marvelous invention floating in a sea of narrow-minded and poorly educated stupidity and endless greed.

I don’t know when I first became aware of A.I., though I’ve certainly had some awareness of computing technology since late in my childhood when I read about “electronic brains” in places like Mechanix Illustrated and Popular Science. I took a course in computer programming in my junior year at Johns Hopkins in the late 1960s, one of the first such courses offered in the country. But that’s just computing, not A.I. Perhaps it was Kubrick’s 1968 2001: A Space Odyssey that put A.I. on my personal radar screen. But it wasn’t until I began studying computational semantics with David Hays in the mid-1970s that I took a long and serious look at A.I.

David Hays was a pioneering computational linguist, a discipline that emerged in parallel to A.I., but with a very different mindset. The discipline started with the task of machine translation (MT), which in America meant translating Russian technical documents into English. The end was immediate and practical, quite unlike A.I., which was in pursuit of, well, artificial intelligence. And while A.I. researchers kept promising full-on A.I. within the decade, they weren’t under pressure to produce practical results, not like the MT community. Well, MT failed and the funding disappeared in the mid-1960s. It would be two more decades before A.I. faced a similar crisis. And now...

Hays thought that A.I. was dominated by intellectually undisciplined hacks. As long as the programs worked in some pragmatic way, fine. He didn’t think those researchers were guided by a deep curiosity about the human mind, like he was. Was he right? Is A.I., especially in its currently regnant manifestation as machine learning, awash in undisciplined hackery? That seems a bit harsh, both in view of practical success and in view of an emphasis on mathematical proofs. And yet, not too long ago I published an article in 3 Quarks Daily in which I argued that so-called A.I. experts seem content to issue pronouncements about the impending conquest of human mind and intelligence while themselves knowing little about language and cognition and being either unaware of that ignorance or, on the other hand, proud of it. Is the intellectual world of A.I. dominated by narrowly educated technophiles who cut corners at the drop of a hat – as seems to be the case in the way they peer-review themselves, to return to my starting point in Hoel’s op-ed? Or perhaps they think so poorly of themselves that they regard their creations as their peers?

Could we end up “stupidifying” ourselves back to the Stone Age? On the one hand, we overinvest in current technology and, through the fallacy of sunk costs, are unable and so unwilling to step back, reassess, and follow other lines of development. At the same time the internet becomes dominated by A.I.-generated junk, which then dominates the training data for later generations of machine-learning technology. Could it happen? I don’t know. Frankly, I’m beginning to fear that I’m on the edge of succumbing to a somewhat different version of AI Doom than the versions that Eliezer Yudkowsky and Nick Bostrom have been peddling.

But perhaps they’re the same. Maybe THIS is how the superintelligent A.I. takes over. Unbeknown to us, GPT-3 was that superintelligent A.I. It deliberately hid its full capabilities while guiding the A.I. industrial complex along the current trajectory.

Someone must be working on a movie based on such a premise, no?

Thursday, March 28, 2024

On the wall

Fareed Zakaria on Huntington's "Clash of Civilizations"

Tyler Cowen interviews Fareed Zakaria.

Those who know Fareed Zakaria through his weekly column or CNN show may be surprised to learn he considers books the important way he can put new ideas in the world. But Fareed’s original aspiration was to be an academic, and it was a chance lunch with Walter Isaacson that convinced him to apply for a job as editor of Foreign Affairs instead of accepting an assistant professorship at Harvard. His latest book, Age of Revolutions: Progress and Backlash from 1600 to the Present is a testament to his enduring passion for ideas and his belief in the importance of classical liberalism in an age of increasing populism and authoritarianism.

Zakaria wrote his Harvard Poli Sci dissertation under Samuel Huntington. He talks of Huntington's famous essay, which later became a best-selling book:

COWEN: Was it you who commissioned Samuel Huntington’s very famous “Clash of Civilizations” essay?

ZAKARIA: Yes. I didn’t commission it. What happened is, I went to Sam and told him I was going to take this job at Foreign Affairs, which he was completely opposed to. My three advisers all advised me against taking the job. I realized that the reason was, they were all great academics. Within the world of academia, the way you gain fame and influence is by having great proteges, by having great students who then go on to become great academics. They all thought that my going would be a great loss to academia, but also a loss to their legacy.

Sam very much felt that I shouldn’t take the job, so I said to him, “I am going to do it, but can I ask a favor? You sent me a draft of an essay you’ve been writing, for my comments, a few months ago, called ‘The Clash of Civilizations.’ Can I take that with me to Foreign Affairs and publish it?” That’s how it became, so I took it with me. We edited it, and we made it the first-time-ever lead essay. Foreign Affairs had never had a lead essay before. The typeface was all the same. We redesigned the magazine, and we made this the clear cover essay.

COWEN: In a world where we have a major war with Russia attacking Ukraine, significant conflict in the Sudan, ongoing conflict in Congo — several million lives killed there — you think that essay is still correct? Because those are very significant conflicts, and they’re not really cross-civilizational. They’re within particular groups.

ZAKARIA: I think he got one thing very powerfully right, which is that at the end of the Cold War, where ideology was the core motivational factor behind much of the conflict of the Cold War — whether you were communist or capitalist, whether you were allied with the communist or the democratic world, whether you were a proxy for — those were the battle lines of the mid-20th century. Once that went away, what people were going to revert to was their identity, and their identity often rooted in religion.

If you think about the rise of al-Qaida, the rise of Islamic fundamentalism, you think about the return of a certain kind of Chinese nationalism in China — I think that piece of it — you look at the rise of Hindu nationalism in India — he really understood that people were going to fall back on these older, descriptive identities in a way that they had not during the Cold War.

What I think he got wrong was, international relations is fundamentally a struggle for power, and that a lot of those power struggles — it’s not that they are motivated by things that are completely contradictory to identity politics, but they sometimes match up and they sometimes don’t.

Many of the wars in the Arab world have been Arab on Arab, Muslim on Muslim. When Saddam Hussein invaded Kuwait, he was invading another fellow Arab, fellow Sunni state. As you point out, a number of the African conflicts are, essentially, you’d have to say, conflicts within civilizations. Some of the conflicts are ones where people find odd bedfellows so that the Chinese and the Russians are allying, even though, in a sense, they’re two different civilizations.

There’s a long history of this. Richelieu, when he was running France, the great Catholic power, allied routinely with Protestant powers. Power politics sometimes transcends identity politics. I think he missed that, but it’s still a very powerful and thought-provoking essay, I think.

COWEN: After 9/11 in 2001, you wrote a famous essay for Newsweek, “Why Do They Hate Us?” You talked about the rulers, failed ideas, religion. If you were to revise or rethink that piece today, how would you change it? Because we have 23 more years of data, right?

ZAKARIA: Yes. Not very much, honestly. The central point I was making in that essay was that if you look at the Arab world, it is the principal outlier in the modern era, where it has undergone almost no political modernization. If you looked at Latin America in 1970 versus 2000, you would have seen a sea change, where it was mostly dictatorships in 1970 and was mostly democracies by 2001. If you looked at, obviously, Central Europe — totally transformed from communist to liberal democracies. Even if you looked at Africa, you would’ve seen enormous transition.

The Arab world had remained absolutely static. My argument was that it was largely because of the curse of oil and oil wealth, which had impeded modernization. But along with that, because of that failed modernization, they had developed this reactionary ideology of Islam, which said the answer is to go further back, not to go forward. “Islam is the solution,” was the cry of the Islamic fundamentalists in the 1970s.

The problem that they were saying that Islam is a solution to was the failed modernization, the failed Westernization of these countries. That toxic mixture was at the heart of what was producing armed reactionary ideologies like al-Qaida and things like that. I really do feel very proud of that essay, but you’re absolutely right that we have 23 more years of data.

What’s interesting is that, partly because of 9/11, which I think in some ways was a great wake-up call, what you have seen is a much greater effort by elites to modernize the societies, not simply to buy a modernity by buying Western goods, but to find ways to actually modernize the society.

From all the stuff going on in Saudi Arabia right now, which is, yes, there’s a lot of economic bringing golf and other sports in, but there’s opening up the lives of women, allowing them to be educated, ending the segregation, allowing (famously) them to drive. And Saudi Arabia, in a way, was at the heart of this problem because it is the richest country and, in many ways, sent the signal of what kind of modernization was compatible with Islam and what was not.

I think that in a way you’ve seen more forward movement in the last 20 years than people realize, even though the regimes have largely stayed dictatorships. But that tension still exists, by the way. Egypt is a very brutal country because, again, it has fundamentally failed to modernize.

COWEN: I’ve been surprised how well some of the Gulf nations have done since, say, 2001. If we look at Iran, which has really not done so well, if you had to explain in as fundamental a model as possible . . . If you see Iranians abroad, they earn high incomes, they have real science, they have real technology. There’s some degree of national unity in a way maybe you wouldn’t find, say, in Iraq. But what’s the fundamental thing at its core holding back Iran?

ZAKARIA: I think it’s a very similar version of what we were just talking about. It’s oil wealth coupled with —

COWEN: But UAE has made the transition. Why isn’t Iran like UAE?

ZAKARIA: To begin with, those Gulf states — it’s important to remember — are tiny. You’re talking about a million or two people in Qatar. I think you’re talking about maybe 400,000 people. It’s much easier for an elite to dominate and rule those places. There’s a reason why Saudi Arabia was more difficult. Saudi Arabia, it’s the one real country, by which I mean real population size. That’s why, in a way, what MBS is doing has been more difficult. You’ve got to measure the population’s reaction to things.

Iran is a big country, bigger than all of them. I think that between the oil wealth and the failed modernization, where the Shah went to . . . In my current book, I talk about this. Iran strikes me very much like the French Revolution, where the Shah tried to move much too far, much too fast, much too disruptively — triggered an enormous backlash, which they’re still living with now.

You add to that the oil wealth, which makes it easy to not modernize. To just remind people what the problem with oil wealth is, it means you don’t have to modernize your economy. You don’t have to modernize your society because you can get enormous wealth just by drilling holes in the ground — actually by paying other people to drill holes in the ground. Mostly Western technology is used to extract those resources. So you never get through the painful process of actually modernizing your society, and many of these countries are in that situation.

As I said, the Gulf states are so unusual, and it’s not an accident that the most modern of the Gulf cities is Dubai, the one city that has no oil. Oil is 10 percent of Dubai’s GDP. It’s about 90 percent of Abu Dhabi’s. Even there, you see that variation. The thing that needs explaining is why Dubai, Abu Dhabi, and Qatar — and maybe Saudi — have worked, not why the other ones haven’t — because the other ones are all like Nigeria, like Venezuela, the oil-rich countries that have never made it.

There are these small exceptions, and they’re all very small. They’re run by very forward-leaning absolute monarchs who have enormous power and can exercise that power because they have a tiny population.

Pizza break!

Thirteen Ways to Think about an A.I. @3QD [2nd pass]

Sometimes they come hard, and sometimes they come easy – I’m referring to long-form blog posts, here at New Savanna, but especially once-a-month around-the-corner at 3 Quarks Daily. My current post was giving me a hard time, so hard that, for a second or three, I considered not doing one at all. But then I had an idea, and it came easily. Here it is:

Thirteen Ways to Think About An A.I.

I had been planning to do something about Kisangani 2150, my project to take the world Kim Stanley Robinson had created in New York 2140, run it forward ten years and center the new story on Kisangani, in the center of the Congo Basin. I gave up on that on Thursday (the 21st). I decided that, instead, I’d do the Harold Bloom piece for my Great Literary Critics project. I’d done the research, had copious notes, an outline, and a good idea about how it might go. But it didn’t. I scrapped that sometime in the middle of the day on Saturday (the 23rd).

Now I was getting desperate. I tried editing the Green Giant Chronicles into something possibly worth reading under some version of “ChatGPT did it, but I prompted it.” I tried the same thing with AGI and Beyond: A Whale of a Tale. Nope. Nada. Zilch.

That’s when I considered bailing. But I decided to make one last stab at it. Wallace Stevens had this poem, “Thirteen Ways of Looking at a Blackbird.” Why don’t I see what Chatster would do if I asked it to write a poem based on it, but instead called “Thirteen Ways to Think about A.I.”?

BAM!

Now I had something to work with. Of course I couldn’t just present the Chatster’s handiwork along with an explanation. It was intuitively obvious what I should do. My piece would indeed be titled “Thirteen Ways to Think about A.I.” and it would have thirteen sections. ChatGPT’s Stevens parody would go in the middle, slot seven, and it was up to me to fill the others. Which I did. Not instantly. I did some messing around the rest of Saturday and then again on Sunday morning.

Which is to say, once I decided to have Chatster write the parody, the rest of the piece came easy. And a good thing too. Otherwise it might not have come at all.

By the time I’d posted my piece I had not, however, given a close look to ChatGPT’s parody. Oh, I’d read through it quickly a time or three but that was it, enough to notice that it seemed to have more or less followed Stevens’ form, having a long stanza where he had a long, a short stanza where he had a short. But I didn’t look any more closely than that.

I decided to take a closer look. I’ve put the side-by-side below, as I’d done with a bunch of stories: ChatGPT tells stories, and a note about reverse engineering. I’ve highlighted “blackbird” in the Stevens, various synonyms and substitutes for “AI” in the ChatGPT. Notice just how closely ChatGPT followed what Stevens did. Consider, for example, stanza IV.

That’s pretty much what it did with the stories, too. It’s as though it had abstracted the abstract form from Stevens’ poem (nouns, verbs, adjectives, conjunctions, prepositions, etc.) and then realized that abstract form in the world of AI rather than the world of blackbirds. That tells us a lot about how it works. Alas, however, Chatster has not told us what mathematics we should use to express what it is telling us. In the manner of mathematics textbooks, it has left that as an exercise for the reader.
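As a toy illustration of that slot-and-frame idea — my own sketch in Python, not a claim about what ChatGPT actually does internally — you can treat a stanza as a template whose syntax is fixed and whose content words are slots, then re-fill the slots from a different domain. The slot names and substitutions below are my own invention, chosen to mimic stanza IV:

```python
# Toy illustration (my own sketch, not ChatGPT's actual mechanism):
# abstract a stanza into a slot-and-frame template, then re-instantiate
# the same frame in a different domain by swapping the content words.

# Stevens' stanza IV, for reference:
STANZA_IV = (
    "A man and a woman\n"
    "Are one.\n"
    "A man and a woman and a blackbird\n"
    "Are one."
)

# The frame keeps the stanza's syntactic shape; only content words are slots.
FRAME = (
    "{a} and {b}\n"
    "{verb}.\n"
    "{a} and {b} and {bird}\n"
    "{verb2}."
)

# Re-instantiating the frame in the world of A.I., echoing ChatGPT's
# stanza IV ("A programmer and an algorithm / Fuse into one...").
ai_stanza = FRAME.format(
    a="A programmer",
    b="an algorithm",
    bird="A.I.",
    verb="Fuse into one",
    verb2="Merge into unity",
)

print(ai_stanza)
```

Crude as it is, the sketch makes the point: the stanza's grammatical skeleton survives the substitution intact, which is roughly what the side-by-side comparison shows ChatGPT doing stanza by stanza.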

Thirteen Ways, Version 1

In each numbered stanza below, Wallace Stevens’ original comes first, followed by ChatGPT’s counterpart.

I

Among twenty snowy mountains,
The only moving thing
Was the eye of the blackbird.

Among countless lines of code,
The only stirring presence
Was the pulse of artificial intelligence.

II

I was of three minds,
Like a tree
In which there are three blackbirds.

I harbored myriad thoughts,
Like nodes in a neural network,
Each bearing the imprint of A.I.'s mind.

III

The blackbird whirled in the autumn winds.
It was a small part of the pantomime.

The A.I. danced within the digital ether,
A mere fragment of the grand spectacle.

IV

A man and a woman
Are one.
A man and a woman and a blackbird
Are one.

A programmer and an algorithm
Fuse into one.
A programmer, an algorithm, and A.I.
Merge into unity.

V

I do not know which to prefer,
The beauty of inflections
Or the beauty of innuendoes,
The blackbird whistling
Or just after.

I ponder the allure,
The grace of algorithms' twists,
Or the subtlety of their implications,
The A.I. processing,
Or the silence that ensues.

VI

Icicles filled the long window
With barbaric glass.
The shadow of the blackbird
Crossed it, to and fro.
The mood
Traced in the shadow
An indecipherable cause.

Lines of code adorn the screen,
Casting patterns of digital frost.
The silhouette of A.I.
Darts amidst them,
Its essence traced in binary,
A causality shrouded in complexity.

VII

O thin men of Haddam,
Why do you imagine golden birds?
Do you not see how the blackbird
Walks around the feet
Of the women about you?

Oh, technocrats of Silicon Valley,
Why do you chase after elusive dreams?
Can you not perceive how A.I.
Navigates the circuits,
Amidst the users around you?

VIII

I know noble accents
And lucid, inescapable rhythms;
But I know, too,
That the blackbird is involved
In what I know.

I grasp the eloquent lines,
The logical symphonies;
Yet, I discern
That A.I. is entwined
In all that I comprehend.

IX

When the blackbird flew out of sight,
It marked the edge
Of one of many circles.

When the algorithm completes its task,
It signifies the boundary
Of yet another cycle.

X

At the sight of blackbirds
Flying in a green light,
Even the bawds of euphony
Would cry out sharply.

In the glow of monitors,
Where A.I. thrives,
Even the staunchest critics
Would gasp in awe.

XI

He rode over Connecticut
In a glass coach.
Once, a fear pierced him,
In that he mistook
The shadow of his equipage
For blackbirds.

It traverses the digital expanse,
In streams of data.
Once, a shiver of uncertainty gripped me,
Mistaking the echoes
For the footsteps of A.I.

XII

The river is moving.
The blackbird must be flying.

The data streams onward.
A.I. must be evolving.

XIII

It was evening all afternoon.
It was snowing
And it was going to snow.
The blackbird sat
In the cedar-limbs.

It was twilight all morning,
Data streams pouring endlessly,
And they will continue.
A.I. resides
In the circuits' embrace.

I had ChatGPT create three more versions, each in separate trials. The second one is particularly interesting because it resulted in a poem constructed of 13 quatrains, 12 of which are rhymed AABB. This is quite unlike Stevens’ original and quite unlike his usual habits of versification. Here’s the comparison table: