Friday, March 13, 2026

Friday Fotos: What’s a tablescape?

Just what the name suggests, like a landscape only on a table top.

I’ve been taking photos of my meals and posting them here since September of 2018. Since I eat my meals indoors where they’re arrayed on the top of a table, my food photos will often, though not necessarily always, catch the table itself. What I mean by a tablescape, however, is more specialized. Here’s a recent tablescape:

To get that shot I set the camera on the table, pointed it in an appropriate direction, and snapped the shutter. That means your point of view is about an inch or an inch-and-a-half above the table itself and that your angle of regard is parallel to the tabletop. Just as landscapes are photographed or painted from some location on the land itself, so a tablescape is photographed from a location on the table itself.

This, in contrast, is NOT a tablescape, though a table is quite visible:

But I’m holding my camera in my hand so that I can get a particular shot. In this case, I’m interested in how the glimpse of the placemat you see through the carafe is displaced, by refraction, relative to what you see through the air. Couldn’t get that shot in a tablescape.

There is an element of chance in taking a tablescape. Why? Because you can’t line up the photo in the normal way, either through a viewfinder or viewscreen. I suppose you could try, but it would be difficult, not terribly successful and not worth the effort. Fact is, the chance element is one reason for taking tablescapes. You don’t quite know what you’re going to see.

I doubt that I would ever have deliberately taken this photo, but as a tablescape it’s fine.

Or is it? And that’s why I’m taking these shots: to force myself to think about each photo. Is this an image I want to keep, to show to others? Why? When you shoot, say, the Empire State Building, which I can do quite easily, those questions don’t arise quite so insistently. Why not? Because the Empire State Building is an iconic structure and, as such, is certified photo-worthy. Shooting certified photo-worthy subjects is a no-brainer.

Here’s another tablescape:

Notice the expanse of the table itself in the foreground. That’s a typical feature of table shots. I cropped most of the table out in that first shot up there. Notice also that in the first shot I’m pointing out the window, where you see street lights in Hoboken and tall buildings in Manhattan. This shot, in contrast, is pointed at the interior of the restaurant. Look at the upper left; it looks like shadows of some plants cast on the side of a column. What plants, where? Notice the reflection of the carafe on the shiny surface of the table. There are shadows cast across the table as well.

Here’s one last tablescape, without comment:

Thursday, March 12, 2026

Night codes

AI as coder, reports from the trenches

Clive Thompson, Coders Coded Their Job Away. Why Are So Many of Them Happy About It? NYTimes, Mar. 12, 2026.

He and Brennan-Burke, who is 32, are still software developers, but like most of their peers now, they only rarely write code. Instead, they spend their days talking to the A.I., describing in plain English what they want from it and responding to the A.I.’s “plan” for what it will do. Then they turn the agents loose.

A.I. being A.I., things occasionally go haywire. Sometimes when Claude misbehaves and fails to test the code, Ebert scolds the agent: Claude, you really do have to run all the tests.

To avoid repeating these sorts of errors, Ebert has added some stern warnings to his prompt file, the list of instructions — a stern Ten Commandments — that his agents must follow before they do anything. When you behold the prompt file of a coder using A.I., you are viewing a record of the developer’s attempts to restrain the agents’ generally competent, but unpredictably deviant, actions.

A coder is now more like an architect than a construction worker.

I looked at Ebert’s prompt file. It included a prompt telling the agents that any new code had to pass every single test before it got pushed into Hyperspell’s real-world product. One such test for Python code, called a pytest, had its own specific prompt that caught my eye: “Pushing code that fails pytest is unacceptable and embarrassing.”

Embarrassing? Did that actually help, I wondered, telling the A.I. not to “embarrass” you? Ebert grinned sheepishly. He couldn’t prove it, but prompts like that seem to have slightly improved Claude’s performance. [...]

Computer programming has been through many changes in its 80-year history. But this may be the strangest one yet: It is now becoming a conversation, a back-and-forth talk fest between software developers and their bots.

This vertiginous shift threatens to stir up some huge economic consequences. For decades, coding was considered such wizardry that if you were halfway competent you could expect to enjoy lifetime employment. If you were exceptional at it (and lucky), you got rich. Silicon Valley panjandrums spent the 2010s lecturing American workers in dying industries that they needed to “learn to code.”

Now coding itself is being automated. To outsiders, what programmers are facing can seem richly deserved, and even funny: American white-collar workers have long fretted that Silicon Valley might one day use A.I. to automate their jobs, but look who got hit first! Indeed, coding is perhaps the first form of very expensive industrialized human labor that A.I. can actually replace.

Why programmers like their AI coders:

The enthusiasm of software developers for generative A.I. stands in stark contrast to how other Americans feel about the impact of large language models. Polls show a majority are neutral or skeptical; creatives are often enraged. But if coders are more upbeat, it’s because their encounters with A.I. are diametrically opposite to what’s happening in many other occupations, says Anil Dash, a friend of mine who is a longtime programmer and tech executive. “The reason that tech generally — and coders in particular — see L.L.M.s differently than everyone else is that in the creative disciplines, L.L.M.s take away the most soulful human parts of the work and leave the drudgery to you,” Dash says. “And in coding, L.L.M.s take away the drudgery and leave the human, soulful parts to you.”

There's much more at the link.

The Norwegian Breakaway

Wednesday, March 11, 2026

Maybe the AI industry is waking up, at long last

From the tweet:

This tells you everything about how the smart money is actually modeling AI’s future. They’re not pricing AMI on a revenue multiple. They’re pricing it on the probability that LLMs hit a ceiling. And if you look at the investor list, Nvidia, Samsung, Toyota Ventures, Dassault, Sea, these are companies that need AI to understand physics, geometry, and force dynamics. 

Which one doesn't belong? [Hint: Yum]

Why Trump's corruption is the single most disturbing thing about his presidency [+ Yazoo!]

A friend recently asked me for critiques of Trump’s performance. The first thing that came to mind was his corruption and, though I went on to say some other things, I kept thinking, “corruption, corruption, corruption.” Why? Can’t we just set the corruption aside as an unfortunate personal attribute and focus on the results of his policies? I thought about it and decided that, “no, we can’t.” Why? Because his corruption undermines the basis of modern institutional structures. It undermines democracy itself. 

The following conversation with Claude 5.4 explains. Also, there’s a bonus at the end, The Yazoo Land Fraud. It’s a whopper.

* * * * *

First, I want you to review the paper, The Evolution of Cognition, in the project area. Now, here’s a passage from a book [David] Hays wrote about the evolution of technology:

Let me begin, this time, closest to home, with New York. The most basic principle of organization here and now is that an officer of government must separate the rights and obligations of his office from those of his or her person. If the officer is given a budget of $10,000,000, the person must not take $1 of that--as numerous officers have learned only by going to prison. Giving special help to friends and relations is also forbidden. Nor may the officer look to daddy's college roommate, who now occupies a much higher office, for special help in solving a hard problem. This principle belongs to rank 3; violation is now said to be corrupt, but in rank 2 "corruption" is the way of life.

Now, that sense of corruption is consonant with the one you’ve been advancing in this conversation. But now it’s been placed in the larger context of the ranks account of cultural evolution. The separation of one’s own interests from the duties and obligations of one’s organizational role is central to modern society. Without it, it seems to me, modern society would be very difficult, but perhaps not impossible.

But it does seem to me that such separation is central to democracy. Without that separation, those at the top of the social order will feel and be free to maintain their position of advantage and those at the bottom would have no access to advancement.

Having now reviewed the paper, I can see how neatly the Hays passage places our entire conversation within the ranks framework — and it sharpens everything considerably. The key move Hays makes is identifying the separation of role from person as specifically a Rank 3 achievement. It's not just a moral or political norm — it's a cognitive-organizational one. Rank 3 is the rank of algorithms, of systematic procedure, of the impersonal rule applied consistently regardless of who is asking. The very idea that an officer's budget is not the officer's money requires a level of abstraction — the office as a distinct entity from the person occupying it — that Rank 2 organization simply doesn't sustain. In Rank 2, the person is the role, which is why what we call corruption was then simply the normal logic of how things worked. Daddy's college roommate in the higher office is not a bug in Rank 2, it's the system.

Your point about democracy is the crucial extension though, and I think it goes even deeper than access to advancement, important as that is. Democracy requires that losing be tolerable — that the party that loses an election accepts the result and peacefully transfers power. That is only possible if the institutional role of "president" or "minister" is clearly separable from the person occupying it. If power is personal rather than institutional, losing it is existential, not merely political. You can't peacefully hand over something that is felt to be constitutive of your identity and social existence. Which is why the collapse back into Rank 2 personal-relationship logic is so specifically lethal to democratic governance — it makes the normal mechanics of democracy feel like annihilation to those who lose.

And of course the peaceful transfer of power is a central problem of political organization. It was central to Shakespeare’s history plays, and is one reason for adopting a democratic form of government.

Yes, and it's one of those cases where the historical and the literary illuminate each other directly. Shakespeare's history plays are obsessed with exactly this problem — how does power move from one holder to the next without destroying the social order in the process? And the answer that medieval and early modern England kept reaching for was dynastic legitimacy: the king's body natural dies but the king's body politic — the office, the crown — persists and passes by hereditary right. Ernst Kantorowicz's great book The King's Two Bodies traces exactly this legal-theological construction, and it's directly relevant to Richard II and the Henriad.

But the dynastic solution is enormously fragile, as Shakespeare shows again and again. It depends on there being an unambiguous legitimate heir, on that heir being competent, on powerful barons accepting the arrangement, on the Church's sanction — any one of these can fail and the whole mechanism breaks down into civil war. The Wars of the Roses, which shadow the entire sequence of history plays, are essentially what happens when dynastic legitimacy becomes contested. And contested legitimacy means that power can only be settled by force, which is catastrophically expensive in lives and social order.

Democracy solves this — at its best — by separating the question of who holds power from the question of legitimate succession entirely. The procedure is the legitimacy. You don't need the right bloodline or divine sanction; you need a credible count of votes and an institutional framework that everyone accepts as binding. The office passes not through the body of the king but through the mechanism of the election.

Which is why attacks on the integrity of elections are so specifically dangerous — they don't just dispute a particular outcome, they corrode the procedural legitimacy that makes peaceful transfer possible at all. You're pulling out the load-bearing member, to return to your earlier metaphor.

And this connects back to the ranks framework in an interesting way. The democratic solution is a Rank 3 achievement — it requires the abstraction of legitimate procedure from personal authority. But it's perpetually vulnerable to Rank 2 pressures — the loyalty to persons over institutions, the feeling that my man losing means I have lost something existential. The democratic form sits on top of a substrate of human social psychology that was shaped long before Rank 3 cognition existed, and that substrate never goes away. 

* * * * *

The Yazoo Land Fraud

My friend Michael Liss told me about what may be the single biggest act of corruption in America's political history. It's from the old days: the Yazoo Land Fraud. Here's what Claude had to say about it:

The Yazoo Land Fraud is a fascinating and somewhat underappreciated episode in early American history, touching on corruption, populist backlash, constitutional law, and the politics of the new republic all at once.

The rush to money and IP theft in the AI world

Tuesday, March 10, 2026

Contemplating French toast – Yummmm.....

AI and the American Millennial Imagination

This morning I had a long conversation with ChatGPT 5.2 Auto. We started with the varied role of the Messiah in Jewish thought, then zeroed in on Chabad. From there we went to America as a particularly fertile ground for millennial movements and ended up talking about the Technological Singularity and AI Doom. I asked ChatGPT 5.2 to write a blog essay on the last half of the conversation. Here's the text it wrote.

The current discourse about artificial intelligence is saturated with ideas that look strikingly familiar to historians of religion. On the one hand we hear predictions of a coming technological singularity—an intelligence explosion that will transform civilization so radically that the future becomes literally unimaginable. On the other hand we encounter warnings that a superintelligent AI could destroy humanity. Optimistic or catastrophic, both visions share a common structure: they anticipate a decisive rupture in history.

What is remarkable is not simply that such narratives exist. Millennial thinking has appeared many times before. What is remarkable is where it has appeared: at the center of one of the most powerful technological industries in the world. To understand this phenomenon, it helps to place contemporary AI discourse within the much longer history of American millenarian movements.

The American Millennial Tradition

From the beginning, the United States has been unusually fertile ground for millennial expectation. The roots lie partly in the worldview of the Puritan settlers of New England. Many of them interpreted their migration in providential terms, imagining the new colonies as a “city upon a hill” with a special role in sacred history. Biblical prophecy—especially the apocalyptic imagery of the Book of Revelation—was deeply embedded in their intellectual world.

The expectation of historical transformation did not disappear with the decline of Puritanism. Instead it evolved into new forms during the nineteenth century, when the young republic developed what sociologists call a “religious free market.” With no established church and relatively weak religious authority, charismatic leaders could easily found new movements. The result was an extraordinary proliferation of sects, many of them millennial.

The Millerite movement of the 1840s predicted the imminent return of Christ, culminating in the famous “Great Disappointment” of 1844. The aftermath gave rise to several enduring denominations, including Seventh-day Adventism. The same religious environment produced other innovative movements such as Mormonism and Jehovah’s Witnesses. The pattern was clear: Americans repeatedly generated groups that believed they were living at the threshold of world transformation.

Several structural features of American society encouraged this dynamic. Religious freedom allowed new sects to form easily. Frontier conditions weakened institutional authority and encouraged experimentation. High levels of biblical literacy encouraged direct interpretation of prophetic texts. And the ideology of American exceptionalism fostered the belief that the nation itself might play a central role in world-historical events.

Millennial imagination became, in effect, a recurring feature of American culture.

Counterculture and Technological Transformation

The next major transformation of millennial expectation occurred during the counterculture of the 1960s. The language changed, but the underlying structure remained.

Instead of the Second Coming, the counterculture anticipated an impending transformation of consciousness. The popular phrase “Age of Aquarius” expressed the hope that humanity was entering a new spiritual epoch. Psychedelic experience, Eastern spirituality, and communal experimentation were widely interpreted as signs that a new form of human existence might be emerging.

The San Francisco Bay Area became the epicenter of this cultural ferment. What is less widely recognized is how closely this environment overlapped with the emerging world of computing. Figures such as Douglas Engelbart were already imagining computers as tools for augmenting human intellect. Publications like Stewart Brand’s Whole Earth Catalog promoted a synthesis of ecological awareness, communal experimentation, and personal technology.

Out of this hybrid culture—part psychedelic, part technological—emerged a new kind of futurism. Instead of mystical awakening alone, technological systems were now imagined as instruments of human transformation.

The Singularity Narrative

The modern idea of the technological singularity crystallized in the early 1990s with the work of the science fiction writer and mathematician Vernor Vinge. Vinge argued that once machines became more intelligent than humans, they would design even more powerful machines, producing an “intelligence explosion.” Beyond that point, the course of history would become fundamentally unpredictable.

Ray Kurzweil later popularized this idea, embedding it in a broader narrative of exponential technological growth. In Kurzweil’s vision, advances in computation, biotechnology, and nanotechnology converge to produce radical life extension, human–machine integration, and eventually a post-biological civilization.

Structurally, the singularity narrative resembles earlier millennial expectations. In religious movements, a divine intervention transforms the world and inaugurates a new age. In the singularity narrative, a technological breakthrough performs the same role. Salvation becomes technological transcendence.

The narrative retains the familiar elements of eschatology: a decisive threshold, a transformed humanity, and a future that lies beyond ordinary comprehension.

The Rise of AI Doom

Alongside the optimistic singularity narrative, a darker version has emerged in recent decades. Philosophers and researchers concerned with existential risk have argued that a superintelligent AI might pursue goals incompatible with human survival. The philosopher Nick Bostrom articulated this concern in systematic form, analyzing the “control problem” posed by potentially superintelligent systems. More dramatic warnings have emerged from the rationalist community associated with the website LessWrong, where the possibility of human extinction from misaligned AI is often treated as a central moral concern.

Despite their differences, the optimistic and catastrophic narratives share a common structure. Both assume that technological progress is approaching a critical threshold that will fundamentally alter the course of human history. The singularity promises transcendence; AI doom predicts annihilation. In both cases, the present is interpreted as the final moment before a world-transforming event.

This structure is instantly recognizable to historians of millennial movements.

A New Historical Situation

Yet there is one crucial difference between AI millennialism and earlier movements. Traditional millenarian groups predicted world transformation but did not control the mechanisms that might produce it. The Millerites could only await the Second Coming. Apocalyptic sects interpreted history but did not make it.

In the case of artificial intelligence, the situation is different. The people articulating the transformative narrative are often the same people building the technology. Companies such as OpenAI, Anthropic, and DeepMind operate with enormous financial resources while simultaneously discussing existential risks associated with their own research.

In this sense, the prophetic discourse and the technological apparatus are intertwined. Concerns about alignment and safety are not merely philosophical speculation; they shape the research agendas of organizations backed by billions of dollars in investment. No previous millennial movement has possessed such material power.

A Persistent Cultural Pattern

Seen in historical perspective, the emergence of AI millenarian thinking is less surprising than it first appears. American culture has repeatedly produced narratives of imminent transformation, whether religious, political, or technological. The agents of change have varied—divine intervention, revolutionary politics, expanded consciousness, or superintelligent machines—but the expectation of a decisive turning point in history has remained remarkably stable.

Artificial intelligence represents the latest chapter in that tradition. The language is technological rather than theological, but the structure of expectation remains familiar: humanity stands at the edge of a new epoch, and the choices made in the present may determine the fate of the species.

Whether the transformation proves revolutionary or merely evolutionary remains to be seen. But the cultural imagination surrounding AI clearly belongs to a long American lineage of millennial thinking—one that stretches from Puritan prophecy to Silicon Valley futurism.

Sunday, March 8, 2026

Ellen Alaverdyan Takes 5

The world mostly gets changed through collective effort.

David Marchese, Rebecca Solnit Says the Left’s Next Hero Is Already Here, NYTimes, Mar. 8, 2026.

From the intro:

As the old saw goes, the only constant is change. But change doesn’t always feel as overwhelming as it does right now. We are living in an era of widespread democratic backsliding, sweeping technological disruption and the slow-motion disaster of the climate crisis, to name just a few of the most troubling societal upheavals. But what if, despite all that, there’s a different and more hopeful story to tell about change?

That’s the question at the heart of “The Beginning Comes After the End,” the new book by the prolific and critically acclaimed progressive writer Rebecca Solnit. A thematic sequel to her classic “Hope in the Dark,” the book shines a light on the vibrant world often hidden within our own seemingly gloomier one — a world that has embraced ideas of interconnection, ecological care and political equality.

From the conversation:

Whether it has to do with environmental degradation or degradation of our politics or of people, it seems as if the public is hungry for an individual to be a counterweight to Trump and Trumpism. I don’t know whether that person is Zohran Mamdani or Gavin Newsom, who is clearly trying to position himself that way. But for whatever reason, that person has yet to be identified. Why do you think that is? One of the great weaknesses of our era is that we get lone superhero movies that suggest that our big problems are solved by muscly guys in spandex, when actually the world mostly gets changed through collective effort. Thich Nhat Hanh said before he died a few years ago that the next Buddha will be the Sangha. The Sangha, in Buddhist terminology, is the community of practitioners. It’s this idea that we don’t have to look for an individual, for a savior, for an Übermensch. I think the counter to Trump always has been and always will be civil society. A lot of the left wants social change to look like the French Revolution or Che Guevara. Maybe changing the world is more like caregiving than it is like war. Too many people still expect it to look like war. I denigrate politicians I don’t respect as windsocks. I just want us to understand that most of the important change is collective.

There's more at the link.

A bit of waffle + a tablescape

Living Human Brain Cells Play DOOM on a CL1


Andrew Paul, Computer run on human brain cells learned to play ‘Doom’, Popular Science, Mar. 2, 2026.

A biocomputer powered by lab-grown human brain cells has leveled up from Pong to Doom. While nowhere near ready to handle the video game shooter’s most challenging levels, researchers at Cortical Labs in Australia believe their neuronal chip is well on its way to powering a new generation of hybrid organic technologies.

“This was a major milestone, because it demonstrated adaptive, real-time goal directed learning,” Brett Kagan, Cortical Labs Chief Scientific and Chief Operations Officer, said in a recent video announcement.

It’s taken years to cross the Doom benchmark. In 2021, Cortical Labs debuted DishBrain—an early biocomputer utilizing around 800,000 human nerve cells. These neurons were connected to a small processing chip capable of interpreting and directing electrical activity similar to a standard silicon-powered device.

To showcase DishBrain’s potential, engineers successfully trained their biocomputer to play Pong. The classic, 2D game is often a test case for computational neuroscientists because it requires their system to navigate a dynamic information landscape in real time.

It took Cortical Labs more than 18 months using its original hardware and software to accomplish their Pong goal. DishBrain was eventually supplanted by CL1, which the company bills as the “world’s first code deployable biological computer.”

There's more at the link.

Saturday, March 7, 2026

A Confluence of Crazies: The Pentagon and the Tech Bros

Robert Wright, Iran and the immortality of OpenAI, Anthropic, and Google, Nonzero Newsletter, Mar. 6, 2026.

I'm not going to try to summarize the first three-quarters of this article, which is about how the irrational projective tendencies (my formulation [1], but not quite Wright's) of US foreign policy lead the country into senseless war after senseless war. Here's where he ends up:

All of this helps explain why the US has devoted so much time and energy to enterprises that kill or immiserate millions and millions of people—not just the military interventions we stage, but the profuse supplying of weapons (for Israel’s war on Gaza, for example), and the economic strangulation of nations like Cuba and Venezuela and Iran. All of these endeavors had the support of intensely motivated special interest groups. By and large, the deployment of US troops and arms and sanctions—our big, blunt, coercive instruments—have nothing to do with serving America’s actual interests, much less the interests of the world. And they repeatedly—as now in Iran—cover us in moral disgrace.

This is one reason I harp, however ineffectually, on the importance of respecting international law. The machinery for making US foreign policy is so out of control—so wildly misaligned with American interests, the global interest, and morality—that it urgently needs to be constrained by some clear and coherent set of rules. And so long as it’s not constrained by such a thing, we shouldn’t kid ourselves: The US military (and I say this as an Army brat who grew up with a genuine affection for the military and genuine pride in my father’s service during World War II and after) is now mainly an instrument of mayhem and is increasingly a source of global instability.

All of which brings us back to Anthropic, whose Claude large language model is integrated into Maven, software that’s operated by Palantir and used by the Pentagon to identify targets. The Washington Post reports that “as planning for a potential strike in Iran was underway, Maven, powered by Claude, suggested hundreds of targets, issued precise location coordinates, and prioritized those targets according to importance.” Given that the Iranian elementary school was hit on the first day of the war, it seems fairly likely that Claude played a role in the selection of that target and thus in the death of more than 100 young girls—many times more kids than were killed in the worst American school shooting.

This might seem to vindicate Dario Amodei’s refusal to give the Pentagon carte blanche to use Claude in “fully autonomous” weapons systems. But before we give him the Nobel Peace Prize, note two things: (1) This kind of contractual carveout almost certainly wouldn’t have made a difference in this case even if honored. No doubt there was a “human in the kill chain”—someone who, at a minimum, scanned the list of targets generated by Maven and said, “Yep, looks like a list of targets. Let’s do it!” (2) Even if Amodei’s scruples had somehow magically prevented the bombing of that school, Claude would still be an accomplice to mass murder. More than 1,000 Iranian civilians have already been killed in this war—a war that flagrantly violates international law and continues to lack a coherently articulated rationale. Anyone who makes money by aiding endeavors like this has a lot to answer for.

Last week Amodei, in explaining Anthropic’s position on Pentagon contracts, emphasized the company’s overall commitment to national security. He wrote, “I believe deeply in the existential importance of using AI to defend the United States and other democracies, and to defeat our autocratic adversaries.” If Amodei genuinely believes that the US military is devoted to addressing actual “existential” threats to the US, he’s too naive to be entrusted with anything as important as running a big AI company.

Obviously, this indictment applies about equally to OpenAI’s Sam Altman (who gladly swooped in and snatched the Pentagon largesse that Amodei will now be denied) and to Google’s Sundar Pichai and Demis Hassabis and to xAI’s Elon Musk. All the big AI companies are putting their tools at the disposal of the Pentagon to use as it sees fit.[2]

Notes

[1] This paragraph, from my post, TO WAR! Part 1: War and America's National Psyche, will give you some idea of my thinking about the projective dynamic of America's urges to war:

As some of you may know, my thinking on these matters has been strongly influenced by an essay Talcott Parsons published in 1947 on “Certain Primary Sources of Aggression in the Social Structure of the Western World”. Parsons argued that Western child-rearing practices generate a great deal of insecurity and anxiety at the core of personality structure. This creates an adult who has a great deal of trouble dealing with aggression and is prone to scapegoating. Inevitably, there are lots of aggressive impulses which cannot be followed out. They must be repressed. Ethnic scapegoating is one way to relieve the pressure of this repressed aggression. That, Parsons argued, is why the Western world is flush with nationalistic and ethnic antipathy. I suspect, in fact, that this dynamic is inherent in nationalism as a psycho-cultural phenomenon.

[2] Between the Trump administration in Washington and the Big Tech Billionaires in Silicon Valley, this country is currently dominated by a confluence of crazies, perhaps the largest in American history.