Monday, May 11, 2026

AI’s New Trillion Dollar Mission (is BS)

YouTube:

This week on Prof G Markets, Scott Galloway and Ed Elson discuss the growing belief in Silicon Valley that AI won’t just replace workers, but managers too. Then, they break down the proposed pied-à-terre tax in NYC and why they believe taxing luxury second homes makes sense. Finally, they unpack why alcohol stocks are struggling while GLP-1 drugs are booming, and what that says about the future of American consumer behavior. [...]

Timestamps:
00:00 Preview
00:26 Today's number
01:04 Today's episode
05:23 AI's new mission
24:35 Ad break
26:55 The wealth tax debate is heating up
46:10 Ad break
48:41 The death of the night out
01:08:56 Week ahead
01:11:11 Scott's prediction
01:12:51 Ed's prediction
01:14:05 Credits

Starting at about 20:08, Scott Galloway:

And that is there is something to be said of and there's a balance here. I've in my companies, I'm doing some virtue signing right now. I've always said there should be two or three people and I've always had small companies, right? They they start at zero. Once we have a someone in HR or CFO, I either step step down from the CEO role or become the chairman because I I don't have those skills to scale a company and I don't want to deal with that stuff.

But until then, I've always said we should have two or three people that are one or two bad decisions away from living in their car. They're not, you know, they're they have bad judgment. They're they do stupid shit all the time. They're not what I'd call there's no way they're leaving us for Google. Let me put it that way. A little bit down on their luck maybe. And guess what? The business can be a great means of a little bit of social good.

And also the notion this is basically the notion that part of an organization if you think of stakeholders and I didn't get this. So, I always thought my goal was to pay people less than market and figure out other tricks of the trade to get them to stay and retain them. And then what you realize as you get older is that what is more rewarding is to build a profitable company and slightly overpay people. And if there's some fat in the organization and if there's a few people who quite frankly are, you know, not going to get a job anywhere else but work, you know, work hard or good people and maybe they're not, you know, amazing. Okay, that's okay too. And in some countries, the objective of a lot of the owners is to increase employment. Now, you have to balance that with making sure the organization can survive and has profitability.

But this is again this singular messiah complex that is nothing. There's only one stakeholder and it's shareholders. and I can figure out technology to replace people and we can all work singularly and then eventually the AI will take out those singular teams and replace them and then there will just be one. It'll be Jack Dorsey and Elon Musk who each own 49% of the world and do a lot of ketamine and if they're good enough they will provide UBI for all of us such that we don't uh rise up and kill them.

I I I'm not a fan, Ed. I'm not a fan of this whole line of thinking. I think it's [ __ ] and I think it's unhealthy and I think it's nihilistic.

The icon, the omelette, and some grass

Cultural Evolution and (Tyler Cowen’s views on) History [MR-4]

While Tyler Cowen’s monograph, The Marginal Revolution: Rise and Decline, and the Pending AI Revolution (2026), is about intellectual history, I didn’t sense that Cowen had any overall view of history. Rather he seems to think of it as one thing after another, which it is, during which bursts of human creativity appear at irregular intervals and mostly later than they should. I decided to quiz Cowen’s AI about that. Before I present that dialog, however, I want to comment about the concept of cognitive ranks, which I have instead of a theory of history.

The idea was incubated during my undergraduate years at Johns Hopkins, where I was introduced to the thinking of Jean Piaget by Mary Ainsworth. As you know, Piaget argued that children’s conceptual development happened in stages, where each stage marked the emergence of a new cognitive mechanism. Piaget did, however, also apply his approach to the emergence of ideas in history, as in his slender book, Genetic Epistemology. That’s what came to interest me.

In my sophomore or junior year I took a course on the Scientific Revolution, where I read Kuhn’s famous book, The Structure of Scientific Revolutions. Earlier, in my freshman year, I had taken a two-semester course in art history. And I’d taken a bunch of literature courses. I couldn’t help but notice that these otherwise different topics “crossed” in a three century span. During the 15th through 17th centuries we saw the emergence of realistic depiction based on rationalized space, the Scientific Revolution, and the origins of the novel. Then I read Centuries of Childhood: A Social History of Family Life (1962) in a sociology class. That same period saw the conceptualization of childhood as a specific stage of life with specific needs and capacities. Was this all just coincidence or did these separate phenomena have something in common beyond time frame?

And then there was the widespread recognition that the emergence of literacy had society-wide effects on culture. Similarly, we see widespread cultural change in the transition from the late 19th to the early 20th century. Modernism appears in music, literature, and the plastic arts, and physics undergoes a major change.

With all that under my belt I came across a slender book by Walter Wiora, The Four Ages of Music (1965). In a quick survey of the world’s music he argues for four stages, the first three based respectively on rhythm, melody, and harmony, with the fourth based on an eclectic use of a wide variety of techniques. I read that and wondered: Perhaps everything is like that? Is that the long course of cultural evolution?

I took that with me when I went on for my Ph.D. at the State University of New York at Buffalo. While I was getting my degree in the English Department, I spent a great deal of time in Linguistics studying cognitive semantics with David Hays. Hays too was interested in cultural evolution, so we started working on it. I worked out a version of the idea in my 1978 dissertation, “Cognitive Science and Literary Theory,” where I applied it to the evolution of narrative forms. The idea then lay dormant until Hays and I picked it up again in the late 1980s, leading to the publication of our basic article, The Evolution of Cognition (1990), followed by other articles by me and by Hays; Hays also wrote a book, The Evolution of Technology Through Four Cognitive Ranks.

I think of the idea as a description motivated by a certain conception of cognition. It’s descriptive because Hays and I don’t attempt to explain why this evolution happened; rather we observe that that is what has in fact happened. Our conception of cognition is, broadly speaking, constructivist, a term that has been applied to Piaget’s work. Ideas are constructed by cognitive mechanisms. Each cognitive rank draws on mechanisms not available at earlier ranks. Thus I am inclined to see order in the long-term history of ideas, an order based on the accumulation of cognitive mechanisms, where Cowen tends to see adventitious emergence based on a local confluence of facilitating circumstances.

We look at the world in very different ways, which was the point of an earlier post, Welcome to Reality, Tyler; the AI’s got your number: Further Notes on Revolution, Marginal and AI [MR-3a]. I note, however, that I have no more use for Hegel and Marx than Cowen does.

And with that, here’s the dialog I had with Cowen’s AI. Note that, toward the end, I get around to asking about Girard. I didn’t have Girard in mind when I started this dialog, but, in thinking things over, I began to get a sense of why Girard would appeal to Cowen, though they’re very different kinds of thinkers. Cowen’s an empirical social scientist of the late 20th century whereas Girard is a 19th century grand theorist displaced to the middle of the 20th century.

* * * * *

What are Tyler’s views on history in general? I know he favors economic growth. But I don’t get any general sense of history from this book. It just seems that pinnacles of human creativity appear at unpredictable times that don’t seem to reflect anything more than local order.

That's a perceptive gap to identify, and it's real. Tyler uses history throughout the book but doesn't have a philosophy of history in any strong sense. His historical examples — the slow development of botanical classification, geology, the Marginal Revolution itself spreading unevenly across decades — are deployed instrumentally, to illustrate why intellectual progress is non-linear and contingent. [See text: Ch.3 · The uneven diffusion of marginalist ideas as hi...] But he never steps back and asks what the overall shape of history is, whether it has direction, whether the peaks of creativity he identifies are connected by anything deeper than circumstance.

What you're left with is essentially a punctuated equilibrium model without the theory — bursts of insight separated by long plateaus, with the bursts explained by the convergence of multiple enabling conditions: the right institutions, the right communication infrastructure, the right social permission to think heterodox thoughts. [See text: Ch.3 · How enabling conditions converged for early eco...] That's not nothing, but it's also not a philosophy of history. It's pattern recognition dressed as explanation.

Tyler's deeper commitments, visible more on Marginal Revolution than in this book, lean toward a kind of progress studies framework associated with thinkers like Patrick Collison and Michael Nielsen — the view that human progress is real, measurable, and has been accelerating, but is also fragile and contingent rather than inevitable. That's a Whiggish skeleton with the triumphalism removed. History goes somewhere, but it could easily have gone elsewhere, and the mechanisms are local and specific rather than grand and structural.

What's missing — and what your 1990 paper with Hays actually supplies — is any account of why complexity keeps increasing, why the peaks get higher even if their timing is unpredictable. Tyler sees the peaks. He doesn't have a theory of the underlying gradient that makes higher peaks possible over time. Your argument — that natural selection operates in an inherently complex universe, that greater differentiation yields more information without limit — would give him that gradient. History isn't just local order appearing at unpredictable times. It's local order appearing against a background of increasing universal complexity, which is itself directional even if not deterministic. Tyler has the observations. He's missing the substrate that would connect them.

Hegel and Marx

Interesting, especially in light of his extensive reading, which includes history of all kinds. History is just one thing after another. I wonder to what extent that might be a reaction against the grand theories of, say, Hegel and Marx. For that matter, does any living historian offer some kind of grand theory? Well, there’s Pinker with his account of diminishing violence over time, though I know Tyler has expressed reservations about that somewhere in the MR blog.

Your instinct about Hegel and Marx is well-supported by the MR blog record. Tyler's relationship with grand historical theory is one of respectful dismissal — he's read enough to know why people find it compelling, and enough to remain unconvinced. [See text: Ch.4 · Tyler on empiricism defeating all the grand -isms]

On Hegel specifically: Tyler has looked at every page of the Phenomenology of Spirit but can hardly claim to have read it, and found the Master-Slave dialectic — usually considered the core — "a waste of time" he couldn't sustain attention through. That's not a refutation, it's a temperamental rejection. His bottom line: he doesn't think most people should spend much time with Hegel. So the anti-Hegelian stance isn't really argued — it's more like an allergy.

On Marx, Tyler is more engaged and more interesting. He notes that "Smith thought in terms of marginal tradeoffs" while "Marx, above all, focused on inframarginal and systematic effects." That's actually a sharp distinction — and it maps directly onto the gap in this book. Tyler is a marginalist by temperament even when critiquing marginalism. His final verdict: "marxism, as a political program, remains dangerous nonsense" and "Marx's blind spots were enormous." But he does count Marx among the great economists. So: useful diagnostician, catastrophic prescriber.

On Pinker, you're right that Tyler has expressed reservations. He prefers a weighted average of per capita and absolute measures of violence, arguing that killing six million Jews in the Holocaust is not "half as violent" if global population is twice as high — and that once you include absolute measures, the long-term trends are not nearly as favorable as Pinker suggests. More structurally, Tyler wonders whether the distribution is better described as "long periods of increasing peace, punctuated by large upward leaps of violence" — a warfare correlate to the Minsky Moment, with "great moderations" that each end with a bang. That's a genuinely different model from Pinker's, and it rhymes with your punctuated-equilibrium framing from the Hays paper.

On Enlightenment Now, Tyler's reservations go deeper: his main difference with Pinker may be that he believes there is "a certain amount of irreducible 'irrationality' in people, and it has to be 'put somewhere,' into some doctrine or belief system." That's not a Whig objection — it's almost a Burkean one. Progress doesn't eliminate the irrational substrate; it just redirects it.

Here are the relevant MR posts:

  1. Steven Pinker on violence (2011) Tyler's most sustained critique of Better Angels, raising the absolute vs. per capita violence distinction and the Minsky Moment analogy for warfare. The core reservations you remembered are all here.
  2. Enlightenment Now, the new Steven Pinker book (2018) Tyler's numbered-list response to Pinker's follow-up book, where he pushes back on Pinker's treatment of the Counter-Enlightenment and raises the "irreducible irrationality" objection.
  3. Steven Pinker on slavery and the Enlightenment (2019) Tyler calls himself an admirer of Pinker and a long-term optimist, but insists that "what is sometimes called progress does also have a dark side." A careful, specific critique of Pinker's historical claims about abolition.
  4. Hegel, or Department of Yikes (2008) The definitive Tyler-on-Hegel post. Short, direct, and tells you everything you need to know about why grand dialectical theories of history don't appeal to him.
  5. What is valid in Marxism? (2004) Tyler's five-point attempt to extract what's genuinely useful from Marx, including the inframarginal/systematic effects point. Worth reading alongside the book's dismissal of Marxism as a "school" that lost out to empiricism.

Two cats on a roof

Shipyard Park in Hoboken

I’d forgotten how hilarious Tootsie is [Media Notes 180]

I’d seen the movie when it originally played in theaters [1982] and I’d watched it several years ago on some streaming platform (probably Netflix). I watched it again last night [May 10, 2026] and laughed myself silly two or three times, including at the ending, which surprised me even though I’d seen the movie at least twice before. Nor did I remember that Bill Murray was in it, nor Dabney Coleman and Charles Durning. Did I remember Jessica Lange or Teri Garr? Maybe a little. What I mostly remember is Dustin Hoffman as Michael Dorsey, a talented actor whose intrusive perfectionism (he actually went full-tilt “method” on a 30-second gig he had as a tomato in a TV commercial) has kept directors from hiring him. So, what does he do? Out of sheer orneriness he goes up for a part an actress friend of his (Teri Garr) failed to get in a soap opera, Southwest General. And wouldn’t you know, he gets it!

And now he’s stuck as Dorothy Michaels playing Emily Kimberly, hospital administrator. Yes, he’s got a gig, an income, but now he’s got to play Dorothy Michaels to the cast and staff of the TV program. His first test as Dorothy comes the first day on set, when he plays a scene with the hospital’s Chief Surgeon, known as “the tongue” among the other actresses. When he goes in for the kiss, Dorothy bashes him on the head with her clipboard and improvises her way through the scene. It works! Other things work as well. Before you know it Dorothy’s a hit and the show picks up her option for another year. Now Michael’s stuck. Another year? No!

At one point Dorothy is talking with another cast member, Julie Nichols (played by Jessica Lange) who’s complaining about men. She tells Dorothy:

“Hey, listen. You know I'm confused about this too. I could lay a big line on you, we could do a lot of role-playing, but the simple truth is I find you very interesting and I'd really like to make love to you.”

At this point we of course know that that line is going to come back a bit later in the film, this time going in the other direction. And it does. Michael Dorsey is at a party talking with Julie and he says, “Hey, listen. You know I'm confused about this too....” As soon as he opens his mouth we know what will happen. Sure enough, he gets a glass of wine tossed in his face.

THAT’s what I remember from the movie. Not the fact that Julie’s father (played by Charles Durning) decided to court Dorothy and asks her to marry him, nor that Julie was having an affair with the director, Ron Carlisle (played by Dabney Coleman). And I’d completely forgotten that Sydney Pollack played Dorsey’s agent. So much forgetting, such a funny movie. And the movie’s title, Tootsie? That’s what Carlisle, an unrepentant sexist, called Dorothy, who didn’t like it at all. But it stuck.

Here’s what Roger Ebert said (taken from the Wikipedia entry):

Tootsie is the kind of Movie with a capital M that they used to make in the 1940s, when they weren't afraid to mix up absurdity with seriousness, social comment with farce, and a little heartfelt tenderness right in there with the laughs. This movie gets you coming and going...The movie also manages to make some lighthearted but well-aimed observations about sexism. It also pokes satirical fun at soap operas, New York show business agents and the Manhattan social pecking order.

Tootsie was nominated for nine Academy Awards and won one: Jessica Lange for Best Supporting Actress.

A.I. and the growth of the chore economy

Carl Benedikt Frey, This Is Why You’re Drowning in Busywork, NYTimes, May 11, 2026.

We have been told that A.I. will take people’s jobs. What no one mentions is that many of those jobs are landing on us. The A.I. revolution involves a massive transfer of labor — not from worker to machine, but from worker to consumer. The ability to do everything ourselves may be satisfying, but it can gradually overload us with busywork without our noticing. Tasks that we used to delegate will still be done. They will simply move out of the work force and into the household as new forms of invisible, unpaid labor.

The movement toward self-service is one of the most powerful and least appreciated forces in the history of work. Consider the washing machine. In many 19th-century cities, laundering was a major urban service occupation, and one of the hardest. [...] The washing machine, together with the infrastructure that made it possible — running water, electricity, synthetic detergents — gradually ended this world. But it did not end the work. Customers bought machines and did the laundering themselves. The laundress was displaced by her former clients.

And so it goes:

That pattern has been repeating ever since. Self-checkout makes scanning and bagging the shopper’s job. The internet gives travelers direct access to the flight schedules and hotel reviews that agents once controlled. Online brokerages put a trading terminal in every pocket. And the smartphone replaced the bank teller with you.

We’re used to being our own checkout assistants, travel agents and tellers. Handling these tasks ourselves often makes our lives more efficient. But A.I. is now extending the chore economy into territory that once required years of training, such as law and medicine. As of January, more than 40 million people worldwide were using ChatGPT daily for health questions — from symptoms to decoding bills and fighting insurers.

Trade-offs:

However, self-service does not automatically reproduce a professional’s judgment. The billing specialist notices the code the patient didn’t think to question. The accountant points out the deduction the taxpayer didn’t know existed. The tool answers what you ask, whereas the expert tells you what to ask. That is the A.I. trade-off: greater access, but thinner expertise.

Second, no single act of self-service feels like a major burden. We notice the accountant’s fee we didn’t pay. We rarely notice the evening we spent doing her job. There is a name for this: opportunity cost neglect — the well-documented tendency to overlook the value of what we give up when the cost is time rather than money.

There's more at the link.

Another white iris

Sunday, May 10, 2026

The kitty and the ink

Tyler Cowen is imagining that AI will kill the research paper

Here’s roughly the first half of his post:

Imagine taking a macroeconomics paper and adding a little button at the end “Press this button to update this paper with the latest macro data.”

All of a sudden you have multiple papers rather than one, and no single canonical version. It is the latter versions, not created directly by the authors, that people will look at.

Imagine adding another button, to either micro or macro papers “Please rerun these results using what the AI thinks might be five other different yet still plausible specifications.”

Then you have more papers yet.

Ultimately, why not just build a “meta-paper,” using AI, to answer any possible question about the subject area under consideration. This meta-paper would allow the reader, using AI, to make many sorts of modifications and additions to the basic work.

It goes on in that vein. Cowen’s final line:

It is funny, and tragic, how much some of you are still obsessed with writing and publishing “papers.”

And it’s funny how Cowen imagines that future research in economics and “many of the other sciences” will be just like current research, but more, more, more! And be produced by a different mechanism, perhaps, ultimately, by a mechanism without any humans in it whatsoever. 

Look at it this way: if this were actually possible, and acceptable as normal practice, what would that imply about current practice? Is professional economics a cookie-cutter business? Are economists little more than clerks using fancy mathematics?

Cowen can’t seem to imagine that the future, aided by AI to be sure, will bring new questions, new theories, new models, that it may bring a whole new intellectual world in comparison to which the current intellectual world will look like the 19th century, if not the 12th. It’s amazing that, despite his knowledge of intellectual history, the history of economics in particular, and his recent historically-oriented books about the great economists and about marginalism, he has no sense of what future ideas might be, for they will surely be different from current ideas in ways we cannot anticipate. But then he seems to have little sense of what ideas are except lumps of “stuff” falling from otherwise mysterious trees.

In this he’s pretty much like the rest of the Silicon Valley AI Mob. So many thoughts, all sprung from the same repertoire of ideas with no hint of a new repertoire in the offing. It’s all in the hands of the machines.

Four Horsemen of the Apocalypse

The Chinese conceptualize A.I. differently than Americans and are deploying it differently

Jacob Dreyer, America’s A.I. Is Futuristic. China Is Just Making It Work. NYTimes, May 9, 2026.

The reality is that China and the United States are racing in different directions, because the two countries conceptualize A.I. very differently. Americans want to create the most powerful technology humans have ever known. In the quest for superintelligence, the U.S. government is encouraging private firms to move full speed ahead, regulation be damned. Under the very tightest regulation, by contrast, the Chinese want to make A.I. more practical and embedded in society, more carefully selecting how it is deployed and used by the population. If the Chinese achieve their A.I. goals, they may take a lead in the larger geopolitical contest between the two nations.

Most Chinese policymakers don’t believe A.I. superintelligence is arriving any time soon. Instead, the Chinese strategy is about advancing a government-directed strategy referred to as “A.I.+” that treats A.I. as if it were infrastructure. This includes government-coordinated plans, local subsidies and national computing-power programs to diffuse cheap, capable A.I. tools into every public service. Chinese people encounter A.I. as a natural part of their day-to-day lives. Sometimes it’s visible and palpable, like the “smile to pay” terminals used in many shops. Sometimes it’s invisible, like Hangzhou’s City Brain, which uses A.I. to analyze massive amounts of data for urban management needs like regulating traffic and environmental protection.

Unlike in the United States, where most people remain wary, A.I. seems to have had less of a backlash in China. The Chinese A.I.+ strategy is practical and comprehensible to the local population in a way that the U.S. strategy simply is not, which may explain why the Chinese appear so much more optimistic about A.I. than Americans.

Chinese leaders are trying to maximize the country’s resources. The country’s chief resource is not oil, soybeans or pork bellies, but Chinese people. [...]

But China’s A.I.-as-infrastructure strategy is about more than just improving the country’s domestic quality of life. It’s also about exporting Chinese influence. Chinese A.I. is already integrated into the supply chains that dominate world trade.

And increasingly, rather than selling individual goods or services, China is selling a whole suite — energy, infrastructure, telecoms, transportation, surveillance — with A.I. systems to manage it all.

Interesting. It seems that the Chinese are interested in improving people's lives and making them more effective. In contrast, one could almost say that Silicon Valley AI is about replacing people. After all, isn't that the Doomsday scenario at the heart of so much Silicon Valley AI mythology? 

There's more at the link.

White iris on 11th St. in Hoboken

Saturday, May 9, 2026

Analogical thinking: How to succeed in “wicked” environments

From down in the article:

Inventors with the most patents had worked in multiple unrelated fields before their breakthrough work. Comic book creators with the longest careers had drawn for the most different genres before settling. Scientists who won Nobel Prizes were dramatically more likely than their peers to be serious amateur musicians, painters, sculptors, or writers. 

The skill that mattered in wicked environments was not depth in one pattern. It was the ability to recognize when a pattern from one domain applied unexpectedly in another. That kind of thinking cannot be built by drilling a single subject. It can only be built by accumulating mental models from many subjects and learning to move between them. 

Douthat, A.I., and Pope Leo

Ross Douthat, The Atheist and the Machine God, NYTimes, May 9, 2026.

Douthat is anticipating Pope Leo XIV's encyclical on A.I. For Douthat the crucial issue is consciousness:

In one possible timeline, the advent of A.I. is widely understood as a win for atheism and a blow against religious ideas of soul and spirit, persuading more people that their own minds are just computers — no divine spark or immortal soul, just the meatspace equivalent of a helpful chatbot or an A.I. therapist.

In another potential future, the mystery of consciousness ends up seeming more profound in the shadow of machine intelligence, the mystical finds new appeal as a form of experience computers cannot emulate, and religion becomes a place for human exceptionalists to plant a defiant flag.

But between those two scenarios there’s a future where artificial intelligence mostly increases metaphysical uncertainty, leaving a lot of people simply unsettled about fundamental questions, increasingly “mysterian” rather than clearly atheistic or devout.

That’s how my encounters with Silicon Valley culture often feel: Beneath a materialist carapace, it’s a place where people who aren’t sure exactly what they’re building dabble in Buddhist metaphysics or consult with Catholic priests, adopt churchy or cultish attitudes toward their new creations or rebel into apocalyptic doomsaying.

Douthat then goes on to discuss Richard Dawkins’ recent encounter with Claude in which Dawkins, the Great Atheist, fools himself into thinking that Claude is conscious. Douthat asks:

Meanwhile, in its less-besotted passages, Dawkins’s essay circles around an important question for materialists like himself. The origin and nature of consciousness currently evades our understanding, but the good Darwinian is committed to the proposition that it evolved to serve some crucial evolutionary purpose. But if a digital entity seems to display the capacities that we associate with conscious minds, and we don’t believe that this entity is actually conscious, then what is consciousness’s true purpose? If we can have intelligence without self-awareness, a zombie that calculates and speaks, why does the self exist at all?

I'm not so sure that consciousness is the issue. It's complicated. My own view of consciousness is derived from the work of William Powers, which seems thoroughly materialist to me. But I'm not at all sure what that means, not in a world with quantum mechanics and deep complexity. Deep complexity itself presents issues, interesting ones. At the moment I'm considering the possibility that deep complexity gives us Aristotle's unmoved mover or Paul Tillich's God as the ground of being.

Douthat ends up with this as his penultimate paragraph:

As certain philosophers have argued, this harmony between the psychological and the physical seems much more likely to appear in a universe where consciousness is fundamental, where matter isn’t everything and Mind is where things start.

I'm not familiar with those certain philosophers, but who knows? I'm thinking that deep complexity gives us a generative universe, and that's all we need. But I'm still working on it.

What I'm looking for from Pope Leo's encyclical is how it discusses A.I. and idolatry. Now, just how is THAT entangled with consciousness?