Thursday, August 31, 2023

Boston Legal – Denny Crane! [Media Notes 94]

I’ve just finished watching the series, which had five seasons that ran from 2004-2008. It’s a one-hour legal show with dramatic and comedic aspects and a touch of surrealism. It’s basically an ensemble series with a shifting cast, and with three members of the ensemble predominating: James Spader as Alan Shore, William Shatner as Denny Crane, and Candice Bergen as Shirley Schmidt. Crane and Schmidt are name partners in the distinguished firm of Crane, Poole, and Schmidt. Poole shows up on screen perhaps a half-dozen or so times in the whole series, but is otherwise off-screen in a mental institution.

My favorite little bit about the show is Shatner’s suits. This is, after all, a high-class law firm, so all the male lawyers wear suits. But Denny Crane’s are a cut above the rest, in richer and more interesting fabrics. Lots of the guys wore pinstripes, and so did Denny, but he may have been the only one with chalk stripes. Luscious! And the ties. Most of the guys wore ties with narrow diagonal stripes or small figures and patterns. Denny had some of those, but he also had wider stripes, mid-size and even largish polka dots, and richer textures. His shirts were more interesting too; he was the only one wearing shirts with white collars and cuffs – a style called a Winchester – against a colored or patterned body. If I were a rich man, I’d dress like Denny Crane. I wonder if Shatner got to keep any of those suits for his personal wardrobe?

The series ends – spoiler alert! – with a double wedding: Denny Crane and Alan Shore, Shirley Schmidt and Carl Sack. Crane and Shore have been with us for the whole run, from the first episode through every subsequent one. Shirley Schmidt arrives halfway through the first season and is in most, if not all (I don’t recall), subsequent episodes, though not quite as prominently as the first two. Sack doesn’t arrive until the beginning of the fourth season and is a featured player from there to the end. For the sake of a not very rigorous argument let’s casually assume that these two weddings more or less define the show.

First, Shirley and Carl. Carl is Jewish and Shirley is Catholic, which didn’t signify much of anything until the very last episode, when it occasioned arguments between them that almost scuttled the wedding – don’t ask why, it’s silly and complicated. That’s not all. Shirley and Denny were founders of the firm and had had a hot romance back in the day, but never got married to one another. Both went on to have multiple marriages. But Denny spent the whole series mooning over Shirley and hitting on her now and then. He even had a life-size Shirley doll which he kept in a closet in his office; in the last episode it was decked out in a wedding dress, and in his confusion Denny sometimes thought he was the one marrying Shirley. But he wasn’t, he was marrying Alan.

See what I mean, surrealistic?

Denny was in his early 70s when the series began and was having mental slips. His doctor said he had precursors to Alzheimer’s. As a protective gesture Denny referred to it as “mad cow,” which became a running motif in the show. There was even a late episode involving mad cow and the beef industry. So, Denny’s got the mad cow and is always lamenting that he’s no longer the macher (not a word he’d use) he’d been in days gone by. He’s also erratic, which brings a bit of disarray to the litigation department. So Carl Sack was brought in from the firm’s New York office to tighten things up. He gets off to a rocky start, not least because that turns out to have been a cover to get him to Boston so he and Shirley can spark and spoon, which doesn’t please Denny at all.

As for Denny, Alan Shore is his best buddy and a generation younger (early 40s). They are opposites in important ways. Denny is a staunch conservative Republican. Alan is a passionate liberal Democrat. Alan doesn’t believe in abortion, Denny does. Alan gives long, ornate, convoluted, and successful closing arguments. Denny sometimes has trouble stringing a half-dozen words together, often uttering “Denny Crane” as though it were a magic incantation.

Nor does Alan approve of guns. Denny loves them. Has one or more on his person at all times, but no carry permit. At one point he produced a half-dozen while being booked. Sometimes the gun is only a paintball gun, which he fires off more than once. But sometimes the gun is real. He shoots people with that too, yet manages to stay out of jail. Again, the surrealistic intrudes. Both end up joining the Coast Guard Reserve; I forget just why.

Both are compulsive womanizers. I don’t believe Alan has ever been married, or perhaps he was, once, but he’s had serious affairs. Has one during the show. But also hits on woman after woman. The mad sex scrum in an elevator or on his desk is a running gag. As is the sight of Denny grabbing a woman’s buttock as he hugs her. Lots of that. At one point Denny marries a woman. The marriage falls apart almost instantly. Why? Because Denny has sex with a bridesmaid in the coat-check room, which is how he’d met the woman he just married.

See, the surreal touch. Or is it mere farce? Whatever.

Both love scotch and cigars. How often do you see men smoke cigars on TV? Well over half the episodes of Boston Legal end with Alan and Denny seated side-by-side on the balcony outside Denny’s office high above Boston. They’re smoking and drinking and talking about things, musing on the meaning of it all. They are obviously deeply attached to one another.

And so it is only fitting that they get married, as same-sex marriage is legal in Massachusetts. By this time Denny’s mad cow had progressed to early-stage Alzheimer’s and he was worried, deeply worried. Who would take care of him when the disease got really bad? Who would see that he died a decent death? What would happen with his money? And so he proposed to Alan that they get married. Alan turned him down initially, but Denny persisted and Alan agreed.

And so they got into Denny’s Gulfstream along with Shirley, Carl, and a judge and flew to Denny’s favorite resort, Nimmo Bay, Alaska, to get married. And, wouldn’t you know, there was Justice Antonin Scalia, who’d just arrived on vacation. Alan and Denny had just been arguing before the Supreme Court earlier that day! They got Scalia to perform the double ceremony. The couples danced and then ... back to Boston, back to Alan and Denny with cigars and scotch on the balcony.

Now, as I said, it’s an ensemble show. There are lots of other interesting characters in the show, not to mention the wide variety of legal issues taken up. But this is enough for a simple little blog post.

The End

Postlude, an hour later.

What? This is a legal show! What about the legal issues, the cases argued? Shouldn’t I say something about them?

Well, I suppose I should, but the idea hadn’t even occurred to me until after I’d posted the above material. That’s what interested me.

How could I possibly cover the legal issues, and their entanglement in moral and political issues, in a relatively short blog post? What do I do? There are so many. Each show had at least two, if not three, cases, and some of them were absurd on the face of it. There were 101 episodes, implying 250+ cases. How would I pick three of them? I suppose I could do that, but this is just a blog post, and a bright and breezy one at that.

If I were to write about one more thing it would be about Jerry “Hands” Espenson, a lawyer who showed up a third of the way into season two and was so liked that he became a regular member of the cast. Why him? Because he was on the Asperger’s spectrum and his behavior sometimes seemed dominated by tics and compulsions. The way the show treated him was interesting and worthy of comment. But I digress.

What I wrote above is what came to mind when I decided to do a blog post. Think of it as the matrix in which all these legal, moral, and political issues are presented to us. THAT’s worth thinking about. 

Really The End

You can look out over the whole city from up there

Just what IS cancer, anyhow? Is it being overdiagnosed?

Laura Esserman and Scott Eggener, Not Everything We Call Cancer Should Be Called Cancer, NYTimes, Aug. 30, 2023:

Some cancers have extraordinarily low risks of altering the quality or length of life but get lumped in with those that do. And that often leads to unnecessary treatment, disfigurement, side effects and a constellation of other psychological, relationship and financial issues.

We are oncologists with expertise in prostate and breast cancers. We believe the medical community must reconsider what we call cancer in its earliest manifestations. So do a growing number of cancer experts around the world.

The word “cancer” is attributed to Hippocrates 2,500 years ago, though the disease was described by the Egyptians 2,500 years earlier. Then tumors could be seen or felt. Today, we also identify cancer based on blood samples, biopsies or surgically removed specimens meeting specific criteria under the microscope. But as newer and more sensitive technologies come into use, we are increasingly identifying medical conditions that might have gone undetected without any issues. This phenomenon of overdiagnosis is a well-documented consequence of screenings for breast and prostate cancer.

Early detection of cancer sounds intuitively attractive and in many cases saves lives. But automatically calling something cancer can lead to aggressive treatment, even if the cancer in question is unlikely to cause problems. For many cancers, the term simply doesn’t match how the disease behaves. As cancer surgeons, knowing what we now know, we wish we could go back and undiagnose or reclassify a significant proportion of our patients.

After giving examples from prostate and breast cancers, the authors observe:

Renaming very low-risk cancers would make it easier to persuade patients when it’s appropriate to adopt monitoring and risk reduction as their approaches. Early-stage “cancers” that meet the microscopic definition of the disease (what a pathologist sees through the microscope) but not the clinical definition (a condition that is highly likely to grow and cause symptoms and has the potential to kill a person) could be designated as IDLE (indolent lesion of epithelial origin) or preneoplasia — anything but the dreaded C-word.

They go on to note at the very end:

Changing the label would make matters considerably less stressful for patients and their families. It would greatly reduce unnecessary treatment. The financial and psychological benefits for patients would be profound. Screening for life-threatening cancers would improve.

Some doctors who disagree with us argue that early-stage cancer patients may have regions of their prostate or breast with unsampled, riskier cancers that may pose a threat and should be treated accordingly. But it should not be routine, as it is now, to treat based on what might have been missed. We have many tools at our disposal to accurately diagnose patients. We should use them.

By modifying the names of early-stage prostate and breast “cancer” to appropriately reflect how they behave, we’d reduce unnecessary treatments and their side effects and improve screening, prevention and care.

I note that I have a minor interest in this subject – above and beyond the fact that I am 75 and that my father died of complications from cancer surgery – because enormous sums have been poured into cancer research without commensurate clinical benefits. Are we missing some very basic knowledge?

* * * * *

Let's tawk

The Tree of Life, and a Note on Job

This is from 11 years ago. I'm bumping this to the top of the queue because, why not? I like it.
More or less on Michael Sporn’s recommendation, I’ve just seen Terrence Malick’s The Tree of Life. While I’m collecting my thoughts on this trying, tedious, and rewarding film, I’ll let Michael’s thoughts stand in for many of mine:
The film starts with a vision of god that moves beyond to a patriarchal dominated family in Waco, Texas. The suggestion of a death leads us back to god and the creation of the earth. From protozoa to dinosaur to the birth of a child, this filmmaker exudes absolute love for every organism he can show us on screen. Yet, right from the dinosaurs onward he creates an ominous tone in this male-dominated power hungry environment. You’re always expecting something terrible to happen in the hands of the children who push the film forward. This is a film that technically has a new way of presenting itself almost through an impressionistic vision. The whispered narration and dialogue mix and blend into one; the sun streamed backlit late-afternoon interiors create a whispered visual to match.
It was the phrase “from protozoa to dinosaur” that got me.

The film is that of a mystic. I know nothing of Malick, though it seems he was born in the Bible belt and studied philosophy, so I don’t know if he is really a mystic, but then, what’s really in that question? I once told my draft board that I was a mystic. Really? Really. That’s what comes through in the film: “exudes absolute love for every organism he can show us on screen.”

The film’s explicit religiosity bugged me in the beginning. Am I going to have to say something about this in my review? Am I going to have to declare, for example, whether or not I’m a believer? And then it didn’t bug me, not for the last hour or so. I just forgot about it.

I’ll have more to say about the film later, but I just wanted to dig out some old notes, from 25 years ago, on Job.

To understand the story of Job we must first reject the ending, in which Job regains all that he has lost, and more, for his possessions were doubled. The ending is known to be a later addition. We reject it, for it subverts the deepest significance of the basic story, which is that man and God are essentially and absolutely different, hence there can be no reciprocal contracts between them. The effect of that ending is to assimilate the story to an ethos in which such contracts are possible, in which it is reasonable for man to bargain with God.

The view of the relationship between man and God which is assumed, first by Job's three friends, Eliphaz, Bildad, and Zophar, and later by Elihu (also a later addition), is basically a contractual one. There are rights and obligations on both sides of the contract and if either party breaks the contract he is liable to punitive action by the other party. God may be immensely more powerful and knowledgeable than man, but he is not so deeply different that he cannot enter into contracts with man. In this view God is assumed to be just and a man's state is assumed to reflect his performance in carrying out his obligations under the divine contract. The basic contract seems to be: Man obeys God's rules and is rewarded or punished accordingly. Prosperity is a sign of good performance while misfortune is a sign of poor performance. Job's misfortunes are taken as a sign of his poor performance.

However, Job rigorously examines his life and can find no instances of poor contractual performance. He has met his obligations. Hence he cannot understand why he is being punished. But, the text is careful to assert, "Throughout all this, Job did not utter one sinful word." His friends insist that he must have done something wrong, otherwise God wouldn't be punishing him. Hence he should look more deeply and continue to do so until he finds what he has done wrong. Job will have none of this. And so we face a dilemma. If Job is both just and the victim of misfortune, is God then unjust?

The answer indicated by the text is, in effect, that justice has nothing to do with it, that the relationship between man and God is not one of reciprocal contractual obligation. On the contrary, it is wholly one-sided. God begins his answer by telling Job to "Brace yourself and stand up like a man." He then begins a long series of rhetorical questions:
Where were you when I laid the earth's foundations?
Tell me, if you know and understand.
Who settled its dimensions? Surely you should know.
Who stretched his measuring-line over it?
On what do its supporting pillars rest?
Who set its corner-stone in place,
when the morning stars sang together
and all the sons of God shouted aloud?
The series of such questions amounts to a miniature encyclopedia of natural phenomena, one which emphasizes the absolute difference between God and man. For God has done and understands all these things while man, Job, has done and understood none of them. Job acknowledges this absolute difference, finally replying:
I know that thou canst do all things
and that no purpose is beyond thee.
But I have spoken of great things which I have not understood,
things too wonderful for me to know.
I knew of thee then only by report,
but now I see thee with my own eyes.
Therefore I melt away;
I repent in dust and ashes.
The effect of the added ending, in which Job gets it all back, with interest, is to undermine this absolute difference between the human and the divine. If Job gets it all back, with interest, then the contract was not really broken at all. Job gets what is his due. That, however, is not what the story is about. The story is about absolute difference; it is attempting to replace the contractual view of the relationship between the human and the divine with a deeply ontological view, in which the divine is the underpinning, the ground, of the human.

Notice that the story is framed in such a way that the audience or the reader knows that Job did not do any wrong and that he is not being unjustly punished. We know that Job's misfortunes have nothing to do with punishment. The real reason - that Job is being used to make a point to Satan - may not be much better from our modern point of view; but it doesn’t contradict the basic point. In fact, the frame reinforces the point. For Satan has argued that Job is good only because God has rewarded him well. “But stretch out your hand and touch all that he has, and then he will curse you to your face.” And so Satan is given permission to wreck Job's life and Job does not, in fact, curse God. The story shows Satan, and us, that his view of the relationship between the human and the divine, which is the contractual view, is wrong.

Whatever the relationship between the human and the divine, it is such that Job was able to bear up under his misfortune without either blaming himself or cursing God. Could it be that he was able to do so precisely because God was Completely and Categorically Other?

Wednesday, August 30, 2023

City lights, and the river

Why I hang out at LessWrong and why you should check in there every now and then

The first two sections are background: first a bit on cultural change to set the general context, then some general information about LessWrong. After that come my personal impressions of the place, concluding with a suggestion that you take a look if you haven’t already.

Cultural change over the long haul: Christians and professors

Back in the ancient days Christianity was just another mystery cult on the periphery of the Roman Empire. Then in 380 AD Emperor Theodosius I issued the Edict of Thessalonica and it became the state religion of the Roman Empire. In time it spread out among the many tribes of Europe and those Christian tribespeople began thinking of an entity called Christendom, and that, in time, became Europe and “the West.”

Back in the days when Europe was still Christendom the Catholic Church was the center of intellectual life. That changed during the Sixteenth Century with the advent of the Scientific Revolution and the Reformation. The Church remained powerful, of course, but universities supplanted it as the institutional center of intellectual life.

My point is simple and obvious: things change. Cults can become mainstream and new institutions can displace old ones. With that in mind, let’s think about LessWrong.

LessWrong

To be sure, I do not want to imply that LessWrong, a large and sophisticated online community, and the currents that swirl there (the rationalist movement, effective altruism (EA), dystopian fears of rogue AI) are comparable to Christianity, but it appears cultlike to outsiders, and to some insiders as well. It hosts a great deal of high-wattage intellectual activity on artificial intelligence and AI existential risk, effective altruism, and, more generally, how to live a life. I suspect that for some who post there, it is the center of their intellectual life.

It was founded in 2009 by Eliezer Yudkowsky, an autodidact with strong interests in AI and, in particular, the destructive potential of advanced AI. That is what he’s best known for and, while I suspect that Nick Bostrom is more widely known on that subject – his 2014 book, Superintelligence, was a best-seller, and he has an academic post at Oxford – Yudkowsky has likely been more influential within the tech community centered on Silicon Valley, where he lives. As an indicator of that influence, consider this pair of recent posts at X, the site formerly known as Twitter, by the CEO and co-founder of OpenAI, Sam Altman:

Color me skeptical about the Peace Prize. But that’s beside the point, which is that Yudkowsky has been and is very influential in Silicon Valley. People who work in those companies post and comment at LessWrong.

Why I’m there

Because it is an interesting place, if a bit strange and off-putting, and because I have so far had some interesting conversation there. Not a lot, but certainly enough to make it worthwhile.

I don’t know when I first took a look at LessWrong, but let’s say it was more than five years ago and perhaps even as long as ten years ago. But I didn’t spend much time there. That began changing, say, two or so years ago, sometime after GPT-3 began rocking the world. I made my first post in June of 2022, and have so far made 40 posts (counting this one, which I’ve cross-posted there) and 137 comments. I generally check in there every day just to see what’s going on. I may take a quick look at a new post or five, look at comments on posts I’m following, and then go on about my business. On a particularly good day I’ll read some comments on one of my own posts and reply.

The thing is, I’m a ronin intellectual, and have been for years. If you’ve ever seen the anime series, Samurai Champloo, I’m the intellectual equivalent of Jin. Yes, I’ve got a PhD – there are a few of those at LessWrong – and once held a faculty post at the Rensselaer Polytechnic Institute. But I left that a long time ago and have been without an intellectual home ever since. I’ve published a bunch of articles in the academic literature on various topics scattered over literary criticism, cognitive science, and cultural evolution, and two books in the trade press with good publishers, one on music (Beethoven’s Anvil, Basic 2001) and one on computer graphics (Visualization, Harry Abrams 1989). For what it’s worth, and it’s worth a great deal to me, the idea count is higher than the page count would seem to indicate, but my work never really caught on. So I understand what it’s like to be a mammal in a world dominated by dinosaurs.

Let a thousand flowers bloom

BrainGPT: A Large Language Model tool to assist neuroscientific research

From the BrainGPT home page:

This is the homepage for BrainGPT
A Large Language Model tool to assist neuroscientific research.

The scientific literature is exponentially increasing in size. One challenge for scientists is keeping abreast of developments. One solution is a human-machine teaming approach in which scientists interact with a vast knowledge base of the neuroscience literature, referred to as BrainGPT. BrainGPT is trained to capture data patterns in the neuroscience literature, taking advantage of recent machine learning advances in large-language models.

BrainGPT functions as a generative model of the scientific literature, allowing researchers to propose study designs as prompts for which BrainGPT would generate likely data patterns reflecting its current synthesis of the scientific literature. Modellers can use BrainGPT to assess their models against the field's general understanding of a domain (e.g., instant meta-analysis). BrainGPT could help identify anomalous findings, whether because they point to a breakthrough or contain an error.

Importantly, BrainGPT does not summarize papers nor retrieve articles. In such cases, large-language models often confabulate, which is potentially harmful. Instead, BrainGPT stitches together existing knowledge too vast for human comprehension to assist humans in expanding scientific frontiers.

I have no idea how this is going to unfold, but I think it's the kind of thing we need.
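To make the “generate likely data patterns” idea concrete, here is a minimal sketch of the kind of scoring such a system might do: given a study design and two candidate outcomes, prefer the one a literature-trained language model finds more likely. I’m using off-the-shelf GPT-2 as a stand-in, since BrainGPT itself isn’t something I have access to; the study design, the candidate findings, and the scoring scheme are all my assumptions, not anything taken from the BrainGPT page.

```python
# Hypothetical sketch: rank candidate findings by how likely a language model
# finds them. GPT-2 stands in for a literature-trained model like BrainGPT.
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def avg_log_likelihood(text: str) -> float:
    """Average per-token log-likelihood of `text` under the model."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss  # mean negative log-likelihood
    return -loss.item()

# An invented study design with two invented candidate results.
design = "In a fear-conditioning study, lesioning the amygdala in rats"
candidates = [
    design + " impaired acquisition of conditioned fear responses.",
    design + " enhanced acquisition of conditioned fear responses.",
]
for claim in sorted(candidates, key=avg_log_likelihood, reverse=True):
    print(f"{avg_log_likelihood(claim):8.3f}  {claim}")
```

The point of the exercise is just that a model trained on the neuroscience literature should assign higher likelihood to findings that fit the field’s current synthesis, which is roughly what “instant meta-analysis” would amount to.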

How AI will change governance: "Preparing for regime change"


From the beginning of the article:

Circa the early 2000s, “internet safety” discussions revolved around first-order issues like identity theft, cybercrime and child exploitation. But with the benefit of hindsight, these direct concerns were swamped by the internet’s second-order effects on our politics and culture. Indeed, between an information tsunami and new platforms for mass mobilization, the internet destabilized political systems worldwide, even leading to outright regime change in the case of the Arab Spring.

To the extent AI is simply the next stage in the digital revolution, I expect these trends to only intensify. The issue is not that AI and informational technology are inherently destabilizing. Rather, to put it in slightly Marxian terms, the issue is that society’s technological base is shifting faster than its institutional superstructure can keep up. Populist leaders who promise to root out corruption and reset the system are a symptom of governance structures that have in some sense lost their “direction of fit,” like clothes that shrank in the wash or a species outside its evolutionary niche.

 And from the end:

These inherent constraints on government, combined with AI’s much faster diffusion through the private sector, suggest a net weakening of liberal governments relative to the rest of society. The rapid degeneracy of our legacy institutions could thus make a kind of high-tech anarchy suddenly viable, bootstrapped off the latent demand for social order and other public goods.

The moment governments realize that AI is a threat to their sovereignty, they will be tempted to clamp down in a totalitarian fashion. It’s up to liberal democracies to demonstrate institutional co-evolution as a third-way between degenerate anarchy and an AI Leviathan. At a minimum, this will require embracing AI tooling within the machinery of government; painful concessions to the government functions that AI simply renders obsolete; and the dialectical construction of a new social contract — an AI ordered-liberty — that one hopes is far more Swiss than Pashtun.

Regardless of what path we take, one thing is certain: the U.S. government of 2040 will look as different to our contemporaries as the U.S. government of the 1940s must have looked to the men and women of the pre-industrial era.

Tuesday, August 29, 2023

Cranking the lights of change

The Death of Burning Man

Daniel Pinchbeck, How Burning Man Failed, Daniel Pinchbeck's Newsletter, Aug. 29, 2023.

I've known about Burning Man for a long time. When I read about it in Pinchbeck's first book, Breaking Open the Head, I wanted to go, but didn't. Now...?

Pinchbeck notes:

As some readers may recall, I love Burning Man and consider it a massive disappointment. When I first visited back in 2000, I was overcome with enthusiasm, inspiration — many people still feel this, today, when they go for the first time. I still recall that wonderfully intoxicating sense of arriving at a “free” or liberated cultural zone where, in theory, you can recreate yourself in any way you want, express yourself in any way, as long as it doesn’t cause harm to anyone else. [...]

Burning Man seemed to reveal how society, as a whole, could (and, I believed, eventually would) be reconstructed around the psychedelic anarchist vision. The festival showed me it was possible to deprogram people, en masse, from the economic shackles, blind ambitions, and incessant status-seeking of normative culture. In my naive excitement, I saw Burning Man as a proto-revolutionary model for the future— in How Soon Is Now, I joked that the festival was my version of the Paris Communes (1848), a short-lived worker-run experiment eventually crushed by Napoleon III, which inspired Karl Marx and Friedrich Engels.

But now, he goes on to say:

Over time it became clear — particularly after the death of Larry Harvey, its founder — that the dominant ethos underlying Burning Man was a kind of occult-tinged free market Libertarianism, influenced by transhumanism and the technological Singularity. Back in the early 2000s, Burning Man mocked itself vigorously. It had a hard, Terence McKenna / Robert Anton Wilson edge. We felt we were exploring Chapel Perilous, awaiting the Eschaton.

As Burning Man expanded and became more popular, it lost its self-parodying humor, its self-critical irony, and its encompassing social vision, to a great extent. It started to feel increasingly hollow, shallow, and narcissistic. Much of the art now seems designed to provide a fitting backdrop for Instagram selfies. The outfits and hats became copycats of each other. Burners no longer explore much originality of self-expression. They follow the pre-set script, the round-the-clock EDM schedule. [...]

There was always an innate beauty hierarchy at Burning Man; over time, wealth started to play a greater role in determining the festival’s focus. Wealthy Burners raised millions of dollars for their art cars, sculptures, and mega-domes.

Crank it up, rich tech bros! 

* * * * *

 This just in (73.23):

Referring to this: Trapped in Mud, Burning Man Attendees Are Told to Conserve Food, NYTimes, Sept. 9, 2023:

Thousands of attendees at the Burning Man festival in a remote stretch of the Black Rock Desert in Nevada were told on Saturday to conserve food, water and fuel after heavy rainfall trapped them in thick mud.

The event, which takes place in Black Rock City and began on Sunday, was interrupted by heavy rains on Friday night, and organizers directed attendees to shelter in place as rain poured over the area.

The festival site received more than half an inch of rain overnight on Friday, organizers said. While it had stopped for much of Saturday, more was expected in the evening and into Sunday morning, with a slight chance of thunderstorms, they said.

Except for emergency services, vehicles have also been prohibited around Black Rock City.

Krugman on the rise of wealthy cranks

Paul Krugman, The Paranoid Style in American Plutocrats, NYTimes, Aug. 28, 2023.

From the article:

If you regularly follow debates about public policy, especially those involving wealthy tech bros, it’s obvious that there’s a strong correlation among the three C’s: climate denial, Covid vaccine denial and cryptocurrency cultism.

I’ve written about some of these things before, in the context of Silicon Valley’s enthusiasm for Robert F. Kennedy Jr. But in the light of Hotez’s puzzlement — and also the rise of Vivek Ramaswamy, another crank, who won’t get the G.O.P. nomination but could conceivably become Donald Trump’s running mate — I want to say more about what these various forms of crankdom have in common and why they appeal to so many wealthy men.

The link between climate and vaccine denial is clear. In both cases you have a scientific consensus based on models and statistical analysis. But the evidence supporting that consensus isn’t staring people in the face every day. You say the planet is warming? Hah! It snowed this morning! You say that vaccination protects against Covid? Well, I know unvaccinated people who are doing fine, and I’ve heard (misleading) stories about people who had cardiac arrests after their shots.

To value the scientific consensus, in other words, you have to have some respect for the whole enterprise of research and understand how scientists reach the conclusions they do. This doesn’t mean that the experts are always right and never change their minds. They aren’t, and they do. [...]

After saying that "the man in the street" is often puzzled by scientific research, Krugman goes on to point out that these wealthy businessmen, "who’ve made money in technology," sometimes have other things on their minds:

But there are forces working in the opposite direction. Success all too easily feeds the belief that you’re smarter than anyone else, so you can master any subject without working hard to understand the issues or consulting people who have; this kind of arrogance may be especially rife among tech types who got rich by defying conventional wisdom. The wealthy also tend to surround themselves with people who tell them how brilliant they are or with other wealthy people who join them in mutual affirmation of their superiority to mere technical drones — what the tech writer Anil Dash calls “V.C. QAnon.”

So where does cryptocurrency come in? Underlying the whole crypto phenomenon is the belief by some tech types that they can invent a better monetary system than the one we currently have, all without talking to any monetary experts or learning any monetary history. Indeed, there’s a widespread belief that the generations-old system of fiat money issued by governments is a Ponzi scheme that will collapse into hyperinflation any day now.

And so:

True, there have always been wealthy cranks. Has it gotten any worse?

I think it has. Thanks to the tech boom, there are probably more wealthy cranks than there used to be, and they’re wealthier than ever, too. They also have a more receptive audience in the form of a Republican Party whose confidence in the scientific community has collapsed since the mid-2000s.

No chickens in these eggs

Fables of Identity, European and American [Oppositional Trickeration]

I'm bumping this piece, from August 2012, to the top of the queue on general principle. Plus, it's about American identity, which is a vexatious and trying matter, even more so now than it was back then.
Judging by a remark in the final paragraph, I wrote “Beyond Oppositional Trickeration” sometime during the O. J. Simpson trial and published it in Gravity a month or two before “Fore Play: A Lesson in Jivometric Drummology.” The tone is conversational, a bit hip, but without the Lord Buckley embroidery of “Fore Play.” As I recall, “whiteness” was being discovered at the time and, though I never read any of the books resultant upon that discovery, I certainly read articles and interviews. Could hardly miss them, they were thicker than snow at the North Pole. The influence of those ideas is obvious.

Were I to develop these ideas more formally, I would devote considerable effort to delivering on this informal observation, which I make early on: we must orient ourselves to that whole range of experience we have access to beyond our immediate family and neighborhood. What I had and have in mind is that, to the extent that we are aware of human history, we must situate ourselves within it in some role that gives us some place in history. By identifying with some ethnic, religious, or national group, we make contact with history through the role that group has played in history.

One can argue that such identifications are ideological formations, that they organize history and culture into patterns owing more to stance and desire than to fact, but such unmasking does not in itself eliminate the need for such identities. Nor does it provide alternative means for satisfying that need. Perhaps we should devote some effort to understanding just why human brains and human groups need such fictions. But that's rather more than I attempted in this piece and certainly more than I am now prepared to undertake.

This piece offers sketches of whiteness and blackness and how their opposition is more than mere opposition. There is a psycho-cultural dynamics at work that is rather independent of reasoned argument. For what it is worth, when I wrote this piece, I had no intimation that, a decade later, America would trick itself into fighting a nebulous "war" on terrorism, and thereby wage a real and hopeless war in Iraq. But the mechanism of "oppositional trickeration" I describe is what drives that nebulous war.
Note: This piece was originally published in Meanderings, which became absorbed by Gravity. It was one of the first sepia-toned joints on the web, and remains so to this day. Thanks, Cuda! BTW, here's Cuda, that's Cuda or Cooter Brown, a nom de plume for a most distinguished gentleman who shall remain nameless (dig the digressions!) writing about his star-struck youth. Very intense. Very.
* * * * *

Beyond Oppositional Trickeration: 

American Identity in the 21st Century, a Just-So Story


White folks weren't always white. By this I don't mean only that, like everyone else's, white folks' ancestors were from the African continent. That is true, but we don't really know what that signifies colorwise.

The fact is, we don't know what colors the original humans were. All we know about them is what we can deduce from some bones, pot shards, flaked stone tools and weapons, remains of fires and other assorted bits and pieces of stuff. None of this speaks to the issue of color. Those originals might have been mocha java, hazelnut, lion tawny, watermelon pink, eggplant purple, lilac lavender, tulip red or speckled striped blued and tattooed. Who knows. I'd like to think that, in fact, their color was like Satchel Paige's age:
What color would you be if you didn't know what color you was? That's what color I am.
And this brings us back to the question of white folks. There was a time when they were the same color as the rest of humanity, no color and all colors. They changed all that during their Renaissance, a word which, you may recall, means rebirth. They rebirthed themselves and came out Christian, European, and white.

This essay is about identity, about how Europeans created their collective identity, and how African America responded in kind. Needless to say it's about time for us to move beyond the pale and into the multi-hued savanna of new civilizations.

STANDARD DISCLAIMER: In this essay I follow a ubiquitous, if not universal, convention of talking about black and white, African American and European American, and so forth, as though these terms have simple and obvious meanings, as though they designate homogeneous and mutually impervious groups. I know better than that and so do you, so please, don't start dogging either one of us on that score until you get far enough into this piece to see where it's going.

Monday, August 28, 2023

Irises and the street

The fragility of artists’ reputations from 1795 to 2020

Letian Zhang, Mitali Banerjee, Shinan Wang, and Zhuoqiao Hong, The fragility of artists’ reputations from 1795 to 2020, PNAS, Vol. 120, No. 35, August 21, 2023 (received February 22, 2023; accepted June 19, 2023), https://doi.org/10.1073/pnas.2302269120

Significance

This study uses machine-learning techniques and a historical corpus to examine the evolution of artists’ reputations over time. Contrary to popular wisdom, we find that most artists’ reputations peak just before their death, and then start to decline. This decline is strongest for artists who were most popular during their lifetime. We show that artists’ reduced visibility and changes in the public’s aesthetic taste explain much of the posthumous reputation decline. This study highlights how social perception of historical figures can shift and emphasizes the vulnerability of human reputation. Methodologically, the study illustrates an application of natural language processing to measure reputation over time.

Abstract

This study explores the longevity of artistic reputation. We empirically examine whether artists are more- or less-venerated after their death. We construct a massive historical corpus spanning 1795 to 2020 and build separate word-embedding models for each five-year period to examine how the reputations of over 3,300 famous artists—including painters, architects, composers, musicians, and writers—evolve after their death. We find that most artists gain their highest reputation right before their death, after which it declines, losing nearly one SD every century. This posthumous decline applies to artists in all domains, includes those who died young or unexpectedly, and contradicts the popular view that artists’ reputations endure. Contrary to the Matthew effect, the reputational decline is the steepest for those who had the highest reputations while alive. Two mechanisms—artists’ reduced visibility and the public’s changing taste—are associated with much of the posthumous reputational decline. This study underscores the fragility of human reputation and shows how the collective memory of artists unfolds over time.
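The core of the method, one word-embedding model per five-year slice, is easy to sketch. Here’s a toy version: train word2vec on each period’s text and track how close an artist’s name sits to a small lexicon of eminence words. The miniature corpus, the lexicon, and the scoring below are my stand-ins; the authors’ actual measurement pipeline is surely more elaborate.

```python
# Toy sketch of time-sliced embeddings as a reputation measure.
from gensim.models import Word2Vec
import numpy as np

def reputation_score(model, artist, lexicon):
    """Mean cosine similarity between the artist token and eminence words."""
    if artist not in model.wv:
        return float("nan")
    sims = [model.wv.similarity(artist, w) for w in lexicon if w in model.wv]
    return float(np.mean(sims)) if sims else float("nan")

# Fake five-year slices standing in for a 225-year historical corpus.
corpus_by_period = {
    "1820-1824": [["beethoven", "is", "a", "celebrated", "living", "master"]] * 100,
    "1920-1924": [["the", "concert", "included", "beethoven", "and", "others"]] * 100,
}
eminence = ["celebrated", "master", "genius"]

for period, sentences in sorted(corpus_by_period.items()):
    model = Word2Vec(sentences, vector_size=50, window=5, min_count=1, seed=1)
    print(period, round(reputation_score(model, "beethoven", eminence), 3))
```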

Pumphouse with tree and some orange

Coming to America [Media Notes 93]

I believe I saw this when it came out in 1988 and I’ve watched it online since, most recently yesterday. Meh. The Wikipedia entry quotes Sheila Benson of the Los Angeles Times as calling it a “hollow and wearying Eddie Murphy fairy tale.” Wikipedia’s summary of the plot: “Eddie Murphy plays Akeem Joffer, the crown prince of the fictional African nation of Zamunda, who travels to the United States in the hopes of finding a woman he can marry and will love him for who he is, not for his status or for having been trained to please him.”

I have little idea of how a wealthy hereditary African ruler lives these days, but the Zamunda stuff seemed over-the-top silly. The Queens material was better. But even there it was built around a central gag that just lay there. Prince Akeem fell in love with a woman, Lisa McDowell, whose father, Cleo McDowell, owned a burger joint called McDowell’s, that looked pretty much like a McDonald’s. There was some business about the play on “McDonald’s,” but not enough to be at all interesting.

My favorite moment came when Akeem was walking with Lisa in the early evening. He had a wad of money in his pocket he wanted to get rid of – does it matter why? – so he stuffed it in the coat pocket of a bum he saw sprawled in the street. It turns out that bum was a down-and-out Mortimer Duke from the 1983 Trading Places, a much better film, by the way, much. Perhaps that’s why I went to see Coming to America, that, plus the fact that 48 Hrs. (from 1982) and Beverly Hills Cop (1984) were also good movies. But that one gag had nothing to do with the plot. It just reminded us that Murphy had been in better films than Coming to America.

Flowers | The street where I used to live

The mind as a polyviscous fluid

About a year ago I uploaded a post with a typically ungainly title, The structured physical system hypothesis (SPSH), Polyviscous connectivity [The brain as a physical system]. It’s that word, “polyviscous,” that’s got my present attention. Since then I’ve done a number of posts using that idea, whatever it is. This is another of those posts.

So, viscosity. Honey is more viscous than, say, water. It flows more slowly, much more. What happens if you drop a lump of honey into a tumbler of water? It sinks to the bottom in a continuous lump and flattens out along the bottom. It will begin to diffuse into the water along the boundary, but I don’t know how long it would take, if ever, to mix completely. Now, put a stick down into the tumbler until it extends into the honey. Give it a stir or three, but no more. Now you’ll have gobs and threads of honey mixed in with water in a complex and somewhat irregular and ragged way. That’s a simple polyviscous fluid. It has regions of relatively high viscosity and other regions of relatively low viscosity. Now imagine a fluid with 5, 10, 27, 48, and more different levels of viscosity, from all but solid like cold tar through the wispiest whatever. Polyviscosity.

As the title of the year-old post indicates, I was thinking in terms of connectivity:

Thus I say that the cortical network as a whole exhibits polyviscous connectivity. What do I mean? Some connections are highly resistant to change, and thus have high viscosity. Others change quite readily, and have low viscosity.
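Here’s a minimal numerical sketch of that idea; the update rule and the numbers are my own invention, not anything from that post. Every connection gets its own viscosity, each update nudges a weight toward a new target at a rate set by that viscosity, and after many updates the low-viscosity connections have wandered far from where they started while the high-viscosity ones have barely moved.

```python
# Toy model of polyviscous connectivity: per-connection viscosities.
import numpy as np

rng = np.random.default_rng(0)
n = 8
weights = rng.normal(size=(n, n))
initial = weights.copy()

# Viscosity in (0, 1]: near 1 means nearly frozen, near 0 means freely flowing.
levels = [0.99, 0.9, 0.5, 0.05]
viscosity = rng.choice(levels, size=(n, n))

for step in range(100):
    target = rng.normal(size=(n, n))  # stand-in for the pressure of ongoing experience
    # Each weight moves toward the target at a rate set by its own viscosity.
    weights = viscosity * weights + (1.0 - viscosity) * target

for v in levels:
    drift = np.abs(weights - initial)[viscosity == v].mean()
    print(f"viscosity {v:4.2f}: mean drift {drift:.2f}")
```

Run it and the drifts order themselves just as the metaphor suggests: the tar-like connections stay put while the watery ones go wherever the last few targets pushed them.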

OK. Now let’s shift our thinking just a bit and think of the mind as a polyviscous fluid. The mind, as the saying goes, is what the brain does. And that is very complex.

Imagine that you’re watching a movie, make it a Hong Kong martial arts movie. Your mind is entrained to the images on the screen. During a fight scene the level of mental viscosity is relatively low. The fight is over and the hero rests, contemplating the sunset, let’s say. The viscosity is somewhat higher.

Yet, while you’re entrained by the film, you’re not completely absorbed into it. While the hero contemplates the sunset, you take a bit of popcorn. And maybe you were munching furiously during the fight. So, even as you were watching the film you slipped in some mental popcorn “frames” among the film frames. Very slippery, low viscosity.

When I wrote my book on music, Beethoven’s Anvil, I talked of the mind as neural weather. Thus (p. 72):

If the functional proclivities of a patch of neural tissue are not relevant for a current activity, those neurons will not be firing very often, but they will still generate some output. The only neuron that does not generate any output is a dead one. Neurons that are firing at low intensity one moment may well be recruited to more intense activity the next. As Walter Freeman has said, a low level of activity is still a means of participating in the evolving mental state.

The mind, in this view, is thus like the weather. The same environment can have very different kinds of weather. And while we find it natural to talk of weather systems as configurations of geography, temperature, humidity, air pressure etc., no overall mechanism regulates the weather. The weather is the result of many processes operating on different temporal and spatial scales.

At the global level and on a scale of millennia we have the long-term patterns governing the ebb and flow of glaciers which, in one commonly accepted theory, is a function of wobble and tilt in the earth’s spin axis and the shape of the earth’s orbit. At the global level and operating annually we have the succession of seasons, which is caused by the orientation of the earth with respect to the sun as it moves through the year. We can continue on, considering smaller and smaller scales until we consider the wind ripping through the twin towers of the World Trade Center or even the breeze coming in through your open window and blowing the papers off your desk.

Weather is regular enough that one can predict general patterns at scales of hours, days, and months, but not so regular that making such predictions is easy and routinely reliable. Above all, there is no central mechanism governing the weather. It just happens.

I develop that idea further in a couple of posts, The Mind is What the Brain Does, and Very Strange, and Neural Weather, an Informal Defense of Psychoanalytic Ideas.

So, neural weather, polyviscous fluid. Perhaps we’re getting somewhere. The mind IS what the brain does, and what the brain does is complex and varies along a wide range of time scales. The brain’s overall physical structure is relatively constant throughout life, barring injury and disease. But the connectivity changes over a variety of time scales, from seconds through hours and days and even longer (think of cortical plasticity). Much, perhaps most, millisecond-to-millisecond activity produces no synaptic change at all. A very fluid phenomenon, over multiple time scales.

Sunday, August 27, 2023

A new park and playground in Hoboken

Xanadu, GPT, and Beyond: An adventure of the mind

I've posted a new working paper. Title above; links, abstract, table of contents, and introductory material below.

Download at: 

Academia: https://www.academia.edu/106001453/Xanadu_GPT_and_Beyond_An_adventure_of_the_mind
SSRN: https://ssrn.com/abstract=4553351
ResearchGate: https://www.researchgate.net/publication/373433939_Xanadu_GPT_and_Beyond_An_adventure_of_the_mind

Abstract: This article recounts an intellectual journey that began in curiosity about the structure of Coleridge’s “Kubla Khan” in the late 1960s and has led to an interest in large language models at the present time. A close analysis of the poem revealed its two parts each to have a nested structure (think of a matryoshka doll) that suggested the operation of an underlying computational process (nested loops). That led to the study of computational linguistics (semantic networks), followed by neuroscience (Karl Pribram’s neural holography), and cultural evolution. In the 2010s I began following work digital humanists had been doing with machine learning. When GPT-3 was released in 2020 I was ready, though it took me a while to establish a link, however tentative, between that conceptual universe and that of “Kubla Khan.”

Encountering Coleridge’s “Kubla Khan” 3
Romantic states of consciousness 3
Matryoshka dolls and the escape from Xanadu 5
Semantic networks and a Shakespeare sonnet 8
Karl Pribram, neural holography, and the brain 10
The wandering years 11
Through GPT to the future 13
Mind and world in text 15
The Text of “Kubla Khan,” including Coleridge’s prefatory note 17
A note about the cover image 19

Encountering Coleridge’s “Kubla Khan”

I became hooked on Coleridge’s “Kubla Khan” in the Spring of 1969, my last semester as an undergraduate at Johns Hopkins. Three years later “Kubla Khan” had become the standard against which I measured my understanding of the human mind. That is why I am telling a story about how my interest in the mind has evolved through “Kubla Khan” to include, most recently, ChatGPT. Strange as it may seem, that poem is the vehicle through which I am coming to terms with this new technology and arriving at a sense of its potential.

There is a sense in which the story of that great poem can be traced back to the 11th century invasion of Britain by the Norman French, for that culture-crossing is what gave rise to the English language. Two centuries later that story encountered a tale born of an encounter between an Italian merchant, Marco Polo, and a Mongolian warlord, Kubla Khan, which, when enlivened by the East India Company’s trade in opium, set fire to the mind of Samuel Taylor Coleridge in the late 18th and early 19th centuries. We need not trace that trajectory in any detail. I mention it only to give a sense of the scope of this 54-line poem, which is one of the best-known poems in the English language, and is perhaps unique in the annals of Western literature. It has made its mark on popular culture, from Orson Welles’s Citizen Kane, where it names Kane’s estate, Xanadu, thereby establishing the matrix for the whole film, to a hit song and film by Olivia Newton-John, Xanadu, subsequently made into a Broadway musical. It even provided that most vulgar of real-estate barons, Donald Trump, with the name for the nightclub, Xanadu, in his now defunct Atlantic City casino.

Romantic states of consciousness

I may well have read the poem prior to taking Romantic Literature with Professor Earl Wasserman in 1968-1969. But I have no memory of that. Though we didn’t study Coleridge until the second semester, it is probably best if I start my story with the first semester.

The course started with Keats. I decided to write my paper about a minor poem, “To–[Fanny Brawne],” and had delayed writing until the night before it was due. I was tired and my mind snapped. All of a sudden, I was typing a passage from one of Keats’ letters to Fanny, but I experienced the act of typing as though the words were my own. When I finished that passage, my mind was astir and found its way to the second stanza of “Ode on a Grecian Urn” – you know “Heard melodies are sweet, but those unheard Are sweeter...” I read those words as though they were my own.

I finished the paper, turned it in, got a grade, and...

I had a problem: What was that!? I didn’t know. But this was the 1960s and altered states of consciousness were all the rage, drug induced, but also meditation, and now it seems, the influence of late-night poetry on a tired mind.

Next up: Percy Bysshe Shelley, he who had declared poets to be “the unacknowledged Legislators of the world.” Again, I delayed writing my paper until the last minute. I was tired. The damned paper wrote itself, through me. But I didn’t experience anything of Shelley’s as though I had written it. It was different from the experience I had writing about Keats. The words just lined themselves up, one after the other and flowed down my arms, through my fingers, from the typewriter and onto the page. It was easy. No sweat. It was a good paper too.

Wordsworth was up in the Spring semester. That was the best paper I’d written as an undergraduate. Wasserman remarked that it “was a mature contemplation of the poem” – though I forget just what poem it was. There were no mental hijinks. I wrote it with the standard task-assemblage of a sentence or three here, a paragraph there, pace the room a bit, make a note or three, look up something, back to the typewriter, rinse, repeat, and so forth....it’s done.

And so it went with my “Kubla Khan” paper. The poem itself presents a number of problems. The first is: What is it about? There is no narrative. It has often been dismissed as word music. Word music it is, but that is no ground for dismissal.

Then we have Coleridge’s preface. He said the poem was incomplete. He had become lost in an opium reverie when two or three hundred lines came to him – “all the images rose up ... as things, with a parallel production of the correspondent expressions, without any sensation or consciousness of effort” – which was dashed when he was interrupted by a man from Porlock. When the Porlockian had gone, so had those two or three hundred lines. All that was left were the 54 lines of this, one of the most extraordinary poems in the world. In fact, nothing is obviously missing. If it weren’t for that preface, no one would even suspect that the poem was incomplete.

Critics have had various ways of dealing with the disparity between the poem itself and Coleridge’s claim. I invented another solution to the problem. It is easy and natural to interpret the second part of the poem as asserting that the poem is incomplete (I’ve appended a complete text to the end of this essay). The speaker says “Could I revive within me” (l. 42), clearly implying that he can’t, but if he could he would “build that dome in air” (l. 46). The dome is assumed to be Kubla’s pleasure-dome from the first part and is here being used as figure for the poem itself. That’s a perfectly respectable reading of those lines.

I pushed it a step further. I asserted that the poem paradoxically completes itself by asserting that it is incomplete. That kind of reading has it all. The wealthy English of Coleridge’s time were fond of placing newly built but incomplete or dilapidated structures in their gardens – “follies” they were called. An exquisitely dilapidated poem fit right in with that aesthetic. Moreover, such paradoxical readings fit right in with the rising tide of structuralist, post-structuralist and deconstructionist readings in American literary criticism. Despite all that, Wasserman, who was more traditional in his conceptual leanings, loved it.

A note about the image

The portrait of Kubla Khan was made by Araniko, a Nepalese artist, shortly after Kubla’s death in 1294. The image is from Wikimedia Commons and is in the public domain.

Araniko: https://en.wikipedia.org/wiki/Araniko.
Image: https://commons.wikimedia.org/w/index.php?curid=4126240.

I have overlaid it with an image I made in MacPaint on a Classic Macintosh in 1985.

Friday, August 25, 2023

Friday Fotos: May Flowers

Ramble on STUFF: intelligence, simulation, AI, doom, default mode, the usual

Time for another one of these. Lots on my mind. Difficult to sort it out. So, some quick hits just to get them all into the same space.

The discourse of intelligence

As far as I can tell the notion of intelligence that’s instantiated in such phrases as “intelligent life” or “artificial intelligence” is relatively new, dating back to the late 19th and early 20th centuries with the emergence of tests to estimate general cognitive capacity, that is, intelligence, as in “intelligence quotient.” This conception is pervasive in the modern world. It pervades the educational system and our discourse on race and public policy, including eugenics. AI seems to be a cross between this conception and computation, both the idea of computation and its realization in digital computers.

How does this discourse show up in fiction? I’m thinking of such characters as Sherlock Holmes or, much more recently, Adrian Monk. These characters are noted for their intelligence, but are otherwise odd and, in Monk’s case, extremely odd.

Bostrom’s simulation hypothesis seems very thin to me

The more I think about it, it is not at all clear just what he means by simulation. He focuses on human experience. But what about animals? They are part of human experience. Are animals just empty shells, existing only as appearances for humans? But what about their interactions with one another? What drives their actions? Are they (quasi) autonomous agents or are their actions entirely subordinate to the mechanisms which present appearances to humans? What happens when animals or humans reproduce? Do we have simulated DNA, fertilization, embryonic development? That seems rather granular and would impose a heavy computational burden. But if the simulation doesn’t do that, then how are the characteristics of the offspring related to those of its parents? What mechanism is used for that? Lots of problems.

Other posts on the simulation argument.

Performance and competence in AI test-taking

https://rodneybrooks.com/the-seven-deadly-sins-of-predicting-the-future-of-ai/ Rodney Brooks has introduced a distinction between competence and performance into his ongoing discussion of AI. It is common to benchmark large language models on standardized tests given to humans, such as advanced placement (AP) tests or professional qualification tests (e.g., bar exams). Those are expressions of performance. What underlying competencies can we validly infer from those performances? In particular, are they similar to the competencies we infer about humans from those tests? Given that both a human and an LLM score at a certain level on the AP physics exam, is the LLM’s underlying competence in physics comparable to the human’s?

Super-intelligence

It’s a vague notion, no? In chess, computers have had the advantage over humans for a decade and a half, but that’s not quite what we mean by superintelligence (SI), is it? Too specialized. When talking of SI we mean general intelligence, no? LLMs such as GPT-4 and Claude-2 have a much broader range of knowledge than any human. Super-intelligence? In a way ‘yes,’ but probably no, not really.

Geoffrey Hinton has imagined an intelligence that exceeds human intelligence by as much as human intelligence exceeds a frog’s. This trope is common enough, and easy enough to cough up. But what does it mean? There is a sense in which a contemporary Harvard graduate is a superintelligence with respect to a well-educated person of the 17th century or, for that matter, ancient Greece or Rome. That Harvard graduate understands things which would be unintelligible to those earlier people – Mark Twain used this (kind of) disparity as a theme for a book, A Connecticut Yankee in King Arthur’s Court.

Note that, no matter how a frog is raised and ‘educated,’ it will never have the mental capacities of a human. But, if you took an infant from the 17th century and time-traveled it into the contemporary world, it could become a Harvard graduate. When talking of a future computational superintelligence, which of these disparities (frog vs. human, 17th C. vs. now) are we talking about? Maybe both?

How does superintelligence develop? Through thought alone or does it interact with the world? I fear this could go on and on.

Why’d Time print Yudkowsky’s essay?

I was shocked when Time Magazine published Eliezer Yudkowsky’s essay at the end of March. Why? Yudkowsky’s vision of AI Doom strikes me as being mostly a creature of science fiction rather than being a plausible hypothesis about the future of AI. I didn’t think Time was in the business of publishing science fiction.

So, I was wrong about something. I’ve got to revise some priors, as the Bayesians say. But which ones, and how? For the moment I see no reason to revise my thoughts about AI existential risk. Which implies that I was wrong about Time, which I regard as a bastion of middle-brow opinion.

But, really, just what does extinction-via-AI mean to that audience? Of course, people fear job loss, that’s been around since, like, forever. And now we’ve got crazy behavior from LLMs. My guess is that AI Doom is continuous with and an intensification of those fears. Nothing special.

Why doom, after all? (a search for the real?)

Why is AI doom such a compelling idea to some people, particularly those in high tech? What problem does it solve for them? Well, since it is an idea about the future, and pretty much the whole future at that, let’s posit that that’s the problem it solves: How do we think about the future? Posit the emergence of (a godlike) superintelligence that is so powerful that it can exert almost total control over human life. Further posit that, for whatever reason, it decides that humankind is expendable. However, if – and it’s a BIG IF – we can exert control over this superintelligence (‘alignment’), then WE ARE IN CONTROL of the future.

Problem solved.

But why not just assume that we will be able to control this thing – as Yann LeCun seems to assume? Because that doesn't give us much guidance about what to do. If we can control this thing, whatever it is, then the future is wide open. Anything is possible. That’s no way to focus our efforts. But if it IS out to get us, that gives a pretty strong focus for our activity.

Do I believe this? How would I know? I just made it up.

Confabulation & default mode

One well-known problem for LLMs is that they tend to make things up. Well, it’s struck me that confabulation is the default mode for human mentation as well. What passes through your mind when you aren’t doing anything in particular, when you’re daydreaming? Some of those fleeting thoughts may be anchored in reality, but many will not be, many will be fabulation of one kind or another.

Neuroscientists have been examining brain activation during that activity. They’ve identified a distinct configuration of brain structures that supports it. This has been called the default mode network.

Hmmmm....

What is REALITY anyhow?

Indeed. More and more, reality is seeming rather fluid. As David Hays and I remarked in A Note on Why Natural Selection Leads to Complexity, reality is not perceived, it is enacted – in a universe of great, perhaps unbounded, complexity.