Friday, November 22, 2024
Making a (large) epoxy table
I've been watching Cam's videos for a few years. While I have no particular interest in these tables, I enjoy the videos, so I've watched a bunch of them. AND, this is about making something with your hands.
Timestamps:
00:00 Intro
00:08 Awkward Delivery
01:32 What I'm Building
02:32 Recent Customer Request
03:42 Oversized Machinery
05:16 Epoxy Best Practice
06:38 Don't Be Like Joe Pa
08:28 Emmy Winning Blacktail?
10:51 My One Contribution
12:57 Suspect Measuring
14:55 Workbench Modification
15:31 Epoxy Mix and Pour
18:45 Terrible Idea I Had
20:43 CNC and Potential Problems
22:08 Circle Cutting
25:25 Redwood OF
27:37 How to Clean Epoxy Buckets
29:53 UV Finishing
31:20 Reveal
Time for another ramble: Melancholy, Claude, Bloom, ChatGPT, Ring composition, and Other Stuff
It’s been a while since I’ve done one of these; May 29th was the last one. If you look over there to the right at the Blog Archive you’ll see I’ve been in a posting slump, with 3-figure monthly totals from January through June, then a dip to 61 for July, August: 18, September: 15, October: 30, and now 33 for November as I write this, and the month isn’t over. Maybe I’m pulling out of the slump.
Anyhow, I’m feeling a little backed up with things to post about, so it’s time to ramble on and see what’s up.
Melancholy, Mind (Mine), and Growth
That’s the tentative title for my next 3 Quarks Daily article. Starting back in November 2017 I’ve been making occasional posts about my monthly posting habits, which tend to drop during the winter. I’m thinking of using that as the point of departure for my next 3QD piece, which will go up on December 2nd.
During those down times I’m depressed to one degree or another (melancholy). But why? Since those down times have been in the winter, perhaps it’s seasonal affective disorder (SAD). But that doesn’t square with all of the evidence. There was no down-time in the winters of 2022-2023 and 2023-2024, but there was a slump in the summer of 2023. Something else is going on, and I think it has to do with creativity. To that end I want to discuss the ridiculous blither of tags here, 665 by November of 2023.
Claude
I’ve started working with Claude, Anthropic’s chatbot. I want to do some posts where I verify some of the work I’ve done with ChatGPT. I’m thinking of posts on stories, ontological structure, abstract definition, and the Girardian analysis of Jaws. I can then gather those into a working paper.
I also want to look at other things. At the moment I’m thinking of seeing how Claude summarizes longish documents. I’m thinking of the Hamlet chapter from Bloom’s Shakespeare book and Heart of Darkness.
Harold Bloom and GOAT literary critics
A year ago I began a series of posts on the theme of the greatest literary critics. I got bogged down in discussing Harold Bloom. It’s time to finish it off.
Bloom may well be as brilliant a literary critic as we’ve had in the last 50 or 60 years. But brilliance is one thing, greatness is another. Brilliance is a function of the individual, while greatness is a function of the relationship between an individual’s work and the arena in which they’re working.
I’m not sure about Bloom’s fit. While he’s got a wide readership, it’s not clear to me that scholars have taken up his work in any significant way. They may cite him – perhaps especially his concept of influence – but they don’t make much use of his ideas in their own work. But we’ve also got to consider his work in the larger public arena, where he is hands-down the most prominent literary critic. I’m not sure how to handle that.
However, if History wants to declare that Harold Bloom is one of the great all-time literary critics, maybe even the GOAT, what do I care? What would really bother me is if future critics should decide to take his work as a model and (attempt to) do more like it. Like most literary critics he’s neglected the study of form and he’s been deaf to the cognitive sciences. There’s little in his work that’s worth amplifying. It’s a dead end.
ChatGPT report
About a year or so ago I started writing a report summarizing my work on ChatGPT. I need to finish that report. I’d estimate that three-fourths or more of it is done. I’d like to be able to include some work with Claude. I don’t intend to do a lot on this, just enough to say that I’ve verified some things.
I’d like to finish this by the end of this year.
Why’s ring-form composition important?
That’s tricky. It has to do with the fact that literary works are extended in time, unlike the visual arts, which are static in time but extended in space. You can’t take the whole thing in at a glance like you can a painting.
There are constraints on how a literary work can unfold in time. There is a sense in which (the nature of) the end is inherent in the beginning. Ring-compositions are even more tightly constrained. Dylan Thomas consciously and deliberately plotted the ring-composition of the rhyme scheme in his “Author’s Prologue.” Rhyme is not about meaning; its patterns are arbitrary with respect to meaning. But Coleridge did not consciously work out the ring-compositions in “Kubla Khan,” nor did Conrad in Heart of Darkness. These patterns ARE NOT arbitrary with respect to meaning. On the contrary, they are central to how meaning is constituted.
Other Stuff
More Cobra Kai: Follow-up on my post where I explore the Freudian angle, saying a bit more about Girard, and extending that into history.
* *
More on meaning in LLMs: I’ve suggested that Ilya Sutskever mistakenly (and unknowingly?) conflates cognitive and semantic structure with the structure of the world and so suggests that robust next-token prediction requires knowledge of the world. In this post I argue that making that distinction is, in fact, difficult, and involves what I’ve been calling the word illusion. I first confronted the problem as an undergraduate when I was trying to understand the difference between the signified, a mental structure, and the referent, something in the world, of a sign. I may not have gotten deep intuitions about that until I began studying cognitive networks in graduate school.
* *
Gila-monster venom and computational irreducibility: The idea is to start with a NYTimes article on drug discovery that starts with Gila-monster venom and ends up with Wolfram’s concept of computational irreducibility. This is about search, computation, and the complex and irregular structure of the (natural) world.
* *
My work with Ramesh: Notes on conceptual ontology and hypergraphs in conceptual space.
* *
LLMs and literary study: Can we use LLMs to analyze the thematic structure of literary texts?
In particular, can we use them to examine Bloom’s thesis about Shakespeare as “inventing” the human? Are there themes that appear first in Shakespeare? We need more than Bloom’s vigorous assertion on this. We need to examine the thematic structure of prior texts and of Shakespeare’s texts and show that there are things new in Shakespeare. Do those new things then continue in texts after Shakespeare? To do this properly we need to examine a lot of texts.
I’d also like to know if LLMs could be used to find ring-composition in literary texts. It is by no means obvious to me that they can.
It will be interesting to see how AI affects medical practice
Not too many years ago Geoffrey Hinton confidently predicted that radiologists would soon be replaced by AI. That didn't happen. But now...
The New York Times reports on a small study (50 doctors, a mix of residents and attendings) in which ChatGPT-4 outperformed physicians in diagnosis based on a case report:
...doctors who were given ChatGPT-4 along with conventional resources did only slightly better than doctors who did not have access to the bot. And, to the researchers’ surprise, ChatGPT alone outperformed the doctors.
“I was shocked,” Dr. Rodman said.
The chatbot, from the company OpenAI, scored an average of 90 percent when diagnosing a medical condition from a case report and explaining its reasoning. Doctors randomly assigned to use the chatbot got an average score of 76 percent. Those randomly assigned not to use it had an average score of 74 percent.
The study showed more than just the chatbot’s superior performance.
It unveiled doctors’ sometimes unwavering belief in a diagnosis they made, even when a chatbot potentially suggests a better one.
Of course, reading an x-ray and analyzing a case report are very different activities. Still...
There's more at the link, including a brief look at INTERNIST-1, an old-school AI system developed in the 1970s for diagnosis. It's clear to me that AI is here to stay, in general, and certainly in medicine. What's not at all clear is just how it's going to be used. Obviously, that will change over time as AI capabilities develop. While thinking about that you might look at Hollis Robbins' post, AI and The Last Mile:
While we worry about AI replacing human judgment, the real story may be how AI is creating a market for that judgment as a luxury good, available only to those who can pay for the “last mile” of human insight. What do I mean by this?
The challenge of mail delivery from the post office to each home or from a communication hub to each individual end user is known as a “last mile” problem. In the paper newspaper era, the paper boy was the solution to the last mile problem, hawking papers on street corners or delivering papers house by house in the early morning before school. The postal carrier is a solution to the last mile problem. DoorDash is a solution to the last mile problem in the food business. [...]
What I’m calling “the last mile” here is the last 5-15% of exactitude or certainty in making a choice from data, for thinking beyond what an algorithm or quantifiable data set indicates, when you need something extra to assure yourself you are making the right choice.
Thursday, November 21, 2024
3QD Catch-Up: Affective Technology, Georgia on My Mind
Whenever I publish a new article at 3 Quarks Daily I also publish a somewhat shorter post here at New Savanna. That post links to the 3QD post and comments on it, or perhaps adds some further thinking. But that doesn’t always happen. I’ve missed giving notice for my 3QD articles in July, August, and September.
The articles for July and August continue the series on Affective Technology that I began in June. That first article was about Poems and Stories. In July I published Emotion Recollected in Tranquility. The phrase is from Wordsworth: “I have said that poetry is the spontaneous overflow of powerful feelings: it takes its origin from emotion recollected in tranquility.” That is, there is an emotion that we experience at some moment in time. Then, at a later time, when we’re doing something else, or perhaps nothing beyond lazing about as we let our mind wander as it will, that emotion calls to us. We hear it, take hold of it in words, and bring it back, creating a poem in the process. But before that I talk about state-dependent memory, the idea that our ability to remember something depends, in part, on our mental state at the time of remembering. Remembrance works best when our current mental state resonates with our mental state during the original experience; this resonance is (hypothesized to be) a function of neurochemistry. But what if you can’t establish that resonance? Does that mean that your memory of that experience is gone, at least until you can establish resonance? I conclude by discussing Shakespeare’s Sonnet 129, “The Expense of Spirit.”
That sets us up for the third article in the series, Coherence in the Self, which I published in August. If our memories are neurochemically sensitive, that implies that our ability to remember the events of our lives depends, in part, on establishing neurochemical resonance with past states of mind and the events that engendered them. If we can’t establish that resonance, then those events are lost. Our autobiographical self is shattered, a topic I take up with a discussion of dissociative identity disorder (DID), aka “split personality.” I then go on to argue that, by presenting us with a wide range of emotions, literary texts (and, by implication, other expressive culture) help us to construct neurochemical “scaffolding” that helps us keep in touch with our past. This strikes me as more fundamental than the (moral) improvement folks are forever attributing to (good) literature, though there may be a bit of that as well. Who knows? After discussing play I conclude by considering Kenneth Burke on how we use literary texts to make sense of our lives.
September’s post is a bit different. I discuss five recordings of “Georgia on My Mind”: the original by Hoagy Carmichael, Ray Charles’s 1960 version, a harmonically-dense version by Jacob Collier, a beautiful jazz rendition by Dexter Gordon (tenor sax), and, to conclude, another jazz version, this one by trumpeter Lew Soloff. That last video is why I wrote the post. Notice that he begins with an a cappella trumpet line he took from the intro to the Ray Charles version.
Soloff’s style is interesting. It’s not swing, bebop, hard bop or cool, but it’s not modal or free either. In a way, it’s a kind of 1960s mainstream, but extended and opened-up (such as Soloff’s passage with only his mouthpiece or his use of the plunger mute) with a freedom that hardly existed before free jazz. That’s what makes it so interesting, that and Soloff’s soulfulness.
On the ethnographic study of expressive culture: If a Martian anthropologist were to study American culture, would she assign special privilege to the canonical texts of the Western traditions?
This was originally posted to The Valve on 7 Feb 2007 under the title: Discipline: The Aesthetic and the Ethnographic. I’ve edited it slightly so as to insert the proverbial Martian anthropologist into the mix. The distinction I’m making here, between the aesthetic and the ethnographic approach to culture, should be compared with a contrast I've made between ethical and naturalist criticism.
I think that judgments about good and bad literature are arrived at on an intuitive basis and communally negotiated in terms of richness, complexity, reflexivity, and so forth, while quoting appropriate passages here and there to illustrate what’s being talked about. But, such examination and discussion are applied ONLY to those texts judged to be good. It is simply assumed that the bad ones do not measure up on those rather vague standards.

One of the standard clichés is that good texts can support many readings; that’s a measure of their goodness. And, what do you know, we have many readings for the good texts. That’s because the community that believes this doesn’t bother to provide readings for bad texts.

As for the value of these readings, my guess is that it has more to do with the critic than with the text or texts being criticized.

What we do not have anywhere, as far as I know, are explicit criteria applied to a wide variety of texts with the evaluations being done in some way that permits objective comparison between valuations.
Coca-Cola and AI: It's the Real Thing! [Still] [Media Notes 144]
Alex Vadukul, Coca-Cola’s Holiday Ads Trade the ‘Real Thing’ for Generative A.I., NYTimes, Nov. 20, 2024. The opening two paragraphs:
With temperatures dropping, nights growing longer and decorations starting to appear in store windows, the holidays are on their way. One of the season’s stalwarts, however, is feeling a little less cozy for some people: Coca-Cola, known for its nostalgia-filled holiday commercials, is facing backlash for creating this year’s ads with generative artificial intelligence.
The three commercials, which pay tribute to the company’s beloved “Holidays Are Coming” campaign from 1995, feature cherry-red Coca-Cola trucks driving through sleepy towns on snowy roads at night. The ads depict squirrels and rabbits peeking out to watch the passing caravans and a man being handed an ice-cold bottle of cola by Santa Claus.
And then:
The internet pile-on arrived swiftly as consumers decried the commercials as uncanny valleyesque perversions of the company’s classic ads.
“Coca-Cola just put out an ad and ruined Christmas,” Dylan Pearce, one of the campaign’s many critics, said on TikTok, adding, “To put out slop like this just ruins the Christmas spirit.”
“This is legit heartbreaking,” another user, De’Vion Hinton, posted on X. “Coca Cola has been the gold standard in branding and advertising for decades.”
Alex Hirsch, an animator and the creator of the Disney series “Gravity Falls,” expressed a sentiment that other creative professionals have shared online, noting that the brand’s signature red represented the “blood of out-of-work artists.”
The company, however, is sticking by the ads.
“The Coca-Cola Company has celebrated a long history of capturing the magic of the holidays in content, film, events and retail activations for decades around the globe,” a spokesman for the company said in a statement provided to The New York Times. “This year, we crafted films through a collaboration of human storytellers and the power of generative A.I.”
Ah, remember back in the previous century when Coke decided to revamp its formula and produce New Coke? Ah, those were the good old days. But the best days are yet to come!
There's more at the link. Me? I think I'm going to have a good old Diet Coke.
Wednesday, November 20, 2024
Why is Cobra Kai better than one might have expected? [Media Notes 143a]
When Cobra Kai first showed up on Netflix back in 2020 I noticed it – Oh, the karate kid continues, that’s nice – but had no interest in watching it. But then I read something, perhaps this piece in The New York Times, indicating that it was actually rather good. So I decided to give it a try.
I liked it, I actually liked it. It’s a martial arts movie, and I’ve got a minor interest in such movies. But it’s got more going for it.
You may remember that back in 1984 there was a film called The Karate Kid, which was about a teenaged boy who is mentored by a middle-aged Japanese handyman and karate sensei. The film did fairly well and was followed by a couple of sequels. I have a vague recollection of having watched, I don’t know, that last half of the original film on TV at some time. But that was the extent of my involvement.
Cobra Kai is a sequel to those films, but with a twist. Rather than pick up where the original series left off, it starts when the teenaged protagonists of the original film have reached early middle age. Here’s how the Wikipedia entry explains the premise:
Thirty-four years after being defeated by Daniel LaRusso in the 1984 All-Valley Karate Tournament at the end of The Karate Kid (1984), Johnny Lawrence suffers from alcoholism and depression. He works as a part-time handyman and lives in an apartment in Reseda, Los Angeles, having fallen far from his wealthy lifestyle in Encino. He has an estranged son named Robby, from a previous relationship, whom he has abandoned. In contrast, Daniel is now the owner of a successful car dealership and is married to co-owner Amanda with whom he has two children: Sam and Anthony. However, Daniel often struggles to meaningfully connect with his children especially after his friend and mentor Mr. Miyagi passed away prior to the series’ beginning.
After using karate to defend his teenage neighbor Miguel Diaz from a group of bullies, Johnny agrees to teach Miguel the way of the fist and re-opens Cobra Kai. The revived dojo attracts a group of bullied social outcasts who find camaraderie and self-confidence under Johnny’s tutelage. The reopening of Cobra Kai reignites Johnny’s rivalry with Daniel, who responds by opening the Miyagi-do dojo, whose students include Sam and Robby, leading to a rivalry between the respective dojos.
Whereas The Karate Kid had Johnny Lawrence as the bad guy (“villain” doesn’t seem quite the word) and Daniel LaRusso as the good guy, Cobra Kai is not so polarized. Both men get to be jerks and both get to be decent and sympathetic. But they maintain their antagonism and work it out through their respective dojos, Cobra Kai and Miyagi-do.
So we’ve got action on two strata, among the adults and among the teenagers. The teens are motivated by their own romances and beefs and join one dojo or another in the process of working things through. But Johnny and Daniel pick up the conflict from their youth and use their dojos as a vehicle for working it out. That conflict is cloaked in different martial arts styles: Cobra Kai is blunt, aggressive, and brutal, while Miyagi-do strives for ‘balance’ and is more ‘spiritual.’ The stylistic difference is obvious in the fights – of which there are many – and is the occasion for much conversation throughout the series. There’s even an attempt to blend the two styles in opposition to yet a third dojo which shows up in the middle of the series.
Which is to say, things get complicated. More actors, more complex interactions, kids moving from one dojo to another as their relationships with one another and with their elders shift. By the second or third season the two dojos are fighting for karate dominance in the San Fernando Valley. By the sixth season – in process, 10 episodes are online, the last five will come next year – we’re talking about dominance of world-wide (teen) karate. Somewhere in this process we see genuine evil, with people getting seriously injured and even killed.
So, what do we have? A conflict between two teenage boys gets reignited in their early middle age where it becomes amplified into a war for dominance of the karate world. On the one hand, it’s rather ridiculous, adults using their dojos as vehicles for working out their conflicts. How Freudian! [Indeed. Think about that for a minute, think about it very seriously.] The series is aware of this and indicates that awareness in various ways.
It’s brutal, but also lighthearted and funny. And there’s plenty of fighting, well-choreographed, and well filmed. It’ll be interesting to see how things end.
* * * * *
An exercise for the reader: Like so many things these days, this series just begs for a Girardian interpretation. On the one hand we’ve got mimetic desire operating on individual relationships. But we’ve also got larger scale sacrificial rhythms revealing themselves in the various gang battles and organized tournaments. Who are the scapegoats?
Scientific American throws shade on Elon
Well, not so much the magazine as their senior opinion editor, Daniel Vergano, who wrote an article entitled, Elon Musk Owes His Success to Coming in Second (and Government Handouts) (Sept. 13, 2023).
The first three paragraphs:
The superstar success of Elon Musk, the world’s richest man, with a fortune now estimated at $246 billion, poses quite a puzzle. How can someone whose wealth and fame rest on innovative technologies—electric cars and rocket launches—be such a doofus? Never mind his cursing at advertisers, getting high on a podcast, fostering white supremacists or challenging another peculiar rich guy to fistfights, the globe’s most famous technology mogul has spent much of the past two years blowing $44 billion on Twitter (now X) only to drive it into the ground.
And yet this same character remains critical to Ukraine’s defense against Russia, U.S. national security space launches and the American automotive industry’s hopes of withstanding a flood of cheap Chinese electric vehicles. His Dragon capsule will retrieve astronauts stranded on the International Space Station by his doddering space competitor, Boeing, NASA announced in August. This same genius crudely proposed impregnating the pop star Taylor Swift, in front of millions of people, following the second presidential debate in September. Again, how can someone be so essential, so rich and, at the same time, such a dingbat?
A look at history and economics suggests that Musk, now age 53, won his fortune from a “second-mover” advantage, well known in innovation scholarship. With regulation, marketing and basic principles established, a savvy second comer—think Facebook trampling MySpace or Google crushing Yahoo—can take over a burgeoning industry with a smart innovation pitched at a ripe moment. Musk also enjoyed a great deal of good fortune, otherwise known as handouts from Uncle Sam and less astute competitors.
The rest of the article fleshes out that last paragraph.
I wouldn’t use the words “doofus” or “dingbat” but, yes, he’s a bit strange. So?
As for the rest, OK. Am I an Elon-fanboy? No. That’s not how I am.
Does he mess up? Yes, what he's done with Twitter is not a success story. And no matter how much he blathers about free speech, free speech is not his game at all. He's egocentric to an incredible degree. Still, look at what he's done.
Vergano’s critique is weak tea.
How far can next-token prediction take us? Sutskever vs. Claude
One of my main complaints about the current regime in machine learning is that researchers don’t seem to have given much thought to the nature of language and cognition independent from the more or less immediate requirements of crafting their models. There is a large, rich, and diverse literature on language, semantics, and cognition going back over a half century. It’s often conflicting and thus far from consensus, but it’s not empty. The ML research community seems uninterested in it. I’ve likened this to a whaling voyage captained by a man who knows all about ships and little about whales.
As a symptom of this, I offer this video clip from a 2023 conversation between Ilya Sutskever and Dwarkesh Patel in which Sutskever argues that next-token prediction will be able to surpass human performance:
Here's a transcription:
I challenge the claim that next-token prediction cannot surpass human performance. On the surface, it looks like it cannot. It looks like if you just learn to imitate, to predict what people do, it means that you can only copy people. But here is a counter argument for why it might not be quite so. If your base neural net is smart enough, you just ask it — What would a person with great insight, wisdom, and capability do? Maybe such a person doesn't exist, but there's a pretty good chance that the neural net will be able to extrapolate how such a person would behave. Do you see what I mean?
Dwarkesh Patel
Yes, although where would it get that sort of insight about what that person would do? If not from…
Ilya Sutskever
From the data of regular people. Because if you think about it, what does it mean to predict the next token well enough? It's actually a much deeper question than it seems. Predicting the next token well means that you understand the underlying reality that led to the creation of that token. It's not statistics. Like it is statistics but what is statistics? In order to understand those statistics, to compress them, you need to understand what is it about the world that creates this set of statistics? And so then you say — Well, I have all those people. What is it about people that creates their behaviors? Well they have thoughts and their feelings, and they have ideas, and they do things in certain ways. All of those could be deduced from next-token prediction. And I'd argue that this should make it possible, not indefinitely but to a pretty decent degree to say — Well, can you guess what you'd do if you took a person with this characteristic and that characteristic? Like such a person doesn't exist but because you're so good at predicting the next token, you should still be able to guess what that person would do. This hypothetical, imaginary person with far greater mental ability than the rest of us.
The argument is not clear. One thing Sutskever seems to be doing is aggregating the texts of ordinary people into the text of an imaginary “super” person that is the sum and synthesis of what all those ordinary people have said. But all those ordinary individuals do not necessarily speak from the same point-of-view. There will be tensions and contradictions between them. The views of flat-earthers cannot be reconciled with those of standard astronomy. But this is not my main objection. We can set it aside.
My problem comes with Sutskever’s second paragraph, where he says, “Predicting the next token well means that you understand the underlying reality that led to the creation of that token.” From there he works his way through statistics to the thoughts and feelings of people producing the tokens. But Sutskever doesn’t distinguish between those thoughts and feelings and the world toward which those thoughts and feelings are directed. Those people are aware of the world, of the “underlying reality,” but that reality is not itself directly present in the language tokens they use to express their thoughts and feelings. The token string is the product of the interaction between language and cognition, on the one hand, and the world, on the other.
Sutskever seems to be conflating the cognitive and semantic structures inhering in the minds of the people who produce texts with the structure of the world itself. They are not at all the same thing. A statistical model produced through next-token prediction may well approximate the cognitive and semantic models of humans, but that’s all it can do. It has no access to the world in the way that humans do. That underlying reality is not available to it.
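To make the point concrete, here’s a toy sketch – mine, purely illustrative – of next-token prediction as text statistics. The corpus and its claims are invented; a real LLM is incomparably more sophisticated, but its epistemic situation is the same: it sees only what people have written, not the world they wrote about.

```python
# A toy "next-token predictor": bigram counts over an invented corpus.
from collections import Counter, defaultdict

corpus = (
    "the earth is round . "
    "the earth is flat . "      # flat-earther text sits beside astronomy
    "the earth orbits the sun . "
).split()

# Count how often each token follows each preceding token.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def predict_next(token: str) -> str:
    """Return the most frequent continuation of `token` in the corpus."""
    return bigrams[token].most_common(1)[0][0]

# The model reproduces the statistics of what people SAID, contradictions
# included, with no access to which claim is true of the world.
print(bigrams["is"])        # Counter({'round': 1, 'flat': 1})
print(predict_next("the"))  # 'earth'
```

The model captures the distribution of the speakers’ expressed beliefs; the “underlying reality” enters only insofar as the speakers’ words reflect it.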
What does Claude have to say about this?
I gave Claude 3.5 Sonnet Sutskever’s second paragraph and had a conversation about it. I wanted to see if it could spot the problem. Claude saw various problems, but couldn’t quite find its way to what I regard as the crucial point. In the end I had to tell it that Sutskever failed to distinguish between the structure of the world and the semantic and cognitive structure expressed by the text.
My text is set in bold Courier while Claude's is plain Courier.
Tuesday, November 19, 2024
Neil deGrasse Tyson talks about his life, the cosmos, and stuff
From the YouTube page:
For a lot of people black holes and string theory were topics that were filed in the mental box labelled “things I will never be able to get my head around”. However, that all changed when Neil deGrasse Tyson began appearing on TV screens.
0:00 Intro
02:02 Early context
05:47 Your parents direct influence
12:39 Your father being racially abused
23:36 How to decide what I want to do with my life
26:52 What are you concerned about with the human race
30:05 Social media polarisation
42:40 Do we matter
47:48 Where does happiness and meaning come from?
54:46 Whats required for a happy life for you?
01:00:17 The perfect way to tell stories
01:13:39 What do you struggle with
01:17:32 Mental health
01:30:04 The last guest’s question
Monday, November 18, 2024
The heterarchical generation of movement in the brain
"during the generation of behavior, the brain functions like a heterarchical system consisting of interconnected regions that flexibly influence each other through processes with different dynamics... at a particular timescale, their respective functions can even be redundant."
— Luiz Pessoa (@PessoaBrain) November 18, 2024
Highlights of the linked article
- Investigating constrained behavior during a single point in time obscures the unique contributions of different brain regions.
- Bidirectional connections enable flexible interactions between regions, providing a substrate for partial functional redundancy.
- Redundancy can be expressed in several ways, including several structures subserving similar functions at a point in time.
- Theories and experiments that integrate across behaviors and timescales may help untangle unique functional contributions.
Abstract
The nervous system evolved to enable navigation throughout the environment in the pursuit of resources. Evolutionarily newer structures allowed increasingly complex adaptations but necessarily added redundancy. A dominant view of movement neuroscientists is that there is a one-to-one mapping between brain region and function. However, recent experimental data is hard to reconcile with the most conservative interpretation of this framework, suggesting a degree of functional redundancy during the performance of well-learned, constrained behaviors. This apparent redundancy likely stems from the bidirectional interactions between the various cortical and subcortical structures involved in motor control. We posit that these bidirectional connections enable flexible interactions across structures that change depending upon behavioral demands, such as during acquisition, execution or adaptation of a skill. Observing the system across both multiple actions and behavioral timescales can help isolate the functional contributions of individual structures, leading to an integrated understanding of the neural control of movement.
Bond, James Bond [Media Notes 142]
So far we’ve had 27 James Bond films, 25 by Eon Productions, and two others. I’ve seen many of them in theaters, perhaps even a majority, but certainly not all of them. Amazon has gathered the 25 Eon films together so we can conveniently binge them in chronological order or in whatever order we choose. In the past few weeks I’ve watched through them all, more or less in order.
When I started watching them I had no intention of publishing remarks about any of them, much less something about the lot of them. But then, as sometimes happens, I decided that perhaps I should write something. It happened like this:
I started watching from the beginning, with Dr. No (1962). This introduced much of the Bond nexus, such as the gun-barrel credit-opener, the women, the suits, the martini, the physical prowess, though not the gadget fetishism, which came two films later. It also introduced SPECTRE (Special Executive for Counter-intelligence, Terrorism, Revenge and Extortion) as a nebulous international criminal enterprise dedicated to enriching itself through elaborate schemes.
Then I watched the second one, From Russia with Love (1963). SPECTRE continues, and we get to see Robert Shaw as an athletic villain (13 years later he played Quint in Jaws). It became a smash success and was, and still is, regarded as one of the best Bond films. I read that judgment in the film’s Wikipedia entry, which I read after I saw the film (something I do regularly), and I concur. There’s minimal gadgetry and none of the big-tech nonsense which infested some of the Roger Moore films (I’m thinking of Moonraker in particular).
Next we have Goldfinger (1964). This may be the first Bond I saw in the theater, or was it From Russia with Love? I don’t recall, but whichever it was, the theater was packed. Auric Goldfinger, a dealer in gold bullion, is the villain. He’s a freelancer unaffiliated with any organization. This film introduced the first of the gadget-tricked-out cars, the iconic Aston Martin DB5. Goldfinger also has an extensive pre-credit sequence which is unconnected with the rest of the film. That’s one of the features I had in mind when I started on this Bond orgy.
I was under the impression that that had become a feature of all the Bond films. Not quite. Most, perhaps all, later Bond films do have a long pre-credit sequence, but those sequences aren’t necessarily as disconnected from the plot as this one is. I suppose I should have been keeping notes on this, but I wasn’t, alas. In any event, this feature interests me because it’s characteristic of the Indiana Jones films and, in the case of The Temple of Doom, the pre-credit sequence is better than the film itself. The disconnected pre-credit sequence is also a feature of the delightful True Lies, which is somewhat of a parody of the spy-adventure film, perhaps even a comic critique of it.
That’s the first three Bond films, all with Sean Connery as Bond. He established Bond as a film character and was its best incarnation. If you’ve never seen any of the Bond films, watch these three, perhaps starting with From Russia with Love. After that I suggest you watch one of the Daniel Craig films (Skyfall, but I’m not sure it makes much difference), the most recent incarnation of Bond, and one of the Pierce Brosnan films (GoldenEye?), the penultimate Bond. After that, watch as you please. Roger Moore played Bond in seven films, more than any other. Some of the films were OK (perhaps Live and Let Die, with its Paul McCartney theme song), some were awful. He was the third Bond. Timothy Dalton played Bond in two films; he was the fourth Bond. George Lazenby was the second Bond, and had only one film.
It was a minor feature of that one film, On Her Majesty’s Secret Service (1969), that gave me the idea of writing something about Bond. The great Louis Armstrong shows up on the soundtrack singing a love ballad, “We Have All the Time in the World,” that was written for the film. It serves as the backdrop for Bond's courtship of the woman he rescues from suicide (Diana Rigg) in the opening segment. The song also appears over the end credits of the most recent (and likely the last) Bond film (he dies at the end), No Time to Die. There’s more to the film than Satchmo on the soundtrack and recent opinion ranks it as one of the best Bond films. I concur.
That leaves me with a last set of remarks. Bond kills a lot of people in these films, and he does it well. Others kill people too and many of them are expert at it. That is, it’s not just the killing, it’s the expertise in the craft. Why? I don’t think it’s sufficient to point out that, after all, death is a signal feature of human existence, one that’s the source of a great deal of anxiety and grief. How do we get from that fact to the appearance of death in so many of our stories?
And Bond, he’s not only expert in killing. He’s also a womanizer. Is that combination a mere contingency or is it central to the franchise’s appeal?
Sunday, November 17, 2024
In search of AI-shaped holes (and roles)
“There are no AI-shaped holes lying around”
— Matt Clifford (@matthewclifford) September 12, 2024
-> this is how I reconcile the facts that (a) AI is already powerful and (b) it’s having relatively little impact so far
Making AI work today requires ripping up workflows and rebuilding *for* AI. This is hard and painful to do…
* * * * *
The thing about chess is that the structure of the search space is well-understood. The space may be unimaginably huge, but it's well-structured, and we know a great deal about searching such structures.
What's the search space for practical fusion power? Or the cure for pancreatic cancer? For that matter, what's the search space for the meaning of Shakespeare's Hamlet?
Search is only one aspect of a task; there are others – see my posts on chess and language. In general, what are the computational requirements of a task? If we want to find AI-shaped holes in organizations, we need to look at the tasks being performed and how those tasks are assigned to roles. In order to find AI-shaped holes we may have to refactor the way the computational requirements of tasks are assigned to roles so as to create AI-shaped roles.
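Here’s a toy sketch of what such refactoring might look like. All the task names and capability tags are invented for illustration; the point is only that roles are bundles of tasks, and regrouping tasks by their computational requirements is what creates the AI-shaped roles.

```python
# Toy sketch: tasks carry computational requirements; regrouping them by
# requirement turns one mixed human role into an AI role plus a human role.
# Task names and tags are hypothetical.

tasks = {
    "triage_tickets":   {"pattern_matching"},
    "draft_responses":  {"text_generation"},
    "approve_refunds":  {"human_judgment"},
    "escalation_calls": {"human_judgment"},
}

AI_SUITED = {"pattern_matching", "text_generation"}

def refactor(tasks: dict[str, set[str]]) -> tuple[set[str], set[str]]:
    """Split tasks into an AI-shaped role and a residual human role."""
    ai_role = {t for t, reqs in tasks.items() if reqs <= AI_SUITED}
    return ai_role, set(tasks) - ai_role

ai_role, human_role = refactor(tasks)
print("AI-shaped role:", ai_role)    # triage_tickets, draft_responses
print("Human role:", human_role)     # approve_refunds, escalation_calls
```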
Saturday, November 16, 2024
In the beginning: OpenAI Email Archives (from Musk v. Altman)
As part of the court case between Elon Musk and Sam Altman, a substantial number of emails between Elon, Sam Altman, Ilya Sutskever, and Greg Brockman have been released.
I have found reading through these really valuable, and I haven't found an online source that compiles all of them in an easy to read format. So I made one.
Here are the first four emails:
Sam Altman to Elon Musk - May 25, 2015 9:10 PM
Been thinking a lot about whether it's possible to stop humanity from developing AI.
I think the answer is almost definitely not.
If it's going to happen anyway, it seems like it would be good for someone other than Google to do it first.
Any thoughts on whether it would be good for YC to start a Manhattan Project for AI? My sense is we could get many of the top ~50 to work on it, and we could structure it so that the tech belongs to the world via some sort of nonprofit but the people working on it get startup-like compensation if it works. Obviously we'd comply with/aggressively support all regulation.
Sam
Elon Musk to Sam Altman - May 25, 2015 11:09 PM
Probably worth a conversation
Sam Altman to Elon Musk - Jun 24, 2015 10:24 AM
The mission would be to create the first general AI and use it for individual empowerment—ie, the distributed version of the future that seems the safest. More generally, safety should be a first-class requirement.
I think we’d ideally start with a group of 7-10 people, and plan to expand from there. We have a nice extra building in Mountain View they can have.
I think for a governance structure, we should start with 5 people and I’d propose you, Bill Gates, Pierre Omidyar, Dustin Moskovitz, and me. The technology would be owned by the foundation and used “for the good of the world”, and in cases where it’s not obvious how that should be applied the 5 of us would decide. The researchers would have significant financial upside but it would be uncorrelated to what they build, which should eliminate some of the conflict (we’ll pay them a competitive salary and give them YC equity for the upside). We’d have an ongoing conversation about what work should be open-sourced and what shouldn’t. At some point we’d get someone to run the team, but he/she probably shouldn’t be on the governance board.
Will you be involved somehow in addition to just governance? I think that would be really helpful for getting work pointed in the right direction and getting the best people to be part of it. Ideally you’d come by and talk to them about progress once a month or whatever. We generically call people involved in some limited way in YC “part-time partners” (we do that with Peter Thiel for example, though at this point he’s very involved) but we could call it whatever you want. Even if you can’t really spend time on it but can be publicly supportive, that would still probably be really helpful for recruiting.
I think the right plan with the regulation letter is to wait for this to get going and then I can just release it with a message like “now that we are doing this, I’ve been thinking a lot about what sort of constraints the world needs for safety.” I’m happy to leave you off as a signatory. I also suspect that after it’s out more people will be willing to get behind it.
Sam
Elon Musk to Sam Altman - Jun 24, 2015 11:05 PM
Agree on all
There's much more at the link. Amazing stuff.
Reading through the rest I'm reminded of a line from The Blues Brothers: "We're on a mission from God."
Hossenfelder on stagnation: “Science is in trouble and it worries me”
Show notes:
Innovation is slowing, research productivity is declining, scientific work is becoming more [less] disruptive. In this video I summarize what we know about the problem and what possible causes have been proposed. I also explain why this matters so much to me.
00:00 Intro
00:33 Numbers [1]
06:33 Causes
10:32 Speculations [2]
16:25 Bullshit Research
22:06 Epilogue
I've given a fair amount of attention to this topic under the label, stagnation. I've also written a working paper: Stagnation and Beyond: Economic growth and the cost of knowledge in a complex world, July 25, 2019.
[1] Here she opens by citing results from Bloom et al. Are Ideas Getting Harder to Find? (2017), and other more recent papers.
[2] Mentions Fast Grants, with clips showing Patrick Collison and Tyler Cowen.
Friday, November 15, 2024
Rez Ball: Community, Alcoholism, and Basketball [Media Notes 141]
A week or three ago I watched Rez Ball streaming on Netflix. It’s about basketball on the Navajo reservation (the rez), where basketball is close to a civic religion – think of football in West Texas as depicted in Friday Night Lights. It was directed by Sydney Freeland, a Navajo, and has an ensemble cast of people you never heard of because, well, it’s not that kind of film. But you may have heard of the producer, LeBron James.
Note, however, that the phrase “rez ball” also, and more fundamentally, indicates the style of basketball cultivated on the rez: “Run fast, shoot fast, don’t ever stop.” In the middle of the film we’ll see the Cuska Warriors playing against a seven-second shot clock which they sound out.
The film opens with credits playing on the screen while we hear voices commenting on some kids playing basketball. Then we zoom in on a desert while a voice utters, “From the shadow of beautiful Shiprock, coming at you with 50,000 watts” (as Shiprock, a recurring image, comes into view in the center of the frame). Now we see the kids, two of them playing one on one, with their parents commenting. One boy takes a three-point shot – “Jimmy with the three.” It bounces off the rim, and we jump cut to a third scene where a young man rebounds the ball. He takes it out and then drives in to make a layup. Finally, we see the title, “Rez Ball,” over a black screen.
Obviously – at least in film logic – the two kids in the second scene have grown into the young men we see in the third. In the next minute or three we learn that one of the young men, Nataanii Jackson (the Braided Assassin), is a basketball star who sat out his junior year because his mother and sister (we saw them in the first scene) had been killed in a drunk-driving accident. The other young man is his best friend, Jimmy Holiday (Kauchani Bratt), also a star player.
It's the beginning of a new season. The Cuska Warriors are about to play their first game of the season against the Gallup Bengals. They win, just barely. Their coach, Heather Hobbs (Jessica Matten), who’d been a star in college and then played professionally in the WNBA, is not happy – they’re ranked second in state – and lets them know it.
Their next game is against the Santa Fe Catholic Coyotes, who’d won the state championship the previous year. They crush the Warriors by 70. In the locker room after the game we learn why Nataanii Jackson hadn’t shown up to play. He’d killed himself.
At this point, roughly 25 minutes into the film, we know how the rest of the story will go, at least in rough outline. How do we know? It’s a sports film, and these films follow certain conventions. The Warriors will keep losing for a while, then they’ll get it together (a process that includes a day herding sheep), and come back to meet the Coyotes for the state championship. Do I have to tell you who wins that game?
The game is close at the half and stays close through the second half. Santa Fe is ahead, 78-77, as Holiday takes a shot at the final buzzer. It doesn’t go in, but Holiday was fouled, so he gets three free throws. The first one goes in. Tie game.
Now the film switches gears.
I need to tell you about Holiday’s mother, Gloria Holiday (Julia Jones). She grew up with Coach Hobbs. They played ball together. Gloria was the better player, a natural. But...well, we’re not told, nor do the details matter. As the film opens she’s a single mom and an alcoholic. This and that, gets a job, starts going to AA meetings, gets a car, a junker. Decides to go to the championship game, which is off the rez. It’s one thing for her to drive on the rez, but once she gets off the rez she’s taking a risk because she has outstanding DUIs. Just as she arrives at the arena her car starts blowing smoke. That gets her pulled over by the police and then she’s taken to jail. But, knowing that her son is playing in the game, one of the officers sits down with her with a small radio and they listen to the final quarter.
We know all that as Jimmy gets ready to take his second shot. The lights go out, a spot comes on, shining directly on Jimmy over the basket from the left. The court disappears. A second spotlight shines over Jimmy from the right. We can see his mother sitting in a lawn chair: “That’s the thing about Natives. No matter how hard we try, we always find a way to lose. It’s in our blood.” He misses his second shot. We’re back in the court, fully lit. As Jimmy takes a couple of preparatory bounces the court once again goes black and his mother reappears: “The higher you go, the greater the fall. I just don’t wanna’ see you get hurt.” He replies, “Only one way to find out,” and takes the shot.
A day or two later Jimmy wakes up to hear someone shooting baskets in the hoop in front of his home. He goes out and sees his mother. They chat, she hands him a pile of recruitment letters, and remarks, “You gotta work on those free throws, though. Show me what you got, champ.” They start playing, talking trash.
Roll end credits.
* * * * *
While I don’t have a particular interest in sports films, I’ve watched a fair number of them. This is easily one of the best. Why? Think about those last two free throws and what the film had to do to support them.
Tuesday, November 12, 2024
Yudkowsky + Wolfram on AI Risk [Machine Learning Street Talk]
This is a long, rambling conversation (4 hours), so I have a hard time recommending the whole thing. I’d say that Wolfram and Yudkowsky do manage to find one another by the 4th hour (sections 6 & 7) and say some interesting things about computation and AI risk (much of the earlier conversation was on tangential matters). I note that the whole thing has been transcribed and there’s a Dropbox link for the conversation.
I will note that the conversation they did have was much better than what I had anticipated, which was a lot of talking past one another. And, yes, there was some of that, but as soon as that got going they worked hard at understanding what each was getting at.
Wolfram has some interesting remarks on computational irreducibility scattered throughout – that’s certainly one of his key concepts, and an important one. He also asserts, here and there, that he’s long been used to the idea that he faces computers smarter than he is; he also notes that he regards the universe as smarter than he is.
My sense is that the computation and AI risk stuff could be written up in a tight 2K words or so, but I don’t have any plans to make the attempt. That might be an exercise for a good student. Perhaps one of the current LLMs (Claude?) could do it.
TOC:
1. Foundational AI Concepts and Risks
[00:00:00] 1.1 AI Optimization and System Capabilities Debate
[00:06:46] 1.2 Computational Irreducibility and Intelligence Limitations
[00:20:09] 1.3 Existential Risk and Species Succession
[00:23:28] 1.4 Consciousness and Value Preservation in AI Systems
2. Ethics and Philosophy in AI
[00:33:24] 2.1 Moral Value of Human Consciousness vs. Computation
[00:36:30] 2.2 Ethics and Moral Philosophy Debate
[00:39:58] 2.3 Existential Risks and Digital Immortality
[00:43:30] 2.4 Consciousness and Personal Identity in Brain Emulation
3. Truth and Logic in AI Systems
[00:54:39] 3.1 AI Persuasion Ethics and Truth
[01:01:48] 3.2 Mathematical Truth and Logic in AI Systems
[01:11:29] 3.3 Universal Truth vs Personal Interpretation in Ethics and Mathematics
[01:14:43] 3.4 Quantum Mechanics and Fundamental Reality Debate
4. AI Capabilities and Constraints
[01:21:21] 4.1 AI Perception and Physical Laws
[01:28:33] 4.2 AI Capabilities and Computational Constraints
[01:34:59] 4.3 AI Motivation and Anthropomorphization Debate
[01:38:09] 4.4 Prediction vs Agency in AI Systems
5. AI System Architecture and Behavior
[01:44:47] 5.1 Computational Irreducibility and Probabilistic Prediction
[01:48:10] 5.2 Teleological vs Mechanistic Explanations of AI Behavior
[02:09:41] 5.3 Machine Learning as Assembly of Computational Components
[02:29:52] 5.4 AI Safety and Predictability in Complex Systems
6. Goal Optimization and Alignment
[02:50:30] 6.1 Goal Specification and Optimization Challenges in AI Systems
[02:58:31] 6.2 Intelligence, Computation, and Goal-Directed Behavior
[03:02:18] 6.3 Optimization Goals and Human Existential Risk
[03:08:49] 6.4 Emergent Goals and AI Alignment Challenges
7. AI Evolution and Risk Assessment
[03:19:44] 7.1 Inner Optimization and Mesa-Optimization Theory
[03:34:00] 7.2 Dynamic AI Goals and Extinction Risk Debate
[03:56:05] 7.3 AI Risk and Biological System Analogies
[04:09:37] 7.4 Expert Risk Assessments and Optimism vs Reality
8. Future Implications and Economics
[04:13:01] 8.1 Economic and Proliferation Considerations
Monday, November 11, 2024
Tom Dietterich on the current evolution of AI
Posted in a Substack conversation here:
An alternative view of what is happening is that we have been passing through three different phases of LLM-based development.
In Phase 1, "scaling is all you need" was the dominant view. As data, network size, and compute scaled, new capabilities (especially in-context learning) emerged. But each increment in performance required exponentially more data and compute.
In Phase 2, "scaling + external resources is all you need" became dominant. It started with RAG and toolformer, but has rapidly moved to include invoking python interpreters and external problem solvers (plan verifiers, wikipedia fact checking, etc.).
In Phase 3, "scaling + external resources + inference compute is all you need". I would characterize this as the realization that the LLM only provides part of what is needed for a complete cognitive system. OpenAI doesn't call it this, but we could view o1 as adopting the impasse mechanism of SOAR-style architectures. If the LLM has high uncertainty after a single forward pass through the model, it decides to conduct some form of forward search combined with answer checking/verification to find the right answer. In SOAR, this generates a new chunk in memory, and perhaps in OpenAI, they will salt this away as a new training example for periodic retraining. The cognitive architecture community has a mature understanding of the components of the human cognitive architecture and how they work together to achieve human general intelligence. In my view, they give us the best operational definition of AGI. If they are correct, then building a cognitive architecture by combining LLMs with the other mechanisms of existing cognitive architectures is likely to produce "AGI" systems with capabilities close to human cognitive capabilities.
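Here's a minimal sketch of how I read that Phase 3 loop: answer directly when the model is confident, otherwise spend inference compute on search plus verification. Every function below is a hypothetical stand-in, not OpenAI's actual mechanism.

```python
# Toy sketch of "impasse-driven" inference compute: one forward pass if
# confident, otherwise sample candidates and keep the best-verified one.
# generate() and verify() are hypothetical stand-ins.
import random

def generate(prompt: str) -> tuple[str, float]:
    """Stand-in for one forward pass: returns (answer, confidence)."""
    answer = random.choice(["A", "B", "C"])
    return answer, random.random()

def verify(prompt: str, answer: str) -> float:
    """Stand-in for an external checker (plan verifier, fact check, tests)."""
    return 1.0 if answer == "B" else 0.0

def solve(prompt: str, threshold: float = 0.9, budget: int = 16) -> str:
    answer, confidence = generate(prompt)
    if confidence >= threshold:
        return answer  # cheap path: the single pass was confident
    # Impasse: search over candidates and check them with the verifier.
    candidates = [generate(prompt)[0] for _ in range(budget)]
    return max(candidates, key=lambda a: verify(prompt, a))

print(solve("Which option is correct?"))
```

In SOAR terms, the result of resolving the impasse could then be "chunked" – salted away as a new training example for periodic retraining, as Dietterich suggests.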
Thursday, November 7, 2024
From Shakespeare to Trump @3QD
My latest post at 3 Quarks Daily: Shakespeare, the Starry Welkin, and Donald Trump. I’m playing around with the idea that, while we (whoever “we” are) regard Shakespeare as central to our culture, Elizabethan England is in fact a foreign culture. It may be in our cultural lineage, but it is still foreign. The language is somewhat different, as are the customs. The political system is quite different, a monarchy vs. a representative democracy. The economic system is quite different as well, though I don’t mention it in the post, nor do I mention the educational system.
I then work my way around to Henry IV, Part II, where I quote from the scene known as the rejection of Falstaff, which I’ve discussed in a number of posts at New Savanna, including Trump, Gibbs & NCIS, and the Queen @3QD. This is the scene where Falstaff approaches his good buddy, Hal, who has now become Henry V, and expects advancement at court. Hal rebuffs him, however, pointing out he is no longer the person he was when he was hanging out with Falstaff and the gang. Thus he distinguishes between his personal life, with its duties and obligations, and his public life, as king. He may be the same biological individual, but he is a different social individual.
That’s an important distinction, one that Trump does not make. He has in the past treated his position as president as his personal fiefdom, and presumably will do so in the future. That’s what corruption is about: the failure to distinguish between one’s private personal life and one’s obligation to an organization in which one holds a position.
That’s one thing that was on my mind. But there’s another, and this is somewhat different. How has Shakespeare influenced our culture and society? On the one hand, he has influenced subsequent writers and so his influence has diffused through the population in that way. That process has been going on for over four centuries.
But what about his current influence? How does that work? The plays continue to be taught in secondary schools and in colleges and universities, and they continue to be performed in various amateur and professional venues. There is a considerable scholarly establishment devoted to studying his work and publishing about it. There is a sizeable institutional structure built around him – do a search on “Shakespeare festival” and see what comes up. How does that influence our culture and society? Much of that seems to me ceremonial and symbolic in nature (I include the scholarship). There’s something going on here that’s more than ordinary influence. I’m not at all sure how that works.
More later.
Sunday, November 3, 2024
This election is different
Ezra Klein, There’s Something Very Different About Harris vs. Trump, NYTimes, Nov. 3, 2024.
To Democrats, the institutions that govern American life, though flawed and sometimes captured by moneyed interests, are fundamentally trustworthy. They are repositories of knowledge and expertise, staffed by people who do the best work they can, and they need to be protected and preserved.
The Trumpist coalition sees something quite different: an archipelago of interconnected strongholds of leftist power that stretch from the government to the universities to the media and, increasingly, big business and even the military. This network is sometimes called the Cathedral and sometimes called the Regime; Trump refers to part of it as the Deep State, Vivek Ramaswamy calls the corporate side “Woke Inc.” and JD Vance has described it as a grave threat to democracy.
There's more at the link.