World Models: The old, the new and the wishful #SundayHarangue
— Subbarao Kambhampati (కంభంపాటి సుబ్బారావు) (@rao2z) March 15, 2026
There is a lot of chatter about world models of late--even more than can be explained by Yann betting his entire new enterprise on it. I was going to comment on this clamor in my class this week, and thought I will… pic.twitter.com/22wWQDQdSw
NEW SAVANNA
“You won't get a wild heroic ride to heaven on pretty little sounds.”– George Ives
Wednesday, March 18, 2026
World models, some notes
Taking notes by hand is more effective than by laptop (?)
This is a 12-year-old study that has failed replication three times. And the underlying claim is still probably right.
— Aakash Gupta (@aakashgupta) March 18, 2026
The paper is Mueller and Oppenheimer, 2014. 67 students at Princeton. Longhand note-takers scored higher on conceptual questions. Became the most cited paper in… https://t.co/VXTNfQAuvt
Tuesday, March 17, 2026
Psychological Well-Being for Introverts (like me)
Dana G. Smith, Social Ties Help You Live Longer. What Does That Mean for Introverts? NYTimes, Oct. 9, 2025.
Considering all the research around socializing and longevity, some introverts can be forgiven for feeling doomed. People who have strong relationships generally live longer, and the unicorns known as “super-agers” — older adults who have the memory abilities of someone 20 years younger — tend to be especially outgoing. On the flip side, chronic loneliness raises the risk for cognitive decline and even early death.
But experts say it doesn’t take as much socializing to reap those longevity benefits as one might think, namely a few close ties and some everyday activities that facilitate contact with the wider world. It’s less about the sheer number of connections you have, and more about what those connections do for you.
In other words, introverts don’t need to be the life of the party to have a long and healthy life.
Our relationships contribute to health and longevity in a few critical ways: They provide emotional support, cognitive stimulation, care during times of crisis and motivation to have healthier habits. If your current relationships check those four boxes, you’re probably in pretty good shape. But if you’re missing one or two, it may be time to re-evaluate your social network.
Not everybody needs “the same amount of social activity,” said Dr. Ashwin Kotwal, an associate professor of medicine specializing in geriatrics at the University of California, San Francisco School of Medicine. “But getting some social activity is important.”
Meta-level Question: That article dates from October of 2025. So why did the Times serve it up to me in March of 2026? Is it serving that article up to everyone because it’s popular? Or am I getting it because I’ve got a social-media profile that says “introvert”? I have no trouble imagining that it’s the latter, but I don’t really know. Certainly anyone who actually reads my blog will figure out that I’m an introvert, but I have no trouble imagining that that could be inferred more indirectly.
There’s more at the link.
Now you can run a 100B parameter LLM on your laptop
Holy shit... Microsoft open sourced an inference framework that runs a 100B parameter LLM on a single CPU.
— Nainsi Dwivedi (@NainsiDwiv50980) March 16, 2026
It's called BitNet. And it does what was supposed to be impossible.
No GPU. No cloud. No $10K hardware setup. Just your laptop running a 100-billion parameter model at… pic.twitter.com/hsEoNVw49V
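The arithmetic behind that claim is worth a quick sketch. BitNet-style models quantize weights to ternary values ({-1, 0, +1}), about 1.58 bits per weight, which is what shrinks a 100B-parameter model from the ~200 GB an FP16 copy would need down to roughly 20 GB — small enough for a well-equipped laptop's RAM. Here is a minimal back-of-envelope estimate (weights only; activations, KV cache, and runtime overhead are ignored, and the precisions compared are illustrative, not BitNet's published numbers):

```python
# Rough weights-only memory footprint for a 100B-parameter model
# at different weight precisions.
params = 100e9

for name, bits_per_weight in [
    ("FP16", 16),
    ("INT8", 8),
    ("ternary, ~1.58-bit (BitNet-style)", 1.58),
]:
    gigabytes = params * bits_per_weight / 8 / 1e9
    print(f"{name}: ~{gigabytes:.0f} GB")
# FP16 needs ~200 GB; the ternary encoding needs ~20 GB.
```

The ~10x shrink relative to FP16, plus integer-friendly kernels, is what makes CPU-only inference plausible at all.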
Monday, March 16, 2026
The brain's dopamine response to music peaks in the mid-teens
Your brain peaked musically somewhere around age 16. Everything since then has been a dopamine echo.
— Aakash Gupta (@aakashgupta) March 16, 2026
Between the ages of 12 and 22, the mesolimbic dopamine pathway, the same circuit that processes cocaine and sex, fires at levels in response to sound that it will never reach… https://t.co/QUxgPiRxps pic.twitter.com/HTsZKp7Ol3
Sunday, March 15, 2026
On the relevance of intellectual history for understanding present events (AI)
Jim Olds, The Chronology Problem, Mar. 12, 2026.
We are surprisingly bad at knowing when things began.
I’ve been thinking about this for a while, partly because I lived through several of the transitions we now misremember. In 1987, I used the Internet for early text-based email, file transfers, and reaching colleagues at other universities. In August of 1991, in the face of an impending direct hit of Hurricane Bob, I moved all of my image data from Woods Hole to NIH in Bethesda in a matter of minutes. This was entirely unremarkable at the time. And yet when I mention it today, people often look mildly startled, as if I’ve claimed to have owned a smartphone in 1987. In their minds, the Internet began sometime around 1994 or 1995, when the Web arrived and made it visible to everyone. Before that, apparently, there was nothing.
Olds then goes on to say more about the (deep) origins of the web, artificial intelligence, climate science, and economics. Here's what he had to say about AI:
The field of artificial intelligence may be the most dramatic case study in collective chronological confusion we have. Most people who interact with today’s language models and image generators believe they are witnessing something genuinely unprecedented — a technology that sprang into being sometime around 2017. What happened is more complicated and more interesting.
The mathematical foundations for neural networks were laid in 1943, when Warren McCulloch and Walter Pitts published a paper describing how neurons could, in principle, compute logical functions. Frank Rosenblatt simulated a working perceptron at the Cornell Aeronautical Laboratory in 1958 — a system that could learn from examples. The 1986 backpropagation paper by Rumelhart, Hinton, and Williams, which most practitioners treat as a founding document, was itself a rediscovery and refinement of ideas that had been circulating since the early 1970s. Yann LeCun was training convolutional neural networks to read handwritten digits for the U.S. Postal Service in 1989. The architecture underlying those systems is recognizably the ancestor of what powers modern computer vision.
None of this was secret. It was published, presented, and in some cases deployed in real systems. What happened instead was a kind of institutional forgetting, accelerated by two “AI winters” — periods when funding dried up, interest collapsed, and computer science turned its attention elsewhere. Researchers who had spent careers on neural approaches moved on or retired. Graduate students who might have built on their work were instead trained in other paradigms. When the hardware finally caught up with the ambitions of the 1980s, around 2012, the rediscovery felt like a revolution. In some ways, it was. But the conceptual foundations were not new, and the people who had laid them got less credit than they deserved, partly because so many of the field’s new practitioners didn’t know they existed.
The practical cost here is the same as elsewhere: repeated investment in problems that had already been partially solved, frameworks that were novel mainly to their authors, and a set of origin myths that flatter the present at the expense of the past. The deeper cost is that we don’t understand what was tried and discarded and why — which algorithms were abandoned for reasons of computational expense rather than theoretical inadequacy, and which might be worth revisiting now that the expense has fallen.
To Olds’s list I would add Miriam Yevick’s 1975 paper, Holographic or Fourier logic, published in Pattern Recognition. Unfortunately that paper got lost because it fit into neither cognitive science nor artificial intelligence. What she proved was that for one class of visual objects, those with complex geometry, neural networks provide the better computational regime, while for another class, those with simple geometry, symbolic computation provides the better regime. That has a direct bearing on the current debate over whether or not new architectures involving symbolic processing are necessary.
Saturday, March 14, 2026
What electrochemical machine has 100 trillion connections in a volume the size of a cantaloupe?
That one neuron connects to about 7,000 others. Your brain has 86 billion of them. Do the math and you get somewhere around 100 trillion connections inside your head. More connections than stars in 1,500 galaxies.
— Anish Moonka (@AnishA_Moonka) March 14, 2026
And each connection point is way more complicated than anyone… https://t.co/sUkcS7T3rA
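For what it's worth, the tweet's arithmetic can be checked in a couple of lines. Note that 86 billion neurons at 7,000 synapses each actually multiplies out to about 600 trillion, not 100 trillion; the commonly cited ~100 trillion figure corresponds to an average closer to 1,000 synapses per neuron. A quick sketch (both per-neuron averages are rough literature values, not measurements):

```python
# Back-of-envelope synapse counts for the human brain.
neurons = 86e9  # commonly cited neuron count

for synapses_per_neuron in (7_000, 1_000):
    total = neurons * synapses_per_neuron
    print(f"{synapses_per_neuron:>5} synapses/neuron -> {total:.1e} connections")
# 7,000 per neuron gives ~6.0e14 (600 trillion);
# 1,000 per neuron gives ~8.6e13 (roughly the "100 trillion" figure).
```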
The profession of literary criticism as I have observed it over the course of 50 years [& related matters]
Updated 6.23.17.
In the course of thinking about my recent rejection at New Literary History I found myself, once again, rethinking the evolution of the profession as I’ve seen it from the 1960s to the present. In fact, that rejection has led me, once again, to rethink that history and to change some of my ideas, particularly about the significance of the 1970s.
“NATURALIST” criticism, NOT “cognitive” NOT “Darwinian” – A Quasi-Manifesto
March 31, 2010 (originally at The Valve)
https://new-savanna.blogspot.com/2011/06/naturalist-criticism-not-cognitive-not.html
I declare my commitment to ‘naturalist’ literary criticism, thereby denying ‘cognitive criticism,’ with which I had associated myself for years, and ‘Darwinian criticism,’ with which I had never associated myself. Takes the form of a loose dialog.
Lévi-Strauss and Myth: Some Informal Notes
(2007-2011)
Three for 3QD: Man-Machine Collaboration, E.T. the Extra-Terrestrial, American Heartbreak in Jersey City
Generally when I post an article to 3 Quarks Daily I will follow up with a post here at New Savanna linking to the 3QD piece and extending or commenting on that argument in some way. However, as I’ve indicated in this post, Coming out of melancholy, again, from Feb. 5, I went into psychological hibernation (aka melancholy) last September. While I did manage to post to 3QD during that period, I didn’t post notices here at the Savanna. Here are those notices, belatedly.
* * * * *
Some Hybrid Remarks on Man-Machine Collaboration, September 12, 2025.
That essay touches on a number of things: 1) LLMs as cultural technology, 2) my Fourth Arena concept, 3) Latour on the (false) distinction between nature and culture, and 4) the issue of proper attribution for hybrid essays (essays in which an AI played a significant role). Between 3 and 4 I inserted an essay by ChatGPT.
* * * * *
E.T. the Extra-Terrestrial: Into the Bopi with Steven Spielberg, Oct. 12, 2025.
It’s what its title suggests, an essay review of Spielberg’s film. The film is staged as a science fiction story, but is that really what it is, science fiction? From the article:
On the whole, my sense is that, in making this film Spielberg ventured into the bopi. And just what, pray tell, is that? I have the term from my friend, Charlie Keil. Early in his career he did fieldwork among the Tiv of Nigeria. The bopi is an area that’s set aside for children’s play. Moreover, adults are forbidden to enter the bopi. [...] But the whole film feels like an imaginative bopi. It’s a kid-centric world in which adults are an intrusive presence. [...]
Ultimately the story of E.T. seems to be almost an allegory or metaphor for art itself, a zone apart from the world into which we move to revivify and reconstruct.
* * * * *
American Heartbreak: The ‘Urban Design Studio’ in Jersey City, Nov. 3, 2025.
This is a photo essay about a graffiti site in Jersey City, now demolished.
Friday, March 13, 2026
Friday Fotos: What’s a tablescape?
Just what the name suggests, like a landscape only on a table top.
I’ve been taking photos of my meals and posting them here since September of 2018. Since I eat my meals indoors where they’re arrayed on the top of a table, my food photos will often, though not necessarily always, catch the table itself. What I mean by a tablescape, however, is more specialized. Here’s a recent tablescape:
To get that shot I set the camera on the table, pointed it in an appropriate direction, and snapped the shutter. That means your point of view is about an inch or an inch-and-a-half above the table itself and that your angle of regard is parallel to the tabletop. Just as landscapes are photographed or painted from some location on the land itself, so a tablescape is photographed from a location on the table itself.
This, in contrast, is NOT a tablescape, though a table is quite visible:
But I’m holding my camera in my hand so that I can get a particular shot. In this case, I’m interested in how the glimpse of the placemat you see through the carafe is displaced relative to what you see through the air (due to optics). Couldn’t get that shot in a tablescape.
There is an element of chance in taking a tablescape. Why? Because you can’t line up the photo in the normal way, either through a viewfinder or viewscreen. I suppose you could try, but it would be difficult, not terribly successful and not worth the effort. Fact is, the chance element is one reason for taking tablescapes. You don’t quite know what you’re going to see.
I doubt that I would ever have deliberately taken this photo, but as a tablescape it’s fine.
Or is it? And that’s why I’m taking these shots, to force me to think about each photo. Is this an image I want to keep, to show to others, why? When you shoot, say, the Empire State Building, which I can do quite easily, those questions don’t arise quite so insistently. Why not? Because the Empire State Building is an iconic structure and, as such, is certified photo-worthy. Shooting certified photo-worthy subjects is a no-brainer.
Here’s another tablescape:
Notice the expanse of the table itself in the foreground. That’s a typical feature of table shots. I cropped most of the table out in that first shot up there. Notice also that in the first shot I’m pointing out the window, where you see street lights in Hoboken and tall buildings in Manhattan. This shot, in contrast, is pointed at the interior of the restaurant. Look at the upper left: it looks like the shadows of some plants cast on the side of a column. What plants, where? Notice the reflection of the carafe on the shiny surface of the table. There are shadows cast across the table as well.
Here’s one last tablescape, without comment: