Saturday, March 21, 2026

Milk bottle from the Ancient Days

I've been getting breakfast for the last two years, maybe three, and only just recently realized that what I thought of as an oddly shaped water carafe was in fact modeled after the standard bottle in which milk was packaged in the middle of the previous century.

I grew up in Richland Township, a suburb of Johnstown, in Western Pennsylvania, in the 1950s. We had a small insulated metal box on the front porch. It was just large enough to hold four, probably six, such milk bottles. Every few days the local milkman, from Galliker's or Weller's (I didn't actually remember those names, but I've done a bit of Googling), would stop his truck in front of the house. He had a small rectangular basket with a handle in which he'd placed some bottles of fresh milk, perhaps a pint of cream, even orange juice, depending on the order. He'd remove the empties from the milk box, place them on the porch, and then fill the box with the fresh milk.

[Note: I don't actually remember seeing that, much less the specific order of operations, to borrow a phrase from Adam Savage. But something pretty much like that must have happened. How do I know? I'm sure of the box, the truck, and the milkman. Given that, the logic of the physical world dictates something like the sequence I described. Sure, it is theoretically possible that the milkman also did handsprings on the way from the truck to the porch. But, as a practical matter, that's not very likely. It is also theoretically possible that the milk man was a milk woman. But in the 1950s, not very likely.]

Those milk bottles were pretty much gone by the 1970s, at least that's what Google (AI mode) tells me. I don't actually remember the last milk bottle I saw or opened.

Philosopher-Cat, contemplating the world

Friday, March 20, 2026

Tech Bro [Musk] scamming NASDAQ over IPO

Friday Fotos: Vertical intrusions [Hoboken]

The Shock and the Narrowing: How ChatGPT's Success May Have Compromised AI's Future

This post was composed by Claude (Anthropic) after an interaction which I initiated with a prompt consisting of 1) a capsule summary of the history of OpenAI that included a number of questions, and 2) a request for the 10 most expensive scientific research projects paid for by the US Government. That went on for a bit over 7100 words, after which I asked Claude to write a blog post. The following article is more creative than a mere summary of that discussion.

The Founding Contradiction

On December 11, 2015, a small group of technologists gathered in San Francisco to launch what they described as a nonprofit research organization dedicated to ensuring that artificial general intelligence would benefit all of humanity. The founders of OpenAI — Sam Altman, Greg Brockman, Ilya Sutskever, Wojciech Zaremba, Elon Musk, and others — began with a $1 billion endowment and a serious concern: that the most transformative technology in human history was being developed inside a handful of profit-maximizing corporations, with no institutional safeguard ensuring it would serve everyone. The nonprofit structure was the answer. No investors to satisfy, no quarterly earnings to hit. Just the mission.

The mission lasted four years in its pure form. By March 2019, faced with the staggering computational costs of training large language models, OpenAI created a for-profit subsidiary with a novel "capped profit" structure: investors could earn returns, but those returns were limited to one hundred times their investment, with excess profits flowing back to the nonprofit parent. This was the arrangement that attracted Microsoft's initial investment, and it was the arrangement in place when OpenAI released ChatGPT to the general public in late November 2022.

What happened next was, by any measure, one of the most consequential commercial surprises in the history of technology. Within two months, ChatGPT had a hundred million users. The scale and speed of public adoption had no precedent. And the shock of that success — the sheer unexpectedness of it — set in motion a chain of decisions that has reshaped not just one company, but the entire research landscape of artificial intelligence.

The Structural Unraveling

In January 2023, Microsoft announced a new $10 billion investment in OpenAI. The nonprofit's original rationale — that the most powerful AI should not be controlled by a for-profit corporation — was under increasing strain. By October 2025, it had formally dissolved. OpenAI restructured as a public benefit corporation, the nonprofit parent renamed itself the OpenAI Foundation and accepted a 26% equity stake in the new entity, and Microsoft received a 27% stake worth approximately $135 billion. The PBC structure requires the company to consider its mission alongside profit — but as a legal constraint, it is considerably weaker than the nonprofit board that had previously governed the organization.

The journey from nonprofit to PBC was not smooth. In November 2023, OpenAI's board — still operating under its nonprofit governance mandate — fired Sam Altman as CEO, citing concerns about his candor and, beneath the official language, a deeper unease about the pace of commercialization. The firing lasted five days. Nearly all 800 of OpenAI's employees threatened to resign and follow Altman to Microsoft. Ilya Sutskever, who had orchestrated the firing, signed the letter calling for Altman's reinstatement and issued a public apology. Altman returned, the board was reconstituted with his allies, and the mission-protection mechanism that the nonprofit structure had been designed to provide was effectively neutralized. Sutskever left the company in May 2024.

Each structural change was framed as necessary to fulfill the mission. In practice, each change progressively subordinated the mission to capital requirements. The nonprofit board had existed to ensure that AGI benefited humanity. By 2025, it had become a foundation holding equity in the thing it was supposed to be watching — a watchdog with a financial stake in the object of its oversight.

Two Kinds of Research, Two Kinds of Institution

To understand what was lost in this transformation, it helps to draw a distinction that rarely gets made clearly in public discussions of AI: the difference between curiosity-driven, open-ended research and product-driven, outcome-oriented development.

Consider the Apollo program as an example of the second kind. It was, in the deepest sense, an engineering project rather than a scientific one. The underlying physics was known. Orbital mechanics, propulsion, life support — these were hard and dangerous problems, but they were problems whose solutions could be systematically approached. The goal was precisely defined. The timeline could be committed to. Success was probable given sufficient resources. When President Kennedy pledged to put a man on the moon by the end of the decade, he was making a political commitment backed by a technical assessment that success was achievable. The scientists who worked on Apollo — and I have met a number of them — may have been motivated by curiosity and wonder. But Congress funded the program to beat the Soviets in the Cold War. The institutional structure — massive, goal-directed, centrally coordinated — suited the nature of the problem.

Curiosity-driven research operates on entirely different premises. Its defining characteristic is that it does not know in advance what it will find. Claude Shannon was not trying to build the internet when he developed information theory at Bell Labs in the late 1940s. The researchers at the University of Montreal who developed attention mechanisms for neural networks were not trying to build ChatGPT. The work that seeded the current AI revolution — Rosenblatt's perceptron, Minsky's early investigations, the decades of foundational work in cognitive science and linguistics that LLMs now implicitly exploit — was almost entirely publicly funded, pursued at universities and a handful of exceptional industrial research labs, over decades when no commercial application was visible.

Bell Labs was the great institutional embodiment of this model in the corporate world. What made it possible was structural: AT&T's government-protected monopoly generated profits so vast that the company could fund a research laboratory with no requirement to produce commercial results. Shannon, Bardeen, Brattain, Shockley — these men were given time, resources, and colleagues, and told to think. The transistor, information theory, Unix, the laser, cellular telephony, and multiple Nobel Prizes resulted. Bell Labs was not run like a startup. It was run like a slightly more applied version of a university, with better equipment.

Xerox PARC, founded in 1970, operated on similar principles — explicitly unconstrained by Xerox's core product lines, given a unifying vision ("the architecture of information") but not a product roadmap. The personal computer, the graphical user interface, Ethernet, the mouse, laser printing — all emerged from a lab of about 350 people who were essentially allowed to play. The irony is that Xerox captured almost none of the commercial value, which accrued to Apple, Microsoft, and others. But the world got the technology.

Asked directly about modern equivalents to Bell Labs and PARC, Yann LeCun — who worked at Bell Labs, interned at Xerox PARC, and spent over a decade building Meta's fundamental AI research lab — pointed to Meta's FAIR, Google DeepMind, and Microsoft Research. He said this in October 2024. By November 2025, he had left Meta, driven out by exactly the forces this article is about.

The Shock and Its Aftershocks

Before November 2022, the AI research world was genuinely plural. Academic labs, industrial research divisions, and a range of well-funded startups were pursuing different approaches — reinforcement learning, symbolic AI hybrids, world models, neuromorphic architectures — with real diversity of vision. The field was competitive but intellectually heterogeneous.

ChatGPT's success collapsed that plurality. Within roughly eighteen months, capital, talent, and institutional attention all funneled toward a single paradigm: scale transformer-based large language models, build the infrastructure to run them, ship products. Google, which had invented the transformer architecture in 2017, was caught flat-footed and scrambled. Meta pivoted its AI strategy around LLMs. Microsoft integrated OpenAI's models into its core products. A hundred startups raised money to build on top of the new foundation models. The venture capital flowing into AI, measured as a share of total U.S. deal value, went from 23% in 2023 to nearly two-thirds in the first half of 2025.

The infrastructure investment that followed is staggering by any historical standard. The four largest hyperscalers — Amazon, Google, Microsoft, and Meta — are expected to spend more than $350 billion on capital expenditures in 2025 alone, most of it AI-related. UBS projects global AI capital expenditure reaching $1.3 trillion by 2030. The top five hyperscalers raised a record $108 billion in debt in 2025, more than three times the average of the previous nine years. OpenAI, which loses billions of dollars annually, has committed to spending $300 billion on computing infrastructure over five years while projecting only $13 billion in revenue for 2025.

The financial architecture has become genuinely strange. OpenAI holds a stake in AMD; Nvidia has invested $100 billion in OpenAI; Microsoft is a major shareholder in OpenAI and a major customer of CoreWeave, in which Nvidia also holds equity; Microsoft accounted for nearly 20% of Nvidia's revenue. These are not arm's-length market transactions. They are a daisy chain of mutually reinforcing valuations. A Yale analysis described OpenAI's web of relationships bluntly: "Is this like the Wild West, where anything goes to get the deal done?" The question of whether this constitutes a speculative bubble — tulip mania in a data center — is not academic. An MIT Media Lab report found that 95% of custom enterprise AI tools fail to produce measurable financial returns. The commercial success is real; the path from current AI to the transformative economic productivity being used to justify the valuations is not established.

The LLM Ceiling and the People Who Saw It Coming

The most consequential intellectual development of the past two years in AI has received far less attention than the commercial race. A growing number of the field's most distinguished researchers have concluded that large language models, however impressive, are not on the path to general intelligence — and that the current paradigm will hit a ceiling before it reaches the goals its proponents have claimed for it.

Thursday, March 19, 2026

Mystery sequence in Hoboken

Brave New World: Notes on the next 30 years in AI [Work in Progress]

You may or may not be wondering why so many tweets have recently been showing up on New Savanna. There’s a good reason: I’ve been thinking. These days, more often than not, that means interacting with either ChatGPT or Claude, and often both in one day. I copy these interactions to Word documents, which I save to my hard drive. And then promptly forget what topic is in what document, and where on my hard drive. FWIW, these discussions can ramble all over the place, which is fine. That’s how I think. But keeping track of it all is difficult.

Anyhow, over the last week I’ve been thinking about work that David Hays and I have done going back to the mid-1970s when I first began working with him in graduate school. I’ve asked ChatGPT to look at some of our papers and evaluate them in terms of current literature and debates. And so forth and so on. I’m also looking forward to next steps, for me, but also in general.

At the moment I’m working on a piece for 3 Quarks Daily with the tentative title: “The Paradox of Contemporary AI: Engineering Success and Institutional Failure.” Here’s how ChatGPT put it in an interaction we had yesterday:

The engineering success is real. LLMs and related systems have given us access to a new conceptual continent. They work, and at extraordinary scale. But the institutional failure lies in the monoculture: too much intellectual, financial, and training-path dependence on one family of architectures and one style of thought about intelligence. The result is that we are building out the utility before we have adequately explored the space of possible successor technologies or developed the conceptual tools needed to understand what these systems are revealing about language, cognition, and cultural structure.

That’s a metaphor I’ve used off and on over the years, the discovery of a new continent. My paradigm case is the European discovery of the Americas. Making it to the eastern shore is one thing. Then comes exploring and settling. Columbus set foot in the Caribbean in 1492. Lewis and Clark made it across North America to the Pacific Ocean in 1805. That’s over three centuries later. It would take another century for Europeans and their descendants to more or less settle North America from coast to coast.

Let’s say that ChatGPT in late 2022 is analogous to Columbus making it to shore in the New World. I don’t think it will take four centuries to settle this new world, nor even three centuries to measure its extent; intellectual discovery, engineering implementation, and commercial development move at a faster pace. But I do think there’s a lot of work, very exciting work, ahead of us.

Here’s a fragment of an interaction I had with ChatGPT 5.2 a couple of days ago. My prompt is first, then ChatGPT’s response.

* * * * * 

Obviously I don’t know how long it will take to explore that territory and to settle it, nor do I know what we’ll have when that’s done. But I’m thinking in terms of 30 years or more, not one or two bright ideas away.

Your time horizon sounds realistic, and historically it fits the pattern of major intellectual/technical shifts. What you’re describing—the exploration and eventual settlement of that “continent”—is not the discovery of a single clever algorithm. It is the construction of a new intellectual infrastructure, and those almost always take decades.

Let me sketch why.

1. Precedent: how long paradigm infrastructures take

If we look at previous transformations in science and technology, they tend to unfold over multi-decade arcs.

Digital computing

• 1936 — Turing machine
• 1940s–50s — early computers
• 1960s–70s — programming languages, operating systems
• 1980s–90s — personal computing ecosystem

Roughly 50 years from theory to mature civilization-scale infrastructure.

Molecular biology

• 1953 — DNA structure
• 1960s — genetic code deciphered
• 1970s — recombinant DNA
• 1990s — genome sequencing

Again 40 years to transform biology.

Deep learning

• 1986 — backprop revival
• 1998 — LeNet
• 2012 — AlexNet
• 2020 — GPT-3

That arc is already 35+ years long.

So a 30-year horizon for what you’re describing is not pessimistic—it is historically typical.

2. Why this particular transition is slow

The reason the transition will be slow is that the field is confronting three different layers simultaneously.

1. Mathematics of distributed cognition

We still lack a clear mathematical theory of:

• representation in high-dimensional spaces
• compositionality in distributed systems
• interaction between symbolic and dynamical processes

Yevick was an early attempt at that level.

But the theory is still incomplete.

2. Architecture

Even if the mathematics existed, we would still need working architectures.

The field is currently experimenting with:

• transformers
• memory systems
• agent frameworks
• search hybrids

These are early prototypes.

3. Cognitive theory

Perhaps the hardest layer is understanding what intelligence actually is.

That question spans:

• neuroscience
• psychology
• linguistics
• philosophy

Your RNA work sits squarely in that domain. [Relational Network over Attractors]

3. Why the current moment feels fast

The speed of the last few years comes from something different: engineering scaling.

Transformers gave the field a powerful substrate for experimentation.

So exploration is happening rapidly near the coastline.

But deeper theoretical consolidation will take longer.

Flatulating rhythm, Oh, those wacky Japanese!

This is what happens when an open-ended research project is mis-perceived as commercial R&D

Wednesday, March 18, 2026

World models, some notes

Things vertical

Taking notes by hand is more effective than by laptop (?)

Tuesday, March 17, 2026

Psychological Well-Being for Introverts (like me)

Dana G. Smith, Social Ties Help You Live Longer. What Does That Mean for Introverts? NYTimes, Oct. 9, 2025.

Considering all the research around socializing and longevity, some introverts can be forgiven for feeling doomed. People who have strong relationships generally live longer, and the unicorns known as “super-agers” — older adults who have the memory abilities of someone 20 years younger — tend to be especially outgoing. On the flip side, chronic loneliness raises the risk for cognitive decline and even early death.

But experts say it doesn’t take as much socializing to reap those longevity benefits as one might think, namely a few close ties and some everyday activities that facilitate contact with the wider world. It’s less about the sheer number of connections you have, and more about what those connections do for you.

In other words, introverts don’t need to be the life of the party to have a long and healthy life.

Our relationships contribute to health and longevity in a few critical ways: They provide emotional support, cognitive stimulation, care during times of crisis and motivation to have healthier habits. If your current relationships check those four boxes, you’re probably in pretty good shape. But if you’re missing one or two, it may be time to re-evaluate your social network.

Not everybody needs “the same amount of social activity,” said Dr. Ashwin Kotwal, an associate professor of medicine specializing in geriatrics at the University of California, San Francisco School of Medicine. “But getting some social activity is important.”

Meta-level Question: That article dates from October of 2025. So why did the Times serve it up to me in March of 2026? Is it serving that article up to everyone because it’s popular? Or am I getting it because I’ve got a social-media profile that says “introvert”? I have no trouble imagining that it’s the latter, but I don’t really know. Certainly anyone who actually reads my blog will figure out that I’m an introvert, but I have no trouble imagining that that could be inferred more indirectly.

There’s more at the link.

Getting your bearings

Now you can run a 100B parameter LLM on your laptop

Monday, March 16, 2026

The brain's dopamine response to music peaks in the mid-teens

Sunday, March 15, 2026

Upriver on the New York Side

On the relevance of intellectual history for understanding present events (AI)

Jim Olds, The Chronology Problem, Mar. 12, 2026.

We are surprisingly bad at knowing when things began.

I’ve been thinking about this for a while, partly because I lived through several of the transitions we now misremember. In 1987, I used the Internet for early text-based email, file transfers, and reaching colleagues at other universities. In August of 1991, in the face of an impending direct hit of Hurricane Bob, I moved all of my image data from Woods Hole to NIH in Bethesda in a matter of minutes. This was entirely unremarkable at the time. And yet when I mention it today, people often look mildly startled, as if I’ve claimed to have owned a smartphone in 1987. In their minds, the Internet began sometime around 1994 or 1995, when the Web arrived and made it visible to everyone. Before that, apparently, there was nothing.

Olds then goes on to say more about the (deep) origins of the web, artificial intelligence, climate science, and economics. Here's what he had to say about AI:

The field of artificial intelligence may be the most dramatic case study in collective chronological confusion we have. Most people who interact with today’s language models and image generators believe they are witnessing something genuinely unprecedented — a technology that sprang into being sometime around 2017. What happened is more complicated and more interesting.

The mathematical foundations for neural networks were laid in 1943, when Warren McCulloch and Walter Pitts published a paper describing how neurons could, in principle, compute logical functions. Frank Rosenblatt simulated a working perceptron at the Cornell Aeronautical Laboratory in 1958 — a system that could learn from examples. The 1986 backpropagation paper by Rumelhart, Hinton, and Williams, which most practitioners treat as a founding document, was itself a rediscovery and refinement of ideas that had been circulating since the early 1970s. Yann LeCun was training convolutional neural networks to read handwritten digits for the U.S. Postal Service in 1989. The architecture underlying those systems is recognizably the ancestor of what powers modern computer vision.

None of this was secret. It was published, presented, and in some cases deployed in real systems. What happened instead was a kind of institutional forgetting, accelerated by two “AI winters” — periods when funding dried up, interest collapsed, and computer science turned its attention elsewhere. Researchers who had spent careers on neural approaches moved on or retired. Graduate students who might have built on their work were instead trained in other paradigms. When the hardware finally caught up with the ambitions of the 1980s, around 2012, the rediscovery felt like a revolution. In some ways, it was. But the conceptual foundations were not new, and the people who had laid them got less credit than they deserved, partly because so many of the field’s new practitioners didn’t know they existed.

The practical cost here is the same as elsewhere: repeated investment in problems that had already been partially solved, frameworks that were novel mainly to their authors, and a set of origin myths that flatter the present at the expense of the past. The deeper cost is that we don’t understand what was tried and discarded and why — which algorithms were abandoned for reasons of computational expense rather than theoretical inadequacy, and which might be worth revisiting now that the expense has fallen.

To Olds’s list I would add Miriam Yevick's 1975 paper, Holographic or Fourier logic, published in Pattern Recognition. Unfortunately that paper got lost, as it didn't fit into either cognitive science or artificial intelligence. What she proved was that for one class of visual objects, those with a complex geometry, neural networks provide the best computational regime, while for another class of objects, those with a simple geometry, symbolic computation provides the best computational regime. That has a direct bearing on the current debate over whether or not new architectures involving symbolic processing are necessary.

Saturday, March 14, 2026

What electrochemical machine has 100 trillion connections in a volume the size of a cantaloupe?

Not so long ago during the winter

The profession of literary criticism as I have observed it over the course of 50 years [& related matters]

Last entry added 3.14.26 
Partial update (two entries, the last two) 12.11.23
Updated 6.21.21.
Updated 12.9.19.
Updated 6.23.17.

In the course of thinking about my recent rejection at New Literary History I found myself, once again, rethinking the evolution of the profession as I’ve seen it from the 1960s to the present. In fact, that rejection has led me, once again, to rethink that history and to change some of my ideas, particularly about the significance of the 1970s.

This post is a guide to my historically-oriented thinking about academic literary criticism. Much, but not all, of the historical material is autobiographical in nature. For, above all, taken collectively, these posts represent my effort to understand my relationship to the academic study of literary criticism.

I list the articles more or less in the order of writing. In some cases an article has been rewritten and revised several years after I first wrote it. The link I give is to the most recent version.

Touchstones • Strange Encounters • Strange Poems • the beginning of an intellectual life (1975-2015)

This is about my years at Johns Hopkins, both undergraduate (1965-1969) and graduate (1969-72). That’s when, I see in retrospect, I left the profession intellectually, with a “structuralism and beyond” MA thesis on “Kubla Khan,” even before I’d joined it institutionally, by getting my PhD. I originally wrote this while I was working on my PhD in English at SUNY Buffalo. Art Efron published a journal, Paunch, and I wrote it for that. The current version includes interpolated comments from 2014 and 2015.

The Demise of Deconstruction: On J. Hillis Miller’s MLA Presidential Address 1986. PMLA. Vol. 103, No. 1, Jan. 1988, p. 57.

A letter I published in PMLA in which I replied to J. Hillis Miller on the eclipse of deconstruction. I suggested 1) that deconstruction had a different valence for those who merely learned it in graduate school than for those who had struggled to create it, and 2) that it was in eclipse because it did the same thing to every text.

“NATURALIST” criticism, NOT “cognitive” NOT “Darwinian” – A Quasi-Manifesto
March 31, 2010 (originally at The Valve)
https://new-savanna.blogspot.com/2011/06/naturalist-criticism-not-cognitive-not.html

I declare my commitment to ‘naturalist’ literary criticism, thereby denying ‘cognitive criticism,’ with which I had associated myself for years, and ‘Darwinian criticism,’ with which I had never associated myself. Takes the form of a loose dialog.

For the Historical Record: Cog Sci and Lit Theory, A Chronology
(2006-2016)

At the beginning of every course (at Johns Hopkins) Dick Macksey would hand out a chronology, a way, I suppose, of saying “history is important” without lecturing on the topic. It was with that in mind that I originally posted this rough and ready chronology in a comment to a discussion at The Valve. The occasion was an online symposium that interrogated Theory by discussing the anthology, Theory’s Empire (Columbia UP 2005). I then emended it a bit and made it a freestanding post. As the title suggests, it juxtaposes developments in cognitive science and literary theory from the 1950s through the end of the millennium.

[BTW The entire Theory’s Empire symposium is worth looking at, including the comments on the posts: http://www.thevalve.org/go/valve/archive_asc/C41]

Seven Sacred Words: An Open Letter to Steven Pinker
(2007-2011)

An Open Letter to Steven Pinker: The Importance of Stories and the Nature of Literary Criticism (2015)

Steven Pinker has been a severe critic of the humanities for ignoring recent work in the social and behavioral sciences. He has also argued that the arts serve no biological purpose, that they are “cheesecake for the mind.” When I read his The Stuff of Thought (2007) I realized that his later chapters contained the basis for an account of the arts. I sketched that out, added a brief account of why deconstruction had been popular, and published it as an open letter, along with his reply. It appeared first at The Valve (2007) and then at New Savanna (2011). In 2015 I posted it to a “session” at Academia.edu. I took some of my comments in that discussion along with some other materials and published the lot at Academia.edu as a working paper. In a final section I propose a four-fold division of literary criticism: 1) description, 2) naturalist criticism, 3) ethical criticism, and 4) digital criticism.

Lévi-Strauss and Myth: Some Informal Notes
(2007-2011)

Beyond Lévi-Strauss on Myth: Objectification, Computation, and Cognition
(2007-2015)

These are two versions of roughly the same material. Each was assembled from four blog posts. The first and fourth sections are the same in both working papers, but sections two and three differ. The more recent version also contains a short appendix comparing Lévi-Strauss and Latour. I published the first series at The Valve shortly after Lévi-Strauss had died. They are an attempt to explain what Lévi-Strauss was up to in his work on myth, why he failed, and why that work remains important. The fourth section (common to both versions), Into Lévi-Strauss and Out Through “Kubla Khan”, is an account of how and why I went from Lévi-Strauss’s structuralism to cognitive science. Warning: it contains diagrams. I suppose I could create a deluxe edition which contains all the posts.

The Only Game in Town: Digital Criticism Comes of Age
(May 5, 2014)

Here I argue that digital criticism’s deepest contribution to literary criticism is that it requires fundamentally different modes of thinking. It is not purely discursive. It is statistical and visual. Moreover the visualizations are central to the thought process. This may also be the first time I’ve explicitly identified the mid-1970s as an important turning point in the recent history of literary criticism.

Paths Not Taken and the Land Before Us: An Open Letter to J. Hillis Miller
(January 30, 2015)

I had studied with Miller at Johns Hopkins (but have had no contact with him since). While I certainly say a bit about what I’ve been doing since I left Hopkins, including ring-composition, I also introduce him to Matt Jockers’ Macroanalysis and Goldstone and Underwood, “The Quiet Transformations of Literary Studies: What Thirteen Thousand Scholars Could Tell Us”. New Literary History 45, no. 3, Summer 2014. I mention Kemp Malone, a Hopkins person, as he came up in blog discussion of the paper.

On the Poverty of Literary Cognitivism 2: What I Learned When I Walked off the Cliff of Cognitivism
(August 24, 2015)

I attempt to explain what, in the end, I got out of my immersion in cognitive networks, since I haven’t used them in my post-graduate work in literature. What I got most immediately was a powerful way of thinking about language in general, one with a sharp distinction between the object of thought, captured in diagrams, and a given text: the text is one thing, the model is another. There is no confusing the two. Moretti has made similar remarks about the diagrams he uses in ‘distant reading.’

Turning Point interior

Three for 3QD: Man-Machine Collaboration, E.T. the Extra-Terrestrial, American Heartbreak in Jersey City

Generally when I post an article to 3 Quarks Daily I will follow up with a post here at New Savanna linking to the 3QD piece and extending or commenting on that argument in some way. However, as I’ve indicated in this post, Coming out of melancholy, again, from Feb. 5, I went into psychological hibernation (aka melancholy) last September. While I did manage to post to 3QD during that period, I didn’t post notices here at the Savanna. Here are those notices, belatedly.

* * * * *

Some Hybrid Remarks on Man-Machine Collaboration, September 12, 2025.

That essay touches on a number of things: 1) LLMs as cultural technology, 2) my Fourth Arena concept, 3) Latour on the (false) distinction between nature and culture, and 4) the issue of proper attribution for hybrid essays (essays in which an AI played a significant role). In between 3 and 4 I inserted an essay by ChatGPT.

* * * * *

E.T. the Extra-Terrestrial: Into the Bopi with Steven Spielberg, Oct. 12, 2025.

It’s what its title suggests, an essay review of Spielberg’s film. The film is staged as a science fiction story, but is that really what it is, science fiction? From the article:

On the whole, my sense is that, in making this film, Spielberg ventured into the bopi. And just what, pray tell, is that? I have the term from my friend, Charlie Keil. Early in his career he did fieldwork among the Tiv of Nigeria. The bopi is an area that’s set aside for children’s play. Moreover, adults are forbidden to enter the bopi. [...] But the whole film feels like an imaginative bopi. It’s a kid-centric world in which adults are an intrusive presence. [...]

Ultimately the story of E.T. seems to be almost an allegory or metaphor for art itself, a zone apart from the world into which we move to revivify and reconstruct.

* * * * *

American Heartbreak: The ‘Urban Design Studio’ in Jersey City, Nov. 3, 2025.

This is a photo essay about a graffiti site in Jersey City, now demolished.

Friday, March 13, 2026

Friday Fotos: What’s a tablescape?

Just what the name suggests: like a landscape, only on a tabletop.

I’ve been taking photos of my meals and posting them here since September of 2018. Since I eat my meals indoors where they’re arrayed on the top of a table, my food photos will often, though not necessarily always, catch the table itself. What I mean by a tablescape, however, is more specialized. Here’s a recent tablescape:

To get that shot I set the camera on the table, pointed it in an appropriate direction, and snapped the shutter. That means your point of view is an inch or an inch-and-a-half above the table itself and your angle of regard is parallel to the tabletop. Just as landscapes are photographed or painted from some location on the land itself, so a tablescape is photographed from a location on the table itself.

This, in contrast, is NOT a tablescape, though a table is quite visible:

But here I’m holding my camera in my hand so that I can get a particular shot. In this case, I’m interested in how the glimpse of the placemat you see through the carafe is displaced relative to what you see through the air (due to refraction). Couldn’t get that shot in a tablescape.

There is an element of chance in taking a tablescape. Why? Because you can’t line up the photo in the normal way, either through a viewfinder or a viewscreen. I suppose you could try, but it would be difficult, not terribly successful, and not worth the effort. Fact is, the chance element is one reason for taking tablescapes. You don’t quite know what you’re going to see.

I doubt that I would ever have deliberately taken this photo, but as a tablescape it’s fine.

Or is it? And that’s why I’m taking these shots, to force me to think about each photo. Is this an image I want to keep, to show to others? Why? When you shoot, say, the Empire State Building, which I can do quite easily, those questions don’t arise quite so insistently. Why not? Because the Empire State Building is an iconic structure and, as such, is certified photo-worthy. Shooting certified photo-worthy subjects is a no-brainer.

Here’s another tablescape:

Notice the expanse of the table itself in the foreground. That’s a typical feature of table shots. I cropped most of the table out in that first shot up there. Notice also that in the first shot I’m pointing out the window, where you see street lights in Hoboken and tall buildings in Manhattan. This shot, in contrast, is pointed at the interior of the restaurant. Look at the upper left; it looks like shadows of some plants cast on the side of a column. What plants, where? Notice the reflection of the carafe on the shiny surface of the table. There are shadows cast across the table as well.

Here’s one last tablescape, without comment:

Thursday, March 12, 2026

Night codes

AI as coder, reports from the trenches

Clive Thompson, Coders Coded Their Job Away. Why Are So Many of Them Happy About It? NYTimes, Mar. 12, 2026.

He and Brennan-Burke, who is 32, are still software developers, but like most of their peers now, they only rarely write code. Instead, they spend their days talking to the A.I., describing in plain English what they want from it and responding to the A.I.’s “plan” for what it will do. Then they turn the agents loose.

A.I. being A.I., things occasionally go haywire. Sometimes when Claude misbehaves and fails to test the code, Ebert scolds the agent: Claude, you really do have to run all the tests.

To avoid repeating these sorts of errors, Ebert has added some stern warnings to his prompt file, the list of instructions — a stern Ten Commandments — that his agents must follow before they do anything. When you behold the prompt file of a coder using A.I., you are viewing a record of the developer’s attempts to restrain the agents’ generally competent, but unpredictably deviant, actions.

A coder is now more like an architect than a construction worker.

I looked at Ebert’s prompt file. It included a prompt telling the agents that any new code had to pass every single test before it got pushed into Hyperspell’s real-world product. One such test for Python code, called a pytest, had its own specific prompt that caught my eye: “Pushing code that fails pytest is unacceptable and embarrassing.”

Embarrassing? Did that actually help, I wondered, telling the A.I. not to “embarrass” you? Ebert grinned sheepishly. He couldn’t prove it, but prompts like that seem to have slightly improved Claude’s performance. [...]

Computer programming has been through many changes in its 80-year history. But this may be the strangest one yet: It is now becoming a conversation, a back-and-forth talk fest between software developers and their bots.

This vertiginous shift threatens to stir up some huge economic consequences. For decades, coding was considered such wizardry that if you were halfway competent you could expect to enjoy lifetime employment. If you were exceptional at it (and lucky), you got rich. Silicon Valley panjandrums spent the 2010s lecturing American workers in dying industries that they needed to “learn to code.”

Now coding itself is being automated. To outsiders, what programmers are facing can seem richly deserved, and even funny: American white-collar workers have long fretted that Silicon Valley might one day use A.I. to automate their jobs, but look who got hit first! Indeed, coding is perhaps the first form of very expensive industrialized human labor that A.I. can actually replace.

Why programmers like their AI coders:

The enthusiasm of software developers for generative A.I. stands in stark contrast to how other Americans feel about the impact of large language models. Polls show a majority are neutral or skeptical; creatives are often enraged. But if coders are more upbeat, it’s because their encounters with A.I. are diametrically opposite to what’s happening in many other occupations, says Anil Dash, a friend of mine who is a longtime programmer and tech executive. “The reason that tech generally — and coders in particular — see L.L.M.s differently than everyone else is that in the creative disciplines, L.L.M.s take away the most soulful human parts of the work and leave the drudgery to you,” Dash says. “And in coding, L.L.M.s take away the drudgery and leave the human, soulful parts to you.”

There’s much more at the link.