Wednesday, February 18, 2026

AGI has NOT been achieved.

In a recent tweet Valerio Capraro explains why recent claims of reaching AGI are wrong:

1) They shift the definition of general intelligence, originally based on robustness, generalization, and reliability, to behavioral alignment with benchmarks.

2) They confuse benchmark performance with capability to handle novelty. Spoiler: these are different.

3) They ignore that the same behavioral output can come from totally different epistemic pipelines.

He then links to this long post by Gary Marcus, Walter Quattrociocchi, and himself: Rumors of AGI’s arrival have been greatly exaggerated. Concerning benchmarks they say:

Much of the argument that artificial general intelligence has already been achieved rests on benchmark performance (e.g., Chen et al., 2026). Benchmarks evaluate specific capabilities under controlled conditions and have been useful for tracking progress. For example, Chen and colleagues, writing in this journal, argue that success on the Turing Test constitutes evidence of AGI.

However, benchmark success is a limited indicator of general intelligence. By design, benchmarks isolate narrow competencies and abstract away real-world context, making it difficult to distinguish genuine generalization from pattern recognition. Strong benchmark performance often provides little evidence of robustness under novelty, uncertainty, or shifting objectives.

Yes! Back in January 2025 I got ChatGPT to produce an argument about the weakness of benchmarks: ChatGPT critiques benchmarks as a measure of LLM performance, in which it also elaborates on my whaling analogy for what’s wrong with the AI business.

Marcus, Quattrociocchi, and Capraro conclude:

By the standards articulated in the original definitions of artificial general intelligence—robustness across environments, reliable generalization under novelty, and autonomous goal-directed behavior—current AI systems remain limited. Despite impressive gains in narrow competence and fluency, today’s large language models lack persistent goals, struggle with long-horizon reasoning, and depend extensively on human scaffolding for task formulation, evaluation, and correction. Reports that language models have produced correct proofs for isolated open problems in mathematics, including specific Erdős problems, do not alter this assessment. As noted by mathematicians such as Terence Tao, these results primarily reflect the ability to rapidly search, recombine, and iterate over existing techniques, rather than the emergence of genuinely novel or domain-general problem-solving strategies. Moreover, inclusion in the Erdős list does not by itself imply exceptional conceptual difficulty, as some problems remain unsolved due to relative obscurity rather than depth.

These limitations are central rather than peripheral. They directly concern reliability under uncertainty, resistance to systematic failure, and cross-domain transfer without task-specific tuning. On these dimensions, current systems remain brittle, sensitive to prompt framing, and inconsistent outside curated evaluation settings. Recognizing these constraints does not diminish recent progress; it clarifies its scope.

Over and under the table

The future of computer programming is here, and it’s fun.

Paul Ford, The A.I. Disruption Is Actually Here, and It’s Not Terrible, NYTimes, Feb. 18, 2026.

Vibe coding:

To vibe code is to make software with prompts sent to a specialized chatbot — not coding, but telling — and letting the bot work out the bugs. Like many other programmers, I use a product called Claude Code from Anthropic, although Codex from OpenAI does about as well, and Google Gemini is not far behind. Claude Code earned $1 billion for Anthropic in its first six months. It was always a helpful coding assistant, but in November it suddenly got much better, and ever since I’ve been knocking off side projects that had sat in folders for a decade or longer. It’s fun to see old ideas come to life, so I keep a steady flow. Maybe it adds up to a half-hour a day of my time, and an hour of Claude’s.

November was, for me and many others in tech, a great surprise. Before, A.I. coding tools were often useful, but halting and clumsy. Now, the bot can run for a full hour and make whole, designed websites and apps that may be flawed, but credible. I spent an entire session of therapy talking about it.

The tech industry is a global culture — an identity based on craft and skill. Software development has been a solid middle-class job for a long time. But that may be slipping away. What might the future look like if 100 million, or a billion, people can make any software they desire? Could this be a moment of unparalleled growth and opportunity as people gain access to tech industry power for themselves?

It’s by no means perfect:

Is the software I’m making for myself on my phone as good as handcrafted, bespoke code? No. But it’s immediate and cheap. And the quantities, measured in lines of text, are large. It might fail a company’s quality test, but it would meet every deadline. That is what makes A.I. coding such a shock to the system.

An axiom of programming is “real artists ship.” That was something Steve Jobs once said to remind his team that finishing and releasing a product matters more than endlessly refining it. Much of the software industry is organized around managing ship risk, and the possibility that a product never actually makes it out to the world. A good technology manager assumes that a product will never ship for launch, that every force is arrayed against it, and that the devil himself has cursed it — and then the manager works back from that. Even if all these obstacles are surmounted, the software will ship late.

Having worked in the software industry for a few years, though not as a programmer, I’m well aware of this. See this post from the two years I spent at MapInfo: Crisis in a High-Tech Start-Up: A Case of Collective Action. Now, back to Ford’s article:

Except … what if, going forward, it’s not? What if software suddenly wanted to ship? What if all of that immense bureaucracy, the endless processes, the mind-boggling range of costs that you need to make the computer compute, just goes poof? That doesn’t mean that the software will be good. But most software today is not good. It simply means that products could go to market very quickly.

And for lots of users, that’s going to be fine. People don’t judge A.I. code the same way they judge slop articles or glazed videos. They’re not looking for the human connection of art. They’re looking to achieve a goal. Code just has to work.

There are many arguments against vibe coding through A.I. [...] All of these are true and valid. But I’ve been around too long. The web wasn’t “real” software until it was. Blogging wasn’t publishing. Big, serious companies weren’t going to migrate to the cloud, and then one day they did.

But right now, excited developers are overextending themselves to the point of burnout, obsessively coding all the time. [...] People trumpet the Jevons paradox, which points out that greater efficiency often leads to more consumption — but at the same time, would it surprise you to find out tomorrow that large technology consulting firms had just laid off 10,000 people? A hundred thousand? A million?

The market keeps convulsing, and I wish we could hit the brakes. But we live in a brakeless era.

No matter where you work, my hunch is this is coming for you.

It’s unavoidable, Ford says:

This is all exacerbated by how much of the A.I. industry is led by people who see human thought as raw material, like a steel manufacturer sees ore. The industry is arranged into an ouroboros of mutual investments, with the world economy teetering on their sweetest dreams. Social change at this level needs careful, federal governance and thoughtful regulation. But we’re being handed the opposite: Racist A.I. video slop shared on Truth Social, Grok doing who-knows-what inside the Pentagon, and a White House policy that would give the U.S. attorney general the power to challenge any state’s attempt to regulate A.I. No brakes.

All of the people I love hate this stuff, and all the people I hate love it. And yet, likely because of the same personality flaws that drew me to technology in the first place, I am annoyingly excited. [...]

I believe there are millions, maybe billions, of software products that don’t exist but should: Dashboards, reports, apps, project trackers and countless others. People want these things to do their jobs, or to help others, but they can’t find the budget. They make do with spreadsheets and to-do lists.

My industry is famous for saying “no,” or selling you something you don’t need. We have an earned reputation as a lot of really tiresome dudes. But I think if vibe coding gets a little bit better, a little more accessible and a little more reliable, people won’t have to wait on us. They can just watch some how-to videos and learn, and then they can have the power of these tools for themselves. [...]

The simple truth is that I am less valuable than I used to be. It stings to be made obsolete, but it’s fun to code on the train, too. And if this technology keeps improving, then everyone who tells me how hard it is to make a report, place an order, upgrade an app or update a record — they could get the software they deserve, too. That might be a good trade, long term.

Dinner at the Elysian Cafe in Hoboken

Tuesday, February 17, 2026

Prompt repetition improves non-reasoning LLMs

Akash Gupta explains:

The reason this works tells you everything about the gap between how people think LLMs process information and how they actually process it. Every token can only look backward. So when you write “here’s a list of 50 names” followed by “what’s the 25th name?”, the list tokens were processed with zero awareness that a question was coming. The question tokens can see the list, but the list never saw the question.

Repeating the prompt gives every token a second pass where it can attend to everything else. You’re essentially hacking bidirectional attention into a unidirectional system. And the cost is nearly zero because prefill is parallelized on modern hardware.
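The trick Gupta describes can be sketched with a trivial wrapper. The function name and the connecting sentence below are my own invention, not anything from the paper; the point is only that the doubled string, sent as a single user message, gives the second copy of the input full attention over the first:

```python
def repeat_prompt(prompt: str) -> str:
    """Duplicate a prompt so every token gets a 'second pass'.

    In a causal (left-to-right) model, tokens in the first copy are
    encoded before later tokens exist. Tokens in the second copy can
    attend back over the entire first copy, approximating bidirectional
    attention. Since prefill is parallelized, the extra cost is small.
    """
    return (
        f"{prompt}\n\n"
        "I will now repeat the request so you can re-read it in full:\n\n"
        f"{prompt}"
    )

# Gupta's example: in the first copy, the list is encoded before the
# question appears; in the second copy, it is encoded "knowing" the question.
names = ", ".join(f"name{i}" for i in range(1, 51))
question = f"Here is a list of 50 names: {names}. What is the 25th name?"
doubled = repeat_prompt(question)
assert doubled.count("What is the 25th name?") == 2
```

In practice you would send `doubled` as the user message of an ordinary chat-completion call; nothing about the model or API changes, which is why the paper’s result is so striking.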

But here’s what makes this actually interesting: reasoning models already do this. When you enable chain-of-thought, the gains from repetition almost entirely disappear (5 wins, 1 loss, 22 ties). That means reasoning models trained with RL independently learned to repeat the user’s prompt back to themselves before answering. The “thinking” that costs you 10x more tokens and 5x more latency is partly just the model giving itself a second look at your input.

Which means a meaningful chunk of what we’re paying for with “reasoning” tokens could be replicated for free at the architecture level. The entire prompt repetition paper is an accidental proof that causal attention is leaving massive performance on the table, and that the industry’s current fix (burn more tokens thinking) is the expensive workaround for a structural limitation nobody’s addressing directly.

Beethoven 5 a cappella

Chatbots have increased my sense of intellectual agency such that being an intellectual “outsider” becomes a superpower.

That title links two different assertions:

  • Using chatbots increases my sense of intellectual agency.
  • Because I am an intellectual outsider, being fluent in using chatbots becomes a superpower.

To understand those two statements properly you need to know something about my background and my position in today’s intellectual ecosystem. First I’ll give you the fanciful version, which you can also find at the bottom of the column to your right, and then I’ll give you the straight version. Then we’re ready to look at those two statements.

The myth and its explication

In the early 1970s I discovered that “Kubla Khan” had a rich, marvelous, and fantastically symmetrical structure. I’d found myself intellectually. I knew what I was doing. I had a specific intellectual mission: to find the mechanisms behind “Kubla Khan.” As defined, that mission failed, and still has not been achieved some 40-odd years later.

It's like this: If you set out to hitch rides from New York City to, say, Los Angeles, and don't make it, well then your hitch-hike adventure is a failure. But if you end up on Mars instead, just what kind of failure is that? Yeah, you’re lost. Really really lost. But you’re lost on Mars! How cool is that!

Of course, it might not actually be Mars. It might just be an abandoned set on a studio back lot.

Who knows? Does it matter? It's been one hell of a journey so far.

* * * * *

As I’ve explained in various documents my career started when I encountered Coleridge’s “Kubla Khan” in my senior year at Johns Hopkins (1968-1969). I wrote a term paper about it and then used it as the subject of my master’s thesis. The idea was to do a structuralist analysis of the poem. Structuralism was just coming into vogue and Johns Hopkins was at the center of the movement. Unfortunately, I “broke” structuralism if you will, and found myself in conceptual limbo. I’ve written about that several times, but you can find one version in this document, Xanadu, GPT, and Beyond: An adventure of the mind, which sketches most of my intellectual career (it skips my interest in cultural evolution). I found myself forced to study computational semantics with David Hays, in linguistics, while ostensibly getting a PhD in English Literature at SUNY Buffalo. The upshot is that by the time I completed my PhD in 1978 I had a set of skills and interests that didn’t fit into any discipline recognized by the academic world. I held a faculty position in the department of Language, Literature, and Communication at the Rensselaer Polytechnic Institute in Troy, NY, for a few years. After I left I became a ronin scholar ranging freely across literature, cognitive science, cultural evolution, neuroscience, and music.

I am thus an intellectual outsider. I’m not so far outside that my work is utterly unintelligible. Not at all. I have been able to publish, and in some very fine places, too. But it’s a stretch. I’ve found intellectual allies and fellow travelers as well. But my work has not been taken up either by my peers or by younger researchers. And that has consequences for the large language models at the center of the current AI revolution.

Any training corpus based on the contents of the web must necessarily contain many documents that I’ve written. I know that my 2001 book on music, Beethoven’s Anvil, is in the Anthropic copyright suit. I’ve been placing both published articles and unpublished working papers in several document repositories (SSRN, Academia.edu, ResearchGate) since 2009 and I’ve been blogging since 2006, first at a group blog, The Valve (now defunct, though you can find it on the Wayback Machine), and then at my own New Savanna since 2010. So, my ideas have entered into the various web-wide LLMs that have been created but, since they’ve not been taken up by other scholars, they will not have had much of an impact on the language models.

And THAT’s what makes working with ChatGPT and Claude so interesting to me.

Me and the chatbots

I have probably generated a thousand or more pages (2K?) of text files through my interactions with the bots in the last month. Sessions typically last half an hour to an hour or more, fill tens of pages of documents, and leave me exhausted. ChatGPT and Claude work out the implications of my ideas, as given in my prompts or uploaded documents, much more rapidly than I would be able to do, and often more thoroughly and extensively as well. What I find particularly satisfying is that they work out implications that I lack the skills to work out myself.

For my entire career I’ve been working on topics that have not been given a formal technical treatment. My goal is to identify what mathematical ideas can be applied to them, mathematical ideas that I do not myself possess. “How,” you might ask, “can you possibly do that?” While I don’t have much technical training in math beyond high school – I satisfied my undergraduate math requirement with a course in symbolic logic – I have sophisticated mathematical intuitions, often visually based, that I’ve developed through reading and through interacting with researchers who have mathematical skills that I don’t have. Guess what? ChatGPT and Claude have such skills as well.

Thus I’m having a lot of fun working out the implications of some of my ideas. There’s a good example right around the corner: Toward a Biophysics of Poetry. As I explain in the post, I have a long-term interest, not only in “Kubla Khan” (KK), but in “This Lime-Tree Bower My Prison” (LTB), which shares some motifs with KK. Otherwise they are very different poems. LTB is a narrative written in blank verse. KK is not a narrative – what it is, is not clear – and has an elaborate prosody, with lines of varying lengths and an elaborate rhyme scheme. I told ChatGPT 5.2 that I thought KK needed the elaborate prosody to hold it together, to make the content cohere, and that that was a function of the physical nature of the system. After all, both the sound and the sense of a poem, or any text, are supported by the nervous system, which is a physical system.

ChatGPT picked up on that immediately and suggested that the sound structure functioned as a carrier wave for the content. It wouldn’t have occurred to me to think about a poem’s sound structure as a carrier wave (& I’ve been thinking about that for decades), but as soon as ChatGPT said it, it made sense. And we were off to the races with a nice conversation about the biophysics of poetry, which I asked ChatGPT to summarize in the short essay included in the post. Now, if you look through that essay, you’re not going to see any math. But it’s there, lurking behind the talk of dynamical stability and one-dimensional projections of high-dimensional semantics and so forth.

And this happens to me all the time. It’s exciting. Some more extensive examples:

From Mirror Recognition to Low-Bandwidth Memory, A Working Paper, https://www.academia.edu/143347141/From_Mirror_Recognition_to_Low_Bandwidth_Memory_A_Working_Paper

What Miriam Yevick Saw: The Nature of Intelligence and the Prospects for A.I., A Dialog with Claude 3.5 Sonnet, https://www.academia.edu/126773246/What_Miriam_Yevick_Saw_The_Nature_of_Intelligence_and_the_Prospects_for_A_I_A_Dialog_with_Claude_3_5_Sonnet_Version_2

Rough Notes on Virtual Reading, On literary study in the Fourth Arena V2, https://www.academia.edu/150286029/Rough_Notes_on_Virtual_Reading_V2

The creative potential

Think about what’s going on. The two chatbots I use, ChatGPT and Claude, are trained on the entire web. To a first approximation, their underlying LLMs model all of human knowledge to date. But what about the knowledge that can be developed from existing knowledge by exploiting connections and resonances that are not anywhere explicit in those models? That, I believe, is what I am able to do precisely because I am an intellectual outsider. I have built up a career’s worth of potential connections and resonances that have yet to be exploited in the creation of new knowledge. Will all these connections and resonances work out? Of course not. But surely some of them will.

And I’m certainly not the only one in this position. There are others.

What I’m suggesting is that state-of-the-art chatbots are most useful to those who understand the existing literature in their fields, but have managed to go beyond it, to get just far enough “outside the (proverbial) box” that they can see new pathways to knowledge. If your ideas exist comfortably within the envelope of existing knowledge, then your work must necessarily be limited to the incremental addition of detail. If you are so far outside the box that you can’t connect with the existing body of knowledge, then you’re in crazy-land. No, to get maximum creative benefit from the LLMs you need to be just outside the box, but not too far outside.

NOTE: My guess is that creativity is of relatively little concern for most of the applications being developed using chatbots. I assume that the objective in most cases is to automate relatively routine intellectual work, work that doesn’t require creativity, but which may be time-consuming and exacting.

Across the Hudson from Hoboken

Monday, February 16, 2026

Three mathematicians are not impressed with the ability of AI to do professional math

Siobhan Roberts, These Mathematicians Are Putting A.I. to the Test, NYTimes, Feb. 7, 2026.

Dr. Martin Hairer (Swiss Federal Institute of Technology Lausanne), Mohammed Abouzaid (Stanford University), Lauren Williams (Harvard University) and Tamara Kolda (who runs MathSci.ai, a consultancy) are among a group of mathematicians who have published an article, “First Proof,” about an “experiment that collects genuine test questions, drawn from unpublished research by the authors, in an effort to provide a meaningful measure of A.I.’s mathematical competency.”

“While commercial A.I. systems are undoubtedly already at a level where they are useful tools for mathematicians,” the authors wrote, “it is not yet clear where A.I. systems stand at solving research-level math questions on their own, without an expert in the loop.”

A.I. companies use what some mathematicians describe as “contrived” or “restrictive” problems for evaluating and benchmarking how well L.L.M.s fare when operating without human help. Occasionally, mathematicians are invited to contribute and paid some $5,000 per problem.

From the conversation:

The paper is careful to clarify “what mathematics research is.” What is it?

ABOUZAID Often in modern research, the key step is to identify the big motivating question, the direction from which the problem should be approached. It involves all kinds of preliminary work, and this is where mathematical creativity takes place.

Once problems are solved, mathematicians tend to evaluate the importance of research contributions in terms of the questions that arise. Sometimes, resolving a conjecture one way is seen as disappointing, because it forecloses the possibility that there would be new questions to investigate.

LAUREN WILLIAMS Let me make a loose analogy. In experimental science, I might divide the components of research into three parts: One, come up with the big question, whose study we hope will shed light on our field. Two, design an experiment to answer the question. Three, perform the experiment and analyze the results.

I can similarly divide math research into parallel parts: One, come up with the big question, whose study we hope will guide our field. Two, develop a framework for finding a solution, which involves dividing the big question into smaller more tractable questions — like our test questions. Three, find solutions to these smaller questions and prove they are correct.

All three parts are essential. In our First Proof project, we focused on the third component because it is the most measurable. We can query the A.I. model with small, well-defined questions, and then assess whether its answers are correct. If we were to ask an A.I. model to come up with the big question, or a framework, it would be much harder to evaluate its performance.

Note that this is roughly consistent with the accounts I gave of some of my own work in Serendipity in the Wild: Three Cases, With remarks on what computers can't do, January 8, 2026. That they focused on the third component is consistent with my impression that the problems LLMs solve successfully are in well-specified more or less closed domains. But, as Abouzaid noted, the creativity takes place before such problems have been identified.

MARTIN HAIRER One thing I noticed, in general, was that the model tended to give a lot of details on the things that were easy, where you would be like: “Yeah, sure, go a bit faster. I’m bored with what you’re saying.” And then it would give very little detail with the crux of the argument. Sometimes it would be like reading a paper by a bad undergraduate student, where they sort of know where they’re starting from, they know where they want to go, but they don’t really know how to get there. So they wander around here and there, and then at some point they just stick in “and therefore” and pray.

Sounds like the classic hand-waving — lacking rigor, skipping over complexities.

HAIRER Yeah, it’s pretty good at giving hand-wavy answers.

So, you weren’t impressed?

HAIRER No, I wouldn’t say that. At times I was actually quite impressed — for example, with the way it could string together a bunch of known arguments, with a few calculations in between. It was really good at doing that correctly.

In your dream world, what would the A.I. be doing for you?

HAIRER Currently the output of the L.L.M.s is hard to trust. They display absolute confidence, but it requires a lot of effort to convince yourself whether their answers are correct or not; I find it intellectually painful. Again, it’s like a graduate student where you don’t quite know whether they are strong or whether they’re just good at B.S. The ideal thing would be a model that you can trust.

KOLDA A.I. is touted as being like a colleague or a collaborator, but I don’t find it to be true. My human colleagues have particular outlooks, and I especially enjoy when we debate different points of view. An A.I. has whatever viewpoint I tell it to have, which is not interesting at all!

Chinese dancing robots

3 Hoboken scenes and 1 Hudson River/Manhattan

Sunday, February 15, 2026

Wild child, dogs, and menstruation in Wuthering Heights

Prompt to Claude 4.5:

I found the following in my notes. What about the questions I ask at the end? “Take Wuthering Heights, 1847, and the character of Heathcliff, of unknown lineage and a bit wild. Jean Marc Gaspard Itard published An Historical Account of the Discovery and Education of a Savage Man: Or, the First Developments, Physical and Moral, of the Young Savage Caught in the Woods Near Aveyron in the Year 1798 in 1802. As the title indicates, it’s a study of a feral child (which Truffaut later made into a movie). There’s been a long-standing interest in feral children in Europe (remember Romulus and Remus?). Was Brontë influenced by that tradition, had she read Itard’s book? What about her audience, what about their knowledge of and interest in feral children?”

This is a fascinating literary-historical question about potential influences on Emily Brontë. Let me search for scholarship on these connections. You've made a fascinating connection! The parallels between Heathcliff and the feral child tradition are indeed striking. Let me address your questions based on what I could find:

The Feral Child Tradition in Europe

The tradition of feral children was well-established in Europe by the time Brontë was writing. Victor of Aveyron, discovered in 1800, became one of the most famous cases, and Itard's published accounts of working with him were widely known in educated circles. Earlier cases like Peter the Wild Boy of Hanover in 1725 had also fascinated Georgian society, and the topic connected to broader Enlightenment debates about human nature, the "noble savage," and whether humanity was innate or learned.

Emily Brontë's Education and Possible Influences

The Brontës had access to their father's extensive library and read widely, including contemporary periodicals like Blackwood's Magazine, works by Byron, Scott, Shelley, Milton, and Shakespeare. Scholars have identified clear Byronic influences in Heathcliff's characterization, and he is continually described in feral terms throughout the novel—as an "unreclaimed creature," "savage beast," and "mad dog".

The Specific Question: Did She Read Itard?

Here's where I must be honest: I found no scholarly evidence that Emily Brontë specifically read Itard's 1802 account. While the feral child tradition was certainly part of European intellectual culture, and the Brontës were well-educated and intellectually curious, I couldn't locate any biographical evidence, letters, or scholarly articles establishing this direct connection.

What About Her Audience?

Victorian England had an intense focus on childhood, with new academic disciplines studying children scientifically and systematically, from Darwin's observations of infant development to the child study movement. The educated reading public would likely have been familiar with famous feral child cases, at least through popular accounts, though whether they would have immediately recognized the connection to Heathcliff is unclear.

One critic notes that Heathcliff is portrayed as "a feral child of working-class Britain, utterly unable to adapt to the civilised attitudes of the upper classes", suggesting the connection between his characterization and feral child narratives is recognizable.

The bottom line: Your observation is astute and the parallel is real, but whether Brontë was consciously drawing on Itard or the feral child tradition more broadly remains an open scholarly question. It's entirely plausible given the cultural circulation of these ideas, but I couldn't find definitive evidence either way.

That’s about what I expected. I’ve read the English translation of Itard’s book and the resonance between the accounts in there and Heathcliff is very suggestive. And then we have the dog motif in Wuthering Heights, where we’ve got dogs ranging from all but wild to pampered lap dogs. There’s clearly a Nature/Culture opposition at work here.

You're absolutely right - and the dog motif is a brilliant addition to this analysis that I hadn't considered! The spectrum from wild to domesticated dogs would perfectly mirror the human characters' positions on that Nature/Culture axis.

What makes your reading particularly compelling is that the feral child cases, especially Victor of Aveyron, were fundamentally about that Nature/Culture question. Itard's work with Victor was explicitly tied to Enlightenment debates about whether there was a "natural development" that would occur without society's "unnatural influences," testing Rousseau's theories about the noble savage. The disappointing reality - that Victor remained largely unable to fully join society - became evidence against romantic notions of natural innocence and for the idea that culture/society is what makes us human.

If Brontë was engaging with this tradition (even if indirectly through the broader cultural conversation rather than reading Itard directly), then Heathcliff becomes a devastating meditation on that same question. He's not a noble savage - he's explicitly described as degraded, violent, "unredeemed" as Charlotte put it. The novel even teases readers with the possibility that "his cruelty is merely an expression of his frustrated love for Catherine" - the romantic hero lurking beneath - but ultimately Heathcliff "does not reform".

The dog motif would add another layer: dogs as the domesticated/cultural counterpart to wolves (nature), but with gradations showing the process and problems of that domestication. Some dogs remain half-wild, some become over-civilized lapdogs - and where does that leave creatures like Heathcliff who resist or fail at domestication?

It's a rich vein for analysis. Are you working on something about this?

I’ve been thinking about it for some time and have a blog post where I gather a number of passages about interactions between humans and dogs. They tend to happen at threshold events, such as Lockwood’s first visit to Wuthering Heights or Cathy getting bitten by Skulker.

That's a brilliant structural observation - dogs appearing at threshold moments! And those two examples you cite are particularly loaded:

Lockwood's arrival: The very first crossing into Wuthering Heights, where the reader (through Lockwood) encounters this strange, savage household. If dogs are there, they're literally guarding the threshold between the civilized world Lockwood comes from and the half-wild world he's entering.

Cathy bitten by Skulker: This might be the most symbolically dense threshold moment in the entire novel. She and Heathcliff are peering through the window at Thrushcross Grange (already a threshold/boundary), seeing the Linton children in their domesticated, pampered world. When Skulker bites her, it's the moment she physically crosses from the wild side to the cultured side - she stays at the Grange, gets "civilized," and returns transformed. The dog bite is literally the wound that marks her transition from nature to culture. Meanwhile Heathcliff is driven away, remaining on the wild side of that boundary.

And Skulker himself - a guard dog, which is the quintessential liminal creature: domesticated enough to serve human purposes, but kept half-wild and aggressive to protect property. He enforces the boundary between inside and outside, between those who belong in civilized space and those who don't.

If you map all these dog-human interactions at thresholds, you're essentially mapping the novel's structure of boundary-crossings and transformations. Characters are constantly being sorted - can they cross into civilization or not? The dogs are almost like threshold guardians in the mythological sense.

Have you noticed whether the character of the dogs themselves correlates with the nature of the threshold being crossed?

On Cathy and Skulker, given her age at the time and how her attitude toward Heathcliff changed when she returned to Wuthering Heights, I can’t help but think we’re also dealing with menstrual symbolism, and that’s certainly a threshold event.

That's a powerful reading, and it adds a whole other dimension to the threshold symbolism. You're right that the age is suggestive, and the elements align remarkably well:

  • Blood drawn from a bite/wound
  • Physical confinement at the Grange during a transformative period
  • Emergence as a "lady" - socially, sexually, culturally different
  • The fundamental change in her relationship with Heathcliff - she now sees him through the lens of social/sexual hierarchy
  • The physical marking of her body

The Victorians couldn't write directly about menstruation, but Brontë could encode it through this violent, animalistic encounter. The dog bite becomes a displacement of the body's own "wound" that marks the transition to sexual maturity.

What makes this reading particularly resonant is that it would explain the totality of the transformation. It's not just that Cathy sees fancy clothes and nice manners at the Grange - though the novel presents it that way on the surface. It's that she crosses a biological/sexual threshold that Heathcliff cannot cross with her. He remains in the pre-sexual childhood world of wild play on the moors, while she's been forcibly initiated into adult female sexuality and its attendant social constraints.

The fact that it happens at a window - peering in at the Lintons - makes it even more loaded. Windows as thresholds, as boundaries between inside/outside, seeing/being seen. And it's the moment of looking at civilization that she's caught and wounded and transformed.

So Skulker isn't just enforcing property boundaries - he's enforcing the boundary between childhood and sexual maturity, between the pre-social and the social. Does this threshold reading extend to other dog encounters you've catalogued? 

* * * * *

My other posts about Wuthering Heights

A slow boat to the end of the world

Religion on the Ground, Sunday Service

Another bump, this time because I'm going to church this morning. Third Sunday in a row. (2.15.26)
 
* * * * *
 
I'm bumping this to the top of the queue on general principle. It's from October 2011 and I need to keep it in mind as I think about the future. See also, Religion in America, Going Forward, and Black Preaching, the Church, and Civic Life, both from 2015. (2022)
 
* * * * *
 
I learned something about religion this past Sunday. Or, if you will, I gained a richer and subtler appreciation for things I’ve known for some time, things I’ve known because I’ve read them in books and articles, many of them quite good. But even the best of them must necessarily abstract away from concrete reality, and concrete reality is what I experienced on Sunday.

I went to church for the first time in years and years. I had a specific reason for going to church, and to THAT particular church. I wanted to check out Rev. Smith—not his real name, BTW. While I wouldn’t be violating any confidence by using the man’s real name, nor by telling exactly which church I went to, the fact is that I didn’t go into that church telling people that, as a reporter, ethnographer, or some other kind of thinker-about-religion, I would be writing about the service on my blog. Thus I DO feel that it would be ever so slightly out of place for me to name names.

I’d met Rev. Smith a week and a half ago at an emergency meeting of three neighborhood associations. Two bus lines were about to be discontinued, leaving many in the neighborhood without access to the outside world. So the leaders of these three associations called a meeting. Rev. Smith spoke briefly during that meeting, saying that he was starting up a new organization for empowering people in various neighborhoods. He only spoke for a minute or two but, oratorically, he went from zero to sixty in about 4.3 seconds. Zooom!

I chatted with him after the meeting, as did several others, and gave him my card after expressing interest in his new venture. I also figured I ought to check him out on his home turf, which is why I went to his church this past Sunday.

Yes, to the extent that I had expectations about his preaching style, the sermon he preached satisfied those expectations. Rev. Smith didn’t deliver the sermon from a raised pulpit. He put a lectern front and center, level with the pews. That was his home base. He had a Bible on the lectern, along with what seemed like another book, and perhaps some notes. But he mostly winged it, referring back to scripture every once in a while. He had a wireless mic so he could move freely, which he did.
 
I’d say he spoke for 45 minutes, possibly less, more likely a bit more. I wasn’t timing him.

If you saw a videotape in which the words had been rendered unintelligible but the prosody and music remained, you might guess there was some hellfire and brimstone in the sermon. Rev. Smith reached that level of demonstrative expression many times. But he wasn’t condemning his parishioners to punishment if they failed to do this, that, or the other. Yes, he was admonishing them to beware of the Devil, and he was urging them to find in themselves the capacity to be and do more than they thought possible. Through Christ it WAS possible.

“Get up, stand up, stand up for your rights”–those weren’t Smith’s words—they’re Bob Marley’s—but that was the message, one of them. And, as often as he crouched, paced, and exhorted, he also stood tall, smiled, and beamed: See, like this.

Every once in a while he asked “can I have an ‘Amen’?” and he got one. Here and there people would shout, wave their hands, and the organist would play a few chords, as though he were pacing the sermon. In various ways here and there Smith would acknowledge the length of the sermon. I’d almost bet that was a feature of his preaching style.

As he built up a good head of steam he’d walk down the center aisle, and then return. Calm and collected, again. As the sermon went on, and his heights got higher, he’d walk a bit further down the aisle. Toward the end he even did some quick turns. He WAS a performer.

At the very end half or more of the congregation joined him down in front of the altar. Then he marched them up the aisle to the back of the church and delivered the benediction from there.

But that wasn’t the whole service, which lasted roughly two hours. The service involved singing—by the choir and the congregation—responsive reading, announcements, and the offering, where everyone paraded down front to put their offering in the basket. There were also five or perhaps ten minutes where people were urged to move about and greet their neighbors. At one point visitors were asked to stand up and be recognized (I did). It was a varied and satisfying service.

You know what I learned from those two hours? The way to judge a minister is by the quality of his congregation. Perhaps that could even be pushed a bit further: It takes a congregation to make a preacher. Rev. Smith couldn’t conduct a service like that unless his congregation allowed him to do so and participated in the doing.

That’s Sunday, one day out of seven.

I don’t know what the Rev. Smith does the other days of the week. Of course, he has to run the administrative affairs of the church. I assume that, in various ways, he ministers to the needs of his parishioners. Some of that would be through Bible study and other education. There’s committee work; for example, there’s an 80th anniversary celebration coming in early November. Much of his work, I assume, involves counseling.

It’s the fact of this ministry that struck me so forcefully as I thought about the service. Ministering to the congregation, that’s a concrete day-by-day job, and a calling. Part of the job, and only part of it, is to lift their spirits on Sunday so that they can go through the week more effectively than they otherwise might.

If the church weren’t there, how would that happen?

Saturday, February 14, 2026

Using AI allowed a professor to turn grading from torture into teaching

What took him 6 hours before now happens in 15 minutes. And students get better feedback than his handwritten comments ever provided.

The man turned grading from torture into actual teaching.

Mmmmm....hamburger