Thursday, January 15, 2026

About the clave rhythm, central to Latin music

The clave rhythm developed in Cuba from the African bell pattern brought to the Americas via the trans-Atlantic slave trade. The rhythm is used to lock in the many rhythms of Afro-Cuban music, as well as other Latin music like the bossa nova of Brazil. This video focuses on three claves -- rumba clave, son clave, and the Brazilian clave.
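
For readers who want to see the three patterns side by side, here is a minimal sketch laying them out on a sixteen-pulse grid. These are the standard 3-2 forms as I know them; the grid positions are my rendering, not taken from the video.

```python
# The three clave patterns mentioned above, on a 16-pulse grid
# (one "x" per struck pulse). Standard 3-2 forms.
PULSES = 16

CLAVES = {
    "son":   [0, 3, 6, 10, 12],
    "rumba": [0, 3, 7, 10, 12],
    "bossa": [0, 3, 6, 10, 13],  # the "Brazilian clave" of bossa nova
}

def render(hits, pulses=PULSES):
    """Return an x/. grid, e.g. 'x..x..x...x.x...' for son clave."""
    return "".join("x" if i in hits else "." for i in range(pulses))

for name, hits in CLAVES.items():
    print(f"{name:>6}: {render(hits)}")
```

Note that rumba clave differs from son clave by a single displaced pulse (the third hit), which is what gives it its more off-balance feel.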

Selfie-stick in Hoboken confronts Hudson Yards

Among universities, the Chinese are moving up, the USA is moving down

Just think, a century ago most Americans thought of Charlie Chan, a movie detective, when they thought of the Chinese. And Chinatown, of course. In the 1960s a character named "Hop Sing" appeared as comic relief in the hit TV program Bonanza. He was the house boy for the wealthy Cartwright family – note the word "boy." And then came the war in Vietnam along with Chairman Mao and his Little Red Book. When I was in graduate school in the 1970s I took a course on radical approaches to literature, taught by Art Efron. We read a collection of essays on aesthetics written by Mao.

Mark Arsenault, Chinese Universities Surge in Global Rankings as U.S. Schools Slip, NYTimes, Jan 15, 2026. The article opens:

Until recently, Harvard was the most productive research university in the world, according to a global ranking that looks at academic publication.

That position may be teetering, the most recent evidence of a troubling trend for American academia.

Harvard recently dropped to No. 3 on the ranking. The schools racing up the list are not Harvard’s American peers, but Chinese universities that have been steadily climbing in rankings that emphasize the volume and quality of research they produce.

The reordering comes as the Trump administration has been slashing research funding to American schools that depend heavily on the federal government to pay for scientific endeavors. President Trump’s policies did not start the American universities’ relative decline, which began years ago, but they could accelerate it.

“There is a big shift coming, a bit of a new world order in global dominance of higher education and research,” said Phil Baty, chief global affairs officer for Times Higher Education, a British organization unconnected to The New York Times that produces one of the better-known world rankings of universities.

In the middle of this long article:  

The number of international students arriving in the U.S. in August 2025 was 19 percent lower than the year before, a trend that could further hurt the prestige and rankings of American schools if the world’s best minds choose to study and work elsewhere.

China has been pouring billions of dollars into its universities and aggressively working to make them attractive to foreign researchers. In the fall, China began offering a visa specifically for graduates of top universities in science and technology to travel to China to study or do business.

“China has a boatload of money in higher education that it didn’t have 20 years ago,” said Alex Usher, president of Higher Education Strategy Associates, a Toronto education consulting company.

Mr. Xi has made the reasons for the country’s investments explicit, arguing that a nation’s global power depends on its scientific dominance.

There's much more in the article.

Hays, Forth, and creativity in man-machine interaction

I've just been talking with ChatGPT about David Hays back in the 1970s. From my prompt:

In 1977 or 1978 he got a personal computer, one of those based on the Z-80 chip, and started learning to program it in Forth. He also sent out a weekly snail-mail newsletter entitled “The Diary of a Man and His Machine.” In one issue he asked for replies from people whose work fell between the cracks. He got about 100 replies. We read them all. We concluded that most of them were just oddballs; they didn’t even seem to know the standard literature in the field(s) relevant to their work. But a handful seemed legit. I don’t think we ever did anything with them.

...working between the cracks is not the same as working without foundations

From ChatGPT's response:

First, David Hays’s early adoption of a Z-80–based personal computer and Forth in 1977–78 places him well ahead of the curve, not just technically but cognitively. Forth, in particular, is telling: it is intimate with the machine, minimalist, stack-based, and reflective. It invites thinking about process, control, and representation in ways that high-level languages often conceal. That choice alone aligns with his lifelong interest in cognitive architecture rather than surface performance.

working between the cracks is not the same as working without foundations

Later:

Third—and this is perhaps the most interesting part—the call for people whose work “fell between the cracks,” and the response it generated, reads like an early, analog precursor to today’s online intellectual ecosystems. The fact that you and Hays received about a hundred replies, and that most of them struck you as oddballs, is not a disappointment; it’s a sociological constant. When disciplinary boundaries loosen, noise floods in first. What matters is that a handful were clearly legitimate, even if nothing came of it at the time. [...] Finally, your retrospective judgment—that many respondents lacked grounding in the relevant literature—underscores a distinction you’ve been circling throughout this chat: working between the cracks is not the same as working without foundations. Hays was interested in the former, not the latter.

Another prompt:

Your comments about Forth are apt. Hays valued the intimacy with the machine. He eventually programmed a whole environment using Forth and used it for keeping notes and generating text. Some of the text for our articles came out of that machine. That detail completes the picture—and it makes the point even stronger.

Chat replies:

What David Hays was doing with Forth was not “early word processing” or “personal productivity.” He was building a cognitive prosthesis—a writable environment in which thinking, note-keeping, and text generation were continuous with computation rather than merely supported by it.

A few things are worth making explicit.

First, the intimacy you mention is decisive. Forth collapses distance between intention and execution: no elaborate compilation pipeline, no abstraction layers to hide what the machine is doing. That intimacy encourages a style of thought in which process is visible. For someone like Hays—already committed to understanding cognition as structured activity—that mattered more than convenience or speed.

Second, the fact that he built an entire environment—not just scripts, but a living workspace for notes and text—means that the machine was not ancillary. It was part of the thinking loop. That some of the prose in your joint articles emerged from that environment is not incidental; it means the machine participated, materially, in the production of theory. This is not metaphorical augmentation. It is literal.

Wednesday, January 14, 2026

23% chance that the USA will buy part of Greenland for $500 to $700 billion

Can AI generate new mathematical ideas?

Cade Metz, Can A.I. Generate New Ideas? NYTimes, Jan 14, 2026 

Here's how Claude 4.5 summarized the article:

AI Tackles Mathematical Problems - But Questions Remain

An AI startup called Harmonic, using OpenAI's GPT-5 technology, recently solved one of the mathematical "Erdos problems," a collection of challenging puzzles designed to test mathematical ingenuity. This sparked debate about whether AI has reached the point of conducting legitimate academic research.

Key Points:

The achievement generated mixed reactions. While some experts see it as evidence that AI can do genuine research, others, including renowned mathematician Terence Tao, argue the solution resembles "a clever student who memorized everything" rather than demonstrating true understanding. The solution was largely based on well-known methods. Initially, OpenAI claimed GPT-5 had solved 10 Erdos problems, but researchers discovered it had merely identified existing solutions buried in decades-old papers. Despite this, the technology proved valuable - it could find obscure sources humans might never locate.

Current Capabilities:

Modern AI systems use "reinforcement learning" to reason through problems, sometimes working for hours. While they can't yet generate truly novel ideas, they've become powerful research tools that can:

  • Analyze and store far more information than humans
  • Suggest hypotheses researchers hadn't considered
  • Help scientists narrow down experiments from 50 to 5

The Verdict:

Experts agree AI is a rapidly improving research assistant, but it still requires experienced human collaborators to guide it, interpret results, and separate useful insights from noise. Whether AI can independently generate breakthrough ideas remains an open question.

A black and white photo of a desiccated plant

Artificial Intelligence and the Future of Work

Konrad Kording and Ioana Marinescu, (Artificial) Intelligence Saturation and the Future of Work, Brookings Institution, 2025, https://papers.ssrn.com/sol3/papers.cfm?abstract_id=6051694

Abstract: Macroeconomic models typically treat AI as just another form of capital, and predict a slowly evolving world, while computer science scaling laws applied to the whole economy predict explosive growth and the potential for a singularity-like event. Both views gloss over the asymmetric reality that intelligence capital or AI scales at computer-science speeds, whereas physical capital and labor do not. What’s missing is a unified, parameter-driven framework that can nest assumptions from both economics and computer science to generate meaningful predictions of AI’s wage and output impacts. Here we use a constant elasticity of substitution (CES) production function framework that separates physical and intelligence sectors. Whereas physical capabilities let us affect the world, intelligence capabilities let us do this well: the two are complementary. Given complementarity between the two sectors, the marginal returns to intelligence saturate, no matter how fast AI scales. Because the price of AI capital is falling much faster than that of physical capital, intelligence tasks are automated first, pushing human labor toward the physical sector. The impact of automation on wages is theoretically ambiguous and can be non-monotonic in the degree of automation. A necessary condition for automation to decrease wages is that the share of employment in the intelligence sector decreases; this condition is not sufficient because automation can raise output enough to offset negative reallocation effects. In our baseline simulation, wages increase and then decrease with automation. Our interactive tool shows how parameter changes shift that trajectory. Wage decreases are steeper at high levels of automation when the outputs of the physical and intelligence sectors are more substitutable. After full automation, more AI and more physical capital increase wages, a classic prediction from standard production functions in capital and labor. Yet, when intelligence and physical are complementary, the marginal wage impact of AI capital saturates as AI grows large. More broadly, the model offers a structured way to map contrasting intuitions from economics and computer science into a shared parameter space, enabling clearer policy discussions, and guiding empirical work to identify which growth and wage trajectories are plausible.
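
To make the CES setup concrete, a two-sector function of the kind the abstract describes might be written as follows. The notation is mine, a schematic sketch rather than the paper's exact specification:

```latex
% Schematic two-sector CES aggregate (notation mine, not the paper's):
\[
  Y \;=\; \Bigl[\, a\,X_p^{\rho} \;+\; (1-a)\,X_i^{\rho} \Bigr]^{1/\rho},
  \qquad \sigma \;=\; \frac{1}{1-\rho},
\]
% where X_p is physical-sector output, X_i intelligence-sector output,
% and sigma the elasticity of substitution. Complementarity is the case
% sigma < 1 (rho < 0); in that regime the marginal product of X_i goes
% to zero as X_i grows with X_p held fixed, which is the saturation of
% returns to intelligence the abstract describes.
```
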

H/t Tyler Cowen.

Monday, January 12, 2026

AI for the Next 30 Years: Four kinds of activity that should be pursued

Here’s the prompt I used to elicit a text from Claude:

I want to do a blog post setting forth those four programs. I want it to be between, say, 1000 and 2000 words, no more. It should have an introduction, sections for each of the four programs, and some final remarks. Give it a title like: AI for the next 30 years, an open-ended plan. Assume a college-educated readership that is generally sophisticated. I’m going to put it in my New Savanna blog along with another post in which I present excerpts from Rodney Brooks’ current remarks on technology.

I’ve spent a lot of time over the last three days conceptualizing those four programs, interacting with both ChatGPT 5.2 and Claude 4.5. Those chats, in turn, rest on work that I’ve done with both chatbots over the last three years. Moreover, I have uploaded a fair number of documents to those chatbots, both articles from the formal literature and informal working papers, going back five decades.

Note that AGI is not mentioned anywhere nor did I ask ChatGPT to make dated predictions. Predicting where the earth will be in the solar system in 30 years, that’s easy. We’ve known how to do that since Newton. Predicting the weather 30 years out, very difficult to impossible. But maybe we can come up with rough estimates of average temperature for the year, and precipitation. Predicting the 30-year evolution of a complex socio-cultural-technical system? Not on your life.

I’ve edited Claude’s text in some minor ways and added some links at the end of each section.

AI for the Next 30 Years

Large Language Models represent something fundamentally new in computing: systems that have learned vast amounts about the world but encode that knowledge implicitly, in billions of inscrutable parameters. We can use these systems—often impressively—but we don't truly understand what they know or how they organize that knowledge. It's as if we've discovered a vast wilderness: we can explore parts of it, but we lack comprehensive maps.

Over the past few years, I've been thinking about what it would take to map this territory systematically and transform it from mysterious wilderness into reliable infrastructure. This thinking has crystallized into four parallel research programs, each essential, each reinforcing the others. Unlike the prevailing Silicon Valley vision of one lab developing a superintelligent system that does everything, this is a distributed, collaborative, multi-decade effort requiring both technical innovation and institutional creativity.

Activity 1: Ontology Extraction

The challenge: LLMs generate texts that distinguish between dogs and cats, animate and inanimate, concrete and abstract—but this knowledge exists only implicitly in weight matrices. We need to extract this latent ontological structure and make it explicit and inspectable.

Recent work by Christopher Manning and colleagues at Stanford has shown that neural networks encode rich linguistic structure—syntax trees, for instance—that can be extracted through systematic probing. I'm proposing we extend these methods from linguistic structure to ontological structure: the categories, hierarchies, and affordances that organize conceptual knowledge.

The key insight is that ontology is implicit in syntax. Verbs select for certain kinds of subjects and objects based on categorical presuppositions. "Eat" requires an animate agent and edible patient. These selectional restrictions reveal the categorical structure underneath. By systematically probing syntactic behavior, clustering words by shared patterns, and validating through transformation tests, we can extract the ontologies LLMs have learned.
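
The probe-and-cluster idea can be sketched in miniature. Everything in this toy is invented for illustration: the verbs, the words, and the acceptability judgments are hand-coded stand-ins for what would actually come from systematically probing a model.

```python
# Toy sketch: words that share selectional patterns (which verb slots
# accept them) cluster into ontological categories. The judgments
# below are hand-coded stand-ins for probe results from an LLM.

# 1 = the verb's subject/object slot plausibly accepts the word
PATTERNS = {
    #         eat-subj  eat-obj  pour-obj  repair-obj
    "dog":    (1,       0,       0,        0),
    "child":  (1,       1,       0,        0),
    "apple":  (0,       1,       0,        0),
    "water":  (0,       0,       1,        0),
    "milk":   (0,       1,       1,        0),
    "engine": (0,       0,       0,        1),
}

def similarity(a, b):
    """Fraction of verb slots on which two words agree."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

def cluster(patterns, threshold=0.75):
    """Greedy single-link grouping at a fixed similarity threshold."""
    clusters = []
    for word, vec in patterns.items():
        for c in clusters:
            if any(similarity(vec, patterns[w]) >= threshold for w in c):
                c.append(word)
                break
        else:
            clusters.append([word])
    return clusters

print(cluster(PATTERNS))
# groups the animate/edible words together, water and engine apart
```

A real pipeline would of course use far richer probes and more careful clustering; single-link grouping like this can chain dissimilar items together, which is exactly the kind of incoherence the validation step would need to catch.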

This work must be distributed across many research groups, each focusing on specific domains—medical ontologies, legal ontologies, physical systems ontologies, and so forth. No single lab has the expertise or resources to map the entire territory. We need shared infrastructure (probing tools, ontology repositories, validation benchmarks) and coordinated standards, but the actual extraction work happens in specialized communities with deep domain knowledge.

The payoff: explicit ontological structure that can be verified, debugged, systematically improved, and integrated with symbolic reasoning systems. We transform opaque neural networks into hybrid systems that combine learning with legible structure.

Some background:

Christopher Manning et al. Emergent linguistic structure in artificial neural networks trained by self-supervision, PNAS 2020, https://www.pnas.org/doi/full/10.1073/pnas.1907367117

William Benzon, ChatGPT: Exploring the Digital Wilderness, Findings and Prospects, https://www.academia.edu/127386640/ChatGPT_Exploring_the_Digital_Wilderness_Findings_and_Prospects (see especially pp. 28-38, 42-44)

Activity 2: Cognitive Models and Multimodal Grounding

The challenge: Extracting ontologies from language gives us how language talks about the world, not how minds represent the world for perception and action. A robot needs more than linguistic categories—it needs grounded representations that integrate vision, touch, motor control, and yes, language, into a unified cognitive model. This distinction is standard in the cognitive sciences, including “classical” symbolic AI. I picked it up in the work I did with David Hays in the 1970s on cognitive networks for natural language semantics. We conceived of language mechanisms as operating on a separate cognitive model—language is an interface to the model, not the container of it. For embodied AI and robotics, this becomes crucial.

Consider a cup. The linguistic ontology tells us: cup is-a container, is-a artifact, can-hold liquids. The cognitive model adds: cylindrical shape with hollow interior, graspable via handle, stable on flat surfaces, rigid, will break if dropped, liquid spills if tilted beyond 45 degrees. This is sensorimotor knowledge grounded in perception and action, not purely linguistic.
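
The two layers of the cup example can be sketched as a data structure. The field names and values here are mine, purely illustrative of the distinction, not anyone's proposed representation:

```python
from dataclasses import dataclass

# One concept, two layers: linguistic-ontology facts and the
# sensorimotor knowledge a cognitive model adds. Illustrative only.

@dataclass
class LinguisticEntry:
    isa: list        # categorical relations from the linguistic ontology
    can_hold: list

@dataclass
class CognitiveModelEntry:
    shape: str       # grounded, sensorimotor properties
    graspable_via: str
    rigid: bool
    breaks_if_dropped: bool
    spill_tilt_degrees: float

@dataclass
class Concept:
    name: str
    linguistic: LinguisticEntry
    cognitive: CognitiveModelEntry

cup = Concept(
    name="cup",
    linguistic=LinguisticEntry(isa=["container", "artifact"],
                               can_hold=["liquids"]),
    cognitive=CognitiveModelEntry(shape="hollow cylinder",
                                  graspable_via="handle",
                                  rigid=True,
                                  breaks_if_dropped=True,
                                  spill_tilt_degrees=45.0),
)

# Language is an interface to the model: a question like "can it hold
# water?" reads the linguistic layer; a motor plan reads the cognitive one.
print(cup.linguistic.isa, cup.cognitive.spill_tilt_degrees)
```

The point of putting both layers in one record is that they index the same concept; the failure mode of current systems, on this view, is that only the first layer exists.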

Current multimodal systems (like GPT-4V or Gemini) take vision and "linguistify" it—everything gets processed through language. What we need are systems where multiple modalities read and write to a common cognitive model. Vision contributes spatial structure, language contributes categorical relationships, action contributes causal understanding, and they all integrate.

This research connects directly to robotics. A robot exploring a new kitchen needs to build spatial maps, identify affordances, understand causal relationships (that knob controls that burner), and eventually respond to linguistic commands—all drawing on the same underlying world model. The cognitive model is where the "adhesion" component of meaning lives: the grounding in physical reality that pure language systems lack.

Some background: Gary Marcus, Generative AI’s crippling and widespread failure to induce robust models of the world, Marcus on AI, June 28, 2025, https://garymarcus.substack.com/p/generative-ais-crippling-and-widespread

Activity 3: Associative Drift and Discovery

The challenge: Current AI systems are reactive, not curious. They solve problems you give them but don't discover problems worth solving. They lack what I'm calling associative drift—the capacity for open-ended, low-bandwidth exploration that enables serendipitous discovery.

Think about how intellectual discovery actually works. When I searched "Xanadu" on the web years ago, I had no hypothesis—just idle curiosity. When I got 2 million hits, I had a hunch that seemed interesting (though I couldn't articulate why). The opportunity cost of investigating was low, so I poked around. Eventually I discovered distinct cultural lineages (sybaritic via Citizen Kane, cybernetic via Ted Nelson's hypertext project) that revealed something about how cultural memes evolve.

This is fundamentally different from task-directed reasoning. I wasn't trying to solve a predefined problem. I was in a low-bandwidth exploratory mode, sensitive to interesting patterns, following hunches without clear goals. Current LLMs operate only in high-bandwidth mode: given a prompt, they generate detailed responses. They can't "skim" or "wonder" or "notice something odd" without generating full text.

We need architectures that support dual-mode processing: high-bandwidth for focused problem-solving, low-bandwidth for pattern detection during exploration. This requires technical innovations (sparse attention patterns, adaptive computation, salience detection) and new ways of thinking about AI objectives. How do we train systems to explore productively without specific goals?
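
The dual-mode loop can be caricatured in a few lines. Everything here is a toy assumption: the salience heuristic, the threshold, and the "investigation" step are invented stand-ins; in a real system salience might come from something like ontological distance between co-occurring concepts.

```python
# Toy sketch of dual-mode processing: a cheap low-bandwidth scan scores
# each incoming item for salience, and only salient items trigger
# expensive high-bandwidth processing. All values are invented.

def low_bandwidth_salience(item):
    """Cheap heuristic scan: how unusual is this pairing of tags?"""
    common_pairs = {("kitchen", "cup"), ("kitchen", "knife"), ("desk", "pen")}
    return 0.1 if tuple(item["tags"]) in common_pairs else 0.9

def high_bandwidth_process(item):
    """Stand-in for focused, expensive reasoning about a salient item."""
    return f"investigate: {item['tags'][0]} + {item['tags'][1]}"

def drift(stream, threshold=0.5):
    findings = []
    for item in stream:
        if low_bandwidth_salience(item) >= threshold:  # mode switch
            findings.append(high_bandwidth_process(item))
    return findings

stream = [
    {"tags": ["kitchen", "cup"]},
    {"tags": ["kitchen", "octopus"]},  # ontologically distant pairing
    {"tags": ["desk", "pen"]},
]
print(drift(stream))
```

The hard research problem is hidden inside `low_bandwidth_salience`: a hand-coded lookup table is trivial, but learning a cheap, general "this is interesting" signal without a predefined goal is exactly what current architectures lack.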

For robotics, this is essential. A robot with associative drift doesn't just execute commands—it develops intuitions about its environment through undirected exploration, notices regularities, forms hunches about what matters. It becomes genuinely curious rather than merely reactive.

The interesting twist: associative drift needs the other programs. Ontologies provide the structured space that makes certain patterns "interesting" (ontologically distant concepts appearing together). Cognitive models enable embodied drift (noticing patterns through physical interaction). And drift enables discovery in the other programs (finding ontological incoherences, noticing when modalities misalign).

Gringos Tacos

Rodney Brooks on the state of AI and Robotics

As you may know, Rodney Brooks has been keeping an annual scorecard for various categories of high-tech activity. He puts it online on the first of the year. I’ve listed some excerpts from the 2026 scorecard below. The scorecard has much much more that I haven’t excerpted.

The Falcon 9

Eight years ago, Falcon 9 had been launched 46 times, all successful, over the previous eight years, and it had recently had a long run of successful landings of the booster whenever attempted. At that time five launches had been on a previously used booster, but there had been no attempts to launch Falcon Heavy with its three boosters strapped together.

Now we are eight years on from those first eight years of Falcon 9 launches. The scale and success rate of the launches has made each individual launch an unremarkable event, with humans being launched a handful of times per year. Now the Falcon 9 score card stands at 582 launches with only one failed booster, and there have been 11 launches of the three booster Falcon Heavy, all successful. That is a sustained growth rate of 38% year over year for eight years. And that is a very high sustained deployment growth rate for any complex technology.

There is no other modern rocket with such a volume of launches that comes even close to the Falcon 9 record. And I certainly did not foresee this volume of launches. About half the launches have had SpaceX itself as the customer, starting in February 2018, launching an enormous satellite constellation (about two thirds of all satellites ever orbited) to support Starlink bringing internet to everywhere on the surface of Earth.

[Not AI or robotics, I know. But it interests me.] 

Humanoid Robots

My blog post from September, details why the current learning based approaches to getting dexterous manipulation will not get there anytime soon. I argue that the players are (a) collecting the wrong data and (b) trying to learn the wrong thing. I also give an argument (c) for why learning might not be the right approach. My argument for (c) may not hold up, but I am confident that I am right on both (a) and (b), at least for the next ten years.

I also outline in that blog post why the current (and indeed pretty much the only, for the last forty years) method of building bipeds and controlling them will remain unsafe for humans to be nearby. I pointed out that the danger is roughly cubically proportional to the weight of the robot. Many humanoid robot manufacturers are introducing lightweight robots, so I think they have come to the same conclusion. But the side effect is that the robots can not carry much payload, and certainly can’t provide physical support to elderly humans, which is a thing that human carers do constantly — these small robots are just not strong enough. And elder care and in home care is one of the main arguments for having human shaped robots, adapted to the messy living environments of actual humans.

Given that careful analysis from September I do not share the hype that surrounds humanoid robotics today. Some of it is downright delusional across many different levels.

At the end:

Meanwhile here is what I said at the end of my September blog post about humanoid robots and teaching them dexterity. I am not at all negative about a great future for robots, and in the nearish term. It is just that I completely disagree with the hype arguing that building robots with humanoid form magically will make robots useful and deployable. These particular paragraphs followed where I had described there, as I do again in this blog post, how the meaning of self driving cars has drifted over time.

Following that pattern, what it means to be a humanoid robot will change over time.

Before too long (and we already start to see this) humanoid robots will get wheels for feet, at first two, and later maybe more, with nothing that any longer really resembles human legs in gross form. But they will still be called humanoid robots.

Then there will be versions which variously have one, two, and three arms. Some of those arms will have five fingered hands, but a lot will have two fingered parallel jaw grippers. Some may have suction cups. But they will still be called humanoid robots.

Then there will be versions which have a lot of sensors that are not passive cameras, and so they will have eyes that see with active light, or in non-human frequency ranges, and they may have eyes in their hands, and even eyes looking down from near their crotch to see the ground so that they can locomote better over uneven surfaces. But they will still be called humanoid robots.

There will be many, many robots with different forms for different specialized jobs that humans can do. But they will all still be called humanoid robots.

As with self driving cars, most of the early players in humanoid robots, will quietly shut up shop and disappear. Those that remain will pivot and redefine what they are doing, without renaming it, to something more achievable and with, finally, plausible business cases. The world will slowly shift, but never fast enough to need a change of name from humanoid robots. But make no mistake, the successful humanoid robots of tomorrow will be very different from those being hyped today.

Neural networks

Despite their successes with language, LLMs come with some serious problems of a purely implementation nature.

First, the amount of examples that need to be shown to a network to learn to be facile in language takes up enormous amounts of computation, so that the costs of training new versions of such networks are now measured in the billions of dollars, consuming an amount of electrical power that requires major new investments in electrical generation, and the building of massive data centers full of millions of the most expensive CPU/GPU chips available.

Second, the number of adjustable weights shown in the figure are counted in the hundreds of billions meaning they occupy over a terabyte of storage. RAM that is that big is incredibly expensive, so the models can not be used on phones or even lower cost embedded chips in edge devices, such as point of sale terminals or robots.

These two drawbacks mean there is an incredible financial incentive to invent replacements for each of (1) our humble single neuron models that are close to seventy years old, (2) the way they are organized into networks, and (3) the learning methods that are used.

That is why I predict that there will be lots of explorations of new methods to replace our current neural computing mechanisms. They have already started and next year I will summarize some of them. The economic argument for them is compelling. How long they will take to move from initial laboratory explorations to viable scalable solutions is much longer than everyone assumes. My prediction is there will be lots of interesting demonstrations but that ten years is too small a time period for a clear winner to emerge. And it will take much much longer for the current approaches to be displaced. But plenty of researchers will be hungry to do so.
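
A quick back-of-envelope check of the storage point Brooks makes above. The parameter count and precision are my illustrative assumptions, not figures from his post:

```python
# Back-of-envelope check of the weight-storage claim: hundreds of
# billions of weights do indeed occupy on the order of a terabyte.
# Parameter count and precision are illustrative assumptions.
params = 300e9          # ~hundreds of billions of weights
bytes_per_weight = 4    # 32-bit floats

total_bytes = params * bytes_per_weight
print(f"{total_bytes / 1e12:.1f} TB")   # 1.2 TB
```

At half precision the figure drops by half but remains far beyond the RAM of any phone or point-of-sale terminal, which is Brooks's point.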

LLMs

So we all know we need guard rails around LLMs to make them useful, and that is where there will be a lot of action over the next ten years. They can not be simply released into the wild as they come straight from training.

This is where the real action is now. More training doesn’t make things better necessarily. Boxing things in does.

Already we see companies trying to add explainability to what LLMs say. Google’s Gemini now gives real citations with links, so that human users can oversee what they are being fed. Likewise, many companies are trying to box in what their LLMs can say and do. Those that can control their LLMs will be able to deliver useable product.

A great example of this is the rapid evolution of coding assistants over the last year or so. These are specialized LLMs that do not give the same sort of grief to coders that I experienced when I first tried to use generic ChatGPT to help me. Peter Norvig, former chief scientist of Google, has recently produced a great report on his explorations of the new offerings. Real progress has been made in this high impact, but narrow use field.

New companies will become specialists in providing this sort of boxing in and control of LLMs.

A note on embodiment

But since 1991 I have made a distinction between two concepts where a machine, or creature can be either, neither, or both situated and embodied. Here are the exact definitions that I wrote for these back then:

[Situatedness] The robots are situated in the world—they do not deal with abstract descriptions, but with the here and now of the world directly influencing the behavior of the system.

[Embodiment] The robots have bodies and experience the world directly—their actions are part of a dynamic with the world and have immediate feedback on their own sensations.

At first glance they might seem very similar. And they are, but they are also importantly different. And, spoiler alert, I think much of the work at companies, large and small, right now, is trying to abstract out the embodiment of a robot, turning it into a machine that is merely situated.

Later:

Being both situated and embodied is still a challenge to robots in the world. [[Now here is the most important sentence of this whole blog post.]] I think the training regimes that [are] being used for both locomotion and dexterity are either ignoring or trying to zero out the embodiment of physical robots, their inertias and forces, reducing them to merely being situated, just apps with legs and arms, characters in video games, not the reality of real physical beings that the tasks we want them to do requires.

Puttering around as a behavioral mode

Over the weekend I got to thinking about the activity of puttering around. I wondered whether or not there has been any formal study of it. So I put the following prompt to ChatGPT 5.2:

I’ve talked a lot about the DMN (default mode network) and mind wandering in connection with creativity. It seems to me that puttering around is in the same ballpark. I’m not thinking and pondering, but I’m doing stuff, rather casually and in no forced order. Just as things come. It seems I like to putter early in the morning, on Friday afternoon, and over the weekend. Thus I just spent two days puttering around.

Since I’ve not had a “day job” in years, the distinction between weekdays and weekends is not forced on me. Yet it remains with me. It structured my time for most of the year in primary and secondary school. A bit less so in undergraduate and graduate school, but still operative. It also functioned when I had a faculty position at RPI (1978-1985) and at those times later on when I had one of those day jobs. So, it’s become deeply internalized though no longer externally imposed.

Its response took the form of numbered headings, subheadings, and bulleted lists. I find that sometimes useful, sometimes annoying. Since I wanted to write it up in a blog post I asked ChatGPT to express that material in the form of expository prose. I’ve put its response below the asterisks.

* * * * *

I’ve been thinking about puttering—not thinking hard, not resting, but doing things casually and without a forced order. It feels important, especially in connection with creativity, but I don’t know of any literature that treats it directly. What follows is an attempt to make sense of that intuition.

Cognitive science has devoted enormous attention to the default mode network (DMN)—a set of brain regions that become active when we are not engaged in externally directed tasks. The DMN is associated with mind wandering, autobiographical memory, imaginative simulation, and creative recombination. It is usually contrasted with task-positive networks that dominate during focused, goal-directed work.

But this framing leaves out a familiar and, I suspect, crucial mode of everyday cognition: puttering around.

Puttering is not rest, and it is not deliberate thinking. It is also not task-oriented in any strong sense. When I am puttering, I am doing things—moving objects, skimming texts, opening files, following small curiosities—but without urgency, hierarchy, or a clear endpoint. Attention shifts as affordances present themselves. One thing leads to another, but nothing insists.

This matters because puttering is not simply mind wandering in the absence of action. It is mind wandering coupled to low-stakes activity. That coupling provides just enough structure to keep cognition anchored in the world, while leaving associative processes free to roam. In that sense, puttering is an enacted version of default-mode cognition.

Most DMN research is organized around a sharp contrast: rest versus task. Subjects lie still in scanners or perform narrowly defined activities. What rarely gets studied is the large middle ground—self-paced, low-demand activity where executive control is attenuated but not absent. Yet it is precisely in this middle ground that many creative insights seem to arise.

One way to think about puttering is as a form of foraging. Cognitive scientists have developed mathematical models of information foraging and semantic search, showing how minds move through memory spaces under conditions of weak constraint. But these models are typically abstract and disembodied. Puttering, by contrast, is foraging with hands, eyes, and objects. It is exploratory behavior distributed across mind and environment.
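The foraging models mentioned above are often formalized as weakly constrained random walks over an association network. Here is a minimal toy sketch of that idea in Python; the `semantic_walk` function and the little association graph are my own hypothetical illustration, not taken from any actual study:

```python
import random

def semantic_walk(graph, start, steps, rng=random):
    """Weakly constrained random walk over an association graph.

    A toy model of memory foraging: from each concept, hop to a
    uniformly chosen associate. Nothing forces the walk toward a
    goal, which is the "weak constraint" the text describes.
    """
    path = [start]
    node = start
    for _ in range(steps):
        node = rng.choice(graph[node])  # follow any available association
        path.append(node)
    return path

# Hypothetical association network of things one might putter among.
graph = {
    "desk": ["papers", "coffee"],
    "papers": ["desk", "article"],
    "coffee": ["desk", "kitchen"],
    "article": ["papers", "idea"],
    "kitchen": ["coffee"],
    "idea": ["article"],
}
path = semantic_walk(graph, "desk", steps=8)
```

Run repeatedly, the walk sometimes drifts from "desk" out to "idea" and back, which is the abstract, disembodied version of one thing leading to another.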

Embodied and enactive theories of cognition come closer to capturing this phenomenon. They emphasize that cognition is not confined to the head but unfolds through interaction with the world. Still, even here, the focus is often on skilled action or purposeful engagement. Puttering remains oddly invisible: too mundane, too informal, too unproductive by conventional standards.

Creativity research acknowledges something similar under the heading of incubation. We are told that stepping away from a problem can allow unconscious processes to reorganize it. But incubation is usually framed as a gap between periods of work, not as a mode of activity in its own right. Puttering is active incubation—ongoing, embodied, and temporally permissive.

Time matters here. Even though I no longer have a “day job,” I notice that puttering reliably emerges at certain times: early in the morning, Friday afternoons, weekends. These temporal zones were once imposed externally—by school schedules, academic calendars, workweeks—but they have become internalized. They now function as cognitive affordances. Certain kinds of thinking feel licensed in those periods, while others feel out of place.

That licensing effect is important. Puttering lowers internal monitoring and performance pressure. It delays premature optimization. It allows partially formed ideas to coexist without being forced into coherence too soon. In more technical terms, it increases associative bandwidth while keeping executive control at a minimum viable level. For someone with a large and densely connected memory base, that is a powerful configuration.

Modern work culture tends to suppress puttering. It valorizes focus, efficiency, and measurable output, leaving little room for loosely structured activity that may not immediately “pay off.” Ironically, many contemporary AI systems explicitly reintroduce formal analogues of puttering: stochastic sampling, random walks, background exploration, temperature-controlled search. We design machines to do what we often deny ourselves permission to do.
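Temperature-controlled sampling, one of the formal analogues just mentioned, can be sketched in a few lines of Python. Everything here is a hypothetical illustration (the option names and scores are invented): a softmax over scored options, where high temperature flattens the distribution toward exploratory, puttering-like choices and low temperature collapses it onto the single top-scoring task:

```python
import math
import random

def sample_with_temperature(scores, temperature=1.0, rng=random):
    """Pick one option from `scores` (a dict: option -> raw score).

    High temperature flattens the softmax (exploratory, putter-like
    choices); low temperature sharpens it toward the top option
    (focused, task-driven choice).
    """
    options = list(scores)
    m = max(scores.values())  # subtract max for numerical stability
    weights = [math.exp((scores[o] - m) / temperature) for o in options]
    return rng.choices(options, weights=weights, k=1)[0]

# A small, invented "attentional landscape."
scores = {"tidy desk": 2.0, "skim article": 1.5, "reorganize files": 1.0}
exploratory = [sample_with_temperature(scores, temperature=5.0) for _ in range(10)]
focused = [sample_with_temperature(scores, temperature=0.05) for _ in range(10)]
```

At temperature 5.0 the three options are drawn almost uniformly; at 0.05 the draws concentrate overwhelmingly on "tidy desk."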

There is, as far as I know, no direct scientific literature on puttering around. Instead, it falls through the cracks between DMN research, embodied cognition, creativity studies, and ecological psychology. Yet that absence is revealing. Puttering occupies a cognitive regime that our current conceptual frameworks do not handle well: neither rest nor work, neither task nor distraction, but something quietly generative in between.

If that is right, then puttering is not a failure of discipline or a lapse in productivity. It is a mode of cognition in its own right—one that deserves recognition, protection, and perhaps eventually, study. 

* * * * *

Note, however, that I did not explicitly ask it to write in my voice. As you can see, though, that's what it did. Oh well.

Sunday, January 11, 2026

Saturday, January 10, 2026

What a Wonderful World (Louis Armstrong) | Playing For Change

Playing For Change is proud to present this video of the song "What A Wonderful World" featuring Grandpa Elliott with children's choirs across the globe. In these hard times, children and music bring us hope for a better future. Today we celebrate life and change the world one heart and one song at a time!!