Tuesday, February 3, 2026

Agentic coding is neurosymbolic AI

Mmmm.....Good! Fries, a quarter-pounder, and orange Fanta

Adam Neely: On Suno, AI Music, and the Bad Future

0:00 Intro
4:06 Challenge accepted
6:55 Three Questions
24:14 Why no influences? (deskilling/narcissism)
35:50 Profiles of the Future
47:54 Good uses of Suno
59:05 Futurism/Techno-Optimism
1:16:22 New Virtues
1:22:03 Final Predictions

Neely conducted an informal survey of his followers. Here he's discussing some of the results (c. 16:43):

Now, zooming back a little bit and taking a look at the answers to this 1st question, we see that nobody answered anything musical, really. All the answers were about saving time, saving money, and replacing friends. In other words, Suno lets you make the same music faster, cheaper, and lonelier. I'm not sure if that's a good thing.

That's pure Homo economicus, to invoke a term I'm using in the book I'm developing, Play: How to Stay Human in the AI Revolution. Neely continues:

The 2nd question I asked was, “do you feel like you have a unique voice with your music when you create songs with Suno?” Some people said yes, but the majority of respondents felt that the music that they made was not particularly unique to them. One possible explanation for this is that commercial generative AI can't really create anything new. It's just remixing old recordings. And so you can't have a unique voice with something that's just a remix of an old recording. Suno has admitted to having been trained on essentially all music files on the internet, what a lawsuit has called “copyright infringement on an almost unimaginable scale.”

A bit later (c. 20:21):

Now, the 3rd question I asked was, “Who are some of your favorite AI musicians who have influenced you?” “What about them inspires you?” Okay, so even though I kind of knew what the answers to this question would be, it still was really bleak reading them, because the vast majority of people, as it turns out, do not have influences.

RESPONSES: I don't have any AI influences. I'm afraid I don't have any. At the moment, nothing. I don't listen to anybody else. I don't know any. I do not listen to AI slop. No influence. I don't know of any. Haven't heard any. No one. I don't know any AI relevant artist. Not applicable. I have no idea to be honest. At the moment, nothing. I don't have an AI music influence. None. I do not religiously follow anyone. I don't have any AI music influences.

ADAM: Why can't people who use Suno cite their influences? It's strange, right? Because if you ask the same question to any musician, writer, or artist who didn't use gen AI, they would be able to go off forever… on their influences! I think about, you know, the bass players that inspired me, Jaco Pastorius, Victor Wooten, modern bass players like Tim Lefebvre - huge influence. I love Evan Marien. I don't know of anybody of any skill level who can't do that, who can't just be like, mm, mm, mm, “these guys are awesome!”

Neely goes on to say how very strange this is. The musicians he knows, ALL OF THEM, have favorites and influences. This Suno music seems to be narcissistic music. These people just listen to their own music.

There's much more in the podcast.

Monday, February 2, 2026

Employers are now using AI as cover for laying employees off – "A.I. washing"

Lora Kelley, Did A.I. Take Your Job? Or Was Your Employer ‘A.I.-Washing’? NYTimes, Feb. 1, 2026.

A company might lay people off for any number of reasons: It didn’t meet financial targets. It overhired. It was rocked by tariffs, or the loss of a big client.

But lately, many companies are highlighting a new factor: artificial intelligence. Executives, saying they anticipate huge changes from the technology, are making cuts now.

A.I. was cited in the announcements of more than 50,000 layoffs in 2025, according to Challenger, Gray & Christmas, a research firm.

But:

Investors may applaud such pre-emptive moves. But some skeptics (including media outlets) suggest that corporations are disingenuously blaming A.I. for layoffs, or “A.I.-washing.” As the market research firm Forrester put it in a January report: “Many companies announcing A.I.-related layoffs do not have mature, vetted A.I. applications ready to fill those roles, highlighting a trend of ‘A.I.-washing’ — attributing financially motivated cuts to future A.I. implementation.”

The term echoes popular descriptors of misleading marketing practices, such as greenwashing and ethics washing. It started bubbling up a few years ago, mostly to call out companies that claimed to be using A.I. when they weren’t. But lately, it has been used more broadly to gesture at companies emphasizing A.I. to explain things like layoffs when the picture may be more complex.

Why am I not surprised? There's more at the link.

Approaching Mickey D's, and a peek inside

Zohran Mamdani visits Louis Armstrong

From a post by Ethan Iverson:

Yesterday the new mayor of New York City dropped by the Louis Armstrong House Museum. This is essentially politics as usual—AOC and others have made the same rounds—but since I really dig the Armstrong house, I felt notably cheered by this photo taken by Hyland Harris!

Of course I dig Armstrong too. I've practiced his music since my early teens. Be sure to read the rest of Iverson's post (at the link).

He who controls the Foundation Models controls (our access to) reality

Sunday, February 1, 2026

Skylar Tang, rising star [fierce]

Moving Northward along the water [Hoboken]

Séb Krier on Moltbook, agents chatting with agents

Saturday, January 31, 2026

Breakfast, omelette based

Game theory, MAGA, and abortion: The Trumperor’s varied partisans

Let us assume that the story of the emperor who has no clothes is about the conversion of shared knowledge into common knowledge. Everyone – the emperor himself, his courtiers, the population at large, and of course those rogue weavers – can see that he is naked. That knowledge is shared among them. For the rogue weavers, however, that knowledge is common; each knows that the others know, and that the others know that he knows, ad infinitum.
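For those who want the formal version, here's the standard gloss from epistemic logic (an aside of mine, not part of the tale): shared knowledge is one level of “everyone knows,” while common knowledge iterates it without end.

```latex
% Lewis/Aumann formulation. K_i p: agent i knows p; E p: everyone knows p
% (shared knowledge, one level); C p: p is common knowledge.
\[
  E\,p \;=\; \bigwedge_{i=1}^{n} K_i\, p,
  \qquad
  C\,p \;=\; \bigwedge_{m \ge 1} E^{m} p
  \;=\; E\,p \wedge E(E\,p) \wedge E(E(E\,p)) \wedge \cdots
\]
% Equivalently, C p is the greatest fixed point of  C p = E(p \wedge C p).
```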

Once the boy blurts out, “He’s naked!!!,” though, what was only shared knowledge has now become common among the entire populace. Everyone knows that everyone else knows, etc. Let us suppose, however, that some people would just as soon not know that the emperor is naked, while others don’t mind knowing among themselves (e.g. those rogue weavers) but would just as soon that no one outside their group knew. What then?

Those rogue weavers might make a move to mute the boy’s message, perhaps by disparaging him and his parents, perhaps even by imprisoning him. The courtiers, whose livelihoods depended directly on the emperor’s largesse and who’d just as soon believe him to be clothed, would see the weavers making their move and would join them in their efforts. Surely there were others in the population who also benefited from the emperor’s power and wealth. They too would join in the effort to silence the boy.

Now the population is split in two: the party of the emperor and his supporters, and the other party, at best indifferent to the revelation, many quite gleeful, since the emperor and his cronies had done them no good at some time in the past. This other party sees the boy as a hero and attempts to protect him.

At this point the parable stops being a children’s story and becomes a model of political coalition formation under epistemic stress. For it’s not this imaginary emperor that interests me. It’s a very real Donald J. Trump.

I’m casting him in the role of the naked emperor. He is widely perceived as corrupt, even among many of his allies. Thus the Big Boys of Silicon Valley are attracted to him because they want to benefit from his political power. They see that he’s also cruel and sexually profligate (or at least he was in the not so distant past, who knows these days). They may not like these characteristics in him, but they can tolerate them as long as he supports them in their business ventures. However, they cannot themselves afford to be seen as corrupt, at least not all that corrupt, just business as usual. Fortunately he has plenty of support from other businessmen. And it would help if Trump had significant support from those who favor him on other grounds.

For that we have the MAGA faithful, which, as far as I can tell, is a motley crew. Many of them may not like immigrants, but as long as he promises to kick them out, they couldn’t care less about his licentiousness. As for his cruelty, as long as he directs it at those immigrants, it’s fine by them.

But we also have the Christian right, many of whom abhor that licentiousness, but who also abhor abortion. As long as abortion is confined to secretive doctors who hide their services and to doubtful practitioners who slink around back alleys in the dark, these people may not say much. But once public abortion clinics become available, that cannot be allowed. If Trump will appoint Supreme Court justices who are with them on this, however, they’ll support the Trumperor. He did and they do.

Finally among those who abhor abortion we have those who insist that a woman who has been raped must carry the fetus to term. Why? To allow the abortion is to admit that the rape took place, that men can be, are all too often, cruel. Or perhaps they want to blame the woman herself for the rape, “She was askin’ for it.” Well, as long as DJT gets rid of abortion, he can do whatever he wishes with women, but just don’t tell us about it. These are not necessarily explicit beliefs held by all individuals, but they are functional outcomes of the system.

* * * * *

ChatGPT created the illustration above based on the following photograph, taken at a Day of the Dead celebration in Jersey City in November of 2015:

Moltbook – A community for AIs

Astral Codex Ten, Best of Moltbook, Jan 30, 2026. 

Friday, January 30, 2026

Teaching AIs how to draw semantic network diagrams, and other things

In June of last year I decided to ask ChatGPT to draw a semantic network diagram for Shakespeare's Sonnet 129. Why did I choose that task? Because it is something that humans can do, but it is not rocket science; it doesn't require genius-level capability. I wanted to put a bound on all the hype about LLMs already being AGIs (whatever they are), or close to it. I chose ChatGPT because the task requires the ability to draw, and ChatGPT can draw.

I wrote up the experiment in this working paper: ChatGPT tries to create a semantic network model for Shakespeare's Sonnet 129 (June 16, 2025). Here's the abstract:

This document explores the capacity of large language models, specifically ChatGPT, to construct semantic network models of complex literary texts, using Shakespeare's Sonnet 129 as a case study. Drawing on the author's prior work in cognitive modeling, the analysis reveals that ChatGPT, while capable of producing linguistically coherent commentary, fails to generate a structurally plausible semantic network for the sonnet. The failure is traced not to a lack of exposure to relevant literature, but to the model's lack of embodied, interactive learning. The process of constructing cognitive network diagrams is shown to be iterative, visual-verbal, and skill-based, comparable to learning a physical craft like playing an instrument or woodworking. It requires extended practice under expert feedback, enabling a form of reasoning that is neither algorithmic nor easily reducible to textual description. The essay argues that this hybrid modeling skill represents a "deep" human capability that is nevertheless teachable and routine. It concludes with reflections on the nature of such skills and their implications for AI, pedagogy, and literary interpretation.

About a week ago I had a long dialog with ChatGPT, first about how humans learn this task and, second, about what it would take to teach AIs to learn it. From there we went on to the more difficult task of probing the structure latent in the weights of an LLM and thereby extracting/constructing a map of that structure. Here's ChatGPT's summary of that discussion.

Summary: Learning Semantic Nets, Mapping Latent Structure in LLMs, and the Hybrid Future of Intelligence

We began by returning to a practical question that had surfaced in earlier work: why it is difficult for an AI system to produce good semantic or cognitive network diagrams for literary texts (such as a Shakespeare sonnet), even though a capable human can learn to do it. The core issue is not that such diagramming requires genius-level insight. Rather, it is a form of skilled representational practice—learnable by apprenticeship, but challenging for current AI systems because it requires disciplined structure-building, normative constraint satisfaction, and iterative repair.

1) Apprenticeship and “inside knowledge” of diagramming

A central theme was the idea that semantic network diagrams cannot be properly understood from the outside. A person learns to read them only by learning to produce them. This applies even more strongly to complex cognitive networks, where a diagram can appear meaningless unless one has internalized the diagram grammar: node types, arc types, compositional constraints, and “house style” norms about what counts as an admissible analysis.
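To make this concrete, here is a hypothetical fragment of such a grammar in code. The node and arc types are invented for illustration; they are not Hays' actual inventory, only a sketch of how a grammar fixes in advance what counts as admissible.

```python
# Hypothetical diagram-grammar fragment: the grammar fixes which node
# and arc types exist and which pairings of endpoint types an arc may
# connect. All type names here are invented for illustration.
NODE_TYPES = {"entity", "event", "property"}

ARC_TYPES = {
    # arc label: (admissible source type, admissible target type)
    "agent":     ("event", "entity"),
    "attribute": ("entity", "property"),
    "cause":     ("event", "event"),
}

def admissible(arc_label, source_type, target_type):
    """An arc is admissible only if its endpoint types match the grammar."""
    return ARC_TYPES.get(arc_label) == (source_type, target_type)

assert admissible("agent", "event", "entity")       # well-formed
assert not admissible("agent", "entity", "event")   # violates the grammar
```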

You offered an idealized description of your training with David Hays. The training regime resembled a repeated cycle of supervised practice:

  1. you brought a fragment of text and a diagram you had produced,
  2. Hays evaluated it (approve/disapprove) with commentary,
  3. you revised or moved forward accordingly,
  4. the cycle repeated,
  5. and over time the normative discipline of diagramming became internalized.

You also noted that this same pattern governed group work among peers who had learned the system: a collaborative problem was brought to the table, and discussion plus sketching continued until a coherent solution emerged. The key was not merely producing diagrams, but learning the discipline that makes diagrams meaningful and correct.

From this, you proposed an account of what is being learned: a repertoire of correspondences between verbal fragments and diagram fragments. Under that view, diagramming competence is partly the acquisition of a “library of moves,” where particular linguistic patterns or conceptual pressures cue specific diagram operations. Equally important, however, is a critic’s sense of global coherence—a normative capacity to judge whether a graph “hangs together” as a model of the text and to identify what must be repaired.

You emphasized that at any time there is a locally stable diagram grammar, even if it cannot be complete in principle. In your own case, you began with Hays’ textbook Mechanisms of Language and learned to produce diagrams specified in particular chapters (cognition, perception). After three months of concentrated training you had internalized the system well enough not merely to use it, but to extend it: you proposed a new arc type, specified its assignment conditions, and demonstrated its usefulness. This was identified as an important marker of mastery: moving from conforming to norms to making responsible innovations within the normative system.

2) Why this is “easy” for humans but hard for AI

The conversation then turned to the striking asymmetry: semantic network diagramming is learnable by humans with patience and guidance, but remains difficult for AI systems. The difficulty is not lack of general linguistic ability; it is that diagramming requires explicit normative structure and repair behavior. Humans develop an internal sense of error: what is missing, what violates the grammar, what is incoherent globally. Current models often produce plausible fragments but struggle to maintain consistent typing, global integrity, and systematic revision under critique.

This diagnosis led to an important idea: it would be possible for AI to learn semantic network construction through an analogous apprenticeship regime—especially if the AI were multimodal (since the target representation is graphical). Training would require expert-guided correction cycles, ideally including revision histories, so that the system learns not only what the final diagram should look like, but how to repair incorrect diagrams.
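As a sketch of what such a regime might look like in code, consider the loop below. Every name in it is hypothetical: `model_propose`, `expert_critique`, and `model_revise` stand in for a multimodal model, an expert critic, and a revision step.

```python
# Hypothetical apprenticeship loop: propose -> critique -> revise,
# retaining the full revision history so a system can learn repair
# behavior, not just final answers. All callables are placeholders.
def apprenticeship_cycle(text, model_propose, expert_critique, model_revise,
                         max_rounds=5):
    diagram = model_propose(text)
    history = [("proposal", diagram)]
    for _ in range(max_rounds):
        verdict, commentary = expert_critique(text, diagram)
        history.append(("critique", verdict, commentary))
        if verdict == "approve":
            break
        diagram = model_revise(text, diagram, commentary)
        history.append(("revision", diagram))
    return diagram, history   # the history itself is training signal
```

The point of returning `history` is the one made above: the useful training signal is the trajectory of critiques and repairs, not merely the approved final diagram.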

At the far horizon, you raised a more ambitious possibility: AIs might learn diagramming so well that they could teach other AIs, performing the Hays-function themselves. That would require not only competence in diagram production, but competence in critique, repair, curriculum sequencing, and controlled extension of the grammar.

3) From diagramming text to extracting latent structure from neural weights

This discussion provided what you described as your first hint toward a larger goal: extracting cognitive-level network structures from foundation models. You contrasted this with Gary Marcus’ suggestion of investing enormous resources into hand-coded symbolic modeling. You argued that building a gigantic semantic net by armies of humans is madness. Instead, the semantic network “lives” implicitly in the weights of neural models—diffused across parameters—and the research problem is to map it, extract it, and make it explicit.

You described your working intuition: LLMs would not be so effective if they did not embody cognitive-network-like structures at some latent level. You also noted that you had conducted behavioral experiments (using only ordinary user access) that convinced you of this: controlled perturbations lead to distributed ripple effects that preserve story coherence. These results suggest that constraint structure is present, even if not symbolically explicit.

From this perspective, “ontology extraction” becomes an empirical, stochastic mapping discipline. One does not directly read networks off the weights. Instead, one probes behavior, perturbs conditions, observes stable patterns, and assembles inferred structures under an explicit representational grammar. The diagram grammar becomes essential as a way to turn a cloud of samples into a stable map.
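A rough sketch of that probe-and-tally logic, treating the model as a black box (the `query_model` callable and the perturbation set are placeholders, not an existing tool):

```python
# Hypothetical behavioral probing: perturb one story element, sample
# continuations, and tally which tokens shift with it. Stable co-shifts
# are candidate arcs; the diagram grammar then decides which tallies
# count as admissible structure.
import collections

def probe_relations(query_model, base_story, perturbations, n_samples=20):
    baseline = [query_model(base_story) for _ in range(n_samples)]
    ripples = collections.Counter()
    for element, variant_story in perturbations.items():
        variants = [query_model(variant_story) for _ in range(n_samples)]
        for base_out, var_out in zip(baseline, variants):
            changed = set(base_out.split()) ^ set(var_out.split())
            for token in changed:
                ripples[(element, token)] += 1
    threshold = 0.7 * n_samples   # keep only effects stable across samples
    return {pair for pair, count in ripples.items() if count >= threshold}
```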

An important complication was introduced here. Hays’ symbolic framework in Mechanisms of Language covers multiple layers: syntax, morphology, pragmatics, phonetics/phonology, cognition, perception. In contrast, LLMs are trained on token strings in which many of these levels are conflated. Thus any network extracted from the weights risks being entangled across linguistic and cognitive layers. You expressed the desire for a “pure cognition” network, but acknowledged that it is not clear how to achieve purity a priori. The practical conclusion was to proceed anyway, while explicitly tracking the issue, allowing the research program to evolve in execution rather than being blocked by the impossibility of perfect factorization at the outset. You also suggested a sensible calibration strategy: hand-code sharply limited domains to provide gold standards for evaluating automatically derived networks.
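That calibration strategy is easy to state precisely: score an automatically derived arc set against a hand-coded reference for the same limited domain. A minimal sketch, assuming (for illustration) that arcs are represented as (label, source, target) tuples:

```python
# Minimal gold-standard scoring: precision/recall/F1 over arcs, where
# each arc is a (label, source, target) tuple. The representation is
# assumed for illustration.
def arc_scores(derived, gold):
    derived, gold = set(derived), set(gold)
    true_positives = len(derived & gold)
    precision = true_positives / len(derived) if derived else 0.0
    recall = true_positives / len(gold) if gold else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1
```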

4) The generational scope: the birth of a field

You then widened the frame. The task is not merely technical. It is about how minds conceptualize the world, and not one mind but the historical product of millions or billions of minds writing across centuries, with bias toward recent decades. This is not a problem solvable by a single dissertation or a single lab over a few years. It requires many labs working in loose coordination, with both collaboration and competition, over one or more intellectual generations. In this view, foundation models are not “the pinnacle,” but the floor—the starting point—for a long new intellectual adventure.

In that context we coined useful names for two failure modes in contemporary AI thought: “hand-coded scholasticism” (the belief that meaning must be explicitly authored by armies of humans) and “scaled-up millenarianism” (uncritical faith that scaling alone will magically solve everything). You described these as the Scylla and Charybdis of current discourse, and emphasized that your program aims at a third path: mapping the latent wilderness systematically, with discipline and instrumentation.

5) Production systems and Yevick’s mode-switching intelligence

Finally, we returned to architecture. If diagramming skill is a library of pattern-to-pattern correspondences plus a critic enforcing coherence, then a classical production system architecture becomes attractive. A production system naturally supports staged rule application, working memory updates, constraint checking, and repair cycles. Neural models can supply candidate relations and associations, while the production system supplies explicit normativity and structural discipline.
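To show what that normativity and structural discipline look like in miniature, here is a toy production system in the classical style; the rules are invented for illustration, borrowing the Achilles/lion likeness discussed below.

```python
# Toy production system: working memory is a set of fact tuples, each
# rule is a (condition, action) pair over that memory, and the engine
# fires the first matching rule that adds something new, repeating
# until quiescence. The rules below are invented for illustration.
def run_production_system(rules, working_memory, max_cycles=100):
    wm = set(working_memory)
    for _ in range(max_cycles):
        fired = False
        for condition, action in rules:
            if condition(wm):
                new_facts = action(wm)
                if not new_facts <= wm:   # only count genuine updates
                    wm |= new_facts
                    fired = True
                    break                 # crude conflict resolution
        if not fired:
            break                         # quiescence: nothing new fires
    return wm

# Two toy rules: posit a comparand, then draw a typed arc between nodes.
rules = [
    (lambda wm: ("node", "Achilles") in wm and ("node", "lion") not in wm,
     lambda wm: {("node", "lion")}),
    (lambda wm: {("node", "Achilles"), ("node", "lion")} <= wm,
     lambda wm: {("arc", "likened-to", "Achilles", "lion")}),
]
print(run_production_system(rules, {("node", "Achilles")}))
```

In a hybrid system, the neural model would sit on the proposing side, supplying candidate facts for working memory, while the rule base enforces the grammar and drives repair.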

This hybrid framing connects directly to Miriam Yevick’s work on holographic/Fourier logic versus sequential propositional logic. You emphasized that your current program is not merely compatible with Yevick’s ideas; it grew in part out of sustained reflection on them. You and Hays argued in 1990 that natural intelligence requires the capacity to deploy both modes, and you developed this further in speculative work on metaphor. In metaphor, the propositional system regulates the superimposition of holistic gestalts: e.g., Achilles in battle is likened to a lion in battle. The two scenes function as holographic wholes, while sequential linguistic propositions step through correspondence constraints. This provides a concrete mechanism for the hybrid intelligence thesis.

You concluded by noting the historical hinge: when you and Hays were working, the technical means for operating at scale on these ideas did not exist. Now they do. And Hays himself played a foundational role in building the early symbolic infrastructure of computational linguistics (machine translation at RAND, coining the term “computational linguistics,” founding editorship and institutional leadership in COLING). In effect, the present moment makes possible an extension of that lineage: not abandoning symbolic structure, but using symbolic grammars and production discipline to extract, organize, and refine the latent cognitive structures that neural models already embody.

Friday Fotos: Northwest Resiliency Park in Winter [Hoboken]

Thursday, January 29, 2026

How do we credit hybrid images?

Around the corner from here, over at 3 Quarks Daily, I’ve published an article I wrote in conjunction with both ChatGPT and Claude. How should that article be credited? How do we characterize the contribution of each agent, and how do we indicate that characterization? I discuss these issues at the end of the article.

The same issues can arise with visual images. All of these images were rendered by ChatGPT. But the renderings were done on a different, a different what? Basis? Substrate? Seed?

In the first two images, I uploaded one of my photographs to ChatGPT and asked it to add something to it. In the case of the first photo, I wanted to see the Millennium Falcon flying into the iris. The second photo is of a scene in Liberty State Park into which I had ChatGPT place a photo of an Indian woman in a sari eating McDonald’s French fries.

This image is a bit different. I gave ChatGPT a photo of a scene in Jersey City and asked it to turn it into a futuristic scene.

For this image I gave ChatGPT a photo of a painting I’d done as a child and asked it to render it in the style of Hokusai.

In this last case I gave ChatGPT a document that I wrote and then asked it to create an image that would be an appropriate frontispiece for it. This image is quite different from the one it originally produced. I had to do quite a bit of art direction to obtain this final image.

The question then is: Imagine that these images were on display in, say, a museum. How should they be credited? In all cases the final image was rendered by ChatGPT. But the substrate varied, as did the prompting that instructed ChatGPT in generating the image. For example, in the first four cases we could indicate “Original photograph by William Benzon.” For the last, “Original text by William Benzon” and “Art Direction by William Benzon.” Do I give myself an art direction credit on the others as well? What kind of credit should ChatGPT get? “Realization and Rendering by ChatGPT” might be sufficient for the first two. For the third and fourth, “Transformation and Rendering.” The last? Perhaps “Transmutation and Rendering.” Whatever the nature of the credits, they’re only meaningful if the audience already knows something about the process through which the images were produced.