Monday, September 30, 2024

The formal structure of “And Your Bird Can Sing”

Around the corner at Crooked Timber, Belle Waring has an interesting post, Final Choruses and Outros Apparently. Her first example is a Beatles tune from Revolver (1966), “And Your Bird Can Sing.” If you like to listen to music analytically – which I do, though not all the time, not at all! – it can throw you for a loop. You think you know what’s going on, then it goes sideways and you don’t know where you are. Just when you’re about to give up, it catches you and you are THERE.

OK, so I listened to "And Your Bird Can Sing" and took notes. I think it goes like this:

1) 4 bar instrumental (parallel guitar lines)
2) A-strain, 8 bars
3) A-strain, 8 bars
4) B-strain, 8 bars
5) 8 bar instrumental
6) B-strain, 8 bars
7) A-strain, 8 bars
8) instrumental outro, 12 bars

We start with a parallel guitar line (played by George and Paul), which is used in various ways. Up through #4 it could be a standard AABA tune, like “I Got Rhythm”. Now, if that’s what was going on, we’d go back to the A-strain.

But that’s not what happens, not at all. Instead we get those parallel guitars, and not for 4 bars, but for 8. Then we get a repetition of the B-strain. And that, in turn, is followed by (a return to) the A-strain, with an added harmony line. It ends with an extended version of the parallel guitar line.

I suppose we can think of it as a variation on the AABA tune where the B section (often called the bridge) is extended. What makes this extended bridge (sections 4, 5, and 6) particularly interesting is the inclusion of that purely instrumental line in the middle (section 5). That’s a bit disorienting. Where are we? Are we going way back to the intro, even before the beginning of the song proper? Not really. But it really isn’t until we return to the final repetition of the A-strain (with added harmony) that our equilibrium is restored: Now I know where we are.
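If it helps to see the whole thing laid out in one place, here’s the form as data – just my own bookkeeping, nothing canonical:

```python
# The form of "And Your Bird Can Sing" as I hear it: (section, bars).
form = [
    ("intro, parallel guitars", 4),
    ("A", 8),
    ("A", 8),
    ("B", 8),
    ("instrumental, parallel guitars", 8),
    ("B", 8),
    ("A, with harmony", 8),
    ("outro, parallel guitars", 12),
]

print(sum(bars for _, bars in form), "bars total")  # 64 bars

# Compare a standard AABA chorus: A-A-B-A, 32 bars, with no
# instrumental wedged into the middle of the bridge.
```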

Those parallel guitar lines are quite striking and stand in contrast to the A and B strains, which carry the lyrics. The Wikipedia entry for the song, which is interesting and worth a read, quite properly notes that it anticipates a “type of pop-rock arrangement [that] would later be popularised by Southern rock bands such as the Allman Brothers Band and Lynyrd Skynyrd, as well as hard rock and metal acts such as Thin Lizzy, Boston and Iron Maiden.”

* * * * *

Here's a recent cover version by musicians you’ve probably never heard of. Notice that one guitarist (Josh Turner) plays the parallel lines originally played by George and Paul.

* * * * *

For extra credit. Here’s a different, and I believe earlier, version by the Beatles. The structure is somewhat different. Setting aside the laughter and whistling, what are the formal differences?

FutureWorld on the Hudson?

Wolfram on Machine Learning

Wolfram has a post in which he reflects on the work he’s done in the last five years: Five Most Productive Years: What Happened and What’s Next. On ChatGPT:

So at the beginning of February 2023 I decided it’d be better for me just to write down once and for all what I knew. It took a little over a week [...]—and then I had an “explainer” (that ran altogether to 76 pages) of ChatGPT.

Partly it talked in general about how machine learning and neural nets work, and how ChatGPT in particular works. But what a lot of people wanted to know was not “how” but “why” ChatGPT works. Why was something like that possible? Well, in effect ChatGPT was showing us a new science discovery—about language. Everyone knows that there’s a certain syntactic grammar of language—like that, in English, sentences typically have the form noun-verb-noun. But what ChatGPT was showing us is that there’s also a semantic grammar—some pattern of rules for what words can be put together and make sense.

My version of “semantic grammar” is the so-called “great chain of being,” which is about conceptual ontology – roughly, “rules for what words can be put together and make sense.” Here’s a post where I discuss it in the context of Wolfram’s work: Stephen Wolfram is looking for “semantic grammar” and “semantic laws of motion” [Great Chain of Being].
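To make the idea concrete, here’s a toy sketch of my own (the lexicon and type labels are invented, and this is not Wolfram’s formalism): a two-word string can pass a syntactic grammar while failing a semantic one.

```python
# Toy illustration: syntax says noun-verb is a sentence; a "semantic
# grammar" adds rules about which concepts combine sensibly. The
# lexicon and type labels below are invented for the example.

LEXICON = {
    "dog":    ("noun", "animate"),
    "stone":  ("noun", "inanimate"),
    "sleeps": ("verb", "animate"),   # selects an animate subject
    "falls":  ("verb", "any"),       # accepts any subject
}

def syntactic_ok(subj, verb):
    """Syntactic grammar: a noun followed by a verb."""
    return LEXICON[subj][0] == "noun" and LEXICON[verb][0] == "verb"

def semantic_ok(subj, verb):
    """Semantic grammar: the subject's type must satisfy the verb."""
    need = LEXICON[verb][1]
    return need == "any" or LEXICON[subj][1] == need

for subj, verb in [("dog", "sleeps"), ("stone", "sleeps"), ("stone", "falls")]:
    print(f"'{subj} {verb}': syntax {syntactic_ok(subj, verb)}, "
          f"semantics {semantic_ok(subj, verb)}")
# 'stone sleeps' is syntactically fine but fails the semantic check.
```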

A bit later Wolfram says more about what he’s recently discovered about the “essence of machine learning”:

So just a few weeks ago, starting with ideas from the biological evolution project, and mixing in some things I tried back in 1985, I decided to embark on exploring minimal models of machine learning. I just posted the results last week. And, yes, one seems to be able to see the essence of machine learning in systems vastly simpler than neural nets. In these systems one can visualize what’s going on—and it’s basically a story of finding ways to put together lumps of irreducible computation to do the tasks we want. Like stones one might pick up off the ground to put together into a stone wall, one gets something that works, but there’s no reason for there to be any understandable structure to it.
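His minimal models are his own, but you can get a feel for the stone-wall image with a toy of my own devising: rewire a small random NAND circuit one mutation at a time until it computes a target function. What you end up with works, but nothing about it is legible.

```python
import random

# Toy illustration (mine, not one of Wolfram's actual minimal models):
# adaptively rewire a random NAND circuit, keeping any single mutation
# that doesn't hurt, until it computes 3-bit parity.

N_IN, N_GATES = 3, 12
INPUTS = [tuple((i >> k) & 1 for k in range(N_IN)) for i in range(2 ** N_IN)]
target = lambda x: sum(x) % 2  # the task: parity of the input bits

def run(wiring, x):
    vals = list(x)
    for a, b in wiring:              # each gate NANDs two earlier signals
        vals.append(1 - (vals[a] & vals[b]))
    return vals[-1]                  # the last gate is the output

def score(wiring):
    return sum(run(wiring, x) == target(x) for x in INPUTS)

random.seed(1)
wiring = [(random.randrange(N_IN + g), random.randrange(N_IN + g))
          for g in range(N_GATES)]
best = score(wiring)

for _ in range(50_000):              # usually succeeds well within this budget
    if best == len(INPUTS):
        break
    g = random.randrange(N_GATES)
    old = wiring[g]
    wiring[g] = (random.randrange(N_IN + g), random.randrange(N_IN + g))
    s = score(wiring)
    if s >= best:
        best = s
    else:
        wiring[g] = old              # revert harmful mutations

print(f"matched {best}/{len(INPUTS)} cases")
print(wiring)  # correct, but the wiring has no understandable structure
```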

And the future? Among other things: “symbolic discourse language”:

But finally there was blockchain, and with it, smart contracts. And around 2015 I started thinking about how one might represent contracts in general not in legalese but in some precise computational way. And the result was that I began to crispen my ideas about what I called “symbolic discourse language”. I thought about how this might relate to questions like a “constitution for AIs” and so on. But I never quite got around to actually starting to design the specifics of the symbolic discourse language.

But then along came LLMs, together with my theory that their success had to do with a “semantic grammar” of language. And finally now we’ve launched a serious project to build a symbolic discourse language. And, yes, it’s a difficult language design problem, deeply entangled with a whole range of foundational issues in philosophy. But as, by now at least, the world’s most experienced language designer (for better or worse), I feel a responsibility to try to do it.

In addition to language design, there’s also the question of making all the various “symbolic calculi” that describe in appropriately coarse terms the operation of the world. Calculi of motion. Calculi of life (eating, dying, etc.). Calculi of human desires. Etc. As well as calculi that are directly supported by the computation and knowledge in the Wolfram Language.

And just as LLMs can provide a kind of conversational linguistic interface to the Wolfram Language, one can expect them also to do this to our symbolic discourse language. So the pattern will be similar to what it is for Wolfram Language: the symbolic discourse language will provide a formal and (at least within its purview) correct underpinning for the LLM. It may lose the poetry of language that the LLM handles. But from the outset it’ll get its reasoning straight.

The symbolic discourse language is a broad project. But in some sense breadth is what I have specialized in. Because that’s what’s needed to build out the Wolfram Language, and that’s what’s needed in my efforts to pull together the foundations of so many fields.

Thursday, September 19, 2024

Aaron Sorkin: As a fictional president, Trump would be "simply implausible"

Marc Tracy, Aaron Sorkin Thinks Life Still Imitates ‘The West Wing’, NYTimes, Sept. 19, 2024.

We are speaking to each other the day after the only scheduled debate between the two presidential candidates this year.

If I had scripted last night’s debate, you would have said that I made Kamala Harris fight a straw man. A lot [of shows and movies are] going to be written about this time that we’re living in now. But my prediction is that you’ll never see Donald Trump as anything but an offscreen character. You’ll see him on a television set on the news. Because he is simply implausible.

There is a movie coming out about Trump, but to your point, it is set 40 years ago.

Sebastian Stan is playing Trump in the ’70s and ’80s. I mean President Trump. Even saying it doesn’t really sound right.

It has been a pretty dramatic summer politically. What have you made of it?

Over the years, cable newscasters have used the phrase “‘West Wing’ moment,” as in: “There’s a clash over the debt ceiling. There’s not going to be a ‘West Wing’ moment.” They’ve used that to mean: an unrealistically high expectation of character triumphing over selfishness, and in the real world, there are not “‘West Wing’ moments.” I believe that the morning Biden stepped out of the race, that was a “West Wing” moment. That’s the kind of thing we write stories about.

Boats lined up on a pier

Wednesday, September 18, 2024

Emergence

Monday, September 16, 2024

Sunday, September 15, 2024

LLMs are not fundamentally about language [Karpathy]

Note that some time ago I pointed out that transformers would operate in the same way on strings of colored beads as they do on strings of word tokens.
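The demonstration is almost trivial: a transformer’s forward pass sees nothing but integer token IDs. A minimal PyTorch sketch (assuming torch is installed; both vocabularies are made up):

```python
import torch
import torch.nn as nn

# A transformer consumes integer token IDs. Nothing in the forward
# pass knows whether an ID names a word or a colored bead.

words = ["the", "bird", "can", "sing"]     # made-up vocabulary #1
beads = ["red", "green", "blue", "amber"]  # made-up vocabulary #2

torch.manual_seed(0)
embed = nn.Embedding(num_embeddings=4, embedding_dim=16)
layer = nn.TransformerEncoderLayer(d_model=16, nhead=4, batch_first=True)
encoder = nn.TransformerEncoder(layer, num_layers=2).eval()

# "the bird can sing" and "red green blue amber" both tokenize to
# [0, 1, 2, 3]; the computation is identical under either reading.
ids = torch.tensor([[0, 1, 2, 3]])
with torch.no_grad():
    out = encoder(embed(ids))
print(out.shape)  # torch.Size([1, 4, 16])
```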

Saturday, September 14, 2024

Tree stump and water, two versions

Can we make an AI scientist?

Sam Rodriques, What does it take to build an AI Scientist? August 15, 2024:

What will it take to build an AI Scientist?

I run FutureHouse, a non-profit AI-for-Science lab where we are automating research in biology and other complex sciences. Several people have asked me to respond to Sakana's recent AI Scientist paper. However, judging from comments on HackerNews, Reddit and elsewhere, I think people already get it: Sakana’s AI Scientist is just ChatGPT (or Claude) writing short scripts, making plots, and grading its own work. It's a nice demo, but there's no major technical breakthrough. It's also not the first time someone has claimed to make an AI Scientist, and there will be many more such claims before we actually get there.

So, putting Sakana aside: what are the problems we have to solve to build something like a real AI scientist? Here’s some food for thought, based on what we have learned so far:

It will take fundamental improvements in our ability to navigate open-ended spaces, beyond the capabilities of current LLMs

Scientific reasoning consists of essentially three steps: coming up with hypotheses, conducting experiments, and using the results to update one’s hypotheses. Science is the ultimate open-ended problem, in that we always have an infinite space of possible hypotheses to choose from, and an infinite space of possible observations. For hypothesis generation: How do we navigate this space effectively? How do we generate diverse, relevant, and explanatory hypotheses? It is one thing to have ChatGPT generate incremental ideas. It is another thing to come up with truly novel, paradigm-shifting concepts.

I note that this is quite different from playing games like chess or Go. Those games have huge search spaces, much larger than we can explicitly construct. But they are well-structured spaces. The space of scientific hypotheses is not at all well-structured. I discuss this problem in various posts, including this one: Stagnation, Redux: It’s the way of the world [good ideas are not evenly distributed, no more so than diamonds] (August 13, 2024).
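The difference is easy to put in code: game search presupposes a successor function that enumerates the legal next states, while hypothesis generation has no such generator. A sketch:

```python
# A well-structured space: a game comes with a successor function that
# enumerates every legal next state, so generic tree search applies.
def best_value(state, successors, utility, depth):
    """Depth-limited search; works for any game with a move generator."""
    moves = successors(state)
    if depth == 0 or not moves:
        return utility(state)
    return max(best_value(s, successors, utility, depth - 1) for s in moves)

# The space of scientific hypotheses is not like this. There is no
# function whose outputs enumerate the relevant, diverse, explanatory
# hypotheses about a phenomenon, so the machinery above has nothing
# to run on.
def hypothesis_successors(state):
    raise NotImplementedError("no well-defined generator for this space")
```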

It will take tight integration with experiments

Once we have a hypothesis, we then need to decide which experiment to conduct. This is an iterative process. How can we identify experiments that will maximize our information gain? How do we build affordance models that tell us which experiments are possible and which are impossible? Affordance models are critical, because discovery is about doing things that have never been done before.

There's much more at the link.
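The “maximize information gain” question, at least, has a standard Bayesian form: score each candidate experiment by the expected drop in entropy over your hypotheses. A toy sketch, with made-up names and numbers:

```python
import math

# Toy Bayesian experiment selection. Hypotheses get a prior; each
# experiment assigns outcome probabilities under each hypothesis;
# choose the experiment with the largest expected entropy reduction.
# All names and numbers are invented for illustration.

def entropy(dist):
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)

prior = {"H1": 0.5, "H2": 0.3, "H3": 0.2}

# likelihoods[experiment][hypothesis][outcome]
experiments = {
    "assay_A": {"H1": {"+": 0.9, "-": 0.1},
                "H2": {"+": 0.2, "-": 0.8},
                "H3": {"+": 0.5, "-": 0.5}},
    "assay_B": {"H1": {"+": 0.6, "-": 0.4},
                "H2": {"+": 0.5, "-": 0.5},
                "H3": {"+": 0.4, "-": 0.6}},
}

def expected_information_gain(lik, prior):
    gain = entropy(prior)
    for outcome in ("+", "-"):
        p_o = sum(prior[h] * lik[h][outcome] for h in prior)
        posterior = {h: prior[h] * lik[h][outcome] / p_o for h in prior}
        gain -= p_o * entropy(posterior)
    return gain

for name, lik in experiments.items():
    print(name, round(expected_information_gain(lik, prior), 3))
# assay_A separates the hypotheses more sharply, so it scores higher.
```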

Friday, September 13, 2024

There are no AI-shaped holes lying around

Friday, September 6, 2024

Mary Spender and Adam Neely talk about being musicians on tour and on YouTube

Rick Beato on the current Spotify top ten

Two points:

  • First time in the last three or four years that the Spotify top ten didn’t include any rap or hip hop.
  • First time since ??? that there’s been a top ten tune with a key change: Sabrina Carpenter, "Please Please Please."