Monday, April 29, 2024

Buds to blossoms

ChatGPT conjures with green ideas and Augustine

Chomsky published Syntactic Structures in 1957, in which he created a now-famous example sentence that is grammatically coherent but otherwise meaningless: “Colorless green ideas sleep furiously.” In 1971 John Hollander published a collection of poems containing a three-line poem entitled “Coiled Alizarine,” which is dedicated to Chomsky:

Curiously deep, the slumber of crimson thoughts:
While breathless, in stodgy viridian
Colorless green ideas sleep furiously.

Since ChatGPT is quite familiar with Chomsky’s famous line, I decided to play around with Hollander’s poem.

A simple prompt

I prompted it with Hollander’s first two lines, expecting it to respond with the third as the poem is freely available on the web. It fooled me (prompt in boldface):

Curiously deep, the slumber of crimson thoughts: While breathless, in stodgy viridian

That's a beautifully vivid image! It evokes a sense of profound contemplation and perhaps a hint of melancholy. The contrast between the "crimson thoughts" and the "stodgy viridian" creates a striking juxtaposition, suggesting a complex emotional landscape. It feels like the beginning of a captivating journey into the depths of the mind. Are you writing poetry or contemplating something deep?

An obvious request

Since that didn’t work, I decided to be obvious about what I wanted. Continuing in the same session:

Here is Chomsky’s famous example sentence: “Colorless green ideas sleep furiously.” I want you to treat that as the final line in a poem. What lines come before it?

In the quiet garden of the mind's eye,
Where whispers dance 'neath the moonlit sky,
There blooms a garden of paradoxical bloom,
Where shadows whisper secrets in the gloom.

Beneath the boughs of twisted trees,
Where truth and fiction intertwine with ease,
The silence hums with a curious tune,
As time itself seems to swoon.

In this realm of surreal delight,
Where dreams take flight in the dead of night,
The poet's pen weaves a tapestry rare,
Of words that dance on the edge of air.

Through the labyrinth of thought they creep,
Where reality and fantasy softly meet,
And in this maze of linguistic delight,
Colorless green ideas sleep furiously, out of sight.

That’s not what I had in mind; I’d been looking for that Hollander poem. But, in a way, it’s more interesting. After all, it gave me what I asked for, and Chomsky’s famous line makes poetic sense in context. Notice, though, that it added a bit after that line.

As you may know, ChatGPT allows you to ask for another response to a given prompt. So I clicked on the appropriate icon and here’s what I got:

In the garden of the mind, where thoughts take flight,
Whispers of dawn dance with the veil of night.
Echoes of silence, a canvas so vast,
Where dreams intertwine, fleeting and fast.

Through corridors of memory, shadows roam,
In the labyrinth of consciousness, they find a home.
Where words become rivers, flowing endlessly,
And the heartbeat of language pulses softly.

Amidst this symphony of chaos, serene,
Lies a paradox, profound and unseen.
Where colorless green ideas dare to creep,
And in the realm of reason, they softly sleep.

In the tapestry of existence, woven curiously,
Where logic and poetry collide so mysteriously.
Thus, in this cosmic dance, so imperviously,
Colorless green ideas sleep furiously.

Notice that both poems begin with the same metaphor, the mind as a garden, and then move to dance and night in the second line. The second poem introduces the idea of colorless green ideas in the third stanza before bringing it back in the final line. Again, not bad.

Enter, Augustine on memory

So, I had an idea. Continuing in the same session:

Now, I want you to take what you’ve done and use it to weave a new poem using the following passage from St. Augustine:

The power of memory is prodigious, my God. It is a vast immeasurable sanctuary. Who can plumb its depths? And yet it is a faculty of my soul. Although it is part of my nature, I cannot understand all that I am. This means, then, that the mind is too narrow to contain itself entirely. But where is that part of it which it does not itself contain? Is it somewhere outside itself and not within it? How, then, can it be part of it, if it is not contained in it?

In the labyrinth of self, memory's might,
A sanctuary vast, an endless flight.
Prodigious power, beyond our grasp,
A mystery enshrined, in time's gentle clasp.

Who can fathom its depths, divine and deep?
Within the soul's embrace, secrets we keep.
A part of our nature, yet beyond our ken,
In its boundless expanse, where do we begin?

The mind, a vessel too narrow, too small,
To encompass itself, to hold it all.
Yet, within its confines, a paradox lies,
A fragment unseen, where the mystery flies.

Is it outside ourselves, this part unknown?
Or within, a realm entirely its own?
How can it be part of us, yet not contained?
In this enigma's embrace, our minds are chained.

Thus, memory's tale, intertwined with thought,
In the dance of existence, a truth dearly bought.
Colorless green ideas, in silence, they creep,
And in memory's sanctuary, they softly sleep.

And again, not bad. In its elaborations on the Chomsky line ChatGPT was playing around with imagery of the mind, so the Augustine material fit right in. It would be interesting, if a bit tedious, to trace explicit linkages between this new poem and the previous ones. What words and phrases from the previous poems show up in this new one? That question begins to call for some serious analysis. Would it be worth the time and effort? I don’t know.

I ended the session by directing ChatGPT to go around again, and again:

In the cathedral of the mind, memory reigns,
A sanctuary vast, where truth remains.
Prodigious power, unfathomed, divine,
Within its depths, eternity enshrined.

Who dares to plumb its depths, my God,
This infinite expanse, a sacred sod.
Yet part of me, yet beyond my grasp,
In its mystery, I find myself clasp.

A faculty of the soul, yet beyond my ken,
I am but a wanderer in memory's den.
For though I am, I do not comprehend,
The vastness of this realm, where all things blend.

Too narrow is the mind to contain its might,
Yet within its confines, it takes flight.
Where lies the part that it cannot retain?
Outside itself, or within its domain?

Oh, how the mind wrestles with its own being,
In this paradoxical dance, forever freeing.
A puzzle unsolved, a riddle untold,
In the depths of memory, mysteries unfold.

In this cosmic symphony, where truth abides,
Colorless green ideas and memories collide.
And in the end, as Augustine would decree,
In the vast expanse of the mind, we find eternity.

I just want to see what you can do. How about one more time?

In the labyrinth of the soul, memory dwells,
A sacred sanctuary where time compels.
Prodigious power, a vast expanse untamed,
In its depths, the essence of being named.

Who can fathom its depths, O Divine?
This boundless realm, where echoes intertwine.
Part of my nature, yet beyond my grasp,
In its mystery, I find solace at last.

The mind, too narrow to contain its expanse,
Yet within its bounds, memories dance.
Where lies the part that eludes my sight?
Within, without, in the realm of light?

Oh, the paradox of the mind's own design,
In its boundless depths, mysteries entwine.
A puzzle unsolved, a mystery profound,
In memory's embrace, truth is found.

In this cosmic dance of thought and dream,
Colorless green ideas find their stream.
And as the final line of the poem rings true,
In memory's sanctuary, eternity brews.

So what?

I’m not sure. But I do find this satisfying. Yes, I’m giving ChatGPT some good material to work with, the line from Chomsky, the passage from St. Augustine, so it’s not entirely on its own. That’s OK. Does anyone work alone...really?

Sunday, April 28, 2024

Unbalanced composition with gazebo, trees, and other things

Michael Levin on bioelectricity, regeneration, cancer treatment

How bioelectricity could regrow limbs and organs, with Michael Levin (Ep. 112), Big Brains Podcast, UChicago News, April 27, 2023.

"Software" for the cells:

Paul Rand: It’s kind of mind-blowing in its own way. How do the cells have these memories, if that’s the right word?

Michael Levin: Well, I think it’s the right word. I think many people probably don’t think it’s the right word, but I think it’s exactly the right word. I think you’re right. It is mind-blowing because, look, each of us makes this journey from an unfertilized [inaudible 00:07:46], which is a little blob of chemicals. You look at that little blob of chemicals and you would say, “well, this is just physics. This is just chemistry. This thing doesn’t have any goals, any intelligence, you know, you name it.” And then eventually that little blob of chemistry turns into, nine months and some years later, a being that absolutely has an inner perspective, it has goals, it has preferences, it has behavioral [inaudible 00:08:13], and it will go on to say things like, “Well, I’m not a machine, I’m a human being.” Okay, great.

What’s really important to realize is that this process of development is very robust, meaning consistent. There’s this amazing ability for life to get to the correct outcome, meaning the correct target morphology for that species, despite all kinds of crazy things: multiple copy numbers of the DNA, more cells, fewer cells, bigger cells. They still figure out how to get it done.

So then it makes really a lot of sense to ask, “Okay, if you’re solving this problem, if you’re going to get to the same goal despite various things that could happen to you, what are you using to remember what that goal is? You’re navigating these spaces trying to get to the correct final outcome, but how do you know what that outcome is?”

Paul Rand: Levin thinks bioelectricity is the architect building the blueprint, so to speak.

Michael Levin: These pattern memories are encoded in the electrical network of the body of the early embryo and subsequently, exactly in the way that we think memories about navigating three-dimensional space are encoded in the brain. Now, I should point out that, of course, we don’t know exactly how memories are encoded in the brain. We still don’t know, and there are many mysteries about that in the body as well, but I think we should get really comfortable with the idea that electrical networks store memories, they store goal states, and they facilitate these complex beings navigating space to get to those goals.

Paul Rand: As you talk about this, it’s almost like the cells are like a hardware and the electrical patterns are almost the software. Is that a fair analogy?

Michael Levin: I think that’s a very fair analogy. A lot of people don’t like that analogy because they’re visualizing hardware and software the way that they think about their laptops. But what is really powerful about that notion and what makes that analogy work really well is the idea of reprogrammability. So what’s powerful about computers is that the exact same piece of hardware can do multiple things without rewiring. And so when I give talks about this, I ask people, “Why is it that on their computer when they want to switch from Photoshop to PowerPoint, they don’t get out their soldering iron and start rewiring?” Isn’t it amazing?

Regeneration, cancer treatment:

The bigger picture here is that one of the problems with the medical model that we currently have is that it’s fundamentally unsustainable, no matter how many resources we have, because every advance that we make to prolong the life of a patient ends up giving you a sicker patient; that’s the baseline for the next intervention. So the better you are at extending the last stages of the lifespan, the more expensive and more heroic the next measures have to be. The logic of it is inescapable. And so that’s a spiral, a constant spiral that is fundamentally unavoidable and unsustainable for any society unless we figure out how to crank up the regenerative process very early on so that you never get to the stage of that sinking ship that you need to keep propping up. It means that you are not just chasing symptoms; you are doing something fundamentally different, and we can talk about what that is, but we need a completely different approach to medicine that leverages literally the intelligence of the body so that the regenerative process is happening all the time.

In addition to leg regeneration, two kinds of flagship applications in our group have been, first of all, the repair of birth defects. And so we were able to show that a wide range of birth defects of the brain, heart, face, and gut, induced either by genetic mutations or by chemicals, can be prevented by an appropriate bioelectrical treatment that was designed by a computational model. So there’s a computational model that tells you which ion channels you would need to turn on and off to make specific patterns. And so we’ve used that to repair birth defects in the frog model. The other side is the cancer side, and we started in frogs showing that if we understand cancer to be the breakdown of the electrical signaling that normally harnesses cells towards this common anatomical purpose, then when that breaks down, they simply roll back to their ancient, amoeba-like lifestyle where all their goals are little tiny cell-level goals, which means go wherever life is good, reproduce as much as you can. Then that’s metastasis. And so we were able to show that despite really nasty human oncogenes, we could suppress or prevent tumorigenesis by forcing the appropriate bioelectrical states. And we started this in frog, and we are now working to try the same thing in human glioblastoma.

There's more at the link.

H/t 3QD.

Japanese maple

New Method for Science: Use a neural network to model scientific data, then interpret the resulting model

Miles Cranmer - The Next Great Scientific Theory is Hiding Inside a Neural Network (April 3, 2024)

From the Simons Foundation:

Machine learning methods such as neural networks are quickly finding uses in everything from text generation to construction cranes. Excitingly, those same tools also promise a new paradigm for scientific discovery.

In this Presidential Lecture, Miles Cranmer will outline an innovative approach that leverages neural networks in the scientific process. Rather than directly modeling data, the approach interprets neural networks trained using the data. Through training, the neural networks can capture the physics underlying the system being studied. By extracting what the neural networks have learned, scientists can improve their theories. He will also discuss the Polymathic AI initiative, a collaboration between researchers at the Flatiron Institute and scientists around the world. Polymathic AI is designed to spur scientific discovery using similar technology to that powering ChatGPT. Using Polymathic AI, scientists will be able to model a broad range of physical systems across different scales.

About the Speaker

Cranmer is an assistant professor in data intensive science at the University of Cambridge with joint appointments in the Department of Applied Mathematics and Theoretical Physics and the Institute of Astronomy. He completed his Ph.D. at Princeton University. His research focuses on accelerating scientific discovery by developing and applying novel methods at the intersection of machine learning and physics. Cranmer has created a suite of standard software libraries for ‘symbolic regression’ that have been utilized in numerous scientific discoveries. His work covers various areas of deep learning, including physics-motivated architectures such as Lagrangian neural networks.

Miles Cranmer at Google Scholar

Roughly, it goes like this: 1) train a neural network on scientific data. Then, 2) interpret the model using the mathematical language of science.
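
Here’s a minimal sketch of that two-step loop in Python, using scikit-learn for the network and Cranmer’s own PySR library for the symbolic-regression step. The toy data (a planted law, y = 2*x0^2 + cos(x1)), the network size, and the operator menu are all my assumptions for illustration, not anything from the lecture:

```python
# Step 1: fit a neural network to (noisy) scientific data.
# Step 2: interpret the network by fitting symbolic expressions to what it
# learned, i.e., to its smoothed predictions rather than the raw data.
# Toy setup: the planted "law" is y = 2*x0**2 + cos(x1). Assumes scikit-learn
# and PySR (with its Julia backend) are installed.
import numpy as np
from sklearn.neural_network import MLPRegressor
from pysr import PySRRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(500, 2))
y = 2 * X[:, 0] ** 2 + np.cos(X[:, 1]) + rng.normal(scale=0.1, size=500)

# Step 1: the network soaks up the data, noise and all.
net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000).fit(X, y)

# Step 2: symbolic regression on the network's predictions, not the raw data.
model = PySRRegressor(
    niterations=40,
    binary_operators=["+", "-", "*"],
    unary_operators=["cos", "square"],
)
model.fit(X, net.predict(X))
print(model)  # the best candidate should look like 2*x0^2 + cos(x1)
```

If the recovered expression matches the planted law, the network has, in effect, handed its “theory” back to you in symbolic form.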

Hmmmm... I wonder. If we had a mathematical model of language, could we use that model to interpret LLMs?

Saturday, April 27, 2024

A physics that explains 'everything' while evading the temptation of being a Grand Unified Theory?

The (Simple) Theory That Explains Everything | Neil Turok

Theories of Everything with Curt Jaimungal, April 23, 2024.

Physicist Neil Turok, recipient of the James Clerk Maxwell Medal and Prize, and the John Torrence Tate Award for International Leadership in Physics, joins Curt Jaimungal and Theories of Everything to discuss his new hypothesis regarding the origins of the universe. Building on Stephen Hawking's geometrical model, Turok proposes a theoretical approach that avoids the singularity at the Big Bang by suggesting a minimal, mirror universe scenario without requiring inflation.


Timestamps:
00:00 - The Big Bang Is A Mirror
15:40 - Minimalism In Physics
28:28 - Neil’s Theory “Minimalism SM LCDM”
31:20 - Fields Vs. Particles
49:15 - The Arrow Of Time (Boltzmann)
55:44 - Black Hole Singularity Vs. Big Bang Singularity
01:09:21 - Numerology And The Number 36
01:19:26 - Neil’s Theory Solves EVERYTHING
01:23:32 - What Do Other Scientists Think?
01:36:28 - The Dual Universe
01:44:14 - Predictions From Neil’s Theory
01:48:28 - What Motivates Neil?
01:52:20 - Wave Function Of The Universe
01:57:20 - Support TOE

Guerilla libraries in Hoboken, NJ

On Monday my friend Leanne Ogasawara had a post at 3 Quarks Daily that was, in part, about Little Free Libraries. According to Wikipedia:

Little Free Library is a 501(c)(3) nonprofit organization that promotes neighborhood book exchanges, usually in the form of a public bookcase. More than 150,000 public book exchanges are registered with the organization and branded as Little Free Libraries. Through Little Free Libraries, present in 115 countries, millions of books are exchanged each year, with the aim of increasing access to books for readers of all ages and backgrounds. The Little Free Library nonprofit organization is based in St. Paul, Minnesota, United States.

The first one was built in 2009 by Todd Bol as a tribute to his mother. Then this, that, and the other, and his idea became a movement. His original goal was to create 2,150 libraries, thereby surpassing the number of libraries endowed by the industrialist Andrew Carnegie.

I can remember going to “the Carnegie library” when I was a kid. To me that label was just the name of the library. It was only somewhat later that I learned who Carnegie was. I particularly remember taking out books about American Indians, which, by the way, is what I wanted to be when I grew up. That's also why I wasn’t fond of my curly blond hair. Whoever heard of a blond Indian?

I digress.

Anyway, at the very end of her article, in the last line, Leanne had a link to a map of Little Free Libraries. So I went looking and found one in Hoboken only a few blocks from me. Here it is, beneath the 14th Street viaduct:

And here’s the official tag, with its charter number: 137333.

According to the map, there are seven Little Free Libraries in Hoboken. The red tear drops mark the libraries.

The one in the photos is the one near the upper left of the map. The two at the right are even closer to my place. For some reason I didn’t spot them the first time I looked at the map.

Anyhow, a couple of years ago I discovered something very like those free libraries a bit closer to me than that Little Free Library. The box is a bit larger and is decorated:

It seems to have been adopted by a Girl Scout troop. I’m not sure what’s going on with the spelling of “Hoboken.”

Here’s what it looked like back in 2020 when I first spotted it:

In this last photo you can see how I stuffed it with copies of a little book I’d edited: We Need a Department of Peace: Everybody’s Business, Nobody’s Job.

More later.

Lawfare analysts on Trump's immunity case before the Supreme Court

Matt Gluck, Hyemin Han, Quinta Jurecic, Natalie K. Orpett, Roger Parloff, Alan Z. Rozenshtein, “For the Ages”: The Supreme Court Hears the Presidential Immunity Defense, April 26, 2024.

Introductory remarks:

On April 25, the U.S. Supreme Court heard oral arguments in Trump v. United States, the case arising from the Special Counsel’s Office’s decision to charge former president Donald Trump for his effort to overturn the results of the 2020 election. Trump has argued that he is absolutely immune from the charges brought by the Justice Department because, he asserts, they target his official presidential conduct. The U.S. District Court for the District of Columbia rejected Trump’s argument that he enjoys absolute criminal immunity for his official acts, and the U.S. Court of Appeals for the D.C. Circuit affirmed that ruling.

Despite the nearly three hours of oral argument, only a portion of that time was spent on the particulars of the Jan. 6 case or its procedural posture. That’s because the justices were, as Justice Gorsuch put it, writing a ruling “for the ages.” The Court grappled with the distinction between private acts and official acts—everyone seemed to agree that private acts could be prosecuted—and then wrestled with which subset of official acts, if any, could be prosecuted. Several justices further focused on which criminal statutes can apply to the president without conflicting with his Article II powers. There did not appear to be much consensus on these questions, and the justices seem poised to issue a splintered decision rejecting Trump’s maximalist arguments, while establishing at least some presidential criminal immunity for at least some types of official acts.

The Court could send the case down several different paths to resolve and eventually move past the immunity issue, but none is likely to lead to a quick resumption of the trial in Judge Tanya Chutkan’s courtroom.

Then we have analysis of remarks by the advocates and by each justice.

Concluding remarks:

The justices seem certain to send the case back to either the court of appeals or, more likely, the district court for further proceedings. Precisely what those proceedings will look like, what they will decide, and whether the findings reached therein would, themselves, be subject to a second interlocutory appeal, all remain very live questions.

Even Sauer acknowledged that certain accusations of the indictment concerned purely private acts, and that a former president could at least theoretically be charged with crimes based solely upon those. But much of the indictment also alleges that Trump used the trappings of his office for personal gain. And the justices appeared deeply split over whether these sorts of acts were protected by some sort of immunity and, if so, whether it was absolute or qualified.

Likewise unclear—and decisive in terms of whether this case can yet conceivably be tried before the election—is what sort of procedures the Court will require the lower court to engage in to resolve whatever questions the Court wants resolved. If the case returns to Judge Chutkan, one possibility is that she could proceed with the current indictment, as is, and simply instruct the jurors that certain accusations can only be used as evidence of Trump’s intent—not as a basis for finding him criminally culpable. Another is that she would have to “expunge” certain accusations and that even evidence of that conduct would be precluded from being introduced at trial. Still another is that Judge Chutkan would need to hold some sort of evidentiary hearing. Finally, in any of these scenarios, the crowning question will be whether Trump will be entitled to make an interlocutory appeal on whatever findings Judge Chutkan makes—ensuring that no trial could take place for many months to come.

It looks increasingly unlikely that this case will be tried before the election. And if Trump wins that election, the case will likely never be tried at all.

White flower with infolded petals

To which values should an A.I. be aligned?

Nathan Gardels, The Babelian Tower Of AI Alignment, Noema, April 26, 2024:

As generative AI models become ever more powerful on their way to surpassing human intelligence, there has been much discussion about how they must align with human values so they end up serving our species instead of becoming our new masters. But what are those values?

The problem is that there is no universal agreement on one conception of the good life, nor the values and rights that flow from that incommensurate diversity, which suits all times, all places and all peoples. From the ancient Tower of Babel to the latest large language models, human nature stubbornly resists the rationalization of the many into the one.

Despite the surface appearance of technological convergence, a deep ontological plurality — profoundly different beliefs about the nature of being — still informs the active values of variegated societies.

Silicon Valley Vs. China

This is most readily evident in the politico-cultural clash of the leading AI powers, Silicon Valley and China. At the risk of reductive essentialism for the purpose of brevity, the values of the former are aligned with the libertarian worldview of the sovereign individual long cultivated in the Judeo-Christian West. The values of the latter are aligned with the concept of the collectively embedded person rooted in Confucian, Buddhist and Daoist beliefs of social interdependence.

An early mission statement by OpenAI, which developed GPT, reflects the deep well from which its innovations have sprung: “We believe AI should be an extension of individual human wills and, in the spirit of liberty, as broadly and evenly distributed as possible.”

By contrast, after Alibaba released its latest version of generative AI in 2023, the Cyberspace Administration of China quickly laid down the law: “Content generated by generative artificial intelligence should embody core socialist values and must not contain any content that subverts state power, advocates the overthrow of the socialist system, incites splitting the country or undermines national unity.”

And of course there is diversity in the West, not to mention the rest of the world. India? The Middle East? Africa? The Pacific nations? 

There's more at the link.

Friday, April 26, 2024

Forlorn tree on a submerged pier

Collective intelligence: A unifying concept for integrating biology across scales and substrates

Patrick McMillen & Michael Levin, Collective intelligence: A unifying concept for integrating biology across scales and substrates, Communications Biology, (2024) 7:378, https://doi.org/10.1038/s42003-024-06037-4

Abstract: A defining feature of biology is the use of a multiscale architecture, ranging from molecular networks to cells, tissues, organs, whole bodies, and swarms. Crucially however, biology is not only nested structurally, but also functionally: each level is able to solve problems in distinct problem spaces, such as physiological, morphological, and behavioral state space. Percolating adaptive functionality from one level of competent subunits to a higher functional level of organization requires collective dynamics: multiple components must work together to achieve specific outcomes. Here we overview a number of biological examples at different scales which highlight the ability of cellular material to make decisions that implement cooperation toward specific homeodynamic endpoints, and implement collective intelligence by solving problems at the cell, tissue, and whole-organism levels. We explore the hypothesis that collective intelligence is not only the province of groups of animals, and that an important symmetry exists between the behavioral science of swarms and the competencies of cells and other biological systems at different scales. We then briefly outline the implications of this approach, and the possible impact of tools from the field of diverse intelligence for regenerative medicine and synthetic bioengineering.

How is an LLM like a mechanical clock?

The rest of the tweet stream:

A clock movement has no concept of "time" if you take the gears and levers apart, nowhere in there will you find any notion of what "time" is, nor, especially, the *correct* time of day. It just turns shafts, and even that function is not obvious to the non-expert.

If you attach some hands to the shafts, and position a "face" with numerals on it behind, and so forth, you can cause the combined machine to "tell the time" in the sense of making an assertion about what the time of day is.

The position of the hands over the face has a *meaning* for a suitably trained reader, in the same way that a sentence has *meaning*. If you speak the language, a sentence, as a 'signifier,' points to a 'signified,' the meaning of the sentence.

A book does not "know" anything, it does not "mean" anything by itself. If you, a language reader, read it, the meaning arises in that interaction.

In the same way, "it is 2:17" is a meaning which arises when you look at a clock face.

If the clock has not been wound, or if the clock has been set to the wrong time, the statement "it is 2:17" will still have meaning, but that meaning will likely be false. It is 10:17, not 2:17.

The clock is not *lying* in any reasonable sense. It itself does not even carry the meaning of the signs it displays. The meaning arises in the interaction of you with the hands and face attached to the clock.

The gears and levers don't even have "2:17" encoded in them.

In the same way, an LLM has no meaning encoded in it, although it produces signs which in their interaction with you often have a great deal of meaning.

LLMs do not "lie" or "hallucinate" any more than a clock does. It's simply a very very complicated set of gears and lever which, if you wind it up, and if you set it to the correct time, can produce true statements like "it is 10:20" some of the time.

A set of signs on the screen is incapable of being true or false, hallucination or reality. They're just arrangements of letters.

Only meanings are true or false, and there are no meanings in the LLM. 

It is true that you will find echoes of meaning inside the LLM. Objects arise in the data which resemble an Othello board in interesting ways, when you train an LLM to play Othello.

This is like the position of the shaft of a clock movement.

You can see artifacts in the clock's machinery which, if you know what's going on, you can map to the output statement about what time it is. At the same time, it's foolish to argue that the clock "knows" what time it is, and then to argue that "it's lying."

The clock just turns shafts. This is not changed by the fact that you can imagine the hands and face without actually putting them on, and thereby imaginatively replicate the ability of the clock to make temporal assertions.

Now, an electronic clock can extract the actual correct time from a number of over-the-air oracles (GPS etc) and in fact if you take one of these apart you *will* find a place in there which contains the actual time. Allegedly the "correct" time.

A sequence of ones and zeros in a memory location encodes the "correct" time in some sense, or at least is supposed to.

There is a meaning inside the thing, kinda! It can be wrong, now, and it can arguably lie! If the clock is damaged, say.

When you look at the display you see that it reads "2:17" but the correct time is now 10:27. The clock is making some mistake between the oracle it consults and the display it shows you. It is "lying" or "hallucinating" in some meaningful way.

There is a model of the world attached to the clock, which the clock consults. That model can be interpreted to "mean" that "the time is 10:27" and the display on the clock can be interpreted to "mean" that "the time is 2:17" and that mismatch is a falsehood.

LLMs have no world model. They are clock movements, just clock movements. Attach them to a screen and a keyboard, and you've given the clock hands and a face. You can now read meaning if you like, but to propose that the "meaning" is inside the LLM is simply false.

fin/

Then there's this:

Friday Fotos: Let's play in the park!

Take your kids to work day – to Mt. Kenya [Don't punish women scientists because they have kids]

Toby Kiers, A Simple Act of Defiance Can Improve Science for Women, NYTimes, April 26, 2024.


They don’t tell you beforehand that it will be a choice between having a career in science or starting a family. But that’s the message I heard loud and clear 17 years ago, in my first job after completing my Ph.D. in evolutionary biology. During a routine departmental meeting, a senior academic announced that pregnant women were a financial drain on the department. I was sitting visibly pregnant in the front row. No one said anything.

She has another child. There are incidents: "It felt like an impossible choice: to be a bad scientist or a bad mother."

What to do:

Through an act of academic defiance: I bring my kids with me on my scientific expeditions. It’s a form of rebellion that is available to mothers not just in the sciences but also in other disciplines that require site visits and field work, such as architecture and journalism. [...]

It started for me as a simple necessity. When my son was just under 2 and my daughter not yet 4, I took them on an expedition to the base of Mount Kenya in Africa, to study how fungi help trees defend themselves against the elephants and giraffes who feed on them. My son was still nursing, and I didn’t want to stop working. My husband, a poet, came along to stay with them at base camp.

As time went on, I began to embrace the decision to bring my kids with me on my expeditions, not as an exigency of parenting but as a kind of feminist act.

It has worked out well:

I started tasting soils in the field — a technique I now use to notice subtle differences across ecosystems — only after seeing my kids eat dirt. Children have an uncanny ability to make local friends quickly; many of those new friends have led me to obscure terrain and hidden fungal oases that I otherwise would never have come across. And my kids’ naïve minds routinely force me to rethink old assumptions by asking questions that are simultaneously absurd and profound. Can you taste clouds? Do fungi dream? How loud are our footsteps underground?

And so it goes. More examples of how things worked out.

At its core, feminism is about having the power to choose. For female scientists, this means having the ability to bring children into the field — or the full support to leave them at home. The pressure is acute because, as research shows, women on scientific teams are significantly less likely than men to be credited with authorship. So for me, it is crucial to keep collecting data with my own hands.

There's much more at the link.

Thursday, April 25, 2024

Lilacs

The Industrial Revolution started earlier than conventional wisdom would have us believe

Fred Lewsey, ‘Nation of makers’: Britain industrialised over a century earlier than history books claim, University of Cambridge, 5 April, 2024.

Britain was well on its way to an industrialised economy under the reign of the Stuarts in the 17th century – over 100 years before textbooks mark the start of the Industrial Revolution – according to the most detailed occupational history of a nation ever created.

Built from more than 160 million records and spanning over three centuries, the University of Cambridge’s Economies Past website uses census data, parish registers, probate records and more to track changes to the British labour force from the Elizabethan era to the eve of World War One.

The research shows that 17th century Britain saw a steep decline in agricultural peasantry, and a surge in people who manufactured goods: from local artisans like blacksmiths, shoemakers and wheelwrights, to an explosion in networks of home-based weavers producing cloth for wholesale.

Historians say the data suggests that Britain was emerging as the world’s first industrial powerhouse several generations before the mills and steam engines of the late 18th century – long credited as the birth of global industry and economic growth.

Distributed manufacture:

Yet in the England of 1700, half of all manufacturing employment was in the countryside. “In addition to village artisans, there were networks of weavers in rural areas who would work for merchants that supplied wool and sold the finished articles,” said Shaw-Taylor.

Industries of textiles, or metalworkers making nails and scythes, were shaped like “factories without machines spread out over hundreds of households” according to Shaw-Taylor – and increasingly produced goods for international markets.

In Gloucestershire, for example, expansions in textiles, footwear and metals saw the share of the male workforce in industry grow from a third (33%) to almost half (48%) over the 17th century.

While in Lancashire, the share of men in manufacturing work grew from 42% in 1660 to 61% in 1750, driven by a doubling of textile workers (from 15% to 30%). This all occurred prior to the Industrial Revolution.

Some networks evolved into workshops, and eventually the mills of Blake’s visions as industries migrated to the North of England, where coal was abundant and crops were harder to grow.

This meant that by the mid-18th century – considered the start of the Industrial Revolution – much of England’s South and East had actually lost its long-established industries, and even returned to farm labour, according to the research.

There's more at the link.

H/t Tyler Cowen.

POST BOP JAZZ STYLES (Was 1959 the most important year?)

 

From the YouTube page:

This video introduces a series of episodes focusing on the various styles of jazz that emerged in the 1950s following the bebop era. By almost any account, this was one of the most impactful decades in jazz history, one which foretold a future in which it would become increasingly difficult to define jazz music. You'll also learn about Blue Note and Prestige, two independent record labels that played an outsize role in documenting the music of the post-bop era.

Construction in the cove

Katja Grace: The first future and the best future [opportunity costs]

The post:

It seems to me worth trying to slow down AI development to steer successfully around the shoals of extinction and out to utopia.

But I was thinking lately: even if I didn’t think there was any chance of extinction risk, it might still be worth prioritizing a lot of care over moving at maximal speed. Because there are many different possible AI futures, and I think there’s a good chance that the initial direction affects the long term path, and different long term paths go to different places. The systems we build now will shape the next systems, and so forth. If the first human-level-ish AI is brain emulations, I expect a quite different sequence of events to if it is GPT-ish.

People genuinely pushing for AI speed over care (rather than just feeling impotent) apparently think there is negligible risk of bad outcomes, but also they are asking to take the first future to which there is a path. Yet possible futures are a large space, and arguably we are in a rare plateau where we could climb very different hills, and get to much better futures.

My comment: YES. 

At the moment the A.I. world is dominated by an almost magical belief in large language models. Yes, they are marvelous, a very powerful technology. By all means, let's understand and develop them. But they aren't the way, the truth and the light. They're just a very powerful and important technology. Heavy investment in them has an opportunity cost: less money to invest in other architectures and ideas.

And I'm not just talking about software, chips, and infrastructure. I'm talking about education and training. It's not good to have a whole cohort of researchers and practitioners who know little or nothing beyond the current orthodoxy about machine learning and LLMs. That kind of mistake is very difficult to correct in the future. Why? Because correcting it means education and training. Who's going to do it if no one knows anything else? 

Moreover, in order to exploit LLMs effectively we need to understand how they work. Mechanistic interpretability is one approach, but we're not doing enough of it, and by itself it won't do the job. People need to know more about language, linguistics, and cognition in order to understand what those models are doing.
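
To make that a bit more concrete, here's a toy sketch of one of the simplest interpretability tools, a linear probe, in Python. Everything in it is assumed for illustration: the "hidden states" are synthetic and the property being decoded is planted. Real probing work runs on actual model activations:

```python
# A linear "probe": test whether some property is linearly decodable from a
# model's hidden states. The hidden states here are synthetic stand-ins.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
hidden = rng.normal(size=(1000, 256))   # pretend activations
direction = rng.normal(size=256)        # planted feature direction
labels = (hidden @ direction + rng.normal(scale=5.0, size=1000)) > 0

X_tr, X_te, y_tr, y_te = train_test_split(hidden, labels, random_state=0)
probe = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print(f"probe accuracy: {probe.score(X_te, y_te):.2f}")
# Accuracy well above chance suggests the property is represented linearly.
```

A probe only tells you a property is recoverable from the activations, not how the model uses it; that gap is exactly why mechanistic interpretability by itself won't do the whole job.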

Wednesday, April 24, 2024

Kids these days

Elon's Xanadu rumpus room

At last! ChatGPT does, shall we say, interesting imitations of “Kubla Khan”

As you may know, I have a long-term interest in Coleridge’s “Kubla Khan” and have written a lot of posts about it. If you scan through those posts you’ll find that some of the more recent ones involve ChatGPT. Here’s my first attempt to have ChatGPT parody the poem:

How ChatGPT parodied “Kubla Khan” and pwned DJT45 at the same time

While the Chatster’s performance is not without interest, the parodies were not very good. A month and a half later I decided to give it another try:

Again, not without interest, but no cigar.

Meanwhile, I have managed to prompt ChatGPT into imitations of Wallace Stevens’ “Thirteen Ways of Looking at a Blackbird.” I did it by using the technique I’d used to investigate story-telling: I’d give it a story and then ask ChatGPT to use that as the basis for a new story with a different protagonist, leaving it free to do whatever it wanted with the rest. That worked well for stories, and it worked well with Stevens too. A month ago I asked ChatGPT to substitute “A.I.” for “blackbirds.” Just the other day I had it produce a poem about irises.
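
For what it’s worth, that substitution move is easy to script. Here’s a minimal sketch using the OpenAI Python API; the model name and the prompt wording are my assumptions, since the actual sessions ran through the chat window:

```python
# Sketch of the substitution prompt: give the model a source text and ask for
# an imitation with one element swapped. Assumes OPENAI_API_KEY is set in the
# environment; the model name and prompt wording are illustrative.
from openai import OpenAI

client = OpenAI()

source_poem = "..."  # paste the source text here, e.g. Stevens's poem

prompt = (
    "Here is a poem. Write a new poem closely modeled on it, but substitute "
    "'irises' for 'blackbirds' throughout. You may change whatever else you "
    "need to make the new poem work.\n\n" + source_poem
)

response = client.chat.completions.create(
    model="gpt-4o",  # assumed; any chat model will do
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```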

Perhaps it will work with “Kubla Khan.” I decided to restrict the exercise to the first 36 lines. First, I had it substitute “Elon Musk” for “Kubla Khan,” and then “FDR.” The results? Good poetry, no. Some of it is silly. But it’s more interesting than the earlier tries. Three cigars and a pop tart.

As always, I set my prompts in boldface and ChatGPT’s response in plain-face.

From Kubla Khan to Elon Musk

I’m going to give you the first 36 lines of “Kubla Khan.” I want you to base a new poem on that, using “Elon Musk” instead of “Kubla Khan.”

In Xanadu did Kubla Khan
A stately pleasure-dome decree:
Where Alph, the sacred river, ran
Through caverns measureless to man
Down to a sunless sea.
    So twice five miles of fertile ground
With walls and towers were girdled round:
And there were gardens bright with sinuous rills,
Where blossomed many an incense-bearing tree;
And here were forests ancient as the hills,
Enfolding sunny spots of greenery.

But oh! that deep romantic chasm which slanted
Down the green hill athwart a cedarn cover!
A savage place! as holy and enchanted
As e'er beneath a waning moon was haunted
By woman wailing for her demon lover!
And from this chasm, with ceaseless turmoil seething
As if this earth in fast thick pants were breathing,
A mighty fountain momently was forced:
Amid whose swift half-intermitted burst
Huge fragments vaulted like rebounding hail,
Of chaffy grain beneath the thresher's flail:
And 'mid these dancing rocks at once and ever
It flung up momently the sacred river.
Five miles meandering with a mazy motion
Through wood and dale the sacred river ran,
Then reached the caverns measureless to man,
And sank in tumult to a lifeless ocean:
And 'mid this tumult Kubla heard from far
Ancestral voices prophesying war!

    The shadow of the dome of pleasure
    Floated midway on the waves;
    Where was heard the mingled measure
    From the fountain and the caves.
It was a miracle of rare device,
A sunny pleasure-dome with caves of ice!

In Tesla's realm, where dreams take flight,
Elon Musk decreed with might:
A wondrous hub of innovation, grand,
Where streams of thought would gently land.

Through circuits vast and wires deep,
Electric currents would swiftly creep,
From silicon valleys to skies above,
A symphony of progress and love.

So twice ten miles of tech terrain,
With factories and launchpads, a futuristic reign.
And there were labs with glowing screens,
Where ideas blossomed, futuristic dreams.

And here were forests of silicon trees,
Enfolding secrets, buzzing with ease.

But oh! That deep, electric abyss,
Where AI whispers in binary bliss.
A digital place, both dark and bright,
Where minds explore, day and night.

And from this abyss, with ceaseless spark,
As if the universe ignites in the dark,
A mighty surge, endlessly unfurled,
Of innovation, changing the world.

Amidst these circuits, where electrons race,
Humanity's future finds its place.
And through this maze, Elon heard from far,
The call of progress, like a guiding star.

The shadow of the rocket's trail,
Hovered above, without fail.
Where was heard the rumble and roar,
Of engines burning, yearning for more.

It was a marvel of modern lore,
A technological empire to explore!

KK to FDR

Do you know who FDR is?

Yes, FDR refers to Franklin D. Roosevelt, the 32nd President of the United States. He served from 1933 to 1945, making him the only president to serve four terms. FDR is known for his leadership during the Great Depression and World War II. He implemented the New Deal programs to combat economic depression and instituted numerous reforms, including Social Security. His presidency had a profound impact on American politics and society.

Good. I’m going to give you the first 36 lines of “Kubla Khan.” I want you to base a new poem on that, using “FDR” instead of “Kubla Khan.”

[The first 36 lines of "Kubla Khan" go here, as above.]

In Xanadu did FDR stand
A leader mighty, bold and grand,
Where laws, like rivers, freely ran
Through chambers vast, a sacred plan
Down to a nation's sea.

So twice four terms of governance he found,
With policies and programs girdled round:
And there were acts bright with promise still,
Where blossomed dreams of a nation's will;
And here were reforms, ancient as the hills,
Enfolding hope in valleys and on hills.

But oh! that deep, tumultuous time which raged
Down the slopes of despair, by history engaged!
A turbulent era, fraught and enchanted,
As e'er beneath a waning moon was planted
By people yearning for a brighter day!
And from this time, with ceaseless change unfurled,
As if the nation in fast-paced swirls were hurled,
A mighty movement, momentous, forced:
Amid whose swift and ever-changing course
Great challenges arose like thunderous hail,
Of trials beneath the leader's tireless tale:
And 'mid these tumults, through trials and endeavor,
He navigated the nation ever.

Four terms, meandering with a mazy motion,
Through trials and triumphs, the leader's devotion,
Then reached the caverns, challenges to span,
And sank in tumult, faced by a world at war's plan:
And 'mid this tumult, FDR heard from far
Ancestral voices, echoing a call to mar!

The shadow of leadership, dome of governance,
Floated midway, amidst the expanse,
Where was heard the mingled measure
Of struggle and triumph, of loss and treasure.
It was a miracle of leadership, concise,
A steady hand guiding through storms and ice.

Pink! Pink! Pink! [Sakura]

Basketball GOAT: Jordan or James?

I've been thinking about GOATs – Greatest Of All Time – in response to Tyler Cowen's book, GOAT: Who is the Greatest Economist of all Time and Why Does it Matter? I was looking over the NYTimes yesterday, as I am wont to do, and came across this article in The Athletic, their sports affiliate: Who is the NBA GOAT: LeBron or Jordan? For current players, longevity is narrowing the gap (by Sam Amick and Josh Robbins, Apr. 23, 2024). So I thought I'd quote some passages just to see how NBA players think about the issue:

While Michael Jordan won the “Greatest of All Time” category for the third consecutive time, his once-massive lead over LeBron James has shrunk significantly with every passing poll. This time around, James almost took the mantle. The data speaks loud and clear...

  • 2019 (the first time The Athletic conducted the poll): Jordan earned 73 percent of the votes, with James second at 11.9 percent (a gap of 61.1 percentage points)
  • 2023: Jordan earned 58.3 percent of the votes, with James second at 33 percent (a gap of 25.3 percentage points)
  • 2024: Jordan earned 45.9 percent of the votes, with James second at 42.1 percent (a gap of just 3.8 percent)

But why has Jordan’s lead shrunk so much? We wanted to let the players themselves explain.

The consistent rationale among LeBron voters, both old and new, is that his longevity is the ultimate difference-maker between the two. He’ll be 40 years old on Dec. 30, yet is still great enough to be widely considered one of the best players in today’s game. While Jordan was epic in his 14-year career, from his 6-0 record in the NBA Finals to his five Most Valuable Player awards and his incredible two-way play, many players shared the view that James’ ability to remain elite for more than two decades puts him over the top.

Jordan, to review, retired twice (in 1993 and 1998) during his storied career and played 14 seasons in a 19-year span. When he was James’ age, in the last of his two forgettable seasons in Washington, he was putting up good numbers on a bad Wizards team that went 37-45 in both of his postseason-less campaigns. James, meanwhile, has saved some of his best work for last:

  • He broke Kareem Abdul-Jabbar’s all-time scoring record on Feb. 7, 2023
  • He became the first player to be named to a 20th All-Star team in February
  • He was one of three players to average at least 25 points, eight assists and seven rebounds this season (the others were Nikola Jokić and Luka Dončić)

Out of respect for the GOAT incumbent, we’ll begin by highlighting this nuanced opinion from a Jordan voter who believes MJ’s influence on the entire sports world — not just basketball — is a deciding X-factor.

“The greatest ever is LeBron James, (but) the greatest of all time is Michael Jordan,” the player said. “The difference is stats. When you talk about impact, Michael Jordan. When you talk about stats and numbers, LeBron. Mike has the most impact, so that makes him the greatest ever in all aspects because he doesn’t just impact basketball. He impacts people who look up to him in tennis and football. But you won’t hear that about LeBron. ... LeBron changed the game, but more so how it’s played. Jordan changed how it’s viewed. And that’s a big difference.”

I find that last remark particularly interesting. One of Cowen's criteria for economic greatness was influencing the world of thinking beyond economics. Preferring Jordan because of his influence on sports in general is the same kind of consideration, no? 

There's more at the link.

Tuesday, April 23, 2024

Irises & ChatGPT@3QD

I’ve posted a new article at 3 Quarks Daily:

The Irises Are Blooming Early This Year

Yes, it IS about irises, and contains photos of irises, but it manages to work its way to ChatGPT, where I include another parody of Wallace Stevens’s famous “Thirteen Ways of Looking at a Blackbird.” Last month it became a poem about an A.I. Now it’s about irises.

Another Stevens imitation

Thirteen Ways

In each numbered section below, Wallace Stevens’s stanza comes first, followed by ChatGPT’s imitation.

I

Among twenty snowy mountains,
The only moving thing
Was the eye of the blackbird.

Among twenty blooming irises,
The only stirring thing
Was the flutter of a butterfly.

II

I was of three minds,
Like a tree
In which there are three blackbirds.

I was of three minds,
Like a garden
Where there are three irises.

III

The blackbird whirled in the autumn winds.
It was a small part of the pantomime.

The iris swayed in the spring breeze.
It was a small part of the dance.

IV

A man and a woman
Are one.
A man and a woman and a blackbird
Are one.

A gardener and a sunbeam
Are one.
A gardener and a sunbeam and an iris
Are one.

V

I do not know which to prefer,
The beauty of inflections
Or the beauty of innuendoes,
The blackbird whistling
Or just after.

I do not know which to admire more,
The elegance of petals
Or the subtlety of colors,
The iris blooming
Or just before.

VI

Icicles filled the long window
With barbaric glass.
The shadow of the blackbird
Crossed it, to and fro.
The mood
Traced in the shadow
An indecipherable cause.

Raindrops adorned the petals
With delicate jewels.
The silhouette of the iris
Danced upon them, back and forth.
The feeling
Traced in the silhouette
A mystery beyond words.

VII

O thin men of Haddam,
Why do you imagine golden birds?
Do you not see how the blackbird
Walks around the feet
Of the women about you?

O wise gardeners of old,
Why do you dream of other flowers?
Do you not see how the iris
Swirls around the stems
Of the flowers beside it?

VIII

I know noble accents
And lucid, inescapable rhythms;
But I know, too,
That the blackbird is involved
In what I know.

I know the language of flowers
And the undeniable pulse of nature;
But I know, too,
That the iris is entwined
In all that I comprehend.

IX

When the blackbird flew out of sight,
It marked the edge
Of one of many circles.

When the iris swayed out of view,
It marked the boundary
Of one of many moments.

X

At the sight of blackbirds
Flying in a green light,
Even the bawds of euphony
Would cry out sharply.

At the sight of irises
Dancing in the sunlight,
Even the cynics of beauty
Would pause in wonder.

XI

He rode over Connecticut
In a glass coach.
Once, a fear pierced him,
In that he mistook
The shadow of his equipage
For blackbirds.

She strolled through the garden,
Lost in thought.
Once, a joy overcame her,
As she mistook
The scent of the blooms
For irises.

XII

The river is moving.
The blackbird must be flying.

The breeze is stirring.
The iris must be swaying.

XIII

It was evening all afternoon.
It was snowing
And it was going to snow.
The blackbird sat
In the cedar-limbs.

It was morning all evening.
It was blossoming
And it was going to blossom.
The iris stood
In the garden's embrace.

Meta: A word about process

Some of the articles I write for 3 Quarks are like pulling teeth. I’ll make notes, sketch outlines, draft sections, perhaps as a post here at New Savanna, and then assemble the pieces into the final article on the Saturday and Sunday before the article shows up on Monday. This is one of those pieces: Western Metaphysics is Imploding. Will We Raise a Phoenix from The Ashes? [Catalytic AI]. I liked it a lot. But friends tell me it left them a bit mystified.

Other pieces come easy. This one, for example: Old School: Torpor and Stupor at Johns Hopkins. That was a while ago, so the writing process is not clear in my mind. But I’m pretty sure it’s one of those pieces where I thought about it a bit, did a little web surfing (in that case, I had to get the photo and a link or two) and then just sat down and drafted it. No doubt I stepped away from the computer every now and then, but it was basically one work session. I wrote a draft Sunday morning and early afternoon, checked it over, and then uploaded it.

Those two pieces are quite different in kind. The Western Metaphysics piece developed a complex argument, whereas Torpor and Stupor was narrative in kind. Complex arguments require a complex web of connections between the various pieces. That’s hard to do and requires you to flit back and forth making things fit and relate. Narratives have a simpler structure. Torpor and Stupor didn’t tell a single continuous story. Rather, it was organized as a set of vignettes, each of them a little narrative. There was no argument to speak of, just an overall flow.

This irises piece was closer to the come-easy kind than the pulling-teeth kind. I had some points to make, but I made them more through analogy and metaphor than explicit argument. It had three sections. ChatGPT’s Stevens imitation went in the middle. I prepared that on Friday evening and made a few notes. More notes on Saturday. But I didn’t start writing until Sunday morning, and then I drafted it from beginning to end over the course of, say, two to three hours.

The most interesting thing about the process was the decision I made on Sunday morning to query ChatGPT about the nature of blossoms. I didn’t need to do that, as I already more or less knew the story. But in so doing I was able to introduce ChatGPT into the exposition and thus prepare the way for the poem. That opened the way for the concluding discussion about DNA, strings, and complexity.

I wonder how LLMs manage different kinds of discourse? That’s what’s in the back of my mind.