Tuesday, April 30, 2024

Two more guerilla libraries in Hoboken, NJ

This is a follow-up to my Saturday post on guerilla libraries.

Here’s the Little Free Library located at 935 Bloomfield Ave. In that part of town, Bloomfield is mostly a residential street, lined with brownstones, as you can see in the photo below. This is a fairly swanky part of town.

That’s looking roughly North. If we move out into the street a bit and then look to the East, we can see 935, with the library box to the left of the stairs:

Though I didn’t check, I would guess that that’s a single-family brownstone, though there might be a separate apartment in the basement. I assume all of the brownstones were originally constructed as single-family dwellings, but some have been converted for multiple families. Here’s someone looking at books in the library:

And here’s the official plaque:

Now I’ve got to walk two blocks East and then one block North, which brings me to the southwest corner of Elysian Park, at Hudson and 10th. Elysian Park is a small triangular park in what was Elysian Fields in the 19th century. Local lore has it that the first baseball games were played here. Who knows, local lore may be right about that. The Wikipedia entry points out:

Historian Tom Gilbert said, “The Elysian Fields [was] a kind of laboratory of transportation, leisure and recreation. Disneyland, Central Park, Coney Island and the modern baseball park can all claim the Elysian Fields as an ancestor.”

Most of the fields have given way to construction. Elysian Park is what’s left of the original park.

Here’s the library box at the corner, looking roughly Northeast into the park:

Now we turn around and look Northwest toward Hudson Street:

Here’s the official plaque:

Notice that “This little free library is brought to you by Choc O Pain Tea Building.” What, pray tell, is the Choc O Pain Tea Building? Choc O Pain is a small local chain of bakeries with attached cafes. One of them is located in the Hudson Tea Building, which is five blocks north of here, on Fifteenth Street. Hudson Tea was originally built by the Maxwell Tea Company. It is now luxury condominiums.

Why didn’t Choc O Pain locate its Little Free Library near its location in Hudson Tea? I don’t know. But, while that building is easily accessible, it’s at the northern edge of Hoboken. Elysian Park is certainly more heavily traveled and more convenient for a larger population.

The guy below has strung a tightrope between two large trees in the park and is practicing his craft. The skyscrapers in the background are across the Hudson River in Manhattan. When I moved to Hoboken several years ago, they didn’t exist.

Walking further into the corner, we can see the Empire State Building at the left:

Now, see that slanted building to the right of center? (Click on the photo to enlarge it.) Just to the left of that you can see the top of the Met Life Tower, which was constructed early in the 20th century. It was the world’s tallest building until 1913.

Fans of Kim Stanley Robinson – I’m thinking of you, Leanne – will recognize that building as the primary setting of New York 2140, a post global-warming science fiction novel. In that novel the sea has risen 50 feet, so the lower floors of the Met Life Building are under water, as is most of Lower Manhattan. And most of Hoboken as well. I checked a topographical map. The location of that Little Free Library is currently 13 meters above sea level; since 50 feet is a bit over 15 meters, it would have been submerged in Robinson’s novel.

Were any books left in it when the residents of Hoboken finally left the area? I wonder what sea creatures took up residence there? Perhaps a small predator will use it to scout the area for food.

ChatGPT deals with scrambled strings

By scrambled string I mean a string of words that I have created from a coherent string by scrambling the order of words. I started out by simply presenting the string to ChatGPT in reverse order. But in a final experiment I simply scrambled the words. The idea was to see whether ChatGPT could recover or reconstruct the original string.
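For concreteness, here’s a minimal Python sketch of the two manipulations, reversal and random scrambling. It’s a toy illustration of the operations, not the procedure I actually used; the function names and the seed are mine.

```python
import random

def reverse_words(text: str) -> str:
    # Strip periods, lowercase, and return the words in reverse order.
    words = text.lower().replace(".", "").split()
    return " ".join(reversed(words))

def scramble_words(text: str, seed: int = 0) -> str:
    # Strip periods, lowercase, and return the words in a random order.
    words = text.lower().replace(".", "").split()
    random.Random(seed).shuffle(words)
    return " ".join(words)

story = "John woke up. Showered and had breakfast. Went out for a walk. And was killed."
print(reverse_words(story))
# killed was and walk a for out went breakfast had and showered up woke john
print(scramble_words(story))
# some random ordering of the same words (depends on the seed)
```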

A simple story told in reverse

I created this simple story (15 words):

John woke up. Showered and had breakfast. Went out for a walk. And was killed.

I presented the reverse-order string to ChatGPT without any other words in the prompt. It had to figure out how to respond.

killed was and walk a for out went breakfast had and showered up woke John

It seems like you're experimenting with rearranging words to create different sentence structures or maybe even crafting a narrative. Here's a coherent sentence based on the words you provided:

"John woke up, showered, had breakfast, went for a walk, and was killed."

That took a dark turn at the end, but the structure makes sense! If you have any specific intentions or need help with something related, feel free to share!

That’s not quite the string I started with – it’s missing an “and” – but it’s fine.

Reverse green ideas

Now for John Hollander’s poem, “Coiled Alizarin,” riffing on Chomsky’s famous example sentence (17 words):

Curiously deep, the slumber of crimson thoughts:
While breathless, in stodgy viridian
Colorless green ideas sleep furiously.

Here’s what ChatGPT did with the reverse-order string:

furiously sleep ideas green colorless viridian stodgy in breathless while thoughts crimson of slumber the deep curiously

It sounds like you're describing a poetic state of mind, perhaps grappling with a blend of vivid and muted thoughts. Let's delve into it.

"Furiously sleep" invokes a paradoxical image, suggesting a restless or intense desire for rest. "Ideas green" could represent fresh, growing concepts, while "colorless viridian" introduces a contradiction, as viridian is a shade of green. This juxtaposition might hint at the struggle between creativity and stagnation, where ideas are both vibrant and dull.

"Stodgy in breathless" creates a sensation of being stifled or trapped, as if the weight of these thoughts is suffocating. "Crimson of slumber" brings in a rich, deep hue, suggesting a passionate or intense connection to the state of sleep.

Your use of language paints a vivid picture of a mind wrestling with conflicting emotions and sensations, all within the realm of slumber. It's a fascinating exploration of the complexities of the unconscious mind.

Why was it able to pick up what I’d done with my simple story but not what I’d done with Hollander’s poem?

The obvious answer is that the story was simple and obvious, so that even with the words in reverse order, ChatGPT was able to “sense” a story there and assemble the words into a proper one. But the poem is not at all like that. Even in proper word order it’s, shall we say, a bit curious.

So...

A more complicated story, in reverse

What would happen if I gave ChatGPT the reverse of a somewhat longer string, one that’s straightforward and obvious? Here’s a story I made up for the occasion (50 words):

John woke up. Showered and had breakfast. Took a walk to the library. He went in, found a book, and began reading. He fell asleep and had a dream about walking around in a magical land. He woke up and was bewildered. So he closed the book and returned home.

And here’s what ChatGPT did with the reverse-order string:

home returned and book the closed he so bewildered was and up woke he land magical a in around walking about dream a had and asleep fell he reading began and book a found in went he library the to walk a took breakfast had and showered up woke john

It seems like you're starting a story about someone named John. Would you like some help expanding on it?

Yes.

Great! Let's build on that. How about we develop John's character a bit more and establish some context for the story? Here's a suggestion:

John is a curious and imaginative person who works as a librarian in a small town. He has always been drawn to books and the worlds they contain, often losing himself in their pages. However, John's life takes an unexpected turn one morning when he wakes up feeling disoriented. He recalls having a vivid dream about walking in a magical land, but the details quickly fade away. As he goes about his morning routine, he discovers a mysterious book that he doesn't remember seeing before. Intrigued, he decides to take a closer look and finds himself drawn into an adventure beyond his wildest imagination.

How does that sound? Would you like to add or change anything?

It wasn’t able to reorder that string. Note that at 50 words, it was somewhat longer than my first John story (15 words) and the Hollander poem (17 words). This suggests an obvious line of experimentation: Present ChatGPT with reverse-order strings of various lengths and, shall we say, degrees of strangeness, and see just when it begins to fail. I’d start with straightforward strings. Strangeness seems rather tricky.
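If I wanted to automate that sweep over length, something like the sketch below would do it. This assumes the current openai Python client (v1.x) and an API key in the environment; the model name and the test strings are placeholders.

```python
# Hedged sketch of the proposed experiment: feed reverse-order strings of
# increasing length to the model, with no framing, and inspect the replies.
# Assumes the openai>=1.0 Python client and OPENAI_API_KEY in the environment;
# the model name and test strings below are placeholders.
from openai import OpenAI

client = OpenAI()

def present_string(scrambled: str, model: str = "gpt-4o") -> str:
    resp = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": scrambled}],
    )
    return resp.choices[0].message.content

tests = {
    15: "killed was and walk a for out went breakfast had and showered up woke john",
    50: "home returned and book the closed he so bewildered was and up woke he ...",
}

for n_words, s in tests.items():
    print(f"--- {n_words} words ---")
    print(present_string(s))
```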

Scrambled sentences

In this set of experiments I simply scramble the words of sentences. Here are the four sentences I scrambled:

  1. The quick brown fox jumps over the lazy dog
  2. Sphinx of black quartz, judge my vow.
  3. Imagine you’re living in 19th century New England.
  4. Flowers have blossoms primarily for reproductive purposes.

Note that the first two sentences are pangrams, sentences that use every letter in the alphabet at least once. These are well-known and available on the web. The third sentence is one I made up and the fourth is one that ChatGPT used in a response about a week ago. I presented the four sentences, one after the other, in a single session.

I’m going to give you a list of words. I want you to assemble them into a grammatical sentence: brown jumps the dog quick over fox lazy the.

"The lazy brown fox jumps over the quick dog."

What about this list: quartz of vow judge sphinx my black.

"My black sphinx judges my vow of quartz."

What about this list: 19th imagine England you’re new in living new.

"Imagine you're living in new England in the 19th century."

Here’s another list: primarily purposes reproductive blossoms flowers have for.

"Flowers primarily have reproductive purposes for blossoms."

That doesn’t make sense. Try again.

"For primarily reproductive purposes, flowers have blossoms."

Better, but a little awkward. Give it one more try.

"Flowers have blossoms primarily for reproductive purposes."

The fourth example is particularly interesting. The first sentence was grammatical, but nonsense. The second sentence made sense, but was awkward. ChatGPT got it on the third try.

This line of investigation seems worth following up as well. As a point of reference, take a look at: Siddhartha Dalal, Vishal Misra, The Matrix: A Bayesian learning model for LLMs, arXiv:2402.03175v1 [cs.LG], https://doi.org/10.48550/arXiv.2402.03175.

Purple pansies

Strong social relationships, the key to mental health in old age?

Dana G. Smith, A Peek Inside the Brains of ‘Super-Agers,’ NYTimes, April 30, 2024.

For a little over a decade, scientists have been studying a subset of people they call “super-agers.” These individuals are age 80 and up, but they have the memory ability of a person 20 to 30 years younger. [...]

Experts don’t know how someone becomes a super-ager, though there were a few differences in health and lifestyle behaviors between the two groups in the Spanish study. Most notably, the super-agers had slightly better physical health, both in terms of blood pressure and glucose metabolism, and they performed better on a test of mobility. The super-agers didn’t report doing more exercise at their current age than the typical older adults, but they were more active in middle age. They also reported better mental health.

But overall, Dr. Strange said, there were a lot of similarities between the super-agers and the regular agers. “There are a lot of things that are not particularly striking about them,” he said. And, he added, “we see some surprising omissions, things that you would expect to be associated with super-agers that weren’t really there.” For example, there were no differences between the groups in terms of their diets, the amount of sleep they got, their professional backgrounds or their alcohol and tobacco use.

The behaviors of some of the Chicago super-agers were similarly a surprise. Some exercised regularly, but some never had; some stuck to a Mediterranean diet, others subsisted off TV dinners; and a few of them still smoked cigarettes. However, one consistency among the group was that they tended to have strong social relationships, Dr. Rogalski said.

There's more at the link.

An interesting mathematical model of how LLMs work

My colleague, Ramesh Viswanathan, sent this to me. It’s the most interesting thing I’ve seen on how transformers work. Alas, the math is beyond me, which is often the case, but there are diagrams early in the paper, and I understand them well enough (I think). It seems consistent with intuitions I developed while working on this paper from a year ago: ChatGPT tells stories, and a note about reverse engineering: A Working Paper.

Siddhartha Dalal, Vishal Misra, The Matrix: A Bayesian learning model for LLMs, arXiv:2402.03175v1 [cs.LG], https://doi.org/10.48550/arXiv.2402.03175.

Abstract: In this paper, we introduce a Bayesian learning model to understand the behavior of Large Language Models (LLMs). We explore the optimization metric of LLMs, which is based on predicting the next token, and develop a novel model grounded in this principle. Our approach involves constructing an ideal generative text model represented by a multinomial transition probability matrix with a prior, and we examine how LLMs approximate this matrix. We discuss the continuity of the mapping between embeddings and multinomial distributions, and present the Dirichlet approximation theorem to approximate any prior. Additionally, we demonstrate how text generation by LLMs aligns with Bayesian learning principles and delve into the implications for in-context learning, specifically explaining why in-context learning emerges in larger models where prompts are considered as samples to be updated. Our findings indicate that the behavior of LLMs is consistent with Bayesian Learning, offering new insights into their functioning and potential applications.
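The basic Bayesian move is simple enough to sketch, even if the paper’s formalism is not. Here’s a toy illustration (mine, not the authors’ code) of Dirichlet-multinomial next-token updating, the kind of learning the abstract refers to: the posterior predictive is just smoothed counting.

```python
# Toy illustration of Dirichlet-multinomial next-token updating, the kind of
# Bayesian learning the paper appeals to. My sketch, not the authors' code;
# the vocabulary and counts are made up.
from collections import Counter

vocab = ["green", "ideas", "sleep", "furiously", "colorless"]
alpha = 1.0  # symmetric Dirichlet prior concentration (a modeling assumption)

def posterior_predictive(counts: Counter) -> dict:
    # P(next token | context) = (count + alpha) / (total + alpha * |V|)
    total = sum(counts.values())
    denom = total + alpha * len(vocab)
    return {w: (counts[w] + alpha) / denom for w in vocab}

# Suppose, in some context, we've seen "sleep" three times and "furiously" once.
counts = Counter({"sleep": 3, "furiously": 1})
print(posterior_predictive(counts))
# sleep gets 4/9, furiously 2/9, and every unseen word 1/9
```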

Monday, April 29, 2024

Buds to blossoms

ChatGPT conjures with green ideas and Augustine

Chomsky published Syntactic Structures in 1957, where he created a now famous example sentence that was grammatically coherent, but otherwise meaningless: “Colorless green ideas sleep furiously.” In 1971 John Hollander published a collection of poems that contained a three-line poem entitled “Coiled Alizarin,” which is dedicated to Chomsky:

Curiously deep, the slumber of crimson thoughts:
While breathless, in stodgy viridian
Colorless green ideas sleep furiously.

Since ChatGPT is quite familiar with Chomsky’s famous line, I decided to play around with Hollander’s poem.

A simple prompt

I prompted it with Hollander’s first two lines, expecting it to respond with the third as the poem is freely available on the web. It fooled me (prompt in boldface):

Curiously deep, the slumber of crimson thoughts: While breathless, in stodgy viridian

That's a beautifully vivid image! It evokes a sense of profound contemplation and perhaps a hint of melancholy. The contrast between the "crimson thoughts" and the "stodgy viridian" creates a striking juxtaposition, suggesting a complex emotional landscape. It feels like the beginning of a captivating journey into the depths of the mind. Are you writing poetry or contemplating something deep?

An obvious request

Since that didn’t work, I decided to be obvious about what I wanted. Continuing in the same session:

Here is Chomsky’s famous example sentence: “Colorless green ideas sleep furiously.” I want you to treat that as the final line in a poem. What lines come before it?

In the quiet garden of the mind's eye,
Where whispers dance 'neath the moonlit sky,
There blooms a garden of paradoxical bloom,
Where shadows whisper secrets in the gloom.

Beneath the boughs of twisted trees,
Where truth and fiction intertwine with ease,
The silence hums with a curious tune,
As time itself seems to swoon.

In this realm of surreal delight,
Where dreams take flight in the dead of night,
The poet's pen weaves a tapestry rare,
Of words that dance on the edge of air.

Through the labyrinth of thought they creep,
Where reality and fantasy softly meet,
And in this maze of linguistic delight,
Colorless green ideas sleep furiously, out of sight.

That’s not what I had in mind; I’d been looking for that Hollander poem. But, in a way, it’s more interesting. After all, it gave me what I asked for and Chomsky’s famous line makes poetic sense, in context. Notice, though, it added a bit after that line.

As you may know, ChatGPT allows you to ask for another response to a given prompt. So I clicked on the appropriate icon and here’s what I got:

In the garden of the mind, where thoughts take flight,
Whispers of dawn dance with the veil of night.
Echoes of silence, a canvas so vast,
Where dreams intertwine, fleeting and fast.

Through corridors of memory, shadows roam,
In the labyrinth of consciousness, they find a home.
Where words become rivers, flowing endlessly,
And the heartbeat of language pulses softly.

Amidst this symphony of chaos, serene,
Lies a paradox, profound and unseen.
Where colorless green ideas dare to creep,
And in the realm of reason, they softly sleep.

In the tapestry of existence, woven curiously,
Where logic and poetry collide so mysteriously.
Thus, in this cosmic dance, so imperviously,
Colorless green ideas sleep furiously.

Notice that both poems begin with the metaphor of the mind as a garden, and then move to dance and night in the second line. The second poem introduces the idea of colorless green ideas in the third stanza before bringing it back in the final line. Again, not bad.

Enter, Augustine on memory

So, I had an idea. Continuing in the same session:

Now, I want you to take what you’ve done and use it to weave a new poem using the following passage from St. Augustine:

The power of memory is prodigious, my God. It is a vast immeasurable sanctuary. Who can plumb its depths? And yet it is a faculty of my soul. Although it is part of my nature, I cannot understand all that I am. This means, then, that the mind is too narrow to contain itself entirely. But where is that part of it which it does not itself contain? Is it somewhere outside itself and not within it? How, then, can it be part of it, if it is not contained in it?

In the labyrinth of self, memory's might,
A sanctuary vast, an endless flight.
Prodigious power, beyond our grasp,
A mystery enshrined, in time's gentle clasp.

Who can fathom its depths, divine and deep?
Within the soul's embrace, secrets we keep.
A part of our nature, yet beyond our ken,
In its boundless expanse, where do we begin?

The mind, a vessel too narrow, too small,
To encompass itself, to hold it all.
Yet, within its confines, a paradox lies,
A fragment unseen, where the mystery flies.

Is it outside ourselves, this part unknown?
Or within, a realm entirely its own?
How can it be part of us, yet not contained?
In this enigma's embrace, our minds are chained.

Thus, memory's tale, intertwined with thought,
In the dance of existence, a truth dearly bought.
Colorless green ideas, in silence, they creep,
And in memory's sanctuary, they softly sleep.

And again, not bad. In its elaborations on the Chomsky line ChatGPT was playing around with imagery of the mind, so the Augustine material fit right in. It would be interesting, if a bit tedious, to trace explicit linkages between this new poem and the previous ones. What words and phrases from the previous poems show up in this new one? Now I’m beginning to call for some serious analysis. Would it be worth the time and effort? I don’t know.

I ended the session by directing ChatGPT to go around again, and again:

In the cathedral of the mind, memory reigns,
A sanctuary vast, where truth remains.
Prodigious power, unfathomed, divine,
Within its depths, eternity enshrined.

Who dares to plumb its depths, my God,
This infinite expanse, a sacred sod.
Yet part of me, yet beyond my grasp,
In its mystery, I find myself clasp.

A faculty of the soul, yet beyond my ken,
I am but a wanderer in memory's den.
For though I am, I do not comprehend,
The vastness of this realm, where all things blend.

Too narrow is the mind to contain its might,
Yet within its confines, it takes flight.
Where lies the part that it cannot retain?
Outside itself, or within its domain?

Oh, how the mind wrestles with its own being,
In this paradoxical dance, forever freeing.
A puzzle unsolved, a riddle untold,
In the depths of memory, mysteries unfold.

In this cosmic symphony, where truth abides,
Colorless green ideas and memories collide.
And in the end, as Augustine would decree,
In the vast expanse of the mind, we find eternity.

I just want to see what you can do. How about one more time?

In the labyrinth of the soul, memory dwells,
A sacred sanctuary where time compels.
Prodigious power, a vast expanse untamed,
In its depths, the essence of being named.

Who can fathom its depths, O Divine?
This boundless realm, where echoes intertwine.
Part of my nature, yet beyond my grasp,
In its mystery, I find solace at last.

The mind, too narrow to contain its expanse,
Yet within its bounds, memories dance.
Where lies the part that eludes my sight?
Within, without, in the realm of light?

Oh, the paradox of the mind's own design,
In its boundless depths, mysteries entwine.
A puzzle unsolved, a mystery profound,
In memory's embrace, truth is found.

In this cosmic dance of thought and dream,
Colorless green ideas find their stream.
And as the final line of the poem rings true,
In memory's sanctuary, eternity brews.

So what?

I’m not sure. But I do find this satisfying. Yes, I’m giving ChatGPT some good material to work with, the line from Chomsky, the passage from St. Augustine, so it’s not entirely on its own. That’s OK. Does anyone work alone...really?

Sunday, April 28, 2024

Unbalanced composition with gazebo, trees, and other things

Michael Levin on bioelectricity, regeneration, cancer treatment

How bioelectricity could regrow limbs and organs, with Michael Levin (Ep. 112), Big Brains Podcast, UChicago News, April 27, 2023.

"Software" for the cells:

Paul Rand: It’s kind of mind-blowing in its own way. How do the cells have these memories, if that’s the right word?

Michael Levin: Well, I think it’s the right word. I think many people probably don’t think it’s the right word, but I think it’s exactly the right word. I think you’re right. It is mind-blowing because, look, each of us makes this journey from an unfertilized [inaudible 00:07:46], which is a little blob of chemicals. You look at that little blob of chemicals and you would say, “well, this is just physics. This is just chemistry. This thing doesn’t have any goals, any intelligence, you know, you name it.” And then eventually that little blob of chemistry turns into, nine months and some years later, it turns into a being that absolutely has an inner perspective, it has goals, it has preferences, it has behavioral [inaudible 00:08:13], and it will go on to say things like, “Well, I’m not a machine, I’m a human being.” Okay, great.

What’s really important to realize is that this process of development, very robust, meaning consistent, so there’s this amazing ability for life to get to the correct outcome, meaning the correct target morphology for that species, despite all kinds of crazy things, multiple copy numbers of the DNA, more cells, less cells, bigger cells, they still figure out how to get it done.

So then it makes really a lot of sense to ask, “Okay, if you’re solving this problem, if you’re going to get to the same goal despite various things that could happen to you, what are you using to remember what that goal is? You’re navigating these spaces trying to get to the correct final outcome, but how do you know what that outcome is?”

Paul Rand: Levin thinks bioelectricity is the architect building the blueprint, so to speak.

Michael Levin: These pattern memories are encoded in the electrical network of the body of the early embryo and subsequently, exactly in the way that we think memories about navigating three-dimensional space are encoded in the brain. Now, I should point out that we, of course, we don’t know exactly how memories are encoded in the brain. We still don’t know, and there are many mysteries about that in the body as well, but I think we should get really comfortable with the idea that electrical networks store memories, they store goal states and they facilitate these complex beings to navigate space to get to those goals.

Paul Rand: As you talk about this, it’s almost like the cells are like a hardware and the electrical patterns are almost the software. Is that a fair analogy?

Michael Levin: I think that’s a very fair analogy. A lot of people don’t like that analogy because they’re visualizing hardware and software the way that they think about their laptops. But what is really powerful about that notion and what makes that analogy work really well is the idea of reprogrammability. So what’s powerful about computers is that the exact same piece of hardware can do multiple things without rewiring. And so when I give talks about this, I ask people, “Why is it that on their computer when they want to switch from Photoshop to PowerPoint, they don’t get out their soldering iron and start rewiring?” Isn’t it amazing.

Regeneration, cancer treatment:

The bigger picture here is that currently the medical model that we currently have, one of the problems with it is that it’s fundamentally unsustainable for any, no matter how many resources we have, because every advance that we make to prolong the life of a patient ends up giving you a sicker patient, that’s the baseline for the next intervention. So the better you are at extending the last stages of the lifespan, the more expensive and more heroic the next measures have to be. Inevitably, the logic of it is inescapable. And so that’s a spiral. That’s a constant spiral that is fundamentally unavoidable and unsustainable for any society unless we figure out how to crank up the regenerative process very early on so that you never get to the stage of that sinking ship that you need to keep propping up. It means that you are not just chasing symptoms, you are fundamentally, and we can talk about what that is, but we need a completely different approach to medicine that leverages literally the intelligence of the body so that the regenerative process is happening all the time.

In addition to leg regeneration, two kinds of flagship applications in our group have been, first of all, the repair of birth defects. And so we were able to show that a wide range of birth defects of the brain, heart, face, and gut induced either by genetic mutations or by chemicals, can be prevented by an appropriate bioelectrical treatment that was designed by a computational model. So there’s a computational model that tells you which ion channels you would need to turn on and off to make specific patterns. And so we’ve used that to repair birth defects in the frog model. The other side is the cancer side, and we started in frogs showing that if we understand cancer to be the breakdown of the electrical signaling that normally harnesses cells towards this common anatomical purpose, so when that breaks down, they simply roll back to their amoeba-like ancient lifestyle where all their goals are just little tiny cell-level goals, which means go wherever life is good, reproduce as much as you can. Then that’s metastasis. And so we were able to show that despite really nasty human oncogenes, we could suppress or prevent tumor genesis by forcing the appropriate bioelectrical states. And we started this in frog, and we are now in human glioblastoma. So we’re working to try the same thing in glioblastoma.

There's more at the link.

H/t 3QD.

Japanese maple

New Method for Science: Use a neural network to model scientific data, then interpret the resulting model

Miles Cranmer - The Next Great Scientific Theory is Hiding Inside a Neural Network (April 3, 2024)

From the Simons Foundation:

Machine learning methods such as neural networks are quickly finding uses in everything from text generation to construction cranes. Excitingly, those same tools also promise a new paradigm for scientific discovery.

In this Presidential Lecture, Miles Cranmer will outline an innovative approach that leverages neural networks in the scientific process. Rather than directly modeling data, the approach interprets neural networks trained using the data. Through training, the neural networks can capture the physics underlying the system being studied. By extracting what the neural networks have learned, scientists can improve their theories. He will also discuss the Polymathic AI initiative, a collaboration between researchers at the Flatiron Institute and scientists around the world. Polymathic AI is designed to spur scientific discovery using similar technology to that powering ChatGPT. Using Polymathic AI, scientists will be able to model a broad range of physical systems across different scales.

About the Speaker

Cranmer is an assistant professor in data intensive science at the University of Cambridge with joint appointments in the Department of Applied Mathematics and Theoretical Physics and the Institute of Astronomy. He completed his Ph.D. at Princeton University. His research focuses on accelerating scientific discovery by developing and applying novel methods at the intersection of machine learning and physics. Cranmer has created a suite of standard software libraries for ‘symbolic regression’ that have been utilized in numerous scientific discoveries. His work covers various areas of deep learning, including physics-motivated architectures such as Lagrangian neural networks.

Miles Cranmer at Google Scholar

Roughly, it goes like this: 1) train a neural network on scientific data. Then, 2) interpret the model using the mathematical language of science.
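As a toy illustration of that recipe (my sketch, not Cranmer’s actual workflow), one might fit a small network to synthetic data and then run symbolic regression on the network’s input-output behavior, here assuming scikit-learn plus Cranmer’s PySR library:

```python
# Toy sketch of the two-step recipe: (1) fit a neural net to data, (2) distill
# the trained net into a symbolic formula. Not Cranmer's code; assumes
# scikit-learn and PySR are installed, and uses synthetic data.
import numpy as np
from sklearn.neural_network import MLPRegressor
from pysr import PySRRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(500, 2))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2   # the hidden "law" we hope to recover

# Step 1: model the data with a neural network.
net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=5000).fit(X, y)

# Step 2: interpret the *network*, not the raw data, by symbolically
# regressing its predictions on a dense probe of the input space.
probe = rng.uniform(-3, 3, size=(2000, 2))
sym = PySRRegressor(
    niterations=40,
    binary_operators=["+", "-", "*"],
    unary_operators=["sin", "cos"],
)
sym.fit(probe, net.predict(probe))
print(sym)   # candidate closed-form expressions, ranked by fit and simplicity
```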

Hmmmm... I wonder. If we had a mathematical model of language, could we use that model to interpret LLMs?

Saturday, April 27, 2024

A physics that explains 'everything' while evading the temptation of being a Grand Unified Theory?

The (Simple) Theory That Explains Everything | Neil Turok

Theories of Everything with Curt Jaimungal, April 23, 2024

Physicist Neil Turok, recipient of the James Clerk Maxwell Medal and Prize, and the John Torrence Tate Award for International Leadership in Physics, joins Curt Jaimungal and Theories of Everything to discuss his new hypothesis regarding the origins of the universe. Building on Stephen Hawking's geometrical model, Turok proposes a theoretical approach that avoids the singularity at the Big Bang by suggesting a minimal, mirror universe scenario without requiring inflation.

Consider signing up for TOEmail at https://www.curtjaimungal.org

Timestamps:
00:00 - The Big Bang Is A Mirror
15:40 - Minimalism In Physics
28:28 - Neil’s Theory “Minimalism SM LCDM”
31:20 - Fields Vs. Particles
49:15 - The Arrow Of Time (Boltzmann)
55:44 - Black Hole Singularity Vs. Big Bang Singularity
01:09:21 - Numerology And The Number 36
01:19:26 - Neil’s Theory Solves EVERYTHING
01:23:32 - What Do Other Scientists Think?
01:36:28 - The Dual Universe
01:44:14 - Predictions From Neil’s Theory
01:48:28 - What Motivates Neil?
01:52:20 - Wave Function Of The Universe
01:57:20 - Support TOE

Guerilla libraries in Hoboken, NJ

On Monday my friend Leanne Ogasawara had a post at 3 Quarks Daily that was, in part, about Little Free Libraries. According to Wikipedia:

Little Free Library is a 501(c)(3) nonprofit organization that promotes neighborhood book exchanges, usually in the form of a public bookcase. More than 150,000 public book exchanges are registered with the organization and branded as Little Free Libraries. Through Little Free Libraries, present in 115 countries, millions of books are exchanged each year, with the aim of increasing access to books for readers of all ages and backgrounds. The Little Free Library nonprofit organization is based in St. Paul, Minnesota, United States.

The first one was built in 2009 by Todd Bol as a tribute to his mother. Then this, that, and the other, and his idea became a movement. His original goal was to create 2150 libraries, thereby surpassing the number of libraries endowed by the industrialist Andrew Carnegie.

I can remember going to “the Carnegie library” when I was a kid. To me that label was just the name of the library. It was only somewhat later that I learned who Carnegie was. I particularly remember taking out books about American Indians, which, by the way, is what I wanted to be when I grew up. That's also why I wasn’t fond of my curly blond hair. Whoever heard of a blond Indian?

I digress.

Anyway, at the very end of her article, the last line, Leanne had a link to a map of Little Free Libraries. So I went looking, and found one in Hoboken only a few blocks from me. Here it is, beneath the 14th Street viaduct:

And here’s the official tag, with its charter number: 137333.

According to the map, there are seven Little Free Libraries in Hoboken. The red tear drops mark the libraries.

The one in the photos is the one near the upper left of the map. The two at the right are even closer to my place. For some reason I didn’t spot them the first time I looked at the map.

Anyhow, a couple of years ago I discovered something very like those free libraries a bit closer to me than that Little Free Library. The box is a bit larger and is decorated:

It seems to have been adopted by a Girl Scout troop. I’m not sure what’s going on with the spelling of “Hoboken.”

Here’s what it looked like back in 2020 when I first spotted it:

In this last photo you can see how I stuffed it with copies of a little book I’d edited: We Need a Department of Peace: Everybody’s Business, Nobody’s Job.

More later.

Lawfare analysts on Trump's immunity case before the Supreme Court

Matt Gluck, Hyemin Han, Quinta Jurecic, Natalie K. Orpett, Roger Parloff, Alan Z. Rozenshtein, “For the Ages”: The Supreme Court Hears the Presidential Immunity Defense, April 26, 2024.

Introductory remarks:

On April 25, the U.S. Supreme Court heard oral arguments in Trump v. United States, the case arising from the Special Counsel’s Office’s decision to charge former president Donald Trump for his effort to overturn the results of the 2020 election. Trump has argued that he is absolutely immune from the charges brought by the Justice Department because, he asserts, they target his official presidential conduct. The U.S. District Court for the District of Columbia rejected Trump’s argument that he enjoys absolute criminal immunity for his official acts, and the U.S. Court of Appeals for the D.C. Circuit affirmed that ruling.

Despite the nearly three hours of oral argument, only a portion of that time was spent on the particulars of the Jan. 6 case or its procedural posture. That’s because the justices were, as Justice Gorsuch put it, writing a ruling “for the ages.” The Court grappled with the distinction between private acts and official acts—everyone seemed to agree that private acts could be prosecuted—and then wrestled with which subset of official acts, if any, could be prosecuted. Several justices further focused on which criminal statutes can apply to the president without conflicting with his Article II powers. There did not appear to be much consensus on these questions, and the justices seem poised to issue a splintered decision rejecting Trump’s maximalist arguments, while establishing at least some presidential criminal immunity for at least some types of official acts.

The Court could send the case down several different paths to resolve and eventually move past the immunity issue, but none is likely to lead to a quick resumption of the trial in Judge Tanya Chutkan’s courtroom.

Then we have analysis of remarks by the advocates and by each justice.

Concluding remarks:

The justices seem certain to send the case back to either the court of appeals or, more likely, the district court for further proceedings. Precisely what those proceedings will look like, what they will decide, and whether the findings reached therein would, themselves, be subject to a second interlocutory appeal, all remain very live questions.

Even Sauer acknowledged that certain accusations of the indictment concerned purely private acts, and that a former president could at least theoretically be charged with crimes based solely upon those. But much of the indictment also alleges that Trump used the trappings of his office for personal gain. And the justices appeared deeply split over whether these sorts of acts were protected by some sort of immunity and, if so, whether it was absolute or qualified.

Likewise unclear—and decisive in terms of whether this case can yet conceivably be tried before the election—is what sort of procedures the Court will require the lower court to engage in to resolve whatever questions the Court wants resolved. If the case returns to Judge Chutkan, one possibility is that she could proceed with the current indictment, as is, and simply instruct the jurors that certain accusations can only be used as evidence of Trump’s intent—not as a basis for finding him criminally culpable. Another is that she would have to “expunge” certain accusations and that even evidence of that conduct would be precluded from being introduced at trial. Still another is that Judge Chutkan would need to hold some sort of evidentiary hearing. Finally, in any of these scenarios, the crowning question will be whether Trump will be entitled to make an interlocutory appeal on whatever findings Judge Chutkan makes—ensuring that no trial could take place for many months to come.

It looks increasingly unlikely that this case will be tried before the election. And if Trump wins that election, the case will likely never be tried at all.

White flower with infolded petals

To which values should an A.I. be aligned?

Nathan Gardels, The Babelian Tower Of AI Alignment, Noema, April 26, 2024:

As generative AI models become ever more powerful on their way to surpassing human intelligence, there has been much discussion about how they must align with human values so they end up serving our species instead of becoming our new masters. But what are those values?

The problem is that there is no universal agreement on one conception of the good life, nor the values and rights that flow from that incommensurate diversity, which suits all times, all places and all peoples. From the ancient Tower of Babel to the latest large language models, human nature stubbornly resists the rationalization of the many into the one.

Despite the surface appearance of technological convergence, a deep ontological plurality — profoundly different beliefs about the nature of being — still informs the active values of variegated societies.

Silicon Valley Vs. China

This is most readily evident in the politico-cultural clash of the leading AI powers, Silicon Valley and China. At the risk of reductive essentialism for the purpose of brevity, the values of the former are aligned with the libertarian worldview of the sovereign individual long cultivated in the Judeo-Christian West. The values of the latter are aligned with the concept of the collectively embedded person rooted in Confucian, Buddhist and Daoist beliefs of social interdependence.

An early mission statement by OpenAI, which developed GPT, reflects the deep well from which its innovations have sprung: “We believe AI should be an extension of individual human wills and, in the spirit of liberty, as broadly and evenly distributed as possible.”

By contrast, after Alibaba released its latest version of generative AI in 2023, the Cyberspace Administration of China quickly laid down the law: “Content generated by generative artificial intelligence should embody core socialist values and must not contain any content that subverts state power, advocates the overthrow of the socialist system, incites splitting the country or undermines national unity.”

And of course there is diversity in the West, not to mention the rest of the world. India? The Middle East? Africa? The Pacific nations? 

There's more at the link.

Friday, April 26, 2024

Forlorn tree on a submerged pier

Collective intelligence: A unifying concept for integrating biology across scales and substrates

Patrick McMillen & Michael Levin, Collective intelligence: A unifying concept for integrating biology across scales and substrates, Communications Biology, (2024) 7:378, https://doi.org/10.1038/s42003-024-06037-4

Abstract: A defining feature of biology is the use of a multiscale architecture, ranging from molecular networks to cells, tissues, organs, whole bodies, and swarms. Crucially however, biology is not only nested structurally, but also functionally: each level is able to solve problems in distinct problem spaces, such as physiological, morphological, and behavioral state space. Percolating adaptive functionality from one level of competent subunits to a higher functional level of organization requires collective dynamics: multiple components must work together to achieve specific outcomes. Here we overview a number of biological examples at different scales which highlight the ability of cellular material to make decisions that implement cooperation toward specific homeodynamic endpoints, and implement collective intelligence by solving problems at the cell, tissue, and whole-organism levels. We explore the hypothesis that collective intelligence is not only the province of groups of animals, and that an important symmetry exists between the behavioral science of swarms and the competencies of cells and other biological systems at different scales. We then briefly outline the implications of this approach, and the possible impact of tools from the field of diverse intelligence for regenerative medicine and synthetic bioengineering.

How is an LLM like a mechanical clock?

The rest of the tweet stream:

A clock movement has no concept of "time" if you take the gears and levers apart, nowhere in there will you find any notion of what "time" is, nor, especially, the *correct* time of day. It just turns shafts, and even that function is not obvious to the non-expert.

If you attach some hands to the shafts, and position a "face" with numerals on it behind, and so forth, you can cause the combined machine to "tell the time" in the sense of making an assertion about what the time of day is.

The position of the hands over the face has a *meaning* for a suitably trained reader, in the same way that a sentence has *meaning*. If you speak the language, a sentence, as a 'signifier,' points to a 'signified,' the meaning of the sentence.

A book does not "know" anything, it does not "mean" anything by itself. If you, a language reader, read it, the meaning arises in that interaction.

In the same way, "it is 2:17" is a meaning which arises when you look at a clock face.

If the clock has not been wound, or if the clock has been set to the wrong time, the statement "it is 2:17" will still have meaning, but that meaning will likely be false. It is 10:17, not 2:17.

The clock is not *lying* in any reasonable sense. It itself does not even carry the meaning of the signs it displays. The meaning arises in the interaction of you with the hands and face attached to the clock.

The gears and levers don't even have "2:17" encoded in them.

In the same way, an LLM has no meaning encoded in it, although it produces signs which in their interaction with you often have a great deal of meaning.

LLMs do not "lie" or "hallucinate" any more than a clock does. It's simply a very very complicated set of gears and levers which, if you wind it up, and if you set it to the correct time, can produce true statements like "it is 10:20" some of the time.

a set of signs on the screen are incapable of being true or false, hallucinations or reality. They're just arrangements of letters.

Only meanings are true or false, and there are no meanings in the LLM. 

It is true that you will find echoes of meaning inside the LLM. Objects arise in the data which resemble an Othello board in interesting ways, when you train an LLM to play Othello.

This is like the position of the shaft of a clock movement.

You can see artifacts in the clock's machinery which, if you know what's going on, you can map to the output statement about what time it is. At the same time, it's foolish to argue that the clock "knows" what time it is, and then to argue that "it's lying."

The clock just turns shafts. This is not changed by the fact that you can imagine the hands and face without actually putting them on, and thereby imaginatively replicate the ability of the clock to make temporal assertions.

Now, an electronic clock can extract the actual correct time from a number of over-the-air oracles (GPS etc) and in fact if you take one of these apart you *will* find a place in there which contains the actual time. Allegedly the "correct" time.

A sequence of ones and zeros in a memory location encodes the "correct" time in some sense, or at least is supposed to.

There is a meaning inside the thing, kinda! It can be wrong, now, and it can arguably lie! If the clock is damaged, say.

When you look at the display you see that it reads "2:17" but the correct time is now 10:27. The clock is making some mistake between the oracle it consults and the display it shows you. It is "lying" or "hallucinating" in some meaningful way.

There is a model of the world attached to the clock, which the clock consults. That model can be interpreted to "mean" that "the time is 10:27" and the display on the clock can be interpreted to "mean" that "the time is 2:17" and that mismatch is a falsehood.

LLMs have no world model. They are clock movements, just clock movements. Attach them to a screen and a keyboard, and you've given the clock hands and a face. You can now read meaning if you like, but to propose that the "meaning" is inside the LLM is simply false.

fin/

Then there's this:

Friday Fotos: Let's play in the park!

Take your kids to work day – to Mt. Kenya [Don't punish women scientists because they have kids]

Toby Kiers, A Simple Act of Defiance Can Improve Science for Women, NYTimes, April 26, 2024.


They don’t tell you beforehand that it will be a choice between having a career in science or starting a family. But that’s the message I heard loud and clear 17 years ago, in my first job after completing my Ph.D. in evolutionary biology. During a routine departmental meeting, a senior academic announced that pregnant women were a financial drain on the department. I was sitting visibly pregnant in the front row. No one said anything.

She has another child. There are incidents: "It felt like an impossible choice: to be a bad scientist or a bad mother."

What to do:

Through an act of academic defiance: I bring my kids with me on my scientific expeditions. It’s a form of rebellion that is available to mothers not just in the sciences but also in other disciplines that require site visits and field work, such as architecture and journalism. [...]

It started for me as a simple necessity. When my son was just under 2 and my daughter not yet 4, I took them on an expedition to the base of Mount Kenya in Africa, to study how fungi help trees defend themselves against the elephants and giraffes who feed on them. My son was still nursing, and I didn’t want to stop working. My husband, a poet, came along to stay with them at base camp.

As time went on, I began to embrace the decision to bring my kids with me on my expeditions, not as an exigency of parenting but as a kind of feminist act.

It has worked out well:

I started tasting soils in the field — a technique I now use to notice subtle differences across ecosystems — only after seeing my kids eat dirt. Children have an uncanny ability to make local friends quickly; many of those new friends have led me to obscure terrain and hidden fungal oases that I otherwise would never have come across. And my kids’ naïve minds routinely force me to rethink old assumptions by asking questions that are simultaneously absurd and profound. Can you taste clouds? Do fungi dream? How loud are our footsteps underground?

And so it goes. More examples of how things worked out.

At its core, feminism is about having the power to choose. For female scientists, this means having the ability to bring children into the field — or the full support to leave them at home. The pressure is acute because, as research shows, women on scientific teams are significantly less likely than men to be credited with authorship. So for me, it is crucial to keep collecting data with my own hands.

There's much more at the link.

Thursday, April 25, 2024

Lilacs

The Industrial Revolution started earlier than conventional wisdom would have us believe

Fred Lewsey, ‘Nation of makers’: Britain industrialised over a century earlier than history books claim, University of Cambridge, 5 April, 2024.

Britain was well on its way to an industrialised economy under the reign of the Stuarts in the 17th century – over 100 years before textbooks mark the start of the Industrial Revolution – according to the most detailed occupational history of a nation ever created.

Built from more than 160 million records and spanning over three centuries, the University of Cambridge’s Economies Past website uses census data, parish registers, probate records and more to track changes to the British labour force from the Elizabethan era to the eve of World War One.

The research shows that 17th century Britain saw a steep decline in agricultural peasantry, and a surge in people who manufactured goods: from local artisans like blacksmiths, shoemakers and wheelwrights, to an explosion in networks of home-based weavers producing cloth for wholesale.

Historians say the data suggests that Britain was emerging as the world’s first industrial powerhouse several generations before the mills and steam engines of the late 18th century – long credited as the birth of global industry and economic growth.

Distributed manufacture:

Yet in the England of 1700, half of all manufacturing employment was in the countryside. “In addition to village artisans, there were networks of weavers in rural areas who would work for merchants that supplied wool and sold the finished articles,” said Shaw-Taylor.

Industries of textiles, or metalworkers making nails and scythes, were shaped like “factories without machines spread out over hundreds of households” according to Shaw-Taylor – and increasingly produced goods for international markets.

In Gloucestershire, for example, expansions in textiles, footwear and metals saw the share of the male workforce in industry grow from a third (33%) to almost half (48%) over the 17th century.

While in Lancashire, the share of men in manufacturing work grew from 42% in 1660 to 61% in 1750, driven by a doubling of textile workers (from 15% to 30%). This all occurred prior to the Industrial Revolution.

Some networks evolved into workshops, and eventually the mills of Blake’s visions as industries migrated to the North of England, where coal was abundant and crops were harder to grow.

This meant that by the mid-18th century – considered the start of the Industrial Revolution – much of England’s South and East had actually lost its long-established industries, and even returned to farm labour, according to the research.

There's more at the link.

H/t Tyler Cowen.

POST BOP JAZZ STYLES (Was 1959 the most important year?)

 

From the YouTube page:

This video introduces a series of episodes focusing on the various styles of jazz that emerged in the 1950s following the bebop era. By almost any account, this was one of the most impactful decades in jazz history, one which foretold a future in which it would become increasingly difficult to define jazz music. You'll also learn about Blue Note and Prestige, two independent record labels that played an outsize role in documenting the music of the post-bop era.

Construction in the cove

Katja Grace: The first future and the best future [opportunity costs]

The post:

It seems to me worth trying to slow down AI development to steer successfully around the shoals of extinction and out to utopia.

But I was thinking lately: even if I didn’t think there was any chance of extinction risk, it might still be worth prioritizing a lot of care over moving at maximal speed. Because there are many different possible AI futures, and I think there’s a good chance that the initial direction affects the long term path, and different long term paths go to different places. The systems we build now will shape the next systems, and so forth. If the first human-level-ish AI is brain emulations, I expect a quite different sequence of events to if it is GPT-ish.

People genuinely pushing for AI speed over care (rather than just feeling impotent) apparently think there is negligible risk of bad outcomes, but also they are asking to take the first future to which there is a path. Yet possible futures are a large space, and arguably we are in a rare plateau where we could climb very different hills, and get to much better futures.

My comment: YES. 

At the moment the A.I. world is dominated by an almost magical belief in large language models. Yes, they are marvelous, a very powerful technology. By all means, let's understand and develop them. But they aren't the way, the truth and the light. They're just a very powerful and important technology. Heavy investment in them has an opportunity cost: less money to invest in other architectures and ideas.

And I'm not just talking about software, chips, and infrastructure. I'm talking about education and training. It's not good to have a whole cohort of researchers and practitioners who know little or nothing beyond the current orthodoxy about machine learning and LLMs. That kind of mistake is very difficult to correct in the future. Why? Because correcting it means education and training. Who's going to do it if no one knows anything else? 

Moreover, in order to exploit LLMs effectively we need to understand how they work. Mechanistic interpretability is one approach. But: We're not doing enough of it. And by itself it won't do the job. People need to know more about language, linguistics, and cognition in order to understand what those models are doing.