NEW SAVANNA
“You won't get a wild heroic ride to heaven on pretty little sounds.”– George Ives
Friday, April 8, 2022
The first time Will Smith got in the zone with Jazzy Jeff
A friend loaned me a copy of Will Smith’s autobiography, Will (2021). He talks about the first time he jammed with Jazzy Jeff, his DJ partner (pp. 82-83):
There are rare moments as an artist that you cannot quantify or measure. As much as you try, you can barely reproduce them and it’s near impossible to describe them. But every artist knows what I’m talking about–those moments of divine inspiration where creativity flows out of you so brilliantly and effortlessly that somehow you are better than you have ever been before.
That night with Jeff was the first time I ever tasted it, the place that athletes call “the zone.” It felt like we already existed as a group and we just had to catch up to ourselves–natural, comfortable, home.
Jeff could sense my rhyme style. He always knew when my jokes were coming, when to drop the track out so people could clearly hear the punch line, and I could tell by which hand he was using what type of scratch was coming. He preferred different scratches with his left hand than with his right. Sensing this, I could draw the audience’s attention to which scratch was coming by which hand he was transitioning to. He was choosing the tracks and adjusting the tempos based on what he felt best accentuated the narrative structure and the flow of my rhymes. And just as the music crescendoed, I’d throw down a dagger of a line and Jeff would drop the beat into the funkiest, hottest, party-rocking shit these Philly kids had ever seen in their lives.
Earlier he had talked about being in church when he was younger and the visiting pastor, Reverend Ronald West, showed up (p. 35):
Reverend West led the choir. He always started off seated, playing the piano with his left hand, directing the choir with his right, calmly leaning into some slow, Mahalia Jackson–style ballad to warm up the elders.
This was just the calm before the storm.
Slowly, he would transform, allowing the music to carry him into a trance. Tears would fill his eyes, sweat building on his brow, as he rummaged for his hanky to clear the fog from his glasses. The drums, the bass, the voices, all rising at his command, as if imploring the Holy Spirit to show itself. And then, like clockwork, an ecstatic crescendo, and...BOOM! The Holy Ghost fills the room. Reverend West explodes from his seat, kicking over the stool, both hands possessed, banging in praise on the piano. Then, with a guttural roar, he blazes across the stage to the three-tiered electric organ, demanding that it do what God intended it to do, swirling massive orchestral Baptist chords, all the while sweat flying; the congregation erupting, singing, dancing; old women passing out in the aisles, weeping; Reverend West pointing, directing, never once losing control of the choir and the band...until his body would collapse in surrender and gratitude for the merciful bliss of God’s love.
As the music settled, Gigi [Smith’s grandmother] returned to her seat, dabbing tears from her eyes, and my little heart pounding–not even totally sure what that sweet vibration was inside my body–and all I could think was I wanna do THAT. I want to make people feel like THAT.
Multimodal Reasoning with Language in AI Systems
With multiple foundation models “talking to each other”, we can combine commonsense across domains, to do multimodal tasks like zero-shot video Q&A or image captioning, no finetuning needed.
— Andy Zeng (@andyzengtweets) April 7, 2022
Socratic Models:
website + code: https://t.co/Zz0kbV5GTQ
paper: https://t.co/NpsW61Ka3s pic.twitter.com/D5630owUt6
Abstract for paper linked above:
Large foundation models can exhibit unique capabilities depending on the domain of data they are trained on. While these domains are generic, they may only barely overlap. For example, visual-language models (VLMs) are trained on Internet-scale image captions, but large language models (LMs) are further trained on Internet-scale text with no images (e.g. from spreadsheets, to SAT questions). As a result, these models store different forms of commonsense knowledge across different domains. In this work, we show that this model diversity is symbiotic, and can be leveraged to build AI systems with structured Socratic dialogue -- in which new multimodal tasks are formulated as a guided language-based exchange between different pre-existing foundation models, without additional finetuning. In the context of egocentric perception, we present a case study of Socratic Models (SMs) that can provide meaningful results for complex tasks such as generating free-form answers to contextual questions about egocentric video, by formulating video Q&A as short story Q&A, i.e. summarizing the video into a short story, then answering questions about it. Additionally, SMs can generate captions for Internet images, and are competitive with state-of-the-art on zero-shot video-to-text retrieval with 42.8 R@1 on MSR-VTT 1k-A. SMs demonstrate how to compose foundation models zero-shot to capture new multimodal functionalities, without domain-specific data collection. Prototypes are available at this http URL.
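The pipeline the abstract describes (caption the video, stitch the captions into a short story, then answer questions about the story) can be sketched in a few lines. The sketch below is purely illustrative: the three model functions are stand-in stubs I’ve invented, not the paper’s actual API; in the real system each would be a call to a pretrained VLM or LM, with no finetuning anywhere.

```python
# Illustrative sketch of the Socratic Models idea: foundation models
# exchange information through language. All three "models" below are
# stubs standing in for real pretrained-model calls.

def vlm_caption(frame):
    """Stand-in for a visual-language model captioning one video frame."""
    return f"a person {frame}"

def lm_summarize(captions):
    """Stand-in for a language model turning captions into a short story."""
    return "First, " + ". Then, ".join(captions) + "."

def lm_answer(story, question):
    """Stand-in for a language model doing short-story Q&A."""
    return f"Based on the story ({story}) the answer to '{question}' is ..."

def video_qa(frames, question):
    # Step 1: perception -> language (VLM)
    captions = [vlm_caption(f) for f in frames]
    # Step 2: captions -> narrative (LM), turning video Q&A into story Q&A
    story = lm_summarize(captions)
    # Step 3: narrative + question -> answer (LM)
    return lm_answer(story, question)

frames = ["opens the fridge", "pours a glass of milk"]
print(video_qa(frames, "What did the person drink?"))
```

The point of the structure, as the abstract says, is that no model is retrained: the only interface between them is natural language.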
To Model the Mind: Speculative Engineering as Philosophy
New working paper. Title above, abstract, contents, and introduction below. Download at:
- Academia: https://www.academia.edu/75749826/To_Model_the_Mind_Speculative_Engineering_as_Philosophy
- SSRN: https://ssrn.com/abstract=4078246
- Research Gate: https://www.researchgate.net/publication/359816337_A_Working_Paper_To_Model_the_Mind_Speculative_Engineering_as_Philosophy/stats
Abstract: Are brains computers? Some say yes, some say no. Does it matter? Ideas about computing have certainly proven fruitful in understanding how brains give rise to minds. That’s what this paper is about. The central section is a review of Grace Lindsey’s wonderful book Models of the Mind: How Physics, Engineering, and Mathematics Have Shaped Our Understanding of the Brain (2021). I precede it with a bit of philosophy and follow it with brief notices about five books, each proposing computationally inspired models of the mind.
Contents
Introduction: Brains, machines, and computation 2
Speculative Engineering as Philosophy 4
To Understand the Mind We Must Build One, A Review of Models of the Mind – Bye Bye René, Hello Giambattista 9
Five Good Books 15
Introduction: Brains, machines, and computation
I remember when electronic digital computers were sometimes called electronic brains. The following graph from Google Ngram shows the rise and fall of the terms “electronic brain” and “electronic brains.”
Why the quick rise and fall? I’d guess that’s when these remarkable machines first gained public attention. It wasn’t clear what kind of beast they were. How do we refer to them? For a while, we tried out the idea that they were a kind of brain. After all, they did the kinds of things that brains did. They tabulated, sorted, and calculated.
But they also inspired. During the interval of that peak the study of artificial intelligence was inaugurated at a conference at Dartmouth in 1956. Machine translation, the use of a computer to translate text from one language to another, arose in the 1950s and then collapsed, alas, in the mid-1960s for lack of practical results. Noam Chomsky conceived of grammar in computational terms. Warren McCulloch and Walter Pitts conceived of neurons as tiny logic engines in the early 1940s and computational ideas began taking hold in neuroscience and philosophy.
Are brains computers? Some say yes, some say no. Does it matter? For ideas about computing have certainly proven fruitful in understanding how brains give rise to minds. That’s what this paper is about. The central section is a review of Grace Lindsey’s wonderful book Models of the Mind: How Physics, Engineering, and Mathematics Have Shaped Our Understanding of the Brain (2021). I precede it with a bit of philosophy and follow it with brief notices about five books, each proposing computationally inspired models of the mind.
* * * * *
Speculative Engineering as Philosophy: Engineering is about how things are designed and constructed. I am interested in how the brain works, how it constructs a mind. When we theorize about that, thinking about models, experiments, or simulations, we are speculating about the engineering principles on which the brain operates. I argue that that is a form of philosophy, in the broadest sense of the term, though not necessarily as philosophy exists as an academic discipline.
To Understand the Mind We Must Build One, A Review of Models of the Mind – Bye Bye René, Hello Giambattista: Descartes believed that truth is verified through observation. Vico had a different view, believing that “What is true is precisely what is made.” Grace Lindsey’s Models of the Mind is Viconian in spirit. Its subtitle tells the story: How Physics, Engineering, and Mathematics Have Shaped Our Understanding of the Brain. Lindsey traces the history of a wide variety of models and techniques, often back into the 19th and even 18th centuries, in a simple and direct way. I turn my review on a few cases: 1) the 1943 McCulloch and Pitts model of neurons as logical operators, 2) Frank Rosenblatt’s Perceptron from the late 1950s, and 3) Jerome Lettvin’s 1959 work on the frog’s visual system, which Nicholas Humphrey parlayed into a 1970 article on the monkey’s visual system. That last brings in an evolutionary angle. The whole thing is wrapped up by Joyce’s Finnegans Wake, and its Latin translation.
Five Good Books: Short notices for five books, each about the mind and/or brain, each in a different style: 1) John von Neumann (1958), The Computer and the Brain, 2) Herbert A. Simon (1981), The Sciences of the Artificial, 3) William Powers (1973), Behavior: The Control of Perception, 4) David G. Hays (1981), Cognitive Structures, and 5) Valentino Braitenberg (1999), Vehicles: Experiments in Synthetic Psychology.
Thursday, April 7, 2022
What does "understand" mean? Can Google's Pathways ascend the stairway to AGI heaven?
Over at Marginal Revolution Alex Tabarrok posted about Pathways under the title The Chinese Room Thinks. He claims:
It seems obvious that the computer is reasoning. It certainly isn’t simply remembering. It is reasoning and at a pretty high level! To say that the computer doesn’t “understand” seems little better than a statement of religious faith or speciesism. [...] The sheer ability of AI to reason, counter-balances our initial intuition, bias and hubris, making the defects in Searle’s argument easier to accept.
Hmmm... I'm not so sure. I commented:
Well, back in the early-1970s ARPA (now DARPA) sponsored The Speech Understanding Project. It was a five-year effort spread over, I believe, four research groups. The goal was to produce a question-answering system (a term of art) that would take a spoken-language question as input and produce the correct answer to the question. The problem domain was defined by a database of navy ships. So you ask "When was the USS Forrestal commissioned?" If it answers "1955," that's scored as a yes. A correct answer was taken to mean that the system had understood the question.
Is that understanding? Well, it's something. But it's hard to say that it is understanding in any deep sense. Still, that required a major research effort involving 10s if not (low) 100s of people and much of the work was quite interesting, if you've got a taste for that sort of thing, which I did in those days. What GPT-3 did in response to Jerry Seinfeld's bit about the Roosevelt tramway is a deeper kind of understanding. As is Pathways' performance in the examples Alex posted above.
How much deeper? If, on a scale of one to ten, we say ARPA's speech understanding system works at level 1, where do we put Pathways? Does such a metric make sense? Is understanding one-dimensional?
If you ask me to explain Einstein's theory of relativity, I can certainly produce something. But it won't be very deep. Certainly better than 1, but probably not a 5, depending on how you define those levels. In what sense do I understand the theory of relativity?
(BTW, YouTube has a bunch of videos where something is explained on five levels: 1) grade school, 2) high school, 3) college graduate, 4) graduate student, 5) professional.)
What about Steven Spielberg's Jaws? We can ask questions: What was the sheriff's name? Who was the second person killed? How many people were killed? What animal did the killing? Those are relatively easy questions. These are more difficult: Why was the mayor reluctant to close the beach? Quint was haunted by an experience he had as a young adult. Tell us about that experience. Why did it haunt him? I suspect that most people who've seen the movie would have little trouble answering those questions.
And then there's this sort of question, one that I set out to answer: How does the movie exhibit Girard's theory of sacrifice? Of course, one might argue that it doesn't exhibit Girard's theory at all, and some, of course, would say that Girard's theory doesn't hold water. What kind of understanding is going on there? I suspect that most people who've seen Jaws could not deal with those questions. They know little, if anything at all, about Girard and likely don't care much for that sort of thing. Does that mean they don't understand the movie? To say so would be perverse.
But I also think that most people who've seen the movie understand it on a deeper level than is required to answer the first set of questions about the film. And they probably cannot verbalize much of their understanding. Does that mean they don't really understand the movie?
It's not at all clear to me just what one is claiming when one says that Pathways understands. For that matter, what is one claiming when one says that Pathways doesn't really understand?
It seems to me that we're caught between a rock and a hard place. The rock is our current conceptual system, which seems to treat understanding either as a binary variable – one either understands or one doesn't – or as a one-dimensional variable where the distinctions between one level and the next are not at all clear. The hard place is to start coming up with a new conceptual vocabulary that gives us a finer-grained and more useful way to comprehend and think about what these alien mentalities – is that the word we want? – are doing. I think we've got to start investigating that hard place.
Here's my notice of Pathways.
* * * * *
Addendum: Let's assume for a minute that we're going to rate understanding on a scale, say, from 1 to 10. GPT-3 rates, say, 3. Along comes Pathways and it's clearly better than GPT-3. Where does it go? 4? 5? 6?
No.
Given that considerable distance remains between Pathways and humans, I'd say that a 1-10 scale is insufficient. Let's make it 1-100. GPT-3 goes in at 23 and Pathways at, say, 38.
That is to say, each time one of these remarkable results comes in I think it enlarges our sense of the measure space. Maybe it even forces us to start adding dimensions. It just makes the world larger.
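To make the one-scale-versus-many-dimensions point concrete, here is a toy sketch: the systems, the ability dimensions, and every score are invented purely for illustration. It shows what scoring understanding as a profile might look like, and what gets thrown away when the profile is collapsed to a single number.

```python
# Toy illustration only: all dimensions and scores are made up.
# The idea is that "understanding" might be a profile across abilities
# rather than a point on one scale.

profiles = {
    "ARPA-SUS": {"factual_qa": 1, "humor": 0, "narrative": 0},
    "GPT-3":    {"factual_qa": 5, "humor": 3, "narrative": 4},
    "PaLM":     {"factual_qa": 7, "humor": 5, "narrative": 5},
}

def scalar_score(profile):
    """Collapse the profile to one number -- the move the post questions."""
    return sum(profile.values()) / len(profile)

for name, p in profiles.items():
    # Two systems with very different profiles can collapse to similar
    # scalars, which is exactly the information the average discards.
    print(name, p, "-> collapsed:", round(scalar_score(p), 2))
```

Adding a new dimension (say, “explains jokes”) can reorder the collapsed rankings entirely, which is one way of cashing out the claim that each new result enlarges the measure space.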
* * * * *
Gary Marcus comments:
This week seems like win for AI, but it's actually a step back:
— Gary Marcus 🇺🇦 (@GaryMarcus) April 7, 2022
• No info about training set [Dall-E]
• Sparse disclosure of methods and errors [both]
• Anecdotal data only [PaLM jokes]
• Cherry-picking [both]
• No access for scientific community [both] https://t.co/HCn1g5yG7L
Star Trek: The Motion Picture [Media Notes 70] – I’ll take romance
I’ve been re-watching Star Trek: The Original Series – I’ve almost finished the first season – and thinking, how clunky! Shatner the ham, how plain everything is, how obvious and intrusive the moralizing, and how primitive the special effects! But in its day … Can’t say I was ever a fan, but I’ve watched most of the Trek franchise (except for the animated series).
Fortunately Tyler Cowen posted a link to a new review of the first Star Trek movie: Tim Greiving, ‘Star Trek: The Motion Picture’ Is the Best ‘Star Trek’ Film for Non-Trekkies, The Ringer, April 5, 2022. I watched it when it came out in 1979. My dominant memory is that it was slow, though I sense an undercurrent of something else. I’m thinking Greiving’s review-essay is about that something else. Anyhow, I’ve watched it and, yes, it’s slow, but there is something there.
The concept, a stately ontological fable, with touches of grace:
Star Wars may have inspired the suits to make a Star Trek movie, but screenwriter Harold Livingston—working from a story conceived by Roddenberry and sci-fi author Alan Dean Foster (who ghostwrote the novelization of Star Wars)—ignored the mysticism and kid-friendly adventure of George Lucas’s universe and instead plunged into mankind’s quest for meaning. For all the flak the film has taken for being too heady, it actually employs an admirably lean, focused concept for a sci-fi blockbuster: a mysterious object in space, the clash and competition between an old-school captain and a whiz kid, forbidden love between old flames, and a contemplation on the dynamic tension between emotion and logic, between carbon units (humans) and machines.
It's not a space opera, nor is it jammed end-to-end with battles – though, for what it’s worth, Gene Roddenberry pitched the original series as “Wagon Train to the stars” – Wagon Train was a major TV Western of the 1950s and 1960s. It is a romance and, it turns out in a final twist that I’d completely forgotten, a cosmic one at that. If you want to know what motivated those NASA kids to fly to the moon, this movie has a sense of that.
At least I think it does, though I’ve not been there myself, so what do I know? I’d imagine that actually spending an extended period of time in space could get to be something of a slog, punctuated, perhaps, by moments of terror. But I’d imagine that someone of an appropriate sensibility, who’d cultivated their imagination (for that’s what it would take, no? cultivation), would be able to sense the majesty and wonder of it all. I’d like to think so, otherwise, why go there at all? Anyhow, I felt a bit of that when I visited Kennedy Space Flight Center in, I believe it was, 1997. Sacred ground.
But I digress. Back to Greiving’s essay. The movie’s style:
Director Robert Wise was more of an Old Hollywood craftsman than an auteur with his own remarkable style, but he brought some serious pedigree: He edited Citizen Kane before directing a sci-fi staple, The Day the Earth Stood Still, and two mid-century musical classics, West Side Story and The Sound of Music. In fact, he treated this film like a roadshow musical spectacular—complete with an overture and several lengthy set pieces where Jerry Goldsmith’s music takes the wheel, only instead of song and dance routines, they’re choreographed numbers for animated energy fields and ship models. One of the chief complaints people have about this movie is epitomized by the roughly five-minute, wordless sequence where Scotty (James Doohan) slowly flies Kirk around the parked Enterprise. […]
In this way, The Motion Picture is much more a cousin to 2001 or Close Encounters of the Third Kind—with their shared effects wizard Douglas Trumbull, who died in February, at ship’s helm—than the whiz-bang dogfights in Star Wars. The visuals in The Motion Picture, which include everything from vintage matte paintings to animated light streaks for a wormhole scene to elaborate models, are absolutely of their time—no matter how much Wise was able to gussy them up with 2000-era CGI in his director’s cut. But it’s part of the movie’s vintage 1979 charm that it sits smack between Close Encounters and Blade Runner in a heyday of tactile, handmade illusions before computers ruled the earth. And it’s the mood of these hypnotic, psychedelic space sequences that’s timeless—mostly thanks to the symphonic majesty of Goldsmith’s cosmic French impressionism, in one of the greatest film scores (and main themes) ever composed.
So not everyone wants a mood-altering mind trip through the heavens—fine. But Stanley Kubrick defined a very specific and very powerful template with his 1968 space movie that many others wanted to follow and revise in their own image, from Spielberg to Denis Villeneuve. Villeneuve’s epic Dune may have more jaw-droppingly realistic visuals, but it’s every bit as much a glacially patient and somber slow movement for spaceships and (electronic) orchestra.
That said, The Motion Picture is as much a character story as it is spectacle; it’s fundamentally about the necessity of “foolish human emotions,” to quote Bones. Wise managed to get far more naturalistic, grounded performances from Shatner, the Hamlet of hams, and his cohort of TV actors than they ever gave on the small screen or in later movies. This feels like a 1970s movie, and not just because it introduces Bones in a chest-hair-and-bling-bearing tracksuit and a shaggy Bee Gees beard.
There’s much more at the link.
Jerry Goldsmith, Chinatown theme
Greiving praises Jerry Goldsmith’s music, justly so. He’s scored many films, but I associate him with the main theme from Chinatown (1974), perhaps because I’m a trumpet player. It was performed by Uan Rasey, a legend of the Hollywood studios.
Man in Space, imagination needed
I’ve done a number of posts on the theme, man in space, which I’ve gathered under that link. Many of them are about movies or TV programs, for that’s how most of us experience outer space, isn’t it? We’ve never been there, have we? You might contrast this movie with the way The Crown depicted Prince Philip’s disappointment over the conversation he had with the Apollo 11 astronauts (Armstrong, Aldrin, and Collins). Their banal banter didn’t touch his soul. Who knows what they actually felt? They didn’t know how to express it, and Philip didn’t know how to elicit it.
Wednesday, April 6, 2022
Nicolas Berggruen, not a philosopher king, but a wealthy man who respects and wants to nurture philosophy
And it appears that he means philosophy in the broadest sense of the term and not necessarily an academic discipline.
Michael Steinberger, How the ‘Homeless Billionaire’ Became a Philosopher King, The NYTimes, April 6, 2022.
He made his fortune, low single-digit billions, in private equity and now
wants to “empower ideas,” with an emphasis on “courageous or creative thinking.” Tobias Rees, a German American philosopher whose work has been supported by the Berggruen Institute, suggests that Berggruen might best be thought of as a kind of latter-day Medici. He is, Rees says, a wealthy patron trying to stimulate a “philosophical and artistic renaissance or spring for our times.”
He's established a prize:
Berggruen told me the purpose of his prize is to fill a void left by the Nobel Prizes, which include the Peace Prize as well as honors for literature, medicine, chemistry and physics but not philosophy. (Several philosophers, however, have been awarded the Nobel for literature, including Bertrand Russell, Albert Camus and Jean-Paul Sartre, who turned it down.) Berggruen said his prize is “a signal that philosophy is equally important,” a point underscored by the $1 million given to the winner, approximately the same amount awarded to Nobel recipients. But Damasio, a dapper, soft-spoken scholar originally from Portugal, said that money couldn’t buy prestige and acceptance. Instead, the power of the ideas they celebrate is what gives intellectual prizes their currency.
The Berggruen Prize for Philosophy and Culture, which debuted in 2016, has honored some prominent contemporary philosophers — Peter Singer, Martha Nussbaum, Charles Taylor. But the “culture” part hints at a broader mandate, and the jury seems to have an ecumenical conception of philosophy. The 2020 winner was Dr. Paul Farmer, the medical anthropologist, humanitarian and author renowned for his work in impoverished communities. (Farmer died in February.) The 2019 recipient was the Supreme Court justice Ruth Bader Ginsburg. Damasio told me that while Ginsburg wasn’t a philosopher, her work revolved around ideas and applying them in the real world. “There’s a lot of commonality,” he said. The aim of the prize, he added, was to honor philosophy in “a broad sense, not the narrow, continental sense,” and to celebrate “love of knowledge, critical knowledge.”
Note that
Perhaps surprisingly, Berggruen is not a member of the Berggruen Prize jury. “I actually don’t think I’m qualified,” he said over lunch. He wanted the Berggruen to be recognized as a “proper prize” with independent judges.
He's established a think tank, The Berggruen Institute.
But Dawn Nakagawa, the institute’s executive vice president, says its mission has evolved in recent years; the institute is now “a lot more unique and philosophical. The new horizon of our work is really to try to poke our nose into the unknown.” Nakagawa cites something called the Transformations of the Human project, which developed as part of the institute. ToftH, as it is known, was initially conceived by Tobias Rees, who believed that artificial intelligence and biotechnology were redefining what it meant to be human and who wanted to foster conversations among technologists, philosophers and artists about where all of this innovation is taking us as a species. ToftH has facilitated such exchanges at Facebook, Google and other tech companies and also provided assistance on projects.
According to Nakagawa, the emphasis these days is on nurturing revolutionary ideas. “If we develop one idea that actually changes and shifts the way the world thinks, that is success,” she says. “But success may not come until long after we’re dead,” she adds, noting that “this work requires patient capital.” Berggruen, the source of that capital, seems to be very patient.
There's more at the link.
Superhuman AI, a 21st century Philosopher's Stone?
I’ve been reading perhaps more than is healthy about such things as Superintelligence and AI takeoff (gradual or FOOM!) and am wondering whether or not the idea of superintelligence is a 21st Century equivalent of the Philosopher’s Stone (Arabic: ḥajar al-falāsifa, Latin: lapis philosophorum) of Olde. Superintelligence is the idea of a being, generally thought of as an AI of some kind, that is more intelligent than humans are. The Philosopher’s Stone is an alchemical belief in a substance that can transmute base metals into gold.
I dropped this concern into the Twitterverse last evening and Ted Underwood observed:
The parts of thinking that are clearly scalable—speed, parallel processing, and memory—are already superhuman in our laptops. But it doesn’t make our laptops evil masterminds.
— Ted Underwood 🇺🇦 (@Ted_Underwood) April 6, 2022
Good question, thought I to myself. What are those other parts of thinking?
This morning Ted came back with:
You sent me down a rabbit hole and I returned with this essay, which mostly persuades me. He may slightly understate the linear scalability of things like parallel processing. But that’s not what ppl think they mean by superhuman https://t.co/t08vyRoSDV
— Ted Underwood 🇺🇦 (@Ted_Underwood) April 6, 2022
So I took a look at that article, “The Myth of a Superhuman AI,” which is from 2017 and is by Kevin Kelly. Very interesting. Kelly sets the stage:
Yet buried in this scenario of a takeover of superhuman artificial intelligence are five assumptions which, when examined closely, are not based on any evidence. These claims might be true in the future, but there is no evidence to date to support them. The assumptions behind a superhuman intelligence arising soon are:
- Artificial intelligence is already getting smarter than us, at an exponential rate.
- We’ll make AIs into a general purpose intelligence, like our own.
- We can make human intelligence in silicon.
- Intelligence can be expanded without limit.
- Once we have exploding superintelligence it can solve most of our problems.
In contradistinction to this orthodoxy, I find the following five heresies to have more evidence to support them.
- Intelligence is not a single dimension, so “smarter than humans” is a meaningless concept.
- Humans do not have general purpose minds, and neither will AIs.
- Emulation of human thinking in other media will be constrained by cost.
- Dimensions of intelligence are not infinite.
- Intelligences are only one factor in progress.
If the expectation of a superhuman AI takeover is built on five key assumptions that have no basis in evidence, then this idea is more akin to a religious belief — a myth.
I like that set of parallels a lot, a whole lot. It got me so excited that rather than finish reading the article I decided to make this post.
What’s in question is the nature of the world: What kinds of things and processes exist now or could exist in the future? The alchemical idea of a philosopher’s stone is embedded in a network of ideas about the nature of physical reality, its objects, processes, and actions. The same for superintelligence. Superintelligence is about minds, brains, computers, and about the future.
I know very little about the history of alchemy, but I do know that no less a thinker than Isaac Newton took it quite seriously:
Of an estimated ten million words of writing in Newton's papers, about one million deal with alchemy. Many of Newton's writings on alchemy are copies of other manuscripts, with his own annotations. Alchemical texts mix artisanal knowledge with philosophical speculation, often hidden behind layers of wordplay, allegory, and imagery to protect craft secrets. Some of the content contained in Newton's papers could have been considered heretical by the church.
By the 19th Century, however, alchemy no longer held the attention of the most serious and venturesome thinkers. But it persists in popular culture, e.g. the Harry Potter universe, or Fullmetal Alchemist.
That is to say, the idea of the philosopher’s stone didn’t disappear overnight. It was a gradual process, taking place over centuries, as the (so-called) scientific revolution radiated out from its earliest footholds in 16th Century astronomy and physics. Will the idea of artificial superintelligence undergo a similar process?
* * * * *
Question: Why are the prophets of Superintelligence more worried about the danger it might present to humanity than interested in the possibility that it will reveal to us the Secrets of the Universe? See my post from March 5, These bleeding-edge AI thinkers have little faith in human progress and seem to fear their own shadows.
* * * * *
A couple of hours later: I’ve now read Kevin Kelly’s article. Very good. Some passages:
Likewise, there is no ladder of intelligence. Intelligence is not a single dimension. It is a complex of many types and modes of cognition, each one a continuum. Let’s take the very simple task of measuring animal intelligence.
Temperature is not infinite — there is finite cold and finite heat. There is finite space and time. Finite speed. Perhaps the mathematical number line is infinite, but all other physical attributes are finite. It stands to reason that reason itself is finite, and not infinite. So the question is, where is the limit of intelligence? We tend to believe that the limit is way beyond us, way “above” us, as we are “above” an ant. Setting aside the recurring problem of a single dimension, what evidence do we have that the limit is not us? Why can’t we be at the maximum?
Many proponents of an explosion of intelligence expect it will produce an explosion of progress. I call this mythical belief “thinkism.” It’s the fallacy that future levels of progress are only hindered by a lack of thinking power, or intelligence.[…] No super AI can simply think about all the current and past nuclear fission experiments and then come up with working nuclear fusion in a day. A lot more than just thinking is needed to move between not knowing how things work and knowing how they work.
Likewise, the evidence so far suggests AIs most likely won’t be superhuman but will be many hundreds of extra-human new species of thinking, most different from humans, none that will be general purpose, and none that will be an instant god solving major problems in a flash. Instead there will be a galaxy of finite intelligences, working in unfamiliar dimensions, exceeding our thinking in many of them, working together with us in time to solve existing problems and create new problems.
* * * * *
An article by Melanie Mitchell led me to this statement in the New York Times:
Eric Horvitz, who oversees much of the A.I. work at Microsoft, argued that neural networks and related techniques were small advances compared with technologies that would arrive in the years to come.
“Right now, what we are doing is not a science but a kind of alchemy,” he said.
Tuesday, April 5, 2022
PaLM: Scaling Language Modeling with Pathways [common sense, explains jokes]
Google develops Pathways Language Model, an AI system that can explain jokes and do commonsense reasoning, among other things. https://t.co/E8k3mRdcTK ; https://t.co/XSGqAyRqib pic.twitter.com/6Z0CDaZ0sn
— Kaj Sotala (@xuenay) April 5, 2022
Since it has jokes within its repertoire, I wonder if it could explain Seinfeld's tramway bit?
Abstract from the linked paper (second link in the tweet):
Large language models have been shown to achieve remarkable performance across a variety of natural language tasks using few-shot learning, which drastically reduces the number of task-specific training examples needed to adapt the model to a particular application. To further our understanding of the impact of scale on few-shot learning, we trained a 540-billion parameter, densely activated, Transformer language model, which we call Pathways Language Model (PaLM).
We trained PaLM on 6144 TPU v4 chips using Pathways, a new ML system which enables highly efficient training across multiple TPU Pods. We demonstrate continued benefits of scaling by achieving state-of-the-art few-shot learning results on hundreds of language understanding and generation benchmarks. On a number of these tasks, PaLM 540B achieves breakthrough performance, outperforming the finetuned state-of-the-art on a suite of multi-step reasoning tasks, and outperforming average human performance on the recently released BIG-bench benchmark. A significant number of BIG-bench tasks showed discontinuous improvements from model scale, meaning that performance steeply increased as we scaled to our largest model. PaLM also has strong capabilities in multilingual tasks and source code generation, which we demonstrate on a wide array of benchmarks. We additionally provide a comprehensive analysis on bias and toxicity, and study the extent of training data memorization with respect to model scale. Finally, we discuss the ethical considerations related to large language models and discuss potential mitigation strategies.
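"Few-shot learning," as the abstract uses the term, means conditioning the model on a handful of worked examples placed in the prompt rather than fine-tuning it on thousands of task-specific examples. A minimal sketch of how such a prompt is assembled (illustrative only; this is not the PaLM interface, and `build_few_shot_prompt` is a hypothetical helper):

```python
def build_few_shot_prompt(examples, query):
    """Assemble a few-shot prompt from (input, output) example pairs.

    The model sees a few worked examples before the query, so it can
    infer the task format without any task-specific training.
    """
    lines = []
    for inp, out in examples:
        lines.append(f"Q: {inp}\nA: {out}")
    # The query ends with a bare "A:" for the model to complete.
    lines.append(f"Q: {query}\nA:")
    return "\n\n".join(lines)

prompt = build_few_shot_prompt(
    [("2 + 2", "4"), ("3 + 5", "8")],
    "7 + 6",
)
```

The point of the abstract is that results under this regime improve sharply with scale; the prompt format itself stays this simple.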
The machine in my mind, my mind on the machine: Will we ever build a machine to equal the human brain?
Some years ago, back in the Jurassic Era, I imagined that one day I would have access to a computer system I could use to “read” literary texts, such as a play by Shakespeare. As such a system would have been based on a simulation of the human mind, I would be able to trace its operations as it read through a text – perhaps not in real time, but it would store a trace of those actions and I could examine that trace after the fact. Alas, such a system has yet to materialize, nor do I expect to see such a marvel in my lifetime. As for whether or not such a system might one day exist, why bother speculating? There’s no principled way to do so.
Whyever did I believe such a thing? I was young, new to the research, and tremendously excited by it. And, really, no one knew what was possible back in those days. Speculation abounded, as it still does.
While I’ll return to that fantasy a bit later, this post is not specifically about it. That fantasy is just one example of how I have been thinking about the relationship between the human mind and computational approximations to it. As far as I can recall, I have never believed that it would one day be possible to construct a human mind in a machine. But I have long been interested in what the attempt to do so can teach us about the mind. This post is a record of much of my thinking on the issue.
It’s a long way through. Sit back, get comfortable.
The early years
When I was nine years old I saw Forbidden Planet, which featured a robot named Robby. In the days and weeks afterward I drew pictures of Robby. Whether or not I believed that such a thing would one day exist, I don’t remember.
Some years later I read an article, either in Mechanix Illustrated or Popular Mechanics – I read both assiduously – about how Russian technology was inferior to American technology. The article had a number of photographs, including one of a Sperry Univac computer – or maybe it was just Univac, but it was one of those brands that no longer exists – and another, rather grainy, of a Russian computer, taken from a Russian magazine. The Russian photo looked like a slightly doctored version of the Sperry Univac photo. That’s how it was back in the days of electronic brains.
When I went to college at Johns Hopkins in the later 1960s one of the minor curiosities in the freshman dorms was an image of a naked woman picked out in “X”’s and “O”’s on computer print-out paper. People, me among them, actually went to some guy’s dorm room to see the image and to see the deck of punch cards that, when run through the computer, would cause that image to be printed out. Who’d have thought, a picture of a naked woman – well sorta’, that particular picture wasn’t very exciting, it was the idea of the thing – coming out of a computer. These days, of course, pictures of naked women, men too, circulate through computers around the world.
Two years after that, in my junior year, I took a course in computer programming, one of the first such courses in the nation. As things worked out, I never did much programming, though some of my best friends make their living at the craft. I certainly read about computers, information theory, and cybernetics. The introductory accounts I read always mentioned analog computing as well as digital, though those mentions disappeared some years later as personal computers became widespread.
I saw 2001: A Space Odyssey when it came out in 1968. It featured a computer, HAL, that ran a spacecraft on a mission to Jupiter. HAL decided that the human crew endangered the mission and set about destroying them. Did I think that an artificially intelligent computer like HAL would one day be possible? I don’t recall. Nor do I recall what I thought about the computers in the original Star Trek television series. Those were simply creatures of fiction. I felt no pressure to form a view about whether or not they would really be possible.
Graduate school (computational semantics)
In 1973 I entered the Ph.D. program in the English Department at the State University of New York at Buffalo. A year later I was studying computational semantics with David Hays in the Linguistics Department. Hays had been a first-generation researcher in machine translation at the RAND Corporation in the 1950s and 1960s and left RAND to found SUNY’s Linguistics Department in 1969. I joined his research group and also took a part-time job preparing abstracts of the technical literature for the journal Hays edited, The American Journal of Computational Linguistics (now just Computational Linguistics). That job required that I read widely in computational linguistics, linguistics, cognitive science, and artificial intelligence.
I note, in passing, that computational linguistics and artificial intelligence were different enterprises at the time. They still are, with different interests and professional associations. By the late 1960s and 1970s, however, AI had become interested in language, so there was some overlap between the two communities.
In 1975 Hays was invited to review the literature in computational linguistics for the journal Computers and Humanities. Since I was up on the technical literature, Hays asked me to coauthor the article:
William Benzon and David Hays, “Computational Linguistics and the Humanist”, Computers and the Humanities, Vol. 10, 1976, pp. 265-274, https://www.academia.edu/1334653/Computational_Linguistics_and_the_Humanist.
I should note that all the literature we covered was within what has come to be known as the symbolic approach to language and AI. Connectionism and neural networks did not exist at that time.
Our article had a section devoted to semantics and discourse in which we observed (p. 269):
In the formation of new concepts, two methods have to be distinguished. For a bird, a creature with wings, its body and the added wings are equally concrete or substantial. In other cases something substantial is molded by a pattern. Charity, an abstract concept, is defined by a pattern: Charity exists when, without thought of reward, a person does something nice for someone who needs it. To dissect the wings from the bird is one kind of analysis; to match the pattern of charity to a localized concept of giving is also an analysis, but quite different. Such pattern matching can be repeated recursively, for reward is an abstract concept used in the definition of charity. Understanding, we believe, is in one sense the recursive recognition of patterns in phenomena, until the phenomenon to be understood fits a single pattern.
Hays had explored that treatment of abstract concepts in an article on the concept of alienation. One of his students, Brian Phillips, was finishing a computational dissertation in which he used that concept in analyzing stories about drownings. Another student, Mary White, was finishing a dissertation in which she analyzed the metaphysical concepts of a millenarian community. I was about to publish a paper in which I used the concept to analyze a Shakespeare sonnet, “The Expense of Spirit” (Cognitive Networks and Literary Semantics). That set the stage for the last section of our article.
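The recursive treatment of abstract concepts in the passage quoted above can be sketched in code. In this toy version, concrete concepts are grounded directly, while abstract concepts are defined by patterns over other concepts, which may themselves be abstract ("reward" appears inside the definition of "charity"); understanding bottoms out when everything resolves to concrete concepts. The concept names and dictionary structure are my illustration, not the actual notation of the Hays group:

```python
# Concepts grounded directly in (substantial) experience.
CONCRETE = {"person", "giving", "need"}

# Abstract concepts defined by patterns over other concepts.
PATTERNS = {
    # charity: giving to someone in need, without thought of reward
    "charity": ["person", "giving", "need", "reward"],
    # reward: something given in return for an act
    "reward": ["person", "giving"],
}

def resolve(concept):
    """Recursively expand a concept until only concrete concepts remain.

    Returns the set of concrete concepts the definition bottoms out in --
    the recursive recognition of patterns the article describes.
    """
    if concept in CONCRETE:
        return {concept}
    grounded = set()
    for part in PATTERNS[concept]:
        grounded |= resolve(part)
    return grounded
```

Here `resolve("charity")` recurses through "reward" and returns the concrete set {"person", "giving", "need"}, mirroring the article's point that matching the pattern of charity to a localized concept of giving is itself an analysis, repeatable recursively.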
Let us create a fantasy, a system with a semantics so rich that it can read all of Shakespeare and help in investigating the processes and structure that comprise poetic knowledge. We desire, in short, to reconstruct Shakespeare the poet in a computer. Call the system Prospero.
How would we go about building it? Prospero is certainly well beyond the state of the art. The computers we have are not large enough to do the job and the architecture makes them awkward for our purpose. But we are thinking about Prospero now, and inviting any who will to do the same, because the blueprints have to be made before the machine can be built.
A bit later (pp. 272-273):
The state of the art will support initial efforts in any of these directions. Experience gained there will make the next step clearer. If the work is carefully planned, knowledge will grow in a useful way. [...]
We have no idea how long it will take to reach Prospero. Fifteen years ago one group of investigators claimed that practical automatic translation was just around the corner and another group was promising us a computer that could play a high-quality game of chess. We know more now than we did then and neither of those marvels is just around the current corner. Nor is Prospero. But there is a difference. To sell a translation made by machine, one must first have the machine. Humanistic scholars are not salesmen, and each generation has its own excitement. In some remote future may lie the excitement of using Prospero as a tool, but just at hand is the excitement of using Prospero as a distant beacon. We ourselves and our immediate successors have the opportunity to clarify the mechanisms of artistic creation and interpretation. We may well value that opportunity, which can come but once in intellectual history.
Neurosymbolic hybrid approach to driver collision warning
Hybrid neurosymbolic system beats end-to-end deep learning in autonomous driving study from Caltech. https://t.co/5ODssnoupa
— Gary Marcus 🇺🇦 (@GaryMarcus) April 5, 2022
Abstract from the linked article:
There are two main algorithmic approaches to autonomous driving systems: (1) An end-to-end system in which a single deep neural network learns to map sensory input directly into appropriate warning and driving responses. (2) A mediated hybrid recognition system in which a system is created by combining independent modules that detect each semantic feature. While some researchers believe that deep learning can solve any problem, others believe that a more engineered and symbolic approach is needed to cope with complex environments with less data. Deep learning alone has achieved state-of-the-art results in many areas, from complex gameplay to predicting protein structures. In particular, in image classification and recognition, deep learning models have achieved accuracies as high as humans. But sometimes it can be very difficult to debug if the deep learning model doesn't work. Deep learning models can be vulnerable and are very sensitive to changes in data distribution. Generalization can be problematic. It's usually hard to prove why it works or doesn't. Deep learning models can also be vulnerable to adversarial attacks. Here, we combine deep learning-based object recognition and tracking with an adaptive neurosymbolic network agent, called the Non-Axiomatic Reasoning System (NARS), that can adapt to its environment by building concepts based on perceptual sequences. We achieved an improved intersection-over-union (IOU) object recognition performance of 0.65 in the adaptive retraining model compared to IOU 0.31 in the COCO data pre-trained model. We improved the object detection limits using RADAR sensors in a simulated environment, and demonstrated the weaving car detection capability by combining deep learning-based object detection and tracking with a neurosymbolic model.
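The "IOU" figures in the abstract refer to intersection-over-union, the standard overlap metric in object detection: the area shared by a predicted box and a ground-truth box, divided by the area of their union, so that 1.0 is a perfect match and 0.0 is no overlap. A minimal sketch (I'm assuming the common `(x1, y1, x2, y2)` corner format here; the paper's own implementation may differ):

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    # Width and height of the overlap rectangle, clamped to zero
    # when the boxes are disjoint.
    ix = max(0.0, min(box_a[2], box_b[2]) - max(box_a[0], box_b[0]))
    iy = max(0.0, min(box_a[3], box_b[3]) - max(box_a[1], box_b[1]))
    inter = ix * iy
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0
```

On this scale, the reported improvement from 0.31 (COCO pre-trained) to 0.65 (adaptive retraining) roughly doubles the average overlap between predicted and true boxes.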
Remarks on the visual system and artificial neural nets
I wrote this blog post a year ago about how the visual system is not convolutional. I didn't think much of it until somebody told me last week they had found it useful. I re-read it today: it *is* really interesting! You should totally read it https://t.co/2yKaJZ5R6p
— Patrick Mineault (@patrickmineault) April 5, 2022
Monday, April 4, 2022
New state laws are making states more different from each other
Shawn Hubler and Jill Cowan, Flurry of New Laws Move Blue and Red States Further Apart, NYTimes, April 3, 2022.
As Republican activists aggressively pursue conservative social policies in state legislatures across the country, liberal states are taking defensive actions. Spurred by a U.S. Supreme Court that is expected to soon upend an array of longstanding rights, including the constitutional right to abortion, left-leaning lawmakers from Washington to Vermont have begun to expand access to abortion, bolster voting rights and denounce laws in conservative states targeting L.G.B.T.Q. minors.
The flurry of action, particularly in the West, is intensifying already marked differences between life in liberal- and conservative-led parts of the country. And it’s a sign of the consequences when state governments are controlled increasingly by single parties. Control of legislative chambers is split between parties now in two states — Minnesota and Virginia — compared with 15 states 30 years ago.
“We’re further and further polarizing and fragmenting, so that blue states and red states are becoming not only a little different but radically different,” said Jon Michaels, a law professor who studies government at the University of California, Los Angeles.
Americans have been sorting into opposing partisan camps for at least a generation, choosing more and more to live among like-minded neighbors, while legislatures, through gerrymandering, are reinforcing their states’ political identities by solidifying one-party rule.