Saturday, March 25, 2023

Jersey City peek-a-boo


Robots and elder-care

Jason Horowitz, Who Will Take Care of Italy’s Older People? Robots, Maybe. NYTimes, Mar. 25, 2023.

CARPI, Italy — The older woman asked to hear a story.

“An excellent choice,” answered the small robot, reclined like a nonchalant professor atop the classroom’s desk, instructing her to listen closely. She leaned in, her wizened forehead almost touching the smooth plastic head.

“Once upon a time,” the robot began a brief tale, and when it finished asked her what job the protagonist had.

“Shepherd,” Bona Poli, 85, responded meekly. The robot didn’t hear so well. She rose out of her chair and raised her voice. “Shep-herd!” she shouted.

“Fantastic,” the robot said, gesticulating awkwardly. “You have a memory like a steel cage.”

The scene may have the dystopian “what could go wrong?” undertones of science fiction at a moment when both the promise and perils of artificial intelligence are coming into sharper focus. But for the exhausted caregivers at a recent meeting in Carpi, a handsome town in Italy’s most innovative region for elder care, it pointed to a welcome, not-too-distant future when humanoids might help shrinking families share the burden of keeping the Western world’s oldest population stimulated, active and healthy.

Italy's elders:

Robots are already interacting with the old in Japan and have been used in nursing homes in the United States. But in Italy, the prototype is the latest attempt to recreate an echo of the traditional family structure that kept aging Italians at home.

The Italy of popular imagination, where multigenerational families crowd around the table on Sunday and live happily under one roof, is being buffeted by major demographic headwinds.

Low birthrates and the flight of many young adults for economic opportunities abroad has depleted the ranks of potential caregivers. Those left burdened with the care are often women, taking them out of the work force, providing a drag on the economy and, experts say, further shrinking birthrates.

Yet home care remains central to the notion of aging in a country where nursing homes exist but Italians vastly prefer finding ways to keep their old with them.

For decades, Italy avoided a serious reform of its long-term care sector by filling the gap with cheap, and often off-the-books, live-in workers, many from post-Soviet Eastern Europe — and especially Ukraine.

“That’s the long-term care pillar of this country,” said Giovanni Lamura, the director of Italy’s leading socio-economic research center on aging. “Without that, the whole system would collapse.”

There's more at the link.

Friday, March 24, 2023

X marks the spot


Rodney Brooks sounds a cautionary note about GPTs

Brooks has just made a post specifically directed at the hype surrounding transformers. First, a bit of historical perspective:

A few such instances of AI technologies that have induced gross overestimates of how soon we would get to AGI, in roughly chronological order, that I personally remember include:

John McCarthy’s estimate that the computers of the 1960’s were powerful enough to support AGI, Minsky and Michie and Nilsson each believing that search algorithms were the key to intelligence, neural networks (volume 3, perceptrons) [[I wasn’t around for the first two volumes; McCulloch and Pitts in 1943, Minsky in 1953]], first order logic, resolution theorem proving, MacHack (chess 1), fuzzy logic, STRIPS, knowledge-based systems (and revolutionizing medicine), neural networks (volume 4, back propagation), the primal sketch, self driving cars (Dickmanns, 1987), reinforcement learning (rounds 2 and 3), SOAR, qualitative reasoning, support vector machines, self driving cars (Kanade et al, 1997), Deep Blue (chess 2), self driving cars (Thrun, 2007), Bayesian inference, Watson (Jeopardy, and revolutionizing medicine), neural networks (volume 5, deep learning), Alpha GO, reinforcement learning (round 4), generative images, and now large language models. All have heralded the imminence of human level intelligence in machines. All were hyped up to the limit, but mostly in the days when very few people were even aware of AI, so very few people remember the levels of hype. I’m old. I do remember all these, but have probably forgotten quite a few…

None of these things have lived up to that early hype. As Amara predicted at first they were overrated. But at the same time, almost every one of these things have had long lasting impact on our world, just not in the particular form that people first imagined. As we twirled them around and prodded them, and experimented with them, and failed, and retried, we remade them in ways different from how they were first imagined, and they ended up having bigger longer term impacts, but in ways not first considered.

How does this apply to GPT world?

Then a caveat:

Back in 2010 Tim O’Reilly tweeted out “If you’re not paying for the product then you’re the product being sold.”, in reference to things like search engines and apps on telephones.

I think that GPTs will give rise to a new aphorism (where the last word might vary over an array of synonymous variations):

If you are interacting with the output of a GPT system and didn’t explicitly decide to use a GPT then you’re the product being hoodwinked.

I am not saying everything about GPTs is bad. I am saying that, especially given the explicit warnings from Open AI, that you need to be aware that you are using an unreliable system.

He goes on to say:

When no person is in the loop to filter, tweak, or manage the flow of information GPTs will be completely bad. That will be good for people who want to manipulate others without having revealed that the vast amount of persuasive evidence they are seeing has all been made up by a GPT. It will be bad for the people being manipulated.

And it will be bad if you try to connect a robot to GPT. GPTs have no understanding of the words they use, no way to connect those words, those symbols, to the real world. A robot needs to be connected to the real world and its commands need to be coherent with the real world. Classically it is known as the “symbol grounding problem”. GPT+robot is only ungrounded symbols. [...]

My argument here is that GPTs might be useful, and well enough boxed, when there is an active person in the loop, but dangerous when the person in the loop doesn’t know they are supposed to be in the loop. [This will be the case for all young children.] Their intelligence, applied with strong intellect, is a key component of making any GPT be successful.

At last, his specific predictions:

Here I make some predictions for things that will happen with GPT types of systems, and sometimes coupled with stable diffusion image generation. These predictions cover the time between now and 2030. Some of them are about direct uses of GPTs and some are about the second and third order effects they will drive.

  1. After years of Wikipedia being derided as not a referable authority, and not being allowed to be used as a source in serious work, it will become the standard rock solid authority on just about everything. This is because it has built a human powered approach to verifying factual knowledge in a world of high frequency human generated noise.
  2. Any GPT-based application that can be relied upon will have to be super-boxed in, and so the power of its “creativity” will be severely limited.
  3. GPT-based applications that are used for creativity will continue to have horrible edge cases that sometimes rear their ugly heads when least expected, and furthermore, the things that they create will often arguably be stealing the artistic output of unacknowledged humans.
  4. There will be no viable robotics applications that harness the serious power of GPTs in any meaningful way.
  5. It is going to be easier to build from scratch software stacks that look a lot like existing software stacks.
  6. There will be much confusion about whether code infringes on copyright, and so there will be a growth in companies that are used to certify that no unlicensed code appears in software builds.
  7. There will be surprising things built with GPTs, both good and bad, that no-one has yet talked about, or even conceived.
  8. There will be incredible amounts of misinformation deliberately created in campaigns for all sorts of arenas from political to criminal, and reliance on expertise will become more discredited, since the noise will drown out any signal at all.
  9. There will be new categories of pornography.

Friday Fun: Da Naz [and Groucho Marx]

Friday Fotos: FR8s! Haven't posted any of these in awhile


So, just why do GPTs have to operate by continuing an existing string?

Why can’t they, you know, just come out with it, like we do? 

Yeah, I know, that’s how they’re trained; they’re trained to predict the next token. I’m not sure what that tells me.

So, when I give ChatGPT the prompt, “Tell me a story,” that isn’t what is actually presented to the underlying LLM. There isn’t anything that ‘naturally’ follows such a string. It could be literally almost anything. So, a bit of manipulation has to take place in order to give the LLM something to work with. But why?
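To make that concrete, here is a toy sketch of the kind of wrapping I have in mind. The template is my own invention for illustration; OpenAI's actual system prompts and chat formatting are not public in this form:

```python
# A minimal, hypothetical sketch of how a chat front end might wrap a bare
# request like "Tell me a story" into a string the underlying LLM can
# plausibly continue. The template here is invented for illustration.

def wrap_prompt(user_message: str) -> str:
    """Turn a bare request into a dialogue transcript the model can continue."""
    return (
        "The following is a conversation between a helpful assistant "
        "and a user.\n"
        f"User: {user_message}\n"
        "Assistant:"
    )

prompt = wrap_prompt("Tell me a story")
print(prompt)
```

The LLM then simply continues the transcript after "Assistant:", and the wrapper presents that continuation as the reply.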

I decided to put the question to GPT-4:

GPTs (that is, transformers) are constructed to continue an existing string of tokens. Why is that? Why can’t they simply answer a question without continuing an existing string?

GPTs, or Generative Pre-trained Transformers, are language models that are pre-trained on large amounts of text data using unsupervised learning. These models are designed to generate coherent and contextually relevant text based on the input provided to them.

The reason GPTs are constructed to continue an existing string of tokens is because they are designed to capture the context and meaning of the text, rather than just providing a one-word answer to a question. By generating a continuation of the input text, the model can incorporate the surrounding context and generate a more complete and accurate response.

In other words, GPTs are not just simple question-answering systems, but rather complex language models that can generate text that is contextually relevant and meaningful. By training on a large corpus of text, GPTs can learn to identify patterns and relationships in language, allowing them to generate text that is more than just a simple answer to a question.

That being said, GPTs can be fine-tuned to answer specific questions or perform specific tasks, such as question-answering or text classification, by providing them with appropriate training data and modifying their architecture accordingly.

That phrase hit home: “...they are designed to capture the context and meaning of the text, rather than just providing a one-word answer to a question.” It’s that last clause that got my attention.

So I asked another question:

Tell me about that last. How is their architecture modified? What kind of training data is necessary?

To fine-tune a GPT model for a specific task, such as question-answering, the architecture of the model needs to be modified to accommodate the specific requirements of the task. This typically involves adding new layers or modifying existing layers of the model to better capture the specific features of the task.

For example, to fine-tune a GPT model for question-answering, the input data to the model would consist of a question and a passage of text that may contain the answer to the question. The model would then be trained to predict the correct answer based on the input question and text.

To accomplish this, the GPT model would be modified by adding a new output layer that is specific to the question-answering task. This layer would take the final hidden state of the model and generate a probability distribution over possible answers. During training, the model would be optimized to minimize the difference between the predicted probability distribution and the true distribution of possible answers.

The type of training data required for fine-tuning a GPT model for a specific task depends on the task itself. For question-answering, the training data would consist of pairs of questions and corresponding passages of text that contain the answers. For text classification, the training data would consist of labeled examples of text, with each example assigned to a specific category.

In general, the training data needs to be carefully curated and annotated to ensure that the model is able to learn the relevant patterns and relationships required for the specific task. This typically involves a significant amount of preprocessing and quality control to ensure that the training data is consistent and accurate.
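The "new output layer" GPT-4 describes, a layer mapping the final hidden state to a probability distribution over candidate answers, can be sketched in a few lines. This is a toy with random stand-in numbers, not a real fine-tuning run; in practice the weights W and b would be learned by minimizing cross-entropy against labeled answers:

```python
import numpy as np

# Toy illustration of a task-specific output head: project the transformer's
# final hidden state onto logits over candidate answers, then softmax.
# The hidden state and weights are random stand-ins, not trained values.

rng = np.random.default_rng(0)

hidden_size, num_answers = 8, 4
final_hidden_state = rng.normal(size=hidden_size)  # stand-in for the model's output
W = rng.normal(size=(num_answers, hidden_size))    # the added task-specific layer
b = np.zeros(num_answers)

logits = W @ final_hidden_state + b
probs = np.exp(logits - logits.max())
probs /= probs.sum()                               # softmax over candidate answers

print(probs)  # sums to 1; training would push mass toward the true answer
```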

Well, OK. That doesn’t really tell me what I want to know. But there’s something going on there that’s worth thinking about. I went on to quiz it about how GPTs are fitted out to tell stories – which I may post later, or perhaps just append those remarks to this post – but I had something to think about.

And a bit later it hit me – Sensory deprivation! Slocum! I’ve got some posts on those things. Joshua Slocum, who circumnavigated the globe alone at the turn of the 20th century. He started seeing things. The same has happened to other explorers. And that’s what happens during sensory deprivation. You start hallucinating.

It's as though the mind actively needs and seeks input. If we’ve really got a so-called Bayesian brain that’s constantly guesstimating what’s coming up next and revising its guesstimates according to what actually shows up, then it does seem like we’re physically locked to the world and that the brain/mind needs input in order to maintain stability.
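That guess-and-revise loop is, at bottom, just Bayesian updating. Here is a toy version, a coin-flip example of my own choosing rather than anything from the predictive-processing literature:

```python
# Toy Bayesian updating: revise a belief about a coin's bias as flips arrive.
# Under a uniform Beta(1,1) prior, the posterior mean after observing
# `heads` heads and `tails` tails is (heads + 1) / (heads + tails + 2).

heads = tails = 0
for flip in [1, 1, 0, 1]:      # observed data: 1 = heads, 0 = tails
    if flip == 1:
        heads += 1
    else:
        tails += 1
    # Belief revised after each observation:
    estimate = (heads + 1) / (heads + tails + 2)
    print(round(estimate, 3))
```

Each new observation nudges the estimate; with no observations at all, the belief just sits at its prior.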

For a GPT the world consists of text. Without text it is nothing. Without textual input those 175 billion weights are just a bunch of numbers. But when you put “pressure” on them by presenting them with a string, NOW they’re in touch with a world. And that world imposes order on them.

This, whatever it is, is somewhere moving from philosophy toward a quasi-technological account of how these things work – or is it the other direction? Does it matter? That’s right where we need to be at this point. We really are dealing with something new, really new, and philosophy is what we’ve got for dealing with that, at least it is if we can pry it free of its layers of scholastic encrustation.

Things are beginning to make sense.

Manga and AI in Japan

Thursday, March 23, 2023

More cabbage


Why is Sweden so successful in music? Scenes, it nurtures scenes.

Henrik Karlsson, Scene creation engines and apprenticeships, Escaping Flatland, Mar. 21, 2023.

After introductory material based on a story about specific musicians, Johan Schuster, Max Martin, and others, Karlsson gets down to thinking:

But the most important factor behind Sweden’s outsized success seems to be that Sweden by accident created unusually good conditions for musical scene creation.

What is a scene? It is a group of people who are producing work in public but aimed at each other. The metal bands in Karlshamn, where Schuster grew up, were a scene. They performed on stage — but the audience was mainly their friends who played in other bands. If they were anything like the other local scenes I’ve seen, they challenged and supported each other to be bolder, more ambitious, better. A scene, to borrow a phrase from Visakan Veerasamy, “is a group of people who unblock each other at an accelerating rate”.

Almost invariably, when you notice someone doing bold original work, there is a scene behind them. Renaissance Florence was a scene for scholars and painters, flowering in Leonardo DaVinci. The coffeehouses of Elizabethan London were a scene for playwrights, flowering in Shakespeare.

But scenes are hard to get off the ground. They need resources and infrastructure that enable collective and open-ended tinkering. This is rarely achieved at scale.

But it was exactly this kind of infrastructure that evolved in Sweden during the mid-twentieth century. It was not done intentionally. Rather, it was an accidental side-effect of two political projects that were subverted by the grassroots: the Swedish public music education program and the study circle movement.

On the one hand:

In the 1940s, Swedish conservatives and church leaders were afraid that music imports from the US were leading to bad morals. They introduced an extensive and affordable after-school program for music. Kids could get tutoring 1-on-1 or in small groups for free once a week. They could borrow instruments, too. Later, fees were introduced, but around 30% of kids still receive free tutoring, and the fees only run up to about $100 per semester. About every third Swedish kid participates.

On the other:

In 1902, Oscar Olsson, a secondary school teacher, was working in the temperance movement, which aimed to steer the culture away from alcohol consumption and toward service of the community. To further this aim, Olsson started what he called study circles. It was basically a book club. A group of people would meet and discuss a book, and through these meetings they would develop themselves, forge social bonds, and inspire each other to raise their aspirations.

The temperance movement successfully lobbied the Riksdag, Sweden’s parliament, to get funding to purchase books. This funding was channeled through various grassroots organizations to their members on the condition that the books were made available to the general public afterward.

This funding led to explosive growth in self-organized study groups. By the 1970s, ten percent of the population were active members. But the study circles rapidly drifted away from Olsson’s original intentions. Instead of being a tool to further the agenda of political movements, it became an almost permissionless infrastructure for learning. [...] In a sort of exhausted truce, the control was ceded to the learners. Instead of ideological book clubs, people formed a myriad of learning communities: knitting clubs, discussion groups, and . . . bands.

This in turn led to the formation of lots of learning centers organized around different topics and activities, including music. And so:

The infrastructure provided by the music schools and the learning centers created fertile breeding grounds for scenes. This is where Max Martin, Johan Schuster, and nearly every other musician mentioned in this piece, spent their formative years.

And out of this breeding ground, another institution grew: the tradition for established songwriters to find young talent and nurture it through apprenticeships.

There’s much more at the link.

Red cabbage up close


Adam Savage on Intuition [+ my intuitions about symbolic AI]

From the YouTube page:

Adam shares his absolute favorite magic book growing up: Magic with Science by Walter B. Gibson. Picking up this vintage copy is giving Adam memories of the countless times he pored over this book and how its demonstration of practical science experiments informed his approach and aesthetic style as a science communicator. Every illustration is clearcut and charming, and Adam is so happy to be reunited with this book!

Savage talks about reading about how things work in general, but in particular how magic tricks work as described and illustrated in this book. He puts a lot of stress on those illustrations.

And he also talks a lot about intuition (and how it is different from explicit knowledge). You get intuition, not from reading things, but from trying things out. Here he seems to be talking mostly about building things from ‘stuff’ and about doing those magic tricks. Intuition gives you a feel for things without, however, being (quite) able to explain what’s going on. You just know that this or that will work, or not.

I agree with this, and think a lot about intuition. I’m mostly interested in intuitions about literary works, and about thinking about the mind and so forth. In particular, it does seem to me that if you’ve done a lot of work with symbolic accounts of human thought, as I’ve done with cognitive networks, you have intuitions about language and mind that you can’t get from working on large language models (LLMs), such as GPTs. As far as I’m concerned, when advocates of deep learning discount the importance of symbolic thought, they (almost literally) don’t know what they’re talking about. Not only are they unfamiliar with the theories and models, but they lack the all-important intuitions.

More later.

* * * * *

Revised a bit from a note to Steve Pinker:

You’ve spent a lot of time thinking about language mechanisms in detail, so have I, though a somewhat different set of mechanisms. But I don’t think Mr. X has, nor, for that matter, have most of the people involved in machine learning. Machine learning is about mechanisms to construct some kind of model over a huge database. But how that model actually works, that’s obscure. That is to say, the mechanisms that actually enact the cognitive labor are opaque. The people who build the models thus do not, cannot, have intuitions about them. In a sense, they’re not real. By extension, the whole world of cognitive science and GOFAI is not real. It is past. The fact that it didn’t work very well is what’s salient. Therefore, the reasoning seems to go, those ideas have no value.

And THAT’s a problem. Every time Mr. X or someone else would talk about machines surpassing Einstein, Planck, etc., I’d wince. I couldn’t figure out why. At first I thought it might be implied disrespect, but I decided that wasn’t it. Rather, it’s a trivialization of human accomplishment in the face of the dissonance between their confidence and the fact that they haven’t got a clue about what would be involved beyond LOTS AND LOTS OF COMPUTE.

There’s no there there.

Wednesday, March 22, 2023

Hokusai "Great Wave" print sells for $2.8 million

Rome wasn't built in a day, neither was Jersey City


Report from • I’m Back! • Mar 22, 2023

Here’s the stats from 7:30 this morning:

I’m definitely down from the peak of March 13, but I seem to be running a bit higher than things were in February. Can I keep it up? Who knows.

But here’s the big news:

I’m now in the 99.9th percentile (top 0.1%) of Academia users. I had been running at the 99.5th percentile (top 0.5%). When you consider the nature of this distribution, that’s a significant jump.

Last March was the first time I got this high in the rankings, and that was an unexplained fluke. I had 875 views on March 28. I have no idea what caused that, but I was unable to sustain it. Perhaps things will go better this time. We’ll see.

How to make a fart sound using Ableton • [Winnebago Trickster Cycle] • {ChatGPT is allowed to tell fart jokes!}

I know, it's ridiculous. But it's also very human. Here, for example, is an excerpt from episode 23 in the Winnebago Trickster cycle as collected by Paul Radin:

As he went wandering around aimlessly he suddenly heard someone speaking. He listened very carefully and it seemed to say, 'He who chews me will defecate; he will defecate!' That was what it was saying. 'Well, why is this person talking in this manner?' said Trickster. So he walked in the direction from which he had heard the speaking and again he heard, quite near him, someone saying: 'He who chews me, he will defecate; he will defecate!' This is what was said. 'Well, why does this person talk in such fashion?' said Trickster. Then he walked to the other side. So he continued walking along. Then right at his very side, a voice seemed to say, 'He who chews me, he will defecate; he will defecate!' 'Well, I wonder who it is who is speaking. I know very well that if I chew it, I will not defecate.' But he kept looking around for the speaker and finally discovered, much to his astonishment, that it was a bulb on a bush. The bulb it was that was speaking. So he seized it, put it in his mouth, chewed it, and then swallowed it. He did just this and then went on.

'Well, where is the bulb gone that talked so much? Why, indeed, should I defecate? When I feel like defecating, then I shall defecate, no sooner. How could such an object make me defecate!' Thus spoke Trickster. Even as he spoke, however, he began to break wind. 'Well this, I suppose, is what it meant. Yet the bulb said I would defecate, and I am merely expelling gas. In any case I am a great man even if I do expel a little gas!' Thus he spoke. As he was talking he again broke wind. This time it was really quite strong. 'Well, what a foolish one I am. This is why I am called Foolish One, Trickster.' Now he began to break wind again and again. 'So this is why the bulb spoke as it did, I suppose.' Once more he broke wind. This time it was very loud and his rectum began to smart. 'Well, it surely is a great thing!' Then he broke wind again, this time with so much force, that he was propelled forward. 'Well, well, it may even make me give another push, but it won't make me defecate,' so he exclaimed defiantly. The next time he broke wind, the hind part of his body was raised up by the force of the explosion and he landed on his knees and hands. 'Well, go ahead and do it again! Go ahead and do it again!' Then, again, he broke wind. This time the force of the expulsion sent him far up in the air and he landed on the ground, on his stomach. The next time he broke wind, he had to hang on to a log, so high was he thrown. However, he raised himself up and, after a while, landed on the ground, the log on top of him. He was almost killed by the fall. The next time he broke wind, he had to hold on to a tree that stood near by.

And so it goes.

Fart jokes and fart sounds are a staple of comedy of all sorts. And that, I suppose, is what this educational video is. It is comedy in the form of a deadpan tutorial on how to synthesize a fart sound with Ableton Live, a standard digital audio workstation for Macintosh and Windows computers. I laughed out loud a half-dozen or more times while listening to it. The humor is in the contrast: on the one hand, the evolving fart sound, getting ever more convincing; on the other, the high-tech means of creating it.


From the YouTube page:

Mar 19, 2023 #fartsounds #ableton #sounddesign
I couldn't find an Ableton-based tutorial on this very important topic, so I decided to make my own.

0:00 Intro
0:38 Basic Oscillator
1:55 VCA Envelope and Filter
3:17 What's causing the sound?
4:35 Filter and Wavetable Envelopes
7:25 HPF and Effects
10:16 Modwheel Mapping
11:02 Final Sound
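Out of curiosity, here is a rough sketch of the same basic moves in code: a buzzy oscillator with a falling, wobbling pitch, shaped by a fast decay envelope. This is my own NumPy recipe, not the video's actual Ableton patch, and the parameter values are guesses:

```python
import numpy as np

# Sketch of a synthesized fart: a low, buzzy oscillator whose pitch falls
# and wobbles, multiplied by a fast-decaying amplitude envelope.
# All parameters are rough guesses, not taken from the video.

sr = 44100                                   # sample rate in Hz
t = np.linspace(0, 0.4, int(sr * 0.4), endpoint=False)

# Falling, wobbling pitch: starts near 90 Hz, decays, with a 25 Hz wobble.
freq = 90 * np.exp(-3 * t) * (1 + 0.3 * np.sin(2 * np.pi * 25 * t))
phase = 2 * np.pi * np.cumsum(freq) / sr     # integrate frequency to get phase

osc = np.sign(np.sin(phase)) * 0.6 + 0.4 * np.sin(phase)  # square-ish buzz
env = np.exp(-8 * t)                          # fast decay, like a plucked VCA envelope

audio = osc * env
audio /= np.max(np.abs(audio))                # normalize to [-1, 1]
print(len(audio))
```

Writing the array out with something like `scipy.io.wavfile.write` (if SciPy is available) would let you actually hear the result.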

This is from the good folks at OpenAI:

Tell a fart joke.

Sure, here's a classic one:

Why did the bicycle fall over?

Because it was two-tired! (toot-toot)

See, even ChatGPT is allowed to tell a fart joke. I wonder if they're a cultural universal?

Are fart jokes found in all cultures?

Flatulence is a universal bodily function, and humor is a universal aspect of human culture. Therefore, it's not surprising that references to flatulence and jokes about it are found in many different cultures around the world.

Fart jokes and references to flatulence have been documented in ancient Greek and Roman literature, as well as in the literature of medieval Europe, including works like Chaucer's Canterbury Tales. In Japanese culture, there is a tradition of "he-gassen," or "fart battles," where artists would depict humorous scenes of people farting on each other. In many African cultures, flatulence is considered a sign of good health and is often celebrated with laughter.

That being said, the prevalence of fart jokes and references to flatulence in a particular culture can vary. Some cultures may find such humor to be more acceptable or even encouraged, while others may consider it taboo or inappropriate. Ultimately, the presence and acceptance of fart jokes in a culture is influenced by many factors, including social norms, values, and beliefs.

I rest my case. 

* * * * *

META: I didn't plan this post. It just happened. First I saw the video show up at YouTube. Why not? I thought, it could be interesting. And it was, and funny. I kept laughing. Should I make a post of it? I thought. But I couldn't just post it, straight out. So I started to write a little commentary, something about farts and humor. Then I thought about the Winnebago Trickster cycle, which I've written about in one of my cultural evolution papers. Then I wondered whether or not OpenAI allowed ChatGPT to tell a fart joke. I had to find out. Penultimately, I asked it whether or not fart jokes were found in all cultures.

Finally, this little pendant.