Friday, July 26, 2024

The art of misdirection – Giving shots to infants and toddlers

One day my YouTube feed presented me with this short video. I was curious, so I watched it. It was quite remarkable.

This one is similar, but instead of giving a shot to a toddler, we see a doctor giving a shot to an infant.

Magicians and pickpockets also employ misdirection, though to achieve somewhat different ends.

Thursday, July 25, 2024

Whisper Not - Benny Golson and McCoy Tyner

Whisper Not (Golson): Benny Golson (born 1929), tenor saxophone; McCoy Tyner (1938-2020), piano; Avery Sharpe, bass; Aaron Scott, drums. Jazz in Marciac 1997.

Hot pink petals

Communication stripped to the barest minimum?

Hays, D. G. (1973). "Language and Interpersonal Relationships." Daedalus 102(3): 203-216. The following passage is from pp. 204-205:
The experiment strips conversation down to its barest essentials by depriving the subject of all language except for two pushbuttons and two lights, and by suggesting to him that he is attempting to reach an accord with a mere machine. We brought two students into our building through different doors and led them separately to adjoining rooms. We told each that he was working with a machine, and showed him lights and pushbuttons. Over and over again, at a signal, he would press one or the other of the two buttons, and then one of two lights would come on. If the light that appeared corresponded to the button he pressed, he was right; otherwise, wrong. The students faced identical displays, but their feedback was reversed: if student A pressed the red button, then a moment later student B would see the red light go on, and if student B pressed the red button, then student A would see the red light. On any trial, therefore, if the two students pressed matching buttons they would both be correct, and if they chose opposite buttons they would both be wrong.

We used a few pairs of RAND mathematicians; but they would quickly settle on one color, say red, and choose it every time. Always correct, they soon grew bored. The students began with difficulty, but after enough experience they would generally hit on something. Some, like the mathematicians, chose one color and stuck with it. Some chose simple alternations (red-green-red-green). Some chose double alternations (red-red-green-green). Some adopted more complex patterns (four red, four green, four red, four green, sixteen mixed and mostly incorrect, then repeat). The students, although they were sometimes wrong, were rarely bored. They were busy figuring out the complex patterns of the machine.

But where did the patterns come from? Although neither student knew it, they arose out of the interaction of two students.
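The setup is simple enough to simulate. Here's a minimal sketch in Python; the particular strategies and the trial count are my own illustrative assumptions, not from Hays:

```python
# Minimal sketch of the crossed-feedback experiment described above.
# Each subject presses a button; each is "correct" exactly when the
# two choices match, since each subject's light shows the *other*
# subject's button. Strategies and trial count are illustrative.

def play(strategy_a, strategy_b, trials=24):
    """Return the fraction of trials on which the two subjects agreed."""
    hits = sum(strategy_a(t) == strategy_b(t) for t in range(trials))
    return hits / trials

always_red = lambda t: "red"                                   # the mathematicians
alternate = lambda t: "red" if t % 2 == 0 else "green"         # red-green-red-green
double_alt = lambda t: "red" if (t // 2) % 2 == 0 else "green" # red-red-green-green

print(play(always_red, always_red))  # 1.0 -- always correct, soon boring
print(play(double_alt, double_alt))  # 1.0 -- any shared pattern works
print(play(always_red, alternate))   # 0.5 -- unmatched strategies fail half the time
```

Any pattern works so long as both subjects converge on it, which is presumably why the "complex patterns of the machine" the students discovered were really patterns of each other.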

Monday, July 22, 2024

Double reflection of the sun

The risks of proliferating AI agents

Malcolm Murray, The Shifting Nature Of AI Risk, 3 Quarks Daily, July 22, 2024.

Second and more importantly, much of the focus in AI is currently on building AI agents. As can be seen in comments from Sam Altman and Andrew Ng, this is seen as the next frontier for AI. When intelligent AI agents are available at scale, this is likely to change the risk landscape dramatically. [...]

Introducing agentic intelligences could mean that we lose the ability to analyze AI risks and chart their path through the risk space. The analysis can become too complex since there are too many unknown variables. Note that this is not a question of “AGI” (Artificial General Intelligence), just agentic intelligences. We already have AI models that are much more capable than humans in many domains (playing games, producing images, optical recognition). However, the world is still recognizable and analyzable because these models are not agentic. The question of AGI and when we will “get it”, a common discussion topic, is a red herring and the wrong question. There is no such thing as one type of “general” intelligence. The question should be when will we have a plethora of agentic intelligences operating in the world trying to act according to preferences that may be largely unknown or incomprehensible to us.

For now, what this means is that we need to increase our efforts on AI risk assessment, to be as prepared as possible as AI risk continues to get more complex. However, it also means we should start planning for a world where we won’t be able to understand the risks. The focus in that world needs to instead be on resilience.

There's more at the link.

Sunday, July 21, 2024

A touch of yellow

Nine-year-old chess prodigy hopes to become the youngest grandmaster ever

Isabella Kwai, At 5, She Picked Up Chess as a Pandemic Hobby. At 9, She’s a Prodigy. NYTimes, July 21, 2024.

Since learning chess during a pandemic lockdown, Bodhana Sivanandan has won a European title in the game, qualified for this year’s prestigious Chess Olympiad tournament, and established herself as one of England’s best players.

She also turned 9 in March. That makes Bodhana, a prodigy from the London borough of Harrow, the youngest player to represent England at such an elite level in chess, and quite possibly the youngest in any international sporting competition.

“I was happy and I was ready to play,” Bodhana said in a phone interview, two days after she learned that she had been selected for this year’s Olympiad, an international competition considered to be the game’s version of the Olympics.

The fourth-grader, who learned chess four years ago when she stumbled across a board her father was planning to discard, knows exactly what she wants to accomplish next. “I’m trying to become the youngest grandmaster in the world,” she said, “and also one of the greatest players of all time.”

Chess is one of the arenas in which prodigies emerge. Music and math are others. Why? I assume it has something to do with their brains. What?

I note also that playing chess has been central to the development of AI and that it is the first arena in which computers equaled and then surpassed the best human performance. What can we make of that? I don’t know, but surely there’s something to be discovered here.

* * * * *

Note the final paragraph of this post, On the significance of human language to the problem of intelligence (& superintelligence).

The question of machine superintelligence would then become:

Will there ever come a time when we have problem-solving networks where there exists at least one node that is assigned to a non-routine task, a creative task, if you will, that only a computer can perform?

That’s an interesting question. I specify a non-routine task because we have all kinds of computing systems that are more effective at various tasks than humans are, from simple arithmetic calculations to such things as solving the structure of a protein. I fully expect that more and more systems will evolve that are capable of solving such sophisticated, but ultimately routine, problems. But it’s not at all obvious to me that computational systems will eventually usurp all problem-solving tasks.
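To make the question concrete, here's a toy rendering in Python. The network, its tasks, and the assignments are all hypothetical; the point is only to display the shape of the predicate being asked about:

```python
# Hypothetical problem-solving network: task nodes, each assigned to a
# human or a computer, some flagged as non-routine (creative).
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    routine: bool   # arithmetic, protein structure, ... vs. creative work
    assignee: str   # "human" or "computer"

network = [
    Task("arithmetic", routine=True, assignee="computer"),
    Task("protein structure", routine=True, assignee="computer"),
    Task("frame the problem", routine=False, assignee="human"),
]

# The question in the post: will a network ever satisfy this predicate?
print(any(not t.routine and t.assignee == "computer" for t in network))  # False here
```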

Human Go players learn from superhuman AIs

There are more links in the thread.

* * * * *

So: "Last year, we found superhuman Go AIs are vulnerable to “cyclic attacks”. This adversarial strategy was discovered by AI but replicable by humans."

Superhuman Go AIs discover a new region of the Go search-space. That's one thing. The fact that, once discovered, humans are able to exploit this region against a superhuman Go AI is just as interesting.

One question we can ask about superintelligence is whether or not so-called superintelligent AIs can do things that are inherently and forever beyond human capacity. In this particular case, we have humans learning things initially discovered by AIs.

Saturday, July 20, 2024

Coffee and cream

Masha Gessen: Are we on the edge of an autocratic breakthrough?

Masha Gessen, Biden and Trump Have Succeeded in Breaking Reality, NYTimes, July 20, 2024.

The last three paragraphs:

As for Trump, despite the gestures he made in his speech on Thursday night toward national reconciliation, tolerance and unity, the convention reflected the ultimate consolidation of his power. If he is elected, a second Trump administration seems likely to bring what the Hungarian sociologist Balint Magyar has termed an “autocratic breakthrough” — structural political change that is impossible to reverse by electoral means. But if we are in an environment in which nothing is believable, in which imagined secrets inspire more trust than the public statements of any authority, then we are already living in an autocratic reality, described by another of Arendt’s famous phrases: “Nothing is true and everything is possible.”

It’s tempting to say that Trump’s autocratic movement has spread like an infection. The truth is, the seeds of this disaster have been sprouting in American politics for decades: the dumbing down of conversation, the ever-growing role of money in political campaigns, the disappearance of local news media and local civic engagement and the consequent transformation of national politics into a set of abstracted images and stories, the inescapable understanding of presidential races as personality contests.

None of this made the Trump presidency inevitable, but it made it possible — and then the Trump presidency pushed us over the edge into the uncanny valley of politics. If Trump loses this year — if we are lucky, that is — it will not end this period; it will merely bring an opportunity to undertake the hard work of recovery.

Nahre Sol talks with Tigran Hamasyan

From the Wikipedia entry for Hamasyan:

Tigran Hamasyan (Armenian: Տիգրան Համասյան; born July 17, 1987) is an Armenian jazz pianist and composer. He plays mostly original compositions, strongly influenced by the Armenian folk tradition, often using its scales and modalities. In addition to this folk influence, Hamasyan is influenced by American jazz traditions and, to some extent, as on his album Red Hail, by progressive rock. His solo album A Fable is most strongly influenced by Armenian folk music. Even in his most overt jazz compositions and renditions of well-known jazz pieces, his improvisations often contain embellishments based on scales from Middle Eastern/Southwest Asian traditions.

Friday, July 19, 2024

Friday Fotos: Ever more flowers

Refactoring training data to make smaller LLMs

Tentatively, this is the most interesting thing I've seen come out of the AI world in a year or so.

A tweet by Andrej Karpathy:

LLM model size competition is intensifying… backwards!

My bet is that we'll see models that "think" very well and reliably that are very very small. There is most likely a setting even of GPT-2 parameters for which most people will consider GPT-2 "smart". The reason current models are so large is because we're still being very wasteful during training - we're asking them to memorize the internet and, remarkably, they do and can e.g. recite SHA hashes of common numbers, or recall really esoteric facts. (Actually LLMs are really good at memorization, qualitatively a lot better than humans, sometimes needing just a single update to remember a lot of detail for a long time). But imagine if you were going to be tested, closed book, on reciting arbitrary passages of the internet given the first few words. This is the standard (pre)training objective for models today. The reason doing better is hard is because demonstrations of thinking are "entangled" with knowledge, in the training data.

Therefore, the models have to first get larger before they can get smaller, because we need their (automated) help to refactor and mold the training data into ideal, synthetic formats.

It's a staircase of improvement - of one model helping to generate the training data for next, until we're left with "perfect training set". When you train GPT-2 on it, it will be a really strong / smart model by today's standards. Maybe the MMLU will be a bit lower because it won't remember all of its chemistry perfectly. Maybe it needs to look something up once in a while to make sure.
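The "closed book" test Karpathy describes is ordinary next-token prediction. Here's a minimal sketch of that objective in Python with PyTorch; the embedding-plus-linear "model" is a placeholder assumption standing in for a real transformer stack:

```python
# Sketch of the standard (pre)training objective: given the tokens so
# far, predict the next one. The toy model below is a placeholder; a
# real LLM would put a transformer between the embedding and the head.
import torch
import torch.nn as nn
import torch.nn.functional as F

vocab_size, d_model = 1000, 64
embed = nn.Embedding(vocab_size, d_model)
head = nn.Linear(d_model, vocab_size)

tokens = torch.randint(0, vocab_size, (1, 32))   # one "passage" of 32 tokens
inputs, targets = tokens[:, :-1], tokens[:, 1:]  # predict position t+1 from t

logits = head(embed(inputs))                     # (1, 31, vocab_size)
loss = F.cross_entropy(logits.reshape(-1, vocab_size), targets.reshape(-1))
print(loss.item())  # minimizing this over the whole internet rewards
                    # memorization and reasoning alike -- the "entanglement"
```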

That tweet is linked to this: