Tuesday, May 30, 2023

Processes mediated by consciousness

See this post on consciousness and the ideas of William Powers: Consciousness, reorganization and polyviscosity, Part 1: The link to Powers

What about the hypothesis of a Global Neural Workspace?

Sunday, May 28, 2023

Yellow, red, orange

Bullet Train [Media Notes 92] – Whitewashing

Bullet Train is a 2022 action comedy starring Brad Pitt and a supporting ensemble. Pitt plays a hitman who takes a job to steal a briefcase full of cash being transported on the bullet train from Tokyo to Kyoto. Several other hitmen are on the same train, each on their own mission. As the film unfolds, it turns out that their lives have crossed in various ways, so that their apparently independent missions collide on the train. There’s a lot of action, lots of blood and violence, the train crashes, and somehow Pitt manages to make it out alive. FWIW, I couldn’t quite follow what was going on, but I’m not sure that matters very much.

This note is about the fact that Pitt is a white actor playing a character who was Japanese in the novel on which the film is based. That fact, as you can imagine, occasioned some criticism, which is mentioned in the film’s Wikipedia entry. This sort of thing is not something that’s high on my list of things to observe – no doubt a reflection of my white privilege – but I’m inclined to take such matters on a case-by-case basis rather than lay down a blanket principle. FWIW, about two years ago I had a post about an op-ed from the NYTimes in which I agreed that casting Mickey Rooney as Japanese in Breakfast at Tiffany’s was absurd, but that there’s no reason to object to casting Olivier as Othello, a role written, after all, for a white actor.

In this case, however, I was a little bothered by Brad Pitt as a hitman living in Japan. For whatever reason, that struck me as a bit incongruous. I was slightly less annoyed that one of the other hitmen, The Prince, was played by a white woman disguised as a schoolgirl. These casting decisions disturbed some kind of balance in the film.

Neil Gershenfeld: Self-Replicating Robots and the Future of Fabrication

0:00 - Introduction
1:29 - What Turing got wrong
6:53 - MIT Center for Bits and Atoms
20:00 - Digital logic
26:36 - Self-assembling robots
37:04 - Digital fabrication
47:59 - Self-reproducing machine
55:45 - Trash and fabrication
1:00:41 - Lab-made bioweapons
1:04:56 - Genome
1:16:48 - Quantum computing
1:21:19 - Microfluidic bubble computation
1:26:41 - Maxwell's demon
1:35:27 - Consciousness
1:42:27 - Cellular automata
1:46:59 - Universe is a computer
1:51:45 - Advice for young people
2:01:02 - Meaning of life 

This is a clip from the video above, perhaps the most important segment of the discussion.

Monday, May 15, 2023

Two more AI videos: Geoffrey Hinton, Aaronson & Hanson

Perhaps I'll offer some commentary later in separate posts. I'm posting these here and now as a reminder to myself.

Thursday, May 4, 2023

Green leaves

A phylogenetic approach to the neural basis of behavior

Abstract of the linked article, Resynthesizing behavior through phylogenetic refinement:

This article proposes that biologically plausible theories of behavior can be constructed by following a method of "phylogenetic refinement," whereby they are progressively elaborated from simple to complex according to phylogenetic data on the sequence of changes that occurred over the course of evolution. It is argued that sufficient data exist to make this approach possible, and that the result can more effectively delineate the true biological categories of neurophysiological mechanisms than do approaches based on definitions of putative functions inherited from psychological traditions. As an example, the approach is used to sketch a theoretical framework of how basic feedback control of interaction with the world was elaborated during vertebrate evolution, to give rise to the functional architecture of the mammalian brain. The results provide a conceptual taxonomy of mechanisms that naturally map to neurophysiological and neuroanatomical data and that offer a context for defining putative functions that, it is argued, are better grounded in biology than are some of the traditional concepts of cognitive science.

Cf. Benzon and Hays, 1988, Principles and Development of Natural Intelligence. Note as well that Cisek has been influenced by William Powers.
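
If you're wondering what "basic feedback control of interaction with the world" amounts to in practice, here's a minimal sketch of the kind of negative-feedback loop Powers built his theory around: the organism acts so as to keep a perceived variable near an internal reference. The variable, reference value, and gain below are arbitrary illustrations, not anything from Cisek's article.

```python
# Minimal sketch of a Powers-style negative-feedback control loop:
# the system acts to keep a perceived quantity near a reference value,
# despite an ongoing disturbance. All numbers here are made up.

temperature = 15.0      # the controlled environmental variable
reference = 22.0        # the internal reference ("desired") value
gain = 0.3              # how strongly error drives action

for step in range(20):
    perception = temperature            # perceive the variable
    error = reference - perception      # compare with the reference
    action = gain * error               # act in proportion to the error
    temperature += action               # action changes the world
    temperature += 0.1                  # a small external disturbance

print(round(temperature, 2))            # settles near the reference (~22.3)
```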

Wednesday, May 3, 2023

The medieval nature of arguments about AGI, etc.

Cherry blossoms (& a turret in the background)

How large language models recall factual associations

Go to Twitter for the rest of the tweet stream. Here's the abstract of the article:

Transformer-based language models (LMs) are known to capture factual knowledge in their parameters. While previous work looked into where factual associations are stored, only little is known about how they are retrieved internally during inference. We investigate this question through the lens of information flow. Given a subject-relation query, we study how the model aggregates information about the subject and relation to predict the correct attribute. With interventions on attention edges, we first identify two critical points where information propagates to the prediction: one from the relation positions followed by another from the subject positions. Next, by analyzing the information at these points, we unveil a three-step internal mechanism for attribute extraction. First, the representation at the last-subject position goes through an enrichment process, driven by the early MLP sublayers, to encode many subject-related attributes. Second, information from the relation propagates to the prediction. Third, the prediction representation "queries" the enriched subject to extract the attribute. Perhaps surprisingly, this extraction is typically done via attention heads, which often encode subject-attribute mappings in their parameters. Overall, our findings introduce a comprehensive view of how factual associations are stored and extracted internally in LMs, facilitating future research on knowledge localization and editing.
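The "interventions on attention edges" the abstract mentions amount to blocking attention from chosen source positions to a target position and measuring how the prediction changes. Here's a toy sketch of that knockout idea using a single NumPy attention head; the dimensions and the "subject" positions are made up, and this is not the authors' code.

```python
# Schematic illustration of an "attention knockout" intervention:
# block information flow from chosen source positions to a target
# position by masking attention scores before the softmax, then
# compare the target's output with and without the block.

import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(q, k, v, blocked=None, target=None):
    """blocked: key positions the target query may not attend to."""
    scores = q @ k.T / np.sqrt(q.shape[-1])        # (seq, seq)
    seq = scores.shape[0]
    scores[np.triu_indices(seq, k=1)] = -np.inf    # causal mask
    if blocked is not None and target is not None:
        scores[target, blocked] = -np.inf          # the knockout
    return softmax(scores) @ v

rng = np.random.default_rng(0)
seq, d = 6, 8          # e.g. the tokens of "The capital of France is"
q, k, v = (rng.normal(size=(seq, d)) for _ in range(3))

clean = attention(q, k, v)
# Block the last position (where the prediction forms) from attending
# to the "subject" positions, here taken to be tokens 0-1.
knocked = attention(q, k, v, blocked=[0, 1], target=seq - 1)
print("change at last position:", np.linalg.norm(clean[-1] - knocked[-1]))
```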

Monday, May 1, 2023

Looking out the windows of my breakfast joint

Using an LLM to decode fMRI images of continuous speech

Oliver Whang, "A.I. Is Getting Better at Mind-Reading," NYTimes, May 1, 2023.

The study centered on three participants, who came to Dr. Huth’s lab for 16 hours over several days to listen to “The Moth” and other narrative podcasts. As they listened, an fMRI scanner recorded the blood oxygenation levels in parts of their brains. The researchers then used a large language model to match patterns in the brain activity to the words and phrases that the participants had heard.

Large language models like OpenAI’s GPT-4 and Google’s Bard are trained on vast amounts of writing to predict the next word in a sentence or phrase. In the process, the models create maps indicating how words relate to one another. A few years ago, Dr. Huth noticed that particular pieces of these maps — so-called context embeddings, which capture the semantic features, or meanings, of phrases — could be used to predict how the brain lights up in response to language.

In a basic sense, said Shinji Nishimoto, a neuroscientist at Osaka University who was not involved in the research, “brain activity is a kind of encrypted signal, and language models provide ways to decipher it.”

In their study, Dr. Huth and his colleagues effectively reversed the process, using another A.I. to translate the participants’ fMRI images into words and phrases. The researchers tested the decoder by having the participants listen to new recordings, then seeing how closely the translation matched the actual transcript.

Almost every word was out of place in the decoded script, but the meaning of the passage was regularly preserved. Essentially, the decoders were paraphrasing.

Limitations:

...training the model is a long, tedious process, and to be effective it must be done on individuals. When the researchers tried to use a decoder trained on one person to read the brain activity of another, it failed, suggesting that every brain has unique ways of representing meaning.

Participants were also able to shield their internal monologues, throwing off the decoder by thinking of other things. A.I. might be able to read our minds, but for now it will have to read them one at a time, and with our permission.
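
The encoding step Whang describes – using context embeddings to predict how the brain responds to language – is, as I understand it, a regularized linear regression from embedding features to each voxel's response. Here's a minimal sketch with synthetic data standing in for both the embeddings and the fMRI; the real pipeline also deals with hemodynamic lag, cross-validation over stories, and so on.

```python
# Sketch of an encoding model: map LM context embeddings of the
# stimulus to per-voxel fMRI responses with ridge regression.
# All data here is synthetic, for illustration only.

import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_timepoints, emb_dim, n_voxels = 1000, 256, 500

# Stand-in for contextual embeddings of the narrated story, one row
# per fMRI timepoint (in practice, downsampled to the scanner's rate).
X = rng.normal(size=(n_timepoints, emb_dim))

# Synthetic voxel responses: a hidden linear map plus noise.
W_true = rng.normal(size=(emb_dim, n_voxels))
Y = X @ W_true + rng.normal(scale=5.0, size=(n_timepoints, n_voxels))

X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, test_size=0.2, random_state=0)

model = Ridge(alpha=100.0).fit(X_tr, Y_tr)   # one linear map per voxel
pred = model.predict(X_te)

# Per-voxel correlation between predicted and measured responses,
# the usual figure of merit for encoding models.
corrs = [np.corrcoef(pred[:, v], Y_te[:, v])[0, 1] for v in range(n_voxels)]
print("median voxel correlation:", np.median(corrs))
```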

* * * * *

The original research:

Tang, J., LeBel, A., Jain, S. et al. Semantic reconstruction of continuous language from non-invasive brain recordings. Nat Neurosci (2023). https://doi.org/10.1038/s41593-023-01304-9

Abstract: A brain–computer interface that decodes continuous language from non-invasive recordings would have many scientific and practical applications. Currently, however, non-invasive language decoders can only identify stimuli from among a small set of words or phrases. Here we introduce a non-invasive decoder that reconstructs continuous language from cortical semantic representations recorded using functional magnetic resonance imaging (fMRI). Given novel brain recordings, this decoder generates intelligible word sequences that recover the meaning of perceived speech, imagined speech and even silent videos, demonstrating that a single decoder can be applied to a range of tasks. We tested the decoder across cortex and found that continuous language can be separately decoded from multiple regions. As brain–computer interfaces should respect mental privacy, we tested whether successful decoding requires subject cooperation and found that subject cooperation is required both to train and to apply the decoder. Our findings demonstrate the viability of non-invasive language brain–computer interfaces.
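
Reading between the lines of the abstract, the decoder seems to work by generation-and-test: a language model proposes candidate word sequences, an encoding model predicts the brain response each candidate would evoke, and the candidates whose predictions best match the recording are kept, beam-search style. Here's a toy sketch of that loop; lm_propose() and encode() are stand-ins I made up, not the paper's models.

```python
# Rough sketch of decode-by-generation: an LM proposes continuations,
# an encoding model predicts the fMRI response each candidate would
# evoke, and candidates are ranked by match to the actual recording.

import numpy as np

rng = np.random.default_rng(0)
VOCAB = ["the", "man", "walked", "ran", "home", "away", "slowly", "fast"]

def lm_propose(prefix, n=4):
    """Stand-in for a language model: propose n plausible next words."""
    return list(rng.choice(VOCAB, size=n, replace=False))

def encode(words, n_voxels=50):
    """Stand-in for an encoding model: predicted response to a word sequence."""
    seed = abs(hash(" ".join(words))) % (2**32)
    return np.random.default_rng(seed).normal(size=n_voxels)

def decode(recording, n_words=5, beam_width=3):
    beams = [([], 0.0)]
    for _ in range(n_words):
        candidates = []
        for words, _ in beams:
            for w in lm_propose(words):
                seq = words + [w]
                err = np.linalg.norm(encode(seq) - recording)  # match to brain data
                candidates.append((seq, err))
        beams = sorted(candidates, key=lambda c: c[1])[:beam_width]  # keep best
    return beams[0][0]

recording = rng.normal(size=50)    # stand-in for an fMRI recording
print(" ".join(decode(recording)))
```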

* * * * * 

Related posts.