
Are we watching the end of the 40-hour work week? [progress]

Alyson Krueger, Gen Z Knows What It Wants From Employers. And Employers Want Them. NYTimes, July 31, 2022.

In the past year, Legoland New York has joined a growing number of companies that are working to create an environment that is attractive and stimulating to younger employees and that embraces who they are and where they hope to go. By recruiting Generation Z workers — born in the late 1990s and early 2000s — the employers aim both to tap their energy and creativity and offset an acute labor shortage, with some 11 million unfilled jobs in May, according to the Bureau of Labor Statistics.

Last fall, Legoland began to allow employees like Ms. Ross to have piercings, tattoos and colored hair. A national hospitality company has begun to experiment with a four-day workweek. The health care company GoodRx is permitting employees to work not just from home but from anywhere in the country, enlisting an outside company to provide ad hoc offices upon request. Other companies are carefully laying out career paths for their employees, and offering extensive mental health benefits and financial advice.

The goal is not only to get younger employees through the door but also to keep them in their jobs, not an easy feat. Surveys show that younger workers are comfortable switching jobs more frequently than other generations. But, with these efforts, many companies have so far avoided the labor shortages afflicting their competitors.

Attitudes about work are changing:

According to Roberta Katz, an anthropologist at Stanford who studies Generation Z, younger people and previous generations view the workplace fundamentally differently.

“American Gen Zers, for the most part, have only known an internet-connected world,” Dr. Katz wrote in an email. In part because they grew up using collaborative platforms like Wikipedia and GoFundMe, she said, younger employees came to view work “as something that was no longer a 9-to-5-in-the-office-or-schoolroom obligation.”

Andrew Barrett-Weiss, the workplace experience director of GoodRx, which provides discounts for prescriptions, said giving employees that kind of autonomy and flexibility had helped the company close more than one deal. GoodRx offers employees the opportunity not only to be fully remote but also to have a desk wherever they want to travel in the United States.

There's more at the link.

Other posts on the changing work-week:

Ellen & Yoyoka Q&A: “I usually just start to feel the music and go crazy”

Yoyoka is a Japanese girl (12 years old, I believe) who plays the drums and has become fairly well known on social media. Ellen is 10 and plays bass. Ellen's father, Hovak, is asking the questions. This exchange happens at about 22:17.

Hovak: How did you develop your feel for different styles of music and precise timing?

Ellen: I don't know why but for funk I had bang, I had banged like rock for some reason so and I kind of feel the beat no matter what. Like I just automatically start going like this [moves body, Yoyoka, too]. For rock, for rock I usually just, instead of having like you know automatic like you know b' or anything, I usually just start to feel the music and go crazy, unless, unless, you know, you're not at the place for that, but and usually for pop or soul music it's gonna' get stuck in my head and I'm gonna' sing, sing it whenever I feel like singing. Basically that's the whole, that's I'm just I'm just basically saying the main idea of the feel that I get. And there's a lot of music genres that would be very, in that case this question would be a very long question so OK.

Yoyoka [on-screen translation]: So the first thing she said was uh so, whatever music regardless of the you know category of music, if I like it, I just enjoy it, feel it, and then I can't you know play, I can you know play with it naturally. I just happens. The second thing, for example funk music, it's very unique, so it's very upbeat, so I have the intention in my mind like you know um uh so just move like the body movements like up and down. [they all move their heads up and down and chuckle]

Saturday, July 30, 2022

This video of moving about in the metaverse is not very compelling

The metaverse segment starts at about 4:08. Of course, we've only got a 2-D view of the 3-D space that Zuckerberg and deGrasse Tyson are moving in (via Oculus headsets).

Hazing may not increase social solidarity

Aldo Cimino, Benjamin J. Thomas, Does hazing actually increase group solidarity? Re-examining a classic theory with a modern fraternity, Evolution and Human Behavior, 2022, ISSN 1090-5138, https://doi.org/10.1016/j.evolhumbehav.2022.07.001.

Abstract: Anthropologists and other social scientists have long suggested that severe initiations (hazing) increase group solidarity. Because hazing groups tend to be highly secretive, direct and on-site tests of this hypothesis in the real world are nearly non-existent. Using an American social fraternity, we report a longitudinal test of the relationship between hazing severity and group solidarity. We tracked six sets of fraternity inductees as they underwent the fraternity's months-long induction process. Our results provide little support for common models of solidarity and suggest that hazing may not be the social glue it has long been assumed to be.

Keywords: Hazing; Newcomers; Rites of passage; Fraternities

Beethoven was a supremely gifted improvisor – From the testimony of his contemporaries [the past isn't what you thought it was]

My own encounter with improvisation was indirect, fitful, and almost accidental. I started taking trumpet lessons in the fourth grade. Initially I was in a group lesson with two clarinetists, my friends Billy Cover and Jackie Barto. We were grouped together because each of us played an instrument pitched in B-flat. I remember that before long I was starting to fall behind the other two.

Then I started taking individual lessons. My teacher was blind. He’d come to the house once a week. That went on, I don’t exactly recall, for a year, when I switched to Mr. Dysert. I stayed with him until my senior year in high school. Somewhere along the line I started making up my own tunes. Thus when I joined the marching band, I made up march tunes. I’m pretty sure I didn’t think of this as improvisation.

I know, I know, hold your jets. I’m getting to Beethoven.

I suppose I learned the term “improvise” when I became interested in jazz at about the same time. Improvisation is what jazz musicians did. What I did, read music, was right HERE. What they did, improvise, was over THERE, in alien territory. What I was doing when I was making up those march tunes, that didn’t fit into this scheme at all. It’s just something I did. I also made up bluesy lines and tunes, like what I heard on some of my jazz records. But NO, I wasn’t improvising, because, remember? improvising is over THERE.

As far as I can recall the first time I was both improvising and (subsequently) thought of it as improvising was in a rehearsal of a rock and roll band I’d just joined, The Saint Matthew Passion. We were rehearsing “For What It’s Worth” and were riffing at the end. I decided to add some more elaborate riffs. Once I started the band liked it. From that moment on I had a solo at the end of that tune. I took solos in other tunes as well. From that time on I was an improvisor.

The point is simple. I lived in a culture where regular music was music you read from written notes. Improvisation was this mysterious process done by these other people. It was also special, mysterious, and OVER THERE.

* * * * *

That’s not how it was in Beethoven’s musical culture. He lived in a milieu where advanced keyboard players were expected to improvise, where, indeed, the last slot on a concert program was often left open for improvisation, which was expected to be the best music on the program. It’s only recently – yesterday, if you must know – that I learned this. Oh, I’ve known for a long time that Beethoven was an improvisor, like Mozart and Bach before him. But I hadn’t known that improvisation was so highly regarded. Given that, one would like to know: why did they stop improvising? I’m reading about that and want to read some more before I blog about it.

Right now I want to tell you about what Beethoven’s contemporaries thought about his improvising. I’ve taken these examples from O. G. Sonneck, Beethoven: Impressions by his Contemporaries (1926, 1954).

Mozart, 1787

Mozart was a renowned improvisor and toured Europe as a six-year-old, dazzling the powdered-wig set.

Beethoven, who as a youth of great promise came to Vienna in 1786 [really in 1787], but was obliged to return to Bonn after a brief sojourn, was taken to Mozart and at that musician’s request played something for him which he, taking it for granted that it was a show-piece prepared for the occasion, praised in a rather cool manner. Beethoven observing this, begged Mozart to give him a theme for improvisation. He always played admirably when excited and now he was inspired, too, by the presence of the master whom he reverenced greatly; he played in such a style that Mozart, whose attention and interest grew more and more, finally went silently to some friends who were sitting in an adjoining room, and said, vivaciously, “Keep your eyes on him; some day he will give the world something to talk about.”

Johann Schenk (1792)

He was Beethoven’s teacher in counterpoint.

In 1792, His Royal Highness Archduke Maximilian, Elector of Cologne, was pleased to send his charge Louis van Beethoven to Vienna to study musical composition with Haydn. Towards the end of July, Abbé Gelinek informed me that he had made the acquaintance of a young man who displayed extraordinary virtuosity on the pianoforte, such, indeed, as he had not observed since Mozart. [...]

Thus I saw the composer, now so famous, for the first time and heard him play. After the customary courtesies he offered to improvise on the pianoforte. He asked me to sit beside him. Having struck a few chords and tossed off a few figures as if they were of no significance, the creative genius gradually unveiled his profound psychological pictures. My ear was continually charmed by the beauty of the many and varied motives which he wove with wonderful clarity and loveliness into each other, and I surrendered my heart to the impressions made upon it while he gave himself wholly up to his creative imagination, and anon, leaving the field of mere tonal charm, boldly stormed the most distant keys in order to give expression to violent passions....

Johann Wenzel Tomaschek (1798)

A Bohemian organist, teacher and composer.

In the year 1798, in which I continued my juridical studies, Beethoven, the giant among pianoforte players, came to Prague. He gave a largely attended concert in the Konviktssaal, at which he played his Concerto in C major, Op. 15, and the Adagio and graceful Rondo in A major from Op. 2, and concluded with an improvisation on a theme given him by Countess Sch... [Schlick?], “Ah tu fosti il primo oggetto,” from Mozart’s “Titus” (duet No. 7). Beethoven’s magnificent playing and particularly the daring flights in his improvisation stirred me strangely to the depths of my soul; indeed I found myself so profoundly bowed down that I did not touch my pianoforte for several days....

I heard Beethoven at his second concert, which neither in performance nor in composition renewed again the first powerful impression. This time he played the Concerto in B-flat which he had just composed in Prague. Then I heard him a third time at the home of Count C., where he played, besides the graceful Rondo from the A major Sonata, an improvisation on the theme: “Ah! vous dirai-je, Maman.”* This time I listened to Beethoven’s artistic work with more composure. I admired his powerful and brilliant playing, but his frequent daring deviations from one motive to another, whereby the organic connection, the gradual development of idea was put aside, did not escape me.

Evils of this nature frequently weaken his greatest compositions, those which sprang from a too exuberant conception. It is not seldom that the unbiased listener is rudely awakened from his transport. The singular and original seemed to be his chief aim in composition, as is confirmed by the answer which he made to a lady who asked him if he often attended Mozart’s operas. “I do not know them,” he replied, “and do not care to hear the music of others lest I forfeit some of my originality.”

*Known in English as “Twinkle, Twinkle, Little Star.” Mozart composed a very famous set of variations on this unassuming little tune.

Friday, July 29, 2022

Playing for Peace: Reclaiming Our Human Nature

Charlie Keil and I have a new book out (title above). It’s the third book in our series, Local Paths to Peace Today. The first two are: We Need a Department of Peace: Everybody's Business, Nobody's Job and Thomas Naylor’s Paths to Peace: Small is Necessary. Steve Feld thinks it’s the greatest thing since sliced bread:

Who says a peace manifesto can’t be deep fun? The wisdom of collaborative practice rings bells on each page here, inviting us to dance in the streets of a world still within reach. Get with the beat of this drum!

You can buy it here:

Amazon: https://tinyurl.com/4p67bahh
Barnes & Noble: https://tinyurl.com/5vjf3kjt
Google Play: https://tinyurl.com/39rz43vp
Kobo eReader: https://tinyurl.com/5bmkjsme

Contents

Thriving and Jiving Among Friends and Family: The Place of this Volume in the Peace Series
Common Glad Impulse
What’s the Point If We Can’t Have Fun?
Paideia Con Salsa
Dance to the Music: The Kids Owned the Day
Jamming for Peace
The Hungry March Band helps Hoboken Celebrate Public Control of Its Waterfront
Peace & Joy Unlimited: The Festive in Everyday Life
Global Green Basics
Reclaiming our species being: Humo Ludens collaborans
Appendix A: Greening the Population Issue
Appendix B: Organizations Charlie’s supported
Appendix C: A trifecta from Charlie Keil on the need for a Global Organization of Democracies

Brief synopsis of each chapter

Thriving and Jiving Among Friends and Family: The Place of this Volume in the Peace Series – In 1930 John Maynard Keynes famously predicted that we were heading to a world in which people only worked 15 hours a week. That hasn’t happened. Why not? Perhaps it’s because we (adults) have become so used to work, even at meaningless jobs, that we don’t know what else to do with our time. We need to become more involved in active music-making.

Common Glad Impulse – William Henry Hudson is perhaps best known for his 1904 novel, Green Mansions, which was made into the 1959 film starring Audrey Hepburn and Anthony Perkins. The novel was drenched in love for the natural world, which Hudson cultivated in his career as a naturalist. This section contains a passage from The Naturalist in La Plata, in which he asserts that “birds are more subject to this universal joyous instinct than mammals,” an instinct he termed “the common glad impulse.” That common glad impulse is at the heart of this book.

What’s the Point If We Can’t Have Fun? – The late David Graeber argues that fun IS at the heart of animal life, human as well. He asks: “Why does the existence of action carried out for the sheer pleasure of acting, the exertion of powers for the sheer pleasure of exerting them, strike us as mysterious?” He argues that we’ve got it all wrong, from the ground up. Economics – survival of the fit and all that – though important, fails to explain life. Why not reground the whole story on pleasure and play, in freedom? Graeber outlines how we can get started: “We can understand the happiness of fishes—or ants, or inchworms—because what drives us to think and argue about such matters is, ultimately, exactly the same thing.”

Paideia Con Salsa – Charlie Keil proposes that people need (at least) three layers of cultural awareness, practice, and loyalty: 1) local, the most intense and richly developed, 2) regional, and 3) a thin layer of planetary culture “so that regions or the peoples within regions do not drift back into aggressing, aggrandizing, state-building, and empire expanding.” The local level could be organized around an Afro-Latin music-dance curriculum, a “paideia con salsa,” that could be seeded at the start and then allowed to grow and flourish as it will. To that end he proposes a pilot project oriented toward the “three Ms” – music, motion, morality – that were the foundation of Ancient Greece’s golden age. A curriculum would be developed and deployed in several local schools and the results studied, refined, and further disseminated.

Dance to the Music: the Kids Owned the Day – Bill Benzon: You know about competitive dance, right? Dance studios compete locally and regionally for prizes, trophies and glory. It’s late afternoon at one such competition in suburban New Jersey. The competition is over, the kids are just hanging out, and then suddenly and spontaneously they begin to dance to hip-hop booming on the sound system, kids, tweens, and teens, all of them, in the aisle, between rows of seats, 100, 200 of them. That’s the world of competitive dance at its best.

Jamming for Peace – Bill Benzon: There was a big peace demonstration in New York City on March 22, 2013. I was there with Charlie Keil; he had a cornet, I had my trumpet. Thousands and thousands of people marching down Broadway. Lots of musicians, mostly drummers, but other horn players as well. We moved from place to place hooking up with various musicians. Sometimes spontaneous magic broke out and a thirty yard swath of people became one in the beat. We ended on “All You Need Is Love.”

Peace & Joy Unlimited: The Festive in Everyday Life – Charlie Keil argues for guiding and coaching children in singing and dancing as early in life as possible, as often as possible. They need to develop their “festive skills” – mocking the dark side, identifying with spirits, birds, animals and plants, drumming, singing, dancing the seasons and Nature's cycles, horn tooting, honk festing, joke telling, miming, rhyming, funny walking, looking ashamed, goofy talking. “They don't have to be taught. They just have to be done, for fun, to enhance primary communication in daily life.” Suggestions for things you can do at home, now.

Global Green Basics – Angeliki V. Keil lays out the core of a “green”, that is, sustainable, communally responsible, and joyous approach to living with one another, with other species, plant, animal, microbial, and with the Earth. Establish living areas for all, encourage local self-sufficiency, foster democracy at all levels of organization, renounce governmental debt, renounce war, foster the survival of all cultures, East and West, North and South, halt pollution, renounce nuclear technology, encourage scientific inquiry, revamp science education, and facilitate the free flow of individuals and information. Start now. “We only need to commit our hopes, faiths and love, especially in the form of the actual sharing of resources, to help us over the ensuing dislocations.”

Reclaiming our true species being: Humo Ludens collaborans – Food for Thought – Charlie Keil has created an exploratory listing of thoughts, directions, provocations, wishes, what have you, pointing toward a re-creation of our local social being in terms of our most basic species being: Humo Ludens collaborans.

Appendix A: Greening the Population Issue – Linda Cree lays out the basic rationale for reducing the population as a necessary step to long-term sustainability.

Appendix B: Organizations Charlie’s supported – With links to where you can find them on the web.

Appendix C: Toward the GOOD, a Declaration of Interdependence and An Appeal

Thursday, July 28, 2022

Toward a Standard Model of Mind? – Automatic & Deliberate Learning

Two papers

Kotseruba, I., Tsotsos, J.K. 40 years of cognitive architectures: core cognitive abilities and practical applications. Artif Intell Rev 53, 17–94 (2020). https://doi.org/10.1007/s10462-018-9646-y

Abstract: In this paper we present a broad overview of the last 40 years of research on cognitive architectures. To date, the number of existing architectures has reached several hundred, but most of the existing surveys do not reflect this growth and instead focus on a handful of well-established architectures. In this survey we aim to provide a more inclusive and high-level overview of the research on cognitive architectures. Our final set of 84 architectures includes 49 that are still actively developed, and borrow from a diverse set of disciplines, spanning areas from psychoanalysis to neuroscience. To keep the length of this paper within reasonable limits we discuss only the core cognitive abilities, such as perception, attention mechanisms, action selection, memory, learning, reasoning and metareasoning. In order to assess the breadth of practical applications of cognitive architectures we present information on over 900 practical projects implemented using the cognitive architectures in our list. We use various visualization techniques to highlight the overall trends in the development of the field. In addition to summarizing the current state-of-the-art in the cognitive architecture research, this survey describes a variety of methods and ideas that have been tried and their relative success in modeling human cognitive abilities, as well as which aspects of cognitive behavior need more research with respect to their mechanistic counterparts and thus can further inform how cognitive science might progress.

Laird, J. E., Lebiere, C., & Rosenbloom, P. S. (2017). A Standard Model of the Mind: Toward a Common Computational Framework across Artificial Intelligence, Cognitive Science, Neuroscience, and Robotics. AI Magazine, 38(4), 13-26. https://doi.org/10.1609/aimag.v38i4.2744

A standard model captures a community consensus over a coherent region of science, serving as a cumulative reference point for the field that can provide guidance for both research and applications, while also focusing efforts to extend or revise it. Here we propose developing such a model for humanlike minds, computational entities whose structures and processes are substantially similar to those found in human cognition. Our hypothesis is that cognitive architectures provide the appropriate computational abstraction for defining a standard model, although the standard model is not itself such an architecture. The proposed standard model began as an initial consensus at the 2013 AAAI Fall Symposium on Integrated Cognition, but is extended here through a synthesis across three existing cognitive architectures: ACT-R, Sigma, and Soar. The resulting standard model spans key aspects of structure and processing, memory and content, learning, and perception and motor, and highlights loci of architectural agreement as well as disagreement with the consensus while identifying potential areas of remaining incompleteness. The hope is that this work will provide an important step toward engaging the broader community in further development of the standard model of the mind.

Toward a Common Model

I found out about those articles in a recent article published by Rosenbloom, Lebiere, and Laird (the authors of the second article), Cross-pollination among neuroscience, psychology and AI research yields a foundational understanding of thinking, The Conversation, July 25, 2022. I skimmed until I came to these paragraphs:

This Common Model of Cognition divides humanlike thought into multiple modules, with a short-term memory module at the center of the model. The other modules – perception, action, skills and knowledge – interact through it.

Learning, rather than occurring intentionally, happens automatically as a side effect of processing. In other words, you don’t decide what is stored in long-term memory. Instead, the architecture determines what is learned based on whatever you do think about. This can yield learning of new facts you are exposed to or new skills that you attempt. It can also yield refinements to existing facts and skills.

The modules themselves operate in parallel; for example, allowing you to remember something while listening and looking around your environment. Each module’s computations are massively parallel, meaning many small computational steps happening at the same time. For example, in retrieving a relevant fact from a vast trove of prior experiences, the long-term memory module can determine the relevance of all known facts simultaneously, in a single step.

It’s that middle paragraph that caught my attention. Why? Because it is and isn’t true. Sure, a lot of learning is a side effect, as they say. Speaking is perhaps the classic example. But it is also the case that we do devote enormous effort to deliberate learning. That’s what happens in school. Just why they gloss over it is a mystery. However...

Automatic vs. deliberate learning

This speaks to the issues I raised in my recent post, Physical constraints on computing, process and memory, Part 1 [LeCun], where I was concerned with the distinction that Jerry Fodor and Zenon Pylyshyn made between “classical” theories of cognition, where there is an explicit distinction between memory and program, and connectionist accounts, where memory and program are interwoven in one structure. Classical systems can easily acquire new knowledge by adding more memory; the structure of the program is unaffected. Connectionist systems are not like that.

To a first approximation the human nervous system seems to be a connectionist system. Each neuron seems to be both an active unit and a memory unit. There is no obvious division between a central processor, where all the programming resides, and a passive memory store. And yet, we learn, all the time we learn. How is that possible?

In that post I cited research by Walter Freeman on the sense of smell. It seems that when a new odorant is learned, the entire ‘landscape’ of odorant memory is changed. That is, not only is a new item added to the landscape, but the response patterns of existing items are changed. That’s what we would expect in a connectionist model. Just how the brain does this is obscure, though I offered an off-the-cuff speculation.
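For a toy illustration of why this happens in a connectionist store, here is a minimal sketch (my own, not Freeman's model) using a simple linear associator: every association is superimposed on one shared weight matrix, so learning a new item necessarily shifts the responses to items that are already stored.

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 50

def store(W, key, value):
    """Superimpose a key->value association on the shared weight matrix."""
    return W + np.outer(value, key)

def recall(W, key):
    """Recall is a matrix-vector product through the same shared weights."""
    return W @ key

# Three already-learned 'odorants' and their response patterns (random stand-ins).
keys = [rng.standard_normal(dim) for _ in range(3)]
values = [rng.standard_normal(dim) for _ in range(3)]

W = np.zeros((dim, dim))
for k, v in zip(keys, values):
    W = store(W, k, v)

before = recall(W, keys[0])

# Learn one new odorant: no new units are added, only the shared weights change.
W = store(W, rng.standard_normal(dim), rng.standard_normal(dim))
after = recall(W, keys[0])

# The response to an OLD item has shifted, because memory and 'program' share one structure.
print("shift in response to an existing item:", np.linalg.norm(after - before))
```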

Anyhow, let’s say that what Freeman was observing was the automatic memory that happens in the course of ordinary processing. Let us say that automatic memory is consonant with those ordinary processes. Deliberate memory is necessary to learn things that are dissonant with those processes. Let’s leave those two terms, consonant and dissonant, undefined beyond their contrastive use. We – me or someone else – can worry about a more thorough characterization later.

Deliberate learning: arithmetic, the method of loci

As an example of deliberate learning, consider arithmetic. It begins with learning the meaning of number names by enumerating collections of objects and then by learning the tables for addition, subtraction, multiplication, and division. This process requires considerable drill. Let’s hypothesize that that is necessary to overcome the inertia, the viscosity – to use a term I introduced in that earlier post – of the automatic process.

As a result of this drill, a foundation is laid on which one can then learn how to do more complex calculations. Considerable drill is required to become fluent in that process. But we’ve got three kinds of drill going on.

1. Meaning of number words: this is an episodic procedure that establishes the meaning of a small number of words. To determine whether any of the words applies to a collection of objects, execute the procedure.

2. Learning the arithmetic tables: this is straight memorization of system items, each having the form: numeral, operation, numeral, equals, numeral.

3. Learning multiple-digit calculation: this is an episodic-level set of procedures in which one calls up the items in the arithmetic tables and applies them in succession to pairs and n-tuples of multiple-digit numbers.

The episodic procedures, 1 and 3, are dissonant with respect to ordinary episodic processes, such as moving about the physical world, while the system procedures, 2, are dissonant with respect to the ordinary processes of learning the meanings of words.
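A crude way to see the difference between drill 2 and drill 3 is in code. In this minimal sketch (illustrative only, not a claim about how the mind actually stores any of this), the memorized table is a bare lookup store, while multi-digit calculation is a procedure that repeatedly consults that store:

```python
# Drill type 2: the memorized single-digit addition table, a pure lookup store.
ADD_TABLE = {(a, b): a + b for a in range(10) for b in range(10)}

# Drill type 3: a procedure that walks the digits, consulting the table at each step.
def add_multidigit(x: str, y: str) -> str:
    x, y = x.zfill(len(y)), y.zfill(len(x))   # pad the shorter number with zeros
    carry, digits = 0, []
    for a, b in zip(reversed(x), reversed(y)):
        s = ADD_TABLE[(int(a), int(b))] + carry
        digits.append(str(s % 10))
        carry = s // 10
    if carry:
        digits.append(str(carry))
    return "".join(reversed(digits))

print(add_multidigit("478", "367"))  # 845
```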

As another example, consider the method of loci, sometimes known as the memory palace. Here’s the account I gave in my working paper on Visual Thinking:

The locus classicus for any discussion of visual thinking is the method of loci, a technique for aiding memory invented by Greek rhetoricians and which, over a course of centuries, served as the starting point for a great deal of speculation and practical elaboration — an intellectual tradition which has been admirably examined by Frances Yates. The idea is simple. Choose some fairly elaborate building, a temple was usually suggested, and walk through it several times along a set path, memorizing what you see at various fixed points on the path. These points are the loci which are the key to the method. Once you have this path firmly in mind so that you can call it up at will, you are ready to use it as a memory aid. If, for example, you want to deliver a speech from memory, you conduct an imaginary walk through your temple. At the first locus you create a vivid image which is related to the first point in your speech and then you “store” that image at the locus. You repeat the process for each successive point in the speech until all of the points have been stored away in the loci on the path through the temple. Then, when you give your speech you simply start off on the imaginary path, retrieving your ideas from each locus in turn. The technique could also be used for memorizing a speech word-for-word. In this case, instead of storing ideas at loci, one stored individual words.

The process starts with choosing a suitable building and memorizing it. That’s deliberate learning. Think of it as analogous to the three kinds of drill involved in learning arithmetic calculation.

Actually using the memorized building for a specific task, that is deliberate learning as well. Here the deliberation is confined to associating items to be learned with positions in the palace. One is learning a collection of system links. The idea that one is to use vivid images no doubt reflects the inherent nature of the nervous system; it is an exhortation to consonance.
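Treated purely as a data structure, the method of loci is an ordered sequence of places with items bound to them, and retrieval is a walk along the fixed path. A minimal sketch, with the palace and the speech points invented for illustration:

```python
# The memorized path through the 'temple' -- learned once, by deliberate drill.
LOCI = ["entrance", "altar", "left colonnade", "statue", "rear door"]

def store_speech(points):
    """Bind each point of the speech to the next locus on the fixed path."""
    return dict(zip(LOCI, points))

def deliver(palace):
    """Walk the path in order, retrieving whatever was stored at each locus."""
    return [palace[locus] for locus in LOCI if locus in palace]

palace = store_speech(["greet the senate", "praise the ancestors", "state the proposal"])
for point in deliver(palace):
    print(point)
```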

More later.

The machine majority language problem [entropic text]

The linked article: Benjamin Bratton and Blaise Agüera y Arcas, The Model Is The Message, Noema, July 12, 2022.

The idea of synthetic language:

At what point is calling synthetic language “language” accurate, as opposed to metaphorical? Is it anthropomorphism to call what a light sensor does machine “vision,” or should the definition of vision include all photoreceptive responses, even photosynthesis? Various answers are found both in the histories of the philosophy of AI and in how real people make sense of technologies.

Synthetic language might be understood as a specific kind of synthetic media. This also includes synthetic image, video, sound and personas, as well as machine perception and robotic control. Generalist models, such as DeepMind’s Gato, can take input from one modality and apply it to another — learning the meaning of a written instruction, for example, and applying this to how a robot might act on what it sees.

This is likely similar to how humans do it, but also very different. For now, we can observe that people and machines know and use language in different ways. Children develop competency in language by learning how to use words and sentences to navigate their physical and social environment. For synthetic language, which is learned through the computational processing of massive amounts of data at once, the language model essentially is the competency, but it is uncertain what kind of comprehension is at work. AI researchers and philosophers alike express a wide range of views on this subject — there may be no real comprehension, or some, or a lot. Different conclusions may depend less on what is happening in the code than on how one comprehends “comprehension.”

Does this kind of “language” correspond to traditional definitions, from Heidegger to Chomsky? [...]

There are already many kinds of languages. There are internal languages that may be unrelated to external communication. There are bird songs, musical scores and mathematical notation, none of which have the same kinds of correspondences to real world referents. Crucially, software itself is a kind of language, though it was only referred to as such when human-friendly programming languages emerged, requiring translation into machine code through compilation or interpretation.

As Friedrich Kittler and others observed, code is a kind of language that is executable. It is a kind of language that is also a technology, and a kind of technology that is also a language. In this sense, linguistic “function” refers not only to symbol manipulation competency, but also to the real-world functions and effects of executed code. For LLMs in the world, the boundary between symbolic function competency, “comprehension,” and physical functional effects are mixed-up and connected — not equivalent but not really extricable either.

What happens in a world where the quantity of text generated by LLMs exceeds that generated by humans by a wide margin?

Imagine that there is not simply one big AI in the cloud but billions of little AIs in chips spread throughout the city and the world — separate, heterogenous, but still capable of collective or federated learning. They are more like an ecology than a Skynet. What happens when the number of AI-powered things that speak human-based language outnumbers actual humans? What if that ratio is not just twice as many embedded machines communicating human language than humans, but 10:1? 100:1? 100,000:1? We call this the Machine Majority Language Problem.

On the one hand, just as the long-term population explosion of humans and the scale of our collective intelligence has led to exponential innovation, would a similar innovation scaling effect take place with AIs, and/or with AIs and humans amalgamated? Even if so, the effects might be mixed. Success might be a different kind of failure. More troublingly, as that ratio increases, it is likely that any ability of people to use such cognitive infrastructures to deliberately compose the world may be diminished as human languages evolve semi-autonomously of humans.

Nested within this is the Ouroboros Language Problem. What happens when language models are so pervasive that subsequent models are trained on language data that was largely produced by other models’ previous outputs? The snake eats its own tail, and a self-collapsing feedback effect ensues.

The resulting models may be narrow, entropic or homogeneous; biases may become progressively amplified; or the outcome may be something altogether harder to anticipate. What to do? Is it possible to simply tag synthetic outputs so that they can be excluded from future model training, or at least differentiated? Might it become necessary, conversely, to tag human-produced language as a special case, in the same spirit that cryptographic watermarking has been proposed for proving that genuine photos and videos are not deepfakes? Will it remain possible to cleanly differentiate synthetic from human-generated media at all, given their likely hybridity in the future?
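As a toy version of the Ouroboros feedback effect (my own sketch, not the authors'), repeatedly fit a simple model to data, replace the data with the model's own outputs, and refit: the estimate no longer answers to anything outside the loop, so sampling noise accumulates and the distribution drifts.

```python
import numpy as np

rng = np.random.default_rng(0)

# Generation 0: 'human' text stands in for samples from a fixed distribution.
data = rng.normal(loc=0.0, scale=1.0, size=200)

for generation in range(1, 11):
    mu, sigma = data.mean(), data.std()      # 'train' a model on the current corpus
    data = rng.normal(mu, sigma, size=200)   # next corpus = the model's own outputs
    print(f"gen {generation:2d}: mean={mu:+.3f}  std={sigma:.3f}")

# With each pass the corpus reflects only the previous model, so the estimated
# spread wanders (and, with smaller samples, collapses) -- the snake eats its tail.
```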

There’s much more at the link.

High school jazz [It wasn’t done]

While cruising the web yesterday I came across this video of an incredible high school jazz band. It’s not at all clear to me that there were any high school bands playing at this level back in the early and mid-1960s.

This video is from the Essentially Ellington festival and competition in 2018. Essentially Ellington was started in 1995 by Jazz at Lincoln Center and directed initially at New York City schools but quickly expanded nationally and to Canada.

Anyhow, I decided to send a link to Tyler Cowen, who has wide-ranging interests and is particularly interested in the cultivation of talent. Here’s a slightly revised version of that letter:

Tyler,

It is entirely plausible to me that when I was in high school in the early-to-mid 1960s there was no high school in the nation that could mount a jazz band like this. Back in those days they were called “stage bands,” or “dance bands,” presumably to avoid the word “jazz.” Nor would any band have had as many young women in it, much less the trumpet star (the Dizzy Gillespie role on Dizzy’s best-known tune). Maybe there was a band here and there, though I’d be surprised. In any case, there are a greater number of good high school bands now than there were then.

What’s happened between then and now? Lots. On the one hand, jazz is not as popular as it was then, and jazz was on the downslope in the 1960s. But it was easy to find jazz LPs in the record bins at the local discount department store, and not just ‘jazz lite’, but the real hard core stuff.

However, there’s more jazz in the colleges now than there was then, and I suspect that, in turn, the level of jazz sophistication among secondary school music teachers and band leaders is higher. To be sure, I’m led to believe that music and the arts in primary and secondary school have suffered in the last few decades. If so, what’s left seems to be more sophisticated about jazz. Moreover, there are more jazz-band competitions at the high school level now than there were then. As far as I know there weren’t any then. This is a yearly competition sponsored by Jazz at Lincoln Center. I don’t know what other competitions there are. But even if this is the only one, there wasn’t anything like this back in the 60s.

On net, then, jazz is less popular in the culture now than it was then, but a high level of jazz instruction exists for a relatively small stratum of schools that didn’t exist then. It is quite possible that today’s average is better than 1965’s average. [And I won’t go into the good high school jazz bands from Japan that post videos to YouTube.]

Now, about that young woman. I couldn’t play like that at her age. I had the technical skill and the fire, but not the experience with improvisation. By the time I’d reached my late 20s, however, I’d accumulated the experience – partially from playing in a jazz-rock band after graduating from college, and partially as a result of an improvisation workshop I audited in graduate school (taught by Frank Foster, who used to play with Basie, Sarah Vaughan, Elvin Jones, and who knows who else). Now, since I eventually reached that level, why’d it take me so long?

My primary music teacher during my high school years was a man named Dave Dysert. He was primarily a pianist and arranger, and primarily interested in jazz. He taught in one of the local high schools and had the kind of all-around musical training given to music-ed folks. When he found out that I was interested in jazz, we worked on it. I had a book of Louis Armstrong solo transcriptions and those were part of my lessons. He wrote out special exercises in swing interpretation. He urged me to take piano lessons (I was primarily a trumpet player) so I could pick up the basics of harmony. I did that for two years. If I was going to improvise, I’d need that kind of knowledge.

But do you know what we never did? Never once in a lesson did I improvise while he accompanied me on the piano. Why not? I’d been making up my own tunes since I was 10 years old. I’m sure I’d have taken to it like a duck to water, as the saying goes. I don’t really know why he didn’t do that. But my best guess is simple: It wasn’t done. It never occurred to me to ask him for that. Why not? I suppose because it wasn’t done. I was the student and he was the teacher.

It wasn’t done. How much talent has been thwarted, if not outright wasted, for such a stupid reason.

Somehow between then and now, a bunch of educators figured out: Hey! We can do this. And it’s changed secondary school music for at least some kids.

Best,

BB

I note in passing that I graduated from high school in June of 1965. Four years and two months later Bethel, New York, was home to the legendary Woodstock Music and Art Fair. I wasn’t there, but I certainly knew about it. And though I had turned on and tuned in, I never really dropped out. Could it be that the counter-culture that gave birth to Woodstock was, at least in part, a reaction against It-wasn’t-done?

The kids finally figured that, Hey, we can do this. But they had to drop out to do it, or at least look in another direction. And the music they chose was rock and roll, not jazz. Jazz had already had its age of rebellion, back in the 1920s and 1930s.

Now we have hip hop. And while there is such a thing as hip hop culture, well, it’s not like jazz was early in the century and rock and roll at mid-century. It’s a different and not necessarily better world. Though the kids do play better jazz.

* * * * *

BTW, that trumpet player is Summer Camargo. Here's her solo with a transcription superimposed.

Tuesday, July 26, 2022

Once more around the merry-go-round: Is the brain a computer?

I have argued I don’t know how many times that language is the simplest human activity that must be considered computational (e.g. in this comment on Yann LeCun’s latest proposal). That necessarily implies that, fundamentally, the brain is something else, but what?

What’s a computer?

Let’s step back a bit and consider an argument by John Searle. It may seem a bit strange, but bear with me.

In 1950, Alan Turing published an article in which he set out the Turing Test. The purpose of the test was to establish whether a computer had genuine intelligence: if an expert cannot distinguish between human intelligent performance and computer performance, then the computer has genuine human intelligence. It is important to note that Turing called his article “Computing Machinery and Intelligence.” In those days “computer” meant a person who computes. A computer was like a runner or a singer, someone who does the activity in question. The machines were not called “computers” but “computing machinery.”

The invention of machines that can do what human computers did has led to a change in the vocabulary. Most of us now think of “computer” as naming a type of machinery and not as a type of person. But it is important to see that in the literal, real, observer-independent sense in which humans compute, mechanical computers do not compute. They go through a set of transitions in electronic states that we can interpret computationally. The transitions in those electronic states are absolute or observer independent, but the computation is observer relative. The transitions in physical states are just electrical sequences unless some conscious agent can give them a computational interpretation.

This is an important point for understanding the significance of the computer revolution. When I, a human computer, add 2 + 2 to get 4, that computation is observer independent, intrinsic, original, and real. When my pocket calculator, a mechanical computer, does the same computation, the computation is observer relative, derivative, and dependent on human interpretation. There is no psychological reality at all to what is happening in the pocket calculator. [1]

He goes on in this vein for a bit and then arrives at this statement: “First, a digital computer is a syntactical machine. It manipulates symbols and does nothing else.” If you are familiar with his famous Chinese Room argument then you’ve heard this before. After a brief precis of that argument, which is of no particular interest here, Searle arrives at the point that interests me:

Except for the cases of computations carried out by conscious human beings, computation, as defined by Alan Turing and as implemented in actual pieces of machinery, is observer relative. The brute physical state transitions in a piece of electronic machinery are only computations relative to some actual or possible consciousness that can interpret the processes computationally. It is an epistemically objective fact that I am writing this in a Word program, but a Word program, though implemented electronically, is not an electrical phenomenon; it exists only relative to an observer.

Of course, he’s already said this before, but I repeat it because it’s a strange way of talking – at least I found it strange when I first read it – and so a bit of repetition is worthwhile.

Physically, a computer is just an extremely complex pile of electronic machinery. But we have designed it in such a way that the state transitions in its circuitry perform operations that we find useful. Most generally, we think of them as computation. The machinery is a computer because it has been designed to be one.

Is the brain a computer?

With Searle’s argument in mind we can now ask: Is the human brain, or any brain, a computer? Physically it is certainly very different from any electronic computer, but it does seem to be a complex meshwork that transmits many electrochemical signals in complex patterns. Are those signals performing calculations? Given Searle’s argument the answer to that question would seem to depend on just what the brain was designed to do. But, alas, the brain wasn’t designed in any ordinary sense of the word, though evolutionary biologists do sometimes talk about evolution as a process of design. But if so, it is design without a designer.

Given this, does it make sense for us to say that the brain IS a computer? I emphasize the “is” because, of course, we can simulate brains and parts of brains, but a simulation is one thing, and the thing being simulated is quite something else. The simulation of an atomic explosion is not the same as a real atomic explosion. Or, to switch terms, as Searle remarks, “Even with a perfect computer emulation of the stomach, you cannot then stuff a pizza into the computer and expect the computer to digest it.”

So, I’m not talking about whether or not we can produce a computer simulation of the brain. Of course we can. I’m talking about the brain itself. Is it a computer? Consider this passage:

This brings us to the question: what are the type of problems where generating a simulation is a more viable strategy than performing a detailed computation? And if so, what are the kind of simulators that might be relevant for consciousness? The answer to the first question has to do with the difference of say computing an explicit solution of a differential equation in order to determine the trajectory of a system in phase space versus mechanistically mimicking the given vector field of the equation within which an entity denoting the system is simply allowed to evolve thereby reconstructing its trajectory in phase space. The former involves explicit computational operations, whereas the latter simply mimics the dynamics of the system being simulated on a customized hardware. For complex problems involving a large number of variables and/or model uncertainly, the cost of inference by computation may scale very fast, whereas simulations generating outcomes of models or counterfactual models may be far more efficient. In fact, in control theory, the method of eigenvalue assignment is used precisely to implement the dynamics of a given system on a standardized hardware. [...] if the brain is indeed tasked with estimating the dynamics of a complex world filled with uncertainties, including hidden psychological states of other agents (for a game-theoretic discussion on this see [1–4,9]), then in order to act and achieve its goals, relying on pure computational inference would arguably be extremely costly and slow, whereas implementing simulations of world models as described above, on its cellular and molecular hardware would be a more viable alternative. These simulation engines are customized during the process of learning and development to acquire models of the world.[2]

That suggests that, no, the brain is not a computer, not if by that you mean that it is performing explicit numerical calculations. It isn’t performing calculations at all. It’s just passing signals between neurons, thousands and thousands of them each second. What those signals are doing, what they are achieving, depends on how the whole shebang is connected to the world in which the brain operates.
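To make the paper's computation-versus-simulation contrast concrete with a deliberately trivial example of my own: for dx/dt = -x you can either evaluate the explicit closed-form solution or simply let a small simulator follow the vector field step by step.

```python
import math

x0, t_end, dt = 1.0, 2.0, 0.01

# Explicit computation: evaluate the closed-form solution x(t) = x0 * exp(-t).
x_explicit = x0 * math.exp(-t_end)

# Simulation: mimic the vector field dx/dt = -x and let the state evolve.
x = x0
for _ in range(int(t_end / dt)):
    x += dt * (-x)          # Euler step: follow the local dynamics, no formula needed

print(x_explicit, x)        # both approach exp(-2), roughly 0.135
```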

Consider experiments on mental rotation [3]. A subject is presented with a pair of 2-D or 3-D images. In some cases the images depict the same object, but from different points of view; in other cases the images depict two different objects. The subject is asked whether or not the images depict the same object. To perform the task the subject has to mentally rotate one of the images until it matches the other. But a match will be achieved only if the images depict the same object. If no match can be achieved then the subject is looking at two different objects.

What researchers found is that the length of time required to reach a decision was proportional to the angle between the two views. The larger the angle, the longer it takes to make a decision. If the process were numerical there’s no reason to believe that the computation time would be proportional to the size of the angle being computed. That strongly suggests that the process is analog, not numerical. If the brain IS a computer, it’s not a digital computer.
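A back-of-the-envelope sketch of that contrast (purely illustrative, with a made-up step size): a matcher that rotates an image in small fixed increments takes time proportional to the angle, as in the experiments, while one that computes the required rotation directly takes the same time regardless of the angle.

```python
import math

STEP = 1.0  # degrees per 'mental' increment (an arbitrary choice)

def analog_match_steps(angle_between_views: float) -> int:
    """Rotate in small increments until the views line up: cost grows with the angle."""
    steps, rotated = 0, 0.0
    while rotated < angle_between_views:
        rotated += STEP
        steps += 1
    return steps

def digital_match_steps(angle_between_views: float) -> int:
    """Compute the required rotation in one shot: cost is independent of the angle."""
    _ = math.radians(angle_between_views)  # a single numerical operation
    return 1

for angle in (20, 60, 120, 180):
    print(angle, analog_match_steps(angle), digital_match_steps(angle))
```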

For various reasons – those experiments are only one of them – I have long been of the view that, at least at the sensorimotor level, the brain constructs quasi-analog models of the world and uses them in tracking the sensory field and in generating motor actions for operating in the world. These models are also called on in much of what is called common-sense knowledge, which proved to be very problematic for symbolic computation back in the world of GOFAI models in AI from the beginning up into the 1980s and which is proving somewhat problematic for current LLMs. In any given situation one simply calls up the necessary simulations and then generates whatever verbal commentary seems necessary or useful. GOFAI investigators were faced with the task of hand-coding a seemingly endless collection of propositions about common sense matters while LLMs are limited by the fact that they only have access to text, not the underlying simulations on which the text is based.

I arrived at this view partially on the basis of an elegant book from 1973, Behavior: The Control of Perception, by the late William Powers. As the title indicates, he developed a model of human behavior from classical control theory.

References

[1] John R. Searle, What Your Computer Can’t Know, The New York Review of Books, October 9, 2014, http://www.nybooks.com/articles/2014/10/09/what-your-computer-cant-know/

[2] Arsiwalla, X.D., Signorelli, C.M., Puigbo, JY., Freire, I.T., Verschure, P.F.M.J. Are Brains Computers, Emulators or Simulators? In V. Vouloutsi et al. (Eds.) Living Machines 2018. Lecture Notes in Computer Science, vol 10928. Springer, https://doi.org/10.1007/978-3-319-95972-6_3.

[3] Mental rotation, Wikipedia, https://en.wikipedia.org/wiki/Mental_rotation.

Monday, July 25, 2022

Visual adaptation to an inverted visual field [time-course of neural change]

This article speaks to the issue raised in my recent post, Physical constraints on computing, process and memory, Part 1 [LeCun], under the somewhat strange notion of the brain as a hyperviscous mesh, which is about timescales of stability in patterns of connectivity, where it is understood that 'information' is registered in those patterns.

Timothy P. Lillicrap, Pablo Moreno-Briseño, Rosalinda Diaz, Douglas B. Tweed, Nikolaus F. Troje, Juan Fernandez-Ruiz, Adapting to inversion of the visual field: a new twist on an old problem, Experimental Brain Research 228(3), May 2013, https://doi.org/10.1007/s00221-013-3565-6

Abstract:

While sensorimotor adaptation to prisms that displace the visual field takes minutes, adapting to an inversion of the visual field takes weeks. In spite of a long history of the study, the basis of this profound difference remains poorly understood. Here, we describe the computational issue that underpins this phenomenon and presents experiments designed to explore the mechanisms involved. We show that displacements can be mastered without altering the updated rule used to adjust the motor commands. In contrast, inversions flip the sign of crucial variables called sensitivity derivatives-variables that capture how changes in motor commands affect task error and therefore require an update of the feedback learning rule itself. Models of sensorimotor learning that assume internal estimates of these variables are known and fixed predicted that when the sign of a sensitivity derivative is flipped, adaptations should become increasingly counterproductive. In contrast, models that relearn these derivatives predict that performance should initially worsen, but then improve smoothly and remain stable once the estimate of the new sensitivity derivative has been corrected. Here, we evaluated these predictions by looking at human performance on a set of pointing tasks with vision perturbed by displacing and inverting prisms. Our experimental data corroborate the classic observation that subjects reduce their motor errors under inverted vision. Subjects' accuracy initially worsened and then improved. However, improvement was jagged rather than smooth and performance remained unstable even after 8 days of continually inverted vision, suggesting that subjects improve via an unknown mechanism, perhaps a combination of cognitive and implicit strategies. These results offer a new perspective on classic work with inverted vision.

From the article's introduction:

Visuomotor adaptation to perturbations that displace the visual field, for example, from left to right, is widely studied and well characterized (Harris 1965; Kohler 1963; Kornheiser 1976; Redding and Wallace 1990). Pointing, throwing, and reaching tasks have been used to assess adaptation, and in these tasks, human subjects adapt quickly and smoothly to displacements, typically within minutes (Fernandez-Ruiz et al. 2006; Kitazawa et al. 1997; Martin et al. 1996; Redding et al. 2005; Redding and Wallace 1990). When prisms are worn, which displace targets and responses to the right and thus initially produce a leftward error (Fig. 1b), subjects reduce their errors by correcting in the leftward direction on subsequent trials. Doing so, subjects make use of an implicit assumption about how error vectors ought to be used to update motor commands (Fig. 1a). The assumption, which holds in the case of displaced vision, is that the relationship between commands and errors (i.e., the sensitivity derivatives) has not been altered.

Comparatively, little is understood about visuomotor adaptation to inversions of the visual field—for example, a perturbation which flips the visual field from left to right about the midline (Fig. 1c). Studies have reported that, although subjects are initially severely impaired by inversions, they were eventually able to reacquire even complex sensorimotor skills, such as riding a bicycle (Harris 1965; Kohler 1963). However, most studies have been qualitative in nature (Rock 1966, 1973; Stratton 1896, 1897) or else have focused on perceptual rather than motor adaptations (Linden et al. 1999; Sekiyama et al. 2000). Thus, the reason for the profound difference in the time course of visuomotor adaptation, the manner in which adaptation unfolds, and the mechanisms involved are not well studied.

Gradient vs. cognitive processes:

Superficially, our experimental data agree with the class of gradient-based models which update their feedback learning rule. However, closer examination of our results suggests that adaptation to inversions involves a complex mixture of implicit (i.e., gradient or reinforcement learning) and explicit or “cognitive” processes (e.g., Mazzoni and Krakauer 2006), which is not well modeled by the existing theory.
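Here is a minimal sketch of the sensitivity-derivative point, using a made-up one-dimensional pointing task rather than anything from the paper: a learner that keeps its old assumption about how commands relate to errors gets steadily worse under inverted vision, while one that re-estimates the sign of that relationship recovers.

```python
def run(trials, relearn, lr=0.5):
    """Simulate pointing under left-right inverted vision (a made-up 1-D task)."""
    command, target = 0.0, 1.0
    assumed_sign = +1.0     # learner's belief about how the seen error maps onto corrections
    errors = []
    for _ in range(trials):
        seen_error = -(target - command)            # the prisms flip the sign of the visual error
        command += lr * assumed_sign * seen_error   # feedback update using the assumed sensitivity
        errors.append(abs(target - command))
        if relearn and len(errors) > 1 and errors[-1] > errors[-2]:
            assumed_sign *= -1.0                    # errors growing: re-estimate the derivative's sign
    return errors

print("fixed rule:    ", [round(e, 2) for e in run(8, relearn=False)])  # errors blow up
print("relearned rule:", [round(e, 2) for e in run(8, relearn=True)])   # errors recover
```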

Sunday, July 24, 2022

Physical constraints on computing, process and memory, Part 1 [LeCun]

Yann LeCun recently posted a major position paper that has been receiving quite a bit of discussion:

Yann LeCun, A Path Towards Autonomous Machine Intelligence, Version 0.9.2, 2022-06-27, https://openreview.net/forum?id=BZ5a1r-kVsf

This post is a response to a long video posted by Dr. Tim Scarfe which raised a number of important issues. One of them is about physical constraints in the implementation of computing procedures and memory. I’m thinking this may well be THE fundamental issue in computing, and hence in human psychology and AI.

I note in passing that John von Neumann’s The Computer and the Brain (1958) was about the same issue and discussed two implementation strategies, analog and digital. He suggested that the brain perhaps employs both. He also noted that, unlike digital computers, where an active computational unit is linked to passive memory through fetch-execute cycles, each unit of the brain, that is, each neuron, appears to be an active unit.

Physical constraints on computing

Here’s the video I was talking about. It is from the series Machine Learning Street Talk, #78 - Prof. NOAM CHOMSKY (Special Edition), and is hosted by Dr. Tim Scarfe along with Dr. Keith Duggar and Dr. Walid Saba.

As I’m sure you know, Chomsky has nothing good to say about machine learning. Scarfe is not so dismissive, but he does seem to be a hard-core symbolist. I’m interested in a specific bit of the conversation, starting at about 2:17:14. One of Scarfe’s colleagues, Dr. Keith Duggar, mentions a 1988 paper by Fodor and Pylyshyn, Connectionism and Cognitive Architecture: A Critical Analysis (PDF). I looked it up and found this paragraph (pp. 22-23):

Classical theories are able to accommodate these sorts of considerations because they assume architectures in which there is a functional distinction between memory and program. In a system such as a Turing machine, where the length of the tape is not fixed in advance, changes in the amount of available memory can be affected without changing the computational structure of the machine; viz by making more tape available. By contrast, in a finite state automaton or a Connectionist machine, adding to the memory (e.g. by adding units to a network) alters the connectivity relations among nodes and thus does affect the machine’s computational structure. Connectionist cognitive architectures cannot, by their very nature, support an expandable memory, so they cannot support productive cognitive capacities. The long and short is that if productivity arguments are sound, then they show that the architecture of the mind can’t be Connectionist. Connectionists have, by and large, acknowledged this; so they are forced to reject productivity arguments.

That’s what they were talking about. Duggar and Scarfe agree that this is a deep and fundamental issue. A certain kind of very useful abstraction seems to depend on separating the computational procedure from the memory on which it depends. Scarfe (2:18:40): “LeCun would say, well if you have to handcraft the abstractions then learning's gone out the window.” Duggar: “Once you take the algorithm and abstract it from memory, that's when you run into all these training problems.”
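To make the contrast concrete, here is a toy sketch of my own (not Fodor and Pylyshyn's formalism, and not anything from the video). In a classical architecture the program and the memory are functionally distinct, so memory can grow without touching the procedure; in a connectionist architecture the memory just is the connectivity, so storing more means changing the machine itself.

import numpy as np

# "Classical" architecture: procedure and memory are separate, so adding
# memory is just adding tape; the procedure never changes.
tape = [0] * 16
tape.extend([0] * 16)            # more memory, same machine

# "Connectionist" architecture: memory lives in the connectivity, so adding
# a unit rebuilds the weight matrix, i.e., alters the computational structure.
W = np.random.randn(8, 8)        # a fixed network of 8 units
W_bigger = np.zeros((9, 9))      # adding one unit changes the connectivity
W_bigger[:8, :8] = W             # old weights carried over...
# ...but the dynamics (e.g., x <- tanh(W_bigger @ x)) are now defined over a
# different state space, with new connections to and from the added unit.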

OK, fine.

But, as they are talking about a fundamental issue in physical implementation, it must apply to the nervous system as well. Fodor and Pylyshyn are talking about the nervous system too, but they don’t really address the problem except to assert that (p. 45), “the point is that the structure of ‘higher levels’ of a system are rarely isomorphic, or even similar, to the structure of ‘lower levels’ of a system,” and therefore the fact that the nervous system appears to be a connectionist network need not be taken as indicative about the nature of the processes it undertakes. That is true, but no one has, to my knowledge, provided strong evidence that this complex network of 86 billion neurons is, in fact, running a CPU-and-passive-memory type of system.

Given that, how has the nervous system solved the problem of adding new content to the system, which it certainly does? Note their specific phrasing, from the paragraph I’ve quoted: “adding to the memory (e.g. by adding units to a network) alters the connectivity relations among nodes and thus does affect the machine’s computational structure.” The nervous system seems to be able to add new items to memory without having to add new physical units, that is, neurons, to the network. That is worth thinking about.

Human cortical plasticity: Freeman

The late Walter Freeman has left us a clue. In an article from 1991 in Scientific American (which was more technical in those days), entitled “The Physiology of Perception,” he discusses his work on the olfactory cortex. He used an array of electrodes mounted on the cortical surface (of a rat) to register electrical activity. Note that he was NOT recording the activity of individual neurons; rather, he was recording activity in a population of neurons. He then made 2-D images of that activity.

The shapes we found represent chaotic attractors. Each attractor is the behavior the system settles into when it is held under the influence of a particular input, such as a familiar odorant. The images suggest that an act of perception consists of an explosive leap of the dynamic system from the "basin" of one chaotic attractor to another; the basin of an attractor is the set of initial conditions from which the system goes into a particular behavior. The bottom of a bowl would be a basin of attraction for a ball placed anywhere along the sides of the bowl. In our experiments, the basin for each attractor would be defined by the receptor neurons that were activated during training to form the nerve cell assembly.

We think the olfactory bulb and cortex maintain many chaotic attractors, one for each odorant an animal or human being can discriminate. Whenever an odorant becomes meaningful in some way, another attractor is added, and all the others undergo slight modification.

Let me repeat that last line: “Whenever an odorant becomes meaningful in some way, another attractor is added, and all the others undergo slight modification.” That the addition of a new item to memory should change the other items in memory is what we would expect of such a system. But how does the brain manage it? It would seem that specific memory items are encoded in whole populations, not in one or a small number of neurons (so-called ‘grandmother’ cells). Somehow the nervous system is able to make adjustments to some subset of synapses in the population without having to rewrite everything.
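A Hopfield-style network gives a crude picture of how that could work, though I offer it only as an illustration, not as a claim about what the olfactory bulb actually does. Patterns are stored as attractors in a fixed population of units; storing a new pattern nudges every synapse a little, so all the existing attractors shift slightly, yet no new units are added.

import numpy as np

rng = np.random.default_rng(0)
n = 100                                     # fixed population of "neurons"
patterns = [rng.choice([-1, 1], size=n) for _ in range(5)]

def hopfield_weights(pats):
    # Hebbian outer-product weights; each stored pattern becomes an attractor.
    W = sum(np.outer(p, p) for p in pats) / n
    np.fill_diagonal(W, 0)
    return W

W_before = hopfield_weights(patterns)
new_pattern = rng.choice([-1, 1], size=n)   # a newly "meaningful odorant"
W_after = hopfield_weights(patterns + [new_pattern])

# Storing the new item touches nearly every synapse a little: no new units,
# yet the basin of every existing attractor shifts slightly.
print(np.abs(W_after - W_before).mean())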

In this connection it’s worth mentioning my favorite metaphor for the brain: a hyperviscous fluid. What do I mean by that? A fluid having many components, of varying viscosity (some very high, some very low, and everything in between), which are intermingled in a complex way, perhaps fractally. Of course, the brain, like most of the body’s soft tissue, is mostly water, but that’s not what I’m talking about. I’m talking about connectivity.

Perhaps I should instead talk about a hyperviscous network, or mesh, or maybe just a hyperviscous pattern of connectivity. Some synaptic networks have extremely high viscosity and so change very slowly over time while others have extremely low viscosity, and change rapidly. The networks involved in tracking and moving in the world in real time must have extremely low viscosity while those holding our general knowledge of the world and our own personal history will have a very high viscosity.

In the phenomenon that Freeman reports, we can think of the overall integrity of the odorant network as being maintained at, say, level 2, where moment-to-moment sensations are at level 0. The new odorant is initially registered at level 1 and so will affect the level 1 networks across all odorants. That’s the change registered in Freeman’s data. But the differences between odorants are still preserved in level 2 networks. Over time the change induced by the new odorant will percolate from level 1 to level 2 synaptic networks. Thus a new item enters the network without disrupting the overall pattern of connectivity and activation.

That is something I just made up. I have no idea whether or not, for example, it makes sense in terms of the literature on long-term potentiation (LTP) and short-term potentiation (STP), which I do not know. I do note, however, that the term “viscosity” has a use in programming that is similar to my use here.
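For what it’s worth, here is a toy rendering of that made-up picture in code (the levels, rates, and names are all my own invention): the same connectivity is split into a low-viscosity, fast-changing component and a high-viscosity, slow-changing one, and changes percolate from the fast level to the slow one.

import numpy as np

rng = np.random.default_rng(1)
n = 50
W_slow = rng.standard_normal((n, n)) * 0.1   # high viscosity: general knowledge
W_fast = np.zeros((n, n))                    # low viscosity: recent experience

def experience(x, eta_fast=0.5, leak=0.02):
    # One exposure to input x: the fast weights absorb it immediately,
    # while a small fraction percolates into the slow weights.
    global W_fast, W_slow
    W_fast += eta_fast * np.outer(x, x) / n  # rapid change at the fast level
    W_slow += leak * W_fast                  # slow consolidation
    W_fast *= (1 - leak)                     # what percolates drains away

def respond(x):
    return np.tanh((W_slow + W_fast) @ x)    # behavior reflects both levels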

Addendum: Are we talking about computation in the Freeman example?

I have variously argued that language is the simplest operation humans do that qualifies as computation. Thus, earlier in my commentary on LeCun’s piece, I said:

I take it as self-evident that an atomic explosion and a digital simulation of an atomic explosion are different kinds of things. Real atomic explosions are enormously destructive. If you want to test an atom bomb, you do so in a remote location. But you can do a digital simulation in any appropriate computer. You don’t have to encase the computer in lead and concrete to shield you from the blast and radiation, etc. And so it is with all simulations. The digital simulation is one thing, and real phenomenon, another.

That’s true of neurons and nervous systems too. [...] However, back in 1943 Warren McCulloch and Walter Pitts published a very influential paper (A Logical Calculus of the Ideas Immanent in Nervous Activity) in which they argued that neurons could be thought of as implementing circuits of logic gates. Consequently many have, perhaps too conveniently, assumed that nervous systems are (in effect) evaluating logical expressions and therefore that the nervous system is evaluating symbolic expressions.

I think that’s a mistake. Nervous systems are complex electro-chemical systems and need to be understood as such. What happens at synapses is mediated by 100+ chemicals, some more important than others. It seems that some of these processes have a digital character while others have an analog character. [...] I have come to the view that language is the simplest phenomenon that can be considered symbolic, though we may simulate those processes through computation if we wish. That implies that there is no symbolic processing in animals and none in humans before, say, 18 months or so. Just how language is realized in a neural architecture that seems so contrary to the requirements of symbolic computing is a deep question, though I’ve offered some thoughts about that in the working paper I mentioned in my original comment.
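As an aside, the McCulloch-Pitts idea is easy to state in code: a neuron is modeled as a threshold unit, and with suitable weights and thresholds such units compute logic gates. This is a sketch of the idea only; the weights and thresholds below are one choice among many.

def mp_unit(weights, threshold):
    # A McCulloch-Pitts unit: fires (1) iff the weighted sum of its inputs
    # reaches the threshold.
    return lambda *inputs: int(sum(w * x for w, x in zip(weights, inputs)) >= threshold)

AND = mp_unit([1, 1], 2)    # fires only when both inputs fire
OR = mp_unit([1, 1], 1)     # fires when either input fires
NOT = mp_unit([-1], 0)      # fires when its input is silent

assert AND(1, 1) == 1 and AND(1, 0) == 0
assert OR(0, 1) == 1 and NOT(1) == 0 and NOT(0) == 1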

If the phenomenon Freeman describes is not about computation (and, on my current view, it is NOT), then how does the problem brought up by Fodor & Pylyshyn apply?

And yet there IS a problem, isn’t there? There is a physical network with connections between the items in the network. Those connections must be altered in order to accommodate a new item. We can’t just add the new item to the end of the tape. That is, it IS a physical problem of the same form, so perhaps the technicality of whether or not we call it computation doesn’t matter.