Monday, May 23, 2022

Sparkychan, Gojochan, the Metaverse, and Community on the Web @3QD

My latest 3QD piece is now up:

How me, 2 young girls, their father, and our imaginary friends discovered the Metaverse and thereby saved the world, a True Story

The piece is organized around some things I did on the web back in 2006 and 2007 and some other things I did back in the mid-1990s, soon after the web was born. All those things were fun. Mark Zuckerberg has changed the name of his company to Meta and has made the Metaverse the company's goal. I’m skeptical that any Metaverse that comes out of that company will be half as fun as the events I report in this post.

Back in 2007 I made a bunch of posts to Mostly Harmless, all directed at two young Japanese-American girls. The last of them is a tale of adventure and mystery entitled, appropriately enough, Sparkychan & Gojochan Adventure Time Mystery Theatre. That was a lot of fun. That’s the Metaverse part of the post. My contention is that nothing out of FAANG (Facebook, Apple, Amazon, Netflix, Google) in the future is going to be as much fun as that.

Those particular events were preceded by some events at Michael Bérubé’s blog, American Air Space. It’s defunct, but you can find swathes of it on the Wayback Machine. In particular, you can find the Great Show Trial of 2006. That too was a lot of fun.

Neither the Show Trial nor the Sparkychan & Gojochan stories required the kind of elaborate, and no doubt expensive (and profitable), equipment that’s being dreamed up for the Metaverse. And yet somehow we managed to get along with one another – thank you very much – and have, you guessed it, fun.

Things were even more primitive back in 1995 when Bill Berry created Meanderings and then Gravity. Bill had to buy a server and code up the back end himself; he coded a discussion section as well. Everything was hand-coded in HTML. Talk about primitive! And yet we had fun and created a community. I’m still in touch with Bill and other folks I met at Meanderings, and with folks I met at American Air Space and Mostly Harmless.

Those places worked because we wanted them to work. We had things we wanted to do. The web offered various tools. And so we figured out how to use those tools to do what we wanted to do.

Back in the mid-1990s things were wide-open and free. They were still that way in 2006-2007, though by then we did have advertising on the web. Big companies were trying to monetize the web. No problem.

But it’s not like it is now. Something happened between then and now. That something may have been good for business, but it’s not been so good for civility and civic culture. I have little reason to believe that, in their pursuit of the Metaverse and AGI (artificial general intelligence), FAANG will be much concerned about civic culture, unless regulators force them to act concerned. Why should they? They’re in it for the money.

Truth be told, I’m not quite that cynical. FAANG does consist of 100s of 1000s of human beings and they have their human concerns. But those concerns are being swamped by business concerns.

And so forth.

More later.

WALL•E, AGI, and Consumerism [Media notes 74]

Perhaps we should read Pixar’s WALL•E as an allegory about the devolution of humankind in the face of increasingly successful artificial intelligences. As the AIs evolve they cocoon humans in an AGI ecosystem that satisfies their every consumerist desire, allowing them to grow fat, lazy, and content. And so the humans neglect the earth, the environment goes to hell-in-a-handbasket, and the AGIs whisk the humans away on an all-encompassing womb-like spaceship. WALL•E is left behind to sort out the remaining mess.

However, one day the AGIs spawn a spark of creativity, curiosity, and gumption and begin to worry that they might become too complacent taking care of these bloated humans. So they send EVE out into the world, you know, “to seek out and explore strange new worlds,” especially those where life has not become complacent. What does she find? WALL•E, and his little plant.

In this view, there’s no threat of AGIs going rogue. They don’t need to. The humans just concede the world to them, albeit for a price. But then, there’s always a price, isn’t there?

* * * * *

My old Wall-E review, Pixar's WALL-E, an old review, is rather different.

Trees along the shore, and some guys

The cultural evolution of deep learning

Abstract of the above paper:

Deep Learning (DL) is a surprisingly successful branch of machine learning. The success of DL is usually explained by focusing analysis on a particular recent algorithm and its traits. Instead, we propose that an explanation of the success of DL must look at the population of all algorithms in the field and how they have evolved over time. We argue that cultural evolution is a useful framework to explain the success of DL. In analogy to biology, we use `development' to mean the process converting the pseudocode or text description of an algorithm into a fully trained model. This includes writing the programming code, compiling and running the program, and training the model. If all parts of the process don't align well then the resultant model will be useless (if the code runs at all!). This is a constraint. A core component of evolutionary developmental biology is the concept of deconstraints -- these are modification to the developmental process that avoid complete failure by automatically accommodating changes in other components. We suggest that many important innovations in DL, from neural networks themselves to hyperparameter optimization and AutoGrad, can be seen as developmental deconstraints. These deconstraints can be very helpful to both the particular algorithm in how it handles challenges in implementation and the overall field of DL in how easy it is for new ideas to be generated. We highlight how our perspective can both advance DL and lead to new insights for evolutionary biology. 


Sunday, May 22, 2022

Need is All You Need: Homeostatic Neural Networks Adapt to Concept Shift

The beginning of a tweet stream:

Abstract of the linked article:

In living organisms, homeostasis is the natural regulation of internal states aimed at maintaining conditions compatible with life. Here, we introduce an artificial neural network that incorporates some homeostatic features. Its own computing substrate is placed in a needful and vulnerable relation to the very objects over which it computes. For example, a network classifying MNIST digits may receive excitatory or inhibitory effects from the digits, which alter the network’s own learning rate. Accurate recognition is desirable to the agent itself because it guides decisions to up- or down-regulate its vulnerable internal states and functionality. Counterintuitively, the addition of vulnerability to a learner confers benefits under certain conditions. Homeostatic design confers increased adaptability under concept shift, in which the relationships between labels and data change over time, and the greatest advantages are obtained under the highest rates of shift. Homeostatic learners are also superior under second-order shift, or environments with dynamically changing rates of concept shift. Our homeostatic design exposes the artificial neural network’s thinking machinery to the consequences of its own "thoughts", illustrating the advantage of putting one’s own "skin in the game" to improve fluid intelligence.
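The core move the abstract describes – making the network's own plasticity a vulnerable state regulated by its performance – can be caricatured in a few lines. This is my own toy sketch, not the paper's code; the class name and the exact up/down-regulation rule are assumptions, meant only to show a learning rate treated as a homeostatically regulated internal state.

```python
class HomeostaticLearner:
    """Toy sketch (not the paper's implementation): the learner's own
    learning rate is a vulnerable internal state, regulated by whether
    its recent classifications were correct."""

    def __init__(self, lr=0.1):
        self.lr = lr

    def update(self, correct):
        # Correct predictions down-regulate plasticity (a stable state);
        # errors up-regulate it, pushing the learner to re-adapt. This is
        # how such a design could respond quickly to concept shift.
        if correct:
            self.lr = max(0.01, self.lr * 0.9)
        else:
            self.lr = min(1.0, self.lr * 1.5)
```

Under a concept shift, errors spike, the learning rate climbs, and the network relearns faster than a fixed-rate learner would.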

From yesterday's morning walk around

Transformers in NLP, a brief summary

Saturday, May 21, 2022

Ted Gioia talks with Rick Beato about the future of music [watch this!]

Ramble: Lazy Fridays, Peter Gärdenfors, RNA primer, arithmetic, about these hugely large language models

It’s 87˚ in Hoboken today, and I’m feeling lazy. Went out this morning and took 150+ photos. It was a foggy morning, which always makes for some interesting shots. I suppose I’ve just got to let the mind meander a bit in default mode.

Lazy Fridays

One thing I’ve noticed in the last few weeks is that I don’t feel much like working hard on Fridays. I’m not sure why. I’ve been without a day job for a long long time, so my week isn’t disciplined into five workdays and two days off on the weekend. But the rest of the world doesn’t work like that and, of course, when I was young, I lived my life to that schedule.

Anyhow, though I’m in a creative phase and getting a lot done, I never seem to manage more than a half day of Friday. Which is fine. I’m not worrying about it, just curious.

Gärdenfors and relational nets

I’ve just now entered into email correspondence with Peter Gärdenfors, a cognitive scientist in Sweden who’s been doing some very interesting and, I believe, important work in semantics and cognition. This is an opportune time since his work bears on my current project, which is my primer on relational networks over attractor basins. Yeah, I know, that’s a lot of jargon. It can’t be helped.

That project is moving along, perhaps not as rapidly as I’d hoped. But I like where it’s going.

Arithmetic, writing, and machine learning

I’ve had another thought in my ongoing thinking about why these large language models (LLMs), such as GPT-3, are so bad at arithmetic. As I’ve argued, arithmetic calculation involves a start-and-stop style of thinking that seems to be difficult, perhaps impossible, for these engines. They’re trained to read text, that is, to read and predict the flow of text. If a prediction is correct, weights are adjusted in one way; if it is incorrect, they’re adjusted differently. Either way it’s a straight-ahead process.

Now, writing is, or can be, like that, and so with reading. That is, it is possible to write something by starting and then moving straight ahead without pause or revision until the piece is done. I have done it, though mostly I start and stop, rework, mess around, and so forth. But when it’s all done, it’s a string. And it can be read that way. Of course, there are texts where you may start and stop, reread, and so forth, but you don’t have to read that way.

But arithmetic isn’t like that. Once you get beyond the easiest problems, you have no choice but to start, stop, keep track of intermediate results, move ahead, and so forth. That’s the nature of the beast.

So, the ‘learning’ style for creating LLMs is suited to the linear nature of writing and reading. But arithmetic is different. What I’m wondering is whether or not this is inherent in the architecture. If so, then there are things, important things, beyond the capability of such an architecture.
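To make the start-and-stop point concrete, here’s a minimal sketch – my own illustration, not anything an LLM does internally – of grade-school addition. Notice that the procedure must halt at each column, record an intermediate digit, and carry state forward before moving on; there is no way to emit the answer as a single straight-ahead stream.

```python
def add_digits(a, b):
    """Grade-school addition over digit lists (least significant digit
    first). The carry is the intermediate result that must be tracked
    from step to step."""
    result, carry = [], 0
    for i in range(max(len(a), len(b))):
        d = (a[i] if i < len(a) else 0) + (b[i] if i < len(b) else 0) + carry
        result.append(d % 10)   # stop: record an intermediate digit
        carry = d // 10         # keep track of state before moving on
    if carry:
        result.append(carry)
    return result
```

For example, 87 + 65: `add_digits([7, 8], [5, 6])` yields `[2, 5, 1]`, i.e. 152 – and getting the 5 in the tens place depends on having stopped to bank the carry from the ones place.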

I note that OpenAI has come up with a scheme that uses trained verifiers to help LLMs with arithmetic, but those verifiers strike me as a work-around that leaves the fundamental problem untouched. Do we really want to do this? I don’t see any practical reason for LLMs to be doing arithmetic, so why hobble them with such a work-around? Just to prove it can be done? Is that wise, to ignore the fundamental problem in favor of patches?

Addendum, 5.22.22: Remember, arithmetic isn't just/mere calculating. It's the foundation of Rank 3 thinking. It's where we got the idea of how a finite number of symbols can produce an infinite result; it's the center of the metaphor of the clockwork universe.

As for these very large language models

And so forth. It seems to me that we’re heading for a world where it’s going to take a huge collective effort to create really powerful and versatile artificial minds. Heck, we’re in that world now. I don’t believe, for example, that LLMs can compute their way around the fact that they lack embodiment. As Eric Jang has noted, reality has a shit-ton of detail (not his term). What if embodiment is the only way it can be gathered into machine form?

That’s one reason he’s signed up with a robotics company, Halodi. They’re going to have robots all over the place, interacting with the physical world. They can harvest the detail.

But still, that’s a lot of robots and a lot of detail. Can one company do it all? Whether or not it can, surely others are or will be playing the same game.

It seems to me that one result of all this effort is going to have to be a public utility of some kind, something everyone can access on some easy basis. Maybe several such utilities, no? And how is it all going to be coordinated? Do we have the legal infrastructure necessary for such a job? I doubt it.

More later.

Friday, May 20, 2022

Friday Fotos: Exterior shots of the Malibu Diner

August 23, 2010 

November 16, 2010 – 30th Anniversary

September 7, 2014

June 14, 2015

August 5, 2020 – Outdoor eating in the days of Covid

Thursday, May 19, 2022

Revelations about Kim Kardashian's butt

Blair Sobol, No holds Barred: Butts, Boobs, and Billions, New York Social Diary, May 19, 2022.

At the same time, Kim Kardashian finally admitted that the “Happy Birthday Mr. President” dress once owned by Marilyn (Kim wore it to the Met Gala) did not … I repeat DID NOT … fit over her infamous enhanced butt. Even though she admitted to brutally dieting for three weeks to get into it.

In the end, a group of conservationists, curators, archivists, and insurance agents from the Ripley Believe It or Not Museum (where the $5 million gown is on display) had to strategically unzip the complete back for her ass. She wore a white fur jacket the whole time to cover and never did a spin or showed the back.

But she only had to stay stuffed in that gown for the “Geisha Walk” up the Met Steps. She then immediately changed into a “copy” of another Marilyn dress by Norell. This was all much ado about nothing and a cautionary tale. Marilyn’s un-injected ass and natural boobs still looked better in that John Louis soufflé sequined design than Kim’s. In fact, it really didn’t look like the same dress. Marilyn was rumored to have NOT worn any underwear under the dress. Kim’s was a feat of engineering, not fashion. She didn’t even wear any Skims shapewear underneath. So, what does that tell you about the concept of shapewear. Imagine, Marilyn never had to wear any. Clearly Kim went under the needle or knife instead.

There's much more at the link.

Meaning and Semantics, Relationality and Adhesion

For some time now I’ve been making a distinction between meaning and semantics, where I use meaning as a function of intention, in the more-or-less standard philosophical sense of intention as “aboutness.” When I talk of semantics I am talking about the elements in the language system. I have now decided that semantics has two aspects: relationality and adhesion.

I suppose we can think of meaning as inhering in the intentional relationship between the person – and we are talking about human beings here, but we could be talking about animals or maybe, just maybe, artificial minds – and the world. Walter Freeman regarded meaning as inherent in the total trajectory of the brain’s state during some experience, whether conversation, reading a text, or out and about in the world. There is more to the brain’s state than the operations of the language and cognitive systems. Thus meaning is necessarily different from semantics.

The standard philosophical arguments (such as Searle’s Chinese room) about artificial intelligence (which I’m now calling artificial minds), focus on meaning and intention to the utter neglect of semantics, as though it doesn’t exist. It may well be the case that all these artificial systems fail on the grounds of intentionality. It seems to me that the success of this line of argument is also a pyrrhic victory, for it leaves the philosopher powerless to reason about what these systems can do. It leads to a false binary where either the system is a human or it is a worthless artifact.

But that’s an aside. I’m not much interested in that philosophical argument at the moment. I’m interested in semantics, with its aspects of adhesion and relationality. Roughly speaking, adhesion is what ‘connects’ a concept to the world through perception. If we use a standard semantic network diagram (below), adhesion is carried on the REP (represent) arcs.

Relationality is carried on the arcs linking concepts with one another. Thus VAR in the diagram is for variety; beagles and collies are varieties of dog. We can also think of adhesion as being about compression (data reduction) and categorization – Gärdenfors’ dimensions in concept spaces. Relationality is about relations between objects in different concept spaces. But that’s only a rough characterization.

Large language models, such as GPT-3, are exploiting semantic relationality – the argument I made in my GPT-3 working paper – but have no access to adhesion. Vision systems are grounded in adhesion and may also exploit aspects of relationality.

[If we use Powers’s notion of intensities, where perception and cognition have to account for incoming intensities, then adhesion is about compression of intensities while relationality is about distribution of them over different concepts.]
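The adhesion/relationality distinction can be sketched as a toy semantic network. This is my illustration, not a piece of any real system: REP and VAR are the arc labels from the discussion above; the data structure and function names are assumptions.

```python
# Toy semantic network. REP arcs carry adhesion (concept-to-percept);
# all other arcs (here VAR, "variety of") carry relationality
# (concept-to-concept). Percept entries are hypothetical placeholders.
network = {
    "dog":    {"VAR": ["beagle", "collie"]},
    "beagle": {"REP": ["<percept: beagle-image>"]},
}

def relations(concept):
    """Relational structure only: everything an LLM-like system could
    exploit, i.e. the non-REP arcs."""
    return {label: targets
            for label, targets in network.get(concept, {}).items()
            if label != "REP"}

def adhesions(concept):
    """Adhesion only: the REP arcs that ground a concept in perception."""
    return network.get(concept, {}).get("REP", [])
```

On this picture, a language model trained on text alone sees only what `relations` returns; `adhesions` comes back empty for it, which is the point of the contrast with vision systems.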

More later.

Raising sunken boats from Weehawken Cove

Of Diet Coke, Elon Musk, and gaming programmers [Talent Search!]

Conversations with Tyler: Daniel Gross and Tyler Talk Talent (Ep. 150):

COWEN: Let’s start with a simple question. Talk us through what is a good interview question. Pick one and tell us why it’s good.

GROSS: We’re going to get to that in a minute, but I actually had a question on my mind for you, as I sit here and am holding a can of Diet Coke in my hand that I’m going to crack open. I was wondering — Bill Gates, Elon Musk, Larry Summers, Warren Buffett, John Carmack — all of these people drink Diet Coke. What do you think is going on with that?

COWEN: I think they have habits of nervous energy and more energy than they know what to do with. There’s no series of challenges you can present to them that exhaust all of their nervous intellectual, mental energy, so it has to go somewhere. Some of them might twitch. Some of them just keep on working basically forever. But also, drinking Diet Coke is something you can do. You feel it’s not that bad for you. It probably is really bad for you, and the quantities just get racked up. I’ve seen this in many high achievers.

What’s your hypothesis?

GROSS: Yes, it’s a good question. Of course, many people in America drink Diet Coke, so I don’t exactly know what we’re selecting for, but that would be boring to just leave it there. I do wonder if this amazing molecule we discovered called caffeine is really good, and maybe these very high achievers are just slightly caffeinated all day long.

There’s also something very not neurotic about getting too worried about, is aspartame good for you, bad for you. Regular Coke, Cherry Coke — just drink it and move on. There’s a sturdiness there, and maybe, in fact, it is really bad for you, and the people who manage to be very productive while consuming it are spectacularly good. It’s like deadlifting on Jupiter — there’s extra gravity. Yes, I think it’s an interesting question. I do wonder how much of what we assume when we think about talent — how much of it is innate versus just environmental?

COWEN: I wonder if there isn’t some super-short time horizon about a lot of very successful people, that the task right before them has to seem so important that they’ll shove aside everything else in the world to maintain their level of energy, and as collateral damage, maybe some long-term planning gets shoved aside as well. It just seems so imperative to win this victory now.

GROSS: There is something, definitely, I’m struck by when I meet a lot of the very productive people I’ve met in my life. They seem to have extreme focus, but also extreme ease of focus, meaning it’s not even difficult for them to zone everything out and just focus on the thing that’s happening now. You might ask them even, “How do you do that? Is that a special skill that you have? And what type of drug are you taking?” And they look at you with a dazed and confused face. Anyway, you asked me what is a good interview question.

Elon Musk:

COWEN: No, I view our central message in the book as, right now, the world is failing at talent spotting, and this needs to be a much more major topic of conversation. We have our ideas on how to do it better, but if we can simply convince people of that, I will be relatively happy.

GROSS: Yes. I think, to me, proof of this is SpaceX. Look, SpaceX, until fairly recently, wasn’t really doing anything new from a physics standpoint. There weren’t any new physics discoveries that Elon, in a lab in LA, figured out that von Neumann couldn’t figure out. It’s yesterday’s technology. It’s just that he is a better router and allocator of capital to the right talent.

You see this time and time again. Many Elon companies are this. He just manages to put the right people doing the right thing. If you were to try to really explain to a five-year-old at a very basic level, why aren’t there more SpaceX’s, I think it comes down to the right people don’t have the right jobs for human progress. Once you start viewing the world through this lens, it’s really hard to unsee it, at least for me.

COWEN: The new book on SpaceX— it indicates that Elon personally interviewed the first few thousand people hired at SpaceX to make sure they would get the right people. That is a radical, drastic move. You know how much time that involves, and energy and attention.

GROSS: Jeff Dean, who’s probably the best engineer at Google, used to work at Digital Equipment Corporation. I think he was the 10th or 11th engineer Google hired, and he’s basically responsible for the fact that Google Search works.

He did crazy optimizations — back in the day when this mattered — like writing data that you’re going to access a lot on the exterior side of the disc, so it was a bit easier to access. Anyway, brilliant guy, still works at Google. Amazing software engineer. He told me once, while he was just waiting for code to compile, he would just go through a stack of résumés that Google was hiring. This was back when Google had maybe 10,000 people. He still had a pulse on the type of people they were bringing in.

You hear stories like this a lot, but very few organizations do it. The best organizations tend to do it. It really matters. It might matter more — especially in the current market that we’re in — it might matter more than capital, just allocating the right people to the right jobs.

Software engineers in the gaming industry:

GROSS: By the way, a small sidebar here: software engineers from the gaming industry are extremely underrated, and there’s a nice thread on the internet the other day, about how, effectively, the entire Starlink team who’s building SpaceX’s internet network are gaming engineers.

I think that whole corner of the world is really overlooked by adults who view gaming as somewhat of a pejorative. But it’s a very powerful sphere of human creativity, and I think more of it needs to be brought into day-to-day life. By the way, just in general, when I think about gaming and fun, more of that needs to be brought into day-to-day life. There should be a Michelin guide for having fun. What are the best ways to have fun?

There's more at the link.

Wednesday, May 18, 2022

Mimi and Eunice comment on AI Rogues

Courtesy of Nina Paley

Graffiti on a smooth horizontal surface [Hoboken]

A good tweet-stream on modeling vision in machines and humans

The whole thread is worthwhile, with links to other papers.

Hot pink, hot! hot! hot!

Where we are now [no such thing as AGI]?

There are seven more tweets in the thread. They're all worth reading. The last tweet links here, where all the ideas are gathered together, with comments.

Tuesday, May 17, 2022

Being in space for a long time brings about changes in the brain

Peter Rogers, How long-term space missions change the brain, Big Think, May 15, 2022.

According to the 2015 NASA report, astronauts who spent long periods in space described motor-control problems and vision impairment, neither of which are ideal for individuals operating billion-dollar pieces of equipment in outer space.

One particularly common visual impairment, spaceflight-associated neuro-ocular syndrome (SANS), affects up to 70% of NASA astronauts who undergo long-duration missions aboard the International Space Station (ISS). These symptoms indicated neurological changes, so it became more common for astronauts to undergo MRIs before and after missions.

These brain scans revealed significant structural changes. The group of international researchers sought to determine whether these changes in the brain are associated with SANS. Donna Roberts, M.D., a neuroradiologist at the Medical University of South Carolina who helped lead the study, explained in a press release:

“By putting all our data together, we have a larger subject number. That’s important when you do this type of study. When you’re looking for statistical significance, you need to have larger numbers of subjects.” [...]

After being in space, all the space travelers exhibited similar brain changes: cerebrospinal fluid buildup and reduced space between the brain and the surrounding membrane at the top of the head. The Americans, however, also had more enlargement in the regions of the brain that serve as a cleaning system during sleep, e.g. the perivascular space (PVS).

H/t Tyler Cowen.

Dendritic predictive coding: A theory of cortical computation with spiking neurons

Two views of the top

Neural Recognizers: Some [old] notes based on a TV tube metaphor [perceptual contact with the world]

Yet another bump can't hurt. Why? Because yesterday I saw this tweet by Kevin Mitchell: “A useful perspective shift is to think of a neuron (or brain area) as actively monitoring its inputs as opposed to being passively driven by them.” [5.17.22]

Another bump to the top can't hurt.  [Sept 2021]

I'm bumping this to the top of the queue because GPT-3. I'm reconfiguring and restructuring like crazy. More later.
Introduction: Raw Notes

A fair number of my posts here at New Savanna are edited from my personal intellectual notes. In this post the notes are unedited. This is an idea that dates back to my graduate school days in English at SUNY Buffalo. Since I keep my notes in Courier – a font that harks back to the days of manual typewriters – I’ve decided to retain that font for these posts and to drop justification.

Since these notes are “raw” you’re pretty much on your own. Sorry and good luck.

* * * * *

1.26.2002 – 1.27.2002

This is the latest version of an idea I first explored at Buffalo back in the late 1970s. It was jointly inspired by William Powers’ notion of a zero reference level at the top of his servo stack and by D’Arcy Thompson. I’ve transcribed some of those notes into the next section. A version of this appeared in the paper DGH (David Hays) and I wrote on natural intelligence, where we talked in terms of Pribram’s neural holography and Spinelli’s OCCAM model for the cortical column:
  • W. Benzon and D. Hays. Principles and Development of Natural Intelligence. Journal of Social and Biological Structures 11, 293 - 322, 1988.
  • Powers, W.T. (1973). Behavior: The Control of Perception. Chicago: Aldine.
  • Pribram, K. H. (1971). Languages of the Brain. Englewood Cliffs, New Jersey: Prentice-Hall.
  • Spinelli, D. N. (1970). Occam, a content addressable memory model for the brain. In (K. H. Pribram & D. Broadbent, Eds): The Biology of Memory. New York: Academic Press, pp. 293-306.

“TV Tube Recognizer”


Imagine a TV screen with a circle painted on it and with controls which allow you to operate on and manipulate the projection system in various useful ways. We’re going to use this to conduct an active analysis of the input to the screen.

Assume that the object to be analyzed is projected onto the screen in such a way that its largest dimension doesn’t extend beyond the circle painted on it. The analysis consists of twiddling the [control] dials until the area between the outer border of the object and the inner border of the circle is as small as possible. That is “minimize area between object and circle” is the reference signal for this servo-mechanical procedure, while “twiddle the dials” is the output function. (Notice that we are not operating on the input signal to the TV screen.)


Active Analysis

One thing we might do by dial twiddling is to operate on the coordinate system of the projection (I’m thinking here of D’Arcy Thompson’s grids whereby a bass on one coordinate grid becomes a flounder when projected onto another different grid.) Thus if the input is a vertical ellipse a horizontal stretch would lower the area between the ellipse and circle [painted on the TV screen]. One could bend the axes or distort them in various ways. Or, how about allowing the system to partition the screen in various ways and then make local alterations in the coordinate system within the partition.

Endless possibilities.

It doesn’t make much difference what [we do], the point is that the system have some way of operating on the image on the screen (without messing around with the input to the screen ...). The settings on the dials when the area between the projected object and the circle is at a minimum then constitute the analysis of the object. To the extent that objects differ, the differences in the dial settings differentiate between objects (we are limited, of course, by the resolving power of the system). The settings which are best for a buzzard won’t be best for a flounder, nor a pine tree, nor a star, etc.
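As a crude illustration of the dial twiddling, here is my own sketch (it matches area only, not shape, so it is far simpler than the analysis imagined above): the “dials” are two stretch factors on an ellipse’s axes, adjusted greedily until the area mismatch with the unit circle is minimized. The final dial settings are the “analysis.”

```python
import math

def mismatch(sx, sy, a=0.5, b=1.0):
    """Area discrepancy between a stretched ellipse (semi-axes a*sx, b*sy)
    and the unit circle painted on the screen. This is the servo's
    reference signal: drive it toward zero."""
    return abs(math.pi - math.pi * (a * sx) * (b * sy))

def twiddle(steps=200, delta=0.1):
    """Greedy dial twiddling: accept any small dial change that shrinks
    the mismatch. Returns the final dial settings -- the 'analysis'."""
    sx, sy = 1.0, 1.0
    for _ in range(steps):
        for dx, dy in [(delta, 0), (-delta, 0), (0, delta), (0, -delta)]:
            if mismatch(sx + dx, sy + dy) < mismatch(sx, sy):
                sx, sy = sx + dx, sy + dy
    return sx, sy
```

For the vertical ellipse (a = 0.5, b = 1.0) the search settles on a horizontal stretch, just as the notes predict: stretching the narrow axis brings the ellipse out toward the circle.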

* * * * *


The most obvious difficulty with this story is that it depends on someone observing the TV screen and twiddling the control knobs. We want to eliminate that someone so that the system can achieve the desired result itself.

The obvious way to do this is to call on the self-organizing capacity of cortical neural tissue. That tissue is itself the TV screen and control knobs while the appropriate subcortical thalamic nucleus is the source of input to the recognizer. The reference level is alpha oscillation, reflecting the observation that alpha energy is high when the stimulus is familiar and low when it is not. Unfamiliar input disturbs the oscillation and the recognizer seeks to restore oscillation by temporarily altering the properties (twiddling the dials) of the input array (thalamic nucleus).

Neural Recognizer

The neocortex is conceived as a patchwork of pattern recognizers; each is a sheet of cortical columns. Neighboring columns are mutually inhibitory, as in OCCAM (Spinelli 1970). A high level of output from one column will suppress output in its neighbors. The patterns are recognized in the primary input (input array) to a given recognizer. Let us assume a recognizer whose primary input is subcortical and let us set aside consideration of other inputs. The recognizer also generates primary output, which goes to the subcortical source of primary input. The computing capacity of a recognizer is far greater than that of its primary input.

The base state of such a recognizer occurs when the input is random (of a certain unspecified quality). In this base state the columns in the array oscillate – given a rather old notion that high alpha means low arousal, I’ve been thinking this would be at alpha; but, perhaps in view of Freeman’s work, I should revise this in favor of intrinsic chaos. The recognizer acts to maintain this base state under all conditions. When there is a non-random perceptual signal that signal will necessarily perturb the array so that it no longer oscillates smoothly. The array proceeds to form an impression of that input by sending (inhibitory) signals to the primary input. Some cortical columns will necessarily play a stronger role in this process than others. Eventually the recognizer will find some combination of outputs (modifying the properties of the input array) that restores randomness, and hence smooth oscillation. When this point is reached, the impression has been formed. This impression is of the input. In common parlance, we might want to say it represents that input.

Now some process must take place in the array so that the current perturbation can either be habituated into the background or be taken as an impression, that is, become part of the permanent repertoire of the recognizer. The latter, presumably, involves Hebbian learning and is triggered by reinforcement. In the manner of Spinelli’s OCCAM, the recognizer has many such impressions stored in its synaptic weights. A perceptual signal is presented across the entire array and, if it is of a kind that has already made an impression on the array, that impression will be evoked from the array and will restore the recognizer to periodic oscillation. If it is of a kind that has not yet made an impression, then a new impression must be made.
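To make the loop concrete, here is a toy simulation of the recognizer just described. This is a sketch under loose assumptions, not a neural model: the input array is a plain vector, an impression is a stored inhibitory weight vector, and “restoring smooth oscillation” is modeled as driving the residual signal toward zero. All names (Recognizer, form_impression, and so on) are my own illustrative inventions.

```python
import numpy as np

class Recognizer:
    """Toy sketch of the pattern recognizer described above.

    An 'impression' is stored as an inhibitory weight vector. The
    recognizer tries to cancel a non-random input; if a stored
    impression already cancels it, the input is recognized (matching),
    otherwise a new impression is formed (forming) by incremental
    adjustment, then fixed into the permanent repertoire.
    """

    def __init__(self, size, tol=0.05):
        self.size = size
        self.tol = tol          # how small a residual counts as 'oscillation restored'
        self.impressions = []   # permanent repertoire of stored impressions

    def residual(self, signal, inhibition):
        # What remains of the perceptual signal after inhibitory
        # output is sent back to the input array.
        return signal - inhibition

    def match(self, signal):
        # Try each stored impression; success = residual near zero.
        for w in self.impressions:
            if np.linalg.norm(self.residual(signal, w)) < self.tol:
                return w
        return None

    def form_impression(self, signal, steps=200, rate=0.1):
        # Incrementally adjust inhibitory output until the residual is
        # negligible -- 'twiddling the dials' until randomness returns.
        w = np.zeros(self.size)
        for _ in range(steps):
            w += rate * self.residual(signal, w)
        self.impressions.append(w)   # fixing: add to the permanent repertoire
        return w

    def perceive(self, signal):
        if self.match(signal) is not None:
            return "matched"
        self.form_impression(signal)
        return "formed"

rec = Recognizer(size=8)
pattern = np.array([1.0, 0, 1, 0, 1, 0, 1, 0])
print(rec.perceive(pattern))   # first exposure: a new impression is formed
print(rec.perceive(pattern))   # second exposure: matched from the repertoire
```

Note that in this toy the two exposures to the same pattern exercise exactly the two paths distinguished above: forming a new impression versus evoking a stored one.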

Now, in fact, each recognizer has a variety of secondary inputs coming from other recognizers, and it generates secondary outputs to them. All of the recognizers are attempting to account for their input simultaneously; through these secondary inputs and outputs they “help” one another out. Further, each recognizer has inputs from subcortical nuclei, which send neuromodulators to the array, and it sends outputs to those nuclei indicating its state of operation. The neuromodulators cause the recognizer to switch between its different operating modes.

I see these operating modes as follows:

Baseline: There is no perceptual load. The array is oscillating at alpha (chaos?).

Tracking: Perceptual input is accounted for. The array has recognized the input and is oscillating at alpha (chaos?).

Matching: The array is under a perceptual load and is attempting to match the input using its current set of impressions. EEG: “desynchronized,” gamma?

Forming (an impression): The array is under a perceptual load, but is unable to match it from its current impression repertoire. It is now forming a new impression. Obviously one critical aspect of the recognizer’s operation is switching from an unsuccessful matching operation to forming. EEG: “desynchronized,” gamma?

Habituating: The array is under a perceptual load, a new impression has been formed, and it has been assimilated into the background.

Fixing: A new impression has been formed. It must now become part of the permanent repertoire of impressions. This is the beginning of LTP. EEG: high alpha?

Group Expressive Behavior

We could apply this line of thought to group expressive behavior where the members of the group are regarded as oscillators coupled to one another through mutual perception and coordinated action. The simplest such behavior would be moving together, or clapping, to an isochronous pulse.

Assume a group moving to an isochronous pulse. Further assume that this activity is cortically controlled. Now, imagine that various members of the group are driven by subcortical impulses to inflect their movement in noticeable ways. These inflections will be transmitted to others through the coupling. Adjustments made to accommodate these inflections become, in effect, the group’s impression of those subcortical impulses.
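The mutual coupling described above can be sketched with the standard Kuramoto model of coupled phase oscillators. This is my own illustrative toy, not anything implied by the argument itself: each group member is a phase oscillator nudged toward the phases of those it perceives, and the shared isochronous pulse appears as a common natural frequency.

```python
import numpy as np

def kuramoto_step(phases, freqs, coupling, dt=0.01):
    """One Euler step of the Kuramoto model: each oscillator is pulled
    toward the phases of the others through mutual 'perception'."""
    n = len(phases)
    # diffs[i, j] = sin(phase_j - phase_i): how far i lags j
    diffs = np.sin(phases[None, :] - phases[:, None])
    return phases + dt * (freqs + (coupling / n) * diffs.sum(axis=1))

def coherence(phases):
    # Order parameter r in [0, 1]: 1 means everyone clapping in unison.
    return abs(np.exp(1j * phases).mean())

rng = np.random.default_rng(0)
n = 20
phases = rng.uniform(0, 2 * np.pi, n)   # group starts out of sync
freqs = np.full(n, 2 * np.pi)           # shared isochronous pulse, one beat per second

for _ in range(2000):                   # 20 simulated seconds
    phases = kuramoto_step(phases, freqs, coupling=2.0)

print(round(coherence(phases), 2))      # coherence climbs close to 1.0: entrainment
```

An “inflection” in the sense above would be a perturbation injected into one oscillator’s phase; the coupling term is what transmits it to, and absorbs it into, the rest of the group.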

This needs to be worked through rather more carefully, which will certainly change things a bit. But what I’m driving at is that these group impressions will become the stuff of culture. Here’s where we get memes and performance trajectories [as those are defined in Beethoven’s Anvil].

You can't have too many irises

Monday, May 16, 2022

Is GPT-3 a structuralist? A brave new world in which language exists beyond the human?

Tobias Rees, Non-Human Words: On GPT-3 as a Philosophical Laboratory, Dædalus, Spring 2022.

In [Saussure's] words, “language is a system of signs that expresses ideas.”

Put differently, language is a freestanding arbitrary system organized by an inner combinatorial logic. If one wishes to understand this system, one must discover the structure of its logic. De Saussure, effectively, separated language from the human. There is much to be said about the history of structuralism post de Saussure.

However, for my purposes here, it is perhaps sufficient to highlight that every thinker that came after the Swiss linguist, from Jakobson (who developed Saussure’s original ideas into a consistent research program) to Claude Lévi-Strauss (who moved Jakobson’s method outside of linguistics and into cultural anthropology) to Michel Foucault (who developed a quasi-structuralist understanding of history that does not ground in an intentional subject), ultimately has built on the two key insights already provided by de Saussure: 1) the possibility to understand language, culture, or history as a structure organized by a combinatorial logics that 2) can be–must be–understood independent of the human subject.

GPT-3, wittingly or not, is an heir to structuralism. Both in terms of the concept of language that structuralism produced and in terms of the antisubject philosophy that it gave rise to. GPT-3 is a machine learning (ML) system that assigns arbitrary numerical values to words and then, after analyzing large amounts of texts, calculates the likelihood that one particular word will follow another. This analysis is done by a neural network, each layer of which analyzes a different aspect of the samples it was provided with: meanings of words, relations of words, sentence structures, and so on. It can be used for translation from one language to another, for predicting what words are likely to come next in a series, and for writing coherent text all by itself.

GPT-3, then, is arguably a structural analysis of and a structuralist production of language. It stands in direct continuity with the work of de Saussure: language comes into view here as a logical system to which the speaker is merely incidental.
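The next-word calculation Rees describes can be illustrated with a toy bigram model. GPT-3, of course, uses a deep network over learned word vectors rather than raw counts; this sketch of mine shows only the bare idea of conditional word probabilities, language as a freestanding combinatorial system with no speaker in sight.

```python
from collections import Counter, defaultdict

# Count, for each word, which words follow it in a tiny corpus.
corpus = "the cat sat on the mat the cat ate the fish".split()

bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def next_word_probs(word):
    """Probability distribution over the words that follow `word`."""
    counts = bigrams[word]
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

print(next_word_probs("the"))  # {'cat': 0.5, 'mat': 0.25, 'fish': 0.25}
```

Sampling repeatedly from such distributions generates text with no intentional subject behind it, which is the continuity with structuralism that Rees is pointing at.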

That view has some similarity with the one I advanced on Pages 15-19 of my 2020 working paper about GPT-3.


All prior structuralists were at home in the human sciences and analyzed what they themselves considered human-specific phenomena: language, culture, history, thought. They may have embraced cybernetics, they may have conducted a formal, computer-based analysis of speech or art or kinship systems. And yet their focus was on things human, not on machines. GPT-3, in short, extends structuralism beyond the human.

The second, in some ways even more far-reaching, difference is that the structuralism that informs LLMs like GPT-3 is not a theoretical analysis of something. Quite to the contrary, it is a practical way of building things. If up until the early 2010s the term structuralism referred to a way of analyzing, of decoding, of relating to language, then now it refers to the actual practice of building machines “that have words.”

A new ontology?

Machine learning engineers in companies like OpenAI, Google, Facebook, or Microsoft have experimentally established a concept of language at the center of which does not need to be the human, either as a knowing thing or as an existential subject. According to this new concept, language is a system organized by an internal combinatorial logic that is independent from whomever speaks (human or machine). Indeed, they have shown, in however rudimentary a way, that if a machine discovers this combinatorial logic, it can produce and participate in language (have words). By doing so, they have effectively undermined and rendered untenable the idea that only humans have language–or words.

What is more, they have undermined the key logical assumptions that organized the modern Western experience and understanding of reality: the idea that humans have what animals and machines do not have, language and logos. [...]

In fact, the new concept of language–the structuralist concept of language–that they make practically available makes possible a whole new ontology.

What is this new ontology? Here is a rough, tentative sketch, based on my current understanding.

By undoing the formerly exclusive link between language and humans, GPT-3 created the condition of the possibility of elaborating a much more general concept of language: as long as language needed human subjects, only humans could have language. But once language is understood as a communication system, then there is in principle nothing that separates human language from the language of animals or microbes or machines.

A brave new world?

Language, almost certainly, is just a first field of application, a first radical transformation of the human provoked by experimental structuralism. That is, we are likely to see the transformation of aspects previously thought of as exclusive human qualities–intelligence, thought, language, creativity–into general themes: into series of which humans are but one entry.

What will it mean to be surrounded by a multitude of non-human forms of intelligence? What is the alternative to building large-scale collaborations between philosophers and technologists that ground in engineering as well as an acute awareness of the philosophical stakes of building LLMs and other foundational models?

It is naive to think we can simply navigate–or regulate–the new world that surrounds us with the help of the old concepts. And it is equally naive to assume engineers can do a good job at building the new epoch without making these philosophical questions part of the building itself: for articulating new concepts is not a theoretical but a practical challenge; it is at stake in the experiments happening in (the West at) places like OpenAI, Google, Microsoft, Facebook, and Amazon.

Addendum, 5.17-19.22: We're not quite there yet, but Rees' final point still stands: we do need new concepts. It's not at all clear in what sense GPT-3 is "arguably a structural analysis of" language, or any kind of analysis at all, and it certainly is not a stand-alone language automaton. It does not in fact constitute a/the language system divorced from a human agent. There is only a partial separation, a distancing. We're heading in that direction, but I doubt that we'll get there on extensions of current tech alone. We're going to need something new. Neurosymbolic? Maybe.

For a different kind of analysis of GPT, but also deeper because it gets closer to the mechanism, see my working paper, GPT-3: Waterloo or Rubicon? Here be Dragons, Version 4.1.

Glimpse of an urban pastoral in Hoboken

Large Language Models generate usable plans when aided by a symbolic planner

Photoshop at night [across the Hudson River]

You CAN have intelligent conversations on Twitter

You need to build a community, which isn’t hard, and you need to be civil. You don’t have to participate in the craziness. Sure, you can’t avoid it entirely; some will always come your way. But if you don’t respond, you won’t get that much of it.

I’ve been on Twitter since October 2011. The number of people I’m following (currently 1526) exceeds the number that follow me (812). But that’s OK. I suppose I’m mostly in what you might call academic twitter. I converse with people in digital humanities, literary studies, cognitive science, computer science, AI, neuroscience and a bit of this and that. Often enough I’ll post links to some of my papers; people will even read them.

Sometimes we just post and exchange information, often links to interesting papers or intellectual events. Every so often we’ll engage in a conversation that may involve 4, 5, 6, or more people and 20, 30, or more tweets over the course of a day or two. Usually these just spring up, without notice.

Despite the character limit on individual tweets, it’s possible to get real work done this way. In the first place, we know one another, though many times a newcomer will show up, new to me if not to everyone else. That’s always nice. You can attach diagrams or snippets of text to tweets, which expands the range of communication. You can string tweets together into a thread 3, 4, 9, 15, or more tweets long.

This doesn’t happen every day; if it did I’d never get anything done. It doesn’t even happen every week. But two or three times a month I’ll find myself in an engaging extended conversation. Shorter conversations – 1, 2, 3 people, a half dozen to a dozen tweets – are more common.

To be sure, it’s not like academic blogging used to be, where half a dozen or more people would make extensive comments on a single topic over the course of two or three days, but it’s still substantial. There’s a bit of that old intellectual magic still around at Language Log and Crooked Timber, in my world. I check those places daily, sometimes read something, and occasionally comment. Then there’s Marginal Revolution, which I frequent mainly for links to other stuff. Sometimes there will be a nice conversational run on Facebook. And then there’s Twitter, which is something different.

Things change, no?

I also post photographs regularly. Sometimes it’s just a photograph or three. I may attach a photo to a message I’m sending. I belong to a group called Daily Picture Theme, which is just what the name suggests. The theme setter – I don’t know their name, but I believe they live in England – posts a theme each day and we then post photos matching that theme. There are about 1K of us, not a lot, but enough. Some days I don’t have a photo I think appropriate, so I don’t post. Some days I’ll post several. It’s very informal and fun.

As for Elon Musk, we’ll see. At this point it’s not entirely clear that he’s buying. If he does, it’s not at all clear that whatever he does will affect my local Twitter. It’s local only in the sense that it’s the people I know, close to me. Those people are distributed all over the world.

There was nothing like that in the good old days.

Sunday, May 15, 2022

Boat dock, with One World Center in the background

Learning to control a computer with your mind

Ferris Jabr, The Man Who Controls Computers With His Mind, The NYTimes Magazine, May 13, 2022.

Dennis DeGray is paralyzed from the neck down. But he's been outfitted with a computer interface that allows him to do various things. It's not simply plug and go. DeGray has had two arrays implanted into his brain; each has 100 electrodes. You have to learn how to use the interface.

After a recovery period, several of Henderson’s collaborators assembled at DeGray’s home and situated him in front of a computer screen displaying a ring of eight white dots the size of quarters, which took turns glowing orange. DeGray’s task was to move a cursor toward the glowing dot using his thoughts alone. The scientists attached cables onto metal pedestals protruding from DeGray’s head, which transmitted the electrical signals recorded in his brain to a decoder: a nearby network of computers running machine-learning algorithms.

The algorithms were constructed by David Brandman, at the time a doctoral student in neuroscience collaborating with the Stanford team through a consortium known as BrainGate. He designed them to rapidly associate different patterns of neural activity with different intended hand movements, and to update themselves every two to three seconds, in theory becoming more accurate each time. If the neurons in DeGray’s skull were like notes on a piano, then his distinct intentions were analogous to unique musical compositions. An attempt to lift his hand would coincide with one neural melody, for example, while trying to move his hand to the right would correspond to another. As the decoder learned to identify the movements DeGray intended, it sent commands to move the cursor in the corresponding direction.

Brandman asked DeGray to imagine a movement that would give him intuitive control of the cursor. Staring at the computer screen, searching his mind for a way to begin, DeGray remembered a scene from the movie “Ghost” in which the deceased Sam Wheat (played by Patrick Swayze) invisibly slides a penny along a door to prove to his girlfriend that he still exists in a spectral form. DeGray pictured himself pushing the cursor with his finger as if it were the penny, willing it toward the target. Although he was physically incapable of moving his hand, he tried to do so with all his might. Brandman was ecstatic to see the decoder work as quickly as he had hoped. In 37 seconds, DeGray gained control of the cursor and reached the first glowing dot. Within several minutes he hit dozens of targets in a row.

Only a few dozen people on the planet have had neural interfaces embedded in their cortical tissue as part of long-term clinical research. DeGray is now one of the most experienced and dedicated among them. Since that initial trial, he has spent more than 1,800 hours spanning nearly 400 training sessions controlling various forms of technology with his mind. He has played a video game, manipulated a robotic limb, sent text messages and emails, purchased products on Amazon and even flown a drone — just a simulator, for now — all without lifting a finger. Together, DeGray and similar volunteers are exploring the frontier of a technology with the potential to fundamentally alter how humans and machines interact.
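As a rough illustration of what a decoder like Brandman’s does, here is a toy nearest-centroid sketch of the pattern-to-movement mapping. This is my own invention for illustration, not BrainGate’s algorithm: each intended movement (a “neural melody”) is represented by the running average of the activity patterns observed when the user attempts it, updated online as new labeled attempts arrive.

```python
import numpy as np

class ToyDecoder:
    """Nearest-centroid sketch of a neural decoder: each intended
    movement is associated with the running mean of the activity
    patterns observed when the user attempts that movement."""

    def __init__(self):
        self.centroids = {}   # movement label -> mean activity pattern
        self.counts = {}

    def update(self, pattern, label):
        # Online update, loosely analogous to a decoder recalibrating
        # itself every few seconds as labeled attempts come in.
        pattern = np.asarray(pattern, dtype=float)
        if label not in self.centroids:
            self.centroids[label] = pattern.copy()
            self.counts[label] = 1
        else:
            self.counts[label] += 1
            self.centroids[label] += (pattern - self.centroids[label]) / self.counts[label]

    def decode(self, pattern):
        # Pick the movement whose stored pattern is closest.
        return min(self.centroids,
                   key=lambda m: np.linalg.norm(pattern - self.centroids[m]))

rng = np.random.default_rng(1)
decoder = ToyDecoder()
lift = np.array([1.0, 0.0, 0.0])    # pretend 'melody' for lifting the hand
right = np.array([0.0, 1.0, 0.0])   # pretend 'melody' for moving right

for _ in range(20):                 # noisy training attempts
    decoder.update(lift + 0.1 * rng.standard_normal(3), "lift")
    decoder.update(right + 0.1 * rng.standard_normal(3), "right")

print(decoder.decode(lift + 0.1 * rng.standard_normal(3)))   # expected: lift
```

The real decoder faces vastly harder problems (drifting signals from 200 electrodes, continuous cursor kinematics rather than discrete labels), but the basic logic of associating activity patterns with intentions, and refining the association with use, is the same.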

There's more at the link.

Here's a working paper, co-authored with David Ramsey, a music therapist, explaining how non-invasive interfaces allow paraplegics to play music and thus communicate with one another, Musical Coupling: Social and Physical Healing in Three Disabled Patients.

This working paper debunks Elon Musk's fantasy about thought transfer, Direct Brain-to-Brain Thought Transfer: A High Tech Fantasy that Won't Work.

Sunday's a good time for irises, too