Monday, May 23, 2022

Sparkychan, Gojochan, the Metaverse, and Community on the Web @3QD

My latest 3QD piece is now up:

How me, 2 young girls, their father, and our imaginary friends discovered the Metaverse and thereby saved the world, a True Story

The piece is organized around some things I did on the web back in 2006 and 2007 and some other things I did back in the mid-1990s, soon after the web was born. All those things were fun. Mark Zuckerberg has changed the name of his company, to Meta, and has made the Metaverse the company goal. I’m skeptical that any Metaverse that comes out of that company will be half as fun as the events I report in this post.

Back in 2007 I made a bunch of posts to Mostly Harmless, all directed at two young Japanese-American girls. The last of them is a tale of adventure and mystery entitled, appropriately enough, Sparkychan & Gojochan Adventure Time Mystery Theatre. That was a lot of fun. That’s the Metaverse part of the post. My contention is that nothing out of FAANG (Facebook, Apple, Amazon, Netflix, Google) in the future is going to be as much fun as that.

Those particular events were preceded by some events at Michael Bérubé’s blog, American Air Space. It’s defunct, but you can find swathes of it on the Wayback Machine. In particular, you can find the Great Show Trial of 2006. That too was a lot of fun.

Neither the Show Trial nor the Sparkychan & Gojochan stories required the kind of elaborate, and no doubt expensive (and profitable), equipment that’s being dreamed up for the Metaverse. And yet somehow we managed to get along with one another – thank you very much – and have, you guessed it, fun.

Things were even more primitive back in 1995 when Bill Berry created Meanderings and then Gravity. Bill had to buy a server and code up the back end himself; he coded a discussion section as well. Everything was hand-coded in HTML. Talk about primitive! And yet we had fun and created a community. I’m still in touch with Bill and other folks I met at Meanderings, and with folks I met at American Air Space and Mostly Harmless.

Those places worked because we wanted them to work. We had things we wanted to do. The web offered various tools. And so we figured out how to use those tools to do what we wanted to do.

Back in the mid-1990s things were wide-open and free. They were still that way in 2006-2007, though by then we did have advertising on the web. Big companies were trying to monetize the web. No problem.

But it’s not like it is now. Something happened between then and now. That something may have been good for business, but it’s not been so good for civility and civic culture. I have little reason to believe that, in their pursuit of the Metaverse and AGI (artificial general intelligence), FAANG will be much concerned about civic culture, unless regulators force them to act concerned. Why should they? They’re in it for the money.

Truth be told, I’m not quite that cynical. FAANG does consist of 100s of 1000s of human beings and they have their human concerns. But those concerns are being swamped by business concerns.

And so forth.

More later.

WALL•E, AGI, and Consumerism [Media notes 74]

Perhaps we should read Pixar’s WALL•E as an allegory about the devolution of humankind in the face of increasingly successful artificial intelligences. As the AIs evolve they cocoon humans in an AGI ecosystem that satisfies their every consumerist desire, allowing them to grow fat, lazy, and content. And so the humans neglect the earth, the environment goes to hell-in-a-handbasket, and the AGIs whisk the humans away on an all-encompassing womb-like spaceship. WALL•E is left behind to sort out the remaining mess.

However, one day the AGIs spawn a spark of creativity, curiosity, and gumption and begin to worry that they might become too complacent taking care of these bloated humans. So they send EVE out into the world, you know, “to seek out and explore strange new worlds,” especially those where life has not become complacent. What does she find? WALL•E, and his little plant.

In this view, there’s no threat of AGIs going rogue. They don’t need to. The humans just concede the world to them, albeit for a price. But then, there’s always a price, isn’t there?

* * * * *

My earlier take, Pixar's WALL-E, an old review, is rather different.

Trees along the shore, and some guys

The cultural evolution of deep learning

Abstract of the above paper:

Deep Learning (DL) is a surprisingly successful branch of machine learning. The success of DL is usually explained by focusing analysis on a particular recent algorithm and its traits. Instead, we propose that an explanation of the success of DL must look at the population of all algorithms in the field and how they have evolved over time. We argue that cultural evolution is a useful framework to explain the success of DL. In analogy to biology, we use `development' to mean the process converting the pseudocode or text description of an algorithm into a fully trained model. This includes writing the programming code, compiling and running the program, and training the model. If all parts of the process don't align well then the resultant model will be useless (if the code runs at all!). This is a constraint. A core component of evolutionary developmental biology is the concept of deconstraints -- these are modification to the developmental process that avoid complete failure by automatically accommodating changes in other components. We suggest that many important innovations in DL, from neural networks themselves to hyperparameter optimization and AutoGrad, can be seen as developmental deconstraints. These deconstraints can be very helpful to both the particular algorithm in how it handles challenges in implementation and the overall field of DL in how easy it is for new ideas to be generated. We highlight how our perspective can both advance DL and lead to new insights for evolutionary biology. 


Sunday, May 22, 2022

Need is All You Need: Homeostatic Neural Networks Adapt to Concept Shift

The beginning of a tweet stream:

Abstract of the linked article:

In living organisms, homeostasis is the natural regulation of internal states aimed at maintaining conditions compatible with life. Here, we introduce an artificial neural network that incorporates some homeostatic features. Its own computing substrate is placed in a needful and vulnerable relation to the very objects over which it computes. For example, a network classifying MNIST digits may receive excitatory or inhibitory effects from the digits, which alter the network’s own learning rate. Accurate recognition is desirable to the agent itself because it guides decisions to up- or down-regulate its vulnerable internal states and functionality. Counterintuitively, the addition of vulnerability to a learner confers benefits under certain conditions. Homeostatic design confers increased adaptability under concept shift, in which the relationships between labels and data change over time, and the greatest advantages are obtained under the highest rates of shift. Homeostatic learners are also superior under second-order shift, or environments with dynamically changing rates of concept shift. Our homeostatic design exposes the artificial neural network’s thinking machinery to the consequences of its own "thoughts", illustrating the advantage of putting one’s own "skin in the game" to improve fluid intelligence.

From yesterday's morning walk around

Transformers in NLP, a brief summary

Saturday, May 21, 2022

Ted Gioia talks with Rick Beato about the future of music [watch this!]

Ramble: Lazy Fridays, Peter Gärdenfors, RNA primer, arithmetic, about these hugely large language models

It’s 87˚ in Hoboken today, and I’m feeling lazy. Went out this morning and took 150+ photos. It was a foggy morning, which always makes for some interesting shots. I suppose I’ve just got to let the mind meander a bit in default mode.

Lazy Fridays

One thing I’ve noticed in the last few weeks is that I don’t feel much like working hard on Fridays. I’m not sure why. I’ve been without a day job for a long long time, so my week isn’t disciplined into five workdays and two days off on the weekend. But the rest of the world doesn’t work like that and, of course, when I was young, I lived my life to that schedule.

Anyhow, though I’m in a creative phase and getting a lot done, I never seem to manage more than a half day’s work on Fridays. Which is fine. I’m not worrying about it, just curious.

Gärdenfors and relational nets

I’ve just now entered into email correspondence with Peter Gärdenfors, a cognitive scientist in Sweden who’s been doing some very interesting and, I believe, important work in semantics and cognition. This is an opportune time, since his work bears on my current project, which is my primer on relational networks over attractor basins. Yeah, I know, that’s a lot of jargon. It can’t be helped.

That project is moving along, perhaps not as rapidly as I’d hoped. But I like where it’s going.

Arithmetic, writing, and machine learning

I’ve had another thought in my ongoing thinking about why these large language models (LLMs), such as GPT-3, are so bad at arithmetic. As I’ve argued, arithmetic calculation involves a start-and-stop style of thinking that seems to be difficult, perhaps impossible, for these engines. They’re trained to read text, that is, to predict the flow of text. If a prediction is correct, weights are adjusted in one way; if it is incorrect, they’re adjusted differently. Either way it’s a straight-ahead process.

Now, writing is, or can be, like that, and so with reading. That is, it is possible to write something by starting and then moving straight ahead without pause or revision until the piece is done. I have done it, though mostly I start and stop, rework, mess around, and so forth. But when it’s all done, it’s a string. And it can be read that way. Of course, there are texts where you may start and stop, reread, and so forth, but you don’t have to read that way.

But arithmetic isn’t like that. Once you get beyond the easiest problems, you have no choice but to start, stop, keep track of intermediate results, move ahead, and so forth. That’s the nature of the beast.

So, the ‘learning’ style for creating LLMs is suited to the linear nature of writing and reading. But arithmetic is different. What I’m wondering is whether or not this is inherent in the architecture. If so, then there are things, important things, beyond the capability of such an architecture.
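The point about intermediate results can be made concrete. Here’s a minimal sketch of my own (not from any LLM codebase): even grade-school addition forces you to stop at every column, record a carry, and feed it into the next step – exactly the kind of scratchpad that a single left-to-right pass over tokens doesn’t have.

```python
def column_add(a: int, b: int):
    """Grade-school addition: process digits right to left,
    stopping at each column to record an intermediate carry."""
    da, db = str(a)[::-1], str(b)[::-1]  # reversed digit strings
    carry, digits, trace = 0, [], []
    for i in range(max(len(da), len(db))):
        x = int(da[i]) if i < len(da) else 0
        y = int(db[i]) if i < len(db) else 0
        total = x + y + carry
        carry = total // 10          # intermediate state kept for the next column
        digits.append(total % 10)
        trace.append((i, total % 10, carry))
    if carry:
        digits.append(carry)
    return int("".join(map(str, digits))[::-1]), trace

# column_add(457, 668) stops three times, carrying 1 each time,
# before it can produce 1125 – a start-stop process, not a straight-ahead one.
```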

I note that OpenAI has come up with a scheme, trained verifiers, that helps LLMs with arithmetic, but those verifiers strike me as being a work-around that leaves the fundamental problem untouched. Do we really want to do this? I don’t see any practical reason for LLMs to be doing arithmetic, so why hobble them with such a work-around? Just to prove it can be done? Is that wise, to ignore the fundamental problem in favor of patches?

Addendum, 5.22.22: Remember, arithmetic isn't just/mere calculating. It's the foundation of Rank 3 thinking. It's where we got the idea of how a finite number of symbols can produce an infinite result; it's the center of the metaphor of the clockwork universe.

As for these very large language models

And so forth. It seems to me that we’re heading for a world where it’s going to take a huge collective effort to create really powerful and versatile artificial minds. Heck, we’re in that world now. I don’t believe, for example, that LLMs can compute their way around the fact that they lack embodiment. As Eric Jang has noted, reality has a shit-ton of detail (not his term). What if embodiment is the only way it can be gathered into machine form?

That’s one reason he’s signed up with a robotics company, Halodi. They’re going to have robots all over the place, interacting with the physical world. They can harvest the detail.

But still, that’s a lot of robots and a lot of detail. Can one company do it all? Whether or not they can, surely others are or will be playing the same game.

It seems to me that one result of all this effort is going to have to be a public utility of some kind, something everyone can access on some easy basis. Maybe several such utilities, no? And how is it all going to be coordinated? Do we have the legal infrastructure necessary for such a job? I doubt it.

More later.

Friday, May 20, 2022

Friday Fotos: Exterior shots of the Malibu Diner

August 23, 2010 

November 16, 2010 – 30th Anniversary

September 7, 2014

June 14, 2015

August 5, 2020 – Outdoor eating in the days of Covid

Thursday, May 19, 2022

Revelations about Kim Kardashian's butt

Blair Sobol, No holds Barred: Butts, Boobs, and Billions, New York Social Diary, May 19, 2022.

At the same time, Kim Kardashian finally admitted that the “Happy Birthday Mr. President” dress once owned by Marilyn (Kim wore it to the Met Gala) did not … I repeat DID NOT … fit over her infamous enhanced butt. Even though she admitted to brutally dieting for three weeks to get into it.

In the end, a group of conservationists, curators, archivists, and insurance agents from the Ripley Believe It or Not Museum (where the $5 million gown is on display) had to strategically unzip the complete back for her ass. She wore a white fur jacket the whole time to cover and never did a spin or showed the back.

But she only had to stay stuffed in that gown for the “Geisha Walk” up the Met Steps. She then immediately changed into a “copy” of another Marilyn dress by Norell. This was all much ado about nothing and a cautionary tale. Marilyn’s un-injected ass and natural boobs still looked better in that John Louis soufflé sequined design than Kim’s. In fact, it really didn’t look like the same dress. Marilyn was rumored to have NOT worn any underwear under the dress. Kim’s was a feat of engineering, not fashion. She didn’t even wear any Skims shapewear underneath. So, what does that tell you about the concept of shapewear. Imagine, Marilyn never had to wear any. Clearly Kim went under the needle or knife instead.

There's much more at the link.

Meaning and Semantics, Relationality and Adhesion

For some time now I’ve been making a distinction between meaning and semantics, where I use meaning as a function of intention, in the more-or-less standard philosophical sense of intention as “aboutness.” When I talk of semantics I am talking about the elements in the language system. I have now decided that semantics has two aspects: relationality and adhesion.

I suppose we can think of meaning as inhering in the intentional relationship between the person – and we are talking about human beings here, but we could be talking about animals or maybe, just maybe, artificial minds – and the world. Walter Freeman regarded meaning as inherent in the total trajectory of the brain’s state during some experience, whether in conversation, reading a text, or out and about in the world. There is more to the brain’s state than the operations of the language and cognitive systems. Thus meaning is necessarily different from semantics.

The standard philosophical arguments (such as Searle’s Chinese room) about artificial intelligence (which I’m now calling artificial minds) focus on meaning and intention to the utter neglect of semantics, as though it doesn’t exist. It may well be the case that all these artificial systems fail on the grounds of intentionality. It seems to me that the success of this line of argument is also a pyrrhic victory, for it leaves the philosopher powerless to reason about what these systems can do. It leads to a false binary where either the system is a human or it is a worthless artifact.

But that’s an aside. I’m not much interested in that philosophical argument at the moment. I’m interested in semantics, with its aspects of adhesion and relationality. Roughly speaking, adhesion is what ‘connects’ a concept to the world through perception. If we use a standard semantic network diagram (below), adhesion is carried on the REP (represent) arcs.

Relationality is carried on the arcs linking concepts with one another. Thus VAR in the diagram is for variety; beagles and collies are varieties of dog. We can also think of adhesion as being about compression (data reduction) and categorization – Gärdenfors’ dimensions in concept spaces. Relationality is about relations between objects in different concept spaces. But that’s only a rough characterization.
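The distinction can be sketched in a few lines of code. This is a toy of my own devising, loosely following the REP and VAR arc labels from the diagram; the node and percept names are illustrative, not part of any real system.

```python
from collections import defaultdict

class SemanticNet:
    """Toy semantic network with typed arcs. Arcs like VAR carry
    relationality (concept-to-concept links); REP arcs carry adhesion
    (concept-to-percept links out toward perception)."""
    def __init__(self):
        self.arcs = defaultdict(list)  # source node -> [(arc_type, target)]

    def add(self, src, arc_type, dst):
        self.arcs[src].append((arc_type, dst))

    def related(self, node, arc_type):
        return [dst for t, dst in self.arcs[node] if t == arc_type]

net = SemanticNet()
net.add("dog", "VAR", "beagle")   # relationality: beagles are a variety of dog
net.add("dog", "VAR", "collie")
net.add("beagle", "REP", "percept:beagle-image")  # adhesion: link into perception
```

On this sketch, a language model trained only on text could, in effect, populate the VAR-style arcs but would have nothing to hang on the REP arcs.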

Large language models, such as GPT-3, are exploiting semantic relationality – the argument I made in my GPT-3 working paper – but have no access to adhesion. Vision systems are grounded in adhesion and may also exploit aspects of relationality.

[If we use Powers’s notion of intensities, where perception and cognition have to account for incoming intensities, then adhesion is about compression of intensities while relationality is about distribution of them over different concepts.]

More later.

Raising sunken boats from Weehawken Cove

Of Diet Coke, Elon Musk, and gaming programmers [Talent Search!]

Conversations with Tyler: Daniel Gross and Tyler Talk Talent (Ep. 150):

COWEN: Let’s start with a simple question. Talk us through what is a good interview question. Pick one and tell us why it’s good.

GROSS: We’re going to get to that in a minute, but I actually had a question on my mind for you, as I sit here and am holding a can of Diet Coke in my hand that I’m going to crack open. I was wondering — Bill Gates, Elon Musk, Larry Summers, Warren Buffett, John Carmack — all of these people drink Diet Coke. What do you think is going on with that?

COWEN: I think they have habits of nervous energy and more energy than they know what to do with. There’s no series of challenges you can present to them that exhaust all of their nervous intellectual, mental energy, so it has to go somewhere. Some of them might twitch. Some of them just keep on working basically forever. But also, drinking Diet Coke is something you can do. You feel it’s not that bad for you. It probably is really bad for you, and the quantities just get racked up. I’ve seen this in many high achievers.

What’s your hypothesis?

GROSS: Yes, it’s a good question. Of course, many people in America drink Diet Coke, so I don’t exactly know what we’re selecting for, but that would be boring to just leave it there. I do wonder if this amazing molecule we discovered called caffeine is really good, and maybe these very high achievers are just slightly caffeinated all day long.

There’s also something very not neurotic about getting too worried about, is aspartame good for you, bad for you. Regular Coke, Cherry Coke — just drink it and move on. There’s a sturdiness there, and maybe, in fact, it is really bad for you, and the people who manage to be very productive while consuming it are spectacularly good. It’s like deadlifting on Jupiter — there’s extra gravity. Yes, I think it’s an interesting question. I do wonder how much of what we assume when we think about talent — how much of it is innate versus just environmental?

COWEN: I wonder if there isn’t some super-short time horizon about a lot of very successful people, that the task right before them has to seem so important that they’ll shove aside everything else in the world to maintain their level of energy, and as collateral damage, maybe some long-term planning gets shoved aside as well. It just seems so imperative to win this victory now.

GROSS: There is something, definitely, I’m struck by when I meet a lot of the very productive people I’ve met in my life. They seem to have extreme focus, but also extreme ease of focus, meaning it’s not even difficult for them to zone everything out and just focus on the thing that’s happening now. You might ask them even, “How do you do that? Is that a special skill that you have? And what type of drug are you taking?” And they look at you with a dazed and confused face. Anyway, you asked me what is a good interview question.

Elon Musk:

COWEN: No, I view our central message in the book as, right now, the world is failing at talent spotting, and this needs to be a much more major topic of conversation. We have our ideas on how to do it better, but if we can simply convince people of that, I will be relatively happy.

GROSS: Yes. I think, to me, proof of this is SpaceX. Look, SpaceX, until fairly recently, wasn’t really doing anything new from a physics standpoint. There weren’t any new physics discoveries that Elon, in a lab in LA, figured out that von Neumann couldn’t figure out. It’s yesterday’s technology. It’s just that he is a better router and allocator of capital to the right talent.

You see this time and time again. Many Elon companies are this. He just manages to put the right people doing the right thing. If you were to try to really explain to a five-year-old at a very basic level, why aren’t there more SpaceX’s, I think it comes down to the right people don’t have the right jobs for human progress. Once you start viewing the world through this lens, it’s really hard to unsee it, at least for me.

COWEN: The new book on SpaceX— it indicates that Elon personally interviewed the first few thousand people hired at SpaceX to make sure they would get the right people. That is a radical, drastic move. You know how much time that involves, and energy and attention.

GROSS: Jeff Dean, who’s probably the best engineer at Google, used to work at Digital Equipment Corporation. I think he was the 10th or 11th engineer Google hired, and he’s basically responsible for the fact that Google Search works.

He did crazy optimizations — back in the day when this mattered — like writing data that you’re going to access a lot on the exterior side of the disc, so it was a bit easier to access. Anyway, brilliant guy, still works at Google. Amazing software engineer. He told me once, while he was just waiting for code to compile, he would just go through a stack of résumés that Google was hiring. This was back when Google had maybe 10,000 people. He still had a pulse on the type of people they were bringing in.

You hear stories like this a lot, but very few organizations do it. The best organizations tend to do it. It really matters. It might matter more — especially in the current market that we’re in — it might matter more than capital, just allocating the right people to the right jobs.

Software engineers in the gaming industry:

GROSS: By the way, a small sidebar here: software engineers from the gaming industry are extremely underrated, and there’s a nice thread on the internet the other day, about how, effectively, the entire Starlink team who’s building SpaceX’s internet network are gaming engineers.

I think that whole corner of the world is really overlooked by adults who view gaming as somewhat of a pejorative. But it’s a very powerful sphere of human creativity, and I think more of it needs to be brought into day-to-day life. By the way, just in general, when I think about gaming and fun, more of that needs to be brought into day-to-day life. There should be a Michelin guide for having fun. What are the best ways to have fun?

There's more at the link.