Friday, December 31, 2021

Kara Swisher liked "Don't Look Up" [So did I] [Media Notes 65b]

From the very end of her year-end column in The NYTimes, Dec. 30, 2021:

But I digress: The reason I liked “The Matrix Resurrections” and “Don’t Look Up” is because these are both stories about the limits of big tech, big media and big politics and the importance of heartfelt, real family connections. These are critically important ideas as we move into the next iteration of tech, which will have a lot more to do with virtualizing everything. How we evolve and connect as humans as the world moves to VR is a critical issue. [...]

So, too, Adam McKay’s much-maligned “Don’t Look Up.” If you ask me, you should ignore the critics. Yes, there are some obvious plot points and over-the-top characterizations, but ultimately it’s a story about the gravity of humanity, however doomed it becomes because of its most pernicious members. That includes, particularly, the tech billionaire Peter Isherwell, a part played to geek perfection by Mark Rylance, who has managed to cohesively mash together the worst parts of Jeff Bezos, Elon Musk and Zuckerberg.

Isherwell’s character hits it on the nose with his know-it-all certainty and data-driven lunacy, calling to mind tech’s ruling class, with its proclivity to be frequently wrong but never in doubt. And within the movie is a caution, that we ought not let Big Tech alone govern the world we share. “We really did have everything, didn’t we?” says the feckless astronomer played by Leonardo DiCaprio in the movie’s last scene.

I feel the same way about the Isherwell character.

Thursday, December 30, 2021

Richard Hanania in search of a way to make sense of US foreign policy.

US foreign policy is unintelligible because 1) it's irrational, and 2) we're using the wrong intellectual tools to understand it. Richard Hanania sets out a better framework in Public Choice Theory and the Illusion of Grand Strategy: How Generals, Weapons Manufacturers, and Foreign Governments Shape American Foreign Policy. Here's a blog post in which he introduces the book. Here's the basic idea:

Ever since I started studying IR [international relations], I had a gnawing feeling that something about the whole enterprise was off. As I read more history, and also in other fields like economics, anthropology and psychology, I came to the conclusion that the ways in which we talk about international relations and foreign policy are simply wrong. The whole reason that IR is its own subfield in political science is because of the “unitary actor model,” or the assumption that you can talk about a nation like you talk about an individual, with motivations, goals, and strategies. No one believes this in a literal sense, but it’s considered “close enough” for the sake of trying to understand the world. And although many IR scholars do look at things like psychology and state-specific factors to explain foreign policy, they generally don’t take the critique of the unitary actor model far enough. The more I studied the specifics of American foreign policy the more it looked irrational on a system-wide level and unconnected to any reasonable goals, which further made me skeptical of the assumptions of the field.

That’s pretty abstract, so let’s make it concrete. Think about the most consequential foreign policy decision of the last half century. Why did America invade Iraq in 2003? People say things like it was for oil, or Israel, or neo-conservative ideology. Some still take the original WMDs justification seriously (here’s me arguing with Garett Jones). As I explain in the book, my theory is more like “Bush felt angry, had an instinct that expanding the war on terror was good politics, and had appointed people like Feith and Wolfowitz who already had a target in mind and told him it was going to be easy. So they just invaded and didn’t care about the consequences, because it’s not like any of them had to live in Iraq or anything. Plus they all got nice jobs afterwards anyway.” For more context, see my previous article on neo-cons as willing dupes of Ahmed Chalabi.

For both the major post-9/11 wars – Afghanistan and Iraq – it is clear from the historical record that the Bush administration had no idea what would come after regime change. The neo-con faction wanted to install Chalabi in Iraq, but Bush sort of dithered and then rejected that view, and ended up just letting State Department types get to work writing a constitution with gender quotas and building something called “civil society.” The political system selects for people who think in terms of short-term political goals, not long-term grand strategy. When the war started going badly and there weren’t even WMDs, it was too embarrassing to admit how dumb the whole thing was and the 2004 election was coming up soon so they all started talking about how American freedom depended on democratizing Muslim countries. It’s often thought that putting yourself in the shoes of others helps build empathy, but when I studied the Bush administration in particular, my experience was pretty much the opposite, and I remain taken aback by the extent to which they didn’t seem to feel any moral responsibility to think too much about the consequences of their actions, at least for anything besides electoral politics.

He summarizes the book's conclusion (in Ch. 7) this way:

In the arguments put forth in this book, dominant American ideas about foreign policy are mostly downstream of the interests of concentrated groups. Therefore, I suggest that those who want to change US behavior abroad should seek to shape the incentive structures that politicians and government officials face, rather than simply focusing on ideas.

There's more at the link.

Tyler Cowen thinks segregating school children by age is a bad idea [I agree]

From the year-end review of Conversations with Tyler:

HOLMES: Next question from @gasca. Asked a number of ones. I think the one I’ll pick is, “We talked about university curriculum, but if you could do whatever you wanted, how would you change elementary, middle school, high school curricula?”

COWEN: I don’t think I know enough to say, but intuitively, it strikes me as somewhat absurd that we group together children all of the same age. There’s an obvious staggering problem. But ideally you would want younger children always to be interacting with older children, and older children to take on a partial role of teacher and mentor, older peer.

The idea that there’s the second grade, the third grade, the fourth grade — in my gut, I feel that has to be wrong, and you’re inducing the kids to bring out the worst in each other. I don’t know how to fix that, but that’s where my attention would point — on that assumption that you group by age seems barbaric.

YES!!

Wednesday, December 29, 2021

What’s the opposite of substrate independence?

I suppose we could call it “substrate dependence” or “substrate linkage,” but this really isn’t about the term, it’s about the substance. As I recently noted, the idea of substrate independence is often invoked to indicate that it should be possible to construct a (proper) mind in silicon, though we’ve not yet figured out how to do it. I have my doubts, but who knows?

Substrate independence presupposes a fully explicit computational procedure, one whose structure is fully accessible to an external observer, one that can be constructed from the outside. The human mind, as I’ve argued, is constructed from the inside. I note as well that while digital computers recognize a distinction between addresses and content (data stored at an address), there’s no reason to think that such a distinction exists in natural nervous systems. Nor is there a distinction between memory units and processing units – something von Neumann recognized in The Computer and the Brain. All neurons seem to be both processors and memory. And each neuron is a living agent.

What if all these things – built from the inside, no distinction between address and content, no distinction between memory and processing, living components – are necessary for the construction of a mind? Could they be realized in silicon, or some other inert substrate? That’s not at all obvious.
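To make that contrast concrete, here is a minimal sketch (my illustration, not anything from the substrate-independence literature) of the two storage styles at issue: explicit address-based lookup, as in a conventional digital machine, versus a toy Hopfield-style network in which the stored patterns, the “memory,” and the “processing” are all the same weight matrix, and retrieval works by content rather than by address.

```python
import numpy as np

# Address-based storage: content lives at an explicit address,
# and retrieval requires knowing that address.
memory = {0x2F: "pattern-A", 0x30: "pattern-B"}
print(memory[0x2F])  # retrieve by address

# Content-addressable storage (toy Hopfield network): patterns are stored
# in a single weight matrix that also does the "processing"; a corrupted
# cue recalls the nearest stored pattern, and no address is involved.
patterns = np.array([[1, -1, 1, -1, 1, -1],
                     [1, 1, 1, -1, -1, -1]])
W = np.zeros((6, 6))
for p in patterns:
    W += np.outer(p, p)
np.fill_diagonal(W, 0)

cue = np.array([1, -1, 1, -1, 1, 1])      # first pattern with one bit flipped
state = cue.copy()
for _ in range(5):                        # iterate until the state settles
    state = np.where(W @ state >= 0, 1, -1)
print(state)                              # recovers [1, -1, 1, -1, 1, -1]
```

Even in this toy, no stored pattern is located anywhere in particular; each is smeared across the whole matrix. That is at least suggestive of why the address/content and memory/processor distinctions may not carry over to nervous systems.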

Don’t Look Up [Media Notes 65a]

Don’t Look Up, as you’ve probably heard, is a weird apocalyptic comedy in which a large comet acts as an allegorical stand-in for climate change. The human race does not come out well.

Manohla Dargis of The New York Times ran a review on the downside of so-so. Nick Allen at Roger Ebert.com was scathing. In a short notice Tyler Cowen noted:

You won’t find many accurate reviews of this one, in part because it is so brutal about media, not to mention American politics. The core message, however, is that everything is downstream of culture. And that we are incapable of taking our own decline seriously.

Cowen cites a substantial review from Bruno Maçães.

I note that Silicon Valley billionaires don't come out very well. Mark Rylance plays a data-happy space-struck billionaire with the power to bend the nation’s response to the comet to his commercial will – though, come to think of it, it’s all about commerce in one way or another. But he does correctly predict the President's demise between the jaws of a Bronteroc.

I rather liked it. It rang true. By all means, see and judge it for yourself.

The energetics of uniquely human subsistence strategies

Efficiency leads to leisure

Humans are animals—merely another lineage of great apes. However, we have diverged in significant ways from our ape cousins and we are perennially interested in how this happened. Kraft et al. looked at energy intake and expenditure in modern hunter-gatherer societies and great apes. They found that we do not spend less energy while foraging or farming, but we do acquire more energy and at a faster rate than our ape cousins. This difference may have allowed our ancestors to spend more time in contexts that facilitated social learning and cultural development. —SNV

Structured Abstract

INTRODUCTION

Relative to other great apes, humans have large brains, long life spans, higher fertility and larger neonates, and protracted periods of childhood dependency and development. Although these traits constitute the unique human life history that underlies the ecological success of our species, they also require human adults to meet extraordinarily high energetic demands. Determining how human subsistence strategies have met such extreme energy needs, given time and energy expenditure constraints, is thus key to understanding the origins of derived human traits.

RATIONALE

Two major transitions in hominin subsistence strategies are thought to have elevated energy capture: (i) the development of hunting and gathering ~2.5 million years ago, which coincided with brain enlargement and extended postnatal growth, and (ii) the rise of agriculture ~12,000 years ago, which was accompanied by substantial increases in fertility and population densities. These transitions are associated with the exploitation of novel food sources, but it is not clear how the energy and time budgets of early human foragers and farmers shifted to accommodate expensive traits. Some evolutionary reconstructions contend that economical locomotion, cooperation, the use of sophisticated tools, and eventually agriculture increased energy efficiency (i.e., energy gained versus energy spent), beyond that of other great apes. Alternatively, unique human subsistence strategies may reduce time and improve yield, increasing return rates (i.e., energy gained versus time spent).

To test these ideas, we compared subsistence costs (energy and time) and energy acquisition among wild orangutans, gorillas, and chimpanzees with high-resolution data on total energy expenditure, food acquisition, and time allocation, collected among Tanzanian hunter-gatherers (Hadza) and Bolivian forager-horticulturalists (Tsimane). Both populations actively forage (hunt, gather), whereas the Tsimane also practice slash-and-burn horticulture, which permits exploration of further changes in the energetics of subsistence associated with farming. We also assembled a global subsistence energetics database of contemporary hunter-gatherers and horticulturalists.

RESULTS

Relative to other great apes, human hunter-gatherers and horticulturalists spend more energy daily on subsistence, and they achieve similar energy efficiencies despite having more economical locomotion and using sophisticated technologies. In contrast, humans attain much greater return rates, spending less time on subsistence while acquiring more energy per hour. Further, horticulture is associated with higher return rates than hunting and gathering, despite minimal differences in the amount of time devoted to subsistence. Findings from our detailed study of the Hadza and Tsimane were consistent with those from the larger cross-cultural database of subsistence-level societies. Together, these results support prior evidence that the adoption of farming could have been motivated by greater gains per time spent working, and refute the notion that farming lifestyles are necessarily associated with increased labor time.

CONCLUSION

These findings revise our understanding of human energetics and evolution, indicating that humans afford expanded energy budgets primarily by increasing rates of energy acquisition, and not through energy-saving adaptations (such as economical bipedalism or sophisticated tool use) that decrease overall costs. Relative to other great apes, human subsistence strategies are characterized by high-intensity, high-cost extractive activities and expanded day ranges that provide more calories in less time. These results suggest that energy gained from improvements in efficiency throughout human evolution were primarily channeled toward further increasing foraging intensity rather than reducing the energetic costs of subsistence. Greater energetic gains per unit time are the reward for humans’ intense and behaviorally sophisticated subsistence strategies. Humans’ high-cost but high-return strategy is ecologically risky, and we argue that it was only possible in the context of increased cooperation, intergenerational food sharing, and a division of labor. We contend that the time saved by human subsistence strategies provided more leisure time for social interaction and social learning in central-place locations, which is critical for cumulative cultural evolution.
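The abstract’s key distinction is between efficiency (energy acquired per unit of energy spent) and return rate (energy acquired per unit of time). A toy calculation with made-up numbers, purely to illustrate how the two measures can come apart:

```python
# Illustrative only: the numbers below are invented, not from Kraft et al.
def efficiency(kcal_acquired, kcal_spent):
    return kcal_acquired / kcal_spent      # energy gained per energy spent

def return_rate(kcal_acquired, hours_spent):
    return kcal_acquired / hours_spent     # energy gained per hour of work

cases = {
    "hypothetical ape day":     dict(kcal=2500, spent=1500, hours=8.0),
    "hypothetical forager day": dict(kcal=6000, spent=3600, hours=5.0),
}

for name, c in cases.items():
    print(name,
          "| efficiency:", round(efficiency(c["kcal"], c["spent"]), 2),
          "| return rate (kcal/hr):", round(return_rate(c["kcal"], c["hours"]), 1))

# Both cases come out with the same efficiency (~1.67), but the second has a
# far higher return rate, which is the pattern the paper reports for humans.
```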

Tuesday, December 28, 2021

Leroy Jethro Gibbs – Beyond the edge (#NCIS) [Media Notes 64]

Leroy Jethro Gibbs is the central character of NCIS, one of the longest-running and most popular shows on network TV. It premiered in 2003 and has just completed its 19th season. Gibbs is a Supervisory Special Agent and a former Marine sniper. He’s highly respected by his superiors and subordinates and seems to be almost legendary, at least in certain circles. I’m interested in three aspects of his characterization that “point beyond the edge.”

What do I mean by that? That will emerge as we examine them: the mysterious woman, the boats in the basement, and Gibbs’ rules.

The Mysterious Woman

At the end of the first episode, as Gibbs is standing around, a woman pulls up in a silver Mercedes convertible. Gibbs gets into the car and they drive off. No one remarks about it, then or later. This happens throughout the first and second seasons and into the third. We never learn who she is.

What’s up?

We assume, more or less by default, that they have a sexual relationship. But we don’t actually know that. That she’s a redhead is one thing; we learn in the course of things that Gibbs likes redheads. That’s it.

That she’s in a Mercedes indicates that she’s got money. The Mercedes convertible is also a bit exotic. It’s a bit of a James Bond touch, though admittedly Bond favored sportier vehicles. But Gibbs is not a Bond kind of guy. Yes, they’re in similar businesses, but Bond is a cosmopolitan urban sophisticate who favors bespoke suits while Gibbs is more mundane. We’ll eventually learn that he’s from a small coal-mining town in Pennsylvania. Bond jets all over the world while Gibbs stays mostly in the Washington, D.C. metropolitan region, though he does travel to Mexico, Arizona, the Middle East, North Africa, and Afghanistan as required. He does not don a tuxedo to gamble in clubs peopled by wealthy folks and Eurotrash. It’s almost as though the woman in the Mercedes serves to tell us that the Bond world exists, but this is not that. It is something else.

A blogger writing as hessd has an interesting take on this motif. That Gibbs is pursued by a wealthy woman establishes his sexual attractiveness without encumbering him with a relationship that might interfere with his job. As the show moves on he has affairs with other women, an army colonel, a psychiatrist, a lawyer, and others, but nothing that becomes permanent.

Gibbs is offered as some kind of prototype of ideal masculinity, as is the somewhat different James Bond. As such, he has to be sexually attractive and active. This motif is a way of achieving that. But it only lasted into the third season. On the one hand, it’s not the sort of thing that could have gone on forever. On the other hand, things did change in the third season. NCIS got a new director, an attractive redhead named Jenny Shepard. We learn that in the past Gibbs and Shepard were agents together and had had a torrid affair. Every so often we’d get flashbacks to that affair, but, in the present, their relationship was professional and free of sexual complication. After two or three seasons – I forget exactly – Shepard is killed and the show comes up with other women to keep us aware of Gibbs’ sexual heat.

The boat in the basement

I don’t know when we first went down into Gibbs’ basement, but I assume it was relatively early in the first season. There we see an upturned hull of a sailing vessel supported on sawhorses. At first it’s just the keel and ribs, but in time planking is added. How is he ever going to get it out of the basement? It’s much too large to go up the steps.

Every so often someone will ask Gibbs about that. He just smiles and shrugs it off. It’s a mystery.

Gibbs goes down there to think while he works on the boat, generally sanding or planing. He pours some bourbon into a mug, offering his visitor the same. And they talk, about the case, or perhaps about life. Whatever.

Though Gibbs is working for the navy, albeit in a civilian capacity, he shows no particular interest in the sea or sailing. Sometimes a case will take him aboard a ship, but we never see him sailing about in the kind of boat he’s building.

At some point the first hull disappears, without comment, to be replaced by another somewhat different hull, one for a small motor launch. Then the boats disappear entirely and Gibbs builds other stuff.

What’s up?

Whatever else is going on, this is a device for displaying Gibbs’ love of craftsmanship and carpentry. He does it for the activity itself, not to achieve some practical end, though occasionally he will build something for a specific purpose, such as the ornamented coffin he built for his friend, colleague, and mentor Mike Franks. Some people meditate; Gibbs crafts boats, coffins, and other things.

I suppose woodworking also contrasts with the high-tech world of crime fighting, from the cell phones (which Gibbs hates), to the computers, to the lab equipment. Agent McGee is an MIT graduate with excellent computer skills and Abby Sciuto handles all the laboratory instrumentation. But Gibbs is an old-fashioned guy who likes woodworking with hand tools.

Gibbs’ rules

And then we have Gibbs’ rules, a running motif in the show. If you do a search on “Gibbs’ rules” you’ll get multiple hits, like this one. Some of them seem specific to the job – Never Let Suspects Stay Together, Never Get Personally Involved on a Case – others are more general – You Don't Waste Good, Never accept an apology from someone who has just sucker punched you. There doesn’t seem to be any particularly logical order to them. It’s a more or less arbitrary list.

Apparently #91 is the highest number mentioned so far: When you decide to walk away, don't look back. But there are many numbers without rules attached to them, e.g., 17, 19, 21, 24, 25, etc. Only 35 rules have been mentioned so far. (I’ve only watched the 15 seasons available on Netflix, so I’ve not encountered all that have been revealed.)

Gibbs’s agents are supposed to learn the rules. Every once in a while you’ll see one of them write one down. Occasionally someone, not necessarily Gibbs, will mention a rule by number only.

I suppose the central point is that these are Gibbs’s rules. No one else gets to have them. They establish the world as his world. The fact that they don’t exist in any logical order or structure, yet they are numbered, suggests the radically open-ended nature of this world. It is not logical and orderly – it is, after all, a world structured by crime, terrorism, and war.

* * * * *

Addendum, 1.8.22: In S16, E13, “She,” near the end, Gibbs opens a small file box where he apparently keeps slips of paper with his rules on them. He takes out the slip for rule 10, Never Get Personally Involved on a Case, and tosses it in the fire (in his fireplace). This is in response to reading Ziva David’s old personal notes about a case that the team had just solved. It seems that Ziva had kept a private office, which she rented in a small building, where she kept personal notes on every case she’d worked on.

What have we got?

Any fictional world is going to imply things that we do not see actually happening. But these three motifs – the mysterious woman, the boat in the basement, Gibbs’ rules – serve explicitly to point beyond what we see on screen. Though I’ve not thought this through, nothing else in NCIS works like this, nor have I seen anything quite like it in any other show I’ve watched. Of course, there are many shows I haven’t watched. If any or several of them do this sort of thing, I wouldn’t be surprised. But I’d be curious.

I wonder what role, if any, these motifs play in the show’s popularity. More likely, they tell us something about the show that is central to its appeal. What?

The rise and fall of rationality in language [based on the Google Ngram corpus, 1850-2019]

Significance of the linked article, The rise and fall of rationality in language:

The post-truth era has taken many by surprise. Here, we use massive language analysis to demonstrate that the rise of fact-free argumentation may perhaps be understood as part of a deeper change. After the year 1850, the use of sentiment-laden words in Google Books declined systematically, while the use of words associated with fact-based argumentation rose steadily. This pattern reversed in the 1980s, and this change accelerated around 2007, when across languages, the frequency of fact-related words dropped while emotion-laden language surged, a trend paralleled by a shift from collectivistic to individualistic language.

Abstract:

The surge of post-truth political argumentation suggests that we are living in a special historical period when it comes to the balance between emotion and reasoning. To explore if this is indeed the case, we analyze language in millions of books covering the period from 1850 to 2019 represented in Google nGram data. We show that the use of words associated with rationality, such as “determine” and “conclusion,” rose systematically after 1850, while words related to human experience such as “feel” and “believe” declined. This pattern reversed over the past decades, paralleled by a shift from a collectivistic to an individualistic focus as reflected, among other things, by the ratio of singular to plural pronouns such as “I”/”we” and “he”/”they.” Interpreting this synchronous sea change in book language remains challenging. However, as we show, the nature of this reversal occurs in fiction as well as nonfiction. Moreover, the pattern of change in the ratio between sentiment and rationality flag words since 1850 also occurs in New York Times articles, suggesting that it is not an artifact of the book corpora we analyzed. Finally, we show that word trends in books parallel trends in corresponding Google search terms, supporting the idea that changes in book language do in part reflect changes in interest. All in all, our results suggest that over the past decades, there has been a marked shift in public interest from the collective to the individual, and from rationality toward emotion.

The post-truth era where “feelings trump facts” (1) may seem special when it comes to the historical balance between emotion and reasoning. However, quantifying this intuitive notion remains difficult as systematic surveys of public sentiment and worldviews do not have a very long history. We address this gap by systematically analyzing word use in millions of books in English and Spanish covering the period from 1850 to 2019 (2). Reading this amount of text would take a single person millennia, but computational analyses of trends in relative word frequencies may hint at aspects of cultural change (2–4). Print culture is selective and cannot be interpreted as a straightforward reflection of culture in a broader sense (5). Also, the popularity of particular words and phrases in a language can change for many reasons including technological context (e.g., carriage or computer), and the meaning of some words can change profoundly over time (e.g., gay) (6). Nonetheless, across large amounts of words, patterns of change in frequencies may to some degree reflect changes in the way people feel and see the world (2–4), assuming that concepts that are more abundantly referred to in books in part represent concepts that readers at that time were more interested in. Here, we systematically analyze long-term dynamics in the frequency of the 5,000 most used words in English and Spanish (7) in search of indicators of changing world views. We also analyze patterns in fiction and nonfiction separately. Moreover, we compare patterns for selected key words in other languages to gauge the robustness and generalizability of our results. To see if results might be specific to the corpora of book language we used, we analyzed how word use changed in the New York Times since 1850. In addition, to probe whether changes in the frequency of words used in books does indeed reflect interest in the corresponding concepts we analyzed how change in Google word searches relates to the recent change in words used in books. Following best-practice guidelines (8) we standardized word frequencies by dividing them by the frequency of the word “an,” which is indicative of total text volume, and subsequently taking z-scores (SI Appendix, sections 1, 5, and 8).
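The normalization step described at the end of that passage is simple enough to sketch. Here is a minimal, hypothetical version: divide each word’s yearly frequency by the frequency of “an” as a proxy for total text volume, then take z-scores. The file name and column layout are placeholders, not the authors’ code.

```python
import pandas as pd

# Hypothetical input: one row per (year, word) with a raw count column.
ngrams = pd.read_csv("ngram_counts.csv")          # columns: year, word, count
counts = ngrams.pivot(index="year", columns="word", values="count")

# Standardize by the frequency of "an" (proxy for total text volume),
# then convert each word's series to z-scores, as the paper describes.
relative = counts.div(counts["an"], axis=0)
zscores = (relative - relative.mean()) / relative.std()

# e.g., compare a "rationality" flag word with a "sentiment" flag word
print(zscores[["determine", "feel"]].loc[1850:2019].head())
```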

Saturday, December 25, 2021

Minds are built from the inside [evolution, development]

Thinking about this post from last year in conjunction with my short note about Substrate Independence. What's the connection? Systems that are substrate independent are not built from the inside. Their structure is observable externally and they can be constructed externally. It's this externally observable structure that can be transferred from one substrate to another.

More later.

* * * * *

What’s it mean, minds are built from the inside?

As far as I can tell, I’ve been arguing that minds and brains are built from the inside since (at least) January 2002, when I first imagined a thought experiment about coupling two brains together through a huge number of point-to-point connections. I put an argument (about inside) online in 2014 and elaborated on it in my current post at 3 Quarks Daily about the impossibility of direct point-to-point brain-to-brain communication. I’ve decided that the argument is important enough that I’ve excerpted it from the larger article and have placed it below.


The argument I make about brains is, I believe, true for all evolutionary and/or developmental systems, whether biological or cultural. They start at some point in time, maintain a boundary between themselves and their surroundings, and develop from the inside. They are self-organizing.

Note that I first developed the diagrams for a presentation I made before the Linguistics Association of Canada and the United States; they were intended to convey an idea I learned from David Hays, who, in turn, learned it from Sydney Lamb: that the meaning of an idea in a network is a function of its location in the entire network. As such, it is difficult to observe such meanings from outside the system.

Minds are built from the inside

Let us start with this simple diagram:


It makes a very simple point: that the central nervous system (CNS), with the brain as the largest component, functions in two worlds. There is the external world: the physical world, the world of plants, animals, and other people. And there is the internal milieu: the body’s interior (in which the brain itself is situated). The brain senses the external world through vision, hearing, smell, taste, touch, and other senses; it directs action in the world by control of the skeletal muscles. Similarly, it senses the internal milieu through the bloodstream and acts on it through the endocrine system – there is more to it than that, but we don’t need it all; it is the principle that I’m interested in.

This is basically the same diagram, but with just a bit more detail in the central box, the CNS.


Now we have distinct regions for receiving input from the external world (A), from the internal milieu (B), for acting on the internal milieu (C) and for acting in the external world (D). Finally, there is a central area containing neurons connected to neurons in the other areas. While no real nervous system is that simple, they are all elaborations of that basic organization.

Closed organization

And that organization is CLOSED. What do I mean by that?

The brain isn’t something that is assembled from a bunch of parts scattered throughout a bunch of bins from which they are fetched by some Transcendental Maker. The process is quite different from what I did years ago when I assembled my stereo amplifier from a kit. When I did that I laid all the parts out and assembled the basic sub-circuits. I then connected those together on the chassis and, when it was all connected, plugged it in and turned it on, a magic moment when all the dead elements suddenly came to life.

No, the brain develops through an organic process that starts with a single fertilized egg which then begins dividing and differentiating. At some point about three weeks into the process specifically neural cells appear in the embryo and then differentiate into the brain and the peripheral nervous system.

The brain is far from fully developed at birth, but its operating environment changes drastically at that time. Until that point its external world had been the womb. After birth it is exposed to the larger external world. In humans the brain continues developing into the early twenties, at which time the sutures in the skull finally close completely.

At every point in the process the cells are living cells. The neurons are receiving inputs and generating outputs. They are, in effect, becoming used to one another, “learning” one another’s ways. They are mutually adjusted.

THAT’s what I mean when I say that the system is closed. There is no external meddling going on. How could there be?

External meddling

External meddling, that is what happens when two brains are hooked together by a high tech coupling linking tens or even hundreds of millions of neurons across two brains. Both brains are now subject to considerable external meddling. Each brain is receiving inputs from a source it has no experience with, and generating outputs to that source as well. As I have indicated before, it has no way of even recognizing the presence of all this foreign input as foreign. It is just there. It is noise, electrochemical energy with no traceable linkage to the organism itself.

I suppose one could imagine that in time, weeks, months, years, the two brains would somehow sort things out. Maybe. But that’s very different from the instantaneous perception and recognition that Koch is talking about. But maybe not. Maybe the initial shock of all that noise is too much to overcome. We simply don’t know.

No two brains are alike

Bu..bu..but aren’t the two brains alike?


Only approximately, only approximately.

It is easy to identify gross body parts for one individual with the same parts for another. Here’s my left leg, there’s your left leg, here’s my left thumb, there’s your left thumb. But you can’t do that with hairs on the head. For one thing, people don’t even have the exact same number of hairs on their heads.

It is the same with neurons. It is not as though we could link neuron #7,983,004,512 from one brain to neuron #7,983,004,512 in the other brain, and so on for tens or even hundreds of millions of neurons. We have no way of making such identifications between neurons in different brains. Brains are sufficiently different from one another that it is difficult to identify corresponding areas with a high level of precision. Gross correspondence at the scale of centimeters or millimeters is all we can do. That’s not very high precision.

What we have at best, then, is some miraculous technology linking millions or even billions of neurons in one brain with millions or billions of neurons in another brain in some quasi-ordered pattern based on approximate brain geometry. This technology allows the two brains to send signals to one another, signals which neither brain is prepared to deal with and which therefore interrupt the normal processes of both brains. I do not see how anything resembling coherent thought can emerge from the resulting electrochemical chaos. Even the best of magicians is incapable of pulling a rabbit out of a poorly constructed hat.

Wednesday, December 22, 2021

Did "The Matrix" depict a more optimistic vision of the future than we are now living?

Samuel Earle, "The Timeline We’re on Is Even Darker Than ‘The Matrix’ Envisioned," NYTimes, December 22, 2021.

The original Matrix films had an undercurrent of optimism:

Yet alongside this dark tableau, “The Matrix” also contained dreams of a better internet than our own. The eponymous computer simulation is a sinister mechanism of control, imposed upon humans to harness their energy. But after the simulation is seen as a construction (enabled by swallowing the “red pill”), people have the power to plug back in and traverse it as a truer version of themselves.

Such possibilities seem beyond the current web:

Despite the pseudonyms, trolls and alter egos that still dwell in some corners of the internet, its main byways now prize consistency and transparency over the risks of anonymity and reinvention. The idea of the internet as a place to cultivate an identity outside the slots other people put you in has been eclipsed by a social media-driven focus on creating an aspirational personal brand. Self-realization is now measured in likes, shares and follower counts.

“Our digital presentations are slicker, influencer-influenced,” Ms. Turkle, a professor of the social studies of science and technology at MIT, told me. “Everyone wants to present themselves in their best light, but now we have a corporate filter of what ‘pleases.’”

The cultural shift toward holding one narrowly defined identity — online and offline, across platforms — aligns neatly with Silicon Valley’s interests. The aim of many tech companies is to know us more intimately than we know ourselves, to predict our desires and anxieties — all the better to sell us stuff. The presumption that we each hold a single “authentic” identity simplifies the task, suggesting to advertisers that we are consistent, predictable consumers.

Tuesday, December 21, 2021

Complex systems and AI

Abstract from the article linked in the first tweet above, "Collective Intelligence for Deep Learning: A Survey of Recent Developments":

In the past decade, we have witnessed the rise of deep learning to dominate the field of artificial intelligence. Advances in artificial neural networks alongside corresponding advances in hardware accelerators with large memory capacity, together with the availability of large datasets enabled researchers and practitioners alike to train and deploy sophisticated neural network models that achieve state-of-the-art performance on tasks across several fields spanning computer vision, natural language processing, and reinforcement learning. However, as these neural networks become bigger, more complex, and more widely used, fundamental problems with current deep learning models become more apparent. State-of-the-art deep learning models are known to suffer from issues that range from poor robustness, inability to adapt to novel task settings, to requiring rigid and inflexible configuration assumptions. Ideas from collective intelligence, in particular concepts from complex systems such as self-organization, emergent behavior, swarm optimization, and cellular systems tend to produce solutions that are robust, adaptable, and have less rigid assumptions about the environment configuration. It is therefore natural to see these ideas incorporated into newer deep learning methods. In this review, we will provide a historical context of neural network research's involvement with complex systems, and highlight several active areas in modern deep learning research that incorporate the principles of collective intelligence to advance its current capabilities. To facilitate a bi-directional flow of ideas, we also discuss work that utilize modern deep learning models to help advance complex systems research. We hope this review can serve as a bridge between complex systems and deep learning communities to facilitate the cross pollination of ideas and foster new collaborations across disciplines.

Saturday, December 18, 2021

Substrate Independence

The notion of substrate independence is invoked in arguments about whether or not, at least in theory, a digital computer can do anything the human brain can do. The idea is that what matters is the computational procedure, the algorithm if you will, not the substrate in which it is implemented. Anything that’s being ‘computed’ in neural ‘wetware’ can be computed by suitable software running on digital hardware. We just have to figure out the procedure and have enough computing power to do it.

I wonder, though. To the extent that the notion of substrate independence assumes (something like) the distinction between hardware and software, this may not be the case. For the hardware/software distinction doesn’t apply to the brain (and its mind). Thus, you can easily erase some software and data from a digital computer without affecting the underlying hardware. You can just as easily reload that data and software to the hardware, thus restoring the computer to its prior state. You can’t do that with a brain. Similarly, you can add new capability to a computer simply by uploading new software. You can’t do that with a brain. You cannot, for example, learn a new language or a new intellectual discipline simply by uploading a new module, say, overnight. You have to learn, painstakingly learn. In neural ‘wetware’ the substrate is irreversibly changed in a way that is not true of digital hardware.

Does machine learning change this? I note that machine learning is software that’s implemented on hardware. The underlying hardware is not changed. The learning takes place entirely in the data (the parameter weights) that is learned. Still, it seems that something more or less like (organic) neural learning is taking place in the implemented system.
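To make the hardware/software point concrete: in a digital system, everything a trained network has “learned” is explicit parameter state that can be saved, erased, and restored onto fresh hardware. A minimal PyTorch sketch of that separability (my illustration, nothing more):

```python
import torch
import torch.nn as nn

# In a digital system the "learning" is nothing but parameter values,
# which can be serialized and later reloaded without touching the hardware.
model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 2))

# ... training would adjust model's parameters here ...

torch.save(model.state_dict(), "learned_state.pt")     # capture the learning

fresh_model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 2))
fresh_model.load_state_dict(torch.load("learned_state.pt"))
# fresh_model now behaves exactly like model: the learned state transferred
# because it is explicit, externally readable data. There is no analogous
# operation for a brain, which is the point of the paragraph above.
```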

Food for thought.

Neuromorphic chips and spiking neurons

From the linked article, "Spiking Neural Networks":

Artificial intelligence researchers, on the other hand, would like to build deep neural networks that have both the brain’s remarkable abilities and its extraordinary energy efficiency. The brain consumes only about 20 watts of power. If the brain achieves its ends partly because of spiking neurons, some think that energy-efficient deep artificial neural networks (ANNs) would also need to follow suit.

But spiking neural networks have been hamstrung. The very thing that made them attractive — communicating via spikes — also made them extremely difficult to train. The algorithms that ran on IBM’s chip, for instance, had to be trained offline at considerable computational cost.

That’s set to change. Researchers have developed new algorithms to train spiking neural networks that overcome some of the earlier limitations. And at least for networks of tens of thousands of neurons, these SNNs perform as well as regular ANNs. Such networks would likely be better at processing data that has a temporal dimension (such as speech and videos) compared with standard ANNs. Also, when implemented on neuromorphic chips, SNNs promise to open up a new era of low-energy, always-on devices that can operate without access to services in the cloud.

One of the key differences between a standard ANN and a spiking neural network is the model of the neuron itself. Any artificial neuron is a simplified computational model of biological neurons. A biological neuron receives inputs into the cell body via its dendrites; and based on some internal computation, the neuron may generate an output in the form of a spike on its axon, which then serves as an input to other neurons. Standard ANNs use a model of the neuron in which the information is encoded in the firing rate of the neuron. So the function that transforms the inputs into an output is often a continuous valued function that represents the spiking rate. This is achieved by first taking a weighted sum of all the inputs and then passing the sum through an activation function. For example, a sigmoid turns the weighted sum into a real value between 0 and 1.

In a spiking artificial neuron, on the other hand, the information is encoded in both the timing of the output spike and the spiking rate. The most commonly used model of such a spiking neuron in artificial neural networks is called the leaky integrate-and-fire (LIF) neuron. Input spikes cause the neuron’s membrane potential — the electrical charge across the neuron’s cell wall — to build up. There are also processes that cause this charge to leak; in the absence of input spikes, any built-up membrane potential starts to decay. But if enough input spikes come within a certain time window, then the membrane potential crosses a threshold, at which point the neuron fires an output spike. The membrane potential resets to its base value. Variations on this theme of an LIF neuron form the basic computational units of spiking neural networks.

In 1997, Wolfgang Maass of the Institute of Theoretical Computer Science, Technische Universität, Graz, Austria, showed that such SNNs are computationally more powerful, in terms of the number of neurons needed for some task, than ANNs with rate-coding neurons that use a sigmoid activation function. He also showed that SNNs and ANNs are equivalent in their ability to compute some function (an important equivalence, since an ANN’s claim to fame is that it is a universal function approximator: given some input, an ANN can be trained to approximate any function to transform the input into a desired output).
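To make the LIF description above concrete, here is a minimal sketch of a leaky integrate-and-fire neuron. The parameter values are illustrative, not taken from the article.

```python
# Minimal leaky integrate-and-fire (LIF) neuron: input spikes build up the
# membrane potential, the potential leaks between spikes, and crossing a
# threshold produces an output spike followed by a reset.
def lif_neuron(input_spikes, weight=0.6, leak=0.9, threshold=1.0):
    v = 0.0                      # membrane potential
    output = []
    for s in input_spikes:       # one time step per input value (0 or 1)
        v = leak * v + weight * s
        if v >= threshold:
            output.append(1)     # fire an output spike
            v = 0.0              # reset to the base value
        else:
            output.append(0)
    return output

spikes_in = [1, 0, 1, 1, 0, 0, 1, 1, 1, 0]
print(lif_neuron(spikes_in))     # [0, 0, 1, 0, 0, 0, 1, 0, 1, 0]
```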

There's much more at the link, much of it having to do with how to modify backpropagation so it works and can scale with SNNs.

Conclusion:

All this bodes well for the day when spiking neural networks can be implemented on the numerous neuromorphic chips that are in development. The hope is that such networks can be both trained and deployed using dedicated hardware that sips rather than sucks energy.

Vision isn't "solved" [AI]

Abstract of linked article, Overinterpretation reveals image classification model pathologies:

Image classifiers are typically scored on their test set accuracy, but high accuracy can mask a subtle type of model failure. We find that high scoring convolutional neural networks (CNNs) on popular benchmarks exhibit troubling pathologies that allow them to display high accuracy even in the absence of semantically salient features. When a model provides a high-confidence decision without salient supporting input features, we say the classifier has overinterpreted its input, finding too much class-evidence in patterns that appear nonsensical to humans. Here, we demonstrate that neural networks trained on CIFAR-10 and ImageNet suffer from overinterpretation, and we find models on CIFAR-10 make confident predictions even when 95% of input images are masked and humans cannot discern salient features in the remaining pixel-subsets. We introduce Batched Gradient SIS, a new method for discovering sufficient input subsets for complex datasets, and use this method to show the sufficiency of border pixels in ImageNet for training and testing. Although these patterns portend potential model fragility in real-world deployment, they are in fact valid statistical patterns of the benchmark that alone suffice to attain high test accuracy. Unlike adversarial examples, overinterpretation relies upon unmodified image pixels. We find ensembling and input dropout can each help mitigate overinterpretation.
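For intuition about the masking experiment the abstract describes, here is a rough sketch of probing a trained classifier’s confidence when 95% of the pixels are zeroed out. It is only in the spirit of the paper, not the authors’ Batched Gradient SIS procedure, and the model, image format, and preprocessing are placeholders.

```python
import numpy as np
import torch

# `model` is assumed to be any trained image classifier returning logits,
# and `image` a (channels, H, W) tensor already preprocessed for it.
def confidence_under_masking(model, image, keep_fraction=0.05, seed=0):
    rng = np.random.default_rng(seed)
    h, w = image.shape[1], image.shape[2]
    mask = rng.random((h, w)) < keep_fraction        # keep ~5% of pixels
    masked = image * torch.from_numpy(mask).to(image.dtype)
    with torch.no_grad():
        probs = torch.softmax(model(masked.unsqueeze(0)), dim=-1)
    return probs.max().item()                        # top-class confidence

# usage sketch:
# conf = confidence_under_masking(trained_model, some_cifar_image)
# print(f"confidence with 95% of pixels masked: {conf:.2f}")
```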

Tuesday, December 7, 2021

Sonny Rollins and Rahsaan Roland Kirk

Thursday, December 2, 2021

It Shook Me, the Light

Bump! This post is from 2010, but it's been on my mind lately. So I'm bumping it to the head of the queue.
 
During the early 1970s I'd played for two years with a rock band called St. Matthew Passion. Modeled on Blood, Sweat, and Tears and on Chicago, the band consisted of a 4-piece rhythm section plus three horns: sax, trumpet (me), and trombone. On “She's Not There” the three horns would start with a chaotic improvised freak-out and then, on cue from the keyboard player, the entire band would come in on the first bar of the written arrangement.

On our last gig it was just me and the sax player; the trombonist couldn't make it. The sax and I started our improv. The music got more and more intense until Wham! I felt myself dissolve into white light and pure music. It felt good.

And I tensed up.

It was over.

After the gig the sax player and I made a few remarks about it — “that was nice” — enough to confirm that something had happened to him too. One guy from the audience came up to us and remarked on how fine that section had been. Did he know what had happened? Or, if not ‘know’ exactly, did he sense a special magic in the performance? I ask because performers and audience often have a very different ‘sense’ of the same performance. Perhaps the guy was just complimenting us on our ‘freak-out’ chops, not on any magic in the music.

That's the only time I've ever experienced that kind of ego loss in music. For a few years I was very ambivalent about that experience, wanting it again, but also fearing it. A child of the 60s, a very geekish child of the 60s, I’d read quite a bit about altered states of consciousness, as they were called in the scientific literature. I read around in the secondary and tertiary literature on mystical experiences, and even a bit of the primary literature – though just exactly what’s the point of reading a mystic’s account of an ineffable encounter with . . . . well, with what, exactly?

I knew such things happened. And now, in little more than a couple of heartbeats, now I too knew. But what is it that I knew?

Other than the experience itself, I knew that what all those people had been writing about was real. It’s not that I doubted it. Still it’s one thing to read about walking on the moon, even to see video footage and photographs of space-suited men walking about. It’s another thing to be there oneself.

But how can one experience be so powerful, so polarizing, that it haunts your thoughts and echoes through your soul for years afterward? What is the human nervous system that THAT can happen? In attributing the experience to the nervous system – as opposed, say, to an encounter with the divine, I do not thereby mean to dismiss it – oh, that? that was just a burp of the nervous system. We cannot dismiss it. The nervous system is us.

Now the memory's faded & the ambivalence too. But I'm playing better music now than ever I did back then. I’m talking not so much about technique – that comes and goes – but about expressive power, about ‘authenticity.’ Is that authenticity an echo of that experience?

Who knows?


* * * * *


Such experiences are common enough among musicians. Over the years I’ve collected accounts from books and articles. Here are some from Jenny Boyd, with Holly George-Warren. Musicians In Tune. New York: Simon and Schuster, 1992.