Wednesday, September 30, 2020

Deploying machine learning models in real-life applications


1. Deploying ML models is hard

Deploying a model for friends to play with is easy. Export trained model, create an endpoint, build a simple app. 30 mins.
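For scale, here's roughly what that 30-minute version looks like. A minimal sketch, assuming Flask is installed and a scikit-learn model has been pickled to model.pkl; the filename and route are illustrative, not anyone's actual service:

    # Toy serving endpoint: load a trained model, expose one prediction route.
    # Assumes a scikit-learn-style model pickled to "model.pkl" (hypothetical).
    import pickle

    from flask import Flask, jsonify, request

    app = Flask(__name__)

    with open("model.pkl", "rb") as f:  # the exported, trained model
        model = pickle.load(f)

    @app.route("/predict", methods=["POST"])
    def predict():
        features = request.get_json()["features"]  # e.g. [[5.1, 3.5, 1.4, 0.2]]
        return jsonify({"predictions": model.predict(features).tolist()})

    if __name__ == "__main__":
        app.run(port=5000)  # fine for friends, nowhere near production-grade

Note what's absent: load balancing, latency budgets, monitoring, failover. That's the gap between the demo and the real thing.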

Deploying it reliably is hard. Serving 1000s of requests with ms latency is hard. Keeping it up all the time is hard.

2. You only have a few ML models in production

Booking and eBay have 100s of models in prod. Google has 10,000s. An app has multiple features, and each might have one or more models for different data slices.

You can also serve combos of several models’ outputs, like an ensemble.
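A sketch of one such combo, simple weighted averaging of class probabilities across several fitted models; the model list and weights are placeholders, not a description of any particular production system:

    # Serve a combo of several models: weighted average of probability outputs.
    # "models" is assumed to be a list of fitted scikit-learn-style classifiers.
    import numpy as np

    def ensemble_predict(models, features, weights=None):
        # Per-model class probabilities, stacked: (n_models, n_rows, n_classes).
        probs = np.stack([m.predict_proba(features) for m in models])
        if weights is None:
            weights = np.full(len(models), 1.0 / len(models))  # equal weights
        avg = np.tensordot(weights, probs, axes=1)  # weighted mean over models
        return avg.argmax(axis=1)                   # final class per example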

3. If nothing happens, model performance remains the same

ML models perform best right after training. In prod, ML systems degrade quickly because of concept drift.

Tip: train models on data generated 6 months ago & test on current data to see how much worse they get.
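A sketch of that tip, assuming the data lives in a pandas DataFrame with a timestamp column and that scikit-learn is available; the column names and the logistic-regression stand-in are illustrative:

    # Estimate drift: fit on the ~6-month-old slice, then compare held-out
    # accuracy on old data vs. accuracy on current data.
    import pandas as pd
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import accuracy_score
    from sklearn.model_selection import train_test_split

    def drift_check(df, feature_cols, label_col="label", ts_col="timestamp"):
        cutoff = df[ts_col].max() - pd.DateOffset(months=6)
        old = df[df[ts_col] <= cutoff]   # data generated ~6 months ago
        new = df[df[ts_col] > cutoff]    # current data
        train, held = train_test_split(old, test_size=0.2, random_state=0)
        model = LogisticRegression(max_iter=1000)
        model.fit(train[feature_cols], train[label_col])
        acc_old = accuracy_score(held[label_col], model.predict(held[feature_cols]))
        acc_new = accuracy_score(new[label_col], model.predict(new[feature_cols]))
        return acc_old, acc_new

The gap between the two numbers is a rough measure of how much the world has drifted out from under the model.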

4. You won’t need to update your models as much

Some mind-boggling facts about DevOps: Etsy deploys 50 times/day. Netflix 1000s of times/day. AWS every 11.7 seconds.

MLOps isn’t an exception. For online ML systems, you want to update them as fast as humanly possible.

Deploying ML systems isn't just about getting them to the end-users.

It's about building an infrastructure so the team can be quickly alerted when something goes wrong, figure out what went wrong, test in production, and roll out or roll back updates.

It's fun!

* * * * *

Read the rest of the thread for discussion.

Tuesday, September 29, 2020

Weeds and graffiti [in an urban paradise]


Practical Fusion power? [more likely than AGI]

Scientists developing a compact version of a nuclear fusion reactor have shown in a series of research papers that it should work, renewing hopes that the long-elusive goal of mimicking the way the sun produces energy might be achieved and eventually contribute to the fight against climate change.

Construction of a reactor, called Sparc, which is being developed by researchers at the Massachusetts Institute of Technology and a spinoff company, Commonwealth Fusion Systems, is expected to begin next spring and take three or four years, the researchers and company officials said.

Although many significant challenges remain, the company said construction would be followed by testing and, if successful, building of a power plant that could use fusion energy to generate electricity, beginning in the next decade. [...]

“Reading these papers gives me the sense that they’re going to have the controlled thermonuclear fusion plasma that we all dream about,” said Cary Forest, a physicist at the University of Wisconsin who is not involved in the project. “But if I were to estimate where they’re going to be, I’d give them a factor of two that I give to all my grad students when they say how long something is going to take.” [...]
H/t Tyler Cowen.

A comment I left over at Marginal Revolution:
It could be a game-changer for the world's energy future if it works. I have no serious opinion about whether or not it will work, but note, as the article says, that we've been chasing fusion power for a long time, since before the Apollo Program and the War on Cancer.

Apollo, of course, came through. It never really was a "moonshot" if by that you mean a low-probability enterprise with a potential for high gain if it succeeds. It was expensive and dangerous, and some men did lose their lives, but the basic science was in place from the beginning. It just took a lot of engineering.

The war on cancer was and is different. The basic science wasn't in place and it seems as though it still isn't. We've learned a lot over the years, but not what we need to effect routine cures.

As I say, I don't know what the case is for fusion. It seems that the science is there, but the engineering is very difficult.

And then we have human-class AGI, or even super-intelligent AGI. I'm with those who think we're missing a lot of basic science and that the goal is mostly a phrase with no coherent meaning. Yes, we've got chess and Go down cold, and machine translation is impressive, but you wouldn't use it for legal documents. GPT-3 is interesting and impressive too. But I think that line of development will bottom out before GPT-X consumes all the electrical power in Northern California.

We'll have practical fusion power before human-class AGI.
* * * * *

Useful commentary thread on Twitter:

The tech is over-hyped, so is there a business case for Musk's Neuralink?

In an article published in The Baffler, Shit for Brains, Danielle Carr does a good job of deflating Elon Musk's claims for Neuralink and of laying out the history of José Delgado's excursions into the same territory a half century ago. She ends with some speculation about how he might recoup his investment:
While it’s not immediately obvious what business models will emerge to capitalize on neural data, a rough shape of the answer is suggested by Rune Labs, a tech venture founded by an alumnus of Alphabet’s bioscientific wing Verily Life Sciences. Most medical device manufacturers are behemoths of the old economy, lacking the resources to curate the vast amounts of data their devices generate. Ditto for university researchers, who rarely have the margins in their research grants to purchase or build the computational tools necessary to correlate large quantities of behavioral and neural data. Enter Rune Labs, which offers device manufacturers and researchers a deal: give us access to the data generated by your neural implants, and in exchange, we will provide access to state of the art data storage and computation.

From their end, Rune Labs has developed a variety of phone apps to glean data about self-reported mood. (Similar research is ongoing in “digital psychiatry” to build apps that harvest data about everything from voice modulation to exercise which can then be coupled to the information about brain activity gleaned from neural devices.) The only restriction on Rune’s use of neural device data is that they have to keep patients anonymous. More and more, this looks like the business model that will define neural implants. As Alik Widge remarked, “The idea that your data is the product is already here with brain implants. Neuropace has already said that they’re moving toward being less of an implant company and more a brain data company.”

Of all the wild speculations Elon Musk made during the Neuralink launch, the most accurate prediction was his quip that the device is “sort of like if your phone went in your brain.” “Sort of like,” indeed: Neuralink is like a phone in that it is yet another machine built for generating data. While the device does not represent a major advance in brain-machine interfaces, and the pipeline for applications beyond movement disorders is at best decades long, what Neuralink does offer is an opportunity to harvest data about the brain and couple it to the kinds of data about our choices and behaviors that are already being collected all the time. The device is best understood not as a rupture with the past, but as an intensification of the forms of surveillance and data accumulation that have come to define our everyday lives.
H/t Leanne Ogasawara.

Monday, September 28, 2020

14th street at night


Analyze This: A Curious Pattern Across Characters in Three Shakespeare Plays

From the old days: When I first started posting at The Valve I posted a series on the problem of literary character: Since they ARE fictions, why is it so difficult for us to talk about them AS fictions? Why are we always using the language and concepts of real people to talk about these fictions? This is one of those posts, originally going on the web on July 25, 2006.
Some years ago I published an essay on Much Ado About Nothing, a comedy, Othello, a tragedy, and The Winter's Tale, a romance.* All involve a protagonist who mistakenly believes the woman he loves to be unfaithful – the Claudio-Hero plot in Much Ado. Though I argue the point in my essay, for the purposes of this post I will simply assume that that common plot feature betrays the same psycho-social problematic in each play. Thus in this group of three plays we have a "natural" experiment in which a single problematic is dramatically realized in three different kinds of play.

In the comedy the male protagonist makes the mistake during courtship; in the tragedy the mistake happens shortly after marriage; and in the romance, the mistake occurs well into the marriage. If we examine the relationships between the characters, we find that they get closer as we move from one play to the next. And that's not all. There seem to be systematic differences among the configurations of characters in these plays. And that has led me to wonder whether or not those differences are related to the fact that we are dealing with three different genres, comedy, tragedy, and romance. Are these configurations merely incidental features of the plays or are they intrinsic to the different genres – as realized by Shakespeare, if not in general? This line of thinking was suggested to me by a remark Northrop Frye had made in his Anatomy of Criticism, to the effect that a tragedy is a comedy where the last act, the reconciliation, has gone missing.

With this in mind, consider the following table, in which the first column names the function a given character takes in the play:

Function      Much Ado     Othello      Winter's Tale
Protagonist   Claudio      Othello      Leontes
Mentor        Don Pedro    –            –
Deceiver      Don John     Iago         –
Paramour      Borachio     Cassio       Polixenes
Beloved       Hero         Desdemona    Hermione

Does this table depict something for which an explanation is necessary or does it depict a mere contingent set of relationships between these plays? If an explanation is necessary, what kind?

If I thought this table depicted mere contingency, I wouldn't bother posting it here (nor would I have bothered publishing an article based on it). Unfortunately, the type of explanation required is not clear to me, though I've pondered the question enough. I suspect it has something to do with the "deep structure" of those three genres. Whatever that explanation is, I don't see how it can be couched in terms of naturalistic accounts of the motivations and actions of the characters in the table. Perhaps such naturalistic accounts – whether expressed in Freudian, Jungian, Lacanian, cognitive, or evolutionary terms, whatever – have some explanatory value when applied to individual characters, but the phenomenon depicted in that table is of a different order.

It is about artistry, about how characters are constructed to meet the demands of a certain kind of dramatic trajectory, and about how a certain kind of trajectory follows from certain characters in certain relations. But it is also about how all the characters in a play are the product of a single mind. From one point of view, that mind is Shakespeare's; from a different point of view, we're dealing with the minds of readers. Just how is it that a single mind can yield all of the characters in a single play?

What that table suggests to me is that we have three different ways of "mapping" a single mind onto the multiple characters of a play. Claudio, Othello, and Leontes each has different capabilities; that is, they draw on different capabilities within the reader. And so it is with other characters as well. I can't explain what's going on. But I will finish this post by describing it in a little more detail.

Neither Othello nor Leontes has a mentor comparable to Claudio's Don Pedro. Don Pedro talked with Hero's father, Leonato, and arranged the marriage. We see that happen in the play. We must infer that Othello arranged his marriage to Desdemona, whose father didn't even know about the marriage. We know nothing about how Leontes managed his marriage to Hermione, but he doesn't have anyone associated with him who could be called his mentor.

Further, there is no deceiver in The Winter's Tale comparable to Don John or Iago. Leontes deceives himself. Iago, Othello's deceiver, is closer to Othello than Don John is to Claudio. Among the presumed paramours, Cassio is closer to Othello than Borachio is to Claudio. Polixenes and Leontes have known one another since boyhood; they are so closely identified that we can consider them doubles. Thus relationships between key characters and the protagonist become more intimate as we move from the comedy to the tragedy to the romance – and some characters, the mentor and the deceiver, seem to disappear.

Finally, note that the protagonist becomes more powerful as we move through the sequence of plays. Claudio is a youth just beginning to make his way in the world. Othello is a mature man, a seasoned general at the height of his career; but there are men who have authority over him. Leontes is king (and father); there is no mundane authority higher than his. Perhaps this increase in power is correlated with the apparent "absorption" of functions into the protagonist. The absorption of functions increases the behavioral range of the protagonist. And this increased range is symbolized by higher social status.

What makes this problem so intractable is (1) that it involves a rich configuration of relationships – both synchronic and diachronic – among characters and plot trajectories, but (2) that we don't have an adequate metalanguage for describing these relationships. Lévi-Strauss faced this problem in his four-volume Mythologiques, where he examined the relationships among myths.** He seemed to be getting at the notion that there is an invariant relationship between social structure – broadly considered – and myth structure, and that that relationship follows from the structure and operations of the human mind.

To describe these relationships Lévi-Strauss used a pseudo-algebraic notation and the notion that "transformational" relationships exist between one myth and another. At the same time he made gnomic statements to the effect that his theory of myth is just another transformation of the myths about which he theorized. That is to say, the obvious commonsense distinction between myth and discourse about myth failed at some deeper and more abstract level.

For all the work that's been done in the cognitive and neurosciences since then, it's not at all clear to me that we are yet in a position to do much better. I don't expect the cognitive scientists to tackle this problem, nor neuroscientists, much less evolutionary psychologists. I don't expect it to be solved by anyone for whom stories are simply examples of higher cognitive processing. I'm afraid it's up to us.



*Benzon, William L. At the Edge of the Modern, or Why is Prospero Shakespeare's Greatest Creation? Journal of Social and Evolutionary Systems, 21 (3), 259-279, 1998. URL: http://www.academia.edu/235334/At_the_Edge_of_the_Modern_or_Why_is_Prospero_Shakespeares_Greatest_Creation

**I've taken up this problem in a working paper, Beyond Lévi-Strauss on Myth: Objectification, Computation, and Cognition, URL: https://www.academia.edu/10541585/Beyond_Lévi-Strauss_on_Myth_Objectification_Computation_and_Cognition

Sunday, September 27, 2020

Our alien overlords?


Ramble into Fall: Cultural evolution, economic growth and progress, rambling [let's go meta]

Is the study of cultural evolution wrecked?

I’ve been thinking about an article comparing cultural with biological evolution for about a month or so. At the end the authors asked for feedback. I was wondering whether or not to respond. I’ve finally done so, with a recent blog post, Can the study of cultural evolution be successfully assimilated to a framework that is centered on biology? [No, but the biologists keep trying]. I don’t know whether or not I’ll write directly to the lead author. My problem has been that the article is so relentlessly biological in orientation that I find it hard to imagine that the authors – there are nine of them – would be able to make use of my remarks.

We’re thinking in different worlds. But the world they represent is, as far as I can tell, the largest more or less organized body of thinkers about cultural evolution – gene-culture co-evolution, or dual inheritance theory. The work these people do is good as far as it goes, but I don’t see how it can handle the kind of phenomena that interest me, music, literature, the arts in general, language, the history of science, and so forth. You simply can’t get to those phenomena from the assumptions guiding their thinking.

That line of thinking dates back to the mid-1980s, as does another line of thinking about cultural evolution, memetics. Dawkins came up with the idea in his 1976 book, The Selfish Gene, but it took a decade for the idea to catch on. Two things have come out of that. On the one hand we have the widespread popular idea of memes, as in “internet meme.” That is what it is, and I have no problem with it. What’s more problematic is the attempt to found a science of memetics. As far as I can tell, nothing has come of it, though a bunch of books have been written, a journal was started and failed, and Dan Dennett keeps pushing the idea. While Dawkins’s original impulse was valid – an account of cultural evolution needs to be built around cultural entities that are the beneficiaries of the evolutionary dynamic – the thinking that came out of it collapsed onto the idea of memes-in-the-head which then flit about from brain to brain. As far as I can tell, this line of thinking is all but dead.

So, we have one line of investigation that failed, memetics, and another, gene-cultural co-evolution, which is flourishing, but also short-sighted. Is there any chance for an inquiry into cultural evolution that is suited to the subject matter?

Growth and progress

I continue to think about economic stagnation and growth and the prospects for progress along the lines of my post from August 13, Stagnation, Redux: It’s the way of the world [good ideas are not evenly distributed, no more so than diamonds], which I subsequently incorporated into a working paper, What economic growth and statistical semantics tell us about the structure of the world (August 24, 2020), which I like a lot. I’ve been thinking about a new version of that paper in which I would add a new section at the end, one that would knit the two facets of the paper together, growth and semantics.

But the thinking’s been coming hard, as has the motivation to actually do the writing. As things moved along, I was getting more ideas about the economic end of the article than about the semantics end. So I decided to drop the idea of a second edition and instead do another working paper where I’d just expand the economic side of the article. Things felt better after I’d arrived at that decision about a week ago.

Yet, the writing was still coming hard, and I’ve been feeling a bit guilty about that. It’s difficult in this kind of situation to determine whether or not I’m just delaying the writing for no particularly good reason or because I still need to do some more thinking before writing. And, of course, the actual writing is always a good way to do some thinking.

Anyhow, I actually did some writing this afternoon and made some real progress. We’ll see how it goes.

My working title for the new paper: The Materiality of Ideas in the Universe of Knowledge and the Prospects for Progress.

Rambling about on rambling

So why do I write these rambles? Because it helps me think. I’ve got a variety of interests and I generally keep several of them working at a time. It sometimes gets a bit difficult to figure out what to work on next and push on through to a complete blog post, working paper, or set of photos. Writing one of these posts is a way of putting a bunch of different things before me and letting them bump up against one another. It’s a way to step back and breathe.

Which reminds me, I still need to do some media notes, one on To Catch a Thief, and another on Atelier.

And I’ve got to get back to this Facebook business, not to mention GPT-3 and the future of AI. Yikes!

More later.

How intelligent is an octopus, or a (giant) squid?


Abstract from the article:
The soft‐bodied cephalopods including octopus, cuttlefish, and squid are broadly considered to be the most cognitively advanced group of invertebrates. Previous research has demonstrated that these large‐brained molluscs possess a suite of cognitive attributes that are comparable to those found in some vertebrates, including highly developed perception, learning, and memory abilities. Cephalopods are also renowned for performing sophisticated feats of flexible behaviour, which have led to claims of complex cognition such as causal reasoning, future planning, and mental attribution. Hypotheses to explain why complex cognition might have emerged in cephalopods suggest that a combination of predation, foraging, and competitive pressures are likely to have driven cognitive complexity in this group of animals. Currently, it is difficult to gauge the extent to which cephalopod behaviours are underpinned by complex cognition because many of the recent claims are largely based on anecdotal evidence. In this review, we provide a general overview of cephalopod cognition with a particular focus on the cognitive attributes that are thought to be prerequisites for more complex cognitive abilities. We then discuss different types of behavioural flexibility exhibited by cephalopods and, using examples from other taxa, highlight that behavioural flexibility could be explained by putatively simpler mechanisms. Consequently, behavioural flexibility should not be used as evidence of complex cognition. Fortunately, the field of comparative cognition centres on designing methods to pinpoint the underlying mechanisms that drive behaviours. To illustrate the utility of the methods developed in comparative cognition research, we provide a series of experimental designs aimed at distinguishing between complex cognition and simpler alternative explanations. Finally, we discuss the advantages of using cephalopods to develop a more comprehensive reconstruction of cognitive evolution.

Saturday, September 26, 2020

The moon as seen from earth orbit

Robo pets for the elderly [in a time of pandemic]

Paula Span, In Isolating Times, Can Robo-Pets Provide Comfort? NYTimes, Sept 26, 2020.
“She’s more isolated in her room now,” Dr. Spangler said. “And she misses having a dog.”

Knowing that her mother couldn’t manage pet care, even if the residence had permitted animals, Dr. Spangler looked online for the robotic pets she had heard about.

She found a fluffy puppy with sensors that allow it to pant, woof, wag its tail, nap and awaken; a user can feel a simulated heartbeat. Unable to deliver the robot personally, she asked a staff member to take it inside. In a subsequent video chat, Dr. Spangler learned that her mother had named the robot dog Dumbo.

Such devices first appeared in American nursing homes and residences for seniors several years ago. A Japanese company began distributing an animatronic baby seal called PARO in 2009, and Hasbro started marketing robotic cats in 2015.

But the isolation caused by the coronavirus, not only in facilities but also among seniors living alone in their homes, has intensified interest in these products and increased sales, company executives said. It has also led to more public money being used to purchase them.
Research:
More recently, researchers have started analyzing the use of robotic pets outside institutional settings, by seniors living in their own homes. Of particular interest is the Joy for All brand sold by Ageless Innovation, a spinoff of Hasbro, and available from retailers like Walmart and Best Buy for about $120.

One of the largest studies, underwritten by United HealthCare and AARP, distributed free Joy for All robots to 271 seniors living independently.

All the seniors suffered from loneliness, according to a screening questionnaire. At 30 and 60 days, “there was improvement in their mental well-being, in sense of purpose and optimism,” said Dr. Charlotte Yeh, chief medical officer of AARP’s business subsidiary and a study co-author. The study also found “a reduction in loneliness,” Dr. Yeh said, although the questionnaires showed that participants remained lonely.

Phases of the moon [Photoshop]



Business management from the School of Rock


From the article:
Successful startups have to make a difficult transition from being a gang of friends working on a cool idea to being managers of a complex enterprise with multiple stakeholders. It’s a problem familiar to rock groups, which can go quickly from being local heroes to global brands, and from being responsible only for themselves to having hundreds of people rely on them for income. In both cases, people who made choices by instinct and on their own terms acquire new, often onerous responsibilities with barely any preparation. Staff who were hired because they were friends or family have their limitations exposed under pressure, and the original gang can have its solidarity tested to destruction. A study from Harvard Business School found that 65% of startups fail because of “co-founder conflict”. For every Coldplay, there are thousands of talented bands now forgotten because they never survived contact with success.

The history of rock groups can be viewed as a vast experimental laboratory for studying the core problems of any business: how to make a group of talented people add up to more than the sum of its parts. And, once you’ve done that, how to keep the band together. Here are four different models.

FRIENDS

“We can work it out”

The Beatles invented the idea of the band as a creative unit in the 1960s. John Lennon’s and Paul McCartney’s artistic partnership enabled them to vertically integrate the hitherto separate functions of songwriting and performing. The band had no designated frontman; all four Beatles were capable of singing lead. Though Lennon was the de facto leader in the early years, one of the band’s innovations was not to call itself “Johnny and the Beatles”, as was conventional at the time. Partly because promoters and journalists found this new entity hard to grasp, friendship became central to the band’s image. John, Paul, George and Ringo were presented to the world as a gang of inseparable buddies. Their voices blended thrillingly. They cut their hair and dressed in the same style. They talked – oh how they talked – in synchrony. “We’re really all the same person,” said McCartney in 1969. “We’re just four parts of the one.” [...]

Business disagreements were intensely personal because, for the Beatles, everything was intensely personal. What were Starr and McCartney disagreeing about so violently? A marketing plan! It was because the Beatles were such good friends to begin with that they fell out irreconcilably.

AUTOCRACIES

“I won’t back down”

Tom Petty and the Heartbreakers were formed in 1976 by five musicians from Gainesville, Florida, who had moved to Los Angeles in search of stardom. Petty was the group’s lead singer, songwriter and driving force, but the band split its income equally. Petty was talented enough to make it alone, but he loved being in a band: it gave him a sense of belonging after a fraught childhood scarred by violence. The Heartbreakers had an ethos of all for one, and one for all. By 1978 they had released two albums that sold well. Their next, “Damn the Torpedoes”, would go triple platinum and propel them into the big league. But before that happened, the band’s leader faced a tough decision.

The Heartbreakers had a new manager, Elliot Roberts, who, at 35, was already a grizzled veteran of the industry, having managed Neil Young and Joni Mitchell. The first thing Roberts did was sit down with Petty and tell him that he needed to be more selfish. “You can’t do this deal where you’re giving everybody in the band an equal cut of money,” Roberts said, “because there’s going to be a big problem at some point. You’re going to feel really bitter and used. I’ve been down this road with bands before. It explodes, and everyone walks away.” Petty listened. The days of equal shares were over.

His bandmates felt a stinging sense of betrayal. [...] Others have followed a similar pattern. On stage, Bruce Springsteen celebrates the ties that bind him to the E Street Band, but in his autobiography he is matter of fact: “Democracy in a band...is often a ticking time bomb. If I was going to carry the workload and responsibility, I might as well assume the power. I’ve always believed that the E Street Band’s continued existence is partially due to the fact that there was little to no role confusion among its members.” By which he means, there is no confusion over who’s the boss.

Even at a time when flat decision-making structures are fashionable in business, some companies are successfully run as Springsteen-style autocracies. In 2009, Cisco’s then-CEO John Chambers told the New York Times, “I’m a command-and-control person. I like being able to say turn right, and we truly have 67,000 people turn right.” After returning to the helm of Apple in 1997, Steve Jobs almost single-handedly wrenched the firm out of stagnation. [...]

DEMOCRACIES

“Everybody hurts”

In 1979, Michael Stipe, a college student in Athens, Georgia, was browsing in a downtown record store called Wuxtry when he got talking to the clerk, a college dropout and amateur guitarist called Peter Buck. The two men bonded over a love of underground rock and soon decided to form a band, recruiting two fellow students, Bill Berry and Mike Mills. Thirty-two years later, their band, R.E.M., broke up amicably, ending one of the happiest collaborations in rock history.

Another regular at Wuxtry Records was Bertis Downs, a law student. An early fan of the band, Downs became R.E.M.’s legal adviser and manager. He told me that R.E.M. operated as an Athenian democracy. “They all had equal say. There was no pecking order.” This was not majority rule: “Everyone had a veto, which meant everyone had to buy into every decision, business or art. They hashed things out until they reached a consensus. And they said ‘No’ a lot.”

Underpinning R.E.M.’s flat governance structure was an egalitarian economic one. As Tony Fletcher explains in “Perfect Circle”, his biography of the band, each member received an equal share of publishing royalties, regardless of who contributed what to each song. The same was true of their recording and performing royalties – although here equal splits are normal. [...]

R.E.M. is one of a handful of bands that has successfully contravened Springsteen’s rule. Another is one of the biggest bands in rock history, Coldplay. In both cases, members receive equal shares of all income and have had a roughly equal say in band matters. [...] The democratic model depends on individual members believing that each has the group’s interest at heart, not just their own. [...] Finally, it helps to have a shared vision of success.

FRENEMIES

“It’s only rock ’n’ roll”

Charlie Watts’s forceful rebuke to Mick Jagger [told in the first paragraphs of the article] came at a difficult time for the Rolling Stones. In the 1980s they came as close to splitting as they ever have. Their last album, “Undercover”, had sold disappointingly. Jagger embarked on a solo career and seemed to be seeking an escape from the band, possibly because he was tired of dealing with Richards, who had shaken off a debilitating dependence on heroin only to replace it with one on alcohol. But Jagger’s solo albums flopped, and he returned to his old partner. The two came to an accommodation. By the end of the decade, the Stones were back on the road again, promoting a successful new album. They have been touring – and the money has kept pouring in – ever since.

“In bands that survive a long time, there’s often an agreement to disagree,” says Simon Napier-Bell, a manager of multiple bands, including the Yardbirds and Wham! “People who don’t get on can get on in an interesting way.” It was possible for the Stones to come to such an arrangement precisely because they were never as close as the Beatles. It’s not that Jagger and Richards weren’t friends, but friendship was never as central to their image. When it comes down to it, they are there to work.

The Stones also have a clear division of responsibilities, a necessity for startups hoping to grow into stable companies. At Facebook, Mark Zuckerberg used to be responsible for everything, but now focuses on the product while Sheryl Sandberg leads the business. In “Life”, Keith Richards portrayed Jagger as a cold, soulless character who cares more about money than music. But it is Jagger’s leadership of the business side of things, and Richards’s acceptance of that leadership, that has kept the Stones rolling for so long. [...]

Ernest Bormann, a scholar of small-group communication, said that every group has a threshold for tension that represents its optimal level of conflict. Uncontrolled conflict can destroy the group, but without conflict, boredom and apathy set in. Simon Napier-Bell told me that bands who don’t fight tend to be creatively moribund.
Trade-offs?
One of the most striking differences between the Stones and the Beatles is that the Beatles split up after a mere seven years at the top, whereas the Stones are still going. One startup flashed brightly and burned out; the other established itself as a long-running corporation.

Perhaps there is a trade-off between creativity and stability. The Stones had ceased to be musical innovators by the end of the 1970s, but survived the waning of their creative powers by reaching a professional arrangement that enabled them to exploit their earlier innovations. The business they most resemble is Microsoft. The Beatles made only seven years’ worth of albums before splitting, but those albums represent the greatest body of work in the rock canon. Their emotionally intense collaboration maximised their creative potential, but made the group fragile. [...]

Marie-Louise von Franz, a psychoanalyst, wrote that “whenever one is in a group…one has to draw a veil over a part of one’s personality.” Gains from collaboration are traded off against self-expression. Occasionally, she said, it’s the other way around: a group can become united in spirit and each individual expresses themselves more fully than they would be able to by themselves. A “superpersonal harmony” prevails.
That, I believe, is what David Hays and I had, superpersonal harmony. See my brief eulogy: How Now I and Thou: In Memory of David Glenn Hays. See also Breaking Bread Together, Two Examples Very Different for a short account of how Dave ran his research group.

Friday, September 25, 2020

Ripples, ducks, a mast, buildings


Gwern on the implications of GPT-3 ["no coherent model of why GPT-3 was possible"]

I'm not a regular follower of Gwern, though I did check out what he has to say about GPT-3 and poetry, so I only just now noticed this statement:
...GPT-3’s scaling curves, unpredicted meta-learning, and success on various anti-AI challenges suggests that in terms of futurology, AI researchers’ forecasts are an emperor sans garments: they have no coherent model of how AI progress happens or why GPT-3 was possible or what specific achievements should cause alarm, where intelligence comes from, and do not learn from any falsified predictions. Their primary concerns appear to be supporting the status quo, placating public concern, and remaining respectable. As such, their comments on AI risk are meaningless: they would make the same public statements if the scaling hypothesis were true or not.
While Gwern appears to believe in AI in a way that I do not, I agree with this. And that is what prompted my recent thinking on GPT-3 in the first place, in particular, my working papers, GPT-3: Waterloo or Rubicon? Here be Dragons, from August 5, and the more recent, What economic growth and statistical semantics tell us about the structure of the world, from August 24.

Gwern concludes that assessment with this question: "Depending on what investments are made into scaling DL, and how fast compute grows, the 2020s should be quite interesting—sigmoid or singularity?" I do expect the 2020s to be interesting, but I don't expect a sigmoid from GPT-X and similar engines, nor a singularity from anything. Though, as I've been arguing for awhile, we're already swimming in a singularity.

Can the study of cultural evolution be successfully assimilated to a framework that is centered on biology? [No, but the biologists keep trying]

11:45 PM 9.26.20: This post has been edited considerably since I first posted it on the morning of the 25th. Here's a link to the article mentioned in the tweets, which are no longer publicly available.
This fills me with despair for the future study of cultural evolution. Why? Because the article is so relentlessly centered on biology. And why not? Much of the best known work in the study of cultural evolution has been done by biologists, or by thinkers oriented toward biological concepts and methods. And what is wrong with that?

It won't work, not for language, music, art, poetry, ideas in all domains, and so forth. Why? Because the authors of the paper (implicitly) assume that the benefits of cultural evolution must accrue to biological individuals, just as the benefits of biological evolution do. But cultural change often happens too rapidly for this to provide a mechanism. And they know this. But they are unwilling to draw the logical conclusion, that cultural evolution cannot be centered on biological individuals. It must be centered on cultural individuals. Cultural individuals? Don't ask, this is only a blog post, not a book-length exposition of a theoretical position. But see the entry for "cultural being" on this page: Cultural Evolution: Terms & Guide.

For example, they acknowledge the rapid rate of cultural evolution (p. 9): "Cultural evolution can proceed much faster than biological evolution both in rate of change and in terms of trait complexity. This has been pointed out previously, and several reasons are generally given for why this is the case [...]." A bit later they note (p. 12):
In biological evolution, the fate of a trait can be understood by considering the average reproductive success of all carriers of that trait (Haig, 2012; Hamilton, 1963). It is common to use optimisation and game theoretic approaches to study evolutionary endpoints (i.e. adaptations). In contrast, in cultural evolution there is no generally accepted formulation of cultural fitness (Ramsey and De Block, 2017) as the relationship between the individual and its traits is more complicated. Here, transmission is not tied to biological reproduction, can occur between arbitrary individuals, and the individual is not fixed with respect to the cultural traits it possesses and exhibits to others during its life. [...] For instance, traits that are contagious enough can spread and be maintained in a population even if they reduce the survival and/or biological reproduction of individuals (Anderton et al., 1987; Boyd and Richerson, 1976; Campbell, 1975).
Yes, a well-known issue. So why adopt a theoretical framework that treats this as a complication rather than as one of the phenomena central to the theory?

Let’s consider another brief passage (p. 12): “It is quite possible that concepts like adaptation and fitness in cultural evolution will not have equivalent and as straightforward meanings as they do in biological evolution.” They then go on to convert that into a question for further investigation (p. 13), which it certainly is. In the context of their discussion, that feels a bit like an add-on, or a secondary issue. But if the object is to think about cultural evolution, shouldn’t that be central to the theoretical enterprise?

This enterprise feels like it is about taking a biological theory, evolution, and extending it to cover cultural phenomena. That’s certainly the core of gene-culture coevolution, where culture is simply a different, and more rapid and flexible, means of inheriting behavioral traits. It seems to me that it would be better in the long run to take the logical structure of evolutionary explanation and figure out how to realize it in the materials of human mind, culture, and society. Dawkins took a stab at this when he coined the idea of a meme in The Selfish Gene. But this article doesn’t mention Dawkins at all and mentions memes only once in passing (the term also shows up in a title in the bibliography). I can certainly understand why this is so given the amount of fruitless speculation that has grown up around the idea, which I have criticized at some length, but the basic idea is worth pursuing, which I have done, most recently in a paper about music, “Rhythm Changes” Notes on Some Genetic Elements in Musical Culture.

Now consider this passage from one of the articles cited (Buskell, Enquist, & Jansson, A systems approach to cultural evolution):
While humanities and social science scholars are interested in complex phenomena—often involving the interaction between behaviour rich in semantic information, networks of social interactions, material artefacts and persisting institutions—many prominent cultural evolutionary models focus on the evolution of a few select cultural traits, or traits that vary along a single dimension [...]. Moreover, when such models do build in more traits, these typically are taken to evolve independently of one another [...]. Within cultural evolutionary theory, this strategy holds that the dynamics and structure of cultural evolutionary phenomena can be extrapolated from models that represent a small number of cultural traits interacting in independent (or non-epistatic) processes. This kind of strategy licences the modelling of simple trait systems, either with an eye to describing the kinematics of those simple systems, or to illuminate the evolution and operation of mechanisms underpinning their transmission [...].
That passage implies another issue: description. Existing models favor systems and traits that are descriptively simple, while many phenomena of interest to humanists and social scientists require complex descriptions.

When Darwin began his career he had several centuries of careful naturalistic description at his disposal, descriptions of plants and animals and their life ways. He undertook such work himself and it was central to his thinking. This article says nothing about the need for accurate description of cultural phenomena, as though it were not an issue. It is. That’s one thing I undertook in my article on Rhythm Changes: I described the phenomenon. I have written extensively, if informally, about the need for better description of literary texts (three working papers here) and have published a long paper on the need for literary morphology. There has been a fair amount of work on the cultural transmission of myths and folktales; that work is based on standard descriptions of individual texts. Without careful descriptive work, there is nothing to think about.

I could go on and on, but as I said, this is only a blog post, not a book. 

I take it as given that culture, taken as a whole, is a biological adaptation – though, of course, there is always a chance that, either by action (e.g. nuclear war) or inaction (e.g. global warming), humankind may destroy itself. What we need from a theory of cultural evolution is a set of conceptual tools that will help us better understand how culture works. The biology-centric conceptual orientation of this article, gene-culture coevolution or dual-inheritance theory, is of limited value. It cannot be the basis of a robust account of cultural evolution. It is, in effect, a geocentric proposal (biology) about a heliocentric world (culture). The theory of cultural evolution needs a Copernican+Keplerian revolution, not a complex arrangement of deferents and epicycles appended to the mechanisms of biological evolution.

* * * * *

Here's an exercise for the reader: Take this article and analyze it in terms of the four questions I pose in my quick guide to cultural evolution (for the humanist):
  1. What is the target/beneficiary of the evolutionary dynamic?
  2. Replication (copying) or (re)construction?
  3. Is there a meaningful distinction comparable to the biological distinction between phenotype and genotype?
  4. Are the genetic elements of culture inside people’s heads or are they in the external environment?
You might want to start with this table (from page 6) in the second tweet above.

Friday Fotos: Electric Empire [shaky-cam]





Thursday, September 24, 2020

TikTok and the problems of social media in a global information economy

John Herrman, What Happens When Americans Join the Global Internet, NYTimes, Sept. 22, 2020.
In contrast to mounting political criticisms of, say, Facebook and Twitter, platforms where the president is extremely active and invested, the government’s public case against TikTok has been largely speculative, citing theoretical dangers and hardly trying to appeal to the app’s users directly. It’s no surprise that the vague message from Ms. Pappas gave some users comfort, given how little this process has addressed them.

TikTok’s users are experiencing for the first time something long familiar to much of the world outside the United States: a flourishing online social space existentially threatened by diplomatic and political fights between states and corporations, with little input from those affected by their decisions. Likewise for WeChat, the Chinese messaging app used by millions in the U.S. to keep in touch with friends, families and colleagues abroad, which was set to be banned on Sunday until a federal court intervened.

To the limited extent that the plights of TikTok and WeChat have familiar precedents, they’re mostly overseas: China’s broad bans on foreign platforms including Facebook and Google; Russia’s “data localization” laws, which require foreign firms to store certain types of data locally; the occasional national shutdowns of Twitter, Facebook or YouTube during periods of political unrest in many countries around the world, including Egypt, Vietnam, Bangladesh, Sri Lanka, Turkey and others; or the Indian ban on TikTok and other Chinese internet services earlier this year. [...]

Worrying about a foreign government’s influence or access to data — or about whether imported competitors might hurt domestic firms — has been a burden for practically every country in the world except the United States, where many of the global internet’s most popular services were started. For a large majority of their users, Facebook, Twitter and Google are foreign firms.

The prodigal tech-bro in search of redemption

Maria Farrell, The Prodigal Techbro, The Conversationalist, March 5, 2020:

This could almost be a response to The Social Dilemma, and maybe it is, though it doesn't mention it.
The Prodigal Tech Bro is a similar story, about tech executives who experience a sort of religious awakening. They suddenly see their former employers as toxic, and reinvent themselves as experts on taming the tech giants. They were lost and are now found. They are warmly welcomed home to the center of our discourse with invitations to write opeds for major newspapers, for think tank funding, book deals and TED talks. These guys – and yes, they are all guys – are generally thoughtful and well-meaning, and I wish them well. But I question why they seize so much attention and are awarded scarce resources, and why they’re given not just a second chance, but also the mantle of moral and expert authority.

I’m glad that Roger McNamee, the early Facebook investor, has testified to the U.S. Congress about Facebook’s wildly self-interested near-silence about its amplification of Russian disinformation during the 2016 presidential election. I’m thrilled that Google’s ex-‘design ethicist’, Tristan Harris, “the closest thing Silicon Valley has to a conscience” (startlingly faint praise), now runs a Center for Humane Technology, exposing the mind-hacking tricks of his former employer. I even spoke – critically but, I hope, warmly – at the book launch of James Williams, another ex-Googler turned attention evangelist, who “co-founded the movement” of awareness of designed-in addiction. I wish all these guys well. I also wish that the many, exhausted activists who didn’t take money from Google or Facebook could have even a quarter of the attention, status and authority the Prodigal Techbro assumes is his birth-right.

Today, when the tide of public opinion on Big Tech is finally turning, the brothers (and sisters) who worked hard in the field all those years aren’t even invited to the party. No fattened calf for you, my all but unemployable tech activist. The moral hazard is clear; why would anyone do the right thing from the beginning when they can take the money, have their fun, and then, when the wind changes, convert their status and relative wealth into special pleading and a whole new career?
The rewards of privilege:
The only thing more fungible than cold, hard cash is privilege. The prodigal tech bro doesn’t so much take an off-ramp from the relatively high status and well-paid job he left when the scales fell from his eyes, as zoom up an on-ramp into a new sector that accepts the reputational currency he has accumulated. He’s not joining the resistance. He’s launching a new kind of start-up using his industry contacts for seed-funding in return for some reputation-laundering.

So what? Sure, it’s a little galling, but where’s the harm?

Allowing people who share responsibility for our tech dystopia to keep control of the narrative means we never get to the bottom of how and why we got here, and we artificially narrow the possibilities for where we go next. And centering people who were insiders before and claim to be leading the outsiders now doesn’t help the overall case for tech accountability. It just reinforces the industry’s toxic dynamic that some people are worth more than others, that power is its own justification.

The prodigal tech bro doesn’t want structural change. He is reassurance, not revolution. He’s invested in the status quo, if we can only restore the founders’ purity of intent. Sure, we got some things wrong, he says, but that’s because we were over-optimistic / moved too fast / have a growth mindset. Just put the engineers back in charge / refocus on the original mission / get marketing out of the c-suite. Government “needs to step up”, but just enough to level the playing field / tweak the incentives. Because the prodigal techbro is a moderate, centrist, regular guy. Dammit, he’s a Democrat. Those others who said years ago what he’s telling you right now? They’re troublemakers, disgruntled outsiders obsessed with scandal and grievance. He gets why you ignored them. Hey, he did, too. He knows you want to fix this stuff. But it’s complicated. It needs nuance. He knows you’ll listen to him. Dude, he’s just like you…
For some comments on The Social Dilemma, see this post: Facebook or freedom, Part 3: The game goes on [Media Notes, special edition #1].

Can you spot the Empire State Building? (It's small, but easy to find.)


Wednesday, September 23, 2020

The toothlessness of wokism

Arnold Kling, The movie Stay Woke, Sept. 23, 2020.
Our synagogue had a virtual showing of the movie Stay Woke, a documentary made in 2016 about the Black Lives Matter movement. Many in our congregation are much more fervent in their leftism than in their Judaism, and everyone else had only positive things to say afterward about the film and about Black Lives Matter.

The documentary depicted BLM in a very positive light. Those who spoke for BLM were very energized by the movement. Critics were depicted as unfair and embedded in Fox News.

In the discussion that we had afterward, I pointed out that the movie did not include even one specific proposal or policy change. I did not mention Martin Gurri, but I was thinking about him.

Other congregants pointed out how sad they were that nothing seemed to have changed between 2016 and 2020. One person typed into the Zoom chat that things had gotten worse.

No one else saw a connection between the absence of policy ideas in the movie and the absence of any change. But it strikes me that if you aren’t behind a program, that makes it unlikely that you will effect change.

Continuing to channel Gurri, I would say that social media is not a tool suited to creating a movement. Instead, it is suited to instigating a mob. A movement requires thought and long-term planning. A mob just requires stimulating rage and the narcissistic satisfaction that comes these days from appearing in viral videos and having one’s tweets widely circulated.

America spends more on fossil fuel subsidies than on defense

Tim Dickinson, Study: U.S. Fossil Fuel Subsidies Exceed Pentagon Spending, Rolling Stone, May 8, 2019.
The United States has spent more subsidizing fossil fuels in recent years than it has on defense spending, according to a new report from the International Monetary Fund.

The IMF found that direct and indirect subsidies for coal, oil and gas in the U.S. reached $649 billion in 2015. Pentagon spending that same year was $599 billion.

The study defines “subsidy” very broadly, as many economists do. It accounts for the “differences between actual consumer fuel prices and how much consumers would pay if prices fully reflected supply costs plus the taxes needed to reflect environmental costs” and other damage, including premature deaths from air pollution.

These subsidies are largely invisible to the public, and don’t appear in national budgets. But according to the IMF, the world spent $4.7 trillion — or 6.3 percent of global GDP — in 2015 to subsidize fossil fuel use, a figure it estimated rose to $5.2 trillion in 2017. China, which is heavily reliant on coal and has major air-pollution problems, was the largest subsidizer by far, at $1.4 trillion in 2015. But the U.S. ranked second in the world.

The human, environmental and economic toll of these subsidies is shocking to the conscience. The authors found that if fossil fuels had been fairly priced in 2015, global carbon emissions would have been slashed by 28 percent. Deaths from fossil fuel-linked air pollution would have dropped by nearly half.

Gilded hogwash on corporate responsibility


Two lampshades and a mirror, a study in variation




Tuesday, September 22, 2020

American deaths for the first 30 weeks of the year, 2015-2020

What are the chances that Facebook will pull out of Europe?

Facebook has warned that it may pull out of Europe if the Irish data protection commissioner enforces a ban on sharing data with the US, after a landmark ruling by the European court of justice found in July that there were insufficient safeguards against snooping by US intelligence agencies.

In a court filing in Dublin, Facebook’s associate general counsel wrote that enforcing the ban would leave the company unable to operate.

“In the event that [Facebook] were subject to a complete suspension of the transfer of users’ data to the US,” Yvonne Cunnane argued, “it is not clear … how, in those circumstances, it could continue to provide the Facebook and Instagram services in the EU.”

Facebook denied the filing was a threat, arguing in a statement that it was a simple reflection of reality. “Facebook is not threatening to withdraw from Europe,” a spokesperson said.

“Legal documents filed with the Irish high court set out the simple reality that Facebook, and many other businesses, organisations and services, rely on data transfers between the EU and the US in order to operate their services. A lack of safe, secure and legal international data transfers would damage the economy and hamper the growth of data-driven businesses in the EU, just as we seek a recovery from Covid-19.”

The filing is the latest volley in a legal battle that has lasted almost a decade. In 2011, Max Schrems, an Austrian lawyer, began filing privacy complaints with the Irish data protection commissioner, which regulates Facebook in the EU, about the social network’s practices.

Those complaints gathered momentum two years later, when the Guardian revealed the NSA’s Prism program, a vast surveillance operation involving direct access to the systems of Google, Facebook, Apple and other US internet companies. Schrems filed a further privacy complaint, which was eventually referred to the European court of justice.
Is social media making a hash of our current institutions?

High on a low hill in Hoboken on the banks of the Hudson

Why, in the course of an intellectual life, can it take years to see the obvious?

Give me a place where I can stand—and I shall move the world.

– Archimedes


I’m thinking of my own intellectual life, of course. And I have two examples in mind, 1) my realization that literary form was at the center of my interest in literature, and 2) my recent realization about the impossibility of direct brain-to-brain thought transmission.

Literary form

My first major piece of intellectual work was my 1972 MA thesis on “Kubla Khan.” That thesis focused on the poem’s form, and in a sense it set the direction for my career, leading me to focus on computation and the cognitive sciences. In graduate school at SUNY Buffalo I wrote papers concerned with form, on Sir Gawain and the Green Knight, Much Ado About Nothing, Othello, “The Cat and the Moon”, and Wuthering Heights. I revised the Sir Gawain paper and published it at the time [1], and some years later material from the two Shakespeare papers became the basis of a publication [2]. I also published a paper about narrative form that was based, in part, on my 1978 dissertation [3]. So I was examining form from the beginning, and yet I didn’t realize that it was the center of my focus.

It wasn’t until the mid 1990s that I realized it was form I had been looking at all along [4]. And that realization came about indirectly. In cruising the web I discovered that the Stanford Humanities Review had devoted an issue to cognitive science and literature. Herbert Simon had written an article setting forth his views [5], and 33 critics had responded. Obviously there was now a group of literary scholars interested in cognitive science. I read their stuff, went to a couple of conferences some of these people attended (Haj Ross’s Languaging conferences at North Texas), and realized that these people were not at all interested in the things I was.

First and foremost, their version of cognitive science did not include computation, which was central to my interest. It was in the course of thinking that through that I realized they weren’t interested in form either, but I was. And now that I thought about it, form was central to my work in literature, wasn’t it? That puts us into the late 1990s, when I took a detour from literature to write a book on music. So it wasn’t until the early 2000s that I was able to focus on form, when I returned to “Kubla Khan” and then “This Lime-Tree Bower My Prison” with articles in PsyArt: An Online Journal for the Psychological Study of the Arts, culminating in a 2006 article on literary morphology, where I put form front and center [6].

If form was there from the beginning, why did it take me so long to see it?

Brain-to-brain thought transmission

The second case is brain-to-brain thought transmission. I first took the matter up in January 2002, when I posted a thought experiment to Brainstorms, an online community established by Howard Rheingold. In that experiment I imagined that we had the technology to link brains without harming brain tissue, and then asked: how do we determine which neurons to link together in the respective brains? Since every brain is wired uniquely, there is no inherent mapping between the neurons of two brains. This is unlike the situation with gross body parts, where one person’s right thumb corresponds to another person’s right thumb, and so forth.
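
To make the mapping problem concrete, here is a toy sketch; the sizes, the wiring, and the matching key are all invented for illustration. Wire up two small random “brains” of the same size and ask which neuron in one corresponds to which in the other, matching on the crudest anatomical key available, connection counts:

```python
import random

random.seed(0)
N = 1000

def random_brain(n, synapses_per_neuron=50):
    # Each neuron projects to a random set of target neurons.
    return [random.sample(range(n), synapses_per_neuron) for _ in range(n)]

# Two toy "brains": same neuron count, independently random wiring.
a, b = random_brain(N), random_brain(N)

def in_degrees(brain):
    # How many inputs does each neuron receive?
    degs = [0] * len(brain)
    for targets in brain:
        for t in targets:
            degs[t] += 1
    return degs

deg_a, deg_b = in_degrees(a), in_degrees(b)

# For each neuron in A, count how many neurons in B look identical by this key.
candidates = [sum(1 for d in deg_b if d == da) for da in deg_a]
print(f"average equally good matches per neuron: {sum(candidates) / N:.1f}")
# Typically dozens of indistinguishable candidates per neuron, and nothing
# intrinsic to the wiring breaks the ties. Unlike thumbs, neurons come with
# no shared coordinate system across brains.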

It wasn’t until a decade later, in 2013, when I put the thought experiment online [7], that I imagined a much simpler and more direct counterargument: How does a brain tell where a given impulse comes from? Brains have no mechanisms for distinguishing between endogenous and exogenous impulses.
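
Put in deliberately toy computational terms, with everything here invented for illustration: model a spike as the only message a receiving neuron ever gets. Nothing in that message says where it came from:

```python
from dataclasses import dataclass

# A spike, as the receiving neuron "sees" it: a depolarization arriving at a
# synapse at some time. That is the entire message. There is no sender field,
# no flag marking a spike as native or implanted from outside.
@dataclass(frozen=True)
class Spike:
    synapse_id: int   # which synapse was activated
    time_ms: float    # when it arrived

native = Spike(synapse_id=42, time_ms=107.3)     # from an upstream neuron
implanted = Spike(synapse_id=42, time_ms=107.3)  # injected from another brain

# From the neuron's point of view, the two are literally identical.
print(native == implanted)  # True
```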

Why did it take me a decade to move from the complex argument to the simple one?

What’s going on?

I don’t really know. But it must be a function of how one’s mind is set up when one first approaches a problem. In the case of literary texts, literary criticism is about meaning; that’s what you’re taught in school. More than that, when we read any text for whatever purpose, we’re reading it for meaning. That’s the orientation. In literary study one learns about form, of course, but one doesn’t focus on it in an analytic or descriptive way.

So, to focus on form I had to break away from my prior training, but also from my natural inclination toward texts. And, come to think of it, this is not simply a matter of will, but of method as well. By the time I made the break I had several examples of close formal analysis from my own work. Those gave me a standpoint from which to make the break.

Is it the same with brain-to-brain thought transmission? That’s not a topic that normally comes up in the study of neuroscience. No one is examining what happens when you link two brains together, so we don’t have any specific framework at all. What do we do? We apply a default framework of some kind. Where do we get that framework? Most likely from our experience with electrical and electronic circuits, especially in computers.

And that’s not a useful framework at all. In fact it gets in the way, because brain circuitry is not at all like electronic circuitry. Anyone who studies neuroscience knows this, of course, but they may not have linked that knowledge to this kind of problem. In my case I had my conversations with Walter Freeman about the uniqueness of brains, and they focused my attention on neurons and on comparison between brains at the single-neuron level. That was enough to bring me to the realization that we have no coherent and consistent way to link two brains together on the scale of tens of millions of neurons.

But that wasn’t enough to take me the whole way to the realization that there is no way for brains to identify foreign spikes. Yet I had done something like that in my review of Aunger’s The Electric Meme, which I also wrote in 2002 [8]. So why did it take me a decade to connect the two? What was my new standpoint?

References

[1] William Benzon, Sir Gawain and the Green Knight and the Semiotics of Ontology, Semiotica, 3/4, 1977, 267-293, https://www.academia.edu/238607/Sir_Gawain_and_the_Green_Knight_and_the_Semiotics_of_Ontology/.

[2] William Benzon, At the Edge of the Modern, or Why is Prospero Shakespeare's Greatest Creation? Journal of Social and Evolutionary Systems, 21 (3): 259-279, 1998, https://www.academia.edu/235334/At_the_Edge_of_the_Modern_or_Why_is_Prospero_Shakespeares_Greatest_Creation.

[3] William Benzon, The Evolution of Narrative and the Self, Journal of Social and Evolutionary Systems, 16(2): 129-155, 1993, https://www.academia.edu/235114/The_Evolution_of_Narrative_and_the_Self.

[4] See these two blog posts, William Benzon, How I discovered the structure of “Kubla Khan” & came to realize the importance of description, blog post, New Savanna, October 9, 2017, http://new-savanna.blogspot.com/2017/10/how-i-discovered-structure-of-kubla.html; Things change, but sometimes they don’t: On the difference between learning about and living through [revising your priors and the way of the world], blog post, New Savanna, July 26, 2020, http://new-savanna.blogspot.com/2020/07/things-change-but-sometimes-they-dont.html.

[5] Herbert Simon, Literary Criticism: A Cognitive Approach, Stanford Humanities Review, 4(1): 1-26, 1994.

[6] William Benzon, Literary Morphology: Nine Propositions in a Naturalist Theory of Form, PsyArt: An Online Journal for the Psychological Study of the Arts, August 2006, Article 060608, https://www.academia.edu/235110/Literary_Morphology_Nine_Propositions_in_a_Naturalist_Theory_of_Form.

[7] William Benzon, Why we'll never be able to build technology for Direct Brain-to-Brain Communication, blog post, New Savanna, May 2013, http://new-savanna.blogspot.com/2013/05/why-well-never-be-able-to-build.html.

[8] William L. Benzon, Colorless Green Homunculi, Human Nature Review 2 (2002) 454-462, https://www.academia.edu/41181169/Colorless_Green_Homunculi.