Saturday, December 14, 2019

Two views of a signal tower [the overhanging tree is a bonus]



Is the world falling apart, or at least the world of large nation states with a diverse and disgruntled citizenry?

Roger Cohen, Boris Johnson and the Coming Trump Victory in 2020, NYTimes Dec 13, 2019:
“Brexit and Trump were inextricably linked in 2016, and they are inextricably linked today,” Steve Bannon told me. “Johnson foreshadows a big Trump win. Working-class people are tired of their ‘betters’ in New York, London, Brussels telling them how to live and what to do. Corbyn the socialist program, not Corbyn the man, got crushed. If Democrats don’t take the lesson, Trump is headed for a Reagan-like ’84 victory.”

I still think Trump can be beaten, but not from way out left and not without recognition that, as Hugo Dixon, a leader of the now defeated fight for a second British referendum, put it: “There is a crisis of liberalism because we have not found a way to connect to the lives of people in the small towns of the postindustrial wasteland whose traditional culture has been torn away.”

Johnson, even with his 80-seat majority, has problems. His victory reconciled the irreconcilable. His moneyed coterie wants to turn Britain into free-market Singapore on the Thames. His new working-class constituency wants rule-Britannia greatness combined with state-funded support. That’s a delicate balancing act. The breakup of Britain has become more likely. The strong Scottish National Party showing portends a possible second Scottish referendum on independence.

This time I would bet on the Scots bidding farewell to little England. And then there’s the small matter of what Brexit actually means. Johnson will need all his luck with that.

Do trained murderers working for a Mexican drug cartel have a code of conduct?

From Azam Ahmed and Paulina Villegas, He Was One of Mexico’s Deadliest Assassins. Then He Turned on His Cartel. NYTimes, Dec 14, 2019:
Murder was rarely for sport, the sicario said. He studied his victims at length, investigating the complaints against them. Once confirmed, he warned them to stop, mostly to keep them from drawing too much attention from the authorities. If they didn’t, he planned the killings meticulously, carrying them out only with approval from above.

“For me to kill someone, I had to have permission,” he explained. “Why do I want to kill that person? Not because I just don’t like them. That’s not how it works.”

He followed a code, he said. He didn’t recruit children, and wouldn’t harm women or working people, if he could avoid it. But the workings of organized crime were rarely orderly. He did kill women and innocent civilians. For all the talk of honoring a code, it was often just that: talk. Business always came first.

The New York Times confirmed many of his homicides with the authorities and attempted to speak with the victims’ families in several cases. All refused. Having lost their daughters, sons and fathers to the cartel, they were fearful of reprisals.

Of all the people the sicario killed in his five-year run, only a few haunted him, he said. One in particular.

It was during a routine operation, he recalled, when his bosses sent him to eliminate a group of local kidnappers. After he arrived, he said, he found a college student with them. The sicario said he knew instantly the student was innocent: the look of terror on his face, his body language, even his clothes. They were all wrong.

Following protocol, the sicario tied everyone up and called his boss. He wanted to let the young man go. He was unaffiliated. There was no need to kill him. But the boss said no. Any witness was a liability.

As the boy begged for his life, the sicario said he looked away and told him he was sorry before slitting his throat.

“That student still haunts me,” he said, weeping. “I see his face, that kid begging me for his life. I will never forget his eyes. He was the only one who ever looked at me that way.”
A sinner confesses:
For five years, the sicario lived as two different people: the son who dropped off groceries for his mother and had a baby of his own with his girlfriend; and the “monster,” as he called himself, who killed for a few hundred dollars a week.

After his arrest, the wall between them began to crack. He suffered what seemed like psychotic episodes, he said, sleepless nights of strange voices and shadows collapsing on him. He knew he deserved no pity, that he alone was to blame. He took some comfort in that.

“I was at the point of going crazy,” he said. “I would spend two or three days crying.”

Eventually, a pastor — an uneducated, reformed convict himself — came to see him. At first, the sicario worried the man was a spy sent by his enemies. Eventually, he began to speak to him and, before long, could hardly stop.

The pastor was caught off guard by the torrent of confessions as the sicario gave himself over to the Bible with a fervor he once held for violence, a conversion so common it is almost a cliché in the world of gangs and cartels.

“That other person is dead,” the sicario said as if, with repetition, it might become true.

He found new purpose in confinement, helping solve cold cases, testifying against cartel players and paving the way for some two dozen convictions. The police said they saw a real transformation in him, though they had their own reasons to believe it, too.

A model of the visual cortex

Friday, December 13, 2019

Anand Giridharadas on the new feudalism



From YouTube:
Are Mark Zuckerberg and Jeff Bezos the new feudal elite?

Anand Giridharadas talks to INET President Rob Johnson about how the titans of Silicon Valley use “philanthropy” to control more of our lives.
See my various posts tagged as virtual feudalism.

Two views of Christ Hospital [but you have to look carefully, because it's small in both photos]


Is AI a feature or a platform? [machine learning, artificial neural nets]


In a recent conversation with Kevin Kelly (link in tweet) Marc Andreessen remarked that his VC firm would get pitches where the founders would list, say, five features of their product and then tack on AI as a sixth. That’s AI as a feature.

At Andreessen Horowitz they think that the future of AI is as a platform. I think that may be right. Forget about artificial general intelligence (AGI), superintelligence, upload, and all that, that’s just the fever dreams of tech-bro monotheism. Think of AI as a learning technology that functions entirely within an artificial world, one bounded by computing technology. Its task is to learn and extend that world.

Learning, that’s how humans get about in the world. We learn as we go. Computing systems need to do that. But they need a bounded world in which they CAN learn effectively. Chess and Go are like that. Natural language is not. Deep learning can ‘learn’ the structure hidden in a mound of texts, but that’s not the structure of the world. That is, at best, the structure of language about the world. And that’s a far cry from being the world itself. Chess, on the other hand, is completely bounded by the rules of the game, and those rules are fully available to the computer. Play enough games, millions of them, and the system’s got a good grasp of that world.
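To see how little machinery it takes for a system to “get” a fully bounded world, here’s a toy sketch in Python that learns a much smaller game than chess or Go purely from self-play. The game (single-pile Nim), the learning rule, and all the constants are my own illustration for this post, not anyone’s production system:

```python
# Toy illustration of learning a bounded, rule-defined world by self-play.
# Single-pile Nim: take 1-3 counters per turn; whoever takes the last one wins.
import random
from collections import defaultdict

ACTIONS = (1, 2, 3)
Q = defaultdict(float)              # (pile, action) -> estimated value for the mover
ALPHA, EPSILON, GAMES = 0.2, 0.1, 50_000

def legal(pile):
    return [a for a in ACTIONS if a <= pile]

def choose(pile):
    # Mostly greedy, occasionally exploratory.
    if random.random() < EPSILON:
        return random.choice(legal(pile))
    return max(legal(pile), key=lambda a: Q[(pile, a)])

for _ in range(GAMES):
    pile, history = random.randint(1, 20), []
    while pile > 0:
        a = choose(pile)
        history.append((pile, a))
        pile -= a
    # The player who made the last move won; credit moves backwards,
    # alternating +1 (winner's moves) and -1 (loser's moves).
    reward = 1.0
    for pile_before, a in reversed(history):
        Q[(pile_before, a)] += ALPHA * (reward - Q[(pile_before, a)])
        reward = -reward

# After enough self-play the learned moves approximate the known solution:
# take pile % 4 counters, leaving your opponent a multiple of 4 (when the
# pile is already a multiple of 4, every move loses against perfect play).
for pile in range(1, 13):
    print(pile, max(legal(pile), key=lambda a: Q[(pile, a)]))
```

Nothing about this generalizes to natural language, of course; the point is just that the whole world the learner has to master is given by the rules, and the rules are fully available to it.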

So how does that generalize to AI as a platform? I suppose the idea is that every application is written within an AI learning engine which then proceeds to learn about and extend the application domains as humans use the application to solve problems.

I had something like that in mind some years ago when I dreamed up a natural language interface for PowerPoint. This was back in 2004: 1) before machine learning had blossomed as it has in the last decade, and 2) after I’d finished my book on music, Beethoven’s Anvil, and had conceived of something I call attractor nets [1], in which I used Sydney Lamb’s network notation to serve, in effect, as the high-end control system for the nervous system (conceived as a complex dynamical system after the work of Walter Freeman). Here’s the abstract I wrote for a short paper setting forth the idea [2]:
This document sketches a natural language interface for end user software, such as PowerPoint. Such programs are basically worlds that exist entirely within a computer. Thus the interface is dealing with a world constructed with a finite number of primitive elements. You hand-code a basic language capability into the system, then give it the ability to ‘learn’ from its interactions with the user, and you have your basic PPA (PowerPoint Assistant).
Yes, I know, that reads like PPA is an add-on for good old Powerpoint, so AI as feature. But notice that I talk of programs as “worlds that exist entirely within a computer” and of the system learning “from its interactions with the user.” That’s moving into platform territory.

I then went on to imagine a community of users working with PPA:
As it happens, Jasmine [my imaginary user] is one of five graphic artists in the marketing communications department of a pharmaceutical company. All of them use PowerPoint, and each has her own PPA. While each artist has her own style and working methods, they work on similar projects, and they often work together on projects. The work they do must conform to overall company standards.

It would thus be useful to have ways of maintaining these individual PPAs as a “community” of computing “agents” sharing a common “culture.” While each PPA must be maximally responsive and attuned to its primary user, it needs to have access to community standards. Further, routines and practices developed by one user might well be useful to other users. Thus the PPAs need ways of “sharing” information with one another and for presenting their users with useful tips and tools.
Let’s generalize further:
The PowerPoint Assistant is only an illustrative example of what will be possible with the new technology. One way to generalize from this example is simply to think of creating such assistants for each of the programs in Microsoft’s Office suite. From that we can then generalize to the full range of end-user application software. Each program is its own universe and each of these universes can be supplied with an easily extensible natural language assistant. Moving in a different direction, one can generalize from application software to operating systems and net browsers.
That’s inching awfully close to AI-as-platform. Just do a gestalt switch and make the extensible natural language assistant your base system. Note that this system is not in the business of learning language in general, but only language where the meaning of words is closely tied to the application domain of the system itself. That’s something a deep learning system could learn.
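To make that concrete, here’s a toy sketch in Python of the hand-coded-seed-plus-learning loop the abstract gestures at. The command names and the learning rule are invented for this post; they’re not the design in the working paper, and a real system would replace the lookup table with a learned model:

```python
# A minimal sketch of a domain-bounded natural language assistant.
# The command set and the learning rule are hypothetical illustrations,
# not the actual PPA design.

class SlideAssistant:
    def __init__(self):
        # Hand-coded seed vocabulary: phrase -> action name.
        self.phrase_to_action = {
            "new slide": "add_slide",
            "make the title bigger": "increase_title_font",
            "use the company template": "apply_template",
        }

    def interpret(self, utterance):
        """Map an utterance to an action within the bounded slide-deck world."""
        return self.phrase_to_action.get(utterance.lower().strip())

    def learn(self, utterance, action):
        """Extend the assistant's language from a user interaction:
        once the user clarifies what they meant, remember the phrasing."""
        self.phrase_to_action[utterance.lower().strip()] = action

    def handle(self, utterance, clarify):
        action = self.interpret(utterance)
        if action is None:
            # Unknown phrasing: ask the user (simulated by the `clarify`
            # callback) which known action they meant, then learn it.
            action = clarify(utterance, sorted(set(self.phrase_to_action.values())))
            self.learn(utterance, action)
        return action


if __name__ == "__main__":
    ppa = SlideAssistant()
    # First encounter with a new phrasing: the user clarifies, the PPA learns.
    picked = ppa.handle("gimme a fresh slide", clarify=lambda u, actions: "add_slide")
    print(picked)                                # add_slide
    print(ppa.interpret("gimme a fresh slide"))  # add_slide -- learned from the interaction
```

The point of the toy is the boundedness: every phrase the assistant ever learns resolves to a small, fixed repertoire of actions in the slide-deck world, which is what makes the learning tractable.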

[1] See these informal working papers. You need to read the Notes paper before reading the Diagrams paper.

William Benzon, Attractor Nets, Series I: Notes Toward a New Theory of Mind, Logic and Dynamics in Relational Networks, Working Paper, 2011, 52 pp., Academia, https://www.academia.edu/9012847/Attractor_Nets_Series_I_Notes_Toward_a_New_Theory_of_Mind_Logic_and_Dynamics_in_Relational_Networks.

William Benzon, Attractor Nets 2011: Diagrams for a New Theory of Mind, Working Paper, 55 pp., Academia, https://www.academia.edu/9012810/Attractor_Nets_2011_Diagrams_for_a_New_Theory_of_Mind.

[2] William Benzon, PowerPoint Assistant: Augmenting End-User Software through Natural Language Interaction, Working Paper, July 2015, 15 pp., https://www.academia.edu/14329022/PowerPoint_Assistant_Augmenting_End-User_Software_through_Natural_Language_Interaction.

Palgrave series on cultural evolution

Cultural evolution describes how socially learned ideas, rules, and skills are transmitted and change over time, giving rise to diverse forms of social organisation, belief systems, languages, technologies and artistic traditions. This research article collection showcases cutting-edge research into cultural evolution, bringing together contributions — both quantitative and qualitative — that reflect the interdisciplinary scope of this rapidly growing field, as well as the diversity of topics and approaches within it.

Interested in contributing a paper for this collection? Read our call for papers.

Friday Fotos: Squatter's Rights [The Hallucinated City]





Now there's clickbait for you


That's one of those clickbait sites where they post a provocative title on Twitter so you come to the site and click and click and click until you find what you're looking for, which may not even be in the cited article. In this case, you're looking to see if your favorite pizza joint is the one listed. If you live in Alabama, no problem, because it's the first state listed. But if you live in Wyoming, big problem. You've got to click all the way through to find out whether Chico's Pizza [I made the name up] in Laramie made the cut.

Why does deep learning work?


As I understand this piece from MIT's Technology Review, the answer is simple: because that's how the universe is:
Now Lin and Tegmark say they’ve worked out why. The answer is that the universe is governed by a tiny subset of all possible functions. In other words, when the laws of physics are written down mathematically, they can all be described by functions that have a remarkable set of simple properties.

So deep neural networks don’t have to approximate any possible mathematical function, only a tiny subset of them.

To put this in perspective, consider the order of a polynomial function, which is the size of its highest exponent. So a quadratic equation like y = x^2 has order 2, the equation y = x^24 has order 24, and so on.

Obviously, the number of orders is infinite and yet only a tiny subset of polynomials appear in the laws of physics. “For reasons that are still not fully understood, our universe can be accurately described by polynomial Hamiltonians of low order,” say Lin and Tegmark. Typically, the polynomials that describe laws of physics have orders ranging from 2 to 4.
And so:
“We have shown that the success of deep and cheap learning depends not only on mathematics but also on physics, which favors certain classes of exceptionally simple probability distributions that deep learning is uniquely suited to model,” conclude Lin and Tegmark.

That’s interesting and important work with significant implications. Artificial neural networks are famously based on biological ones. So not only do Lin and Tegmark’s ideas explain why deep learning machines work so well, they also explain why human brains can make sense of the universe. Evolution has somehow settled on a brain structure that is ideally suited to teasing apart the complexity of the universe.

This work opens the way for significant progress in artificial intelligence. Now that we finally understand why deep neural networks work so well, mathematicians can get to work exploring the specific mathematical properties that allow them to perform so well. “Strengthening the analytic understanding of deep learning may suggest ways of improving it,” say Lin and Tegmark.
Color me just a bit sceptical.
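Still, for readers wondering what “polynomial Hamiltonians of low order” actually looks like, here’s a standard physics illustration (my gloss, not an example quoted from the paper):

```latex
% Order 2: the harmonic oscillator Hamiltonian is quadratic in x and p,
H_{\mathrm{osc}}(x, p) = \frac{p^{2}}{2m} + \frac{1}{2} k x^{2}.
% Order 4: adding a quartic (anharmonic) interaction term only raises the order to 4,
H(x, p) = \frac{p^{2}}{2m} + \frac{1}{2} k x^{2} + \lambda x^{4}.
% A generic function of x would need terms of arbitrarily high order; on Lin and
% Tegmark's account, that is the class of functions deep nets never actually face.
```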

The research paper: Henry W. Lin, Max Tegmark, David Rolnick, Why does deep and cheap learning work so well?, arXiv:1608.08225v4 [cond-mat.dis-nn]:
We show how the success of deep learning could depend not only on mathematics but also on physics: although well-known mathematical theorems guarantee that neural networks can approximate arbitrary functions well, the class of functions of practical interest can frequently be approximated through "cheap learning" with exponentially fewer parameters than generic ones. We explore how properties frequently encountered in physics such as symmetry, locality, compositionality, and polynomial log-probability translate into exceptionally simple neural networks. We further argue that when the statistical process generating the data is of a certain hierarchical form prevalent in physics and machine-learning, a deep neural network can be more efficient than a shallow one. We formalize these claims using information theory and discuss the relation to the renormalization group. We prove various "no-flattening theorems" showing when efficient linear deep networks cannot be accurately approximated by shallow ones without efficiency loss, for example, we show that n variables cannot be multiplied using fewer than 2^n neurons in a single hidden layer.

Is governance breaking down world-wide?

Martin "Fifth Wave" Gurri is at it again.* His topic this time: 2019: The year revolt went global. So:
Politics in the digital age revolve around information. A safe assumption when thinking about this environment is that everyone is aware of everything, globally. This sets up powerful demonstration effects: protesters in one nation can learn from those in another. One reason for the spread of anti-establishment revolts may well be their improved capacity to evade suppression.
And:
The question, for me, is whether these repeated crises of authority at the national level represent a systemic failure. After all, the disorders of 2019 are the latest installment in a familiar tale. Governments long ago yielded control of the information sphere to the public, and the political landscape, ever since, has been in a state of constant perturbation. From the euphoria and subsequent horrors of the Arab Spring in 2011, through the improbable electoral victories of Brexit and Donald Trump in 2016, to last year’s violence by the Yellow Vests of France, we ought to have learned, by this late hour, to anticipate instability and uncertainty.
His optimistic hypothesis: "revolt has, quite literally, gone viral." His pessimistic hypothesis: widespread systemic failure:
...the loss of control over information must be fatal to modern government as a system: the universal spread of revolt can be explained as a failure cascade, driving that system inexorably toward disorganization and reconfiguration. Failure cascades can be thought of as negative virality. A local breakdown leads to the progressive loss of higher functions, until the system falls apart. This, in brief, is why airplanes crash and bridges collapse.

For systems that are dynamic and complex, like human societies, outcomes are a lot more mysterious. A failure cascade of revolts (the hypothesis) will knock the institutions of modern government ever further from equilibrium, until the entire structure topples into what Alicia Juarrero calls “phase change”: a “qualitative reconfiguration of the constraints” that gave the failed system its peculiar character. In plain language, the old regime is overthrown – but at this stage randomness takes charge, and what emerges on the far side is, in principle, impossible to predict. I can imagine a twenty-first century Congress of Vienna of the elites, in which Chinese methods of information control are adopted globally, and harsh punishment is meted out, for the best of reasons, to those who speak out of turn. But I can also envision a savage and chaotic Time of Troubles, caused by a public whose expectations have grown impossibly utopian. The way Juarrero tells it, “[T]here is no guarantee that any complex system will reorganize.”

Not every outcome is condemned to drown in pessimistic tears: the process, recall, is unpredictable. A structural reform that brings the public into closer alignment with the elites is perfectly possible. But I find it hard to see how that can be accomplished, so long as the public clings to the mutism of the consumer and refuses to articulate its demands like a true political actor. One rarely gets what one hasn’t asked for. Reform depends on the public’s willingness to abandon negation for practical politics.
Is this indeed the manifestation of a massive world-wide phase change of the sort David Hays and I have called rank shift? Is this a political manifestation of the singularity we are indeed living through?

*See earlier posts:

Thursday, December 12, 2019

On the beach, again [Liberty State Park]

Stations of [my] Mind at 3QD

My latest at 3 Quarks Daily: Stations of the mind: Om to Eureka and beyond.

Back in the 1960s, I suppose, someone coined the term “altered states of consciousness” (ASC) to designate what happens when one takes drugs such as marijuana, LSD, and so forth, but also during meditation and the like. These were under fairly intense investigation in psychology and medicine, and then things slacked off as the psychedelics were made illegal and the backlash against the counterculture grew stronger. I was aware of that work, read some of the literature, and experimented with the drugs, mostly marijuana (enough so that it could not be termed experimentation).

Sometime after that, say in the 1980s (but who knows) I realized that consciousness itself was every bit as mysterious as anything that happens on an acid trip or during advanced meditation. We’re used to waking up in the morning and having to take time to concentrate the mind for the day’s activities, we’re familiar with the thrill of a roller coaster ride, being absorbed in a book, concentrating on walking along a log across a creek, drifting with the clouds, or being pleasantly or not so pleasantly drunk. And so those things don’t seem mysterious.

But really, they are. They all are. It’s consciousness that’s the mystery. The rest is just window dressing – well, that’s a bit extreme, but you get the idea.

Anyhow, this piece is about mental experiences I’ve had, from about the age of four to my late 20s. Some of them would qualify as ASCs – the mystical bliss of a musical performance, having a term paper write itself through me – others are just ideas I once had – that the world is just a movie being watched by the Baby Jesus – while the last is an intense intellectual realization of a kind that has come to be known as an Aha! experience. So there is some phenomenological diversity in these experiences. What connects them is simply the fact that I remember them – and for the very earliest the mere fact that I DO remember seems significant and that, without much analytical philosophizing about them, I connect them together as somehow defining what my mind is and how it operates.

I suppose I could add other experiences to the list – in the case of music, I have – but none seem as significant as those. Now if I really thought about it, analyzed things and so forth, then I might revise the list. But that’s the point, I haven’t really done that. These experiences have more or less presented themselves to me when I reflect back on my life. It’s not a long list, ten or a dozen depending on just how you count them, and there’s nothing on there that happened after my late 20s, perhaps 30 – I’m not sure just when that last one happened, the Eureka experience, but it had to have been prior to 1978, when my Ph.D. was awarded. Of course I’ve not stopped living and thinking since then, and important things have happened, many of them. But something seems to have been completed at the time of that last experience.

Wednesday, December 11, 2019

Greta Thunberg, Time Person of the Year

Jump and Kong fly the Magical Elephant

Robert William Fogel on the Future of America [#Progress]

Back in 2000 Robert William Fogel, the great economic historian, published The Fourth Great Awakening and the Future of Egalitarianism. He argued that American politics, culture, and society have been driven by recurring cycles called “Great Awakenings”. Each cycle lasts about a century and each has three phases:
A cycle begins with a phase of religious revival, propelled by the tendency of new technological advances to outpace the human capacity to cope with ethical and practical complexities that those new technologies entail. The phase of religious revival is followed by one of rising political effect and reform, followed by a phase in which the new ethics and politics of the religious awakening come under increasing challenge and the political coalition promoted by the awakening goes into decline. These cycles overlap, the end of one cycle coinciding with the beginning of the next.
The dates of the awakenings are as follows (notice the overlap):
  • First Great Awakening: 1730-1830
  • Second Great Awakening: 1800-1920
  • Third Great Awakening: 1890-?
  • Fourth Great Awakening: 1960-?
The first three are widely recognized by historians while the fourth is Fogel’s own conception.

Here’s a quick characterization of the first phase of the fourth awakening (from the publisher’s link above):
1960-?: Return to sensuous religion and reassertion of experiential content of the Bible; rapid growth of the enthusiastic religions; reassertion of concept of personal sin; stress on an ethic of individual responsibility, hard work, a simple life, and dedication to family.
The second phase:
1990-?: Attack on materialist corruption; rise of pro-life, pro-family, and media reform movements; campaign for more value-oriented school curriculum; expansion of tax revolt; attack on entitlements; return to a belief in equality of opportunity.
After he’d presented his main argument, Fogel concluded the book with an afterword, “Whither Goes Our World?” I offer some passages from that for your consideration.

p. 236
The egalitarian tradition in the United States is alive and healthy. It is part of the bequest of my generation and my children’s generation to our grandchildren. The return to the principle of equality of opportunity as the touchstone of egalitarian progress is not a retreat but a recognition that, at very high average incomes for ordinary people, self-realization becomes the critical issue. Equal opportunity turns less on the command of physical capital now than it did at the close of the nineteenth century. Today, and for the foreseeable future, spiritual capital, especially command of those facets of knowledge that are both heavily rewarded in the marketplace and the key to opportunities of volwork, is the crux of the quest for self-realization.
What’s volwork? you ask. Earnwork is undertaken primarily to earn a living. Volwork, in contrast, is not undertaken to earn a living, but it may often involve vigorous and intense activity rather than lolling around in front of the TV. Fogel presents the following table (p. 189):

[Table from Fogel, p. 189: hours of earnwork and volwork per day, 1880 vs. 1995]
Notice the earnwork hours exceed volwork hours by a considerable margin in 1880; that is reversed in 1995. Fogel goes on to remark: “It is the abundance of leisure time that promotes the search for a deeper understanding of the meaning of life and fuels engagement with the issues of the Fourth Great Awakening.”

p. 239
The traditional family is likely to become stronger. Culture is one engine of change. Business, educational, and government institutions are increasingly accommodating themselves to a labor force that places great emphasis on life outside work. The cultural effect of the Fourth Great Awakening, which emphasizes the bearing and rearing of children, is already visible in some new media programming that celebrates religion and the family. Technology, which once promoted large-scale enterprise and separated the workplace from the home, is now facilitating the reunification of workplace and home.

Major reductions in inequality among nations over the next half century are also likely.
About that first paragraph, is that in fact what we’ve seen in the last two decades? I know there’s lots of talk about the need for work-life balance, but is there any action on that front? Certainly for a significant class of people the online networked world means that they’re now available for work 24/7/365. How many companies expect employees to honor that total availability for work? Certainly that’s the case for some start-ups, because that’s how start-ups are. And beyond that? Is this “the reunification of workplace and home” about increasing opportunities for family life and volwork, or is it about increasing the claims of work on people’s lives?

Surely this has been/is being researched. What’s the data say? I wouldn’t be at all surprised if the data presented a complex picture showing differences among industries and types of work.

Once again, what's up with all the tags/labels?

I'd posted on that topic back on November 7, 2017. I don't know how many tags I had then, but I did a count on December 30, 2016 and there were 481 then. Now, in mid-December of 2019, there are 680 tags. Yikes!

As I said back then, there are two reasons for this: 1) it allows me to group and reference a fairly specific set of texts on a single topic, such as posts referencing the philosopher Peter Godfrey-Smith, object-oriented ontology (a somewhat more diffuse topic, but still limited), moral injury, the computational envelope of language (a current interest), virtual feudalism, or even this blog, New Savanna. In contrast, the label photo is quite general, with 2143 posts out of the 6525 published so far. But if you're only interested in photos of the remains of Jersey City's old chocolate factory, I've got you covered (only three with that tag, though there are probably more).

And, of course, reason #2: those tags are there to help me find stuff. The strange fact of the matter is that sometimes almost the only way to find something on my computer is to search for it on the blog. I search through the tags/labels for an appropriate one, then I start looking through posts. I'm looking for a highly specific word or phrase I can then use to search my hard drive. Sometimes that roundabout route works quickly, and sometimes it doesn't. But it's worth a shot.

Recently I was looking for the original text of some blog posts about my early interest in jazz. So I looked under the tag BBjazzed and then started going through the individual posts. I then picked out words and phrases and looked for them on my hard drive and was unable to find my original text for those posts. Maybe I wrote those posts directly in the blog, which I sometimes do (I'm doing it now). Though, frankly I don't quite believe I did that for those posts. Have I somehow lost the original text files? Or is it just that my hard-drive search engine doesn't know how to find them, or I'm not using the engine properly? I don't know.

Finding things is tough, even things in your own mind. Or is that especially things in your own mind? And you know what, I'm sure that the AGI Super Intelligence of the Future will have the same problem. Minds are like that.


Current AI speech understanding systems are brittle

Following up "Shelties On Alki Story Forest" (11/26/2019) and "The right boot of the warner of the baron" (12/6/2019), here's some recent testimony from engineers at Google about the brittleness of contemporary speech-to-text systems: Arun Narayanan et al., "Recognizing Long-Form Speech Using Streaming End-To-End Models", arXiv 10/24/2019.

The goal of that paper is to document some methods for making things better. But I want to underline the fact that considerable headroom remains, even with the massive amounts of training material and computational resources available to a company like Google.

Modern AI (almost) works because of machine learning techniques that find patterns in training data, rather than relying on human programming of explicit rules. A weakness of this approach has always been that generalization to material different in any way from the training set can be unpredictably poor. (Though of course rule- or constraint-based approaches to AI generally never even got off the ground at all.) "End-to-end" techniques, which eliminate human-defined layers like words, so that speech-to-text systems learn to map directly between sound waveforms and letter strings, are especially brittle.
Read the whole thing; it's not very long.

FWIW, I note that the old rule-based systems (the ones that never got off the ground) were also brittle.
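If you want to see what “map directly between sound waveforms and letter strings” amounts to, here’s a toy end-to-end skeleton in PyTorch. Everything in it – the layer sizes, the character vocabulary, the random stand-in for audio – is illustrative only and bears no relation to Google’s systems:

```python
# Toy end-to-end speech-to-text skeleton: acoustic feature frames in, character
# probabilities out, trained with CTC so no word-level layer is ever defined.
import torch
import torch.nn as nn

CHARS = " abcdefghijklmnopqrstuvwxyz'"   # index 0 is reserved for the CTC blank
NUM_CLASSES = len(CHARS) + 1

class ToyEndToEndASR(nn.Module):
    def __init__(self, n_mels=80, hidden=256):
        super().__init__()
        # In a real system these feature frames would come from the waveform.
        self.encoder = nn.LSTM(n_mels, hidden, num_layers=2,
                               batch_first=True, bidirectional=True)
        self.classifier = nn.Linear(2 * hidden, NUM_CLASSES)

    def forward(self, features):              # (batch, time, n_mels)
        encoded, _ = self.encoder(features)
        return self.classifier(encoded)       # (batch, time, NUM_CLASSES) logits

model = ToyEndToEndASR()
ctc_loss = nn.CTCLoss(blank=0)

# One fake training step on random "audio": the model only ever sees feature
# frames and character targets -- no lexicon, no language model, no words.
features = torch.randn(4, 200, 80)
targets = torch.randint(1, NUM_CLASSES, (4, 30))
log_probs = model(features).log_softmax(-1).transpose(0, 1)   # (time, batch, classes)
loss = ctc_loss(log_probs, targets,
                torch.full((4,), 200, dtype=torch.long),
                torch.full((4,), 30, dtype=torch.long))
loss.backward()
print(float(loss))
```

Nothing in the model knows what a word is, which is the sense in which the word layer has been eliminated – and also why there is nothing to fall back on when the input drifts away from the training distribution.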

Those were the days, my friend [Erie-Lackawanna]



Tuesday, December 10, 2019

The Crown and The Marvelous Mrs. Maisel [Media Notes 25]

I finished watching season three of The Marvelous Mrs. Maisel over the weekend and season three of The Crown yesterday. I love both shows, and I want to compare them. But they’re so different in subject matter and tone that a comparison could easily become an intellectual stunt rather than a useful exercise.

We’ll see.

Both shows, of course, center on women. Queen Elizabeth is at the center and peak of her society while Miriam “Midge” Maisel, as a Jew, is somewhat marginal to hers. Elizabeth didn’t choose to be Queen; rather, she had the role thrust upon her after her uncle abdicated – for love, no less – putting her father, and then her, on the throne. Midge chose to become a comedian, against the wishes of her family and her society – as I recall from season one, her husband thought HE had talent, but didn’t. In becoming a comedian she was seeking personal fulfillment. Elizabeth had to deny herself personal fulfillment in order to carry out her duty as Queen.

And so forth. The two series are opposites, complements, supplements, whatever. Moreover you can see it in the design and lighting. The Crown is somber and has many scenes with low lighting. Mrs. Maisel is brightly lit and full of saturated colors.

The final episode of the third season of The Crown centers on Elizabeth’s sister, Princess Margaret. They are quite different in temperament and there had been tension between them since childhood. Margaret was more artistic while Elizabeth was more conventional, a contrast which has come to the fore at various times in the show. In this episode Margaret’s marriage, to a famous photographer, is on the rocks. He’s having an affair. She retaliates and is caught on camera with her lover at her Caribbean estate. The Queen Mother summons her home in disgrace. She attempts suicide. Elizabeth visits her in bed and is moved to tell her that she is the most important person in the world to her.

Mrs. Maisel’s third season ends with her anxious to go out on the road again with Shy Baldwin, a headlining African American singer who strikes me as being modeled on Johnny Mathis (though Harry Belafonte seems to be the fan favorite). She plays Las Vegas and a Florida resort (where her parents visit her), and has a grand time. In Florida she learns that Shy is gay, something no one (except a very close few) knows – and she swears she’ll tell no one, no one. When she opens for him at the Apollo – yes, Midge Maisel, Jewish matron from the Upper West Side at the freakin’ Apollo – she hints about his sexual identity. When she arrives at the airport to get on the plane to continue the tour she finds that she’s been dumped. “I thought we were friends”, she tells Shy’s manager – or words to that effect. “No”, he replies, “you were musicians on tour.”

What I’m wondering is this: What framework would be able to accommodate both of these stories, or, if not exactly those stories, stories like them? And by accommodate I mean down to how the characters are costumed, sets are designed and dressed, and how shots are filmed and lit?

* * * * *

Incidentally, Shy’s bassist is a woman named Carole Keen. She’s obviously modeled after the legendary studio bassist Carol Kaye.

The Chinese were printing with moveable type three centuries before Gutenberg

DIY film-making in Kaduna, Nigeria

Hitchcock camera moves

Monday, December 9, 2019

Can you learn anything worthwhile about a text if you treat it, not as a TEXT, but as a string of marks on pages? [#DH]

With Matt Jockers' 3300 node graph on my mind, this post deserves a bump to the top of the queue.
The Chronicle of Higher Education just published a drive-by take-down of the digital humanities. It was by one Timothy Brennan, who didn’t know what he was talking about, didn’t know that he didn’t know, and, more likely than not, didn’t care.
Timothy Brennan, The Digital-Humanities Bust, The Chronicle of Higher Education, October 15, 2017, http://www.chronicle.com/article/The-Digital-Humanities-Bust/241424
Subsequently there was a relatively brief tweet storm in the DH twittersphere in which one Michael Gavin observed that Brennan seemed genuinely confused:


“Lexical patterns”, what are they? The purpose of this post is to explicate my response to Gavin.

The Text is not the (physical) text

While literary critics sometimes use “the text” to refer to a physical book, or to alphanumeric markings on the pages in such a book, they generally have something vaguer and more expansive in mind. Here is a passage from a well-known, I won’t say “text”, article by Roland Barthes [1]:
1. The text must not be understood as a computable object. It would be futile to attempt a material separation of works from texts. In particular, we must not permit ourselves to say: the work is classical, the text is avant-garde; there is no question of establishing a trophy in modernity's name and declaring certain literary productions in and out by reason of their chronological situation: there can be “Text” in a very old work, and many products of contemporary literature are not texts at all. The difference is as follows: the work is a fragment of substance, it occupies a portion of the spaces of books (for example, in a library). The Text is a methodological field. The opposition may recall (though not reproduce term for term) a distinction proposed by Lacan: “reality” is shown [se montre], the “real” is proved [se démontre]; in the same way, the work is seen (in bookstores, in card catalogues, on examination syllabuses), the text is demonstrated, is spoken according to certain rules (or against certain rules); the work is held in the hand, the text is held in language: it exists only when caught up in a discourse (or rather it is Text for the very reason that it knows itself to be so); the Text is not the decomposition of the work, it is the work which is the Text's imaginary tail. Or again: the Text is experienced only in an activity, in a production. It follows that the Text cannot stop (for example, at a library shelf); its constitutive moment is traversal (notably, it can traverse the work, several works).  
And that is just the first of seven propositions in that well known text article, which has attained, shall we say, the status of a classic.

I have no intention of offering extended commentary on this passage. I will note, however, that Barthes obviously knows that there’s an important difference between the physical object and what he’s calling the text. Every critic knows that. We are not dumb, but we do have work to do.

Secondly, perhaps the central concept is in that italicized assertion: “the Text is experienced only in an activity, in a production.”

Finally, I note that that first sentence has also been translated as: “The Text must not be thought of as a defined object” [2]. Not being a reader of French, much less a French speaker, I don’t know which translation is truer to the original. It is quite possible that they are equally true and false at the same time. But “computable object” has more resonance in this particular context.

Now, just to flesh things out a bit, let us consider a more recent passage, one that is more didactic. This is from the introduction Rita Copeland and Frances Ferguson prepared for five essays from the 2012 English Institute devoted to the text [3]:
Yet with the conceptual breadth that has come to characterize notions of text and textuality, literary criticism has found itself at a confluence of disciplines, including linguistics, anthropology, history, politics, and law. Thus, for example, notions of cultural text and social text have placed literary study in productive dialogue with fields in the social sciences. Moreover, text has come to stand for different and often contradictory things: linguistic data for philology; the unfolding “real time” of interaction for sociolinguistics; the problems of copy-text and markup in editorial theory; the objectified written work (“verbal icon”) for New Criticism; in some versions of poststructuralism the horizons of language that overcome the closure of the work; in theater studies the other of performance, ambiguously artifact and event. “Text” has been the subject of venerable traditions of scholarship centered on the establishment and critique of scriptural authority as well as the classical heritage. In the modern world it figures anew in the regulation of intellectual property. Has text become, or was it always, an ideal, immaterial object, a conceptual site for the investigation of knowledge, ownership and propriety, or authority? If so, what then is, or ever was, a “material” text? What institutions, linguistic procedures, commentary forms, and interpretive protocols stabilize text as an object of study? [p. 417]
“Linguistic data” and “copy-text”, they sound like the physical text itself, the rest of them, not so much.

If literary critics were to confine themselves to discussing the physical text, what would we say? Those engaged in book studies and editorial projects would have more to say than most, but even they would find such rigor to be intolerably confining. The physical signs on the page, or the vibrations in the air, exist and come alive in a vast and complicated network of ... well, just exactly what? Relationships among people to be sure, but also relationships between sights and sounds and ideas and movements and feelings and a whole bunch of stuff mediated by the nervous systems of all those people interacting with one another.

It’s that vast network of people and neuro-mental stuff that we’re trying to understand when we explicate literary and cultural Texts. As we lack really good accounts of all that stuff, literary critics have felt that we had little choice but to adopt this more capacious conception, albeit at the expense of definition and precision. Anyhow, aren’t the people trying to figure out those systems, aren’t they scientists? And aren’t we, as humanists, skeptical about science?

And then along came the computer.

An ontological gulf

The thing about computational criticism is that computers don’t have all that other stuff – a complex fluctuating network of interactions among people both containing and embedded in a vast and turbulent meshwork of flashing neurons (one of Tim Morton’s hyperobjects?) – available to them. The computer deals only with those dumb marks on the page, or rather, with digital representations of them. There is only the physical text, that dull thing that other literary critics pass over in favor of The Text.

Thus there is an ontological gulf between computational criticism and the many varieties of – I hate to say it ­– conventional literary criticism. Here I mean ontological in the sense it has come to have in the cognitive sciences, a sense where a more conventional humanist might talk of different discourses (Foucault) or paradigms (Kuhn). In this sense an ontology is an inventory of concepts. Salt, as ordinarily understood, and sodium chloride, exist in different ontologies in this sense [4]. Humans have known about salt since forever, and apprehend it through its taste, texture and color; even animals know about it. Sodium chloride is quite different conceptually, though physically, yes, it is salt. Conceptually it is defined in terms of bonds between atoms which themselves consist of electrons, protons, and neutrons, a set of concepts developed in Western science in the 18th and 19th centuries.

When it comes to the concept of text, the conventional critic and the computational critic operate in different conceptual worlds, with different ontologies. Yet there are subtleties.

Text and context


Algorithmic bias and disinformation

Some informal remarks on Jockers’ 3300 node graph: Part 2, structure and computational process [#DH]

My previous note was about time and evolution. This one is about mechanism. And like the previous note, it is also about intuition – though I didn’t frame that note in that way. When I’d thought about Jockers’ graph just a bit, I decided it betokened an evolutionary process. That decision reflected an intuitive judgement. It’s not something I reasoned out, it’s something I saw, if you will. It appeared before me. Once that intuition had formed, I set about rationalizing it.

This note is about how my early immersion in computational semantics guides my thinking in, well, in many things. I turned to computational semantics when I’d thrown everything I had in the way of literary theory, such as it existed before one talked of Theory with a capital “T”, plus a few other things (Piaget, Merleau-Ponty, Nietzsche, Wittgenstein) at “Kubla Khan” and had it fall apart. If there was a way forward, I thought, computation would be it.

But let’s set that story aside for the moment; we’ll return to it later. I want to open by talking about what I believe to be the most immediate effect my computational background had on my perception of Jockers’ graph: I saw it as a manifestation of a process. Then I’ll talk about the broader effects of that experience on my approach to literary criticism.

Diagrams and process

The computational semantics I studied under David Hays at the State University of New York at Buffalo (SUNYAB, or just UB) was and is quite different from anything in computational criticism, though it is perhaps a little like work using vector semantics, but only a little. Of course semantics is only part of such a model, which must also include morphology, syntax, pragmatics, discourse, and speech synthesis and hearing, on the one hand, and character recognition on the other (generating streams of characters is trivial). The objectives of computational critics vary among investigators and from one investigation to another, but no one seeks to model the linguistic processes of reading and writing, listening and talking. Computational criticism isn’t trying to understand language mechanisms at all, not at the level of phrases and sentences and not, I’d argue, at the level of whole texts either.

How does one create such models? Techniques vary, a lot, but the range of techniques is secondary to this discussion, which is about what I’ve brought with me to my understanding of computational criticism in general, and Jockers’ graph in particular. What I’ve brought is a great deal of experience in working with graphs as models of mental processes. Here’s a fragment of the semantic model I developed while working on Shakespeare’s Sonnet 129:

[Diagram: fragment of the semantic network for Sonnet 129]
The graph is quite different from Jockers’ graph. For one thing it has fewer nodes, by a considerable margin. But it is otherwise more complex. The nodes in Jockers’ graph represent the same kind of object, a text, and the edges between them are of the same kind, proximity in space. The nodes in that semantic network are of various kinds – objects, events, properties of objects or events, some even represent whole bundles of objects and events – as are the edges. And the space in which a semantic network is embedded has no metric associated with it; the physical distance between nodes is a mere diagrammatic convenience and has no formal significance. Taken together these various kinds of nodes and edges can be used to specify processes in the network. That’s the crucial point: semantic networks may be depicted as static objects on a page, just as one may depict a clock mechanism as a static object, but they function in, are designed to function in, linguistic processes.
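A crude way to put that difference in data-structure terms – a sketch using networkx, with node and arc labels loosely suggested by the first line of Sonnet 129 but otherwise invented for illustration, not taken from my network or from Jockers’ graph:

```python
# Two graphs, two ontologies -- an illustration, not either actual graph.
import networkx as nx

# Jockers-style graph: every node is the same kind of thing (a text) and
# every edge the same kind of relation (proximity in feature space).
proximity = nx.Graph()
proximity.add_edge("Moby-Dick", "Typee", weight=0.83)
proximity.add_edge("Moby-Dick", "Omoo", weight=0.79)

# Semantic-network-style graph: nodes and edges come in different kinds,
# and the typed arcs (OBJ, MOD, ...) are what carry the structure.
semantic = nx.MultiDiGraph()
semantic.add_node("expense1", kind="event")
semantic.add_node("spirit1", kind="object")
semantic.add_node("shame1", kind="property")
semantic.add_edge("expense1", "spirit1", rel="OBJ")   # what is expended
semantic.add_edge("expense1", "shame1", rel="MOD")    # how it is characterized

print(proximity["Moby-Dick"]["Typee"]["weight"])
print([(u, v, d["rel"]) for u, v, d in semantic.edges(data=True)])
```

In the first graph the geometry (the edge weights) is the content; in the second the typed nodes and arcs are the content and layout means nothing, which is why a network of the second kind can specify a process.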

That’s what I brought with me to Jockers’ graph, the concept of a graph that embodies or supports a process. By itself the graph would not have activated that concept, but when I read that the graph ordered the nodes in rough chronological order despite the fact that there was no temporal information in the underlying database, THAT told me there’s a process at work in that graph. I judged the process to be an evolutionary one – what else could it be? – and began thinking about it.

Yes, I know, a large scale evolutionary process is very different from a micro scale linguistic process, but these diagrams are very abstract objects. At a high enough level of abstraction a network is a network and a process is a process. Moreover, as I indicated in my earlier remarks on “generic time trends”, I’ve been thinking about evolutionary processes as long as, if not longer than, I’ve been thinking about linguistic processes. It was thus all but inevitable and natural that I would read that graph as the trace of a process unfolding in time, an evolutionary process.

My break with 'traditional' literary criticism

But why, with a background in literary criticism, did I turn to such strange conceptual objects in the first place? As I’ve indicated in my introduction, I had become interested in “Kubla Khan”. I set out to do a structuralist analysis of the poem – this was before structuralism had more or less fallen apart within the literary academy – and it didn’t work. It’s not that I couldn’t find binary oppositions in the poem. I could. They’re all over the place – Kubla vs. wailing woman, Kubla vs. damsel with a dulcimer, pleasure dome vs. caves, sound vs. sight, inspired poet vs. those who hear and see, ice vs. Paradise, and on and on – and that was a problem. I couldn’t see any ‘narrative’ order in the profusion of oppositions.

This is not the place to give a blow-by-blow account of what happened, I’ve done that elsewhere [1]. Suffice it to say that I’d discovered that the poem had a structure that could be diagrammed like this (first 36 lines):

[Tree diagram: the nested structure of the poem’s first 36 lines]

Those nested ternary structures (in red) looked like, smelled like, computation at work. By the time I’d gotten that far in my analytic and descriptive work on the poem I’d become aware of a variety of work in the nascent cognitive sciences – the phrase “cognitive science” wasn’t coined until 1973, after I’d done my initial work on “Kubla Khan” – and so I turned to them.

What I got was a new and, I believe, quite valuable way of thinking about language and mental processes, but one not quite up to satisfying my curiosity about “Kubla Khan”. Nonetheless I couldn’t look back, I couldn’t unlearn what I’d learned and thereby return to a more naïve approach to literary criticism. And yes, I regard standard literary criticism, to the extent that there is such a thing, up through new historicism and post structuralist approaches, as naïve, and a bit confused as well [2]. The text is a crucial notion; is there any consensus on what constitutes the text? No. And the same with form, another critical concept about which there is no critical consensus.

The upshot is that I am a native reader and writer of two different discourses focused on language: literary criticism and cognitive science. I remain comfortable with reading a wide variety of literary criticism, and I can write it, at least up to a point. But I can also read and write cognitive science and do so. I find that, for the most part, the world of computational criticism is commensurate with that of cognitive science. To use a crude geographical metaphor, think of North America as the New World. I’ve spent most of my time, say, exploring the territory along the East Coast and through the Midwest to the Mississippi River. That’s where I’d met these computational critics, who’d come up through Central America and along the Rocky Mountains to the plains states. So, it’s a different kind of territory, but still on the same continent. We’re doing the same kind of thing. Standard literary criticism, on the other hand, that’s the Old World. I’ve been there, they’ve been there, we’ve all been there, but the New World is where we function best.

Crude, yes, but serviceable.

Scale

Let’s drop the analogy and take a brief look at a problem that’s been much discussed in the recent past, though such discussion seems to have subsided. I’m talking of the problem of scale, of so-called distant reading vs. so-called close reading.

From my point of view the issue is miscast. Scale, so far as I know, is not an issue in biology, where they’ve got to deal with individual cells and their components and the evolution of life on earth as well. That’s two very different scales of analysis, but the terms in which the analysis is conducted are mutually commensurate across all scales. The situation in literary criticism is not so clear. The terms of analysis used in computational criticism are quite different from those in any of the standard schools of critical reading, from New Criticism on through the various forms of post-structuralism. Many proponents of close-reading regard the terms of computational criticism as absolutely incommensurate with close-reading, and hence computational criticism is either wrong or trades in trivial truisms [3]. Computational critics see it differently. Some may well reject close-reading across the board, though I’ve not seen that position publicly articulated. Others admit, yes, the terms are different, but we’re ultimately looking at the same objects, literary texts. It’s just that we’re looking at them in somewhat different ways and, yes, at different scales. But, as I said, this talk of scales is beside the point. It’s those different ways that matter.

For me, there is no issue of scale. I learned computational semantics as a tool to use at the micro scale. I’ve also developed an approach to the analysis and description of form at the micro scale [4]. These concepts are perfectly compatible with those of computational criticism. Their relationship to ordinary “close-reading”, however, that’s problematic. And it’s problematic in the same way that computational criticism itself is problematic.

That issue is more than I want to address in this (relatively) short and informal note. Basically, I think those critics must explicitly embrace aesthetic and ethical values in a full-blown and explicit ethical criticism, which is beyond computational criticism at whatever scale, macro (as in so-called distant reading) or micro (as in computational semantics or descriptive analysis of form). I’ve written a fair bit about ethical criticism here on New Savanna but have yet to formulate a central statement [5].

References

[1] There’s an autobiographical account in William Benzon, Touchstones • Strange Encounters • Strange Poems • the beginning of an intellectual life (1975-2015)
https://www.academia.edu/9814276/Touchstones_Strange_Encounters_Strange_Poems_the_beginning_of_an_intellectual_life.

For a more conceptual account, see my working paper,
Beyond Lévi-Strauss on Myth: Objectification, Computation, and Cognition (2015),
https://www.academia.edu/10541585/Beyond_Lévi-Strauss_on_Myth_Objectification_Computation_and_Cognition. In particular, see section 4, “Into Lévi-Strauss and out through ‘Kubla Khan’”, pp. 20-27.

[2] On my general skepticism about literary criticism, see my long blog post, Literary Studies from a Martian Point of View: An Open Letter to Charlie Altieri (December 17, 2015)
http://new-savanna.blogspot.com/2015/12/literary-studies-from-martian-point-of.html,  my working paper, An Open Letter to Dan Everett about Literary Criticism (February 19, 2017),
https://www.academia.edu/33589497/An_Open_Letter_to_Dan_Everett_about_Literary_Criticism (PDF), and Rejected! @ New Literary History, with observations about the discipline (February 28, 2017)
https://www.academia.edu/31647383/Rejected_at_New_Literary_History_with_observations_about_the_discipline.

[3] That seems to be the position taken by Nan Z. Da, The Computational Case against Computational Literary Studies, Critical Inquiry 45, Spring 2019, 601-639.

[4] For a methodological and programmatic statement, see William Benzon, Literary Morphology: Nine Propositions in a Naturalist Theory of Form, PsyArt: An Online Journal for the Psychological Study of the Arts, August 2006, Article 060608, https://www.academia.edu/235110/Literary_Morphology_Nine_Propositions_in_a_Naturalist_Theory_of_Form. For a methodological statement about description see, William Benzon, Description 3: The Primacy of Visualization, Working Paper, October 2015, 48 pp., https://www.academia.edu/16835585/Description_3_The_Primacy_of_Visualization.

[5] For example, see my post Ethical Criticism: Blakey Vermeule on Theory, Cornel West in the Academy, Now What?, September 23, 2015, https://new-savanna.blogspot.com/2015/09/ethical-criticism-blakely-vermeule-on.html. More generally, see the posts gathered under the label, “ethical criticism”, https://new-savanna.blogspot.com/search/label/ethical%20criticism.

Houses on a hill overlooking the Hudson River

The profession of literary criticism as I have observed it over the course of 50 years

Updated 12.9.19.
Updated 6.23.17.

In the course of thinking about my recent rejection at New Literary History I found myself, once again, rethinking the evolution of the profession as I’ve seen it from the 1960s to the present. In fact, that rejection has led me, once again, to rethink that history and to change some of my ideas, particularly about the significance of the 1970s.

This post is a guide to my historically-oriented thinking about academic literary criticism. Much, but not all, of the historical material is autobiographical in nature.

I list the articles more or less in the order of writing. In some cases a post has been rewritten and revised several years after I first wrote it. The link I give is to the most recent version.

Touchstones • Strange Encounters • Strange Poems • the beginning of an intellectual life (1975-2015)

This is about my years at Johns Hopkins, both undergraduate (1965-1969) and graduate (1969-72). That’s when, I see in retrospect, I left the profession intellectually, with a “structuralism and beyond” MA thesis on “Kubla Khan,” even before I’d joined it institutionally by getting my PhD. I originally wrote this while I was working on my PhD in English at SUNY Buffalo. Art Efron published a journal, Paunch, and I wrote it for that. The current version includes interpolated comments from 2014 and 2015.

The Demise of Deconstruction: On J. Hillis Miller’s MLA Presidential Address 1986. PMLA. Vol. 103, No. 1, Jan. 1988, p. 57.

A letter I published in PMLA in which I replied to J. Hillis Miller on the eclipse of deconstruction. I suggested 1) that deconstruction had a different valence for those who merely learned it in graduate school than for those who had struggled to create it, and 2) that it was in eclipse because it did the same thing to every text.

For the Historical Record: Cog Sci and Lit Theory, A Chronology (2006-2016)

At the beginning of every course (at Johns Hopkins) Dick Macksey would hand out a chronology, a way, I suppose, of saying “history is important” without lecturing on the topic. It was with that in mind that I originally posted this rough and ready chronology in a comment to a discussion at The Valve. The occasion was an online symposium that interrogated Theory by discussing the anthology, Theory’s Empire (Columbia UP 2005). I then emended it a bit and made it a freestanding post. As the title suggests, it juxtaposes developments in cognitive science and literary theory from the 1950s through the end of the millennium.

[BTW The entire Theory’s Empire symposium is worth looking at, including the comments on the posts: http://www.thevalve.org/go/valve/archive_asc/C41]

Seven Sacred Words: An Open Letter to Steven Pinker (2007-2011)

An Open Letter to Steven Pinker: The Importance of Stories and the Nature of Literary Criticism (2015)

Steven Pinker has been a severe critic of the humanities for ignoring recent work in the social and behavioral sciences. He has also argued that the arts serve no biological purpose, that they are “cheesecake for the mind.” When I read his The Stuff of Thought (2007) I realized his later chapters contained the basis for an account of the arts. I sketched that out, added a brief account of why deconstruction had been popular, and published it as an open letter, along with his reply. It appeared first at The Valve (2007) and then at New Savanna (2011). In 2015 I posted it to a “session” at Academia.edu. I took some of my comments in that discussion along with some other materials and published the lot at Academia.edu as a working paper. In a final section I propose a four-fold division of literary criticism: 1) description, 2) naturalist criticism, 3) ethical criticism, and 4) digital criticism.

Lévi-Strauss and Myth: Some Informal Notes (2007-2011)

Beyond Lévi-Strauss on Myth: Objectification, Computation, and Cognition (2007-2015)

These are two versions of roughly the same material. Each was assembled from four blog posts. The first and fourth sections are the same in both working papers, but sections two and three differ. The more recent version also contains a short appendix comparing Lévi-Strauss and Latour. I published the first series at The Valve shortly after Lévi-Strauss had died. They are an attempt to explain what Lévi-Strauss was up to in his work on myth, why he failed, and why that work remains important. The fourth section (common to both versions), Into Lévi-Strauss and Out Through “Kubla Khan”, is an account of how and why I went from Lévi-Strauss’s structuralism to cognitive science. Warning: it contains diagrams. I suppose I could create a deluxe edition which contains all the posts.

The Only Game in Town: Digital Criticism Comes of Age (May 5, 2014)

Here I argue that digital criticism’s deepest contribution to literary criticism is that it requires fundamentally different modes of thinking. It is not purely discursive. It is statistical and visual. Moreover the visualizations are central to the thought process. This may also be the first time I’ve explicitly identified the mid-1970s as an important turning point in the recent history of literary criticism.

Paths Not Taken and the Land Before Us: An Open Letter to J. Hillis Miller (January 30, 2015)

I had studied with Miller at Johns Hopkins (but have had no contact with him since). While I certainly say a bit about what I’ve been doing since I left Hopkins, including ring-composition, I also introduce him to Matt Jockers’ Macroanalysis and Goldstone and Underwood, “The Quiet Transformations of Literary Studies: What Thirteen Thousand Scholars Could Tell Us”. New Literary History 45, no. 3, Summer 2014. I mention Kemp Malone, a Hopkins person, as he came up in blog discussion of the paper.

On the Poverty of Literary Cognitivism 2: What I Learned When I Walked off the Cliff of Cognitivism (August 24, 2015)

I attempt to explain what, in the end, I got out of my immersion in cognitive networks since I haven’t used them in my post-graduate work in literature. What I got most immediately was a powerful way of thinking about language in general where there is a sharp distinction between the object of thought, captured in diagrams, and a given text: The text is one thing, the model is another. There is no confusing the two. Moretti has made similar remarks about the diagrams he uses in ‘distant reading.’