Tuesday, July 17, 2018

White cloud

20180707-_IGP2389

A spearmaster's way of death

Mary Douglas, Purity and Danger (1966) 178-179:
Another example of death being softened by welcome, if we can put it that way, is the ritual murder by which the Dinka put to death their aged spearmasters. This is the central rite in Dinka religion. All their other rites and bloodily expressive sacrifices pale in significance besides this one which is not a sacrifice. The spearmasters are a hereditary clan of priests. Their divinity, Flesh, is a symbol of life, light and truth. Spearmasters may be possessed by the divinity; the sacrifices they perform and blessings they give are more efficacious than other men’s. They mediate between their tribe and divinity. The doctrine underlying the ritual of their death is that the spearmaster’s life should not be allowed to escape with his last breath from his dying body. By keeping his life in his body his life is preserved; and the spirit of the spearmaster is thus transmitted to his successor for the good of the community. The community can live on as a rational order because of the unafraid self-sacrifice of its priest.

By reputation among foreign travellers this rite was a brutal suffocation of a helpless old man. An intimate study of Dinka religious ideas reveals the central theme to be the old man’s voluntary choosing of the time, manner and place of his death. The old man himself asks for the death to be prepared for him, he asks for it from his people and on their behalf. He is reverently carried to his grave, and lying in it says his last words to his grieving sons before his natural death is anticipated. By his free, deliberate decision he robs death of the uncertainty of its time and place of coming. His own willing death, ritually framed by the grave itself, is a communal victory for all his people (Lienhardt). By confronting death and grasping it firmly he has said something to his people about the nature of life. [...]

The old spearmaster giving the sign for his own slaying makes a stiffly ritual act. It has none of the exuberance of St. Francis of Assisi rolling naked in the filth and welcoming his Sister Death. But his act touches the same mystery. If anyone held the idea that death and suffering are not an integral part of nature, the delusion is corrected. If there was a temptation to treat ritual as a magic lamp to be rubbed for gaining unlimited riches and power, ritual shows its other side. If the hierarchy of values was crudely material, it is dramatically undermined by paradox and contradiction. In painting such dark themes, pollution symbols are as necessary as the use of black in any depiction whatsoever. Therefore we find corruption enshrined in sacred places and times.

Signatures


Monday, July 16, 2018

Planet of the Apes Redux, coming to a theatre near you


Shoot for the stars

20180707-_IGP2366

Continuity and discontinuity in literary history, H vs. DH? (Literary Lab 4) – Tales from the Twitterverse [#DH]

If, a few years ago, you’d asked me whether or not a half dozen scholars could have an interesting and fruitful conversation 280 characters at a time, I’d have said “Are you freakin’ kidding me?”–or words to that effect. But it happens, not all the time, but often enough, and it even happened when we were restricted to 140 characters at a time. Such is life in the academic Twittersphere.

Ted Underwood kicked one off on Friday (the 13th, FWIW) and it continued on into Saturday. A half dozen or so joined in and who knows how many followed along–a dozen, 30, 50, more? I don’t know what Ted expected when he threw that first tweet into the maelstrom, nor does that much matter. What matters is what happened, and that was unplanned, spontaneous, even fun.

Of course, it requires interlocutors who understand the issues at hand well enough that they can speak in code. And they need to read one another charitably and in good faith. But, given those conditions, it is possible to do some good work.

But enough of this. Let’s get to it.

In the next section of this post I reproduce three tweets from that conversation and add some commentary. These set the stage for the next two sections, where I use the well-known Stanford Literary Lab Pamphlet 4 to interrogate the issues raised in the first section; this fleshes out an argument I tossed into the conversation in a five-tweet string.

Dropping science in the Twitterverse

Here’s the tweet that started things off:

So, yeah, it’s a bit aggressive: maybe the received wisdom ain’t necessarily so (as the song goes). And periodization is just dropped in there at the end, as a for-example. Of course, it’s an important case because literary studies is more or less organized according to periods: fields of study, journals, conferences, professional organizations, coursework, all organized by period. To be sure, period isn’t the only parameter of organization–we’ve also got author, genre, and a blither of theoretical proclivities–but it’s an important one.

And it’s one that Underwood has investigated. Here’s the abstract of an article he and Jordan Sellers recently published, The Longue Durée of Literary Prestige [1]:
A history of literary prestige needs to study both works that achieved distinction and the mass of volumes from which they were distinguished. To understand how those patterns of preference changed across a century, we gathered two samples of English-language poetry from the period 1820–1919: one drawn from volumes reviewed in prominent periodicals and one selected at random from a large digital library (in which the majority of authors are relatively obscure). The stylistic differences associated with literary prominence turn out to be quite stable: a statistical model trained to distinguish reviewed from random volumes in any quarter of this century can make predictions almost as accurate about the rest of the period. The “poetic revolutions” described by many histories are not visible in this model; instead, there is a steady tendency for new volumes of poetry to change by slightly exaggerating certain features that defined prestige in the recent past.
Those “poetic revolutions” imply period boundaries, but those boundaries didn’t show up. To be sure, the model did show change, but it was gradual and seemed to have a historical direction–which is a whole different kettle of conceptual and perhaps even ideological fish.

I rather liked that work, and took a close look at it [2]. I particularly liked the apparent directionality it revealed, but let’s set that aside. As for the lack of period boundaries...well, I don’t think periodization is simply a matter of disciplinary hallucination. So I’m inclined to think that the method Underwood and Sellers used simply isn’t sensitive to such matters. But that’s not an argument; it’s merely a casual assertion. Underwood is right, there IS work to be done.
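For concreteness, here’s a minimal sketch of the kind of cross-period test the abstract describes. This is not Underwood and Sellers’s actual pipeline; the feature matrix X (word frequencies per volume), the labels y (reviewed vs. random), and the years array are hypothetical stand-ins:

```python
# A toy version of the cross-period test: train a regularized classifier to
# separate reviewed from random volumes using one quarter-century, then see
# whether its accuracy holds up on the rest of the period.
import numpy as np
from sklearn.linear_model import LogisticRegression

def cross_period_accuracy(X, y, years, span=(1820, 1844)):
    in_span = (years >= span[0]) & (years <= span[1])
    model = LogisticRegression(C=0.1, max_iter=1000)  # regularized, as is typical
    model.fit(X[in_span], y[in_span])
    return (model.score(X[in_span], y[in_span]),    # accuracy inside the training span
            model.score(X[~in_span], y[~in_span]))  # accuracy on the other three quarters

# A sharp period boundary should show up as a drop in the second number;
# Underwood and Sellers report that it barely drops at all.
```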

And that’s what caught people’s attention. Daniel Shore entered and exchanged tweets with Ted. Then I suggested color perception as an analogy:

In making that suggestion I had more in mind than the mere presence of categories over continuity. Color perception is tricky, and much studied. We know, and have known for quite a while, that there isn’t a simple and direct relationship between perceived color and the wavelength of light. This is not the place for a primer in color vision (the Wikipedia article is a reasonable place to begin). But I can offer a few remarks.

Blue, for example, does not map directly to a fixed region of the electromagnetic spectrum, with violet and green in adjacent fixed regions. The color of a given region in the visual field is computed over input from three kinds of retinal receptors (cones) having different sensitivities. Moreover, the color of one region is “normalized” over adjacent regions. The upshot is that a red apple will appear red under a wide range of viewing conditions. Different illumination means different wavelengths incident on the apple; hence the light the apple reflects to the eye varies from one situation to the next. Because the brain normalizes, the apple’s color remains constant.
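Here’s a toy illustration of that normalization story–not a model of the retina, just the arithmetic point that dividing each region’s cone responses by a scene-wide average cancels a uniform change in illumination:

```python
# Toy color constancy: divide each region's cone responses (L, M, S) by the
# scene-wide average response. A uniform change in illumination scales every
# response by the same factor, which then cancels in the ratio.
import numpy as np

def normalize_to_surround(cone_responses):
    # cone_responses: shape (n_regions, 3), one row per region of the visual field
    return cone_responses / cone_responses.mean(axis=0)

scene = np.array([[0.8, 0.3, 0.1],    # "red apple" region
                  [0.4, 0.5, 0.2]])   # background region

# Doubling the illumination doubles every response, but the normalized
# values are unchanged -- the apple's color stays constant.
assert np.allclose(normalize_to_surround(scene), normalize_to_surround(2.0 * scene))
```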

We’ll return to part of that story a bit later.

Let’s return to the Twitter conversation with a pair of remarks by Ryan Heuser:

The conversation continued on. Others joined. It forked here and there. And somewhere in there I introduced a string of tweets about Lit Lab 4.

Is it moral to raise children in a doomed world?

I cried two times when my daughter was born. First for joy, when after 27 hours of labor the little feral being we’d made came yowling into the world, and the second for sorrow, holding the earth’s newest human and looking out the window with her at the rows of cars in the hospital parking lot, the strip mall across the street, the box stores and drive-throughs and drainage ditches and asphalt and waste fields that had once been oak groves. A world of extinction and catastrophe, a world in which harmony with nature had long been foreclosed.
What to do?
Take the widely cited 2017 research letter by the geographer Seth Wynes and the environmental scientist Kimberly Nicholas, which argues that the most effective steps any of us can take to decrease carbon emissions are to eat a plant-based diet, avoid flying, live car free and have one fewer child — the last having the most significant impact by far. Wynes and Nicholas argue for teaching these values in high school, thus transforming society through education. On its face, this proposal might seem sensible. But when values taught in the classroom don’t match the values in the rest of society, the classroom rarely wins. The main problem with this proposal isn’t with the ideas of teaching thrift, flying less or going vegetarian, which are all well and good, but rather with the social model such recommendations rely on: the idea that we can save the world through individual consumer choices. We cannot.
Tragedy?
Of course, nobody really needs to have children. It just happens to be the single strongest drive humans have, the fundamental organizing principle of every human society and the necessary condition of a meaningful human world. Procreation alone makes possible the persistence of human culture through time.

To take Wynes and Nicholas’s recommendations to heart would mean cutting oneself off from modern life. It would mean choosing a hermetic, isolated existence and giving up any deep connection to the future. Indeed, taking Wynes and Nicholas’s argument seriously would mean acknowledging that the only truly moral response to global climate change is to commit suicide. There is simply no more effective way to shrink your carbon footprint. [...]

When my daughter was born I felt a love and connection I’d never felt before: a surge of tenderness harrowing in its intensity. I knew that I would kill for her, die for her, sacrifice anything for her, and while those feelings have become more bearable since the first delirious days after her birth, they have not abated. And when I think of the future she’s doomed to live out, the future we’ve created, I’m filled with rage and sorrow.

Every day brings new pangs of grief. Seeing the world afresh through my daughter’s eyes fills me with delight, but every new discovery is haunted by death.
Read the rest and think on it.

REFLEX

20180712-_IGP2495

Sunday, July 15, 2018

Google AI in Ghana

The question everyone was asking after Jeff Dean (the Google AI and Google Brain team lead) announced that Google AI Lab was coming to Accra was "Why Ghana?"

The answer to that has been clear for a while now but has eluded most people.

When Barack Obama chose Ghana as the first African country to visit during his presidency, most people asked the same question - "Why Ghana?"

The answer is simple; Ghana is the future of Africa.

When I decided to move from Nigeria to Ghana almost a decade ago, most people could not understand why, because they had not visited Ghana. I had, and I fell in love, literally.

I married a Fanti woman. I had already fallen for the country before I met my wife because the place was different.

It didn't have the hardcore market edge of places like Nigeria and South Africa, but it was a place where I could live and work.

Ghana has relatively stable electricity, relative security, and decent internet infrastructure. It also has some of the best tourist destinations in the developing world. All of this is present without any hype. I moved our business there and haven't looked back since.

This choice stands in spite of the challenges the country has gone through in recent times. I have remained and will continue to do so.

Google, however, probably has different reasons for choosing Ghana; Jeff Dean tried to explain that it had to do with the country's robust network of academic institutions as well as its infrastructure.

Google has been a significant investor in strengthening those institutions and the infrastructure around them.

An Alphabet subsidiary named CSquared, spun out of Google, has quietly been laying an extensive fiber optic backbone in Accra and Kampala to help solve the last-mile internet problem that Eric Schmidt mentioned in Barcelona.

The internet speeds I get in the office and at home in Accra are now comparable with California speeds.

Ghana has also become a melting pot for education in the sub-region over the years because of the relative stability of the country and the high standards of its institutions, such as the highly-regarded Ashesi University.

Jordan Peterson interviews Nina Paley


Nina Paley is an animator and artist who makes unbelievably beautiful films. We discussed her life, her views, and her work, interspersing her animation throughout. Nina has done a particularly brilliant job of animating Exodus as a feature-length film (see www.sedermasochism.com, as well as her Vimeo channel https://vimeo.com/user2983855).
The interview took place some time last year, before Seder-Masochism was finished. They also discuss Sita Sings the Blues, religion, her artistic process, and copyright. I've got a bunch of posts on SSTB.

BTW, Nina has a prayer to her muse. She recites it at about 50:20:
Our idea, which art in the ether, that cannot be named, thy vision come, thy will be done, on earth as it is in abstraction. Give us this day, our daily spark, and forgive us our criticism, as we forgive those who critique against us. And lead us not into stagnation, but deliver us from ego; for thine is the vision, the power, and the glory forever. Amen.
Peterson remarks:
I would interpret that as a mantra that opens up the gateway between you and this transcendent force that allows people religious inspiration. And you're doing something like clearing out your ego. And I think it is very interesting that it is associated with something like The Lord's Prayer.

Well, hello there!

20180707-_IGP2373

Two books on universal basic income (UBI)

Annie Lowrey, GIVE PEOPLE MONEY: How a Universal Basic Income Would End Poverty, Revolutionize Work, and Remake the World, 263 pp. Crown. $26.

Andrew Yang, THE WAR ON NORMAL PEOPLE: The Truth About America’s Disappearing Jobs and Why Universal Basic Income Is Our Future, 284 pp. Hachette Books. $28.
From the review:
The two books cover so much of the same terrain that I’m tempted to wonder whether they were written by the same robot, programmed for slightly different levels of giddy enthusiasm. Both cite Martin Luther King Jr., Richard Nixon and Milton Friedman as early supporters of a U.B.I. Both urge that a U.B.I. be set at $1,000 a month for every American. Both point out that with poverty currently defined as an income for a single adult of less than $12,000 a year, such a U.B.I. would, by definition, eliminate poverty for the 41 million Americans now living below the poverty line. It would also improve the bargaining power of millions of low-wage workers — forcing employers to increase wages, add benefits and improve conditions in order to retain them. If a U.B.I. replaced specific programs for the poor, it would also reduce government bureaucracy, minimize government interference in citizens’ lives and allow people to avoid the stigma that often accompanies government assistance. By virtue of being available to all, a U.B.I. would not only guarantee the material existence of everyone in a society; it would establish a baseline for what membership in that society means.

U.B.I.’s critics understandably worry that it would spur millions to drop out of the labor force, induce laziness or at least rob people of the structure and meaning work provides. Both Yang and Lowrey muster substantial research to rebut these claims. I’m not sure they need it. After all, $12,000 a year doesn’t deliver a comfortable life even in the lowest-cost precincts of America, so there would still be plenty of incentive to work. Most of today’s jobs provide very little by way of fulfillment or creativity anyway.

A U.B.I. might give recipients a bit more time to pursue socially beneficial activities, like helping the elderly or attending to kids with special needs or perhaps even starting a new business. Yang suggests it would spur a system of “social credits” in which people trade their spare time by performing various helpful tasks for one another. (I.R.S. be warned.) Surely a U.B.I. would help compensate many people — especially women — for the unpaid labor they already contribute. As Lowrey points out, some 40 million family caregivers in America provide half a trillion dollars of unpaid adult care annually. Child care has become so expensive that one of every three stay-at-home mothers today lives below the poverty line (compared with 14 percent in 1970). [...]

Whatever the source of funds, it seems a safe bet that increased automation will allow the economy to continue to grow, making a U.B.I. more affordable. A U.B.I. would itself generate more consumer spending, stimulating additional economic activity. And less poverty would mean less crime, incarceration and other social costs associated with deprivation.

Saturday, July 14, 2018

Hasui Kawase, Japanese print-maker

Friday, July 13, 2018

Subjectivity vs. Objectivity in the epistemic and ontological senses (John Searle)

I bumped this to the top of the queue because I added another (very interesting) video at the very end, where Searle sets up an analogy between building an artificial heart and building an honest-to-dog artificial brain.

Here’s a video in which John Searle discusses AI with Luciano Floridi. I haven’t watched Floridi’s section, nor the discussion afterward. Here and now I’m interested in a distinction Searle makes, starting at about 9:12:



He points out that the distinction between objectivity and subjectivity has both an ontological and an epistemic aspect, which is generally neglected. This is VERY important. He uses it to clarify what’s going on when we worry about whether or not, or in what respect, computers can be said to think, or be intelligent. That’s a complicated and worthwhile discussion, and if the topic interests you, by all means listen to what Searle has to say.

My immediate interest is somewhat more restricted. For some time now I’ve been complaining that, unfortunately, “subjective” has come to mean something like “idiosyncratically variable among different individuals.” But a more basic meaning is simply, “of or pertaining to subjects.” Well, Searle informs me that I’m distinguishing between “subjective” in the epistemic sense (idiosyncratically variable) and “subjective” in the ontological sense.

Ontologically, subjectivity has to do with existence: with being experienced by a subject. The experience of a literary text is certainly subjective in this sense. And, as the meaning of texts depends on experiencing them, meaning must be subjective as well. This is a matter of ontology.

Claims about the meaning of a text must necessarily be observer relative and so those claims are epistemically subjective. There are no objective claims about the meanings of texts, though some claims may well be intersubjectively held among some group of readers (an interpretive community in Fish’s sense?).

My claim about literary form is that it is an objective property of the interaction between texts and readers. It is thus not subjective in either the ontological or epistemic senses. By studying the form, however, we can learn about how literary subjectivity works. For literary subjectivity is a real phenomenon of the human world.

Friday Fotos: Wild Art in Jersey City

20180712-_IGP2503

20180712-_IGP2483

20180712-_IGP2504

20180712-_IGP2492

20180712-_IGP2496

Training your mind, Michael Nielsen on Anki and human augmentation

Michael Nielsen, an AI researcher at Y Combinator Research, has written a long essay, Augmenting Long-term Memory, which is about Anki, a computer-based tool for training long-term memory.
In this essay we investigate personal memory systems, that is, systems designed to improve the long-term memory of a single person. In the first part of the essay I describe my personal experience using such a system, named Anki. As we'll see, Anki can be used to remember almost anything. That is, Anki makes memory a choice, rather than a haphazard event, to be left to chance. I'll discuss how to use Anki to understand research papers, books, and much else. And I'll describe numerous patterns and anti-patterns for Anki use. While Anki is an extremely simple program, it's possible to develop virtuoso skill using Anki, a skill aimed at understanding complex material in depth, not just memorizing simple facts.

The second part of the essay discusses personal memory systems in general. Many people treat memory ambivalently or even disparagingly as a cognitive skill: for instance, people often talk of “rote memory” as though it's inferior to more advanced kinds of understanding. I'll argue against this point of view, and make a case that memory is central to problem solving and creativity. Also in this second part, we'll discuss the role of cognitive science in building personal memory systems and, more generally, in building systems to augment human cognition. In a future essay, Toward a Young Lady's Illustrated Primer, I will describe more ideas for personal memory systems.

The essay is unusual in style. It's not a conventional cognitive science paper, i.e., a study of human memory and how it works. Nor is it a computer systems design paper, though prototyping systems is my own main interest. Rather, the essay is a distillation of informal, ad hoc observations and rules of thumb about how personal memory systems work. I wanted to understand those as preparation for building systems of my own. As I collected these observations it seemed they may be of interest to others. You can reasonably think of the essay as a how-to guide aimed at helping develop virtuoso skills with personal memory systems. But since writing such a guide wasn't my primary purpose, it may come across as a more-than-you-ever-wanted-to-know guide.

To conclude this introduction, a few words on what the essay won't cover. I will only briefly discuss visualization techniques such as memory palaces and the method of loci. And the essay won't describe the use of pharmaceuticals to improve memory, nor possible future brain-computer interfaces to augment memory. Those all need a separate treatment. But, as we shall see, there are already powerful ideas about personal memory systems based solely on the structuring and presentation of information.
The method of loci is well-known, and I'm sure you can come up with a lot of information just by googling the term. You might even come up with my encyclopedia article, Visual Thinking, where I treat it as one form of visual thinking among others.
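Nielsen’s essay doesn’t pause over the scheduling mechanics, but the core of Anki-style spaced repetition is simple enough to sketch. What follows is loosely based on the public SM-2 algorithm; Anki’s actual scheduler differs in its details:

```python
# A minimal spaced-repetition scheduler, loosely based on the public SM-2
# algorithm (Anki's real scheduler is more elaborate). Each successful
# review stretches the interval by an "ease" factor; a lapse resets the card.
from dataclasses import dataclass

@dataclass
class Card:
    interval: float = 1.0   # days until the next review
    ease: float = 2.5       # growth factor, nudged up or down by performance

def review(card: Card, quality: int) -> Card:
    """quality: 0 (total blackout) through 5 (perfect recall)."""
    if quality < 3:  # lapse: start over and make the card a bit "harder"
        return Card(interval=1.0, ease=max(1.3, card.ease - 0.2))
    ease = max(1.3, card.ease + 0.1 - (5 - quality) * (0.08 + (5 - quality) * 0.02))
    return Card(interval=card.interval * ease, ease=ease)

# With steady quality-4 answers the intervals run roughly 1, 2.5, 6, 16, ...
# days -- which is why a few minutes a day can maintain thousands of cards.
```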

Before returning to Nielsen and Anki, I want to digress to a different form of mental training. When I was young people didn't have personal computers, nor even small hand-held electronic calculators. If you had to make a lot of calculations, you might have used a desktop mechanical calculator or a slide rule–my father had become so fluent with his that he didn't even have to look at it while doing complex multi-step calculations–or you might have mastered mental calculation.

Some years ago I reviewed biographies of John von Neumann and Richard Feynman; both books mentioned that their subjects were wizards of mental calculation. I observed:
Feynman and von Neumann worked in fields where calculational facility was widespread and both were among the very best at mental mathematics. In itself such skill has no deep intellectual significance. Doing it depends on knowing a vast collection of unremarkable calculational facts and techniques and knowing one's way around in this vast collection. Before the proliferation of electronic calculators the lore of mental math used to be collected into books on mental calculation. Virtuosity here may have gotten you mentioned in "Ripley's Believe it or Not" or a spot on a TV show, but it wasn't a vehicle for profound insight into the workings of the universe.

Yet, this kind of skill was so widespread in the scientific and engineering world that one has to wonder whether there is some connection between mental calculation, which has largely been replaced by electronic calculators and computers, and the conceptual style, which isn't going to be replaced by computers anytime soon. Perhaps the domain of mental calculations served as a matrix in which the conceptual style of Feynman, von Neumann, (and their peers and colleagues) was nurtured.
Then, citing the work of Jean Piaget, I suggest ever so briefly why that might be so. However, once powerful handheld calculators became widely available, skill in mental calculation was no longer necessary. These days one may hear of savants who have such skills, but that's pretty much it.

Returning to Nielsen and Anki: as his essay evolves, he suggests that more than mere memory is at stake. After explaining Anki basics he describes how he used Anki to help him learn enough about AlphaGo–the first computer system to beat the best human experts at Go–to write an article for Quanta Magazine. Alas:
I knew nothing about the game of Go, or about many of the ideas used by AlphaGo, based on a field known as reinforcement learning. I was going to need to learn this material from scratch, and to write a good article I was going to need to really understand the underlying technical material.
He then explains what he did. The upshot:
This entire process took a few days of my time, spread over a few weeks. That's a lot of work. However, the payoff was that I got a pretty good basic grounding in modern deep reinforcement learning. This is an immensely important field, of great use in robotics, and many researchers believe it will play an important role in achieving general artificial intelligence. With a few days work I'd gone from knowing nothing about deep reinforcement learning to a durable understanding of a key paper in the field, a paper that made use of many techniques that were used across the entire field. Of course, I was still a long way from being an expert. There were many important details about AlphaGo I hadn't understood, and I would have had to do far more work to build my own system in the area. But this foundational kind of understanding is a good basis on which to build deeper expertise.
He then explains how he used Anki to do shallow reads of papers. I'm not going to excerpt or summarize that material, but I'll point out that doing shallow reads is a very useful skill. When I was in graduate school I prepared abstracts of the current literature for The Journal of Computational Linguistics. While some articles and tech reports had good abstracts, many did not. In those cases I'd have to read the article and write an abstract; I gave myself an hour, perhaps a bit more, to write a 250-word abstract. I gave those articles a shallow read. How'd I do it? Hmmmm... I'll get back to you on that. It's quite possible that Nielsen's Anki process is better than the one I used.

Yet:
Really good resources are worth investing time in. But most papers don't fit this pattern, and you quickly saturate. If you feel you could easily find something more rewarding to read, switch over. It's worth deliberately practicing such switches, to avoid building a counter-productive habit of completionism in your reading. It's nearly always possible to read deeper into a paper, but that doesn't mean you can't easily be getting more value elsewhere. It's a failure mode to spend too long reading unimportant papers.
My process was certainly good enough to make that go/no-go decision.

Nielsen then goes on to discuss this and that use of Anki, suggesting:
Anki isn't just a tool for memorizing simple facts. It's a tool for understanding almost anything. It's a common misconception that Anki is just for memorizing simple raw facts, things like vocabulary items and basic definitions. But as we've seen, it's possible to use Anki for much more advanced types of understanding. My questions about AlphaGo began with simple questions such as “How large is a Go board?”, and ended with high-level conceptual questions about the design of the AlphaGo systems – on subjects such as how AlphaGo avoided over-generalizing from training data, the limitations of convolutional neural networks, and so on.

Part of developing Anki as a virtuoso skill is cultivating the ability to use it for types of understanding beyond basic facts. Indeed, many of the observations I've made (and will make, below) about how to use Anki are really about what it means to understand something.
That's the good stuff.

Where's he going? Human augmentation:
The human-computer interaction (HCI) community has tried to achieve it in the systems they build, not just for memory, but for augmenting human cognition in general. But I don't think it's worked so well. It seems to me that they've given up a lot of boldness and imagination and aspiration in their design. [Nielsen, in a sidenote: As an outsider, I'm aware this comment won't make me any friends within the HCI community. On the other hand, I don't think it does any good to be silent, either. When I look at major events within the community, such as the CHI conference, the overwhelming majority of papers seem timid when compared to early work on augmentation. It's telling that publishing conventional static papers (pdf, not even interactive JavaScript and HTML) is still so central to the field.] At the same time, they're not doing full-fledged cognitive science either – they're not developing a detailed understanding of the mind. Finding the right relationship between imaginative design and cognitive science is a core problem for work on augmentation, and it's not trivial.

In a similar vein, it's tempting to imagine cognitive scientists starting to build systems. While this may sometimes work, I think it's unlikely to yield good results in most cases. Building effective systems, even prototypes, is difficult. Cognitive scientists for the most part lack the skills and the design imagination to do it well.

This suggests to me the need for a separate field of human augmentation. That field will take input from cognitive science. But it will fundamentally be a design science, oriented toward bold, imaginative design, and building systems from prototype to large-scale deployment.
* * * * *

See also my post, Beyond "AI" – toward a new engineering discipline, in which I excerpt Michael Jordan, "Artificial Intelligence — The Revolution Hasn’t Happened Yet". Jordan discusses human augmentation under the twin rubrics of "Intelligence Augmentation" and "Intelligence Infrastructure".

Thursday, July 12, 2018

Stairway to the sun

20171207-_IGP1500

Innovative governance, side-slipping the nation-state

Mark Lutter, Local Governments Are Changing the World, Cato Unbound, July 11, 2018.
The innovative governance movement is interested in improving governance via the creation of new jurisdictions with significant degrees of autonomy. These new jurisdictions could import successful institutions to create the conditions for catch up growth. Or the new jurisdictions could experiment with new forms of governance, to push the frontier. The overarching thesis of innovative governance is that the existing equilibrium of political units is overly resistant to change, and small, new jurisdictions, particularly on greenfield sites, are an effective mechanism to institutional improvements.

The modern innovative governance movement was launched ten years ago when Patri Friedman and Wayne Gramlich created the Seasteading Institute. Critical of the lack of success of traditional libertarian attempts at social change, the Seasteading Institute argued that new societies, “seasteads,” could be created in international waters. Seasteads would provide a blank slate for institutional innovation and experimentation. Successful models of governance could attract new residents, while unsuccessful ones would fail. This iterative, evolutionary process of governance improvements could help push the frontier of the optimal type of government. [...]

Historically, the innovative governance movement has been heavily influenced, and arguably led, by techno-libertarians, with Romer being the obvious exception. This is beginning to change. However, while the techno-libertarian attitude was arguably important for the vision, it hampered the development of more practical capacities necessary for the creation of charter cities. [...]

Luckily, things are changing, making charter cities more viable than they were ten years ago. A handful of influential groups are beginning to think about charter cities. That said, they’re coming at it from different angles, and few have the full vision. However, with proper coordination, it’s possible to rapidly, within 2 to 3 years, create the environment within which several charter city projects can be launched. Let’s consider some of the perspectives at hand.

Economists: Most development economists are sympathetic to charter cities. While some are strongly critical, there is nevertheless a general sense that charter cities are an idea worth trying. The downside is that economists don’t get career points for discussing charter cities. For example, Romer, who is frequently listed as a contender for the Nobel Prize, gave a TED talk on charter cities rather than publishing an academic article.

Silicon Valley: Silicon Valley is interested in cities. YCombinator made a big splash about building a city, though it was later toned down to research. Seasteading is big enough to be made fun of on HBO’s Silicon Valley. Multiple unicorn founders have told me they are building up a war chest such that they can build a city when they exit.

Humanitarians: While the refugee crisis has dropped out of the news recently, there remains interest in improving refugee camps via special economic zones and creating charter cities as a mechanism for economic development to lower the demand for emigration. The Jordan Compact gives aid and favorable grants to Jordan in exchange for work rights for refugees and increasing refugee participation in special economic zones. Kilian Kleinschmidt, who formerly ran the Za’atari refugee camp in Jordan and is on the Board of Advisers of my nonprofit, the Center for Innovative Governance Research, argues for special development zones for migrants. And of course, there is the aforementioned nonprofit Refugee Cities. Michael Castle-Miller is developing the legal and institutional frameworks for these charter cities via his teams at Politas Consulting.

New-city projects: There are dozens of new city projects around the world. These new city projects are real estate plays, building satellite cities of 50,000 or more residents. Investments in these new cities is rarely under $1 billion. Nkwashi, a new city project in Zambia, is one of my favorite examples. Mwiya Musokotwane, the CEO, is on the Board of Advisers for the Center for Innovative Governance Research.

Some of the new city projects are beginning to think about governance, which is a natural path as their revenues are based on land values.

A final interesting development, which is hard to place or categorize, is that Anders Fogh Rasmussen, the former Prime Minister of Denmark and former Secretary General of NATO, also has an interest in charter cities and special economic zones. He recently launched a new foundation, the Alliance of Democracies Foundation. One of the key initiatives of the Foundation is Expeditionary Economics, which is focusing, as previously mentioned, on charter cities and special economic zones.

Wednesday, July 11, 2018

Changizi on what the arts reveal about the mind

TH: Stepping back for a moment, how do you conceive of the relationship between the arts and sciences in general? Are there genuine tensions there, or is that idea a mere artifact of history and institutional traditions and ruts?

MC: Often this question ends up about whether science can ever come to understand the arts. But I think this misses the more important arrow here. It’s not about how science can illuminate the arts, but the other way around . . . science is simple. Experimentally, we’re ingenious in our controls, but even so the complexity of stimuli is ridiculously simple. The number of parameters that we can play with is only a handful at a time. If some fantastically complex stimulus turns out to hit some sweet spot for us, our careful lab manipulations won’t ever find it.

But artists can find and have found these sweet spots. Not any artist alone, necessarily. But together they act as a cultural evolutionary process that finds sweet spots of stimuli that evoke humans in some way, and in ways science would never find. It’s the arts that discovered music, not science. In this light, the arts is a massive laboratory experiment of sorts. Not well-controlled in the usual lab sense. But one capable, in a sense, of making scientific discoveries. This is why, in my own research, I often use massive amounts of artistic stimuli as data. What have these cultural-artistic tendencies discovered? What does it tell us about the brain, or about the innate mechanisms we possess?
A few years ago I wrote a pair of posts that speak to this:

Tuesday, July 10, 2018

Pier 13 in Hoboken (Verrazano-Narrows Bridge in the background)

20180630-P1150436

Interpreting Melania’s Jacket [#melaniasjacket #melania]

A couple of weeks ago First Lady Melania Trump was photographed in a jacket which had “I REALLY DON’T CARE, DO U?” written on the back. Chaos ensued. Well, not exactly chaos, but rampant speculation about what she meant by that statement.

The statement itself seems harmless enough, a vague undirected statement of detachment or nonchalance. But such a statement seems, on the surface, in conflict with the context in which she wore the jacket. In the photo above (to the left) she is boarding an airplane to fly to a detention center for immigrant children. The children’s families had attempted an illegal border crossing and been caught, and the children had been separated from their parents. This policy was (and is) enormously controversial and is strongly identified with her husband, President Donald Trump, who’d made cracking down on immigration a cornerstone of his policy. That controversy was at a fever pitch when the First Lady got on the plane.

Such a trip is ordinarily an expression of sympathy. But if the First Lady was sympathetic to the children, then why wear a jacket that expresses detachment on its back? Is she saying, in effect, that she’s not at all concerned about/for the children? If so, isn’t that a terrible thing? But then, isn’t her husband a terrible president? And thus the full force of anti-Trump sentiment became directed at Melania and her jacket.

Abstractly considered, it’s possible that she just grabbed the jacket on the way out the door without thinking about it. But, as the tweet above points out, she once made a living by wearing clothes for the camera. It seems unlikely that she’d be so cavalier. She had some intention, but what?

I surely don’t know. It’s possible that she had something specific in mind, for a specific audience. It’s also possible that she thought about it and picked that jacket on a purely intuitive basis, sure that it was just the thing, but without an explicit sense of what that thing is. I don’t know.

* * * * *

To what degree or in what way are literary texts like the writing on Melania’s jacket? So-called formalists have argued that literary texts contain their meaning within themselves. Hence we don’t need to know anything about the author or the historical context in order to determine the meaning of the text. But not all literary critics are formalists, not by a long shot. For these critics, context is essential.

In the case of Melania’s jacket, context, yes, is essential. But it is not definitive, not for those of us without access to Melania’s mind. And maybe not even for Melania herself. She had a certain intention when she first put the jacket on and was photographed wearing it. Has that intention remained intact through the ensuing controversy?

Artificial General Intelligence (AGI): Curiouser and Curiouser [#AI #Mars]

A few years ago physicist David Deutsch mused on the possibility of fully general artificial intelligence. The bottom line: we don't know what we're doing. Though he takes a roundabout way to get there.
In 1950, Turing expected that by the year 2000, ‘one will be able to speak of machines thinking without expecting to be contradicted.’ In 1968, Arthur C. Clarke expected it by 2001. Yet today in 2012 no one is any better at programming an AGI than Turing himself would have been.

This does not surprise people in the first camp, the dwindling band of opponents of the very possibility of AGI. But for the people in the other camp (the AGI-is-imminent one) such a history of failure cries out to be explained — or, at least, to be rationalised away. And indeed, unfazed by the fact that they could never induce such rationalisations from experience as they expect their AGIs to do, they have thought of many.
It certainly seems that way, that we're no closer than we were 50 years ago. At least we've managed to toss out a lot of ideas that won't work. Or have we?

Deutsch thinks the problem is philosophical: "I am convinced that the whole problem of developing AGIs is a matter of philosophy, not computer science or neurophysiology, and that the philosophical progress that is essential to their future integration is also a prerequisite for developing them in the first place."

He places great emphasis on the ideas of Karl Popper, whom I admire, but I don't quite see what Deutsch sees in Popper. Still, he manages to marshal an interesting quote:
As Popper wrote (in the context of scientific discovery, but it applies equally to the programming of AGIs and the education of children): ‘there is no such thing as instruction from without … We do not discover new facts or new effects by copying them, or by inferring them inductively from observation, or by any other method of instruction by the environment. We use, rather, the method of trial and the elimination of error.’ That is to say, conjecture and criticism. Learning must be something that newly created intelligences do, and control, for themselves.
That is to say, minds are built from within. And the building is done by fundamental units that are themselves alive and so trying to achieve something, if only the minimal something of remaining alive. Those fundamental units are, of course, cells.

Monday, July 9, 2018

In the land of flowers

20180707-_IGP2374

20180707-_IGP2387

20180707-_IGP2380

"Positivism", "German idealism" and method in the humanities (and sciences) [#DH]


So, what is positivism?

Simon During @SimonDuring
what is positivism for you?

Ted Underwood @Ted_Underwood
Beyond simply "a term invented by Comte," I think it might be fair to use it to describe epistemologies that claim experimental investigation produce certainty ("positive knowledge"), and that insist on its utterly value-free, objective character.

But I think the emphasis on certainty was never broadly shared. Even in the 19c, many scientists emphasized the tentative character of experimental inquiry, and that emphasis has only grown with the rising importance of statistics, which is +

often called "the science of uncertainty." Statistical models tend to be probabilistic rather than deterministic, and Bayesian models are avowedly subjective (founded on priors) rather than objective. So "positivist" seems to me 200 yrs out of date.

In practice, I think a lot of ppl use "positivist" to mean "anyone who doesn't accept Dilthey's premise of a firm divide between natural and human sciences." But the right term for that, imho, is "someone who disagrees with Wilhelm Dilthey"! ;)

The guiding slogan for contemporary quant. social science is George Box: "All models are wrong." That's as remote in spirit from Comte's emphasis on positive verifiability as Comte was from Francis Bacon. In fact, it would make more sense to call us all Baconians.
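For what it’s worth, that “founded on priors” point fits in a single formula. Bayes’ theorem,

$$ P(H \mid D) = \frac{P(D \mid H)\, P(H)}{P(D)} $$

makes the analyst’s prior degree of belief P(H) in hypothesis H an explicit input alongside the data D–which is exactly the avowedly subjective element Underwood has in mind.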

The varieties of positivism

Scott Enderle @scottenderle
I used to think there was a wrong and a right way to use "positivism," but adopted a policy of clemency towards abusers. Then I learned the definition of "legal positivism" and decided the whole issue is confused beyond redemption.

Ted Underwood @Ted_Underwood
I suspect philosophical definitions are mostly retconned. It is in practice just one of those words like "determined : stubborn : pigheaded." People write "scientific" or "empirical"—think to themselves "but that sounds good"—then scratch it out and write "positivist."

Saturday, July 7, 2018

Three flowers

20180707-_IGP2342

20180707-_IGP2352

20180707-_IGP2356

The Problematic of Description

From five years ago....

We can frame the problematic of description with a remark David Bordwell made while discussing the state of film criticism. Following Monroe Beardsley, he divides the critic’s activities into four categories: description, analysis, interpretation, and evaluation. He then characterizes reviews, critical essays, and academic articles with respect to those four categories of activity. All of them employ description. Beyond that, the sense of his usage is that description is the most primitive or elementary of these activities.

In Bordwell’s recasting:
Critics describe artworks. Film critics summarize plots, describe scenes, characterize performances or music or visual style. These descriptions are seldom simply neutral summaries. They’re usually perspective-driven, aiding a larger point that the critic wants to make. A description can be coldly objective or warmly partisan.
The question I want to pose is whether or not neutral–objective, if you will–description is possible. While the issue interests me in its full generality, I’ve been particularly concerned about the descriptive characterization of literary texts. In the extreme, of course, we have that sort of postmodernism that tends to insist the text itself is just a bunch of marks on a surface and it’s all interpretation from there.

For those interested in, say, machine vision, that in itself is a practical and difficult problem. It is one thing to train a computer to recognize mechanically printed text; that is now routine business. But computer recognition of handwriting is still a difficult problem. That is to say, in that particular intellectual context, merely recognizing the words on the page (or, for that matter, speech sounds in the air) is a deep and challenging problem. It is by no means obvious, however, that such an attitude is reasonable for the general practice of literary criticism. Is it possible to characterize plot structures and semantic structures in a way that is neutral and objective?

Therein lies the problem. In the penultimate chapter of Is There a Text in This Class? Stanley Fish takes up Stephen Booth’s work on Shakespeare’s sonnets, noting that Booth disavows any interpretive aims but declares that he intends simply to describe the sonnets. Fish observes (p. 353):
The basic gesture, then, is to disavow interpretation in favor of simply presenting the text; but it is actually a gesture in which one set of interpretive principles is replaced by another that happens to claim for itself the virtue of not being an interpretation at all. The claim, however, is an impossible one since in order “simply to present” the text, one must at the very least describe it . . . and description can occur only within a stipulative understanding of what there is to be described, an understanding that will produce the object of its attention.
That is to say, Booth, among others Fish discusses, seems to be claiming that description is not a starting point, but the end point. And further, that it has unmediated access to the text, something we know, in fact, to be impossible. There is no description without (logically prior) interpretive activity of some sort. Literary texts, whatever they are, are exceedingly complex. Just what we describe, and how we describe it, these are not simple matters.

Friday, July 6, 2018

Fountain with droplets

20180630-P1150408

Hedging THE EVENT: Yes, the super-rich are different from you and me, they think they can buy their way out of mortality

Douglas Rushkoff was recently paid "about half my annual professor’s salary" to talk with "five super-wealthy guys — yes, all men — from the upper echelon of the hedge fund world."
Which region will be less impacted by the coming climate crisis: New Zealand or Alaska? Is Google really building Ray Kurzweil a home for his brain, and will his consciousness live through the transition, or will it die and be reborn as a whole new one? Finally, the CEO of a brokerage house explained that he had nearly completed building his own underground bunker system and asked, “How do I maintain authority over my security force after the event?”

The Event. That was their euphemism for the environmental collapse, social unrest, nuclear explosion, unstoppable virus, or Mr. Robot hack that takes everything down.

This single question occupied us for the rest of the hour. They knew armed guards would be required to protect their compounds from the angry mobs. But how would they pay the guards once money was worthless? What would stop the guards from choosing their own leader? The billionaires considered using special combination locks on the food supply that only they knew. Or making guards wear disciplinary collars of some kind in return for their survival. Or maybe building robots to serve as guards and workers — if that technology could be developed in time.

That’s when it hit me: At least as far as these gentlemen were concerned, this was a talk about the future of technology.
This, of course, is nuts.
There’s nothing wrong with madly optimistic appraisals of how technology might benefit human society. But the current drive for a post-human utopia is something else. It’s less a vision for the wholesale migration of humanity to a new state of being than a quest to transcend all that is human: the body, interdependence, compassion, vulnerability, and complexity. As technology philosophers have been pointing out for years, now, the transhumanist vision too easily reduces all of reality to data, concluding that “humans are nothing but information-processing objects.”
How'd we come to this?
Of course, it wasn’t always this way. There was a brief moment, in the early 1990s, when the digital future felt open-ended and up for our invention. Technology was becoming a playground for the counterculture, who saw in it the opportunity to create a more inclusive, distributed, and pro-human future. But established business interests only saw new potentials for the same old extraction, and too many technologists were seduced by unicorn IPOs. Digital futures became understood more like stock futures or cotton futures — something to predict and make bets on. So nearly every speech, article, study, documentary, or white paper was seen as relevant only insofar as it pointed to a ticker symbol. The future became less a thing we create through our present-day choices or hopes for humankind than a predestined scenario we bet on with our venture capital but arrive at passively.

This freed everyone from the moral implications of their activities. Technology development became less a story of collective flourishing than personal survival. [...] So instead of considering the practical ethics of impoverishing and exploiting the many in the name of the few, most academics, journalists, and science-fiction writers instead considered much more abstract and fanciful conundrums: Is it fair for a stock trader to use smart drugs? Should children get implants for foreign languages? Do we want autonomous vehicles to prioritize the lives of pedestrians over those of its passengers? Should the first Mars colonies be run as democracies? Does changing my DNA undermine my identity? Should robots have rights?

Asking these sorts of questions, while philosophically entertaining, is a poor substitute for wrestling with the real moral quandaries associated with unbridled technological development in the name of corporate capitalism.

Thursday, July 5, 2018

Just a reminder, American Heartache

IMGP6464

Eric Weinstein sees the need for hypercapitalism linked with hypersocialism

Sean Illing interviews Eric Weinstein (who, incidentally, coined the term "intellectual dark web") in Vox. From the introduction:
Weinstein’s thinking reflects a growing awareness in Silicon Valley of the challenges faced by capitalist society. Technology will continue to upend careers, workers across fields will be increasingly displaced, and it’s likely that many jobs lost will not be replaced.

Hence many technologists and entrepreneurs in Silicon Valley are converging on ideas like universal basic income as a way to mitigate the adverse effects of technological innovation.
On capitalism and socialism:
Sean Illing
Let's talk about that. What does a hybrid of capitalism and socialism look like?

Eric Weinstein
I don't think we know what it looks like. I believe capitalism will need to be much more unfettered. Certain fields will need to undergo a process of radical deregulation in order to give the minority of minds that are capable of our greatest feats of creation the leeway to experiment and to play, as they deliver us the wonders on which our future economy will be based.

By the same token, we have to understand that our population is not a collection of workers to be input to the machine of capitalism, but rather a nation of souls whose dignity, well-being, and health must be considered on independent, humanitarian terms. Now, that does not mean we can afford to indulge in national welfare of a kind that would rob our most vulnerable of a dignity that has previously been supplied by the workplace.

People will have to be engaged in socially positive activities, but not all of those socially positive activities may be able to command a sufficient share of the market to consume at an appropriate level, and so I think we're going to have to augment the hypercapitalism which will provide the growth of the hypersocialism based on both dignity and need.

Sean Illing
I agree with most of that, but I’m not sure we’re prepared to adapt to these new circumstances quickly enough to matter. What you’re describing is a near-revolutionary shift in politics and culture, and that’s not something we can do on command.

Eric Weinstein
I believe that once our top creative class is unshackled from those impediments which are socially negative, they will be able to choose whether capitalism proceeds by evolution or revolution, and I am hopeful that the enlightened self-interest of the billionaire class will cause them to take the enlightened path toward finding a rethinking of work that honors the vast majority of fellow citizens and humans on which their country depends.

Beyond the 'two cultures'

Jennifer Summit and Blakey Vermeule, The ‘Two Cultures’ Fallacy, The Chronicle of Higher Education, July 1, 2018. Final three paragraphs:
As researchers from the Institute for the Future suggested in their "Future Work Skills 2020" report, "While throughout the 20th century, ever-greater specialization was encouraged, the new century will see transdisciplinary approaches take center stage." Projects that bring together scientists, engineers, artists, humanists, and social scientists in ways that bridge traditional disciplinary divides produce fresh approaches to complex questions. New knowledge requires new forms of education. Where 20th-century paradigms of teaching and learning emphasized disciplinary specialization, we now need "a new culture of learning" — to quote the title of Douglas Thomas and John Seely Brown’s 2011 book.

The challenge we face as educators is how to restore imagination and creativity to students who have come to associate education with the lack of those qualities. Rather than offering lip service and window dressing, we need to step far outside our dominant models of learning, thinking, and living. Schools discourage creative thinking, the educator Ken Robinson observes, in large part through their tendency to elevate "some disciplines over others." To counter this, he suggests, "we need to eliminate the existing hierarchy of subjects."

Rather than reinforce boundaries between disciplines and the value-laden hierarchies that keep them in place, we need to accept that studies in "imagination" and "humanity" are no less vital to work and citizenship than those of "facts" and "machines." This is the time for humanists and scientists, fuzzies and techies, to overcome the divisions of knowledge, culture, and value that separate them. Doing so will transform the disciplines themselves, and displace the oppositional framework that has for so long defined and divided them.

Wednesday, July 4, 2018

What is India, a nation, an empire?

Arundhati Roy, In What Language does Rain Fall over Tormented Cities?, Raiot, June 28, 2018. This is the text of the W. G. Sebald Lecture on Literary Translation, delivered June 5, 2018, The British Library, London.
I fell to wondering what my mother tongue actually was. What was—is—the politically correct, culturally apposite, and morally appropriate language in which I ought to think and write? It occurred to me that my mother was actually an alien, with fewer arms than Kali perhaps but many more tongues. English is certainly one of them. My English has been widened and deepened by the rhythms and cadences of my alien mother’s other tongues. (I say alien because there’s not much that is organic about her. Her nation-shaped body was first violently assimilated and then violently dismembered by an imperial British quill. I also say alien because the violence unleashed in her name on those who do not wish to belong to her (Kashmiris, for example), as well as on those who do (Indian Muslims and Dalits, for example), makes her an extremely un-motherly mother.)

How many tongues does she have? Officially, approximately 780, only twenty-two of which are formally recognized by the Indian Constitution, while another thirty-eight are waiting to be accorded that status. Each has its own history of colonizing or being colonized. There are few pure victims and pure perpetrators. There is no national language.
And what is India?
Fundamentally, India is in many ways still an empire, its territories held together by its armed forces and administered from Delhi, which, for most of her subjects, is as distant as any foreign metropole. If India had broken up into language republics, like countries in Europe, then perhaps English could be done away with. But even still, not really, not any time soon.

As things stand, English, although it is spoken by a small minority (which still numbers in the tens of millions), is the language of mobility, of opportunity, of the courts, of the national press, the legal fraternity, of science, engineering, and international communication. It is the language of privilege and exclusion. It is also the language of emancipation, the language in which privilege has been eloquently denounced. Annihilation of Caste by Dr. B. R. Ambedkar, the most widely read, widely translated, and devastating denunciation of the Hindu caste system, was written in English. It revolutionized the debate on perhaps the most brutal system of institutionalized injustice that any society has ever dreamed up. How different things would have been had the privileged castes managed to contain Ambedkar’s writing in a language that only his own caste and community could read.

Inspired by him, many Dalit activists today see the denial of a quality English education to the underprivileged (in the name of nationalism or anticolonialism) as a continuation of the Brahmin tradition of denying education and literacy—or, for that matter, simply the right to pursue knowledge and accumulate wealth—to people they consider “shudras” and “outcastes.” To make this point, in 2011 the Dalit scholar Chandra Bhan Prasad built a village temple to the Dalit Goddess of English. “She is the symbol of Dalit Renaissance,” he said. “We will use English to rise up the ladder and become free forever.”
Her second novel, The Ministry of Utmost Happiness, "has been—is being—translated into forty-eight languages."
Given the setting of the novel, the Hindi and Urdu translations are, in part, a sort of homecoming. I soon learned that this did nothing to ease the task of the translators. To give you an example: The human body and its organs play an important part in The Ministry. We found that Urdu, that most exquisite of languages, which has more words for love than perhaps any other language in the world, has no word for vagina. There are words like the Arabic furj, which is considered to be archaic and more or less obsolete, and there are euphemisms that range in meaning from “hidden part,” “breathing hole,” “vent,” and “path to the uterus.” The most commonly used one is aurat ki sharamgah. A woman’s place of shame. As you can see, we had trouble on our hands. Before we rush to judgment, we must remember that pudenda in Latin means “that whereof one should feel shame.” In Danish, I was told by my translator, the phrase is “lips of shame.” So, Adam and Eve are alive and well, their fig leaves firmly in place.
H/t 3 Quarks Daily.

Could Moby Dick have etched this on his own tooth?

_IGP1152

Tuesday, July 3, 2018

Quick notes on ‘meaning’ in literary texts

When a literary critic explicates the ‘meaning’ of a literary text, what is it that they are doing? What is the relationship between the meaning thus explicated and the text which is supposed to somehow ‘harbor’ it? A problematic matter, no?

Early in my career I seem to have converged on the idea that this meaning was something one took up in the course of reading the text, but that it was not consciously available. So the job of the critic was to state the content of this unconscious meaning. By convention such meanings are said to be ‘hidden’. Hidden how? Where?

Psychoanalytic thought provides perhaps the clearest ‘model’ for such an approach. When a psychoanalyst approaches a dream, the dream has a manifest content and it is the analyst’s job to help the patient arrive at its latent content (or meaning). So it is with the critic and the text. The text itself is the manifest content, which can be summarized and paraphrased. The latent content is something else; it must be discovered, dug out if you will, by other means. But when one reads a text that latent content is there in the mind along with the manifest content. It’s just not consciously available.

Thus, when I was in graduate school in the mid-1970s I studied computational semantics/psycholinguistics and created an explicit model of a mind reading a Shakespeare sonnet (#129, The expense of spirit). That model was no more than an intellectual toy, but toys are useful if recognized as such. 1) It was explicit in a way that standard accounts of (literary) reading were not and are not. 2) Whatever it was, it certainly was NOT a reading or an interpretation of the text.
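To give a sense of what "explicit" means here, consider a toy fragment, emphatically not the actual 1970s model, in which what the reader understands is written down as a small network of concepts and labeled relations that a program can traverse. All node and relation names below are invented for illustration:

```python
# A toy semantic network (invented for illustration; not the actual
# model): concepts are nodes, and labeled relations link them.
network = {
    ("lust", "is-a"): "desire",
    ("lust", "before-consummation"): "pursued",
    ("lust", "after-consummation"): "despised",
    ("desire", "felt-by"): "person",
}

def describe(concept):
    """List everything the network explicitly asserts about a concept."""
    return [f"{node} --{relation}--> {target}"
            for (node, relation), target in network.items()
            if node == concept]

for assertion in describe("lust"):
    print(assertion)
```

The point of such a toy is that every claim about what is in the reader's mind must be written down as a node or a link; nothing is left to the interpreter's tact.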

If what happens in the mind is something like what is made explicit in that model, then what are ordinary interpretations? Whatever they are, they don’t seem to be accounts of the inner workings of something that works like that model does.

So then, what are interpretations? They flow from the conviction that there is something important about these texts, their meaning, and that we have to do SOMETHING to register this. That something has turned out to be interpretation. Now, back in the days when I was first learning my craft the argument was often made that the meanings critics find in texts are, in fact, put there by the critics themselves, and so are not really in the text at all. This argument was countered by the claim that the object of critical scrutiny was the author’s intended meaning. But if THAT’s what the author intended, why didn’t they SAY that instead of writing this poem, play, or novel? And so it went, round and round.

The questions have never been answered or resolved. They’ve just, for the most part, been abandoned.

Should we also abandon the activity of interpretation? Perhaps. But only perhaps. If we’re going to continue it we need to come up with a rationale.

Red flowers

20180630-P1150460

Monday, July 2, 2018

Stanley Fish, machine and mechanism, and the poverty of his intentionalist search for meaning

Over on the Humanist Discussion Group we’ve been examining Stanley Fish’s criticisms of, for the most part, computational criticism (which he frames as a criticism of digital humanities as a whole) – check the archives for June 2018 (see entries entitled “Fish'ing for fatal flaws”). I want to look at something closely related, his sense of machine and mechanism.

Mechanism and Intention

In his seminal essay, “Literature in the Reader: Affective Stylistics”[1], Fish made the general point that the pattern of expectations, some satisfied and some not, which is set up in the reader’s mind in the process of reading literary texts is essential to the meaning of those texts. Hence any adequate analytic method must describe that essentially temporal pattern. Of the proposed method, Fish asserts:
Essentially what the method does is slow down the reading experience so that “events” one does not notice in normal time, but which do occur, are brought before our analytical attentions. It is as if a slow motion camera with an automatic stop action effect were recording our linguistic experiences and presenting them to us for viewing. Of course the value of such a procedure is predicated on the idea of meaning as an event, something that is happening between words and in the reader’s mind...
A bit further on Fish asserts that “What is required, then, is a method, a machine if you will, which in its operation makes observable, or at least accessible, what goes on below the level of self-conscious response.”

What did he mean by that, “a method, a machine”? Clearly he didn’t mean, for example, a steam locomotive, a sewing machine, a dental drill, or any such device. For a long time I’ve conjectured that modern digital computers were resonating in his mind when he wrote that, though he doesn’t mention them anywhere in the essay. But then, when we talk of, for example, a “political machine”, in what sense is THAT a machine? Is Fish using “machine” in a general sense that covers a wide variety of cases, including phenomena other than electromechanical devices?

While Fish doesn’t mention computers in that essay, he does examine some computational stylistics in another essay he wrote at the time, “What Is Stylistics and Why Are They Saying Such Terrible Things About It?”, and so is necessarily referencing computers, if only indirectly [2]. We thus know that he knows about computers and has thought about them. But I’m more interested in what he said in that essay about an article by the linguist Michael Halliday, which doesn’t involve computing, but does involve a linguistic system.

After quoting a passage in which Halliday analyses a single sentence from Through the Looking Glass, Fish remarks (p. 80): “When a text is run through Halliday’s machine, its parts are first disassembled, then labeled, and finally recombined in their original form. The procedure is a complicated one, and it requires many operations, but the critic who performs them has finally done nothing at all.” Note, moreover, that he had framed Halliday’s essay as one of many lured on by “the promise of an automatic interpretive procedure” (p. 78), though he doesn’t ascribe that automaticity to a computer.

If one takes Halliday’s “machine” as a crude approximation to the linguistic mind, then it seems to me that one accomplishes quite a lot with it. It’s not an interpretation in the usual sense of the word. But, for whatever reason, that doesn’t seem to interest Fish.

Now let’s come forward in time to Fish’s 2015 address to the School of Criticism and Theory, “If You Count It, They Will Come: The Promise of the Digital Humanities”–a video and transcript are online. On page 4 (of the transcript) he says this:
Now writing in a book called The Companion to the Digital Humanities, digital humanist Hugh Craig acknowledges the force of my criticism in the 1970s, but asserts that the more sophisticated techniques now available make possible a new stylistics with what he calls another motivation. And he defines it, the motivation, quote, to uncover patterns of language use, which because of their background quality-- that is, how deeply embedded they are-- or their emergence on a super humanly wide scale, would otherwise not be noticed, unquote.

But if the problem with the old stylistics was that you could not generalize, except illegitimately, from the data, the problem with this new up-to-date stylistics is that it is by no means clear why you should be interested in the data it uncovers at all. Maybe the patterns that have not been noticed before, patterns like the frequency with which particular words appear in the titles of 19th century books through the decades, should have remained unnoticed, because they are nothing more than the artifacts of a machine.
But what machine is he talking about: the computational machine used by the scholar, or the “machine” (the linguistic mind) that produced the text in the first place? I suspect that he means the latter.
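Fish's example, the frequency with which particular words appear in the titles of 19th century books through the decades, is easy to make concrete. Here is a minimal sketch of that kind of counting; the (year, title) records and the tiny dataset are invented for illustration, and no actual study is being reproduced:

```python
# Count the relative frequency of a word in book titles, decade by
# decade; this is the kind of pattern Fish attributes to "the machine".
from collections import Counter

def title_word_freq(records, word):
    """Map each decade to the relative frequency of `word` in titles."""
    hits, totals = Counter(), Counter()
    for year, title in records:
        decade = (year // 10) * 10
        tokens = title.lower().split()
        totals[decade] += len(tokens)
        hits[decade] += tokens.count(word.lower())
    return {decade: hits[decade] / totals[decade]
            for decade in sorted(totals)}

# Invented records, standing in for a real bibliographic dataset.
records = [(1805, "The Wanderer of the Alps"),
           (1852, "Tales of the Sea"),
           (1871, "A Study of the Heart"),
           (1899, "The City of Dreams")]
print(title_word_freq(records, "the"))
```

Whether the resulting curve tells us something about minds or merely about the counting procedure is, of course, exactly the question Fish is pressing.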

Later on, where he is discussing his preferred ‘school’ of interpretation, intentionalism, he says (p. 8):
For an intentionalist, the fact that data mining can uncover hidden patterns undetectable by the mere human reader is cause not for celebration, but for suspicion. A pattern that is subterranean is unlikely to be a pattern that was put there by an intentional agent. And if it wasn't put there by an intentional agent, it cannot have meaning.
Assuming the pattern was really there, then, where’d it come from?

There are a LOT of assumptions in Fish’s statement about “hidden patterns undetectable...reader” and “put there...agent”, and in the question I just asked immediately above, and this is not the place to untangle them all. So I’m going to leave those things alone and skip to where I’m going. Fish seems to think that, on the one hand, we’ve got intentional agents conveying meaning, and, on the other, mere mechanical patterns, and that the two are distinct. Can that be right?