Sunday, July 30, 2017
Major transitions in cultural evolution
The original research article is HERE. Here's the abstract:

Identifying major transitions in human cultural evolution https://t.co/Yg8K8ryjmg
— Bill Benzon, BAM! Bootstrapping Artificial Minds (@bbenzon) July 30, 2017
Evolutionary thinking can be applied to both cultural microevolution and macroevolution. However, much of the current literature focuses on cultural microevolution. In this article, we argue that the growing availability of large cross-cultural datasets facilitates the use of computational methods derived from evolutionary biology to answer broad-scale questions about the major transitions in human social organization. Biological methods can be extended to human cultural evolution. We illustrate this argument with examples drawn from our recent work on the roles of Big Gods and ritual human sacrifice in the evolution of large, stratified societies. These analyses show that, although the presence of Big Gods is correlated with the evolution of political complexity, in Austronesian cultures at least, they do not play a causal role in ratcheting up political complexity. In contrast, ritual human sacrifice does play a causal role in promoting and sustaining the evolution of stratified societies by maintaining and legitimizing the power of elites. We briefly discuss some common objections to the application of phylogenetic modeling to cultural evolution and argue that the use of these methods does not require a commitment to either gene-like cultural inheritance or to the view that cultures are like vertebrate species. We conclude that the careful application of these methods can substantially enhance the prospects of an evolutionary science of human history.
Saturday, July 29, 2017
Gary Marcus is skeptical of AI, however...we need a new funding paradigm
Artificial Intelligence is colossally hyped these days, but the dirty little secret is that it still has a long, long way to go. Sure, A.I. systems have mastered an array of games, from chess and Go to “Jeopardy” and poker, but the technology continues to struggle in the real world. Robots fall over while opening doors, prototype driverless cars frequently need human intervention, and nobody has yet designed a machine that can read reliably at the level of a sixth grader, let alone a college student. Computers that can educate themselves — a mark of true intelligence — remain a dream.

Even the trendy technique of “deep learning,” which uses artificial neural networks to discern complex statistical correlations in huge amounts of data, often comes up short. Some of the best image-recognition systems, for example, can successfully distinguish dog breeds, yet remain capable of major blunders, like mistaking a simple pattern of yellow and black stripes for a school bus. Such systems can neither comprehend what is going on in complex visual scenes (“Who is chasing whom and why?”) nor follow simple instructions (“Read this story and summarize what it means”).
What we need:
To get computers to think like humans, we need a new A.I. paradigm, one that places “top down” and “bottom up” knowledge on equal footing. Bottom-up knowledge is the kind of raw information we get directly from our senses, like patterns of light falling on our retina. Top-down knowledge comprises cognitive models of the world and how it works.
Marcus goes on to argue that the current methods of funding AI–small academic labs and somewhat larger industrial labs–are inadequate. Academic labs are too small to gather the wide variety of talent needed to make significant progress. Industrial labs are too focused on short-term results.
I look with envy at my peers in high-energy physics, and in particular at CERN, the European Organization for Nuclear Research, a huge, international collaboration, with thousands of scientists and billions of dollars of funding. [...]

An international A.I. mission focused on teaching machines to read could genuinely change the world for the better — the more so if it made A.I. a public good, rather than the property of a privileged few.
Friday, July 28, 2017
Neuromorphic AI
Demis Hassabis, Dharshan Kumaran, Christopher Summerfield, Matthew Botvinick. Neuroscience-Inspired Artificial Intelligence. Neuron, Volume 95, Issue 2, p245–258, 19 July 2017.
Summary:
The fields of neuroscience and artificial intelligence (AI) have a long and intertwined history. In more recent times, however, communication and collaboration between the two fields has become less commonplace. In this article, we argue that better understanding biological brains could play a vital role in building intelligent machines. We survey historical interactions between the AI and neuroscience fields and emphasize current advances in AI that have been inspired by the study of neural computation in humans and other animals. We conclude by highlighting shared themes that may be key for advancing future research in both fields.
Conclusions:
In this perspective, we have reviewed some of the many ways in which neuroscience has made fundamental contributions to advancing AI research, and argued for its increasingly important relevance. In strategizing for the future exchange between the two fields, it is important to appreciate that the past contributions of neuroscience to AI have rarely involved a simple transfer of full-fledged solutions that could be directly re-implemented in machines. Rather, neuroscience has typically been useful in a subtler way, stimulating algorithmic-level questions about facets of animal learning and intelligence of interest to AI researchers and providing initial leads toward relevant mechanisms. As such, our view is that leveraging insights gained from neuroscience research will expedite progress in AI research, and this will be most effective if AI researchers actively initiate collaborations with neuroscientists to highlight key questions that could be addressed by empirical work.
The successful transfer of insights gained from neuroscience to the development of AI algorithms is critically dependent on the interaction between researchers working in both these fields, with insights often developing through a continual handing back and forth of ideas between fields. In the future, we hope that greater collaboration between researchers in neuroscience and AI, and the identification of a common language between the two fields (Marblestone et al., 2016), will permit a virtuous circle whereby research is accelerated through shared theoretical insights and common empirical advances. We believe that the quest to develop AI will ultimately also lead to a better understanding of our own minds and thought processes. Distilling intelligence into an algorithmic construct and comparing it to the human brain might yield insights into some of the deepest and the most enduring mysteries of the mind, such as the nature of creativity, dreams, and perhaps one day, even consciousness.
Thursday, July 27, 2017
The Physics of Life's Origins(?)
The biophysicist Jeremy England made waves in 2013 with a new theory that cast the origin of life as an inevitable outcome of thermodynamics. His equations suggested that under certain conditions, groups of atoms will naturally restructure themselves so as to burn more and more energy, facilitating the incessant dispersal of energy and the rise of “entropy” or disorder in the universe. England said this restructuring effect, which he calls dissipation-driven adaptation, fosters the growth of complex structures, including living things. The existence of life is no mystery or lucky break, he told Quanta in 2014, but rather follows from general physical principles and “should be as unsurprising as rocks rolling downhill.”
Since then, England has published two studies, one in the Proceedings of the National Academy of Sciences (PNAS) and the other in Physical Review Letters (PRL).
Concerning the PNAS study:
The paper strips away the nitty-gritty details of cells and biology and describes a simpler, simulated system of chemicals in which it is nonetheless possible for exceptional structure to spontaneously arise — the phenomenon that England sees as the driving force behind the origin of life. “That doesn’t mean you’re guaranteed to acquire that structure,” England explained. The dynamics of the system are too complicated and nonlinear to predict what will happen.

The simulation involved a soup of 25 chemicals that react with one another in myriad ways. Energy sources in the soup’s environment facilitate or “force” some of these chemical reactions, just as sunlight triggers the production of ozone in the atmosphere and the chemical fuel ATP drives processes in the cell. Starting with random initial chemical concentrations, reaction rates and “forcing landscapes” — rules that dictate which reactions get a boost from outside forces and by how much — the simulated chemical reaction network evolves until it reaches its final, steady state, or “fixed point.”

Often, the system settles into an equilibrium state, where it has a balanced concentration of chemicals and reactions that just as often go one way as the reverse. This tendency to equilibrate, like a cup of coffee cooling to room temperature, is the most familiar outcome of the second law of thermodynamics, which says that energy constantly spreads and the entropy of the universe always increases. [...]

But for some initial settings, the chemical reaction network in the simulation goes in a wildly different direction: In these cases, it evolves to fixed points far from equilibrium, where it vigorously cycles through reactions by harvesting the maximum energy possible from the environment.
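The description is concrete enough to caricature in code. Here is a minimal sketch in which everything except the 25-species count is my own assumption (England's actual model is in the PNAS paper): random reversible reactions among 25 species, a "forcing landscape" that boosts some forward rates, and integration to a steady state. These toy reactions are unimolecular, so the system is linear and has essentially one fixed point; it illustrates the ingredients, not the multiplicity of outcomes England reports.

```python
# A toy driven reaction network (my sketch, not England's model): 25
# species, random reversible unimolecular reactions, with a "forcing
# landscape" that boosts the forward rates of a random subset.
import numpy as np
from scipy.integrate import solve_ivp

rng = np.random.default_rng(0)
N, R = 25, 60                        # species, reactions
sub = rng.integers(0, N, R)          # substrate species of each reaction
prod = rng.integers(0, N, R)         # product species of each reaction
kf, kb = rng.random(R), rng.random(R)
forced = rng.random(R) < 0.2         # ~20% of reactions are "forced"
kf[forced] *= 10.0                   # the external energy source

def dcdt(t, c):
    flux = kf * c[sub] - kb * c[prod]    # net rate of each reaction
    dc = np.zeros_like(c)
    np.add.at(dc, prod, flux)            # products accumulate
    np.add.at(dc, sub, -flux)            # substrates deplete
    return dc

sol = solve_ivp(dcdt, (0, 500), rng.random(N), rtol=1e-9)
c_fix = sol.y[:, -1]                     # the steady state ("fixed point")
drive = np.abs(kf[forced] * c_fix[sub[forced]] - kb[forced] * c_fix[prod[forced]])
print("cycling through forced reactions at the fixed point:", drive.sum())
```

Even in this linear toy, the forced reactions keep running at the fixed point: the network sits in a driven steady state rather than detailed-balance equilibrium, which is the basic distinction the quoted passage turns on.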
Of the PRL paper:
In the PRL paper, England and his coauthors Tal Kachman and Jeremy Owen of MIT simulated a system of interacting particles. They found that the system increases its energy absorption over time by forming and breaking bonds in order to better resonate with a driving frequency. “This is in some sense a little bit more basic as a result” than the PNAS findings involving the chemical reaction network, England said.
However:
But even if the fine-tuned fixed points can be observed in settings that are increasingly evocative of life and its putative beginnings, some researchers see England’s overarching thesis as “necessary but not sufficient” to explain life, as Walker put it, because it cannot account for what many see as the true hallmark of biological systems: their information-processing capacity. From simple chemotaxis (the ability of bacteria to move toward nutrient concentrations or away from poisons) to human communication, life-forms take in and respond to information about their environment.

To Walker’s mind, this distinguishes us from other systems that fall under the umbrella of England’s dissipation-driven adaptation theory, such as Jupiter’s Great Red Spot. “That’s a highly non-equilibrium dissipative structure that’s existed for at least 300 years, and it’s quite different from the non-equilibrium dissipative structures that are existing on Earth right now that have been evolving for billions of years,” she said. Understanding what distinguishes life, she added, “requires some explicit notion of information that takes it beyond the non-equilibrium dissipative structures-type process.” In her view, the ability to respond to information is key: “We need chemical reaction networks that can get up and walk away from the environment where they originated.”
Tuesday, July 25, 2017
Computational Thinking and the Digital Critic: Part 1, Four Good Books
This is about computational thinking. But computational thinking is not one thing. It is many, some as yet undefined. What can it become for students of the humanities?
How, you might ask, are we to engage a computational understanding of literary process, if computation isn’t well-defined?
With care, I say, with care. We have to make it up.
* * * * *
As Stephen Ramsay pointed out in a post, "DH and CS" (where DH = digital humanities and CS = computer science), computer scientists are mostly interested in abstract matters of computability and data structures, while programmers are mostly concerned with the techniques of programming certain kinds of capabilities in this or that language. Those are different, though related, undertakings.
Further, the practical craft has two somewhat different aspects. One faces toward the end user and is concerned with capturing that user’s world in the overall design of the program. This design process is, in effect, applied cognitive anthropology. The other aspect faces toward the computer itself and is concerned with implementing that design through the means available in the appropriate programming language. This is writing, but in a very specialized dialect. But it’s all computational thinking in some meaningful sense.
Though I have written a computer program or three, that was long ago. I have, however, spent a fair amount of time working with programmers. At one period in my life I documented software; at a different time I participated in product design.
But I also spent several years in graduate school studying the computational semantics of natural language with the late David Hays. That’s an abstract and theoretical enterprise. Though he is one of the founders of computational linguistics, Hays did no programming until relatively late in his career, after he’d left academia. He was interested in how the mind works and computation was one of his conceptual strategies. I studied with Hays because I wanted to figure out how poetry worked. All the members of his research group were interested in the human mind in one way or another; some of them were also programmers of appreciable skill.
Computational Thinking and the Digital Critic: Part 2, An Ant Walks on the Beach and a Pilot is Alone
Simon’s ant is a well-known thought experiment from Chapter 3, “The Psychology of Thinking: Embedding Artifice in Nature,” in Herbert A. Simon, The Sciences of the Artificial, 1981. It’s a parable about computation, about how computational requirements depend on the problem to be solved. Stated that way, it is an obvious truism. But Simon’s thought experiment invites you to consider this truism where the “problem to be solved” is an environment external to the computer – it is thus reminiscent of Braitenberg’s primitive vehicles (which I discussed in Part 1).
Think of it like this: the nervous system requires environmental support if it is to maintain its physical stability and operational coherence. Note that Simon was not at all interested in the physical requirements of the nervous system. Rather, he was interested in suggesting that we can get complex behavior from relatively simple devices, and simplicity translates into design requirements for a nervous system.
Simon asks us to imagine an ant moving about on a beach:
We watch an ant make his laborious way across a wind- and wave-molded beach. He moves ahead, angles to the right to ease his climb up a steep dunelet, detours around a pebble, stops for a moment to exchange information with a compatriot. Thus he makes his weaving, halting way back to his home. So as not to anthropomorphize about his purposes, I sketch the path on a piece of paper. It is a sequence of irregular, angular segments--not quite a random walk, for it has an underlying sense of direction, of aiming toward a goal.
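Just to see the parable run, here is a toy version (the setup is mine, not Simon's): the ant's rule is trivially simple, head toward home and detour around pebbles, yet the recorded path is as irregular as the beach.

```python
# Toy version of Simon's ant (my construction, not Simon's): a trivial
# rule -- head diagonally toward home, detour around pebbles -- produces
# an irregular path whose complexity comes from the beach, not the ant.
import random

random.seed(42)
pebbles = {(random.randint(1, 29), random.randint(1, 29)) for _ in range(150)}
x, y, home = 0, 0, (30, 30)
path = [(x, y)]
while (x, y) != home:
    dx = 1 if x < home[0] else 0
    dy = 1 if y < home[1] else 0
    # Preferred move is diagonal; if a pebble blocks it, try the two
    # axis-aligned detours; as a last resort, climb over the pebble.
    options = [(x + dx, y + dy), (x + dx, y), (x, y + dy)]
    nxt = next((p for p in options if p not in pebbles and p != (x, y)),
               (x + dx, y + dy))
    x, y = nxt
    path.append((x, y))

turns = sum(1 for a, b, c in zip(path, path[1:], path[2:])
            if (b[0] - a[0], b[1] - a[1]) != (c[0] - b[0], c[1] - b[1]))
print(f"{len(path) - 1} steps, {turns} turns on the way home")
```

Plot the path and you get exactly Simon's weaving, halting trace: all the apparent complexity of the behavior is a reflection of the environment the simple rule operates in.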
Monday, July 24, 2017
More synch: Firewalking (performers and spectators), Romantic partners (& empathy for pain)
Pavel Goldstein, Irit Weissman-Fogel, Simone G. Shamay-Tsoory. The role of touch in regulating inter-partner physiological coupling during empathy for pain. Scientific Reports, 2017; 7 (1) DOI: 10.1038/s41598-017-03627-7
Abstract: The human ability to synchronize with other individuals is critical for the development of social behavior. Recent research has shown that physiological inter-personal synchronization may underlie behavioral synchrony. Nevertheless, the factors that modulate physiological coupling are still largely unknown. Here we suggest that social touch and empathy for pain may enhance interpersonal physiological coupling. Twenty-two romantic couples were assigned the roles of target (pain receiver) and observer (pain observer) under pain/no-pain and touch/no-touch conditions, and their ECG and respiration rates were recorded. The results indicate that the partner touch increased interpersonal respiration coupling under both pain and no-pain conditions and increased heart rate coupling under pain conditions. In addition, physiological coupling was diminished by pain in the absence of the partner’s touch. Critically, we found that high partner’s empathy and high levels of analgesia enhanced coupling during the partner’s touch. Collectively, the evidence indicates that social touch increases interpersonal physiological coupling during pain. Furthermore, the effects of touch on cardio-respiratory inter-partner coupling may contribute to the analgesic effects of touch via the autonomic nervous system.
H/t Steven Strogatz.
Ivana Konvalinka, Dimitris Xygalatas, Joseph Bulbulia, Uffe Schjødt, Else-Marie Jegindø, Sebastian Wallot, Guy Van Orden, and Andreas Roepstorff. Synchronized arousal between performers and related spectators in a fire-walking ritual. PNAS, May 17, 2011 vol. 108 no. 20 8514-8519, doi: 10.1073/pnas.1016955108
Abstract: Collective rituals are present in all known societies, but their function is a matter of long-standing debates. Field observations suggest that they may enhance social cohesion and that their effects are not limited to those actively performing but affect the audience as well. Here we show physiological effects of synchronized arousal in a Spanish fire-walking ritual, between active participants and related spectators, but not participants and other members of the audience. We assessed arousal by heart rate dynamics and applied nonlinear mathematical analysis to heart rate data obtained from 38 participants. We compared synchronized arousal between fire-walkers and spectators. For this comparison, we used recurrence quantification analysis on individual data and cross-recurrence quantification analysis on pairs of participants' data. These methods identified fine-grained commonalities of arousal during the 30-min ritual between fire-walkers and related spectators but not unrelated spectators. This indicates that the mediating mechanism may be informational, because participants and related observers had very different bodily behavior. This study demonstrates that a collective ritual may evoke synchronized arousal over time between active participants and bystanders. It links field observations to a physiological basis and offers a unique approach for the quantification of social effects on human physiology during real-world interactions.
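For the curious, here is a bare-bones sketch of the cross-recurrence idea on synthetic data. The study itself used full recurrence quantification (embeddings, diagonal-line measures) on real heart-rate series, so treat this only as an illustration of the core move: counting how often two signals visit the same state.

```python
# A bare-bones cross-recurrence sketch (mine; the study used full
# recurrence quantification with embedding and diagonal-line measures):
# count how often two z-scored signals pass within a small radius.
import numpy as np

def cross_recurrence_rate(a, b, radius=0.1):
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    return float((np.abs(a[:, None] - b[None, :]) < radius).mean())

rng = np.random.default_rng(3)
t = np.linspace(0, 30, 600)                                 # a 30-minute ritual
walker = np.sin(0.7 * t) + 0.3 * rng.standard_normal(600)   # synthetic "arousal"
related = np.sin(0.7 * t + 0.2) + 0.3 * rng.standard_normal(600)  # shares the rhythm
unrelated = rng.standard_normal(600)                        # no shared rhythm
print("walker vs related  :", cross_recurrence_rate(walker, related))
print("walker vs unrelated:", cross_recurrence_rate(walker, unrelated))
```

The signal pair that shares a slow rhythm revisits common states more often than the unrelated pair, which is the fine-grained commonality of arousal the authors were quantifying.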
Sunday, July 23, 2017
Computational Psychiatry?
Psychiatry, the study and prevention of mental disorders, is currently undergoing a quiet revolution. For decades, even centuries, this discipline has been based largely on subjective observation. Large-scale studies have been hampered by the difficulty of objectively assessing human behavior and comparing it with a well-established norm. Just as tricky, there are few well-founded models of neural circuitry or brain biochemistry, and it is difficult to link this science with real-world behavior.

That has begun to change thanks to the emerging discipline of computational psychiatry, which uses powerful data analysis, machine learning, and artificial intelligence to tease apart the underlying factors behind extreme and unusual behaviors.

Computational psychiatry has suddenly made it possible to mine data from long-standing observations and link it to mathematical theories of cognition. It’s also become possible to develop computer-based experiments that carefully control environments so that specific behaviors can be studied in detail.
The article then goes on to discuss research reported in:
Sarah K. Fineberg (MD, PhD), Dylan Stahl (BA), Philip Corlett (PhD), Computational Psychiatry in Borderline Personality Disorder, Current Behavioral Neuroscience Reports, March 2017, Vol. 4, Issue 1, pp. 31-40. arXiv:1707.03354v1 [q-bio.NC]
Purpose of review: We review the literature on the use and potential use of computational psychiatry methods in Borderline Personality Disorder.

Recent findings: Computational approaches have been used in psychiatry to increase our understanding of the molecular, circuit, and behavioral basis of mental illness. This is of particular interest in BPD, where the collection of ecologically valid data, especially in interpersonal settings, is becoming more common and more often subject to quantification. Methods that test learning and memory in social contexts, collect data from real-world settings, and relate behavior to molecular and circuit networks are yielding data of particular interest.

Summary: Research in BPD should focus on collaborative efforts to design and interpret experiments with direct relevance to core BPD symptoms and potential for translation to the clinic.
Tuesday, July 18, 2017
Language boundaries & surface tension
In his new study, Burridge presents a deliberately minimal model of language change, which focuses on explaining dialect distribution solely in terms of topographical features and speaker interaction. The model assumes the existence of multiple linguistic variants for multiple linguistic variables, which effectively define different dialects. In determining whether a given speaker adopts a specific variant, the model does not consider “social value” factors. Instead, it assumes that speakers interact predominantly with people living in their local environment (defined by some radius around their home), and that they will conform to the speech patterns of the majority of people in that geographic vicinity. Such local linguistic alignment favors the emergence of distinct dialect areas, with dialect boundaries tending to shorten in length in a way that mimics how surface tension minimizes the surface area of a water droplet (see Fig. 1). In a region with uniform population density, this language-based surface tension will cause the boundary between two dialects to form straight lines. Densely populated areas, however, interfere with boundary straightening by repelling boundaries and effectively creating new dialect areas around themselves. Furthermore, topography can have an imprint on dialect spatial distributions. In systems with irregular perimeters, Burridge shows that boundary lines tend to migrate to places where they emerge perpendicular from the edge of the system, such as indentations in coastlines.

Original research HERE (PDF).
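The mechanism is simple enough to prototype. Here is a minimal sketch of the majority-alignment dynamics (a toy of mine, not Burridge's actual model, which is worked out in the linked paper): speakers on a grid adopt whichever variant the majority within a fixed radius uses, and the total length of boundary between variants shrinks over iterations, the surface-tension effect.

```python
# Toy local-majority model of dialect alignment (my sketch; Burridge's
# actual model is in the linked paper): each speaker adopts the variant
# used by the majority within a radius, and ragged boundaries between
# variants shorten over time, like surface tension smoothing a droplet.
import numpy as np

rng = np.random.default_rng(0)
n, radius, steps = 60, 3, 40
grid = (rng.random((n, n)) < 0.5).astype(int)    # two variants, random start

def boundary_length(g):
    """Edges between neighboring speakers who use different variants."""
    return int((g[:, 1:] != g[:, :-1]).sum() + (g[1:, :] != g[:-1, :]).sum())

print("initial boundary length:", boundary_length(grid))
for _ in range(steps):
    new = grid.copy()
    for i in range(n):
        for j in range(n):
            i0, i1 = max(0, i - radius), min(n, i + radius + 1)
            j0, j1 = max(0, j - radius), min(n, j + radius + 1)
            new[i, j] = int(grid[i0:i1, j0:j1].mean() > 0.5)  # conform to local majority
    grid = new
print("final boundary length:", boundary_length(grid))
```

Run it and the boundary length drops sharply as the speckled initial state coarsens into compact dialect regions with smooth borders; adding a patch of higher "population weight" to the local vote would be the natural next step for reproducing the boundary-repelling effect of cities.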
Monday, July 17, 2017
Where is the never ending (medieval) text? [#DH]
I checked in at Academia.edu today and found another article by medievalist Stephen Nichols. I've not finished it, but wanted to blog a passage or two anyhow.
Stephen G. Nichols, Dynamic Reading of Medieval Manuscripts, Florilegium, vol. 32 (2015): 19-57. DOI: 10.3138/flor.32.002. Download at Academia.edu: https://www.academia.edu/33907842/Nichols_Dynamic_Reading_flor_32_002
Here's the abstract:
Abstract: Digital manuscript and text representation provides such a wealth of information that it is now possible to see the incessant versioning of works like the Roman de la Rose. Using Rose manuscripts of the Bibliothèque municipale de Lyon MS 763 and BM de Dijon MS 525 as examples and drawing on Aristotelian concepts such as energeia, dynamis, and entelecheia, the copiously illustrated article demonstrates how pluripotent circulation allows for “dynamic reading” of such manuscript texts, which takes into consideration the interplay between image, text, and the context of other texts transmitted in the same manuscript.
What caught my attention was his statement about the unexpected impact of digital technology. It made it possible, for the first time, to examine a number of different codices of the same title and to compare them. And THAT led to a sea-change in understanding of what a text is. The normative concept of the Urtext as the author's original version is in trouble. What happens to the so-called critical edition? Thus (p. 22):
that the critical edition represents a construct based on selected evidence is neither exceptional nor particularly shocking. More problematic is the fact that expediency decrees that manuscript mass be accorded short shrift. Not all manuscripts are equal in this scenario. Indeed, the purpose of manuscript selection—the choice by the editor of a small number of manuscripts deemed reliable — lay precisely in minimizing the number of manuscripts. The more versions an editor could eliminate as defective or uninteresting, the greater the probability that one had located the few copies closest to an original or early version of a work. The select copies could then be closely scrutinized for variant readings. And ‘variant’ meant precisely that: readings of lines or passages differing from what the editor determined to be the normative text. It was in reaction to such a restrictive treatment of manuscript variation that New Philology emerged. Initially, we argued that manuscript copies bore witness to a dialectical process of transmission where individual versions might have the same historical authority as that represented by the critical edition.
And so (pp. 24-25):
Perhaps the most startling question posed by the specular confrontation of manuscripts concerns the status of textuality itself. With unerring perspicuity, Jacqueline Cerquiglini-Toulet pinpoints the issue by asking the simple, but trenchant question: “what, exactly, is ‘a text’ in the Middle Ages, and how do we locate it in a manuscript culture where each codex is unique? [. . .] More radically still,” she continues, “we might legitimately ask just where we’re supposed to find the text in the manuscript. How does it come to instantiate itself materially as object? And how is its literary identity realized?”

If such questions seem disorienting, it is because they underline how much print editions of medieval works have shaped our expectations. We have grown accustomed to finding the ‘text’ of a medieval work before our eyes whenever we open an edition. In the critical edition, the text is a given; that is why the work is called ‘textual scholarship.’ The editor works hard to establish a text on the basis of painstaking study of the manuscripts that he or she determines to be authoritative. The point, of course, is to circumscribe or close the text, to keep it from continuing to generate additions or variants. As we know, that is a modern practice grounded in concepts of scientific text editing.

But as Jacqueline Cerquiglini-Toulet observes, the very concept of a definitive text, a text incapable of generating new versions, is an illusion propagated by its own methodology. Authentic medieval texts, she observes, are never closed, nor, I would add, would their mode of transmission allow them to remain static. And, as a corollary, she observes: “Where are the boundaries?” How do we “identify the borders of a text”? She means that the manuscript folio has a very different ecology from the page of a printed edition. Textual space on a folio is not exclusive, but shared with other systems of representation, or — why not? — other kinds of ‘texts.’ These include rubrics, miniature paintings, decorated or historiated initials, bas-de-page images, marginal glosses, decorative programmes, and so on. In other words, the medieval manuscript page is not simply complex but, above all, an inter-artistic space navigated by visual cues.
We are far from the world of "distant reading" a large corpus of texts and thereby beginning to see patterns in literary history that had been but dimly envisioned before. But the change is equally profound. For example (pp. 26-27):
To understand the astonishing virtuosity and variety we find in manuscript versions of the ‘same’ work — such as the Roman de la Rose, for example, for which we have some 250 extant manuscripts produced between the end of the thirteenth and the beginning of the sixteenth century — we need to identify immanent factors responsible for generating multiple versions of a given work throughout the period. Here again, digital manuscript study offers reasons to move beyond conventional explanations.

Whereas increased manuscript production might intuitively be explained by such external causes as rising literacy among the merchant and artisan classes and the growth in the number of booksellers, the great variation we see in manuscripts, even those contemporaneous with one another, suggests the possibility of inherent forces of variation at work. Put another way, whereas the increase in literacy and leisure certainly contributed to the growing market for manuscripts to which Parisian booksellers responded, the efficient cause generating multiple manuscripts of a given work lay in the nature of the manuscript matrix itself.

It is not by chance that versions of a given work vary. Literary prestige derived in part from a work’s ability to renew itself from generation to generation by a dynamic process of differential repetition.
And so it goes. And we bring in Aristotle (p. 30): "But whereas we might think of striving for perfection as linear and directed, Aristotle sees it as continuous and open-ended." Is Nichols going to be arguing, then, that the production of version after version is a "striving for perfection" that extends through a population of scribes and readers? I suppose that's what I'll find out as I continue reading.
Thus, p. 32: "In other words, manuscripts are, by their very nature as eidos, ergon, and energeia, predisposed towards actualizing the works they convey not as invariant but as versions in an ever-evolving process of representation. Against those who would see manuscript copies as regressions from an authoritative original to ever fainter avatars of that primal moment, we must recall Aristotle’s notion of form as atemporal actuality. "
* * * * *
Here's an earlier post about Nichols: Mutable stability in the transmission of medieval texts. And here's a post about the three texts of Hamlet that's relevant: Journey into Shakespeare, a tedious adventure – Will the real Hamlet stand up?
Early history of digital creativity (James Ryan)
I am currently excavating the forgotten early history of computer story generation. More info here: https://t.co/drKKyMlmE7. pic.twitter.com/OMZ8ltvhYz— James Ryan (@xfoml) June 7, 2017
And so he's been digging up all sorts of interesting things, not just computer storytelling. Here's some recent stuff he's dug up.
the invention of the camera raised copyright issues that seem weird today—it was not clear whether the camera or the photographer was author pic.twitter.com/OlfaIEHxww— James Ryan (@xfoml) July 17, 2017
in 1971, 1000 feet of computer-generated poetry was dropped from a helicopter onto an experimental arts center in Burbank, California pic.twitter.com/IKjduyOcjw— James Ryan (@xfoml) July 14, 2017
a 1962 issue of the Librascope company newsletter featured an "interview" with AUTO-BEATNIK, the computer poet developed by employees there pic.twitter.com/CkLnliYEVf— James Ryan (@xfoml) July 14, 2017
Sunday, July 16, 2017
Luxury real estate & Trump: International networks of power crossing public and private boundaries
Bloggingheads.tv – Published on Jul 14, 2017
00:26 Alex’s book Dictators Without Borders
04:29 Oligarchs and autocrats and kleptocrats, oh my!
10:52 Luxury real estate’s illicit money problem
22:11 The globalization of money laundering
30:12 Trump and networks of power
45:28 How Trump is blurring lines between business and politics
56:07 The slippery slope to kleptocracy
Daniel Nexon (The Duck of Minerva, Georgetown University) and Alexander Cooley (Columbia Harriman Institute, Barnard College, Dictators Without Borders)
Recorded on July 14, 2017
A most interesting discussion about how luxury real estate is a vehicle for money laundering & Trump's network extends into this world. "The lines between business and politics are not how we think about them."
Friday, July 14, 2017
"Lawfare" comes of age [@lawfareblog]
I first became aware of Lawfare through a wonderful March 3 post by Benjamin Wittes and Quinta Jurecic, What Happens When We Don’t Believe the President’s Oath? It seems that a lot of people discovered Lawfare about the same time and its readership has blossomed until
Today is the day and this morning is the morning during which @lawfareblog's 2017 traffic will pass that of the site's whole prior history. pic.twitter.com/cq2cAflZd3— Benjamin Wittes (@benjaminwittes) July 14, 2017
Obviously it is the Presidency of Donald Trump that made Lawfare's commentary so salient. Trump's bull-in-a-china-shop style begged for informed legal analysis, and Lawfare was there to provide it.
Congratulations Ben Wittes, Robert Chesney, Jack Goldsmith and the rest of the team!
Once more, a history of American Lit Crit, this time with politics
Writing in the LA Review of Books, Bruce Robbins reviews Joseph North, Literary Criticism: A Concise Political History (Harvard 2017). An interesting review of what sounds like an interesting book. Robbins reads the recent politics of lit crit as conservative rather than radical, though radical is how such criticism styles itself; and, once more, we get universals.
The broad strokes of his narrative are familiar enough, at least to literature professors. As everyone knows, the radicals of 1968, when they turned their attention to the university, insisted that academic attention be paid to race, gender, sexuality, colonialism, and other measures of historically inflicted injury. In literary criticism, these were contexts that had been missing from the everyday practice of interpretation. Moving into the ’70s and ’80s, it became obvious to much or most of the discipline that to read a work of past literature without asking what sort of society the work emerged from was as reprehensible, in its way, as ignoring those who were currently suffering injustice all around you. This is how close reading, little by little, went out of fashion — a momentous shift that, like so much else that later came to be associated with the ’60s, I was somehow living through but not really registering.

Most of the academics who advocated for historicism thought of themselves as radicalizing an apolitical or even crypto-conservative discipline. In North’s view, though, this gets the story backward. The politicization of the discipline that seemed to follow the eclipse of close reading was actually its depoliticization. In the period that began in the late 1970s “and continues through to the present,” North writes, “the project of ‘criticism’ was rejected as necessarily elitist, dehistoricizing, depoliticizing, and so forth; the idea of the ‘aesthetic’ was rejected as necessarily Kantian, idealist, and universalizing.” Yet

it was in fact quite wrong to reject the project of criticism as if its motivating concept, the aesthetic, could only ever be thought through in idealist terms. What was being elided here was the fact that modern disciplinary criticism had been founded on an aesthetics of just the opposite kind. In our own period, this historical amnesia has allowed a programmatic retreat from the critical project of intervening in the culture, back toward the project of analyzing the culture, without any mandate for intervention.

The newer style of interpretation recognized context, oppression, and injustice, yes, but it also masked a movement away from “criticism” and toward what North calls “scholarship.” Criticism, as he sees it, aspires to intervene in social life. Scholarship, as he sees it, is knowledge-production that has no such aspiration. Scholarship gets off on interpreting the world but can’t be bothered to do anything non-scholarly to change it. Since close reading, as North sees it, was a way of changing the world, if only reader by reader, what looked like a lurch to the left was actually a subtle move to the right.

For North, the production of analytic knowledge about the past, whatever its political motives, amounts to complacent non-interference. It’s a way of comfortably inhabiting a present that we ought to see, ethically speaking, as unfit for human habitation, hence requiring us to get up from our desks to do something about it.
OK, so all the political critics have been hoisted on their own petards, as it were. A call for revolution uttered from the comfort of one’s study is no call at all. Let’s just leave that alone.
Thursday, July 13, 2017
Walter Murch on being immersed in a film project and then pulling yourself out
Walter Murch is perhaps best-known for his work on Apocalypse Now, where he did the sound design (for which he won an Oscar) and much of the editing. This is a passage from an interview about his craft and his career that he did with Emily Buder in 2015:
To be an editor, you have to be the kind of person who can be in a room for 16 hours at a time. You are working alone a lot of the time, but there are also times when you’re working with a director in the room. You have to be able to accommodate that. For feature-length pictures, it’s like running a marathon. You have to pace yourself over a year. When I’m considering a film, that’s in the back of my mind. You have to really like the project. Also, you are frequently away from home. You go where the director is. I was working in Argentina for a year, a number of years ago. Before that, I was in Romania, and before that I was in London, and then after that about 2 years ago I was in New York for a year. If you’re married, you have to find ways of coping with that and that’s a whole chapter unto itself.

At the end of the film, it can be very disorienting when the work is suddenly finished. This is not exclusive to film editing; I’m sure it’s true of many other areas of human activity. Soldiers have this problem, actors who are acting in a play when the play is suddenly over, it’s like you’ve been cut loose: “Now what?!” This was never explained to me at film school. So when it first happened, I felt something was wrong with me. It’s the equivalent of a kind of seasickness; if you’ve never been on a ship before and somebody warns you about it, it’s okay. You’ll still feel just as sick, but you won’t feel like killing yourself. This is not that intense, but it is that kind of disorientation. And it passes, but it takes anywhere from two to six weeks to go away. During that time I would be very reluctant to try to decide what to do next. It’s like a love affair where you don’t want to bounce from one relationship to another; that’s dangerous. So, you should just let that project fade away and get back to normal, and then you can decide what to do next. We frequently don’t have the luxury of that, but that’s a goal.
That seems like a kind of mourning. When you work that long and with that intensity, you become attached to the film. When it's over, you've got to unattach yourself. That requires something very like mourning.
Wednesday, July 12, 2017
MAGA: A conspiracy of oligarchs vs. the rest of us?
Just a quick take: We know that prior to becoming President, Donald Trump was doing business in Russia. We now know that the Trump campaign – DJ Jr., Kushner, & Manafort – had a conversation with well-connected Russians about dirt on Hillary Clinton. We don’t yet know whether or not anything illegal has been done – expert opinion seems divided. But at the very least, it’s unseemly. Is this how to make America great again, collaborate with a nation that, not so long ago, was America’s fiercest rival?
But is this about nations, or just about the oligarchs and plutocrats that run them? We know that any self-respecting Russian oligarch is going to have an apartment in London, or New York, perhaps Singapore, or Dubai? The Chinese too? And folks in Jersey City, across the Hudson from Manhattan, have been getting exercised at son-in-law Jared’s sister dangling EB-5 visas before potential Chinese investors in their projects.
It’s looking like “Make America Great Again” is just the brand name under which a loose transnational gaggle of oligarchs manipulates politics in the USofA.
Meanwhile, I keep reading these articles about the waning of the nation-state as a vehicle for governance. The most recent of these talks about how states and cities in America are going around the federal government on climate change. That is to say, on this issue, they’ve decided to conduct their own foreign policy, and foreign policy, we know, has traditionally been the prerogative of the nation-state. That’s why nation-states exist, to conduct foreign affairs.
What’s it all mean?
Monday, July 10, 2017
Ted Underwood on Intellectual Genealogies: Distant Reading is Social Science, Not Digital Humanities [#DH]
Ted Underwood, “A Genealogy of Distant Reading”, DHQ Vol. 11, No. 2, 2017:
Abstract: It has recently become common to describe all empirical approaches to literature as subfields of digital humanities. This essay argues that distant reading has a largely distinct genealogy stretching back many decades before the advent of the internet – a genealogy that is not for the most part centrally concerned with computers. It would be better to understand this field as a conversation between literary studies and social science, initiated by scholars like Raymond Williams and Janice Radway, and moving slowly toward an explicitly experimental method. Candor about the social-scientific dimension of distant reading is needed now, in order to refocus a research agenda that can drift into diffuse exploration of digital tools. Clarity on this topic might also reduce miscommunication between distant readers and digital humanists.
Rather than attempt to summarize it myself, I’ll present a set of tweets by Alan Liu, starting with this:
Made 41 annotations in my copy of @tedunderwood's new, crucial DHQ piece, "A Genealogy of Distant Reading" https://t.co/CIuXj7uAdH.— Alan Liu (@alanyliu) July 10, 2017
Liu continues with a long series of tweets, which I’ll present as quotes without the Twitter format. Along the way I will present brief comments of my own, thus inserting my own concerns into the argument.
Liu begins:
Here are my top 13 quotes--a kind of thirteen ways of looking at distant reading, cited by paragraph number. (As it were: "Among twenty snowy mountains of texts, The only moving thing Was the eye of the distant reader"):

¶5: "The questions posed by distant readers were originally framed by scholars (like Raymond Williams and Janice Radway) who worked on the boundary between literary history and social science."

¶10: "these projects … pose broad historical questions about literature, and answer them by studying samples of social or textual evidence. I want to highlight the underlying project of experimenting on samples, and the premise that samples … have to be constructed"

¶21: "The crucial underlying similarity between [Radway & Moretti's] works, which has made both of them durably productive models for other scholars, is simply the decision to organize critical inquiry as an experiment."

¶22: "Distant reading is a historical science, and it will need to draw on something like Carol Cleland’s definition of scientific method, which embraces not only future-oriented interventions, but any systematic test that seeks 'to protect … from misleading confirmations.'"

¶22: "Literary historians who use numbers will have to somehow combine rigor with simplicity, and prune back a thicket of fiddly details that would be fatal to our reason for caring about the subject."

¶24: "I try not to join any debate about the representativeness of different samples until I have seen some evidence that the debate makes a difference to the historical question under discussion…. [S]amples are provisional, purpose-built things. They are not canons. It makes no sense to argue about their representativeness in the abstract, before a question is defined."

¶27: "Instead of interpreting distant reading as a normative argument about the discipline, it would be better to judge it simply by asking whether the blind spot it identified is turning out to contain anything interesting."

¶28: "Consensus about new evidence emerges very slowly: inventing an air-pump doesn’t immediately convince readers that vacuums exist…. But at this point, there is no doubt in my mind that literary scholarship turned out to have a blind spot. Many important patterns in literary history are still poorly understood, because they weren’t easily grasped at the scale of individual reading."
The insane confusion about culture that's behind too much contemporary thinking
From an essay by Walter Benn Michaels, The Myth of ‘Cultural Appropriation’:
The logic is on vivid display in a TV ad for Ancestry.com featuring a woman named Kim who pays her money, gets her DNA scan, and is thrilled to discover that she’s 23-percent Native American. Now, she says, while standing in front of some culturally appropriate pottery, "I want to know more about my Native American heritage." If the choice of Southwest-style cultural artifacts seems a little arbitrary, that’s because, as the Ancestry.com website warns you, the technology isn’t yet advanced enough to tell you whether you’re part Navajo or part Sioux. But, of course, that arbitrariness is less puzzling than the deployment of any artifacts at all. The point of Kim’s surprise is that she has no Native American cultural connection whatsoever; the point of those pots is that they become culturally appropriate only when they’re revealed to be genetically appropriate.

As befits an ad, Kim’s story is a happy one. But it could have gone differently. The genetic transmission of an appreciation for Navajo pottery could just as easily have turned out to be a genetically traumatic relation to the catastrophe of the Long Walk. What if Sam Durant had gotten himself an Ancestry.com saliva test and discovered that he, too, was part Native American? The bad news: Thirty-eight of his ancestors had been unjustly hanged; the good news: their hanging was part of his story after all.
Later, writing about sociologist Alice Goffman, who'd done fieldwork in a black neighborhood in Philadelphia:
Even when the experiences really are shared — when something actually did happen to us — we don’t think that autobiographical accounts of people’s own experiences are necessarily more true than other people’s accounts of those same experiences, or that only we have a right to tell our stories. No one thinks that either Goffman or the men she wrote about are the final authorities on their lives. My version of my life is just my version; no one is under any obligation to agree with it, much less refrain from offering his or her own.

So even our own stories don’t belong to us — no stories belong to anyone. Rather, we’re all in the position of historians, trying to figure out what actually happened. Interestingly, even if the logic of their position would seem to require it, the defenders of a racialized past haven’t been all that interested in confining historians to what are supposed to be their own stories. Maybe that’s because history (at least if it isn’t cultural) makes it harder to draw the needed lines. You obviously can’t understand the political economy of Jim Crow without understanding the actions of both white and black people. And you can’t understand the actions of those white and black people without reading the work of historians like (the white) Judith Stein and (the black) Adolph Reed.
And so:
The students at elite American universities come overwhelmingly from the upper class. The job of the faculty is to help them rise within (or at least not fall out of) that class. And one of the particular responsibilities of the humanities and social-science faculty is to help make sure that the students who take our courses come out not just richer than everyone else but also more virtuous. (It’s like adding insult to injury, but the opposite.)

Identity crimes — both the phantasmatic ones, like cultural theft, and the real ones, like racism and sexism — are perfect for this purpose, since, unlike the downward redistribution of wealth, opposing them leaves the class structure intact. [...]

The problem is not that rich people can’t feel poor people’s pain; you don’t have to be the victim of inequality to want to eliminate inequality. And the problem is not that the story of the poor doesn’t belong to the rich; the relevant question about our stories is not whether they reveal someone’s privilege but whether they’re true. The problem is that the whole idea of cultural identity is incoherent, and that the dramas of appropriation it makes possible provide an increasingly economically stratified society with a model of social justice that addresses everything except that economic stratification.

Now THAT's an interesting argument.
Sunday, July 9, 2017
Terrorists, Collateral Damage, and Trolley Problems
Moral philosophers in the analytic tradition like to run thought experiments of a kind known as the trolley problem:
There is a runaway trolley barreling down the railway tracks. Ahead, on the tracks, there are five people tied up and unable to move. The trolley is headed straight for them. You are standing some distance off in the train yard, next to a lever. If you pull this lever, the trolley will switch to a different set of tracks. However, you notice that there is one person on the side track. You have two options:

1. Do nothing, and the trolley kills the five people on the main track.
2. Pull the lever, diverting the trolley onto the side track where it will kill one person.
A recent movie, Eye in the Sky, posed a problem with a similar form. Instead of five people tied to a track we have five terrorists having a meeting inside a private house in Nairobi. Instead of a person tied to a sidetrack we have an innocent young girl selling bread on the street outside that same house. An explosion that kills the terrorists will likely kill the girl as well. Do we do it?
That’s the basic situation. In fact, things are more complicated. Three of the terrorists hold high-level leadership roles in the organization. The other two are suicide bombers who have just donned explosive vests. Presumably when the meeting is over they will go to public places and kill themselves, along with tens if not hundreds of others. So we can’t wait for the girl to leave. But, of course, we don’t really know about the timing of things.
As for “we,” we are several levels of military and civilian leadership in Britain and America, plus the remote pilot who flies the drone and actually executes the order to bomb the house. The drama lies in the back-and-forth decision-making and buck-passing running in counterpoint with events on the ground.
The house is bombed and the girl dies, but only by seconds. If she’d been a bit quicker, if the bomb had been released half a minute later, she’d have lived while the terrorists would still have been killed.
Presumably.
It's a good film.
Collective Computation
Joshua Sokol interviews Jessica Flack at the Santa Fe Institute. She says:
Collective computation is about how adaptive systems solve problems. All systems are about extracting energy and doing work, and physical systems in particular are about that. When you move to adaptive systems, you’ve got the additional influence of information processing, which we think allows a system to extract energy more efficiently even though it has to expend a little extra energy to do the information processing. Components of adaptive systems look out at the world, and they try to discover the regularities. It’s a noisy process.

Unlike in computer science where you have a program you have written, which has to produce a desired output, in adaptive systems this is a process that is being refined over evolutionary or learning time. The system produces an output, and it might be a good output for the environment or it might not. And then over time it hopefully gets better and better.
For example, the human brain:
The human brain contains roughly 86 billion neurons, making our brains the ultimate collectives. Every decision we make can be thought of as the outcome of a neural collective computation. In the case of our study, which was led by my colleague Bryan Daniels, the data we analyzed were collected during an experiment by Bill Newsome’s group at Stanford from macaques who had to decide whether a group of dots moving across a screen was traveling left or right. Data on neural firing patterns were recorded while the monkey was performing this task. We found that as the monkey initially processes the data, a few single neurons have strong opinions about what the decision should be. But this is not enough: If we want to anticipate what the monkey will decide, we have to poll many neurons to get a good prediction of the monkey’s decision. Then, as the decision point approaches, this pattern shifts. The neurons start to agree, and eventually each one on its own is maximally predictive.

We have this principle of collective computation that seems to involve these two phases. The neurons go out and semi-independently collect information about the noisy input, and that’s like neural crowdsourcing. Then they come together and come to some consensus about what the decision should be. And this principle of information accumulation and consensus applies to some monkey societies also.
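The two-phase picture, semi-independent accumulation followed by consensus, is easy to mock up. Here is a toy of my own devising (not the model Daniels and colleagues fit to the macaque data): a population of noisy accumulators drifts toward the correct answer independently, then a coupling term pulls each unit toward the population mean, at which point single units become individually predictive.

```python
# A toy of "neural crowdsourcing then consensus" (my sketch, not the
# actual model in the study): N noisy accumulators integrate a weak
# "rightward" signal independently, then a consensus coupling turns on.
import numpy as np

rng = np.random.default_rng(1)
N, T, drift = 100, 400, 0.02
x = np.zeros(N)                              # each neuron's running evidence
for t in range(T):
    coupling = 0.2 if t >= T // 2 else 0.0   # consensus phase begins midway
    x += drift + rng.normal(0, 0.5, N) + coupling * (x.mean() - x)
    if t in (T // 2 - 1, T - 1):
        phase = "pre-consensus " if t < T // 2 else "post-consensus"
        print(phase,
              "| poll of all neurons correct:", bool(x.mean() > 0),
              "| fraction of single neurons correct: %.2f" % (x > 0).mean())
```

Before the coupling turns on, the poll of all units is reliable while any single unit is only modestly predictive; afterward the units collapse toward their shared mean and each one, on its own, tracks the collective decision, which is the shift the quoted passage describes.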
Saturday, July 8, 2017
Images and Objectivity
Ryan Cordell has an interesting post, Objectivity and Distant Reading, in which he comments on Objectivity (2010) by Lorraine Daston and Peter Galison:
Objectivity attempts to trace the emergence of scientific objectivity as a concept, ideal, and moral framework for researchers during the nineteenth century. In particular, the book focuses on shifting ideas about scientific images during the period. In the eighteenth and early nineteenth centuries, Daston and Galison argue, the scientific ideal was “truth-to-nature,” in which particular examples are primarily useful for the ways in which they reflect and help construct an ideal type: not this leaf, specifically, but this type of leaf. Under this regime scientific illustrations did not attempt to reconstruct individual, imperfect specimens, but instead to generalize from specimens and portray a perfect type.

Objectivity shows how, as the nineteenth century progressed and new image technologies such as photography shifted the possibilities for scientific imagery, truth-to-nature fell out of favor, while objectivity rose to prominence.
And that's what interests me, the focus on images, and the rise of photography:
In debates about the virtues of illustration versus photography, for instance, illustration was touted as superior to the relative primitivism of photography—technologies such as drawing and engraving simply allowed finer detail than blurry nineteenth century photography could. Nevertheless photography increasingly dominated scientific images because it was seen as less susceptible to manipulation, less dependent on the imagination of the artist (or, indeed, of the scientist).
Images, of course, are clearly distinct from the prose in which they are (often) set. Images are a form of objectification, though it takes more than objectification to yield objectivity.
Cordell then goes on to discuss computational criticism (aka distant reading), where "computation is invoked as a solution to problems of will that are quite familiar from decades of humanistic scholarship." Computational critics
might argue that methods such as distant reading or macroanalysis seek to bypass the human will that constructed such canons through a kind of mechanical objectivity. While human beings choose what to focus on for all kinds of reasons, many of them suspect, the computer will look for patterns unencumbered by any of those reasons. The machine is less susceptible to the social, political, or identity manipulations of canon formation.
Interesting stuff. I've got two comments:
1) Consider one of my touchstone passages by Sydney Lamb, a linguist of Chomsky’s generation but of a very different intellectual temperament. Lamb cut his intellectual teeth on computer models of language processes and was concerned about the neural plausibility of such models. In his major systematic statement, Pathways of the Brain: The Neurocognitive Basis of Language (John Benjamins, 1999), he remarked on the importance of visual notation (p. 274): “... it is precisely because we are talking about ordinary language that we need to adopt a notation as different from ordinary language as possible, to keep us from getting lost in confusion between the object of description and the means of description.” That is, we need the visual notation in order to objectify language mechanisms.
Note that I think of objectification (in the sense immediately above) as a prerequisite for objectivity, but it is by no means a guarantee of it. Objectivity requires empirical evidence as well. A computer model will give us objectification, but no more.
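To make the point concrete, here is a toy distant-reading sketch (my illustration; the two placeholder "texts" stand in for a corpus). The program objectifies the texts, turning them into explicit, countable word tallies by one fixed mechanical procedure applied identically to canonical and non-canonical works alike:

    from collections import Counter
    import re

    def distant_read(corpus):
        """Tally word frequencies across a corpus with one fixed,
        mechanical procedure; the same procedure is applied to every
        text, canonical or not."""
        counts = Counter()
        for text in corpus:
            counts.update(re.findall(r"[a-z']+", text.lower()))
        return counts

    # Placeholder corpus; a real study would load thousands of texts.
    corpus = ["Call me Ishmael.", "It was a dark and stormy night."]
    print(distant_read(corpus).most_common(3))

The counting is thereby objectified, but whether any pattern in the counts means anything remains an empirical and interpretive question.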
2) Tyler Cowen has an interesting and wide-ranging interview with Jill Lepore in which she notes that Frederick Douglass was the most widely photographed man of 19th century America: "In the 1860s, he writes all these essays about photography in which he argues that photography is the most democratic art. And he means portrait photography. And that no white man will ever make a true likeness of a black man because he’s been represented in caricature — the kind of runaway slave ad with the guy, the little figure, silhouette of the black figure carrying a sack."
Friday, July 7, 2017
Bill McKibben on the new nation-states
But the Paris decision may also reshape the world for the better, or at least the very different. Consider: A few days after Trump’s Rose Garden reveal, California Governor Jerry Brown was in China, conducting what looked a lot like an official state visit. He posed with pandas, attended banquets—and sat down for a one-on-one meeting with President Xi Jinping, which produced a series of agreements on climate cooperation between China and California. (Trump’s secretary of energy, Rick Perry, was in Beijing the same week: no pandas, no sit-down with Xi.) It was almost as if California were another country. Call it a nation-state—a nation-state that has talked about launching its own satellites to monitor melting polar ice. A nation-state that has joined New York and a dozen others in a climate alliance to announce they will meet the targets set in the Paris accord on their own. A nation-state that already holds joint auctions with Quebec in its carbon cap-and-trade program. A nation-state that is convening hundreds of other “subnational actors” from around the world next year to pledge to keep the rise in global temperature below 2 degrees Celsius.
Subnationalism:
It’s ironic that global warming might be the wedge issue for the rise of “subnationalism.” After all, if you ever wanted an argument for world government, climate change provides it. But the United Nations has been trying to stop global warming since the days when we called it the greenhouse effect. And national governments, hijacked by the fossil fuel industry, have intervened again and again to obstruct any progress: The Kyoto treaty more or less collapsed, as did the Copenhagen talks. Paris “succeeded,” but only if you squint: The world’s nations vowed to keep the planet’s temperature increase to under 2 degrees Celsius, but their promises actually add up to a world that will grow 3.5 degrees hotter. The real hope was that the accord would spur private investment in renewable energy: And as the price of solar panels plummeted, in fact, China and India started to exceed their pledges.

Even that modest progress alarmed what energy expert Michael Klare calls the Big Three carbon powers: the United States, Saudi Arabia, and Russia. (Trump’s foreign policy looks more coherent, by the way, when viewed through this prism.) The United States has now pulled out of Paris, and an aide to Vladimir Putin has said the withdrawal makes it “perfectly evident” the pact is now “unworkable.”

So what’s a state like California to do? It can’t ignore climate change, which threatens its very existence. [...]

If you want to know who is serious about forging a new path on global warming, ignore all the airy proclamations about meeting the Paris targets—and instead pay attention to the cities and states making the very real and measurable pledge to go 100 percent renewable. California’s senate just passed such a commitment by a 2–1 margin. More dramatically, the day after Trump said he had been elected to serve “Pittsburgh, not Paris,” Mayor Bill Peduto announced that Pittsburgh will run entirely on clean energy by 2035. “If you are a mayor and not preparing for the impacts of climate change,” Peduto said, “you aren’t doing your job.” All told, 27 cities in 17 states have pledged to go 100 percent renewable—a move that puts them at direct odds with federal policy. Call them “climate sanctuaries.” San Francisco, Boulder, and Burlington won’t surprise you—but Atlanta and Salt Lake City and San Diego have done the same.
Thursday, July 6, 2017
Two Problems for the Human Sciences, and Two Metaphors
For as long as I can remember such things – back to my undergraduate years in the 1960s – humanists have been defending themselves and their work against all comers: politicians, scientists of all kinds, and disgruntled letter writers. And always the defense comes down to this: we provide a holistic and integrated view of what it is to be human in a world that is, well, just what IS the world like anyhow?
It’s a mug’s game and I refuse to play it. I was trained in the human sciences: hermeneutics AND cognitive science, history AND social science, and I’ve played jazz and rhythm and blues in seedy nightclubs, at ritzy weddings, and outdoors before thousands. It’s all good. It’s all come into play as I’ve investigated the human mind through music and literature.
In this essay I look at literature. First I consider literary form as displayed in ring form texts. Then I review the historical problem posed by Shakespeare and the rise of the European novel. My general point is that we need all our conceptual resources to deal with these problems. But let’s begin with an analogy: how do we understand, say, a cathedral?
The Cathedral Problem
Cathedrals are made of stone blocks, mortar, pieces of stained glass, lead strips, metal fittings, wooden beams and boards, and so forth. You can go through a cathedral and count and label every block and locate them on a (3D) map. You can do the same for the doors and cabinets, the plumbing, heating fixtures, and wiring, and so forth. You will now, in some sense, have described the cathedral. But you won't have captured its design. That’s difficult, and those who focus on it often use vague language, not because they like vagueness, but because, at the moment, that’s all that’s available.
And so it goes with literature and newer psychologies: cognitive science, evolutionary psychology, and neuroscience. My humanist colleagues keep hearing that they should get on board with the cognitive revolution and the decade of the brain. But it all sounds like trying to explain a cathedral by counting the building blocks, measuring the pitch of the roof, and analyzing the refractive properties of pieces of colored glass.
The advice may be well meant, but it isn’t terribly useful. It takes our attention away from the problem – how the whole shebang works – and asks us to settle for a pile of things we already know. Almost.
Ring Forms in Literature
I first learned of ring form in an article published in PMLA – the oldest literary journal published in the United States – back in 1976: “Measure and Symmetry in Literature” by R. G. Peterson. The idea is a simple one, that some texts, or parts of texts, are symmetrically arranged about a center point:
A, B … X … B’, A’
He produced many examples, from the Iliad through Shakespeare’s Hamlet to the “Author’s Prologue” of Dylan Thomas’s Collected Poems. But my interests, like those of most literary critics, were elsewhere, and so I merely noted the article and went on about my business.
I was reminded of this work some years ago when I entered into correspondence with the late Mary Douglas, a British anthropologist who rose to academic stardom – such as it was back in ancient times – after the 1966 publication of Purity and Danger: An Analysis of Concepts of Pollution and Taboo. She spent the last decade of her career immersed in the arcana of classical and Biblical studies, publishing monographs on the Book of Leviticus and the Book of Numbers and, in 2007, Thinking in Circles: An Essay on Ring Composition, based on a series of lectures she had delivered at Yale. Among other things, she argues that such forms aren’t special to the ancient world, that they continue in modern times – she offers Sterne’s Tristram Shandy as an example.
She opens her 10th chapter by referring to Roman Jakobson, one of the pioneering linguists of the 20th century, who believed, on the basis of extensive study, that such patterns reflect “a faculty inherent in the relation among language, grammar, and brain.” But why are such patterns so very difficult to recognize if they are so natural to us?
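For what it’s worth, the schema itself is easy to state mechanically. Here is a minimal sketch (mine; the labels are invented, and a matching primed section is represented by repeating its label) that checks whether a sequence of section labels is symmetric about a unique central X:

    def is_ring(sections):
        """Check the A, B ... X ... B', A' schema: an odd-length
        sequence that reads the same outward from its central section."""
        if len(sections) % 2 == 0:  # a ring needs a single central X
            return False
        return sections == sections[::-1]

    print(is_ring(["A", "B", "X", "B", "A"]))  # True: the frame mirrors about X
    print(is_ring(["A", "B", "C", "X", "A"]))  # False: the inner frame does not mirror

What makes real ring composition hard to recognize, of course, is that the "labels" are thematic parallels a reader must construe, not tokens a program can match.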
Wednesday, July 5, 2017
The sovereign state of New York?
Will the citizens of New York decide to amend the state's constitution to give the state greater independence from the federal government? From the NYTimes:
Every 20 years, New Yorkers have the chance to vote whether they want to hold a constitutional convention to amend, tweak or otherwise improve the founding document of the state.
For the past half-century, voters have demurred. This year, however, academics, good-government groups and others believe the outcome of the ballot question in November may be different. And — perhaps no surprise — it has something to do with the current occupant of the White House.
“Trump’s election emphasizes how valuable it is for states to chart their own course,” said Peter J. Galie, author of “Ordered Liberty: A Constitutional History of New York” and a professor of political science at Canisius College in Buffalo. “We can put a right to clean air and water in our Constitution. If we want to add more labor protections, we can do it. That’s the beauty of federalism.”
What about New York City separating from the rest of the state?
John Bergener Jr., a retiree who lives outside Albany, would like to see the separation of New York City from the rest of the state. As chairman of Divide NYS Caucus, a political committee, he believes a constitutional convention is the best mechanism to achieve that.
Upstate areas, he said, have suffered economically from excessive business regulations and unfunded mandates. His vision — and he claims thousands of supporters — calls for two or three autonomous regions, each with its own regional governor and legislature. (The upstate region, north of the lower Hudson Valley, would be called New Amsterdam.) A statewide governor would be titular, with the same “powers as the queen of England.”