Monday, August 31, 2020

Will the regulation of Big Tech backfire?

Cory Doctorow has a Twitter thread that starts here and continues until it ends on Tweet #39. The whole thing is worth reading.

Henry Rawlinson and reorganizing our understanding of historical time

Ali Minai, Henry Rawlinson and the Transformation of History, 3 Quarks Daily, August 31, 2020.

Rawlinson was a British civil servant who served in the Middle East in the 19th century, Iran in particular, and played a major role in deciphering ancient texts, thereby opening up a new world of archaeological and historical investigation that changed Europe's understanding of history.
Darwin’s ideas on evolution are often – correctly – seen as a critical factor in the transition to modern, secular thinking in the West, but the great archaeological discoveries which were happening in the same period (mid-19th century) surely played a significant role as well. They changed the whole horizon of our understanding of history. Indeed, the two things – the theory of evolution and the archaeology of ancient civilizations – can be seen as part of the same process: A process that suddenly extended the Western (and, ultimately, universal modern) view of time from a short period in the past that then disappeared into the mythical haze of Genesis or cosmic cycles to one where time stretches in a scientifically understandable way to billions of years, and where living things, peoples, civilization, and cultures emerge from physical – and, therefore, understandable – processes rather than by the handiwork of a deity or supernatural forces. The time-scales are different, but Darwin and the archaeologists eventually turned out to be fellow travelers in the journey towards a modern, scientific, materialistic view of the world that has ultimately enabled all the scientific and technological progress we see around us. Both changed history by changing history.

But it is also possible to turn this question around and ask: Why did it take so long for a scientific discipline of archaeology to arise? Why did the systematic rediscovery of ancient history in Mesopotamia, Egypt, Iran, India, and Central Asia have to wait until the colonial period? Great civilizations with erudite scholars and intrepid explorers ranged over these regions for millennia, surrounded by remarkable ruins, monuments, and inscriptions. True, some of these had been buried by sand and debris, but enough remained visible to have excited curiosity. Why did no one think to dig into mounds or decipher forgotten scripts?

There is anecdotal evidence of sporadic interest in ancient artifacts and ruins going as far back as ancient Egypt and Mesopotamia, but no systematic science of archaeology is apparent until the mid-18th century. Rulers did pay attention to the monuments of their predecessors – even those from long ago – with a view to emulating or outdoing them, but not with any desire for knowledge. Clearly, the civilizations of regions from India to Egypt did value knowledge: They made great contributions in astronomy, mathematics, chemistry, medicine, geography, and other disciplines. They did systematic studies of language, culture, recent history, and even the very processes of history. But, for some reason, the artifacts of the past did not interest them in the same way. [...]

The question is: Why? What was it in a diverse range of cultures that they remained content in their ignorance when history lay all around them? Was it disinterest in the past – an attitude that knowing more beyond what scripture and tradition said would be a waste of time, or that there was nothing to learn from vanished alien cultures except sic transit gloria mundi? Or was it something deeper – a different relationship with the world, and especially with time? Perhaps for some, it was a view of history as the work of God rather than the business of Man; for others, an inability to distinguish between history and legend. One may also speculate that the emergence of archaeology as a science required a transition from an eschatological view of history to a secular, material, and humanistic one – a view that was made possible by the same changes that made European colonialism possible, and that underlie the qualitative change in material progress since the Renaissance.

Whatever it was, something awoke in 16th century Europe that ignited a passionate interest in digging up and studying the past. Initially, it was an interest in “antiquities” unearthed all over Europe. Soon, pioneers like John Aubrey extended this to a larger scale study of megalithic sites such as Stonehenge and Avebury. The earliest European colonizers – the Spanish, Portuguese, and Dutch – showed little interest in history even as they marauded their way through lands rich in lost civilizations. It was with the arrival of Napoleon in Egypt that this changed, and the great age of European archaeology began. In the 150 years between Napoleon’s invasion of Egypt and the British departure from India and the Middle East, the great monuments of Egypt had been revealed in full; storied ancient cities like Babylon, Nineveh and Ur had been excavated; the Indus Valley Civilization had been discovered; Egyptian hieroglyphics and several cuneiform scripts had been deciphered and their languages understood; the initial family tree of the Indo-European languages had been drafted; and the understanding of history had been revolutionized utterly. What had been neglected for thousands of years was laid bare in little more than a century.

It is obvious that the scientific, rational revolution that swept Europe after the Renaissance transformed human understanding of the universe and Man’s place in it. One of the most profound transformations in this regard was a re-organization of time – ultimately at three levels: Human, geological, and cosmic. Before that, it was quite typical for an educated European to believe that the Universe, including the Earth, was created a few thousand years ago by an act of God and then populated with beasts and men. [...] There was little notion of a long human history, let alone a much longer prehistory. Distant times, like distant lands, were populated by fantasies akin to “here be dragons.” [...] And then, as mentioned earlier, came Darwin’s theory of evolution that not only stretched the age of the Earth by orders of magnitude but also linked humanity into the extremely ancient chain of life. [...] And here we are today, living with at least an abstract apprehension of billions of years in cosmic and geological time, and hundreds of thousands of years in human time.

Is this for real, on a Monday morning?

The distributed 'intelligence' of fungi

Ashutosh Jogalekar, Life. Distributed. 3 Quarks Daily, August 31, 2020:
...perhaps the most interesting quality of fungi lies not in what we can see but what we can’t. Mushrooms may grace dinner plates in restaurants and homes around the world, but they are merely the fruiting bodies of fungi. They may be visible as clear vials of life-saving drugs in hospitals. But as Sheldrake describes in loving detail, the most important parts of the fungi are hidden below the ground. These are the vast networks of the fungal mycelium – the sheer, gossamer, thread-like structure snaking its way through forests and hills, sometimes spreading over hundreds of square miles, occasionally being as old as the neolithic revolution, all out of sight of most human beings and visible only to the one entity with which it has forged an unbreakable, intimate alliance – trees. Dig a little deeper into a tree root and put it under a microscope and you will find wisps of what seem like even smaller roots, except that these roots penetrate into the tree’s roots. The wisps are fungal mycelium. They are everywhere; around roots, under them, over them and inside them. At first glance the ability of fungal networks to penetrate inside tree roots might evoke pain and invoke images of an unholy literal physical union of two species. It’s certainly a physical union, but it may be one of the holiest meetings of species in biology. In fact it might well be impossible to find a tree whose roots have no interaction with fungal mycelium. The vast network of fibers the mycelium forms is called a mycorrhizal network.

The mycorrhizal networks that wind their way in and out of tree roots are likely as old as trees themselves. The alliance almost certainly exists because of a simple matter of biochemistry. When plants first colonized land they possessed the miraculous ability of photosynthesis that completely changed the history of life on this planet. But unlike carbon which they can literally manufacture out of sunlight and thin air, they still have to find essential nutrients for life, metals like magnesium and other life-giving elements like phosphorus and nitrogen. Because of an intrinsic lack of mobility, plants and trees had to find someone who could bring them these essential elements. The answer was fungi. Fungal networks stretching across miles ensured that they could shuttle nutrients back and forth between trees. In return the fungi could consume the precious carbon that the tree sank into its body – as much as twenty tons during a large tree’s lifetime. It was the classic example of symbiosis, a term coined by the German botanist Albert Frank, who also coined the term mycorrhiza.

However, the discovery that fungal networks could supply trees with essential nutrients in a symbiotic exchange was only the beginning of the surprises they held. Sheldrake talks in particular about the work of the mycologists Lynne Boddy and Suzanne Simard who have found qualities in the mycorrhizal networks of trees that can only be described as deliberate intelligence. Here are a few examples: fungi seem to “buy low, sell high”, providing trees with important elements when they have fallen on hard times and liberally borrowing from them when they are doing well. Mycorrhizal networks also show electrical activity and can discharge a small burst of electrochemical potential when prodded. They can entrap nematodes in a kind of death grip and extract their nutrients; they can do the same with ants. Perhaps most fascinatingly, fungal mycelia display “intelligence at a distance”; one part of a huge fungal network seems to know what the other is doing. The most striking experiment that demonstrates this shows oyster mushroom mycelium growing on a piece of wood and spreading in all directions. When another piece of wood is kept at a distance, within a few days the fungal fibers spread and latch on to that piece. This is perhaps unsurprising. What is surprising is that once the fungus discovers this new food source, it almost instantly pares down growth in all other parts of its network and concentrates it in the direction of the new piece of wood. Even more interestingly, scientists have found that the hyphae or tips of fungi can act not only as sensors but as primitive Boolean logic gates, opening and closing to allow only certain branches of the network to communicate with each other. There are even attempts to use fungi as primitive computers.
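That talk of Boolean logic gates invites a toy illustration. Here is a minimal sketch in Python of how open/close gating at hyphal junctions could implement logic. Everything in it (the junction names, the network wiring) is my own invented illustration, not a model from Sheldrake's book or from the underlying research.

```python
# A toy model of hyphal tips as Boolean gates. Everything here (junction
# names, the network wiring) is an invented illustration, not a model from
# Sheldrake's book or the underlying research.

def and_junction(branch_a: bool, branch_b: bool) -> bool:
    """Passes a signal only when both incoming hyphae are open."""
    return branch_a and branch_b

def or_junction(branch_a: bool, branch_b: bool) -> bool:
    """Passes a signal when either incoming hypha is open."""
    return branch_a or branch_b

def network_output(wood_1: bool, water: bool, wood_2: bool) -> bool:
    # Nutrients reach the fruiting body if the first wood source AND water
    # are both available, OR if a second wood source has been found.
    return or_junction(and_junction(wood_1, water), wood_2)

if __name__ == "__main__":
    for wood_1 in (False, True):
        for water in (False, True):
            for wood_2 in (False, True):
                print(f"wood_1={wood_1}, water={water}, wood_2={wood_2} "
                      f"-> output={network_output(wood_1, water, wood_2)}")
```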

Sunday, August 30, 2020

Jurassic Park [Media Notes 44]

I suppose I’ve seen Jurassic Park four times, once in theaters when it came out (I think) back in the 1990s and then two or three times on the small screen, most recently over the last two days. That’s right, I didn’t watch it in one sitting, more like three or four. So I wasn’t looking for the full experience of the story, which I know well enough. I was just reacquainting myself with the film.

I came away with the sense that it would respond well to careful analysis and description of the sort that might require, say, at least 40 or 50 screen captures and 5K to 10K words, perhaps a diagram. I’d be on the lookout for ring-composition. I’m not at all sure I’ll find it, but this seems like the sort of film where at least looking would be worthwhile.

What do we have to work with? I’m not going to try to be systematic here. Obviously we’ve got dinosaurs, on the one hand, and computer technology on the other. Between those two, perhaps orthogonally, we’ve got the genetic tech that allowed Hammond’s team to create the dinosaurs and that, shall we say, is in opposition to the principle, “life will find a way,” enunciated by the mathematician, Ian Malcolm.

Then we have Hammond himself, and his grandchildren. Between them we have Drs. Grant and Sattler. Hammond wants his grandchildren to see the park and be amazed and overjoyed. Grant and Sattler save them, and everyone else, when things fall apart.

Hammond also wants to make money. I don’t think he’s greedy, certainly not Gordon “greed is good” Gekko greedy, but he wants to make money while presenting the world with this wonder. But he drove a hard bargain with his computer tech guy, Dennis Nedry, and Nedry does get greedy. His desire to make a buck is one factor in the collapse of the park.

The other, of course, is the weather; perhaps we think of the weather as the collective summation of natural phenomena. What is set in opposition to the weather? I’m inclined to think it’s something like the human capacity for construction, perhaps the park itself.

That’s our array of forces then. Now we have to deal with how they’re deployed in time. When do we know that things are going to fall apart? In one sense of course we know that when we enter the theatre; it’s that kind of film. Setting that aside, is it when our scientists interrupt the ride to enter the laboratory, or when they stop the SUVs to go off the course? How close to the center is the brachiosaurus mucus episode? Notice how that’s mirrored by the blinding of Nedry (computer guy) by the much smaller dinosaur. And so forth.

The whole thing is very nicely orchestrated. And of course, above all, we have T. Rex. What kinds of dinosaurs do we have, and what are the relationships among them?

Perhaps some day I’ll work on the film, but not now.

Meet Jibo, your robot companion


JIBO: The World's First Social Robot for the Home

Meet Jibo, The World’s First Social Robot for the Home. Friendly, helpful and intelligent. He can sense and respond, and learns as you engage with him. More details: https://jibo.com/
Jibo is a product of NTT Disruption, which seems to be headquartered in Madrid. “NTT”, however, stands for “Nippon Telegraph and Telephone Corporation” which is, of course, Japanese, with one-third of its shares owned by the Japanese government. It is not clear to me just how many legal entities stand between that Japanese ownership and the sales and marketing of little Jibo.

Jibo, however, was conceived and created by Cynthia Breazeal, formerly of MIT’s Media Lab. The video was created for a company she founded whose assets were eventually acquired by NTT.

See my recent post, AI, robots, and imitation.

Family life

Saturday, August 29, 2020

Wiring the brain from the inside [fantasies in direct brain-to-brain communication]

Move over, Elon Musk and Christof Koch, Rodolfo Llinás was first to enter the magical brain-to-brain communication sweepstakes. Back in the early 2000s he had the idea of communicating with the brain by threading nanowires through the arteries that feed it. He even obtained a patent on the nanowire technology (US Patent No. 8,447,392 B2). He explains the technology and its possibilities in this astounding video from 2008:

Wires in the Brain

Rodolfo Llinas tells the story of how he has developed bundles of nanowires thinner than spider webs that can be inserted into the blood vessels of human brains.

While these wires have so far only been tested in animals, they prove that direct communication with the deep recesses of the brain may not be so far off. To understand just how big of a breakthrough this is—US agents from the National Security Agency quickly showed up at the MIT laboratory when the wires were being developed.

What does this mean for the future? It might be possible to stimulate the senses directly - creating visual perceptions, auditory perceptions, movements, and feelings. Deep brain stimulation could create the ultimate virtual reality. Not to mention, direct communication between man and machine or human brain to human brain could become a real possibility.

Llinas poses compelling questions about the potentials and ethics of his technology.
The patent was granted in 2013. That's the most recent information I've been able to find about this technology. 

How to build a state

That's the title of a useful article by Anton Howes appearing in Works in Progress, 28 August 2020. The opening paragraphs:
It’s easy to imagine that governments were always as bureaucratic as they are today. Certain policies, like the widespread granting of monopolies in the seventeenth century, or the presence of a powerful landed aristocracy, seem like archaic products of a past that was simply more corrupt. The fact that governments rarely got involved with healthcare or education before the mid-nineteenth century seems the product of a lack of imagination, or perhaps yet another product of our ancestors’ venality – simply what happens when you put the war-hungry knights and nobles in charge.

But the bureaucratic state of today, with its officials involving themselves with every aspect of modern life, is a relatively recent invention. In a world without bureaucracy, when state capacity was relatively lacking, it’s difficult to see what other options monarchs would have had. Suppose yourself transported to the throne of England in 1500, and crowned monarch. Once you bored of the novelty and luxuries of being head of state, you might become concerned about the lot of the common man and woman. Yet even if you wanted to create a healthcare system, or make education free and universal to all children, or even create a police force (London didn’t get one until 1829, and the rest of the country not til much later), there is absolutely no way you could succeed.

For a start, you would struggle to maintain your hold on power. Fund schools you say? Somebody will have to pay. The nobles? Well, try to tax them — in many European states they were exempt from taxation — and you might quickly lose both your throne and your head. And supposing you do manage to tax them, after miraculously stamping out an insurrection without their support, how would you even begin to go about collecting it? There was simply no central government agency capable of raising it. Working out how much people should pay, chasing up non-payers, and even the physical act of collection, not to mention protecting that treasure once collected, all takes substantial manpower. Not to mention the fact that the collecting agents will likely siphon most of it off to line their own pockets.

As a monarch in 1500, you would be forced to rely heavily on delegation. As the economic historian Jared Rubin emphasizes, every ruler requires agents to propagate their rule. These can take the form of big burly blokes with heavy weapons — your enforcers — and they can take the form of people spreading the ideology of your right to rule, lending your orders legitimacy and more generally spreading social norms of obedience — the local officials, jurists, and clerics. Crucially, these propagating agents needed to be kept on-side at all costs. Hence the tax exemptions for nobles, many of whom were rich enough to support their own private armies, and whose ancestors might have been granted such a privilege by one of your predecessors.
The demise of monarchies allowed the rise of the modern state:
Yet, ironically, it was when monarchs lost control that they did most to boost the capabilities of the centralised state. It was under Parliament, first in the 1650s when it briefly overthrew the monarchy, and then from the late 1680s following its deal with the usurping William III, that British state capacity began to most rapidly and inexorably grow. Likewise, in France, it was following the French Revolution that the steady rise of state capacity was boosted — it was then, over three centuries after the fact, that the perpetual tax exemptions for Joan of Arc’s village were finally rescinded.

Parliaments, as bodies of legitimising agents, despite their lack of representation in any modern democratic sense, had the unquestioned legitimacy with which to raise taxes, change policy, and undo the deals of previous monarchs. In the process, they often trampled on the ancient liberties of citizens and subjects. But, unlike monarchs, they found it much easier to force the changes through. When motivated by the needs of war — often the one thing members could agree on — parliaments in the eighteenth century were able to raise cash that would have been unfathomable to the monarchs of even a few decades before. And it was parliaments, also, that were eventually susceptible in the nineteenth century to the lobbying of those who wished the state to involve itself in areas like education, health, and policing.

Friday, August 28, 2020

Fight to the death



A Myth of Africa: Ritual Structure in Dusk of Dawn

The headnote aside, these notes date back to the 1990s. For those interested in formal matters, this is about ritual patterning in a piece of autobiographical non-fiction. [I'd originally posted this in 2010. I'm bumping it to the top of the queue because it's about time.]
Serendipity, fate, kismet, synchronicity, or just plain coincidence. Call it what you will.

I’d been planning to post these notes ever since I started my series on race in the symbolic universe. A couple of years ago Aaron Bady had a post at The Valve on the Western construction of Africa. And so this post of mine becomes something of an oblique counterpoint to that. While it too is about a Western construction of Africa, it is a specific construction, by a single individual, the great W. E. B. Du Bois. Du Bois was born in New England and died in Ghana. He was a Pan-Africanist whose need for a symbolic Africa was different from that which Bady describes in his post.

* * * * *

Du Bois, William Edward Burghardt. Dusk of Dawn. In W.E.B. Du Bois: Writings, ed. by Nathan Huggins, Library of America © 1986 by Literary Classics of the United States, Inc., New York, pp. 549-802. First published in 1940 by Harcourt Brace.

Ritual Structure

No autobiography is a simple chronological account of the facts. There is always a plot, an argument, some special pleading, a mythological/symbolic dimension. This is very obviously so with Dusk of Dawn -- Du Bois tells us as much several times, first in the opening "Apology."

I believe that Dusk of Dawn has an overall form corresponding to ritual structure as expounded by Van Gennep and Durkheim. The center section of the book, in which Du Bois describes his trip to Africa, corresponds to the marginal phase of ritual where the celebrants have left the secular world for the sacred but have not yet returned. It is in the center of the book, chapter 5 out of 9, that we find the romantic evocation of Africa.

Here’s how I explained this standard ritual structure in my essay on Sir Gawain and the Green Knight:
In “Two Essays Concerning the Symbolic Representation of Time” Edmund Leach has described the ritual structure of Durkheim's “states of the moral person” (Leach 1965a). They are: 1) secular life, 2) separation from the secular world and transition to 3) the marginal state where the ‘moral person’ is in a world discontinuous from the ordinary world, often being regarded as being dead, and from which a return to the secular is made by a process of 4) aggregation or desacralization, often symbolized by rebirth. Arnold van Gennep talks of separation, transition, and incorporation in The Rites of Passage (Van Gennep 1960). The ritual sequence involves two realms of being, the secular and the sacred, and is designed to order the transition of initiates between these two realms. The ontological problem is isomorphic to that of hypnosis. Secular life corresponds to ordinary waking consciousness; separation corresponds to induction; margin or transition corresponds to trance; and aggregation or incorporation to release – which leaves the person back at the initial state, ordinary consciousness, or secular life. Hypnosis and ritual both involve ontological transition.
This is the pattern that Northrop Frye, in his Anatomy of Criticism, associated with New Comedy in ancient Greece, and which Frye and C. L. Barber have used in analysing Shakespeare’s comedies. It is thus a pattern with a strong literary pedigree.

What’s surprising about finding it in Dusk of Dawn is that that work is not fiction; it is an autobiography, a work of fact. Fictions can be patterned to suit the needs of the author. Lives are not so readily patterned: what happened is what happened.

But one need not tell what happened in the order in which it happened. One can change the order in the telling while remaining truthful about the dates so that the reader can supply the chronological order. And that is what Du Bois does. He displaces his first trip to Africa, which took place later in his life, to an earlier part of his narrative. It is that displacement that signals to the reader that something special is going on here.

Here’s the chronology. The first four chapters take us to 1910. Then, roughly 15 pages into Chapter 5, Du Bois tells of his trip to Africa, which began late in 1923. Thus he has skipped 13 years in his chronology. This chapter and the next two are discursive and expository in nature, not narrative. It isn’t until chapter 8 that Du Bois rejoins the narrative, which he does at the point where he left off in Chapter 4, 1910.

Here then is the overall scheme of the book, with the secular/sacred designations being mine:
SECULAR

1. The Plot
2. A New England Boy and Reconstruction
3. Education in the Last Decades of the Nineteenth Century
4. Science and Empire

SACRED

5. The Concept of Race
6. The White World
7. The Colored World Within

SECULAR

8. Propaganda and World War
9. Revolution
The rest of this post consists of my notes on those three central chapters. While I’ve touched them up a bit here and there, they’re still pretty much raw notes. As such they jump around a bit and, in particular, invoke Freud in a way that makes sense to me – after all, they are my notes – but may not make sense to anyone else. For that I apologize. Still, you should be able to get the drift.

Friday Fotos: More SK8-boarding





Some varieties of culture and identity [in the shadow of the “West”]

I’ve been interested in cultural identity for a long time. I have written 60 (now 61) posts around and about it since April, 2010, when I posted about Shakespeare’s Caliban. That is one of six posts on the topic, Race in the Symbolic Universe, which I had originally written in the mid-1990s, in the early days of the web. My interest in cultural identity is older even than that.

By linking “tribe” or “nation” with cultural repertoire, the concept of cultural identity obscures the complexities of historical interaction through which people populate their cultural repertoires, often forcing individuals to belie their own experience, and reifying cultural forms and practices into a Procrustean prison.

In this post I consider four cases. The first two involve specific people, a White scholar and a Black poet, both American. Then I consider the situation of baseball in Japan and of Arabic mathematics in “Western” culture. I then take an excursion into Shakespeare’s London, and conclude by returning to those four cases.

Other people’s culture

This passage was first published in a post at The Valve in a symposium on Walter Benn Michaels back in 2006. Since The Valve is now defunct I have republished the whole post at New Savanna.

* * * * *

Let us consider an article in Krin Gabbard's anthology, Jazz Among the Discourses (1995), one of a pair of anthologies arguing that “jazz has entered the mainstreams of the American academy” (p. 1). The general purpose of the anthology is to help ensure that this new discipline is in harmony with the latest developments in postmodern humanities scholarship. One Steven Elworth contributed a paper examining the critical transformation of jazz into an art music: “Jazz in Crisis, 1948-1958: Ideology and Representation.”

In the course of his argument, Elworth offers this observation (p. 65):
The major paradox of all writing about culture is how to take seriously a culture not one’s own without reducing it to an ineffable Other. I do not wish to argue, of course, that one can only write of one's own culture. In the contemporary moment of constant cultural transformation and commodification, even the definition of one’s own culture is exceedingly contradictory and problematic.
While the entire passage is worthy of comment, I want to consider only the first sentence: Just what “culture not one’s own” is Elworth talking about? Since this article is about jazz I assume that jazz culture is what he’s talking about. While the jazz genealogy has strands extending variously to West Africa and Europe, jazz has been and continues to be performed by Blacks and Whites, before audiences both Black and White - though, in the past, these have often been segmented into different venues, or different sections of the same venue - yet the music is conventionally considered to be Black. That convention is justified by the fact that the music’s major creators have been overwhelmingly Black. Thus it follows that jazz culture is, as these conventions go, Black culture.

That convention leads me to infer that Elworth is White. I do not have any hard evidence for this assumption; I’ve never met the man, I've seen no photographs, and the contributor's blurb certainly doesn't indicate race. But the same set of conventions that dictate that jazz is Black music also make it unlikely that any Black scholar would refer to jazz culture as “a culture not one’s own.” It follows that Elworth is White, or, at any rate, not-Black.

I don’t know anything about Elworth beyond this article and a note indicating that, at the time of publication (1995), he was completing a doctorate at NYU. The fact that he is writing about jazz suggests that he likes it a great deal and knows more than a little about it. It is quite possible that he grew up in a house where folks listened to jazz on a regular basis. If not that, perhaps he discovered jazz while among friends or relatives and came to love it. He may also attend live performances, perhaps he is a weekend warrior, jamming with friends either privately or in public. He may well have been to weddings where a jazz band played the reception. He is comfortable with jazz; it is not exotic music. That is to say, it is unlikely that Elworth discovered jazz in some foreign land where no one speaks English, nor eats and dresses American style, nor knows anything of Mozart or Patsy Cline, among many others. Jazz is a routine and familiar part of Elworth’s life.

So why doesn't he think of it as his culture? Why must he caution himself (and us) against “reducing it to an ineffable Other”?

An African-American sonnet

This example comes from the opening of Hollis Robbins’ new book, Forms of Contention: Influence and the African American Sonnet Tradition (2020) p. 2.

* * * * *

Trethewey’s reading at Duke was moving: she recited poems and spoke about her family and her early years in Mississippi. She touched on her use of the sonnet form, a “received European form,” and spoke of using the master’s tools to dismantle the master’s house, as Audre Lorde put it. The audience was transfixed.

I asked the first question after Trethewey finished and the applause died down. I said that I loved her work and her reading. But why was she still characterizing the sonnet as a form received from white people? Given the long list of African American sonnet writers–I counted them off on my fingers: Paul Dunbar, James Corrothers, Leslie Pinckney Hill, Claude McKay, Countee Cullen, Sterling Brown, Gwendolyn Brooks, Margaret Walker, Robert Hayden, Rita Dove–how many sonnets have to be written before someone will say she received the form from a black poet? How long until a poet says she received the sonnet from Natasha Trethewey?

Trethewey was silent for a long moment and then said, quietly, “You’re right. I can’t say that anymore. I received the form from Gwendolyn Brooks.” She spoke about Brooks, very movingly, and then paused, contemplative. Nobody asked any more questions. The audience got up and moved to the lobby. I spoke to Trethewey near the podium and she thanked me, but there was a tension between us too. My question pointed to a difficult truth in African American poetry: that too many ancestors have been forgotten. Too much nineteenth- and twentieth-century poetry was banished from the black poetry canon in the 1960s because it was too “traditional.” The erasure of these poets long meant the erasure of their legacy.

Japanese baseball

Let’s set those American cases aside for a moment and consider Japanese baseball. I will say upfront that I have never been to Japan. Much of what I know about Japanese culture comes from watching anime and reading manga, in English translation. But certain facts seem obvious enough.

The game was invented in the United States in the nineteenth century. The small city where I currently live, Hoboken, New Jersey, claims to be the home of baseball and has a plaque at 11th Street and Washington Ave. declaring that. The claim is based on the fact that the first competitive baseball game was played in 1846 at Elysian Fields.

Baseball was introduced into Japan in 1872 and has been played there ever since. Professional baseball started in the 1920s but didn’t become successful until the 1930s. Currently the game is enormously popular.

My point is simple: Anyone born in Japan after, say, the mid-1930s would have grown up being familiar with the game. They may know that baseball originated in the United States, but as far as their experience goes, it is as Japanese as sushi and (European style) sailor uniforms for school children.

The complexity of erectus craftsmanship

Thursday, August 27, 2020

Barge coming down the Hudson [early morning]


Comparison of cultural and biological evolution

Frameworks for Identity [in America]

This is from my book on music, Beethoven’s Anvil: Music in Mind and Culture (2001), pp. 269-273. I’m working on a new post on identity and this will be useful background.


African Americans were naturally offended by white America’s obsession with their sexuality. Denunciation of such foolishness was one of Malcolm X’s themes. But this recognition is older than the Civil Rights era. After the Titanic sank in 1912, blacks began reciting a narrative poem in which a mythic black boiler man, Shine, escapes from the sinking ship and swims safely ashore. The ship’s captain attempts to keep Shine on board, first offering him money and then offering the sexual favors of white women, including his own daughter. Shine rejects all offers and remains steadfast in his determination to swim ashore. The poem thus rejects white evaluation of black character by depicting a white authority figure as being so depraved as to offer his daughter up to a boiler man for no rational purpose. This obsession, it implies, is a white folks’ problem and it’s about time they dealt with it.

Thus we have a white obsession born out of emotional repression and the black critique of that obsession. That critique reflects common sense grounded in a black frame of reference, not a white one.

The black rejection of white mythology is grounded in a black sense of cultural and social identity that is different from the identity assigned to black folks by whites. Identity in this sense is about how individuals feel themselves connected to the broader currents of history. Few of us ever become one of the “great men” whose acts get recounted in history books, nor do many of us have personal acquaintance with them. But we identify with nations, and national histories are typically told through stories of the deeds of these great men, both real and legendary. Our national identity is our means of connecting with history.

This realization of identity emerges with striking force in Ken Burns’ recent documentary history of jazz. Much of the narration and expert commentary is about how jazz is America’s music. Not only did jazz originate in America but, as Wynton Marsalis says at the opening of the first episode, “jazz music objectifies America” (Episode One, Jazz: A Film by Ken Burns). With that statement and with countless others that follow we’re told that jazz captures some American essence, that it is what Americans are as a people. When you listen to jazz, you listen, not only to an American voice, but to the voice of America.

This nationalist concern with the nature of America and its people dates back to the early 19th century, when American intellectuals began to weave a new mythology about “an individual emancipated from history, happily bereft of ancestry, untouched and undefined by the usual inheritances of family and race; an individual standing alone, self-reliant and self-propelled,” an individual who was thereby separated from Europe (Lewis, American Adam, 1955, p. 5). As David Stowe has shown, jazz became woven into this fabric during the 1930s, in part as “a musical thumbing of the nose at fascism, whose Nazi theorists regarded swing as a debased creation of Jews and blacks” (Swing Changes, 1994, p. 53). Once installed in the temple of Americanism jazz remained there.

But not without considerable tension and, with the publication of Amiri Baraka’s Blues People in 1963, open conflict and dissension. Baraka, a very angry, militant and brilliant black poet and playwright, saw African-Americans as blues people and jazz as a son of the blues. As such it was essentially black music, but one “that offered such a profound reflection of America that it could attract white Americans to want to play it or listen to it for exactly that reason. ... It made a common cultural ground where black and white America seemed only day and night in the same city...” (Blues People, 159-60). The ethnic and national character of jazz has been a matter of open contention ever since.

As I suggested above, the question of jazz’s “identity” is insoluble precisely because it is framed in nationalist terms. Long before it became assimilated to the concept of the nation-state, “nation” was used in English to designate a racial group. In some contexts, it still retains the sense of racial or ethnic identity. The concept of a nation thus tends to work against the cultural plurality that has been critical in the origins and development of jazz and its descendants.

There is no doubt, for example, that Ellington thought of himself as a “race man,” to use the term of the times. It was as a race man that, in 1943, he wrote “Black, Brown, and Beige” as a musical portrait of (in the terminology of the day) Negro America. Like many African Americans, Ellington had little difficulty in thinking of himself as simultaneously a Negro, a race man, and a patriotic American citizen. In the words of Hannah Nelson, who was interviewed by John Langston Gwaltney for his study of “core black culture,”
I think it was Frederick Douglass who said we were a nation within a nation. I know that will probably bother your white readers, but it is nonetheless true that black people think of themselves as an entity. . . . We are a nation primarily because we think we are a nation. This ground we have buried our dead in for so long is the only ground most of us have ever stood upon. . . . Most of our people are remarkably merciful to Africa, when you consider how Africa has used us. (Drylongso, 1993, p. 5)
Clifford Yancy, “a prudent grandfather in his later fifties” also interviewed by Gwaltney, expressed the practical significance of this separateness:
White people and black people are both people, so they’re alike in most ways, but they don’t think the same about some things. Your white man might be a little weaker, but that’s just because they generally have easier work. I think they are probably as smart as we are because I have seen them doing any kind of work that any of us can do. Now, some of these young white boys might get a job they can’t handle just because they know somebody, but, I mean, an experienced white man can do anything an experienced black man can do. I go by what I see going down out here and that’s the way it looks to me.
This statement is perfectly matter of fact in its assumption that black performance is the standard by which all performance is to be judged. Yancy is not worried about what white people think about him. He has his own frame of reference. That is what it means to be a member of a nation within a nation: it means you have your own frame of reference.

That’s what the discussion of jazz, black, white, and America is about: What is the framework for discussion, analysis and judgment? Who says so? The history of jazz is also a history of arguments about the nature of jazz. Is swing real jazz, or is New Orleans the only true jazz? Is bebop jazz or is it just a commercial stunt, as some of its detractors thought? Is Ornette Coleman playing jazz, or is he playing all those wrong notes because he can’t play the saxophone? Does jazz-rock fusion qualify as jazz or not? At every point in its history, critics, journalists, fans and scholars have argued about the nature of true jazz. Blues, rock and hip hop have occasioned similar arguments. To the extent that these discussions have taken place in trade journals, fan magazines, and the commercial press, they shade into the need to commodify the music so that it can be marketed to the appropriate consumers.

These arguments are rarely merely about separating apples from oranges so you can price them right. They are moral arguments. Jazzers argued that rock was inferior and immoral and rockers countered that jazz is old and dried-up. These arguments are passionate because they are about personal and cultural identity.

Such arguments are impossible in the social and cultural worlds we looked at in Chapter 9. Those societies may have recognized differences between children’s and adult music, men’s and women’s songs, songs belonging to particular clans, and so forth. But all of the songs are in the same tradition, the only one members of these homogeneous societies feel they must answer to. Their identity is not at issue—an innocence that is rapidly vanishing as these societies become absorbed into the political life of larger nation-states. In complex cultural pluralities like the United States, people have to negotiate a new identity for each arena in which they function: home, church, work, social club, political party, whatever. They have many bodies of music to choose from. Musical preference thus becomes a matter of personal choices that imply specific connections to the larger currents of history.

This is the world in which jazz has sought its way. Not only did it emerge from a diversity of rags, blues, tangos, marches, ballads, Broadway tunes, and so forth, but it feeds into a similar diversity. Beyond this, we must consider the evidence presented by Charlie Keil and his students, who have found that individuals listen to a variety of different kinds of music. Someone who likes jazz may also like country and western and a bit of classical. This variety of musical interests is no doubt abetted by the ready availability of recordings, which didn’t happen until the first quarter of the 20th Century. But it is quite clear that the musical ensembles in the 19th century—brass bands, string bands, minstrel shows—all played a variety of musical styles.

Frameworks of ethnic and national identity set up social boundaries. Even as memes migrate across these boundaries to serve people’s emotional and physical needs, thereby reducing the differences between groups, the need to maintain boundaries asserts itself. It also results in new musical styles as black Americans continue to create music they can think of as specifically theirs. This is the mechanism that Amiri Baraka identified in Blues People. It is the mechanism that has been driving American popular culture through the 20th Century and into this one.



Note: I have elaborated on Baraka's dynamic in my article, Music Making History: Africa Meets Europe in the United States of the Blues, in Nikongo Ba'Nikongo, ed., Leading Issues in Afro-American Studies. Durham, North Carolina: Carolina Academic Press. pp. 189-233,  https://www.academia.edu/8668332/Music_Making_History_Africa_Meets_Europe_in_the_United_States_of_the_Blues

An old rusted furnace, with tires and graffiti [Bergen Arches]

Wednesday, August 26, 2020

What does Islamic inheritance law have to do with Algebra?

Multi-modal transformer architecture

The Great Stagnation, another voice heard from, Jason Furman

Tyler Cowen interviews Jason Furman on a variety of things. On productivity:
COWEN: If investment is important, does that make you an economic pessimist, given the apparent end of the savings glut from Asia?

FURMAN: I think I’m an economic realist. You look at Robert Gordon, and he’s been turned into this big pessimist, this pole of prediction that’s this negative. We’ve run out of ideas. Nothing ever is going to happen. It turns out, if you just look at average productivity growth over the last 50 years and assume that’s what you’re going to get over the next 50 years, we’re going to have a GDP growth rate that is around 2 percent, maybe a little bit below it. And that’s just from continuing the last 50 years.

Now, the average of the last 50 years is better than the average of the last 15 years. So, in some ways, I’m an optimist. I think we’ll get back to our historical productivity growth. But the idea that we’re going to get way beyond that certainly could happen, but I don’t think it’s a reasonable forecast of what’s going to happen.

COWEN: Referring to Gordon, here’s another softball question. Why did productivity fall so much after either 1973 or in, say, the last 15 years?

[laughter]

This is the easy podcast.

FURMAN: Yep. I think there’s something. Less infrastructure investment, less R&D. There were some major projects in terms of highways in the 1950s, the lunar landing in the 1960s, et cetera. I’d like to think that was the answer to your question. I think the truth is, when you go and quantify that type of public investment, it’s a couple tenths of a percent of the answer.

In the recent period, I do think the increased concentration we’re talking about might be a tenth or two percentage point of the slowdown that we’ve seen. Some of it may be that you need to invert your question and ask not why did productivity slow down, but why was productivity so fast?

There’s a lot of very special things about the 1950s and the 1960s, just coming off of World War II. And in terms of ’95 to 2005, well, it’s a relatively short period of time. Stuff fluctuates. You have a period of faster growth. But broadly speaking, we’re in a world of 1.5 percent productivity growth if we’re lucky. Unfortunately, it’s been more like 1 [percent] lately, and that’s just what it is.

COWEN: What is your take on the degree of unmeasured benefits from the internet? High, low, intermediate?

FURMAN: Intermediate, but most of them were in the base 5, 10, 15 years ago — Google, online travel, online shopping, Wikipedia. It’s not like adding these are going to add to your measure of productivity growth in the year 2019 or the year 2018. They might add to the level of productivity, but they were in the level. They were in the base a while ago now. And they’ve just continued up. So, I certainly think there’s some things we’re not measuring.

A bunch of it, though, is about the use of our time. YouTube may be better than TV, but it’s only a bit better than TV, so you can’t just take the gross benefits of YouTube. You have to do the net benefits relative to the other activity. And then, if you’re being a paternalist, you could ask, “Is any of this a good use of our time?” But without even going there, I think you get decent-size impact on the level of well-being, a small impact on the growth of well-being over the last decade.
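Furman's gross-versus-net point is easy to miss, so here is a toy calculation. All of the numbers are invented for illustration; they are not estimates from the interview.

```python
# Toy illustration of the gross-vs-net point. All numbers are invented
# for illustration; they are not estimates from the interview.

hours_shifted = 2.0    # hours per day moved from TV to YouTube (assumed)
value_youtube = 6.0    # value in $/hour of watching YouTube (assumed)
value_tv = 5.0         # value in $/hour of the displaced TV watching (assumed)

# A naive account books the whole value of the new activity.
gross_benefit = hours_shifted * value_youtube
# The right measure is the value over and above the displaced activity.
net_benefit = hours_shifted * (value_youtube - value_tv)

print(f"gross: ${gross_benefit:.2f}/day, net: ${net_benefit:.2f}/day")
# gross: $12.00/day, net: $2.00/day
```

If YouTube is only a bit better than the TV it displaces, the net gain is a small fraction of the gross, which is why unmeasured internet benefits add more to the level of well-being than to its growth.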

The brain's "face" area is sensitive to touch as well as vision

If you want a nice primer on how the brain works

that ranges from the whole shebang down to the level of individual neurons and cortical layering, I recommend this long post by Tim Urban, Neuralink and the Brain’s Magical Future. As the title indicates, the post is headed toward an understanding of Neuralink, Elon Musk's brain-interface technology, and the company of the same name. It's organized into these parts:
Part 1: The Human Colossus
Part 2: The Brain
Part 3: Brain-Machine Interfaces
Part 4: Neuralink’s Challenge
Part 5: The Wizard Era
Part 6: The Great Merger
[Image: Wizard's Hat from Disney's Fantasia]
By all means read the whole thing. But it gets a bit wonky with part 5, which is about a whole brain interface. That's best thought of as science fiction. Here's the idea.
Magic has worked its way from industrial facilities to our homes to our hands and soon it’ll be around our heads. And then it’ll take the next natural step. The magic is heading into our brains.

It will happen by way of a “whole-brain interface,” or what I’ve been calling a wizard hat—a brain interface so complete, so smooth, so biocompatible, and so high-bandwidth that it feels as much a part of you as your cortex and limbic system. A whole-brain interface would give your brain the ability to communicate wirelessly with the cloud, with computers, and with the brains of anyone with a similar interface in their head. This flow of information between your brain and the outside world would be so effortless, it would feel similar to the thinking that goes on in your head today. And though we’ve used the term brain-machine interface so far, I kind of think of a BMI as a specific brain interface to be used for a specific purpose, and the term doesn’t quite capture the everything-of-everything concept of the whole-brain interface. So I’ll call that a wizard hat instead.
Um, err, no. Color me skeptical, very skeptical.


This post captures my skepticism: Why we'll never be able to build technology for Direct Brain-to-Brain Communication. Of course, there's been quite a bit of work on coupling with the brain, and I've got a handful of posts on the subject.

Is that where the future is?

AI as platform [Andreessen]: PowerPoint Squared and beyond

Update: September 2, 2020



Last December Kevin Kelly and Mark Andreessen had a conversation: Why You Should Be Optimistic About the Future (December 12, 2019)

In that conversation Andreessen argued that his firm (Andreessen Horowitz) sees AI as a platform or an architecture, not a feature. I think he is correct. But I think there are two different ways, two different directions, that can happen. Judging from his remarks, he is thinking of an AI that looks OUTWARD from the machine and toward the world. This ultimately leads to something like the Star Trek computer.

I think it is also possible to create an AI that looks INWARD toward the computer itself. This ultimately leads to AI as the operating system coordinating all the activity on the platform. That’s what this piece is about.

Note that I DO NOT think of these as conflicting visions. On the contrary, they are complementary. We must and will pursue both avenues. But they involve different technology. This post is about AI platforms that look inward toward the machine itself. I’ll cover outward looking AI platforms (the Star Trek computer) in a separate post.

But first let’s listen to Andreessen.

Andreessen’s remarks, AI as platform

Here’s a lightly edited transcript of those remarks (starting at roughly 07:23):
I think that the deeper answer is that there’s an underlying question that I think is an even bigger question about AI that reflects directly on this, which is: Is AI a feature or an architecture? Is AI a feature? We see this with pitches we get now. We get the pitch and it’s like here are the five things my product does, right, points one two three four five and the, oh yeah, number six is AI, right? It’s always number six because it’s the bullet that was added after they created the rest of the deck. Everything is gonna kind of have AI sprinkled on it. That’s possible.

We are more believers in a scenario where AI is a platform, an architecture. In the same sense that the mainframe was an architecture or the minicomputer was an architecture; the PC, the internet, the cloud each has an architecture. We think AI is the next one of those. And if that’s the case, when there’s an architecture shift in our business, everything above the architecture gets rebuilt from scratch. Because the fundamental assumptions about what you’re building change. You’re no longer building a website or you’re no longer building a mobile app, you’re no longer building any of those things. You’re building an AI engine that is, in the ideal case, just giving you the answer to whatever the question is. And if that’s the case then basically all applications will change. Along with that all infrastructure will change. Basically the entire industry will turn over again the same way it did with the internet, and the same way it did with mobile and cloud and so if that’s the case then it’s going to be an absolutely explosive....

There are lots and lots of sort of business applications ... where you type data into a form and it stores the data and later on you run reports against the data and get charts. And that’s been the model of business software for 50 years in different versions. What if that’s just not needed anymore? What if in the future you’ll just give the AI access to all your email, all phone calls, all everything, all business records, all financials in the company and just let AI give you the answer to whatever the question was. You just don’t go through any of the other steps.

Google’s a good example. They’re pushing hard on this. The consumer version of this is search. Search has been, you know, the ten blue links for 25 years now. What Google’s pushing toward, they talk about this publicly, it’d just be the answer to your query, which is what they’re trying to do with their voice UI. That concept might really generalize out, right, and then everything gets rebuilt.
I think Andreessen is right, that AI will become a platform, or an architecture, rather than just one feature among others in an application.

His remarks clearly indicate that he is looking outward from the machine and toward the world. “You’ll just give the AI access to all your email, all phone calls, all everything, all business records, all financials in the company” – that’s looking toward the world, the company itself and the larger business environment. That AI is going to be running a model of the business. When he talks about Google wanting its engine simply to present the user with the answer to their query, that’s moving in the direction of the computer as a living universal reference source. When you turn that up to eleven it becomes the Star Trek computer.

Let’s put that aside. It’s going to happen. But, as I indicated at the top, I want to look in a different direction.

What are computers good at? What’s their native environment?

Why direct the AI inward, toward the computing environment? Several reasons. In the first place, we know that dealing with the external physical world is difficult for computers. Visual and auditory recognition are ill-defined, open-ended, and computationally intensive. Moreover, though I’m not familiar with the work (and thus cannot cite it) I know there has been a lot of work on automatic and semi-automatic code generation, evolutionary computation, and so forth, which seems directly relevant to managing the computing environment.

AI systems are computational systems, the world of computation is their native environment, unlike the external physical world. Why not take advantage of this?

PowerPoint Squared

Back in 2004 I dreamed up a natural language interface for PowerPoint. That was before machine learning had blossomed as it has in the last decade [1]. And so I imagined an AI engine with two capacities: 1) a basic hand-coded natural language functionality to support simple ‘conversations’ with a user, and 2) the ability to ‘learn’ through these conversations. I called it PowerPoint Assistant (PPA).

Described in that way it sounds like AI-as-a-feature, not as a platform, and developed with aging technology. Bear with me: the purpose of PPA was to take an out-of-the-box PowerPoint and customize and extend it to meet the specific requirements of a user, in fact, of a community of users. Thus, as I read over that original document [2] I find it easy to conceive of this assistant as the platform.

Here’s the abstract I wrote for a short paper setting forth the idea [2]:
This document sketches a natural language interface for end user software, such as PowerPoint. Such programs are basically worlds that exist entirely within a computer. Thus the interface is dealing with a world constructed with a finite number of primitive elements. You hand-code a basic language capability into the system, then give it the ability to ‘learn’ from its interactions with the user, and you have your basic PPA (PowerPoint Assistant).
Yes, I know, that reads like PPA is an add-on for good old PowerPoint, so AI-as-feature. But notice that I talk of programs as “worlds that exist entirely within a computer” and of the system learning “from its interactions with the user.” That’s moving into AI-as-platform territory.
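To make those two capacities concrete, here’s a minimal sketch in Python. The commands and the actions they trigger are stand-ins of my own invention, not PowerPoint’s actual API; the point is the division of labor: a small hand-coded vocabulary, plus learned mappings from a user’s own phrasing onto known commands.

```python
# Minimal sketch of the two capacities in the PPA abstract: a small
# hand-coded command vocabulary, plus the ability to 'learn' the user's
# phrasing during the conversation. The actions just print; a real
# assistant would drive PowerPoint itself. All names are hypothetical.

actions = {
    "new slide":    lambda: print("[creating a blank slide]"),
    "add title":    lambda: print("[adding a title box]"),
    "insert chart": lambda: print("[inserting a chart]"),
}
learned = {}  # user's phrasing -> known command

def ppa(utterance: str, clarification: str = None) -> None:
    """Run a command, or learn the user's phrasing from a clarification."""
    cmd = learned.get(utterance, utterance)
    if cmd in actions:
        actions[cmd]()
    elif clarification in actions:
        learned[utterance] = clarification  # remember this phrasing
        actions[clarification]()
    else:
        print(f"I don't know how to '{utterance}'.")

ppa("new slide")                      # hand-coded command, runs at once
ppa("gimme a chart", "insert chart")  # unknown phrasing; PPA learns it
ppa("gimme a chart")                  # now understood without help
```

Learning here is nothing more than remembering a mapping from the user’s words to a known command, but that is the basic move: the system’s competence grows out of the conversation itself.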

Tuesday, August 25, 2020

Hierarchies in resting-state brain activity


Abstract of the linked article:
The human cortex is characterized by local morphological features such as cortical thickness, myelin content, and gene expression that change along the posterior-anterior axis. We investigated if some of these structural gradients are associated with a similar gradient in a prominent feature of brain activity - namely the frequency of oscillations. In resting-state MEG recordings from healthy participants (N=187) using mixed effect models, we found that the dominant peak frequency in a brain area decreases significantly along the posterior-anterior axis following the global hierarchy from early sensory to higher-order areas. This spatial gradient of peak frequency was significantly anticorrelated with that of cortical thickness, representing a proxy of the cortical hierarchical level. This result indicates that the dominant frequency changes systematically and globally along the spatial and hierarchical gradients and establishes a new structure-function relationship pertaining to brain oscillations as a core organization that may underlie hierarchical specialization in the brain.
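The key measure in the paper is the dominant peak frequency of a region’s oscillations. For the curious, here’s a minimal sketch of how such a peak might be estimated, using a synthetic 10 Hz signal as a stand-in for real MEG data; this is my own illustration, not the authors’ pipeline.

```python
import numpy as np
from scipy.signal import welch

# Sketch: estimate the dominant peak frequency of one channel's
# oscillations. The synthetic 10 Hz 'alpha' signal, sampling rate, and
# frequency band are stand-ins, not the paper's actual method.

fs = 250                              # sampling rate, Hz
t = np.arange(0, 60, 1 / fs)          # 60 seconds of data
signal = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.randn(t.size)

freqs, psd = welch(signal, fs=fs, nperseg=2 * fs)  # 0.5 Hz resolution
band = (freqs >= 2) & (freqs <= 40)   # restrict to the oscillatory range
peak_freq = freqs[band][np.argmax(psd[band])]
print(f"dominant peak frequency: {peak_freq:.1f} Hz")  # ~10.0 Hz
```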

It's a crazy little world

Perhaps these conditions aren't mental disorders at all (anxiety, depression, PTSD)

What if mental disorders like anxiety, depression or post-traumatic stress disorder aren’t mental disorders at all? In a compelling new paper, biological anthropologists call on the scientific community to rethink mental illness. With a thorough review of the evidence, they show good reasons to think of depression or PTSD as responses to adversity rather than chemical imbalances. And ADHD could be a way of functioning that evolved in an ancestral environment, but doesn’t match the way we live today.

Adaptive responses to adversity 
Mental disorders are routinely treated by medication under the medical model. So why are the anthropologists who wrote this study claiming that these disorders might not be medical at all? They point to a few key points. First, that medical science has never been able to prove that anxiety, depression or post-traumatic stress disorder (PTSD) are inherited conditions.

Second, the study authors note that despite widespread and increasing use of antidepressants, rates of anxiety and depression do not seem to be improving. From 1990-2010 the global prevalence of major depressive disorder and anxiety disorders held at 4.4% and 4%. At the same time, evidence has continued to show that antidepressants perform no better than placebo.

Third, worldwide rates of these disorders remain stable at 1 in 14 people. Yet “in conflict‐affected countries, an estimated one in five people suffers from depression, PTSD, anxiety disorders, and other disorders,” they write.

Taken together, the authors posit that anxiety, depression and PTSD may be adaptive responses to adversity. “Defense systems are adaptations that reliably activate in fitness‐threatening situations in order to minimize fitness loss,” they write. It’s not hard to see how that could be true for anxiety; worry helps us avoid danger. But how can that be true for depression? They argue that the “psychic pain” of depression helps us “focus attention on adverse events... so as to mitigate the current adversity and avoid future such adversities.”

If that sounds unlikely, then consider that neuroscientists have increasingly mapped these three disorders to branches of the threat detection system. Anxiety may be due to chronic activation of the fight or flight system. PTSD may occur when trauma triggers the freeze response which helps animals disconnect from pain before they die, and depression may be a chronic activation of that same freeze response.
The article linked in the first paragraph: Kristen L. Syme, Edward H. Hagen, Mental health is biological health: Why tackling “diseases of the mind” is an imperative for biological anthropology in the 21st century, Yearbook Phys Anthropol. 2020;171 (Suppl. 70):87–117, https://doi.org/10.1002/ajpa.23965
Abstract: The germ theory of disease and the attendant public health initiatives, including sanitation, vaccination, and antibiotic treatment, led to dramatic increases in global life expectancy. As the prevalence of infectious disease declines, mental disorders are emerging as major contributors to the global burden of disease. Scientists understand little about the etiology of mental disorders, however, and many of the most popular psychopharmacological treatments, such as antidepressants and antipsychotics, have only moderate‐to‐weak efficacy in treating symptoms and fail to target biological systems that correspond to discrete psychiatric syndromes. Consequently, despite dramatic increases in the treatment of some mental disorders, there has been no decrease in the prevalence of most mental disorders since accurate record keeping began. Many researchers and theorists are therefore endeavoring to rethink psychiatry from the ground‐up. Anthropology, especially biological anthropology, can offer critical theoretical and empirical insights to combat mental illness globally. Biological anthropologists are unique in that we take a panhuman approach to human health and behavior and are trained to address each of Tinbergen's four levels of analysis as well as culture. The field is thus exceptionally well‐situated to help resolve the mysteries of mental illness by integrating biological, evolutionary, and sociocultural perspectives.
FWIW, this line of thought is consistent with my sense of why I enter a melancholy phase at the beginning of the year. Here are three paragraphs from a post where I talk about that:
I do think my mood follows my productivity, or vice versa, and that’s what’s interesting. David Hays once conjectured that melancholy (aka depression) is the price you pay for large-scale reorganization of the sort implied by creativity. “Reorganization” is a term of art from William Powers’s account of the mind. It is so deeply embedded in that theory that there is, alas, no easy gloss.

Start out by thinking of learning. When you learn, your mind is also reorganizing on a large scale. Now think of mourning as well. When you mourn the loss of someone close you have to reorganize your mind for life without them. You don’t actually learn anything, but you have to figure out how to function in those situations where you would have gone to that person. It’s unlearning.

In order to be creative you must first unlearn on a large scale. That is accompanied by melancholy, and, on this view, it accounts for those troughs in my intellectual output, which flows mostly, though not entirely, through New Savanna. The unlearning creates some “slack” in the system. Once enough unlearning has taken place I can then think new thoughts and my productivity goes up. When the slack has been used up, it’s time for more unlearning.

Monday, August 24, 2020

Electric chives

Analogy as a force in cultural evolution


David Hays and I did a paper on metaphor that is relevant. We sketched out a neural argument that metaphor is one way the mind has of moving into new territory: William Benzon and David Hays, Metaphor, Recognition, and Neural Process, The American Journal of Semiotics, Vol. 5, No. 1 (1987), 59-80, https://www.academia.edu/238608/Metaphor_Recognition_and_Neural_Process.