Monday, September 30, 2019

Off in the distance, the George Washington Bridge

More on speech as computation [what disfluencies tell us]

I continue to think about language as the basic computational operation of the mind/brain. The idea is that this business of stringing words together into coherent, intelligible utterances is irreducibly computational; the linking of signifier to signified, that’s the basic “atom” of computation. Think of that as roughly analogous to a basic proposition in arithmetic as given in the tables for addition, subtraction, multiplication, and division. Stringing signifiers together into utterances is then roughly like performing arithmetic operations involving two or more of those basic propositions. Consider, for example, adding a two-digit number and a one-digit number: e.g. 25 + 8. In that particular case we invoke “5 + 8 = 13” and “1 + 2 = 3” in just the right way to yield “33”. That’s computation. And so is a multi-word utterance, but the computation is ‘hidden’.
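To make the analogy concrete, here is a minimal Python sketch, my own illustration rather than anything from the paper quoted below, of that carry procedure: each single-digit sum is a lookup in the addition table, and the loop strings those lookups together "in just the right way."

```python
def add_digitwise(a, b):
    """Add two non-negative integers one digit at a time,
    recording each single-digit 'table lookup' along the way."""
    steps = []                 # the basic "propositions" invoked
    carry, result, place = 0, 0, 1
    while a or b or carry:
        da, db = a % 10, b % 10        # current digit of each number
        s = da + db + carry            # one lookup in the addition table
        steps.append(f"{da} + {db} + {carry} = {s}")
        result += (s % 10) * place     # keep the ones digit
        carry = s // 10                # carry the tens digit
        a, b, place = a // 10, b // 10, place * 10
    return result, steps

total, steps = add_digitwise(25, 8)
# total is 33; steps records "5 + 8 + 0 = 13" then "2 + 0 + 1 = 3"
```

The point of the sketch is that the answer, 33, never appears in any single table lookup; it emerges only from sequencing the lookups correctly, which is what I mean by the computation being 'hidden' in a fluent utterance.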

The following passage from one of my papers on “Kubla Khan” [1] is about linguistic computation in that sense:
Nonetheless, the linguist Wallace Chafe has quite a bit to say about what he calls an intonation unit, and that seems germane to any consideration of the poetic line. In Discourse, Consciousness, and Time Chafe asserts that the intonation unit is “a unit of mental and linguistic processing” (Chafe 1994, pp. 55 ff. 290 ff.). He begins developing the notion by discussing breathing and speech (p. 57): “Anyone who listens objectively to speech will quickly notice that it is not produced in a continuous, uninterrupted flow but in spurts. This quality of language is, among other things, a biological necessity.” He goes on to observe that “this physiological requirement operates in happy synchrony with some basic functional segmentations of discourse,” namely “that each intonation unit verbalizes the information active in the speaker’s mind at its onset” (p. 63).

While it is not obvious to me just what Chafe means here, I offer a crude analogy to indicate what I understand to be the case. Speaking is a bit like fishing; you toss the line in expectation of catching a fish. But you do not really know what you will hook. Sometimes you get a fish, but you may also get nothing, or an old rubber boot. In this analogy, syntax is like tossing the line while semantics is reeling in the fish, or the boot. The syntactic toss is made with respect to your current position in the discourse (i.e. the current state of the system). You are seeking a certain kind of meaning in relation to where you are now.

Chafe identifies three different kinds of intonation units. Substantive units tend to be roughly five words long on average and, as the term suggests, present the substance of one’s thought. Regulatory units are generally a word or so long (e.g. and then, maybe, mhm, oh, and so forth), and serve to regulate the flow of ideas, rather than to present their substance. Given these durations, a single line of poetry can readily encompass a substantive unit or both a substantive and a regulatory unit.

The third kind of unit, fragmentary, results when one of the other types is aborted in mid-execution. That is to say, one is always listening to one’s own speech and is never quite sure, at the outset of a phrase, whether or not one’s toss of the syntactic line will reel in the right fish. If things do not go as intended, the phrase may be aborted. Fragments do not concern us, as we are dealing with a text that has been thought-out and, presumably, edited, rather than with free speech, which is what Chafe studied.
The starting and stopping, the disfluencies, all betray the operations of the underlying computation [2]. The goal of the computation is to produce a coherent, an intelligible, utterance. How is that judged? Both by how the speaker interprets the unfolding utterance and by how they judge the interlocutor’s response.

[1] William Benzon, “Kubla Khan” and the Embodied Mind, PsyArt: A Hyperlink Journal for the Psychological Study of the Arts, Article 030915, Published to the web on 14 November 2003.
http://www.psyartjournal.com/article/show/l_benzon-kubla_khan_and_the_embodied_mind.

[2] See my earlier post, Speech as computation [Trump's speaking], New Savanna, September 23, 2019, https://new-savanna.blogspot.com/2019/09/speech-as-computation-trumps-speaking.html.

Flower in a sidewalk planter in Hoboken

Beliefs about local impact of climate change affect beliefs about the desirability of geoengineering

Astrid Dannenberg & Sonja Zitzelsberger, Climate experts’ views on geoengineering depend on their beliefs about climate change impacts, Nature Climate Change 9, 769–775 (2019):
Abstract: Damages due to climate change are expected to increase with global warming, which could be limited directly by solar geoengineering. Here we analyse the views of 723 negotiators and scientists who are involved in international climate policy-making and who will have a considerable influence on whether solar geoengineering will be used to counter climate change. We find that respondents who expect severe global climate change damages and who have little confidence in current mitigation efforts are more opposed to geoengineering than respondents who are less pessimistic about global damages and mitigation efforts. However, we also find that respondents are more supportive of geoengineering when they expect severe climate change damages in their home country than when they have more optimistic expectations for the home country. Thus, when respondents are more personally affected, their views are closer to what rational cost–benefit analyses predict.
H/t Tyler Cowen.

Cuban music of several styles [a network, not a tree]

Shannon Sims (text), Discovering Cuba, an Island of Music, NYTimes, Sept 29, 2019:
Just an hour’s flight from the United States, Cuba is drenched in music. You hear it everywhere, emanating from bars or homes or religious ceremonies. For many visitors, Cuban music is defined by the traditional sounds of the Buena Vista Social Club or Celia Cruz. But Cuban music stretches far beyond those sounds; its roots draw on Africa and Haiti, France and Spain. Genres come together and break apart, like flocks of starlings at dusk, endlessly forming new shapes and sounds.

In an effort to better understand Cuba through its music, Todd and I traveled east from the capital city of Havana toward Santiago de Cuba, in the southeast. For 12 days, past potholes and beach towns and rolling green hills, we went in search of Cuba’s musical roots. We waited in the rain for midnight shows, ran out to central plazas to hear local orchestras, and tried not to creak the floorboards during intimate recording sessions.

With enough time we could have included salsa, son, hip-hop and other genres, and stopped in other spots famous for music — like Pinar del Río or Baracoa.
A network of styles and influences, not a tree:
Cuban music is often described as a tree, with various primary roots that supply life for many branches. But separating the island’s music into distinct genres is an inherently flawed task — they intertwine and cross. And it’s become trickier in recent years: Styles shift with increasing speed as Cubans dive into the possibilities provided by the internet. Across the island, we met musicians taking traditional sounds and twisting them, and finding new ways to reach an audience. Cuban music is in turbo mode.

“I wish you luck in trying to describe Cuban music with words,” Claudio laughed at me as we headed home that night in Gibara, after a stop for a pork sandwich. “The way to know Cuban music is to hear it for yourself.”
Admittedly, this account is informal, but it is consistent with what I know of musical styles in complex modern societies.



One musician's story:
Cimafunk’s story is typical of the musicians who come to Havana to try and make it. He grew up in western Cuba singing in the church and intended to become a doctor. After moving to Havana in 2011, he quickly fell into the lifestyle of a struggling artist, washing cars during the day and sleeping on friends’ couches at night. “Sometimes I’d play music in the park from 8 at night until 6 in the morning and then sleep on the Malecón,” he told me with a laugh. In 2014 he finally landed a spot in Interactivo, and sang with them before forming his own band; he still joins them for jam sessions from time to time.

The response was almost immediate. The band’s 2017 album, Terapia, with celebratory songs like “Ponte Pa’ Lo Tuyo” and “Me Voy,” won the biggest music awards on the island. Ned Sublette, a musician and Cuban music scholar who leads music tours of the island, says Cimafunk had “the hit of the year in Havana” with “Me Voy”: “It was just an absolutely irresistible song and inescapable.”

The band found a global audience by streaming its music; it has been signed by a Miami record label, and Billboard named Cimafunk one of the “10 Latin Artists to Watch in 2019.” Music critics often compare Cimafunk to James Brown.
Cimafunk's advice:
To really discover Cuban music, he said, you need to head to the countryside. “In Havana you can see a lot of people from a lot of places in Cuba making interesting stuff, but what you miss are the roots.”
For example, Los Muñequitos (notice the children):



Moving on:
The core of rumba is the clave, an instrument that to an outsider looks like two wooden sticks about the width and length of carrots. But the clave, in the hands of rumba musicians like Los Muñequitos, becomes a through-line from Africa to Cuba, and acts as the maestro of rumba, setting the pace and the tone of all other instruments, like the maraca shaker, or the batá drum, a Yoruba drum that stands upright on the ground and is slapped on the top.

Other percussion elements are usually added into a rumba composition, and soon it becomes a crowd of sounds, almost like a cascade of beats. Because rumba is polyrhythmic, with multiple rhythms happening at the same time in one song, to an outsider it can sound cacophonous and disorganized. But if you let your mind give up trying to find the rhythm, you have a better chance of actually finding it.
And not only in rumba: the clave is at the core of a lot of Afro-Cuban music, if not quite all. That's the clave that enters at about 00:43, the first thing we hear in addition to the vocal. That same rhythm is ubiquitous.

And so on for several more musical styles. There are photos and videos with the article (by Todd Heisler).

Sunday, September 29, 2019

Bristles on a large motorized cylindrical brush for cleaning streets and sidewalks

Cultural Evolution: Expressive culture before practical culture [Progress]

I'm jumping this to the top to accommodate some additional remarks about progress which I've placed at the end. [Latest update: Monday morning, 30 Sept., at 9:19 AM.]
* * * * * 
Poets are the hierophants of an unapprehended inspiration; the mirrors of the gigantic shadows which futurity casts upon the present; the words which express what they understand not; the trumpets which sing to battle, and feel not what they inspire; the influence which is moved not, but moves. Poets are the unacknowledged legislators of the world. 
– Percy Bysshe Shelley
The ‘collective mind’, the mesh, the cultural reticulum (to use my current terminology) is the medium in which culture evolves. By that I mean the “connected” minds/brains of individuals in a given society. I’ve put connected in scare quotes to indicate that the connection is not permanent, hard-wired, as it were, or necessarily synchronous. Face-to-face interactions are synchronous, but temporary, and the connection is through sound, sight, and possibly touch, odor, or even taste. But one can be connected through written documents, paintings, sound recordings, and so forth. In those cases, the connection with others will be asynchronous and could be quite scattered. And so forth.

Cultural beings thrive or fail in the mesh. Cultural being is my current term for the cultural analog of the biological phenotype. As such it is the mental trajectory of a collection of coordinators (the cultural analog of the biological gene). That is most obviously true for expressive culture – stories, dance, charms, pictures, music, and so forth – but true as well for all manner of practical devices and practices.

And, yes, I know this is a bit dense with strange terms. They’re all defined on this page with discussion scattered, alas, throughout this blog and in working papers.

Moving on, we’ve got to make a distinction. For the moment we’ll call it a distinction between expressive culture and practical culture. Expressive culture–song, dance, art, stories, religion, etc.–lives in the reticulum. It speaks to our need for meaning and coherence in the world. Practical culture, on the other hand, supports the body in one way or another, providing nutrition and protection. The devices and practices of practical culture, however, must be consistent with the designs of expressive culture.

Expressive culture leads

Not only that, but I posit a principle for consideration:
Expressive culture leads; the seeds of practical culture are always in expressive culture.
But is that true? I don’t know.

It was suggested to me by the following passage from David G. Hays, The Evolution of Technology, Chapter 5, “Politics, Cognition, and Personality”:
In reading to prepare to write this book, I have learned that the wheel was used for ritual over many years before it was put to use in war and, still later, work. The motivation for improvement of astronomical instruments in the late Middle Ages was to obtain measurements accurate enough for _astrology_. Critics wrote that even if the dubious doctrines of astrology were valid, the measurements were not close enough for their predictions to be meaningful. So they set out to make their instruments better, and all kinds of instrumentation followed from this beginning. (That from White, MRTe*). Metals were used for ornaments very early – before any practical use?
“In its original manifestation the compass was a divination, or future-predicting, instrument made of lodestone, which is naturally magnetic.” (George Basalla, p. 172; in BIBLNOTE*)
I suspect that we could get many further examples, up into the growth curve from rank 2 to rank 3.

In fact, someone in the future may look back on psychoanalysis and remark that its origin was in parapsychology – dreams were interpreted first for divination, second for diagnosis of pathology.

Here is my first point: The driving force behind progress in social organization, government, technology, science, and art is the need to control anxiety, to satisfy the brain's striving for understanding.

To take a political example: In the origin of government, is the key problem why men choose to follow leaders, or how men succeed in making themselves into leaders? [Not a sexist formulation; just the way things happened.] For most of my life, I took for granted the first answer. Recently I recognized the second problem and adopted it. Ethnographies (culture reports) from hunting-and-gathering societies show absolute egalitarianism. But more: They show an absolute unwillingness to rise above one's fellows in any respect whatsoever.

Here and there we find a clue as to the kind of child-rearing practices (sometimes brutal) that produce such adults.

Since the earliest community leaders were war leaders, who gradually came to exercise some authority between wars, perhaps the answer to the key question is this: The first leaders draw their ability to accept the responsibility of leadership from success in war, from religious experience, and from innate genius or special accidents of handling in early childhood.
And now to find more examples where practical technologies and practices are preceded by their appearance in expressive culture.

What does this imply for Collier/Cowen's Progress Studies?

I note in the first place that Hays's examples (in that first paragraph) are all old. I note, however, that in a fairly standard account the rise to Rank 3 culture in the West began in the visual arts with the sciences emerging later. The following table lists some representative figures and dates:


This is not quite the phenomenon Hays was pointing out, which is specific to particular devices, but it supports the general view that change in expressive culture precedes change in practical culture.

Just how prevalent is that dynamic and does it still hold in the modern world, and in what way? I'm thinking, for example, of Neil deGrasse Tyson's recent talk on Mars settlement. He notes that, despite all the inspirational rhetoric that accompanied it, the political motivation for the Apollo project came from Cold War competition with the Soviet Union. But he also pointed out an inspirational series of articles that appeared in Collier's Magazine between 1952 and 1954. What effect did they have on the nation's appetite for (and aspirations toward) venturing into space? I note as well that Walt Disney used his Sunday evening television program (which premiered in 1954) to evangelize for space exploration. I wonder if Tyson and others have been under-valuing or misunderstanding the role of that inspirational rhetoric. And then we have some anecdotes mentioned by Frederik Schodt in his history of Japanese robots, Inside the Robot Kingdom (Kodansha America 1990). In the early days of industrial robotics a new robot would be welcomed to the production line by a blessing ceremony officiated by a Shinto priest. Hmmmm.... Is THIS why "moonshot" has become a metaphor for high risk/high gain endeavors despite the fact that, technologically, the Apollo program wasn't all that risky? We pretty much had the technology in hand; we just had to develop the will to implement and deploy it.

Finally, consider the Technological Singularity, that point in the future when machines will surpass us in intelligence and all sorts of awesome things – whether good or bad (for us) it’s hard to tell – will come to pass. Judged as practical goal setting and prognostication it leaves something to be desired. Taken as expressive culture though, more akin to art and religion than to science and technology, it makes sense.

To see what I mean compare the state of our knowledge with respect to establishing permanent settlements on Mars with the state of our knowledge regarding computational superintelligence. As far as I can tell the only gap in our knowledge about Mars settlement concerns human ability, both mental and physical, to last. We have already sent humans to the moon and back, and we’ve landed exploratory unmanned vehicles on Mars where we’ve been able to guide them in doing useful work. Lastly, we have had humans orbit the earth in a space station for months at a time, in one case for a year. That’s a long time, but not as long as a Mars mission. In short, we know a lot. We also know that it will be very expensive.

But machine intelligence, that’s a roller coaster ride. Starting in the early 1950s the federal government funded research into machine intelligence. The objective was a practical one, to translate technical documents from Russian to English. That enterprise went bust in the mid-1960s and was defunded. Good old-fashioned symbolic AI was riding high in the early 1980s and commercial ventures were formed (e.g. Symbolics, to manufacture LISP machines). Then AI Winter set in. The business is now riding machine learning and neural nets in a truly spectacular up-phase. But a self-driving car is a long way from Artificial General Intelligence (AGI). We don’t really know how these marvelous sports do what they do. Common sense reasoning, one of the problems that laid waste to symbolic AI, is still problematic. We lack anything like a coherent theory that moves from sensory processing and motor action through cognition and speech to abstract reasoning. We have no theory on which to base our super-intelligence.

But wait! I get it. We don’t have to have such a theory. The computer will think it up for us.

Ha!

When it comes to creating human-class intelligence, we don’t have a viable theory. We don’t know what we’re doing. That’s very different from the situation we face in thinking about settling Mars. The idea of the Technological Singularity belongs to expressive culture, not engineering or scientific culture.* But who knows what practical developments will come out of this fundamentally expressive (religious) ambition.

Moreover, there are constraints on practical things

So, yes, as Hays suggests, we have anxiety and the need to control it through understanding. But, when it comes to the creation of practical devices, we have constraints as well. The devices, or social practices for that matter, must actually work as intended. And that, I suspect, imposes more constraints on them than the medium does on expressive culture. Expressive culture has more freedom to wander and explore than does practical culture, or so I’m maintaining at the moment.

And what of politics?

Expressive or practical culture? Something of both I would think.

All of this bears further investigation. 

More later.

* * * * *

*See, for example, Sabine Hossenfelder debunks "tech-bro monotheism".

Some amazing early morning light [Empire State building]

Saturday, September 28, 2019

From Roman war chariots to the gauge of US railroads

The US standard railroad gauge (width between the two rails) is 4 feet, 8.5 inches. That’s an exceedingly odd number. Why was that gauge used?

Because that’s the way they built them in England, and the US railroads were built by English expatriates.

Why did the English build them like that? Because the first rail lines were built by the same people who built the pre-railroad tramways, and that’s the gauge they used.

Why did “they” use that gauge then? Because the people who built the tramways used the same jigs and tools that they used for building wagons which used that wheel spacing.

Okay! Why did the wagons have that particular odd wheel spacing? Well, if they tried to use any other spacing, the wagon wheels would break on some of the old, long distance roads in England, because that’s the spacing of the wheel ruts.

So who built those old rutted roads? The first long distance roads in Europe (and England) were built by Imperial Rome for their legions. The roads have been used ever since. And the ruts in the roads? Roman war chariots first formed the initial ruts, which everyone else had to match for fear of destroying their wagon wheels. Since the chariots were made for (or by) Imperial Rome, they were all alike in the matter of wheel spacing.

The United States standard railroad gauge of 4 feet, 8.5 inches derives from the original specification for an Imperial Roman war chariot. Specifications and bureaucracies live forever. So the next time you are handed a specification and wonder what horse’s ass came up with it, you may be exactly right, because the Imperial Roman war chariots were made just wide enough to accommodate the back ends of two war horses. Thus, we have the answer to the original question.

Said Pops to the Sphinx...

To J. Hillis Miller, 2019: On the State of Literary Criticism

A new working paper, title above, abstract, table of contents, and introduction below. Download at:

Academia.edu: https://www.academia.edu/40466672/To_J._Hillis_Miller_2019_On_the_State_of_Literary_Criticism

* * * * *

Abstract: J. Hillis Miller is one of the premier literary critics in the American academy over the last half-century. He is a first-generation deconstructive critic. I studied with him in the 1960s at Johns Hopkins and then went a different way, toward cognitive science. This working paper consists of three documents: 1) A letter to the editor (of PMLA) responding to Miller’s 1986 President’s address, 2) a long open letter from 2015 in which I discuss structuralism, cognitive science, and computational criticism, and 3) a chronology sketching out parallel developments in literary theory and cognitive science from the 1950s through the end of the century.

Contents

J. Hillis Miller, at Johns Hopkins and beyond 3
Response to the President’s Address, 1986: On the Demise of Deconstruction 4
Paths Not Taken: An Open Letter to J. Hillis Miller 5
A Fork in the Road 5
Heart of Darkness 6
Apocalypse Now 8
Myth and Form 9
Literary Culture and History 10
The Land Before Us 14
Appendix: A Parallel Chronology of Literary Theory and Cognitive Science 15

J. Hillis Miller, at Johns Hopkins and beyond

In my first semester at Johns Hopkins I took a course on the modern British novel. It was taught by J. Hillis Miller. I can’t say that I remember much about him; after all, 1966 was a while ago. I do remember that the course was taught in one of those amphitheater style lecture halls, a rather old one. And I remember three of the texts we read: E. M. Forster, A Passage to India (a revelation), Henry James, The Ambassadors (a snore), Joseph Conrad, The Secret Agent (???) – there must have been a half dozen more, but I don’t recall what they were.

I remember a bit more about the graduate seminar I audited in the fall of 1969. By that time, of course, I’d become acclimated to literary criticism at Hopkins, having taken courses with D. C. Allen, Don Howard, and Earl Wasserman in English and a handful of courses with Dick Macksey in the Humanities Center. This was three years after the infamous 1966 structuralism conference and things were hoppin’. The course was the Victorian novel. I specifically remember reading Trollope (The Last Chronicle of Barset) and Dickens (Bleak House). I also remember the graduate students asking him about going to the MLA convention. Miller offered some dismissive remarks about the intellectual proceedings but suggested the “meat market” (that is, “meet market”) aspect had some value (for them). Given that Hopkins was in the vanguard of work in critical theory Miller’s attitude was natural.

A decade-and-a-half later, 1986, Miller was giving the annual Presidential Address at the MLA convention: “The Triumph of Theory, the Resistance to Reading, and the Question of the Material Base.” Over the course of a decade and a half Miller had gone from the outskirts of the profession to the apex, from being an outsider storming the ramparts to commanding the heights and noting that the revolution, alas, seems to have disintegrated.

His theme was the eclipse of deconstruction in favor of a turn “toward history, culture, society, politics, institutions, class and gender conditions, the social context.” I don’t recall whether or not I attended the MLA convention that year – I would have been job hunting – but I didn’t hear the address. I did read it, though, when it was published in PMLA.

By that time I was effectively out of the profession. I had been unable to make it through/over structuralism to deconstruction. Back in the early 1970s that was of little significance. Structuralism didn’t know that it was dead and Prof. Miller was happy to write a letter of recommendation for me based on my master’s thesis, which was a structuralist analysis of “Kubla Khan”. That letter, and a couple of others (certainly one from Dick Macksey), took me to the English Department at SUNY Buffalo. There I went over the river and through the woods to the Linguistics Department where I went from structuralism to computational linguistics under the tutelage of David Hays.

That was, and is, an intellectually plausible move. But professionally, my goose was cooked, though it took a few years for me to figure that out. The intellectual openness that had characterized the 1960s and 1970s was gone and the profession was shrink-wrapping itself around “history, culture, society, politics, institutions, class and gender conditions, the social context,” to repeat Miller’s phrase.

I was a bit surprised to hear him assert that deconstruction was waning; (faux?) deconstruction seemed to be all over the place. If it was on the way out, though, that was fine by me. I wrote a letter to PMLA in which I offered a generational-succession account of deconstruction’s demise. That letter is reproduced below.

The idea is simple. Critics of Miller’s generation had to work to earn their rebellion from mid-century critical routines. Subsequent generations simply learned rebellion from that initial generation, though they may have donned different hats or masks to assert their difference/différance. But the thrill was gone, hence deconstruction’s demise.

And so it goes. Miller went his way and I went mine. He’s retired and I’m still scrounging about in the hinterlands and conducting the occasional guerilla raid on the folk in that gritty city on the hill.

Back in 2015 I decided to address an open letter to Miller. I’d done this several times before (and since). While some people have replied to such open letters (Steven Pinker, Willard McCarty) and that is certainly welcome, it isn’t necessary. That isn’t why I undertake the exercise, which is fundamentally a rhetorical device for thinking things through in a fairly specific way.

The open letter affords me a particular audience. It’s much easier to address an audience with known interests than to address the General and Undefined Other. While I don’t know Miller’s criticism in any detail, I have read several recent pieces and a couple of interviews where he talks about the profession in general historical terms. I have a sense of the intellectual milieu. That, plus resonance from ancient days at Hopkins, was enough for me. Addressing this letter to Miller was a way for me to think about my work in relation to (what I perceived to be some of) his interests/that milieu.

I open the letter, Paths Not Taken (reproduced below), by recalling my structuralist account of “Kubla Khan”, and observing that the profession had gone one way, while I had gone another. Then I reprise my recent work on Heart of Darkness, a text Miller knows well (and had written about more than once), which was in some way reminiscent of my work on “Kubla Khan” over 40 years ago. From there I move to Apocalypse Now, which, after all, had been based on Heart of Darkness. That’s not the only reason I chose that work. Miller has several times mentioned that he thinks the profession has to address itself to other media, that the literary no longer has the status it had in the mid twentieth century. Thus it made sense for me to write about a film, one with literary resonance. From there I wander back into classic structuralist territory, Lévi-Strauss and Mary Douglas, and come out into computational criticism. I look at Matt Jockers’s Macroanalysis, charts and all (after all, did not Lévi-Strauss use charts?) and manage to close on Edward Said (believe it or not).

Toto (er, Prof. Miller), we’re not in Kansas (er, Baltimore/New Haven/Irvine) anymore.

But that’s where the document ends, figuratively speaking.

As I noted up top I took many courses with Dick Macksey when I was at Hopkins. For each author we studied he’d hand out a chronology of important events. I conclude this working paper with a brief chronology showing the parallel course of the cognitive sciences and literary theory. The chronology begins in the 1950s – Chomsky, Frye, and Sputnik in 1957 – and ends at the turn of the millennium. Miller’s Presidential address to the MLA runs parallel with a panel discussion, “The Dark Ages of AI”, held at the 1984 meeting of the American Association for Artificial Intelligence.

Am I blue? Not when I'm orange!

When I learned that today's theme was going to be "selfie" I thought, WTF! I don't do selfies. And I don't. But since I decided to join this bit of fun I figured I should play along.

And so I did.

Now, there are two reasons I don't do selfies. The first reason is general principle. The second reason is that I can't, not really. That is, these days selfies are executed with smart phones. And while I've got a smart phone, I've got bare bones service, no data plan. Which means that, while I can take photos, and thus do selfies, there's no way to get them out of the phone, hence no way to use them in this game.

What to do?

Old school, that's what. 

People were doing selfies long before smart phones. My friend Al did them all the time. So I got out my trusty DMC-ZS7 and snapped away. Since I'm shooting myself I can't really see what I'm doing, so there's no point in fiddling with composition. Just shoot blind. And since this is an exercise, why not engage the old Verfremdungseffekt with a little orange and blue? And that's what I did.








That's a bit intense, no?


Ahhhh....


Friday, September 27, 2019

Billionaires: What they want, what they're up to


But, Krugman points out, Elizabeth Warren doesn't buy it:

He'd linked to this article at the top of the thread: Benjamin I Page, Jason Seawright, Matthew J Lacombe, What billionaires want: the secret influence of America’s 100 richest, The Guardian, Oct 31, 2018.
The very top titans – Warren Buffett, Jeff Bezos, Bill Gates – have all taken left-of-center stands on various issues, and Buffett and Gates are paragons of philanthropy. The former New York mayor Michael Bloomberg is known for his advocacy of gun control, gay rights, and environmental protection. George Soros (protector of human rights around the world) and Tom Steyer (focused on young people and environmental issues) have been major donors to the Democrats. In recent years, investigative journalists have also brought to public attention Charles and David Koch, mega-donors to ultra-conservative causes. But given the great prominence of several left-of-center billionaires, this may merely seem to right the balance, filling out a picture of a sort of Madisonian pluralism among billionaires.

Unfortunately, this picture is misleading. Our new, systematic study of the 100 wealthiest Americans indicates that Buffett, Gates, Bloomberg et al are not at all typical. Most of the wealthiest US billionaires – who are much less visible and less reported on – more closely resemble Charles Koch. They are extremely conservative on economic issues. Obsessed with cutting taxes, especially estate taxes – which apply only to the wealthiest Americans. Opposed to government regulation of the environment or big banks. Unenthusiastic about government programs to help with jobs, incomes, healthcare, or retirement pensions – programs supported by large majorities of Americans. Tempted to cut deficits and shrink government by cutting or privatizing guaranteed social security benefits.
And they operate in secret:
Both as individuals and as contributors to Koch-type consortia, most US billionaires have given large amounts of money – and many have engaged in intense activity – to advance unpopular, inequality-exacerbating, highly conservative economic policies. But they have done so very quietly, saying little or nothing in public about what they are doing or why. They have avoided political accountability.

Friday Fotos: Variations in light and dark on 'natural' subjects





Thursday, September 26, 2019

Ted Cloak: The Wheel and Cultural Evolution

Bumping this to the top of the queue as a reminder, Cloak was there first. And that 1968 study of the spoked wheel is very interesting. I wish there were more work like it.
Before there was Dawkins, there was Ted Cloak. His 1968 study (PDF) of the cultural evolution of the spoked wooden wheel and documentation of the manufacture of same (with photos) is a classic document, one not widely enough known. Here's a website he's put up setting out his ideas. Pay particular attention to his use of the work of William Powers (it includes a number of useful videos of Powers's model).

* * * * *

Addendum: Note the following passage from David G. Hays, The Evolution of Technology, Chapter 5, “Politics, Cognition, and Personality”:
In reading to prepare to write this book, I have learned that the wheel was used for ritual over many years before it was put to use in war and, still later, work.

I took a day off from blogging: More on regulating the mind [in the zone]

Actually it’s two: no blog posts on Tuesday the 24th and Sunday the 22nd. If I were in a down phase, as I usually am in the Spring, that would be normal. But at the moment I’m in an up phase, 98 posts so far this month, 96 for August, 98 for July, 77 for June, and only 25 for May (down phase). What’s up?

Well, Sunday afternoon I went to Liberty State Park and took a bunch of photos. What did I do in the morning? Can’t recall. Nor do I remember what I did when I came back from the photo shoot, though no doubt I did spend some time examining and rendering the photos. But the rest of the day? Probably read a bit, watched some online video. No matter.

But it’s Tuesday that interests me.

A photo safari

Whatever else I do on Tuesdays, the morning is always punctuated by the need to move my car to accommodate street cleaning. At about 9:45 AM or so I walk to my car and then drive it around the block so I’m in position to park once the street cleaner has gone through. This is VERY important. I live in Hoboken, NJ, a small city across the Hudson River from New York. On-street parking is very difficult. It’s not unusual to drive around for 15 or 20 minutes to find a parking space a quarter of a mile or more from my building.

But Tuesday the 24th was different. I decided to return to Liberty State Park for more photos. Fact is, I’d somehow neglected to take my DSLR with me on Sunday, though I’d gotten it out, and only had my point-and-shoot. I wanted photos with the good camera. I decided to deal with the parking problem by going out early enough so that I could take my photos and be back in time to slip into a parking spot after the street cleaner passed by. Plus, this would get me in the park early enough to get some sunrise shots.

Since I routinely get up early that was no problem. I was in my car by 6:10 AM or so and arrived at the park at about 6:30, just a few minutes after the sun made it over the buildings in Brooklyn on the east side of the Hudson River. I got my sunrise shots, and then some.

I spent the next two hours walking through the park and taking photos. Lots of photos, leisurely walk. After about an hour my right hand began to hurt a bit, especially my index finger, which I use to trip the shutter. That had never happened to me before, though I’d been on many two-hour photo walks. Had I taken THAT many photos? When I got back and downloaded the photos to my computer I found that I’d taken 471 photos. I’d done 200, 250 shots, maybe more, on other safaris, but this seemed like the most I’d taken in one shoot. But I’m getting ahead of myself.

I finished shooting at about 8:30, too early to head back to Hoboken. So I drove to my old neighborhood, Bergen-Lafayette, in Jersey City – the oldest neighborhood in the city. I walked around a bit and then dropped in on June Jones at the Morris Canal CDC (Community Development Corporation). I’d worked with her quite a bit when I was living there and we had a few things to catch up on.

I left June at about 9:45, which meant I’d be cutting it close on the parking spot. It was no more than five miles away, but this is city driving. But I made it.

I went back to the apartment, called up my buddy Greg, and then downloaded my photos to the computer. I started going through them and rendered a generous handful, say a dozen to 20, before I knocked off to... To do what? I don’t recall exactly. I took a nap at some point, and another one a bit later. Took a run to the grocery store to get some stuff. Rendered some more photos. Posted some photos online to Facebook. Had dinner at some point. Did a bit of reading. Videos.

But no blogging, none. It was a lazy kind of day. I was wiped out.

Why?

In the mood

I don’t really know.

Let me speculate. It WAS the photo shoot. Though I’ve done many early morning shoots before, not so many this year. But I think it was mostly the intensity.

It didn’t feel all that intense at the time, but still, hear me out.

Some years ago when I was living in Troy, NY, I went on a video shoot with a local news team. I noticed that a half-hour or an hour into the shoot the videographer was sweating. The shoulder-mounted camera must have weighed 30 or 40 pounds, but it wasn’t that weight that was forcing the sweat. I asked his producer, Steve Rosenbaum, about it and he agreed that, no, it wasn’t the weight. It was the perceptual and emotional intensity involved.

I think it was something like that for me. When I’m on one of these shoots I’m absorbed in the process of taking photos. That morning I was particularly absorbed. I took a lot of shots into the sun, which means I spent time looking into intense light. But more than that, it was the particular kind of photos I was taking.

I like to play around with the sun by using intervening objects to block and deflect the light. That requires taking a bunch of shots in close succession while moving the camera ever so slightly to control just how much the sun is occluded. This is the sort of thing I’m talking about:


In that shot I was aiming directly into the sun. In this next one I kept the sun just off the frame above my focal point:


In this one I’m shooting at those leaves of grass in the center while keeping the sun to the upper right:


Nor do you have to use nearby objects in this way. I went out this morning, walked two blocks to the river, and took a lot of shots of the sun rising over Midtown Manhattan. Here I’ve put the sun behind the spire atop the Empire State Building:


And, yes, I didn’t do anything to the sun. Rather I moved to a position where the relationship between the sun and the building pleased me. But that's how I (almost have to) think about it, moving myself as a way of positioning objects I'm photographing.

That takes a lot of concentration and more than a little motor control. And we know that the brain is an energy hog.

That’s part of the story of why I was wiped out after Tuesday’s photo shoot. But only part of the story. The energy part.

Inequality plagues AI research

Computer scientists say A.I. research is becoming increasingly expensive, requiring complex calculations done by giant data centers, leaving fewer people with easy access to the computing firepower necessary to develop the technology behind futuristic products like self-driving cars or digital assistants that can see, talk and reason.

The danger, they say, is that pioneering artificial intelligence research will be a field of haves and have-nots. And the haves will be mainly a few big tech companies like Google, Microsoft, Amazon and Facebook, which each spend billions a year building out their data centers.

In the have-not camp, they warn, will be university labs, which have traditionally been a wellspring of innovations that eventually power new products and services.

“The huge computing resources these companies have pose a threat — the universities cannot compete,” said Craig Knoblock, executive director of the Information Sciences Institute, a research lab at the University of Southern California. [...] the scientists are worried about a barrier to exploring the technological future, when that requires staggering amounts of computing. [...]

A recent report from the Allen Institute for Artificial Intelligence observed that the volume of calculations needed to be a leader in A.I. tasks like language understanding, game playing and common-sense reasoning has soared an estimated 300,000 times in the last six years.

All that computing fuel is needed to turbocharge so-called deep-learning software models, whose performance improves with more calculations and more data. Deep learning has been the primary driver of A.I. breakthroughs in recent years.
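The Allen Institute figure quoted above implies a remarkably short doubling time. A quick back-of-the-envelope check (my own arithmetic, assuming steady exponential growth over the stated six years, not a figure from the report):

```python
import math

factor = 300_000          # reported growth in compute over the period
years = 6

# Number of doublings needed to reach a 300,000x increase
doublings = math.log2(factor)                 # ~18.2 doublings
months_per_doubling = years * 12 / doublings  # ~4 months per doubling

print(round(doublings, 1), round(months_per_doubling, 1))
```

That is, compute for leading A.I. results would be doubling roughly every four months, far faster than Moore's Law ever managed.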
Power hogs:
Academics are also raising concerns about the power consumed by advanced A.I. software. Training a large, deep-learning model can generate the same carbon footprint as the lifetime of five American cars, including gas, three computer scientists at the University of Massachusetts, Amherst, estimated in a recent research paper. (The big tech companies say they buy as much renewable energy as they can, reducing the environmental impact of their data centers.)
What are the appropriate metrics?
The field’s single-minded focus on accuracy, they say, skews research along too narrow a path.

Efficiency should also be considered. They suggest that researchers report the “computational price tag” for achieving a result in a project as well.

Henry Kautz, a professor of computer science at the University of Rochester, noted that accuracy is “really only one dimension we care about in theory and in practice.” Others, he said, include how much energy is used, how much data is required and how much skilled human effort is needed for A.I. technology to work.

A more multidimensional view, Mr. Kautz added, could help level the playing field between academic researchers and computer scientists at the big tech companies, if research projects relied less on raw computing firepower.
Keep in mind what the human brain accomplishes without anything remotely approaching those power requirements.

Neil deGrasse Tyson, the possibility of permanent settlements on Mars



On February 11, 2018, astrophysicist Neil deGrasse Tyson delivered a presentation at the World Government Summit held annually in Dubai, UAE. His presentation was entitled, "The Future of Colonizing Space." The session included a discussion about the recent first launch of the SpaceX Falcon Heavy rocket from Kennedy Space Center, Florida.

He opened by observing that predicting the future is hard and made his point by juxtaposing past predictions (from the late 19th and early 20th centuries, but also from the 1950s or so) with what has actually happened. From about 15:50 to 19:45 he recounts that, in researching an article on exploration, he found "three drivers that enable a nation to commit large sums of money" – the examples are his:
  1. War/Defense, e.g. Apollo, Manhattan Project, Great Wall of China, Interstate Highway System...
  2. Praise of deity or royalty, e.g. Pyramids, Versailles, Cathedrals...
  3. Promise of economic return, e.g. Columbus, Magellan, Lewis & Clark...
On 1, he noted that regardless of the rhetoric that accompanied the Apollo program, it was actually motivated by fear that the Russians would beat America in space. The Manhattan Project and the Great Wall of China are obvious. And it was Germany's Autobahn that inspired Eisenhower to create the Interstate Highway System; he was impressed with its utility for military transport.

He goes on to assert that 2 is out of the question in the current world, and explores 1 and 3 as drivers for space exploration. Thus at about 49:58 he throws up a slide asserting that the first trillionaires will be from these businesses:
  • Mining Asteroids
  • Deflecting Asteroids
  • Harvesting Water from Comets
  • Vacation: Moon, Mars, & Beyond
That's all well and good.

But I've got my doubts about his dismissal of 2, "Praise of deity or royalty." That's basically about the sacred. And the sacred is still alive within us in various ways. All this tech Singularity nonsense is about the sacred, albeit in a somewhat confused way. And much of the inspiring rhetoric cloaking the Apollo was sacred in character. Kubrick's 2001: A Space Odyssey was about the sacred as, I believe, was Chazelle's First Man. Who knows where we'll be when we've come through the current rank shift and reconsider our place in the universe. Perhaps we'll evolve a sense of the sacred that will take us deeper into space, even to Mars.

One never knows.

African AI

Deep Learning Indaba has become connective tissue for the African A.I. community — not only the space for the community to meet, but a part of the community itself. The conference forges relationships between researchers on the continent with a clear agenda: to build a vibrant, pan-African tech community — not through reinventing existing technologies, but by creating solutions tailored to the challenges facing the region: sprawling traffic, insurance claim payments, and drought patterns.

Google, Microsoft, Amazon, and other tech companies underwrite Indaba to a ballpark figure of $300,000, but organizers are still adamant about creating a new, distinct field of research — an industry free from the grip of Silicon Valley.

As Vukosi Marivate, an Indaba organizer and chair of data science at the University of Pretoria in South Africa, told me, “We need to find a way to build African machine learning in our image.”
Foreign investment:
It’s part of a trend in Silicon Valley firms making significant investments across the continent. Google sponsors organizations like Data Science Africa and the African Institute of Mathematical Sciences. In 2018, the company announced its first African research center in Accra, Ghana. Meanwhile, Microsoft and the Bill Gates Foundation have donated nearly $100,000 to Data Science Nigeria, which hopes to train one million Nigerian engineers in the next 10 years.

The appeal of investing in Africa is obvious. 75% of the continent still has no internet access. That’s a challenge for local populations, but also an investment opportunity for international tech firms. Establishing a presence in Africa now means building valuable relationships with users from the very start of their digital lives. A recent investor report indicates Facebook stands to make $2.13 for every user per year in the developing world — up from $.90 in 2015.

Chinese firms have spent years investing heavily in Africa’s tech infrastructure. Huawei installed surveillance cameras around Nairobi on behalf of the Kenyan government, and the company is currently working on large-scale facial recognition surveillance systems around Zimbabwe.

All this foreign investment and data collection raises red flags on a continent scarred by exploitation.
Home grown:
This year’s Indaba conference presented the Maathai Impact award to Bayo Adekanmbi, chief transformation officer of South African telecom MTN, and the founder of Data Science Nigeria (DSN), an organization that has trained tens of thousands of Nigerians to bolster the country’s IT sector. [...]

Adekanmbi wants to train a million Nigerian data scientists in the next 10 years, and as such, he takes the threat of a brain drain very seriously. “It’s a big concern. Talent always moves to the area of highest concentration,” he says. “If we do not build another community here, talent will continue to disperse.” 
One way of keeping talent at home is remote work. Adekanmbi started a program called Data Scientists on Demand, which facilitates engineers in Nigeria to work remotely for companies around the world.

Wednesday, September 25, 2019

Wild flowers from yesterday's shoot


It’s that time of year again. [Yawn] The MacArthur Fellows have been announced for 2019.


And no doubt they are a worthy bunch. But the foundation keeps doing it, by which I mean they award the majority of their Big Mac fellowships to people who already have secure jobs at good places, mostly colleges and universities. Some of these are the kinds of places that prompt rich people to hire consultants who then fake test results and/or bribe officials on behalf of their students.

I started keeping track of this in 2013, when 63% of the fellowships went to fellows with secure gigs. This year it’s 65% (17 out of 26), up from 52% last year. Here’s how it’s gone since 2013:
2013: 63%
2014: 52%
2015: 54%
2016: 57%
2017: 50%
2018: 52%
2019: 65%
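For the record, this year's figure is simple arithmetic on the counts given in this post (a sketch, nothing more):

```python
secure = 17   # 2019 fellows with secure academic posts (per this post)
total = 26    # total awards given in 2019

share = round(100 * secure / total)
print(share)  # → 65
```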
What, you ask, is wrong with that? Well, the original intent was to fund very creative people whose very creativity made it difficult or impossible for them to get such jobs, people who had to wait tables, do temp office work, and who knows, maybe work in an Amazon warehouse schlepping orders. The idea was to fund people who really need the money in order to exercise their creativity.

These aren’t those people. They have secure gigs. My suggestion was and remains simple: Don’t fund people who don’t need the money (in order to survive). Stop taking the easy way out by funding people with university gigs. Take more risks.

In previous years I’ve gone to the trouble of making a table of the winners, listing their field and their place of employment (for most of them, though a few don’t have regular gigs), and classifying them into secure university posts (guaranteed employment), pre-tenure university posts (not yet guaranteed), other-employed, and self-employed. This year I’ve not done that. It’s tiring and pointless.

This year the foundation gave out 26 awards, of which 17 went to people at universities, colleges, or research institutes. I’ve not bothered to check for academic rank.

Here’s the list:
Bard College (2)
Boston University School of Law
Cold Spring Harbor Laboratory
Columbia University
Duke University
Harvard University
Massachusetts Institute of Technology
Montclair State University
The Rockefeller University
University of California
University of California-Berkeley
University of Massachusetts-Amherst
University of Michigan
University of Pennsylvania
University of Wisconsin-Madison (2)
If you want to read my arguments, you can check out my working paper, which includes year-by-year commentary along with more general commentary on the program and on talent search:

The Genius Chronicles: Going Boldly Where None Have Gone Before? Version 6, Working Paper, October 2018, 61 pp.

Monday, September 23, 2019

Into the jungle: Yesterday's walk in Liberty State Park



Should there be a limit on how much income a wealthy person can hold?

Ingrid Robeyns asks that question at Crooked Timber, The most blasphemous idea in contemporary discourse?, Sept. 21, 2019:
I have no idea how he found it, but George Monbiot read an (open access) academic article that I wrote, with the title “What, if Anything, is Wrong with Extreme Wealth?” In this paper I outline some arguments for the view that there should be an upper limit to how much income and wealth a person can hold, which I called (economic) limitarianism. Monbiot endorses limitarianism, saying that it is inevitable if we want to safeguard life on Earth.

As Monbiot’s piece rightly points out, there are many reasons to believe that there should be a cap on how much money we can have. Having too much money is statistically highly likely to lead to taking much more than one’s fair share from the atmosphere’s greenhouse gasses absorbing capacity and other ecological commons; it is a threat to genuine democracy; it is harmful to the psychological wellbeing of the children of the rich, and to the capacity of the rich to act autonomously when it concerns moral questions (which includes the reduced capacity for empathy of the rich); and, as I’ve argued in a short Dutch book on the topic that I published earlier this year, extreme wealth is hardly ever (if ever at all) deserved. And if those reasons weren’t enough, one can still add the line of Peter Singer and the effective altruists that excess money would have much greater moral and prudential value if it were spent on genuine needs, rather than on frivolous wants.

Monbiot wrote: “This call for a levelling down is perhaps the most blasphemous idea in contemporary discourse.”

I agree that mainstream capitalist societies operate on the assumption that the sky is the limit. But it is important to point out that the idea that there should be a cap on how much we can have, is not at all new. Historically, thinkers from many corners of the world and writing in very different times, have either given reasons why no-one should become excessively rich, or have proposed economic institutions that would have as an effect that no-one would become superrich (I suppose Marx would be in that latter category). Matthias Kramm and I have joint research on this that I’ll happily post on this blog once it is published. But to give a flavour of the range of support for the view that there should be upper limits, here are three very different sources. (I’ll leave out any comments on Socrates and Plato, since John and Belle are the obvious experts on those thinkers).

And so forth.

I posted the following comment:

A book that has influenced my thinking quite a bit is Christopher Boehm's Hierarchy in the Forest: The Evolution of Egalitarian Behavior (1999). Boehm is interested in accounting for the apparent egalitarian behavior of hunter-gatherer bands, the most basic form of human social organization. While individuals can assume a leadership role for specific occasions, e.g. a hunt, there are no permanent leaders in such bands. Boehm does not argue that such bands are egalitarian utopias; on the contrary, primitive egalitarianism is uneasy and fraught with tension. But it is real. Boehm finds this puzzling because, in all likelihood, our immediate primate ancestors had well-developed status hierarchies. Boehm ends up adopting the notion that the hierarchical behavioral patterns of our primate heritage are overlain, but not eradicated or replaced, by a more recent egalitarian social regime. Other than suggesting that this more recent regime is genetic, Boehm has little to say about it.

What I like about this is the idea that our social behavior is mediated by (at least) two behavioral systems, which are organized on very different principles: hierarchy and dominance vs. equality and anarchy (in the sense of self-organizing without orders from above). So let's accept that as a premise. That is in our 'nature'. I'm also going to postulate that our 'nature' has no way of giving priority to one of these systems. Rather, that is something done by 'culture' according to local social circumstances.

In this view, one of the things we're working out over the course of history, then, is the relationship between these two systems. The (phylogenetically older) hierarchical system is perfectly happy with extreme wealth because the resulting inequality is consistent with it. But the (phylogenetically newer) system doesn't like it at all. I don't see any inherently 'right' way to resolve this interaction, but I note that neither system is going to disappear. Both 'make demands' on our behavior.

So, it's all well and good for the economists to tell us that a rising tide floats all boats. But there's going to be a point where the peasants in the little rafts and zodiacs are going to be angry with the plutocrats and oligarchs in their megayachts sailing around the sea like they own it.

* * * * *

We can see this two-systems dynamic on display in Shakespeare. Consider Much Ado About Nothing. We've got two couples. Claudio and Hero interact through the hierarchical system. How does Claudio pursue Hero? Without speaking to Hero at all, he approaches his military commander to broach the matter with her father. Her father accepts on her behalf, all without conferring with her. Beatrice and Benedick, on the other hand, confront one another as equals, and one of the joys of this play is their wit combats. While both are aristocrats (as are all the principals in Shakespeare's plays), neither is rigidly fixed in the aristocracy. And so the play moves back and forth between the stories of these two couples. Of course, the play has a happy ending; both couples are to be married. But that ending has required the interaction of both of these plot lines.

Speech as computation [Trump's speaking]

If I might indulge a current hobby horse, I've been playing with the idea that language is the simplest thing humans do that requires a computational account. From this premise it follows, for example, that however the minds/brains of chimpanzees, dogs, bees, ants, or C. elegans work, it's not through computation. Something else is going on, complex dynamics, for example. OK.

I'm thinking that all these bumps, hesitations, fillers, whatever, of conversation betray the inner workings of these mechanisms. We've got, say, a dynamical system implementing a computational process, speech. And it doesn't always go smoothly. The right word or phrase isn't always available; it's not like they're all queued up just waiting to be entered into the speech stream. So the system has to hunt around looking for them. That is, we're listening to and making sense of our own speech via the auditory system even as the motor system is placing words into the speech stream.

Now, when we write, we can clean things up so it appears perfect. The language computer can parse those sentences readily (that is, map words and phrases onto semantic structures) and it all makes sense. But we all know that writing can often be quite difficult. We have to do quite a bit of reworking to produce computationally fluid prose.
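To make the fishing analogy a bit more concrete, here's a toy sketch of my own (purely illustrative; the lexicon, the lookup probability, and the filler mechanism are all invented for the example, not anyone's actual model of speech): word retrieval is a stochastic lookup, and when the right word isn't yet available the system emits a filler while it hunts.

```python
import random

# Toy lexicon: signifier -> signified, the basic "atoms" of the computation
LEXICON = {"the": "DET", "cat": "FELINE", "sat": "SIT.PAST", "mat": "RUG"}

def speak(intended, lookup_delay=0.5):
    """Emit a word stream for the intended utterance. With probability
    lookup_delay, retrieval of the next word 'stalls' and a filler is
    emitted instead -- a crude stand-in for disfluency."""
    stream = []
    for word in intended:
        assert word in LEXICON, "speaker can only retrieve known words"
        while random.random() < lookup_delay:  # word not yet available
            stream.append("uh")                # filler buys hunting time
        stream.append(word)                    # retrieval succeeds
    return stream

random.seed(0)
print(speak(["the", "cat", "sat"]))
```

Strip out the fillers and the intended utterance is recovered intact; the disfluencies are noise from the retrieval machinery, not part of the message. That, in miniature, is the point: the bumps betray the mechanism.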