Friday, November 30, 2018

A short note on cognitive ranks

This is from section 2.1., “The Four Ages” of Chapter 2, “Ranks, Revolutions, and Paidias,” of David Hays, The Evolution of Technology Through Four Cognitive Ranks (1993). I’d posted the introduction from Chapter 1 on Wednesday. This says a bit more about how we arrived at the concept and the terminology. I continue the convention of setting Hays' text in Courier (as a reminder of its origins in a mono-spaced electronic format).

* * * * *

In Section 1.1.4, I told you how William L. Benzon, then a graduate student at SUNY Buffalo, generalized Walter Wiora's concept of four ages of music into a scheme for culture history. Informatics, I said, went through ages of
Rank 1. Speech
Rank 2. Writing
Rank 3. Calculation
Rank 4. Computation
And perhaps I should tell you how we got that list. The 1st, 2nd, and 4th entries were given to me by Charles F. Hockett, a linguist and generalist, before I met Benzon. He called computation the third information-processing revolution. Since I already felt that computation was a revolutionary art, I liked his placing it in this grand context. When we [that is, Hays and Benzon] began talking in detail about the four ranks, we saw clearly that Hockett had missed something (if the theory of informatic rank is to be taken seriously, it may not be permitted such a large gap). But what had Hockett missed?

It occurred to me that he had missed the Renaissance, when life and art changed enormously and science began. Students in our seminar suggested the printing press. Well, it is certainly true that the availability of written material to many people made a fundamental change in all of life, but that is really a matter of diffusing the kind of thinking that goes with writing (rank 2) to a large part of the population. The printing press does not lead to a new kind of thinking. So we poked around. I got out James R. Newman's anthology _The World of Mathematics_ and found that arithmetic came to the western world at roughly the right time:

"by the end of the thirteenth century the Arabic arithmetic had been fairly introduced into Europe ..." (p. 18, from a selection called "The Nature of Mathematics" by Philip E. B. Jourdain)

Before that time, arithmetic might be done with counters on a board, with tricky methods by specialists, or by creative insight (no kidding). What Europe got from the Arabs, who had it from India, was a routine way of dealing with numerical problems – calculation. Science needs calculation as you need your bones. So we had a list with four entries, as required.

For a fresh example, and one from technology, take the task of moving a load on land:
Rank 1. Human carriers
Rank 2. Animals carrying packs or pulling carts
Rank 3. Steam railways
Rank 4. Jet aircraft
The wheel has been found by archeologists in Mesopotamia at sites dated between 3500 and 3000 BC, during the long, slow development of writing, but carts frightened both people and horses in rural England around 1700. The steam locomotive developed early in the nineteenth century. Jet aircraft appeared late in World War II and were made safe for commercial use some years after the war ended; without computers, both the design and the regular operation of jets would be inordinately difficult.
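Hays's earlier point about calculation is worth making concrete: what Europe got from the Arabs, and they from India, was positional notation, which reduces arithmetic to a mechanical, digit-by-digit routine. Here is a minimal Python sketch of that routine for addition; the sketch is my own illustration, not anything from Hays's text:

```python
def positional_add(a, b, base=10):
    """Digit-by-digit addition with carries -- the routine, teachable
    procedure that Indo-Arabic positional numerals made possible."""
    digits_a = [int(d) for d in reversed(str(a))]
    digits_b = [int(d) for d in reversed(str(b))]
    result, carry = [], 0
    for i in range(max(len(digits_a), len(digits_b))):
        da = digits_a[i] if i < len(digits_a) else 0
        db = digits_b[i] if i < len(digits_b) else 0
        carry, digit = divmod(da + db + carry, base)
        result.append(digit)
    if carry:
        result.append(carry)
    return int("".join(str(d) for d in reversed(result)))

print(positional_add(478, 694))  # 1172
```

The same carry discipline extends to multiplication and long division. The point is Hays's point: no counters, no specialist tricks, no creative insight required, only faithful execution of a procedure.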

Friday Fotos: A color study, RGB plus amplifications and inversions of same

* * * * *


Thursday, November 29, 2018

Georgia Tech imagining brick-and-mortar 'storefront' portals to online education

Georgia Institute of Technology is considering creating brick-and-mortar "storefronts" for prospective and current students to sample its course offerings, listen to lectures and network.

The effort is part of Georgia Tech's plans to make its online degrees and professional education certificates more appealing to the nontraditional students of tomorrow, who the institution predicts will expect "flexible learning experiences."

“We know that students are happy with the online delivery, but we have found that they still have the desire, and in many cases the need, to connect physically with us,” said Rafael Bras, the university's provost.

Georgia Tech administrators published an ambitious report earlier this year exploring how the university might evolve to meet the needs of an increasingly diverse student population in the next 20 years. The report included a proposal to build a “distributed worldwide presence” through the creation of spaces called Georgia Tech atriums.

Bras, who convened the commission that created the atriums concept, imagines the spaces being located in co-working spaces, corporate offices or even shopping malls. Georgia Tech has already trademarked the atrium name, though the plans remain conceptual for now.

Rock Camp for Girls


What happens when whites become a minority in the USA?

According to the NYTimes, the US Census Bureau projects that this will happen in 2044 (but maybe it's 2042 or 2050).
In a nation preoccupied by race, the moment when white Americans will make up less than half the country’s population has become an object of fascination.

For white nationalists, it signifies a kind of doomsday clock counting down to the end of racial and cultural dominance. For progressives who seek an end to Republican power, the year points to inevitable political triumph, when they imagine voters of color will rise up and hand victories to the Democratic Party.

But many academics have grown increasingly uneasy with the public fixation. They point to recent research demonstrating the data’s power to shape perceptions. Some are questioning the assumptions the Census Bureau is making about race, and whether projecting the American population even makes sense at a time of rapid demographic change when the categories themselves seem to be shifting.

Jennifer Richeson, a social psychologist at Yale University, spotted the risk immediately. As an analyst of group behavior, she knew that group size was a marker of dominance and that a group getting smaller could feel threatened. At first she thought the topic of a declining white majority was too obvious to study.

But she did, together with a colleague, Maureen Craig, a social psychologist at New York University, and they have been talking about the results ever since. Their findings, first published in 2014, showed that white Americans who were randomly assigned to read about the racial shift were more likely to report negative feelings toward racial minorities than those who were not. They were also more likely to support restrictive immigration policies and to say that whites would likely lose status and face discrimination in the future.
What to do?
Beyond concerns about the data’s repercussions, some researchers are also questioning whether the Census Bureau’s projections provide a true picture. At issue, they say, is whom the government counts as white.

In the Census Bureau’s projections, people of mixed race or ethnicity have been counted mostly as minority, demographers say. This has had the effect of understating the size of the white population, they say, because many Americans with one white parent may identify as white or partly white. On their census forms, Americans can choose more than one race and whether they are of Hispanic origin.

Among Asians and Hispanics, more than a quarter marry outside their race, according to the Pew Research Center. For American-born Asians, the share is nearly double that. It means that mixed-race people may be a small group now — around 7 percent of the population, according to Pew — but will steadily grow. Are those children white? Are they minority? Are they both? What about the grandchildren?

“The question really for us as a society is there are all these people who look white, act white, marry white and live white, so what does white even mean anymore?” Dr. Waters said. “We are in a really interesting time, an indeterminate time, when we are not policing the boundary very strongly.”
But you know...
Race is difficult to count because, unlike income or employment, it is a social category that shifts with changes in culture, immigration, and ideas about genetics. So who counts as white has changed over time. In the 1910s and 1920s, the last time immigrants were such a large share of the American population, there were furious arguments over how to categorize newcomers from Europe.
Hmmm.... Something's rotten in the state of the USA.
To Richard Alba, a sociologist at the City University of New York, the Census Bureau’s projections seemed stuck in an outdated classification system. The bureau assigns a nonwhite label to most people who are reported as having both white and minority ancestry, he said. He likened this to the one-drop rule, a 19th-century system of racial classification in which having even one African ancestor meant you were black.

Wednesday, November 28, 2018

Kim Stanley Robinson on why we're earth-bound

Aslı Kemiksiz and Casper Bruun Jensen interview Kim Stanley Robinson.
KSR: It has not been a program on my part, but just a matter of taking it one story at a time. Maybe it’s revealing that I find these stories interesting. Mars is the main example for me, and that came about because of my interests in wilderness and utopia, science and history, all combining with the revealing of the next planet out by way of the Mariner and Viking missions. Those robotic explorations gave us so much new information about Mars that the first response was a kind of science fiction story from scientists themselves (just as Percival Lowell generated a science fiction story from his data, 70 years earlier)—the new notion that Mars might be amenable to terraforming. I took that as a way to write what was both science fiction and metaphor or allegory, or a kind of modeling by miniaturization or what Jameson called ‘world reduction’—Martian society would be smaller and thus simpler, and it would be very obviously revealed to be necessarily also a place where people were actively engaged in making the biophysical substrate that we need to live. All this was analogous enough to our situation on Earth that I found it the right story to tell at that time.

Aurora (2015) was an attempt to explain why that same process of terraformation and human inhabitation that might work on Mars would not work outside this solar system, for reasons the novel tried to dramatize. The problems of alien life being possibly dangerous, while a dead world would need terraforming without much physical force being available to apply to the project; also, the sheer magnitude of distance from Earth, and the resulting huge gulfs of time needed to get anywhere, or to terraform a dead world once we got there—all these were points that needed to be made in a case I had become convinced of, that we are stuck in our solar system and aren’t going to be leaving it. So that was a completely different kind of project, although obviously it does shine light from a different angle on the difficulties of terraforming even Mars, where now we are not sure if it is alive or dead (when I wrote my Mars novel it was agreed it was dead), nor does it seem to have enough volatiles to make terraformation possible.

Then the moon is different again—too small and volatile-free to be terraformed, and thus just a rock in space, a place for moon bases perhaps, but not for habitation as we usually think of it. Most of the solar system is like the moon in this; Mars is an anomaly. So the full consideration of possibilities leads to the conclusion that there is no Planet B for us, although Mars might be made such over thousands of years, perhaps. But for the most part, these stories have together convinced me that we co-evolved with Earth and are a planetary expression that needs to fit in with the rest of the biosphere here, that we have no other choice about that—and this is an important story for science fiction to tell, given there are so many other kinds of science fiction stories saying otherwise.

Are we there yet?

Cognitive evolution over the longue durée: Ranks, cities, and rankshift

Over the span of roughly 20 years David Hays and I worked on a theory of cultural ranks, the underlying matrix of cognitive systems that have evolved over the longue durée of human history. We produced a handful of articles, jointly and individually, and Dave wrote a book on the history of technology:
Hays, D. G. (1993). The Evolution of Technology Through Four Cognitive Ranks. New York, Metagram Press.
This is the first of a number of passages from that book that I will be posting from time to time.

Dave wrote the book for a history of technology course he taught through The New School under the auspices of Connected Education. He taught the course online, though the technology available at the time was primitive by current standards, basically email. He distributed the book to students on an MS-DOS disk – chosen by the school, I assume, as the least common denominator among personal computer operating systems. Thus it was written as a text file in a simple hypertext system. Nor was it ever published in hardcopy. At the moment it exists only in electronic form in more or less the original format as I have translated it into HTML (use link above).

This first excerpt is the “Introduction” from the first chapter, History, Evolution, and Technology.

Here's an exercise for the reader. In section 1.1.4 below Hays associates each of the four ranks with characteristic cities. I figure we're on the run up to rank 5. What's a characteristic city for the toe of that curve? Singapore? Lagos?

In the interest of getting this up quickly I have made a few changes, mostly eliminating the hyperlinks to bibliography, but you can find that material in the complete version at the links above. I have retained the Courier font as a reminder of the text’s primitive underpinnings, including the ancient convention for indicating _italics_. You will find a number of bibliographic references followed by an asterisk (*); I forget why Dave did that, perhaps simply to make them noticeable, but I have retained them. But for the actual citation you will have to go to the text I've linked above and there...let's just say the limitations of the original medium made a civilized bibliography impossible. It's all there, but it would take hours to rearrange it into a single file and then make proper references.

* * * * *

1.1.1. History. Who, what, where, when ...
1.1.2. W h y ? Toward scientific explanation ...
1.1.3. Evolution. Blind variation, selective retention ...
1.1.4. Four Stages, Called Ranks. Technology linked to ways of thinking
Technology ordinarily evolves by small steps, but when the level of thinking rises technology is reconstructed on a new basis. Qualitative differences can be seen between foragers, agriculturalists, the era of smokestack industry, and today's most advanced technology.

1.1.1. History

History and journalism have a lot in common, for they are both narrative arts. The reporter's checklist (who, what, where, when) is also the historian's checklist. Thomas A. Edison and his crew invented the electric light bulb at Menlo Park, NJ, in 1879. That is history. Many people read about history for pleasure. The bibliographic note ( BIBLNOTE* ) includes some books on the history of technology, and I hope that many readers will read one of them – or a comparable book – for pleasure. To appreciate the _evolution_ of technology, one needs a pretty good idea of who invented what, and when and where – the _history_ of technology.

The history of technology lends itself to several approaches. Until recently, the standard approach was "Gee, whiz!" or "Wow!" Gee-whiz books are still published, often in coffee-table format. These books are worth skimming, spending lots of time on the pictures. Even better, visit a museum of technology if you can. The point is that much technology is about things, and the best way to get a sense of material culture is to handle it, or at least to see it. Look at the parts of things, how they are formed and joined, how they move on each other if they do move, how their surfaces are finished.

Another approach to the history of technology is to deplore its dehumanizing effects. Lewis Mumford might be described, not too unfairly, as having deplored the whole history of technology in his last books, and plenty of authors have told some part of the story in this way (among them Aron* Ellul* and Marcuse* ). And we are more alert now to the dangers of technology: During the summer of 1989 _The New Yorker_ magazine ran a 2-part article on the apparent danger of low-level electrical radiation. _The New Yorker_ also published Rachel Carson's _Silent Spring_ before the book appeared in 1962.

Physics (electromagnetic radiation), chemistry (pesticides), and biology (genetic engineering) all seem, to many around me, to threaten our health and safety. Or the whole history of technology is despicably interwoven with imperialism (for example, Wall* and Headrick* ).

And then there are fairly scholarly books that tell the story in an objective, detached manner. Reading one or more of these books will give you the facts about who, what, when, and where. I am not a historian, nor a journalist.

1.1.2. W h y ?

Why did Edison (and his crew of hundreds, all educated and skillful) invent the incandescent light bulb in 1879, and not in 1779 or 879? Why in Menlo Park, NJ, and not in Aix-en-Provence, France, or in New Delhi, India? Why a light bulb and not a glovewarmer? Why Edison and not Elmer Fudd?

In the nineteenth century, when history and historicism were at their peak, there arose a belief that history has laws: What has happened, is happening, and will happen hereafter is determined by historical principle. Not social, cognitive, emotional, cultural, or political principles, but historical principle. The name that I attach to this belief is Hegel, and the example that comes to mind is Marxist: The inevitable decline of capitalism, withering away of the state, and triumph of communism.

The notion of history that allows it to have a principle of its own is unintelligible to twentieth-century minds. The truth of this kind of proposition must be mystical. Let me put the matter another way: I cannot understand a universe in which history determines itself.

If you ask "Why?" quite seriously, you deserve to be told how your phenomenon fits into a great structure of ideas. Science is by far the greatest structure of ideas in our time. For 200 years at least we have invested heavily in science. By our collective effort we have built libraries full of theories, some of them tested with the utmost care. These theories fit together, sometimes snugly and sometimes crudely. When the question "Why?" can be answered by fitting the phenomenon into the structure of science, we generally believe that we have a good explanation.

Well, then, "Why technology?" Is that question too big to have an answer? Can it conceivably have a scientific answer? That is to say,
Can we give an account of technology that fits into the structure of science?
The point is not to ask whether science helps technology. The point, stated another way, is
Can we give a scientific explanation of the history of technology?
Do you think that the question is foolish? Or hopeless? Or even obscene? You could be right. All I can do at this time is show you how work toward an answer is going.

At the end of this book we will not have a completely satisfying answer, but we will have earned some benefits ...
A sense of the great changes in material culture that have come about in the last 6000 years.

A broad understanding of the inventions and innovations that shaped the modern world – who, what, where, and when – and how inventions breed inventions.

At least an inkling of what it is like to seek an explanation of an intricate phenomenon.

And an approximate answer to the main question, not complete but well worth thinking about because it puts history in a new perspective.
The key idea will be evolution.

1.1.3. Evolution

Two great men of the nineteenth century, Herbert Spencer* and Charles Darwin* (and also Wallace* ), made the idea of evolution known to everyone.

Spencer said that evolution is change toward differentiation of more kinds of parts, and integration of more kinds of parts into individual systems. Nicolis & Prigogine echoed him in our own time.

Darwin said that evolution occurs by survival of the fittest. Two dandelions are born; one absorbs nourishment from the soil more effectively; it has more offspring. If you kill most of the dandelions in the next generation, some of this one's many offspring are more likely to survive.

Do not believe in teleology. This obsolete concept, abhorred by most biologists, supposes that the end is determined before the beginning. In a footrace, the finish line is painted before the runners take their places. They go for it. If you believe in teleological explanation, you may say that the human type is the goal of biological evolution. The race to this goal – evolution toward us – began as soon as life appeared on earth. (Of course, if you adopt teleology with respect to matters outside the range of my text, that is not my concern.)

The advantage of Darwin's tautology, that the fittest survive, is that it permits us to reject teleology. We can believe that all biological change begins with random variation, but the disappearance of the unfit makes for improvement.

The trouble with evolution is that fitness is relative, and not absolute. The more fit an animal is to live in a marsh, the less fit to live in a dry place. If after 50 million years the marsh dries up, the very fit marsh-dwellers may disappear. The Darwinian mechanism makes no predictions; those survive who are fit to live in the here-and-now, and if they are unfit to live in the world of the future, that's just tough.

We should notice, also, that survival is a random process. If we could take all the rabbits on Old MacDonald's farm and rank them from left to right according to fitness, highest on the right, we could not separate them into a right-hand group of survivors and a left-hand group of nonsurvivors. All we could say is this: The further to the right, the better the chance of survival. Even the most fit can die by accident without progeny. Even the least fit can sometimes slip through all of life's troubles.
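Hays's rabbit ranking translates directly into a toy simulation (my own construction, not from his text): give each rabbit a survival probability that rises with its fitness rank, then draw. Some fit rabbits die and some unfit rabbits slip through, exactly as he describes.

```python
import random

def simulate_survival(n_rabbits=20, seed=1):
    # Rank rabbits 1..n by fitness. Survival probability grows with rank,
    # but no rank guarantees survival or death -- only better odds.
    random.seed(seed)
    survivors = []
    for rank in range(1, n_rabbits + 1):
        p = rank / (n_rabbits + 1)  # fitter -> more likely to survive
        if random.random() < p:
            survivors.append(rank)
    return survivors

print(simulate_survival())
```

Run it a few times with different seeds: the survivor list skews rightward (toward high ranks) but is never a clean cut, which is the whole point about selection being a random process.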

David Bowie in Nippon

Tuesday, November 27, 2018

Virahanka-Fibonacci numbers

See, for example: Parmanand Singh, The so-called fibonacci numbers in ancient and medieval India, Historia Mathematica 12(3):229-244, August 1985. DOI: 10.1016/0315-0860(85)90021-7.
Abstract: What are generally referred to as the Fibonacci numbers and the method for their formation were given by Virahāṅka (between A.D. 600 and 800), Gopāla (prior to A.D. 1135) and Hemacandra (c. A.D. 1150), all prior to L. Fibonacci (c. A.D. 1202). Nārāyaṇa Paṇḍita (A.D. 1356) established a relation between his sāmāsika-paṅkti, which contains Fibonacci numbers as a particular case, and “the multinomial coefficients.”
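The formation rule the abstract refers to is the familiar recurrence, which in Virahāṅka's setting answered a question of Sanskrit prosody: how many metrical patterns of short (one-beat) and long (two-beat) syllables fill a line of a given number of beats? A quick Python sketch of both views; the function names and framing are mine:

```python
def virahanka_fibonacci(n):
    """First n terms of the Virahanka-Fibonacci sequence:
    each term is the sum of the two preceding terms."""
    seq = [1, 1]
    while len(seq) < n:
        seq.append(seq[-1] + seq[-2])
    return seq[:n]

def metrical_patterns(beats):
    # Number of ways to fill `beats` with short (1-beat) and long (2-beat)
    # syllables -- the prosodic counting problem the Indian authors solved.
    if beats <= 1:
        return 1
    return metrical_patterns(beats - 1) + metrical_patterns(beats - 2)

print(virahanka_fibonacci(10))  # [1, 1, 2, 3, 5, 8, 13, 21, 34, 55]
```

The two agree because a line of n beats either ends in a short syllable (leaving n−1 beats) or a long one (leaving n−2), which is the recurrence itself.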

Lilies in the morning

Physicist Kip Thorne on the importance of collaboration in science

Sean Carroll interviews Kip Thorne, who got the 2017 Nobel Prize in physics, along with Rainer Weiss and Barry Barish for the detection of gravitational waves.
0:09:21 KT: And I told him in no uncertain terms, the Nobel committee has an obligation to educate the public about the importance of collaborations. There are some kinds of major scientific breakthroughs that can only be done by a big collaboration, and that the process of collaboration is absolutely crucial for success. And you’re not doing a good job of educating the public about that. And he said, “Well, yes,” he said, “I’ve been sensitive to this. There are a number of members of the committee that don’t agree that our principal goal with the Nobel prize is to educate the public about the importance of science, the value of science, what has been done, and three individuals are better icons for science than a big team.” And so that’s how the conversation went. And they’re still struggling with this question of…
H/t 3QD.

The Web of Knowledge: A model of cognition from my 1978 dissertation

I’ve now uploaded Chapter 4 of my dissertation, “The Web of Knowledge” to

I did my doctorate in the English Department at SUNY Buffalo. The department, and hence the doctoral program, was (radically) interdisciplinary. Thus one had the option of presenting a Master’s level of competence in some related discipline in lieu of the standard foreign language competence. The department itself had programs in Literature and Philosophy (mostly, but not entirely, Continental), Literature and Psychology (psychoanalysis), and Literature and Society (Frankfurt school, British style cultural studies).

I chose to present competence in computational psycholinguistics, which I studied in the Linguistics Department under David Hays. Consequently, half of my dissertation was a technical exercise in cognitive science, followed by application of the model to literary study:
William L. Benzon, Cognitive Science and Literary Theory, State University of New York at Buffalo, 1978, 359 pages.
The dissertation had seven chapters:
  1. Introduction: Cognitive Science and the Problem of Man
  2. The Foundations of Cognitive Science
  3. Synchrony, Self, and Awareness
  4. The Web of Knowledge
  5. From Ape to Essence and the Evolution of Tales
  6. Poetics and the Play of Words
  7. Text Analysis: Th’ expence of Spirit
This, more or less, is an abstract of chapter 4:
Cognition is grounded in sensorimotor perception and action. The sensorimotor system is organized in a hierarchy of servomechanical systems as explicated by William Powers (Behavior: The Control of Perception, 1973), from the bottom up: intensities, sensations, configurations, sequences, programs. Cognition is conceptualized as a network organized on three hyper-orders: systemic, episodic, and gnomonic. Systemic nodes of different types represent sensorimotor schemes through a system of parameters. Intensities have no direct representation in cognition. The other orders (Powers’ term) are represented by systemic nodes as follows: sensations > properties (adjectives, adverbs); configurations > entities (nouns); sequences > events (intransitive verbs); programs > plans (transitive verbs). Nodes representing schemas in different sensorimotor channels are linked by the assignment relation. Paradigmatic arcs connect nodes of the same type (properties, etc.) but with varying amounts of detail, while composition structure reflects the scope of connected nodes (small, medium, large). Nodes of different channels are connected by syntagmatic arcs. Episodic nodes represent coherent subnetworks of systemic structure and locate things in time and space. The episodic network also has on-blocks as control structures. The gnomonic network is sensitive to the epistemic status of items in the systemic and episodic networks and regulates the interaction of language and cognition.
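For readers who think better in code, here is one way the abstract's node-and-arc vocabulary might be rendered as a data structure. This is strictly my illustrative sketch: the node types and arc types come from the abstract, everything else (names, representation, the constraint checked) is invented:

```python
from dataclasses import dataclass, field

# Systemic node types from the abstract: properties, entities, events, plans.
NODE_TYPES = {"property", "entity", "event", "plan"}
# Arc/relation types named in the abstract.
ARC_TYPES = {"assignment", "paradigmatic", "syntagmatic", "composition"}

@dataclass
class Node:
    label: str
    kind: str  # one of NODE_TYPES

@dataclass
class Network:
    nodes: dict = field(default_factory=dict)
    arcs: list = field(default_factory=list)

    def add_node(self, label, kind):
        assert kind in NODE_TYPES
        self.nodes[label] = Node(label, kind)

    def link(self, a, b, arc):
        assert arc in ARC_TYPES
        # Per the abstract, paradigmatic arcs connect nodes of the same type.
        if arc == "paradigmatic":
            assert self.nodes[a].kind == self.nodes[b].kind
        self.arcs.append((a, arc, b))

net = Network()
net.add_node("red", "property")
net.add_node("crimson", "property")
net.add_node("apple", "entity")
net.link("red", "crimson", "paradigmatic")  # same type, differing detail
net.link("red", "apple", "assignment")
print(len(net.arcs))  # 2
```

A real implementation would need the episodic and gnomonic hyper-orders on top of this; the sketch only shows the systemic layer's shape.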

Multimodal meditation facility


Receptive multilingualism, how fascinating!

In The Atlantic, Michael Erard describes a fascinating linguistic phenomenon: "The Small Island Where 500 People Speak Nine Different Languages: Its inhabitants can understand each other thanks to a peculiar linguistic phenomenon".

The article begins:
On South Goulburn Island, a small, forested isle off Australia’s northern coast, a settlement called Warruwi Community consists of some 500 people who speak among themselves around nine different languages. This is one of the last places in Australia—and probably the world—where so many indigenous languages exist together. There’s the Mawng language, but also one called Bininj Kunwok and another called Yolngu-Matha, and Burarra, Ndjébbana and Na-kara, Kunbarlang, Iwaidja, Torres Strait Creole, and English.
None of these languages, except English, is spoken by more than a few thousand people. Several, such as Ndjébbana and Mawng, are spoken by groups numbering in the hundreds. For all these individuals to understand one another, one might expect South Goulburn to be an island of polyglots, or a place where residents have hashed out a pidgin to share, like a sort of linguistic stone soup. Rather, they just talk to one another in their own language or languages, which they can do because everyone else understands some or all of the languages but doesn’t speak them.

The name for this phenomenon is “receptive multilingualism”, something I'd never heard of before reading Erard's article. Upon first learning of receptive multilingualism, it seemed improbable. How could a community use as many as nine different languages? Upon giving it more thought, however, the probability that members of a group living in a zone where many languages are in daily contact would develop passive fluency in several of them began to make sense.
Victor Mair, who discusses the article at Language Log, then goes on to present examples of similar phenomena from his own polyglot experience, mostly involving Chinese topolects plus English (e.g. Singapore). He concludes: "All of this leads me to conclude that, in general, passive recognition of languages is easier than active production, and that this holds true both with speech and writing."

One of the things that I learned as an undergraduate is that our passive or receptive vocabulary is larger than our active or productive vocabulary. That would seem to be in the same behavioral ball park. What I'd like to know is why? It seems to me that there has to be a simple and direct explanation, but I can't for the life of me come up with one. That explanation obviously needs to be couched in a good account of our linguistic mechanisms. And THAT's something we don't have. OTOH, this phenomenon seems to me to be a valuable clue about the workings of that mechanism.

What do the computational linguists know about the difference between parsing a sentence and generating one? Of course one can, in principle, parse a sentence without knowing what the words mean, and I believe that we've got parsers that do a pretty good job of this. To produce a sentence one starts with items of meaning (whatever/however they are) and assembles them into a coherent string.

Let's step back a second. On the receptive side, let's posit that the world is pretty much the same regardless of the language you speak (yeah, I know, Sapir-Whorf, but how strong is that, really?). And, however we order our inner semantic engine, there is coherence and order there. When listening to someone speak a familiar but foreign language, we don't have to supply the syntax; it's already there in the string. All we have to do is map the individual lexemes to our inner semantic store, perhaps with the help of a syntactic clue or three, and we understand. And if we're in conversation we can do a bit of back and forth to clarify things, even if our conversational partners are in the same situation as we are (i.e. they can understand us, but not speak our language). 

So, it's one thing to understand language strings, where syntactic order is there in the string. It's quite something else to produce them, where we have to create that order. Both cases presuppose that we've got usable mappings between lexical forms and semantic elements.
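To make that contrast concrete, here's a toy sketch of my own devising (not any particular system's): a miniature grammar where the parser merely checks the word order already present in the string, while the generator has to create that order from an unordered "meaning."

```python
# A toy illustration of the asymmetry discussed above: recognizing a
# string against a grammar vs. producing one.  Grammar and lexicon are
# made up for the example.

GRAMMAR = {
    "S": [["NP", "VP"]],
    "NP": [["Det", "N"]],
    "VP": [["V", "NP"]],
}
LEXICON = {"the": "Det", "dog": "N", "cat": "N", "chased": "V"}

def parse(tokens):
    """Recognize tokens as an S by recursive descent.

    The syntactic order is already there in the input string; the
    parser only has to check it against the grammar -- no meanings
    are consulted at all."""
    def expand(symbol, pos):
        if symbol in LEXICON.values():  # preterminal: match one word
            if pos < len(tokens) and LEXICON.get(tokens[pos]) == symbol:
                return pos + 1
            return None
        for rhs in GRAMMAR.get(symbol, []):
            p = pos
            for child in rhs:
                p = expand(child, p)
                if p is None:
                    break
            else:
                return p
        return None
    return expand("S", 0) == len(tokens)

def generate(meaning):
    """Produce a string from an unordered 'meaning' (a dict of roles).

    The generator must CREATE the word order that the parser got for
    free in its input."""
    return ["the", meaning["agent"], meaning["action"],
            "the", meaning["patient"]]

print(parse(["the", "dog", "chased", "the", "cat"]))   # True
print(generate({"agent": "dog", "action": "chased", "patient": "cat"}))
```

Even in this trivial case the two directions are not symmetrical: the parser exploits order supplied from outside, while the generator must impose it, which is at least suggestive of why production should be the harder task.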

That's connected speech. But what about individual vocabulary items in our own language? Why does the range of words we can understand in context exceed the number we can actually use? Well, duh, because we actually HAVE a semantic context supplied to us when listening to speech or reading written language. Again, it's the context.

Now, what's the right set of technical details needed to pull this all together?

* * * * *

Monday, November 26, 2018

Here's to your personal moonshot

This image embodies rather nicely the argument I made in MOONSHOT: Does Project Apollo bring us to redefine humanity? [TALENT SEARCH]

More and more I'm thinking that the 1969 Apollo landings (there were two that year, in July and November) will be seen as the Singularity marking the beginning of a new era in human history, though we're still clawing our way into it even now, almost 50 years later.

Sunday, November 25, 2018

The disaster that is palm oil, or: Look before you freakin' leap!

Abrahm Lustgarten, Palm Oil Was Supposed to Help Save the Planet. Instead It Unleashed a Catastrophe. NYTimes Magazine, November 20, 2018.
Most of the plantations around us [in Borneo] were new, their rise a direct consequence of policy decisions made half a world away. In the mid-2000s, Western nations, led by the United States, began drafting environmental laws that encouraged the use of vegetable oil in fuels — an ambitious move to reduce carbon dioxide and curb global warming. But these laws were drawn up based on an incomplete accounting of the true environmental costs. Despite warnings that the policies could have the opposite of their intended effect, they were implemented anyway, producing what now appears to be a calamity with global consequences.

The tropical rain forests of Indonesia, and in particular the peatland regions of Borneo, have large amounts of carbon trapped within their trees and soil. Slashing and burning the existing forests to make way for oil-palm cultivation had a perverse effect: It released more carbon. A lot more carbon. NASA researchers say the accelerated destruction of Borneo’s forests contributed to the largest single-year global increase in carbon emissions in two millenniums, an explosion that transformed Indonesia into the world’s fourth-largest source of such emissions. Instead of creating a clever technocratic fix to reduce America’s carbon footprint, lawmakers had lit the fuse on a powerful carbon bomb that, as the forests were cleared and burned, produced more carbon than the entire continent of Europe. The unprecedented palm-oil boom, meanwhile, has enriched and emboldened many of the region’s largest corporations, which have begun using their newfound power and wealth to suppress critics, abuse workers and acquire more land to produce oil.

Hiromi plays Beethoven – but what would HE think about it?

The emergence of Rank 4 thought in evolutionary biology, with some reflections on the reconstruction of the study of literature in our current era

A week ago I ran up a post, Innovation, stagnation, and the construction of ideas and conceptual systems, where I argued that conceptual exhaustion is one reason for the intellectual and economic stagnation economists have been observing for years. By conceptual exhaustion I mean that the systems of thought employed here, there, and perhaps everywhere (by now) no longer have anything new to tell us, no really new opportunities for invention and development. At this point it’s all about dotting i’s and crossing t’s. I offered that suggestion in the context of the theory of cognitive development that David Hays and I began with our paper, “The Evolution of Cognition” [1].

Such stagnation has been evident in academic literary criticism for some time now. I saw it two or three decades ago, at least, and the profession has more or less seen it for a decade or so. I have a few things to say about that in the last section, but I want to preface that with some observations about evolutionary biology in the penultimate section. That’s important because evolutionary biology was built on careful naturalistic description and that, naturalistic description, is what I think literary criticism must do. Before I do either, however, I want to say a few words about the general model Hays and I developed.

Cognitive rank

The idea is simple enough. Over the long course of human history new modes of thought emerge. These modes of thinking are grounded in very basic ‘information processing’ technologies – for want of a better generic term.

This is the scheme Hays and I developed and set forth in that initial paper:

Rank 1 – Speech
Rank 2 – Writing (mechanism: metalingual definition)
Rank 3 – Calculation
Rank 4 – Computation
By process we mean a general mode of thought, and by mechanism a particular conceptual device characteristic of that mode. The medium is the external physical matrix in/on which the mechanism operates while enacting the general thought process. The emergence of speech and writing are widely recognized as important ‘break points’ or ‘singularities’ in cultural history. Where we place calculation, the printing press is the more commonly cited watershed; regardless, the early modern era (aka the Renaissance) is widely recognized as a major break. As is the late 19th and early 20th century, where we place computation. That is to say, the historical eras we recognize are not of our making; others recognize them as well.[2] Our contribution is to align specific cognitive mechanisms with those eras.

Evolutionary biology as Rank 4 thought 

Hays and I argued that starting around the turn of the 20th century culture began evolving toward a new rank and that most of the intellectual and artistic displacement we are seeing reflects these growing pains. This new cognitive rank, Rank 4, has its origins in two developments in late nineteenth century thought: the creation of formal systems of logic and metamathematics and the emergence of non-mechanistic science.
We then went on to assert:
The new scientific style was forced further and further from a mechanistic universe by hard facts of the most intransigent and nonmechanistic sort. Thermodynamics provides the prototype, with biology (evolution) right behind (Prigogine and Stengers 1984). Perhaps the most deeply unnerving case, however, is that of quantum mechanics. That light behaved in some experiments like waves and in other experiments like particles was uncomfortable. To explain what they could see, physicists had to imagine a quantum world that they could not see: In principle and not in mere practice, the quantum world is not observable (Penrose 1989). Yet it provides a framework for mathematical derivations that explain, if that is the right word, the observations that can be made in our world. The fact of the matter is, Rank 4 science is as different from Rank 3 science as Rank 3 science is from Rank 2 natural philosophy. Sophisticated logic and mathematics become ever more necessary to thought. To admit the forces and the intangible particles without logic and mathematics to regulate explanation would be to readmit magic and superstition.
Time itself comes under new scrutiny. It was only around the turn of the 20th century that motion was studied in enough detail so as to provide descriptions of complex irregular movement. Motion pictures, photographs showing the paths taken by hands performing a task, time-motion studies in factories, and paintings of a single subject at several stages of an action all turned up more or less together. This conceptual foregrounding of temporality, when combined with metamathematical and logical reasoning, led to the development of the abstract theory of computing between the world wars.
The central work is Turing's explication of the algorithm. Rank 3 had concocted and used algorithms, but Turing explained what an algorithm was. In order to formulate the algorithm Turing had to think explicitly about the control of events in time. He described his machines as performing an action at a certain time; then another action at the next moment; and so on for as many consecutive moments and actions as necessary. His universal machine, a purely abstract construction, was an algorithm for the execution of any algorithm whatsoever. With it, he showed that no interesting formal system can be complete in the sense of furnishing a proof for every true statement.
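The quoted description of Turing's machines, performing one action at one moment and the next action at the next, is easy to sketch in code. The transition table below (a toy rule set for unary increment) is my invention for illustration, not anything from Turing's paper:

```python
# A minimal Turing-machine step loop, illustrating the point in the
# quotation: at each moment the machine performs one action, determined
# entirely by its current state and the symbol under the head.

def run(tape, rules, state="start", head=0, blank="_"):
    """Execute rules on tape (a dict position -> symbol) until halt.

    Each iteration of the loop is one 'moment': read, write, move,
    change state."""
    steps = 0
    while state != "halt":
        symbol = tape.get(head, blank)
        write, move, state = rules[(state, symbol)]
        tape[head] = write                  # act on the tape...
        head += {"R": 1, "L": -1}[move]     # ...then move the head
        steps += 1
    return tape, steps

# Toy rules: scan right over 1s, write a 1 on the first blank, halt.
# In effect: add one to a number written in unary.
rules = {
    ("start", "1"): ("1", "R", "start"),
    ("start", "_"): ("1", "R", "halt"),
}
tape = {0: "1", 1: "1", 2: "1"}             # unary 3
final, steps = run(tape, rules)
print(sorted(final.items()))                # four 1s: unary 4
```

The explicit control of events in time that Hays and I point to is right there in the loop: state, moment, action, next moment.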
Let’s go back to biology. Darwin is the major thinker and his theory of evolution is his major thought.

That thinking was grounded in three centuries of careful naturalistic observation and description, resulting in museums full of reference collections and volumes of verbal and graphic description. And this descriptive work went hand-in-hand with the classification system set forth by Carl Linnaeus in his Systema Naturae (1735). This would, in our terms, be the fruit of Rank 3 thinking.

It was Darwin’s achievement to examine that record and see in it a causal mechanism at work, a non-teleological mechanism. And thus Darwin’s thinking was necessarily embedded in a conception of time (cf. the second paragraph in the quotation). And he had to imagine some mechanism capable of producing the lineages one observes in the historical record. In general terms the mechanism is descent with modification (third paragraph). That, if you will, is his “algorithm”. I realize that, in fact, the term “evolutionary algorithm” is much in use in this context, though I’m not sure how useful it is. But one can see easily enough how it would arise.

It would be three-quarters of a century from Darwin’s full-scale exposition of this theory to Turing’s explication of algorithms. Is it too much to assert that that explication was somehow implicit in Darwin’s thought? Perhaps so, perhaps so. But it is nonetheless useful to see a connection, however distant.
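For what it's worth, descent with modification in its barest algorithmic form looks something like the following. This is a toy sketch with an arbitrary fitness target of my choosing; it is meant only to show why the phrase "evolutionary algorithm" arises so naturally, not to stand in for anything in Darwin or in the evolutionary-computation literature:

```python
import random

# The bare skeleton of 'descent with modification': imperfect copying
# plus differential reproduction.  The fitness function (match a target
# string) is an arbitrary toy choice.

TARGET = "finch"
ALPHABET = "abcdefghijklmnopqrstuvwxyz"

def fitness(s):
    """Count positions where s matches the target."""
    return sum(a == b for a, b in zip(s, TARGET))

def mutate(s, rate=0.2):
    """Copy s with occasional random modification."""
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in s)

random.seed(0)
# A random founding population of 50 five-letter strings.
population = ["".join(random.choice(ALPHABET) for _ in TARGET)
              for _ in range(50)]
generation = 0
while TARGET not in population:
    # Selection: the fitter variants leave descendants...
    parents = sorted(population, key=fitness, reverse=True)[:10]
    # ...with modification: offspring are imperfect copies.
    population = [mutate(random.choice(parents)) for _ in range(50)]
    generation += 1
print(generation)
```

Nothing in the loop aims at the target except blind variation and selection, which is precisely the non-teleological character of the mechanism Darwin saw in the record.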

The site of the remains of Jersey City's Van Leer Chocolate Factory, December 2, 2007

Saturday, November 24, 2018

MOONSHOT: Does Project Apollo bring us to redefine humanity? [TALENT SEARCH]

A couple days ago I ran up a post, What’s a (metaphorical) moonshot? [TALENT SEARCH], where I looked at a couple of articles where Mercatus Center fellows talked of moonshots. I was looking for the metaphorical work done by the idea of ‘moonshots’, and I missed it. ‘Moonshots’, whatever they are, are obviously awesome. But why?

Why? The idea seems to carry the connotation that these are very risky ventures where success is not certain. That’s certainly what Tyler Cowen seems to think. After all, he’s asserted that if most of his picks for Emergent Ventures aren’t failures, then he’s not doing it right (you’ll find the link in the previous post). But as Graboyes and Stossel pointed out (again, link in previous post), Project Apollo was not a particularly risky venture, not from an engineering point of view. We knew all the relevant physical principles. It was just a matter of getting the engineering right. The timetable may have been a bit tricky, but as long as we were willing to commit the resources, success seemed certain. And it was.

So what’s the big deal?

THAT was the big deal, that success was all but certain. What does it imply about who and what we are that, when a nation set out to land a man on the moon within a decade, it did so? What does it mean that that (kind of) feat is within our capabilities?

Mars too. Elon Musk says we could have a base on Mars by 2028. Do I believe him? Yes/no/I don’t know. But it’s the timing I’m iffy about, not the technical capability. If we’re willing to commit the resources, then YES, we can do it. Well, I’m also iffy about the psychological capacities of humans in a venture like that. But what kind of doubt is that?

On the other hand, while I think that computers will be doing some pretty interesting things in 2028, some of them not anticipated at the moment, I don’t think we’ll have common sense knowledge under control, nor do I think we’ll be anywhere near ‘artificial general intelligence’, whatever that is. In this domain we lack knowledge of the fundamental principles governing mental phenomena and so we’re just groping around trying this or that. Some things succeed spectacularly, others fail, but we don’t quite know why in either case. We’re accomplishing something, learning something, but just what, who the hell knows? We don’t. Not yet.

Back to moonshots. It seems to me that what underlies the metaphor’s power is the simple fact that, YES, we set out to land a man on the moon and we did it, on time and on budget (I think). What’s awesome about Project Apollo isn’t that it was a crapshoot that came through. Rather, what’s awesome is that it WASN’T a crapshoot and there was little substantial doubt that we would succeed (at least not among the engineers and scientists who planned and designed the mission). That such a thing is now within human capacity, THAT’S WHAT’S AWESOME.

Addendum, 11.25.18: Posted to my Facebook page:

Dictionary, language, word friends, I'm interested in when and how the idea of a "moonshot" became a metaphor meaning roughly, "a highly improbable undertaking but of possibly high value if successful". The reason that interests me is that the vehicle, in one terminology, of the metaphor is obviously Project Apollo. But success was not highly improbable. There were no unknown laws of physics etc. involved. It was a highly focused engineering venture, one that, given time and resources, was all but certain to succeed. As for its value, well, how do we determine that? But it was funded as a propaganda effort in the Cold War (though I doubt that's how the people involved in the project thought of it). That is to say, Project Apollo was NOT a moonshot in the metaphorical sense.

So, how and when did that come about? The OED might tell you something, but I don't have access. I checked the BYU corpus of contemporary American English. Its earliest example (and the corpus only goes back to 1990) is from 1995, and that was clearly metaphorical.

Google Ngram is sparse and enigmatic. Thus from 1979, New York Holstein-Friesian News: "ALSO: A Redwood Ramona Moonshot with 1st calf from a dam with 17,767 3.5% 619 in 288d at 3-0." Further googling reveals "Redwood Ramona Moonshot" to be a bull in Livingston County NY, just how prolific I can't say. And then there's Glenmore Moonshot, a horse of some distinction back in 1982. And "the town of Moonshot, Oregon, is experiencing a rapid growth of population because of the recent relocation of an assembly plant for hand calculators near the town." From 1976, "But the real news is that short days, at least in the case of the Galores and Moonshot types result in substantially earlier flowering — and substantially dwarfer plants." So, we've got moonshot animals, moonshot flowers, and a town. All of which is really quite interesting. But I don't know what to do about it.


Ambiguity in Self-Other Processing

Christophe Emmanuel de Bézenac, Rachel Ann Swindells, and Rhiannon Corcoran, The Necessity of Ambiguity in Self–Other Processing: A Psychosocial Perspective With Implications for Mental Health, Frontiers in Psychology 2018; 9: 2114. doi: 10.3389/fpsyg.2018.02114

Abstract: While distinguishing between the actions and physical boundaries of self and other (non-self) is usually straightforward there are contexts in which such differentiation is challenging. For example, self–other ambiguity may occur when actions of others are similar or complementary to those of the self. Even in the absence of such situational challenges, individuals experiencing hallucinations have difficulties with this distinction, often experiencing thoughts or actions of self as belonging to other agents. This paper explores the role of ambiguity in self–other differentiation, drawing from developmental, psychodynamic, and neurocognitive perspectives. A key proposal is that engagement in contexts that make distinctions between self and other challenging yet necessary allow reality-testing skills related to agency to develop. Attunement in typical caregiver–infant interactions is framed as a safe but inherently ambiguous environment that provides optimal conditions for the infant to develop a coherent self–other sense. Vulnerability to psychosis may be related to limited access to such an environment in early development. However, the perceptual, cognitive, and social skills that contribute to attribution are likely to be malleable following infancy and improve through opportunities for boundary play in similarly ambiguous settings. Using music-making to illustrate, we postulate that engagement in intricate joint-actions that blur agentic boundaries can contribute to the continued development of an adaptive sense of self and other essential to healthy social functioning. Increased insight into self–other ambiguity may enhance our understanding of mechanisms underlying “self-disorders” such as schizophrenia and eventually extend the range of social and arts-based therapeutic possibilities.

The Paintings of Akira Kurosawa

Friday, November 23, 2018

Psilocybin as a treatment for depression

Once again, the failure of the nation-state as a form of governance

Writing in the Financial Times, Mark Mazower discusses Leonard Smith, Sovereignty at the Paris Peace Conference of 1919 (Oxford 2018), and Paul Hanebrink, A Specter Haunting Europe: The Myth of Judeo-Bolshevism (Harvard 2018).
Government of the people by the people for the people: a world of sovereigns yielded to a world of sovereignty, a much vaguer concept and a more easily exploited one. And how many peoples there turned out to be. In 1914 there were just over 50 internationally recognised states, mostly in Europe and the Americas. Today there are almost 200 and more than half of these (often small) entities are of very recent vintage. International recognition of independent statehood now depends not on the nod of a crowned head but on admission to membership of the United Nations. Sovereignty in this modern conception, Leonard Smith suggests in his lucid study, was created in Paris at the peace conference and worked out in rough and ready fashion in the drafting of rules, the obsessive drawing and redrawing of boundaries, and above all the establishment of a very new kind of institution — a body of states brought together in an international organisation charged with preserving the peace.

As Sovereignty at the Paris Peace Conference of 1919 shows, this new order never entirely managed to disentangle itself from the old. Great powers did not change their spots or slouch offstage: the British, French and Americans were in the driving seat at Paris and everyone knew it. [...] Thus the story of the League and its role in the shaping of modern conceptions of sovereignty is an ambiguous one — it is about the rise of modern international organisation but also about its limits, limits that would become even more evident when the League was adapted for the needs of the post-Nazi world and rebranded as the UN.

The war’s effect on long-run global economic trends was less marked than one might expect. Europe’s gradual decline as an economic powerhouse actually predated the first world war and the slide continued thereafter. [...] It was not so much the numbers that changed as institutions of government, and political ideas along with them. For behind the post-1918 remodelling of sovereignty lay a quite extraordinary social transformation. The old aristocratic order had rested on toilworn hands and bad roads. In societies where deference, or laws, or outright servitude, kept people tied to the land, there was no politics in the modern sense. So it had been since ancient times. Even in 1900, a century or more after the onset of the industrial revolution, only about 13 per cent of the world’s population lived in cities. By 1960 the figure had risen to a third. Today far more people are urban; only in Africa and parts of southern Asia do rural populations remain a majority.
And now, a big one:
This social transformation coincided with an equally dramatic change in the role of government. To ask people to die for the nation required looking after their basic needs in the peace. Sovereignty in the age of total war thus implied a new conception of the state — the guarantor of basic housing, education and welfare, the arbiter of industrial peace. Public spending on a new scale also meant correspondingly higher tax levels. Modern fiscal policy was born, and with it the modern welfare state. The democratic version of this, halting as it was, was spurred by the competition provided by two far more sweeping systems of social security — fascism and communism. This raises an awkward question: if historically the emergence of state-driven affordable healthcare, housing and education was a product of the age of mass conscription, total war and ideological competition, what of its future?
And thus the modern nation-state was born, a scant century ago. But can it last, can it survive such complications as "the ethnic minority [...] together with such innovations as organized population exchanges, minority rights treaties and genocide"? Nationalism was relatively new in 1919, when "one state after another across central and eastern Europe acquired a modern constitution." Alas, "by 1939, most of the new democracies created in 1919 had ended up as dictatorships."

Friday Fotos: Shadows

pigeon & shadow.jpg
Power in Cultural Evolution and the Spread of Prosocial Norms


According to cultural evolutionary theory in the tradition of Boyd and Richerson, cultural evolution is driven by individuals' learning biases, natural selection, and random forces. Learning biases lead people to preferentially acquire cultural variants with certain contents or in certain contexts. Natural selection favors individuals or groups with fitness-promoting variants. Durham (1991) argued that Boyd and Richerson's approach is based on a "radical individualism" that fails to recognize that cultural variants are often "imposed" on people regardless of their individual decisions. Fracchia and Lewontin (2005) raised a similar challenge, suggesting that the success of a variant is often determined by the degree of power backing it. With power, a ruler can impose beliefs or practices on a whole population by diktat, rendering all of the forces represented in cultural evolutionary models irrelevant. It is argued here, based on work by Boehm (1999, 2012), that, from at least the time of the early Middle Paleolithic, human bands were controlled by powerful coalitions of the majority that deliberately guided the development of moral norms to promote the common good. Cultural evolutionary models of the evolution of morality have been based on false premises. However, Durham (1991) and Fracchia and Lewontin's (2005) challenge does not undermine cultural evolutionary modeling in nonmoral domains.