It's one thing to figure out a foreign language if you are working with an informant who shares a language with you. In that case you can use the shared language to guide your exploration of the foreign one. But what if you and your informant share no language at all? How does it work then? In this video Dan Everett demonstrates.
Thursday, June 28, 2018
Yesterday I put up a short post containing a conversation between Daniel Kaufman and Massimo Pigliucci about ontology. Toward the end Pigliucci suggested that we needed to recognize multiple ontologies: 1) physical objects, 2) biological beings and processes, 3) the (human) social world and 4) mathematics. A bit later I commented on that post over at Meaningoflife.tv. Here's an edited and expanded version of that comment.
* * * * *
Yet...come to think of it, you know, in the world of computing they talk of IMPLEMENTATION, where some system X is said to be implemented in Y. X might be a 'high-level' programming language that is implemented in the machine/assembly language for a specific processor, or it might be, say, a word processor that is implemented in, say, C++ (where C++ is a high-level programming language). In the word processor example, while there is a sense in which you could say that the word processor is reducible to C++, the fact is that if all you've got available to you are the constructs of C++, you're never going to understand how the word processor works. You can construct the FUNCTIONS of a word processor out of C++ in the way that you can construct a house out of wood and nails, but you really can't define and organize the functions of a word processor in terms of C++. You've got to do the definitions in terms that are appropriate to word processors, not high-level programming languages.
In the case of houses, you're going to want to have a living room, bedrooms, a kitchen, bathroom, and so forth. You define and design those rooms in terms of their (human) functions. Once the functions have been specified and the rooms designed, you can then figure out how to implement them using the building materials at hand, wood, nails, glue, pipes, glass, etc. But you can't define those room functions directly in terms of wood and nails, etc. And so it goes with a word processor. Sentences and paragraphs are defined in terms having to do with language and documents, not the data types and commands of a programming language. Given an understanding of what sentences and paragraphs are, you can then proceed to implement them in programming constructs.
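The point can be sketched in code. Here is a toy illustration (all names are mine, invented for the example, not drawn from any real word processor): document-level notions such as "paragraph" and "word count of a document" are defined in terms appropriate to documents, and only then implemented in a lower-level substrate, here plain Python strings.

```python
class Paragraph:
    """A paragraph is *defined* as an ordered sequence of sentences."""
    def __init__(self, sentences):
        self.sentences = list(sentences)

    def word_count(self):
        # *Implemented* via low-level string operations, but the concept
        # "word count of a paragraph" belongs to the document level.
        return sum(len(s.split()) for s in self.sentences)


class Document:
    """A document is *defined* as an ordered sequence of paragraphs."""
    def __init__(self, paragraphs):
        self.paragraphs = list(paragraphs)

    def word_count(self):
        return sum(p.word_count() for p in self.paragraphs)


doc = Document([
    Paragraph(["It was the best of times.", "It was the worst of times."]),
    Paragraph(["Call me Ishmael."]),
])
print(doc.word_count())  # 15
```

Nothing in the lower level (strings, `split`, `sum`) tells you what a paragraph is; the definitions run downward from the document level to the implementation, not the other way around.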
It's in that sense, I'm suggesting, that we can say that the world of biology is implemented in physical objects and processes, and that the human social world is implemented in the worlds of biology and physical objects. [It's not clear to me that mathematics can be accommodated in this way.]
Wednesday, June 27, 2018
Near the end of this discussion, Massimo Pigliucci argues for a "plurality of ontologies" (c. 1:06:25). He suggests four (c. 1:09:28): 1) physical objects, 2) biological ontology (function), 3) social ontology, and 4) mathematics (c. 1:10:32). That seems a bit like the notion of Realms of Abundance that I argued for some years ago: Matter, Life, Culture, and now, just emerging, an Arena of Abundance. See:
They end up pointing out, for example, that panpsychism is an absurd position motivated by the assumption of metaphysical monism. That is, if we are allowed to think in terms of only one ontology, then we must conclude that even brute material objects have some form of consciousness, however rudimentary. Hence, folks like David Chalmers and Galen Strawson need to rethink what they're up to.
Tuesday, June 26, 2018
"Estrellita": a powerful animated film about immigration, #ICE , and #Vermont— Bryan Alexander (@BryanAlexander) June 26, 2018
Created by Middlebury College students, staff, and faculty, I think.
A fine example of what the digital #liberalarts can do.#digitalstorytelling
While the president has demonized Muslims, we know from living in #JerseyCity (the most diverse city in the US) a strong Muslim community is a great part of any city. The best statement against @realDonaldTrump ‘s policies will be via the ballot box in November #NoMuslimBanEver— Steven Fulop (@StevenFulop) June 26, 2018
When I first started posting at The Valve I posted a series on the problem of literary character: Since they ARE fictions, why is it so difficult for us to talk about them AS fictions? Why are we always using that language and concepts of real people to talk about these fictions? This is one of those posts, originally going on the web on August 5, 2006.
Let's look at the individual reader as he or she apprehends a text and thus (re)creates the lives of the fictional characters in the text. It is common to say that we come to identify with literary characters. But, as Norman Holland pointed out in The Dynamics of Literary Response (1968, pp. 262 ff.), it is by no means clear just what we mean by identification in this context. Still, in order to get this discourse on the road, we need some word for the relationship a reader establishes with a character. If not “identification,” then what?
Keith Oatley has been writing about fiction as simulation of the world:
Shakespeare’s great innovation was of theatre as a model of the world. The audience member constructs the simulated model in the course of the play, and thereby takes part in the design activity. So fiction is to understanding social interaction as computer simulation is to understanding, perception and reasoning. Shakespeare designed plays as simulations of human actions in relation to predicaments, so that the deep structure of selfhood and of the interaction of people who have distinct personalities becomes clearer.*
Oatley has the notion of simulation from computing, where computers are used to simulate a wide variety of phenomena – traffic patterns, explosions, fluid flow, and so forth. He proposes that simulation is just the notion we need in order properly to interpret the Greek mimesis. Stories are “the kind of simulation that runs on minds rather than on computers."
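To make the computing sense of "simulation" concrete, here is a minimal sketch of the kind of thing Oatley is borrowing from: a toy traffic model (a cellular automaton of the "rule 184" family). Cars on a circular road advance one cell per step if the cell ahead is empty. The example is purely illustrative and not drawn from Oatley's own work.

```python
def step(road):
    """Advance every car (1) whose next cell is empty (0); road is circular."""
    n = len(road)
    new = [0] * n
    for i in range(n):
        if road[i] == 1:              # a car occupies this cell
            if road[(i + 1) % n]:     # blocked by the car ahead: stay put
                new[i] = 1
            else:                     # road ahead is clear: move forward
                new[(i + 1) % n] = 1
    return new


road = [1, 1, 0, 0, 1, 0, 0, 0]       # three cars on an eight-cell ring
for _ in range(3):
    road = step(road)
print(road)  # [0, 0, 1, 0, 1, 0, 0, 1]
```

Run the model forward and traffic jams form and dissolve from the simple local rule; no car is "really" moving, but the pattern of movement is what the simulation captures. Oatley's claim is that a story does for social interaction what this does for traffic.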
I find Oatley’s proposal to be plausible, but I’ve got reservations. Thus much of what I say will be a critique of that notion. I am not particularly happy about this mode of proceeding, as I would prefer simply to set forth a problem-free account. Alas, I am not aware of such an account and so must be content with a crude demonstration via negativa.
A Scene from Shakespeare
I would like to discuss Shakespeare’s Much Ado About Nothing, focusing on the first scene of Act IV. I have two reasons for choosing this scene: 1) Eight major characters are on stage and most of them have substantial speaking parts in the scene. 2) The scene is emotionally rich, with the characters having distinctly different interests in the action. Whatever it means for a reader to simulate an imaginary world, the complexity of this scene taxes that capacity.
Here’s what’s going on: The characters have gathered for the wedding of Claudio and Hero, the arrangement of which has been accomplished in one of three plot lines intertwined through the first three acts. The relationship between Beatrice and Benedick is another of those plot lines, while the third is Don John’s scheme to destroy the wedding. Don John’s notion has been to deceive Claudio into believing that Hero is a woman of loose morals who has deceived him even after having accepted his proposal. Thus, while the other characters, save Don John, believe they are about to witness a wedding, Claudio intends to denounce Hero before the assembled group.
And that’s what he proceeds to do, within thirty lines of the scene’s opening. Hero doesn’t say much, but she does deny the charges. She then faints and is taken for dead. At that point Don John, Claudio, and Don Pedro leave. Hero then revives and those who remain plan a course of action that will, they hope, clear her name.
My first issue is this: Is it physiologically possible for one person, one nervous system, to simulate the actions and emotions of all the characters in this scene? Different emotions are mediated by different neural and physiological systems. In particular the sympathetic and parasympathetic systems are important in motivation and emotion and they are antagonists, pulling physiological processes in opposite directions. Claudio’s aggressive anger – perhaps with overtones of hatred – would be sympathetically driven while Hero’s protective faint would be parasympathetically driven. Can a single nervous system simulate both of those states, either simultaneously or in close succession? That seems highly unlikely. And those are only two characters in the scene. What of Hero’s father, Claudio’s patron, of Beatrice and Benedick, the Friar? And what of Don John? Is he feeling pleasure, perhaps even triumph – albeit concealing these feelings from the others – at seeing his plotting bear fruit?
It seems rather unlikely that a reader would be able to simulate these various feelings and attitudes within the relatively brief compass of a few minutes. Beyond the difficulty of simultaneously activating mutually exclusive neuro-physiological systems, we have the fact that these hormonally rich systems change state more slowly than perceptual and cognitive systems. Even if we simplify the reader’s problem by asserting that the reader only need simulate the character who is speaking, we have the problem of switching from one character to the next, which could be daunting where three or four characters switch back and forth within the compass of only a dozen or two lines.
So, perhaps the reader does not simulate these feelings and attitudes in any very deep way; in particular, perhaps the slower acting hormonal systems are not recruited into action at all. Or perhaps the reader is not simulating the emotions of any of the characters in the scene. Rather, the reader is simply reacting to the actions and words of people whom the reader “knows” and toward whom the reader has various attitudes, both positive and negative. That is, if the reader is simulating anyone, it is a person watching such a scene. I am imagining that the reader is simulating someone in attendance at such a wedding, but not participating in it in any way.
And this is not so far from imagining the reader to be in the audience of a performance of Much Ado About Nothing. In this situation each actor has responsibility for simulating the words and actions of a character, and only one character. The playgoer need only react to the play.
At this point, however, the notion that a reader, or a playgoer, is simulating the action seems rather far from what Oatley has been asserting, for he talks as though the reader is simulating the characters from the inside. Though he doesn’t use phrases like “from the inside,” that seems to be what he is asserting. If, for the reasons I have given, that is difficult or impossible, then it is not at all obvious to me what simulation might mean. What could it mean to simulate a character from the outside?** (Note that this problem doesn’t arise in computer simulation of physical phenomena.)
One way to deal with this problem would be to say something like: “Well, we don’t simulate all the characters. Just one or a small group of them.” Given that Oatley is arguing that such simulations help us understand ourselves and others, and thus help us negotiate our social lives, it is not at all clear that such a narrowing of scope is legitimate. But even if we accept it, difficulties remain.
The Foolish Protagonist
Let us return to Much Ado. Unlike Hamlet or Othello, the play doesn’t really have a single protagonist. But Claudio takes the active role in one plot and is obviously a central figure in this drama.
Let us say that Claudio is motivated by anger in this scene. But the accusation motivating that anger is wrong, and the reader knows it. That knowledge effectively bars the reader from “identifying” with Claudio and so simulating his anger toward Hero. If the reader feels any anger at all in this scene – as this reader did – it is more likely to be directed at Claudio himself, perhaps against Don John as well, or simply at the whole bollixed situation. One sort of reader may also feel a bit of pity for Claudio, who, after all, was deceived; while another sort of reader may feel that Claudio was wrong not to have first broached the matter in private. But no reader is simply going to follow along with Claudio’s feelings and actions.
At this point it seems to me that, if the notion of simulation is to be of much use, we need to know considerably more about just how the brain does these things. Rather than speculate about what such knowledge might yield, I want to move in a different direction.
Monday, June 25, 2018
Leif Weatherby, Digital Metaphysics: The Cybernetic Idealism of Warren McCulloch, The Hedgehog Review, Vol. 20, No. 1 (Spring 2018)
McCulloch never thought the real would yield to data; nor did he ever think humans would defer to their machines. Instead, he saw that the machines would make new principles of abstraction—new kinds of cognition—available. It was a kind of mutated Kantian question. Kant had wanted to know how much mind is in the world, and McCulloch thought the sum might shift. That is, the shape of the relation between abstraction and the real might change with the new machines.
Oh woe are the humanities, or, What becomes of moral education in an age of intellectual specialization?
Paul Reitter and Chad Wellmon, Melancholy Mandarins: Bloom, Weber, and Moral Education, The Hedgehog Review, Fall 2017. I've snipped paragraphs from the article here and there.
Relying mostly on anecdotal evidence, and writing in accessible, simplifying prose, an insider-outsider figure—almost always a male humanities professor with solid academic credentials—condemns the culture of specialized research. He tells readers that as a result of this and other ills, alma mater has lost her way. Our once great institutions of higher learning have strayed from their mission of guiding young people through the process of building a soul, a failure that is both a symptom and a cause of a broader decline in our system of values. The lament culminates in a call for colleges and universities to rededicate themselves to the humanities in the right way. Pushing them to do so is the best chance we have to save ourselves from our malaise.
Despite their differing views on the fate of the humanities in the modern age, Bloom and the more recent melancholy mandarins agree that the research university has undermined the kind of education they deem so essential. It compartmentalizes inquiry into ever more specialized domains and thus makes “knowledge of the whole man,” Bloom’s formula for the end of education, impossible. Delbanco, in laying out what college should be, distinguished the purpose of research universities from that of the undergraduate colleges they house. Whereas the former produces new knowledge, the latter enables “self-discovery” or the formation of “a new soul,” he wrote, citing the German sociologist Max Weber, the man credited with first using the term “mandarin,” which had referred to Confucian scholar-bureaucrats, to describe Western intellectuals not lacking in self-importance.
But there is also a great irony to the melancholy mandarins’ position. It was the modern research university, after all, that sacralized the humanities, accorded them prestige, and made the study of humanities an end in itself, providing a foundation for the academic freedom that, according to the mandarins, “real education” requires. The modern research university created the humanities as we know them today. Despite their differences in context and disposition, Bloom and Weber understood both this and the profound contradictions that followed. Even more reason, on the thirtieth anniversary of Bloom’s Closing of the American Mind and the centennial of Weber’s Science as a Vocation, that we return to those texts in order to make sense of the permanent crisis of the humanities in the modern age.
The old system subordinated the humanities–philosophy, philology, history, rhetoric, and literature–to professional education in law, theology, or medicine. Reformers set out to change that at the beginning of the 19th century.
During the nineteenth century, the reformers’ dreams were, in one sense, largely realized. Humanistic inquiry was liberated from law, medicine, and theology, and humanities scholarship flourished. Universities in Berlin, Göttingen, and Heidelberg established the standards of systematic scholarship for everything from philosophy and classics to history and literature. New mechanisms for promotion were institutionalized. Research seminars were founded. Professional journals and societies were created. The modern principle of faculty self-governance was put into practice. And though its dependence on German state governments made for complications, the research university moved toward giving scholars academic freedom—the institutional space and support to teach and write what they wanted. As the Prussian constitution of 1850 codified it, “Scholarship and its teachings are free.”

But as humanities scholarship advanced, scholars within the university, as well as critics outside it, began to worry that the success of the research university had ushered in a fragmented and ever narrower kind of knowledge. The modern university and its ideals of pure research and academic autonomy may have helped free the humanities, but they also paved the way for another master: specialization.
Disney’s been doing live-action remakes of many of its animated features. It released its second remake of The Jungle Book in 2016 and will be releasing a remake of Dumbo next year.
I’ve put up a post at 3 Quarks Daily where I examine the symbolic penumbra elephants have taken on in that film: Disney’s Dumbo, Tripping The Elephants Electric.
You might be interested in my working paper, Walt Disney’s Dumbo, a Myth of Modernity.
Sunday, June 24, 2018
Friday, June 22, 2018
An interview with Oren Etzioni, CEO of the Allen Institute for Artificial Intelligence in Seattle.
On the limitations of AI:
AI systems can make limited black and white distinctions. Understanding is more difficult. Allen asked me at first, “Is it possible to give an artificial intelligence a reference book to read and then ask it questions?” It is presumably a simple activity, but the answer was no. We have been working on it and there’s progress but it is still a difficult problem.
The problem of common sense reasoning is one aspect, and a very important one, of this limitation. In the near term:
Q: Okay, so without being overly optimistic or pessimistic: Where are we going in ten years?
A: The best way to think ten years ahead is to look ten years back. During this time, in the micro, things changed like we have moved past the iPhone 3. But on the macro scale, not much has changed. In ten years, we are still going to be building AI systems that are narrow, that can play Go, for example, and win. Maybe they will also recognize faces and diagnose certain diseases. AI will be able to carry out those tasks in a superhuman way. But wider capabilities, the ones we think of as intelligence, such as understanding a situation or context, will be much harder to achieve. In 1996, the computer system Deep Blue beat Garry Kasparov in a game of chess. It can play the best chess game in the world, all the while the room is on fire, and not notice a thing. Today we have a program that can be the world champion of Go, which is a much more complicated game, while the room is on fire.
Q: Meaning that AI still cannot tell what is happening around it.
A: Yes. There has been no progress in its ability to understand what is happening around it. I expect that ten years from now, maybe there will be a program that beats the best Minecraft player in the world, but it still won’t notice that the room is on fire. That’s where it needs us. That is why we need to aim for intelligence that enhances human capabilities, that works in tandem with people. [...] There’s a paradox that people tend to miss: things that are difficult for people are easy for machines and things that are difficult for machines are easy for people. The real world, real people, real speech, books—these are a lot harder than Go.
Stanley Fish has just published a piece in The Chronicle of Higher Education, “Stop Trying to Sell the Humanities”, (June 17 2018). As the title indicates, it’s mostly about what he regards as futile efforts to justify the ways of humanists to the populace. He may well be right about the futility of those appeals, and he may be right, as well, that they are deeply mistaken about the value of the humanities, but those are not my concerns in this post. Near the end of the piece he makes a drive-by hit on the digital humanities. That’s what interests me.
Here’s a paragraph:
But there is an even deeper problem with the digital humanities: It is an anti-humanistic project, for the hope of the project is that a machine, unaided by anything but its immense computational powers, can decode texts produced by human beings. For it to work, the project requires a digital dictionary — a set of fixed correlations between formal patterns and the significances they regularly convey. There is no such dictionary, although if there were one the acts of readers and interpreter could be dispensed with and bypassed; one could just count things and go directly from the result to a statement of what Paradise Lost means. That is the holy grail of the digital-humanities project, at least with respect to interpretation: It wants to get rid of the inconvenience of partial, limited human beings by removing from the patterns they produce all traces of the human. It is an old game forever being renewed, but in whatever form it takes, it’s a sure loser.
As far as I know, no one has made such a proposal–though there’s much beyond my knowledge so it’s possible that somewhere out there such a proposal has been entertained. It’s a straw man.
He’s been stalking that straw man, or a close relative, since the 1970s. In “What Is Stylistics and Why Are They Saying Such Terrible Things About It?” he berates several scholars, Louis Milic in particular, for being bewitched (my term) by “the promise of an automatic interpretive procedure.” It’s not at all clear to me that any of those thinkers had such a creature explicitly in mind, though they may have had such longings. It is, in a way, an attractive prospect, especially when you consider the contemporary context, where critics were warring over the disconcerting fact that critical agreement is impossible to come by (a war, as far as I can tell, that hasn’t been won, but has mostly been abandoned). Mostly, however, it is an Other that Fish can set in opposition to his own position, whatever it might be.
Let me suggest that “mathophobia” is at the heart of that Other, its skeleton, heart, stomach, and brain, all in one. In today’s edition of the Humanist newsletter (32.103 Fish’ing for fatal flaws) Willard McCarthy asserts, in response to the Chronicle piece:
I suspect there's another problem here as well: the fear of, and so inability to see work tinged with or involving, mathematics (mathophobia?). We’ve run into his "extreme or irrational fear or dread aroused by" (OED) mathematically involved analysis of literary style before. What he and others are missing as a result! Note that it is not necessary at all to be mathematically competent to see what's happening and appreciate the importance of current work in statistically sophisticated computational stylistics, for example. It helps to observe that sorting and counting are mathematical operations, then to investigate what happens when these are powered by the digital machine over large quantities of data.
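McCarty's observation that sorting and counting are mathematical operations can be made concrete in a few lines. Here is a minimal computational-stylistics sketch: tokenize a text, count word frequencies, and sort them. The function name and sample text are my own, invented for illustration; real work in the field (such as the Literary Lab pamphlets) applies far more sophisticated statistics over thousands of texts.

```python
from collections import Counter
import re


def word_frequencies(text, top=5):
    """Lowercase, tokenize, count, and sort by descending frequency."""
    words = re.findall(r"[a-z']+", text.lower())
    return Counter(words).most_common(top)


sample = ("To be, or not to be, that is the question: "
          "Whether 'tis nobler in the mind to suffer")
print(word_frequencies(sample, top=3))
```

There is nothing mysterious or anti-humanistic here; the output is a sorted table of counts, which the critic then has to interpret. The mathematics begins when such counts are compared statistically across a large corpus.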
As I have found more than once, it is a mistake to assume that the old fears are a thing of the past or will be any time soon. Fearful reactions, such as Fish's, are valuable. They point to the depth and breadth, if you will, of the cognitive changes at work, slow though they may be.
I think, no, I’m sure, that McCarthy is right in this.
Fish’s mathophobia was in full force in the Q&A after “If You Count It, They Will Come: The Promise of the Digital Humanities”, an address he gave before the School of Criticism and Theory in the summer of 2015–a video and a transcript are online.
In the course of answering a question he mentions Literary Lab, Pamphlet 4: A Quantitative Literary History of 2,958 19th Century British Novels: the semantic cohort method. He asks: “Now what is the semantic cohort method? Well, it turns out to be a method-- by the way, just as a piece, I don’t know, something that's almost, if you pardon the word, aesthetic. When I come upon an essay that has a page in it like that, I want to reach for my gun.” As he utters that last phrase (in a rising tone of voice) he’s holding up a page from the pamphlet, a page given over to a graph. And everyone knows that graphs consist of math wrapped in visible clothing.
Thursday, June 21, 2018
Melania's jacket: I REALLY DON'T CARE, DO U?— Ezra Klein (@ezraklein) June 21, 2018
Melania's spox: "It's a jacket. There was no hidden message.”
Trump: There's a hidden message to the Fake News Media.
Melania: 🤐 https://t.co/PXg3donD70
"Koko — the gorilla known for her extraordinary mastery of sign language, and as the primary ambassador for her endangered species — passed away yesterday morning in her sleep at the age of 46. "https://t.co/QgGKaz9kyi— Marc Kissel (@MarcKissel) June 21, 2018
Here's a post where I talk about other linguistic apes, Taboo, abstraction, and living with animals.
Koko had pet cats.
Koko wanted a birthday party celebration with her family, which included her cats! pic.twitter.com/r4XFa2gcNy— Gorilla Foundation (@kokotweets) July 8, 2016
Wednesday, June 20, 2018
Kret ME, Tomonaga M (2016) Getting to the Bottom of Face Processing. Species-Specific Inversion Effects for Faces and Behinds in Humans and Chimpanzees (Pan Troglodytes). PLoS ONE 11(11): e0165357. https://doi.org/10.1371/journal.pone.0165357
For social species such as primates, the recognition of conspecifics is crucial for their survival. As demonstrated by the ‘face inversion effect’, humans are experts in recognizing faces and unlike objects, recognize their identity by processing it configurally. The human face, with its distinct features such as eye-whites, eyebrows, red lips and cheeks signals emotions, intentions, health and sexual attraction and, as we will show here, shares important features with the primate behind. Chimpanzee females show a swelling and reddening of the anogenital region around the time of ovulation. This provides an important socio-sexual signal for group members, who can identify individuals by their behinds. We hypothesized that chimpanzees process behinds configurally in a way humans process faces. In four different delayed matching-to-sample tasks with upright and inverted body parts, we show that humans demonstrate a face, but not a behind inversion effect and that chimpanzees show a behind, but no clear face inversion effect. The findings suggest an evolutionary shift in socio-sexual signalling function from behinds to faces, two hairless, symmetrical and attractive body parts, which might have attuned the human brain to process faces, and the human face to become more behind-like.
For group-living animals, primates included, the recognition of conspecifics is crucial for their survival. Humans have specialized brain areas to recognize faces and whole bodies[2–5] and their expertise in face recognition is demonstrated by the ‘inversion effect’, showing that faces and whole bodies, but not objects, are recognized configurally rather than by their parts[6–8]. Importantly, their recognition is disproportionally impaired, relative to objects such as houses or cars, when they are seen inverted rather than upright. Conclusive evidence has shown that this effect is primarily due to a disruption in the processing of configural, rather than featural, information in faces [e.g., [9–13]. The face inversion effect has been observed in chimpanzees too, and although not all chimpanzees show this effect at all times[14, 15], overall there is evidence that configural processing is a critical element of efficient face detection in chimpanzees as well[16, 17]. Thus, effects of inversion have been observed for faces and whole bodies, but are generally not found for individual body parts. Intriguingly, previous studies included almost all body parts, except the most obvious one, which is the behind, as we will outline below.
Previous research has shown that in recognizing each other, chimpanzees do not rely on the face alone, but also easily recognize each other by their behinds. Most non-human female primates, chimpanzees included, show a swelling and reddening of the anogenital region around the time of ovulation. At some point during human evolution, these changes in size and color along the menstrual cycle have disappeared, and large quantities of ‘permanent’ adipose tissue on the behind emerged[21, 22]. Possibly, this became more adaptive when our species started to walk upright, or to hide oestrus as to be attractive for males throughout the menstrual cycle and foster pair bond formation and shared caring for offspring. To date, it is not known how behinds as compared to faces are recognized in humans and their closest relatives, but this knowledge can enhance our understanding of the evolution of face processing, as we will argue below.
Face recognition plays an incredibly important role in the survival of animals living in social groups, including humans and chimpanzees. The changeable properties of faces like expression and gaze, display emotions and intentions and are used by observers to predict behavior. The more or less invariant properties of faces are used for identification and display physical characteristics, including sex, age and attractiveness.
* * * * *
Note: Configural recognition means that something is recognized as a whole (as a gestalt), rather than recognizing parts and assembling them into a whole. David Hays and I discuss this in Metaphor, Recognition, and Neural Process (1987). The Wikipedia entry on the Thatcher effect is about configural processing of faces.
Monday, June 18, 2018
Previous research has shown the hunter-gatherer Jahai are much better at naming odors than Westerners. They even have a more elaborate lexicon for it. New research by language scientist Asifa Majid of Radboud University shows that despite these linguistic differences, the Jahai and Dutch find the same odors pleasant and unpleasant.

Scholars have for centuries pointed out that smell is impossible to put into words. Dutch, like English, seems to support this view. Perhaps the only really clear example of a smell word in Dutch is "muf." The Jahai, a group of hunter-gatherers living in the Malay Peninsula, appear to be special in that they have developed an exquisite lexicon of words for smell, like other hunter-gatherers. Earlier work of Majid and colleagues already showed that hunter-gatherers seem to be especially good at talking about smell.

In a new study, the researchers tested 30 Jahai speakers and 30 Dutch speakers and asked them to name odors. At the same time they also videoed their faces so they could measure their facial expressions to the different odors after the experiment. The researchers replicated the finding that Jahai speakers use special odor words to talk about smells (e.g., cŋεs used to refer to stinging sorts of smells associated with petrol, smoke, and various insects and plants, plʔeŋ used for bloody, fishy, meaty sorts of smells), while Dutch speakers referred to concrete sources (e.g., 'if you ride along or stand behind a garbage truck, but not right on top of it').
Asifa Majid et al. Olfactory language and abstraction across cultures, Philosophical Transactions of the Royal Society B: Biological Sciences (2018). DOI: 10.1098/rstb.2017.0139
Sunday, June 17, 2018
Creative thinking – Did you ever wonder where 'brainstorming' and 'thinking beyond the box' came from?
Bregje van Eekelen, Discipline and Creativity, Institute for Advanced Study, 2018.
On April 6, 1960, Institute for Advanced Study Director Robert Oppenheimer received a letter from psychologist John E. Drevdahl, requesting his support in setting up a study among IAS Members to assess the factors that made them creative. Thus far, Miami University-based Drevdahl wrote, most studies were “based upon Air Force captains and industrial chemists,” noting understatedly that “I do not feel that [this]… resulted in the identification of those personality factors which are most characteristic of a truly creative and productive researcher.” While it is easy to relate to Drevdahl’s intuition that the military and industry were not the most suitable places to capture creative thinking, it was in those very places that creativity theories and techniques were flourishing in the United States at the time.

My research project on the social history of creativity shows that in the decade preceding the correspondence, processes to garner new ideas and techniques to think “beyond” existing bodies of knowledge became an object of professional interest in a contact zone of industry, the military, and academia. Various elements of the military were early sites for the introduction of creative ideation techniques. Imagine for instance a psychologist (Abraham Maslow no less) imploring military officers in 1957 to get in touch with their unconscious: “out of this unconscious, out of this deeper self, out of this portion of ourselves of which we generally are afraid and therefore try to keep under control, out of this comes the ability to play—to enjoy, to fantasy, to laugh, to loaf, to be spontaneous.” By 1964, at least 50,000 Air Force members had taken creative problem-solving courses. U.S. Steel, Reynolds Metals, Ethyl Corp, GE Motors, New York Telephone Company, and Boeing Airplane were some of the earliest industrial places where free-wheeling buzz sessions, brainstorms, and group thinks emerged.

The scientific study of creativity, as carried out by Drevdahl and numerous others at the time, can be regarded as a legitimating element in this professionalization process. The field of creativity studies drew on a motley set of practitioners from military and industrial settings, engineers, philosophers, anthropologists, and psychologists. Many of their research endeavors were generously supported by military funding. The Cold War provided a generative backdrop for much of the interest in creative ideation, as it highlighted numerous pressing situations that necessitated a move beyond existing knowledge. [...] As befitted the Cold War atmosphere, Drevdahl’s creativity study was also framed as a matter of national security. “[T]he survival of this nation, and perhaps, even of Western civilization,” he argued, depended on future creators. His thesis was that the most creative people were “of only moderately superior intelligence” (which does beg the question why he was keen to study IAS Members). Rather than intelligence, he hypothesized, “personality” might be the deciding factor in creativity, and personality was amenable to change, in that it was “produced by a person’s environment.” If his hypothesis that creativity was a matter of nurture rather than nature was correct, the United States government could step in by fostering an educational and institutional ecosystem that would “create more creative people.”
And so on.
Saturday, June 16, 2018
Friday, June 15, 2018
Marjorie Ingall reviews Blue Note Records: Beyond the Notes, a documentary about Blue Note Records, a record label that was extraordinarily important in the jazz world of the 1940s, '50s, and '60s. From the review:
The film is full of photos (Wolff was a passionate photographer), musical snippets, and footage of black jazz artists from the 1940s to the ’60s doing their thing. Blue Note’s most important behind-the-scenes hire was Van Gelder, another Jew, who was associated with it for decades; for almost seven years in the 1950s, the label’s albums were recorded in Van Gelder’s parents’ living room. Van Gelder, Donaldson, Hancock, Shorter, and jazz historian Michael Cuscuna—a consultant for Blue Note since 1984—talk about how much artists loved Lion and Wolff, how they never took advantage of the musicians who recorded for them, how they were directed by a pure love of the music. Which is probably true! But anyone who pays attention to contemporary music should be clued in to the oft-contentious relationship between African-Americans and Jews in the music business. Were Lion and Wolff extraordinary? How do they fit into the narrative of African-American art forms being capitalized on, popularized, and monetized by Jewish composers from Berlin to Jolson to Gershwin to Bernstein? Black artists have spoken of feeling exploited by white management; Jews have pointed to anti-Semitism in hip hop. Jazz in particular feels like a complex petri dish of cultural anxiety; hip hop has seemingly taken on much of the urgency jazz once had, and jazz audiences today feel heavy on wannabe-down white dudes in fedoras.... As its fans age, does an art form get less relevant?

These are big questions. But this movie doesn’t go there. It’s purely a celebration of one label, which may be sufficient for informed jazz fans and lovers of classic jazz, but isn’t enough for viewers who seek to understand jazz’s place in the world now. Young and young-ish Blue Note artists like drummer Kendrick Scott, pianist and educator Robert Glasper, and bassist Derrick Hodge talk eloquently about why jazz mattered back in the day.
The film shows footage of civil-rights protests and the musicians reflect on how the music reflected the social upheaval of the era. “Never at any point do I hear the music and hear them being defeated,” Hodge reflects. “Somehow, regardless of what they were fighting with, they’re going down in history, creating something … in a way that I felt freedom, in a way that brought me joy, in a way that made me want to write music that gave people hope.”
The film doesn’t effectively convey the fury and grief of the civil-rights movement. It’s not until hip-hop producer Terrace Martin shows up that we feel the immediacy and high stakes that jazz must have conveyed in the 1960s. “When I was a kid, the ghettos wasn’t used to seeing motherfuckers with instruments no more,” he says intently. “Because at that point they’d killed all the music programs in the schools.”
With a bonus about why Melissa McCarthy's Sean Spicer impersonation is superior to Alec Baldwin's Trump.
From Conversations with Tyler, Malcolm Gladwell Wants to Make the World Safe for Mediocrity:
GLADWELL: Well, yeah, there is something — well, I hesitate to say under-theorized, but there is something under-theorized about the differences between West Indian and American black culture, the psychological difference between what it means to come from those two places. I think only when you look very closely at that difference do you understand the heavy weight that particular American heritage places on African-Americans. What’s funny about West Indians is, they can always spot another West Indian. And at a certain point you wonder, “How do they always know?” It’s because after a while you get good at spotting the absence of that weight.

And it explains as well the well-known phenomenon of how disproportionately successful West Indians are when they come to the United States because they seem to be better equipped to deal with the particular pathologies attached to race in this country — my mother being a very good example. But of course there are a million examples.

I was just reading for one of my podcasts; I’ve been reading all these oral history transcripts from the civil rights movement. I was reading one today and I’m halfway through. And I had that completely unbidden thing, “Oh, this guy’s a West Indian.” He was an African-American attorney and a civil rights lawyer in Virginia in the ’60s. I got a 30-page transcript. I got to page 15, I’m like, “He’s West Indian.” And then, literally page 16, “My father came from Trinidad and Tobago with my mother and me.”

COWEN: [laughs]

GLADWELL: There is something very, very real there that’s not, I feel, fully appreciated.

COWEN: Another difference that struck me — tell me what you think of this — is that the notion of freedom for much of the Caribbean, it’s in some way more celebratory, and it’s more rooted in history, and it may be because these are mostly majority black societies. History is in a sense controlled; it’s much more commemorative. Does that make sense to you? It’s not a struggle to control the narration of history at a national level.

GLADWELL: Yes. You’re in charge of the narrative —

COWEN: Yes.

GLADWELL: . . . which is huge. I thought of this because I wanted to do — sorry, my podcast is on my mind — I wanted to do and I haven’t managed to figure out how to do it, but there’s a Jamaican poet called Louise Bennett. If you are Jamaican, you know exactly who this person is. She’s probably the most important colloquial poet. I think that’s the wrong word. Popular poet. And she wrote poetry in dialect. So for a generation of Jamaicans, she was an assertion of Jamaican identity and culture. My mother was a scholarship student at a predominantly white boarding school in Jamaica. She and the other black students of the school, as an act of protest, read Louise Bennett poetry at the school function when she was 12 years old.

If you read Louise Bennett’s poetry, much of it is about race. It’s about race where the Jamaican, the black Jamaican often has the upper hand. The black Jamaican is always telling some sly joke at the expense of the white minority. So it’s poetry that doesn’t make the same kind of sense in a society where you’re a relatively powerless minority. It’s the kind of thing that makes sense if you’re not in control of major institutions and such, but you are 95 percent of the population and you feel like you’re going to win pretty soon.

My mother used to read this poem to me as a child where Louise Bennett is . . . the poem is all about sitting in a beauty parlor, getting her hair straightened, sitting next to a white woman who’s getting her hair curled.

[laughter]

GLADWELL: And the joke is that the white woman’s paying a lot more to get her hair curled than Louise Bennett is to get her hair straightened. That’s the point. It’s all this subtle one-upmanship. But that’s very Jamaican.
GLADWELL: Well, I don’t like the Alec Baldwin Donald Trump, I don’t think, actually, if you compare it to the Sean Spicer . . .

[laughter]

GLADWELL: It’s not as good, and it’s not as good because the truly effective satirical impersonation is one that finds something essential about the character and magnifies it, something buried that you wouldn’t ordinarily have seen or have glimpsed in that person.

With the Spicer impersonation, why that’s so brilliant is, it draws out his anger. He’s angry at being put in this impossible position. That is the essence of that character. So how does a person respond to this, it’s almost an absurd position he’s in. And he has this kind of — it’s not sublimated — it’s there, this rage. In every one of his utterances is, “I can’t fucking believe that I am in this . . .”

[laughter]

GLADWELL: And so that Saturday Night Live impersonation gets beautifully at that thing, it satirizes that. I’ve forgotten the name of the woman who does it.

AUDIENCE MEMBER: Melissa McCarthy.

GLADWELL: Yes, when Melissa McCarthy, when she picks up the podium . . .

[laughter]

GLADWELL: That’s an absurd illustration of that fundamental point. But the Alec Baldwin Trump doesn’t get at something essential about Trump. It simply takes his mannerisms and exaggerates them slightly. But he hasn’t mined Trump.
Thursday, June 14, 2018
We need horror physics— German Sierra (@german_sierra) June 14, 2018
“How the belief in beauty has triggered a crisis in physics”
Every sense has its own “lexical field,” a vast palette of dedicated descriptive words for colors, sounds, tastes, and textures. But smell? In English, there are only three dedicated smell words—stinky, fragrant, and musty—and the first two are more about the smeller's subjective experience than about the smelly thing itself.
All of our other scent descriptors are really descriptions of sources: We say that things smell like cinnamon, or roses, or teen spirit, or napalm in the morning. The other senses don't need these linguistic workarounds. We don't need to say that a banana “looks like lemon;” we can just say that it's yellow. Experts who work in perfume or wine-tasting industries may use more metaphorical terms like decadent or unctuous, but good luck explaining them to a non-expert who's not familiar with the jargon.
In contrast, "the Jahai of Malaysia and the Maniq of Thailand use between 12 and 15 dedicated smell words":
“These terms are really very salient to them,” she says. “They turn up all the time. Young children know them. They're basic vocabulary. They're not used for taste, or general ideas of edibility. They're really dedicated to smell.”

H/t Dan Everett.
For example, ltpit describes the smell of a binturong or bearcat—a two-meter-long animal that looks like a shaggy, black-furred otter, and that famously smells of popcorn. But ltpit doesn't mean popcorn—it's not a source-based term. The same word is also used for soap, flowers, and the intense-smelling durian fruit, referring to some fragrant quality that Western noses can’t parse.
Another word is used for the smell of petrol, smoke, bat droppings, some species of millipede, the root of wild ginger, the wood of wild mango, and more. One seems specific to roasted foods. And one refers to things like squirrel blood, rodents, crushed head lice, and other “bloody smells that attract tigers.”
Wednesday, June 13, 2018
Pika S, Wilkinson R, Kendrick KH, Vernes SC. 2018 Taking turns: bridging the gap between human and animal communication. Proc. R. Soc. B 285: 20180598. http://dx.doi.org/10.1098/rspb.2018.0598
Language, humans’ most distinctive trait, still remains a ‘mystery’ for evolutionary theory. It is underpinned by a universal infrastructure—cooperative turn-taking—which has been suggested as an ancient mechanism bridging the existing gap between the articulate human species and their inarticulate primate cousins. However, we know remarkably little about turn-taking systems of non-human animals, and methodological confounds have often prevented meaningful cross-species comparisons. Thus, the extent to which cooperative turn-taking is uniquely human or represents a homologous and/or analogous trait is currently unknown. The present paper draws attention to this promising research avenue by providing an overview of the state of the art of turn-taking in four animal taxa—birds, mammals, insects and anurans. It concludes with a new comparative framework to spur more research into this research domain and to test which elements of the human turn-taking system are shared across species and taxa.
9. The comparative turn-taking framework
The new framework enabling comparative, systematic, quantitative assessments of turn-taking abilities centres on four key elements characterizing human social action during conversation:

(A) Flexibility of turn-taking organization
(B) Who is taking the next turn?
(C) When do response turns occur?
(D) What should the next turn do?

The first element—flexibility of turn-taking organization (A)—refers to the phenomena of varying size and ordering of turns and intentionality involved in human turn-taking sequences. The element mirrors the ability to voluntarily change and adjust signals/actions and thus the degree of underlying cognitive flexibility. It can be operationalized by quantifying the number, frequency and degree of repetition of signals and actions produced in turn-taking events, their combination (e.g. A-B-A; A-B-C), distribution of roles between participants (e.g. role reversal), and intentionality involved (e.g. goal persistence, sensitivity to the social context) [34,112,113].

The second element—who is taking the next turn (B)—concerns who can or should produce the next signal and includes techniques for allocating turns to individuals or parties. Parameters should involve (i) body orientation towards recipient(s), (ii) gaze direction of signaller, (iii) response waiting, and (iv) whether recipient(s) can perceive the signal (e.g. being in the visual or auditory field).

The third element—when do response turns occur (C)—addresses the time window or temporal relationship between an initiating turn and the response turn [10,24]. Since the normative timing of signal exchanges may differ across species, modalities, and transmission medium, a first mandatory step should be to establish typical time windows for a given species (see the original article for ideas to operationalize this element).

The fourth element—what should the next turn do? (D)—concerns one of the most fundamental structures in the organization of human conversation: adjacency pairs. An adjacency pair can be recursively reproduced and expanded in conversation and—in its minimal, unexpanded form—is composed of two turns, by different participants, that are adjacently placed, and are relatively ordered into first pair parts (actions that initiate some exchange, e.g. requests) and second pair parts (responsive actions, e.g. grants). This element can be operationalized by testing whether subsequent turns qualify as adjacency pairs involving predictable signal-response sequences (e.g. a request gesture is typically responded to with a granting signal; a call is typically responded to with the same call type, e.g. in common marmosets) [74,116].
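The framework's elements lend themselves to simple quantitative measures once turns are coded from recordings. As a minimal sketch (in Python, using entirely hypothetical data and signal labels, not anything from the paper), elements C (response timing) and D (adjacency pairs) might be operationalized like this:

```python
from dataclasses import dataclass

@dataclass
class Turn:
    signaller: str   # who produced the signal (element B)
    onset: float     # signal start time, in seconds
    offset: float    # signal end time, in seconds
    signal: str      # coded signal type, e.g. "request", "grant"

def response_gaps(turns):
    """Element C: time gaps between the end of one turn and the start
    of the next turn produced by a *different* individual."""
    gaps = []
    for prev, nxt in zip(turns, turns[1:]):
        if nxt.signaller != prev.signaller:
            gaps.append(nxt.onset - prev.offset)
    return gaps

def adjacency_pairs(turns, pairs={("request", "grant")}):
    """Element D: count adjacent turns by different individuals whose
    signal types form an expected first-part/second-part pairing."""
    count = 0
    for prev, nxt in zip(turns, turns[1:]):
        if nxt.signaller != prev.signaller and (prev.signal, nxt.signal) in pairs:
            count += 1
    return count

# A hypothetical coded exchange between two individuals, A and B
bout = [
    Turn("A", 0.0, 0.8, "request"),
    Turn("B", 1.1, 1.9, "grant"),
    Turn("A", 2.4, 3.0, "request"),
]
print(response_gaps(bout))    # gaps between turns of different signallers
print(adjacency_pairs(bout))  # number of request-then-grant pairs
```

In practice, one would first establish the species-typical response-time window (the paper's "first mandatory step") and then compare the measured gaps against it; the data structure and pairings above are placeholders for whatever coding scheme a given study uses.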
Mesoudi A, Thornton A. 2018 What is cumulative cultural evolution? Proc. R. Soc. B 285: 20180712. http://dx.doi.org/10.1098/rspb.2018.0712
Abstract: In recent years, the phenomenon of cumulative cultural evolution (CCE) has become the focus of major research interest in biology, psychology and anthropology. Some researchers argue that CCE is unique to humans and underlies our extraordinary evolutionary success as a species. Others claim to have found CCE in non-human species. Yet others remain sceptical that CCE is even important for explaining human behavioural diversity and complexity. These debates are hampered by multiple and often ambiguous definitions of CCE. Here, we review how researchers define, use and test CCE. We identify a core set of criteria for CCE which are both necessary and sufficient, and may be found in non-human species. We also identify a set of extended criteria that are observed in human CCE but not, to date, in other species. Different socio-cognitive mechanisms may underlie these different criteria. We reinterpret previous theoretical models and observational and experimental studies of both human and non-human species in light of these more fine-grained criteria. Finally, we discuss key issues surrounding information, fitness and cognition. We recommend that researchers are more explicit about what components of CCE they are testing and claiming to demonstrate.
Labels: cultural evolution
Tuesday, June 12, 2018
Wallmark Z, Deblieck C, and Iacoboni M
Neurophysiological Effects of Trait Empathy in Music Listening
Front. Behav. Neurosci., 06 April 2018 https://doi.org/10.3389/fnbeh.2018.00066
Abstract: The social cognitive basis of music processing has long been noted, and recent research has shown that trait empathy is linked to musical preferences and listening style. Does empathy modulate neural responses to musical sounds? We designed two functional magnetic resonance imaging (fMRI) experiments to address this question. In Experiment 1, subjects listened to brief isolated musical timbres while being scanned. In Experiment 2, subjects listened to excerpts of music in four conditions (familiar liked (FL)/disliked and unfamiliar liked (UL)/disliked). For both types of musical stimuli, emotional and cognitive forms of trait empathy modulated activity in sensorimotor and cognitive areas: in the first experiment, empathy was primarily correlated with activity in supplementary motor area (SMA), inferior frontal gyrus (IFG) and insula; in Experiment 2, empathy was mainly correlated with activity in prefrontal, temporo-parietal and reward areas. Taken together, these findings reveal the interactions between bottom-up and top-down mechanisms of empathy in response to musical sounds, in line with recent findings from other cognitive domains.
Introduction: Music is a portal into the interior lives of others. By disclosing the affective and cognitive states of actual or imagined human actors, musical engagement can function as a mediated form of social encounter, even when listening by ourselves. It is commonplace for us to imagine music as a kind of virtual “persona,” with intentions and emotions of its own (Watt and Ash, 1998; Levinson, 2006): we resonate with certain songs just as we would with other people, while we struggle to identify with other music. Arguing from an evolutionary perspective, it has been proposed that the efficacy of music as a technology of social affiliation and bonding may have contributed to its adaptive value (Cross, 2001; Huron, 2001). As Leman (2007) indicates: “Music can be conceived as a virtual social agent … listening to music can be seen as a socializing activity in the sense that it may train the listener’s self in social attuning and empathic relationships.” In short, musical experience and empathy are psychological neighbors.

The concept of empathy has generated sustained interest in recent years among researchers seeking to better account for the social and affective valence of musical experience (for recent reviews see Clarke et al., 2015; Miu and Vuoskoski, 2017); it is also a popular topic of research in social neuroscience (Decety and Ickes, 2009; Coplan and Goldie, 2011). However, the precise neurophysiological relationship between music processing and empathy remains unexplored. Individual differences in trait empathy modulate how we process social stimuli—does empathy modulate music processing as well?
If we consider music through a social-psychological lens (North and Hargreaves, 2008; Livingstone and Thompson, 2009; Aucouturier and Canonne, 2017), it is plausible that individuals with a greater dispositional capacity to empathize with others might also respond to music-as-social-stimulus differently on a neurophysiological level by preferentially engaging brain networks previously found to be involved in trait empathy (Preston and de Waal, 2002; Decety and Lamm, 2006; Singer and Lamm, 2009). In this article, we test this hypothesis in two experiments using functional magnetic resonance imaging (fMRI). In Experiment 1, we explore the neural correlates of trait empathy (as measured using the Interpersonal Reactivity Index) as participants listened to isolated instrument and vocal tones. In Experiment 2, excerpts of music in four conditions (familiar liked/disliked, unfamiliar liked/disliked) were used as stimuli, allowing us to examine correlations of neural activity with trait empathy in naturalistic listening contexts.
News Article Reporting the Research
Milla Bengtsson, People With Higher Empathy Process Music Differently In The Brain, Reliaware, June 12, 2018. From the article:
Individuals who deeply grasp the pain or happiness of others also differ from others in the way their brains process music, a new study by researchers at Southern Methodist University, Dallas and UCLA suggests.

The researchers found that compared to low empathy people, those with higher empathy process familiar music with greater involvement of the reward system of the brain, as well as in areas responsible for processing social information.
Monday, June 11, 2018
Philip Carl Salzman, Tribes and States, Inference, Vol. 2, No. 1.
A basic fact of pre-industrial life is that it is easier to take wealth away from others than to produce it oneself. This also applies collectively. It is easier to take wealth from other societies than to extract a sufficient amount from your own. For this reason, agrarian societies turn to expansion, sending military expeditions beyond their boundaries to strip wealth from other populations. Armies have to be paid with the spoils of conquest. Further expansion and conquest is thus necessary. It is a positive feedback cycle. Obvious examples are the Roman Empire and the Arab Muslim Empire.

A key element is the taking of slaves. The wealth gained is long-term labor that requires only the most minimal compensation. A society that can produce little needs to import labor that does not need to be compensated, or to be compensated only at very low levels. This transfers the wealth of its production to the elite and its apparatus. In ancient Athens and Rome, slaves counted for more than half the population. Indian civilization solved the production problem slightly differently, with uncompensated labor performed by the so-called untouchables.

Agrarian states were thus hierarchical, centralized, and authoritarian, and the means of coercion were limited to the elite and its army as much as possible. But while the reach of the elite was strong, its scope was narrow. They wanted only two things from their subjects: crops and manpower. They controlled little else, and cared about little else. The welfare of their subjects was of no interest, except that they must be protected from predation by other states or tribes. And, to be sustainable, their own predation of their subjects had to be limited.

These pre-industrial, agrarian states were not large stable blocks of territory with effective state control and sharp boundaries. They were centers of power claiming control and authority over surrounding regions and populations. Over time, these states could vary in economic, political, and military power. Partly in response to the strength of their leadership, they waxed and waned, increasing their effective reach or seeing their control contract.

On the margins of their effective power, these states might make alliances with quasi-independent or independent populations, in most cases tribes. The priority of tribes would have been to remain independent and predatory. Failing that, they would have striven to remain independent, perhaps entering into some lucrative alliance with the state. In the case of an expanding state, the tribes in the path of that expansion would retreat, something fairly easy for pastoral nomads with mobile housing and capital. When a state was weak and began to contract, the tribes would reclaim their independence and return to predation.

This picture of states is accurate up to the eighteenth century, even in Britain. Until then, Britain and the states of Western Europe were ruled by autocrats or absolute monarchs, and their polities experienced constant attempted coups, civil wars, and invasions. It was only in the eighteenth century, not coincidentally the period in Western Europe of the modern agricultural and industrial revolutions, that the state changed. Its structure moved from top-down rule toward more general participation in decision making, and from tyrants toward governments based on law.
Sunday, June 10, 2018
Cultures without writing are referred to as ‘non-literate’, but their identity should not be associated with what they don’t do, but rather with what they do from necessity when there is no writing to record their knowledge. Cultures without writing employ the most intriguing range of memory technologies often linked under the academic term ‘primary orality’, including song, dance, rhyme and rhythm, and story and mythology. Physical memory devices, though, are less often included in this list. The most universal of these is the landscape itself.
Australian Aboriginal memory palaces are spread across the land, structured by sung pathways referred to as songlines. The songlines of the Yanyuwa people from Carpentaria in Australia’s far north have been recorded over 800 kilometres. A songline is a sequence of locations that might, for example, run from the rocks that provide the best materials for tools to a significant tree or a waterhole. They are far more than a navigation aid. At each location, a song or story, dance or ceremony is performed that will always be associated with that particular location, physically and in memory. A songline, then, provides a table of contents to the entire knowledge system, one that can be traversed in memory as well as physically.
Enmeshed with the vitalised landscape, some indigenous cultures also use the skyscape as a memory device; the stories of the characters associated with the stars, planets and dark spaces recall invaluable practical knowledge such as seasonal variations, navigation, timekeeping and much of the ethical framework for their culture. The stories associated with the location in the sky or across the landscape provide a grounded structure to add ever more complexity with levels of initiation. Typically, only a fully initiated elder would know and understand the entire knowledge system of the community. By keeping critical information sacred and restricted, the so-called ‘Chinese whispers effect’ could be avoided, protecting information from corruption.
Rock art and decorated posts are also familiar aids to indigenous memory, but far less known is the range of portable memory devices. Incised stones and boards, collections of objects in bags, bark paintings, birchbark scrolls, decorations on skins and the knotted cords of the Inca khipu have all been used to aid the recall of memorised information. The food-carrying dish used by Australian Aboriginal cultures, the coolamon, can be incised on the back, providing a sophisticated mnemonic device without adding anything more to the load to be carried when moving around their landscape. Similarly, the tjuringa, a stone or wooden object up to a metre long decorated with abstract motifs, is a highly restricted device for Aboriginal men. As the owner of the coolamon or the elder with his tjuringa touched each marking, he or she would recall the appropriate story or sing the related song.
And so on.
Friday, June 8, 2018
Very sad. He was to me the greatest anthropologist. https://t.co/MlgDzWpjlp— Daniel Everett (@amazonrambler) June 8, 2018
When I got up this morning I thought I’d be writing a post about Joe Rogan. And in that post I figured I’d mention his conversation with Anthony Bourdain. Instead I woke to find that Bourdain is dead, an apparent suicide at 61.
I don’t know what to say. You never know, do you?
Bourdain was one of the good ones. Muhammad Ali took boxing beyond boxing into politics. Anthony took food TV beyond food. He took food into adventure and, above all, into culture and society. Food became a way to explore the variety of human life, to explore the multiplicity and richness of human nature.
Food as philosophy?
* * * * *
Thinking about it, for some odd reason I sort of thought of Bourdain as my own secret discovery. He’s mine, mine, mine! And so it’s just a little – but only that, just a little – surprising to see the coverage of his death all over the place, including the top of the ‘front page’ of the digital edition of The New York Times (did he make the front page of the print edition?). He meant a lot to a lot of people.
Of course, it’s absurd that I should have thought of Anthony Bourdain as my own personal discovery. For one thing, I didn’t know about him until last year, which is rather late in the game. Once I’d discovered him I watched a bunch of his shows on Netflix (mostly the early A Cook's Tour and the more recent Parts Unknown). And I watched scads of interview clips on YouTube. I knew perfectly well that lots of people had discovered him, valued his work, and were touched by it.
It’s in spite of that knowledge that I thought of him as mine, mine, mine! I wonder if others felt that way as well?
* * * * *
Thursday, June 7, 2018
Wednesday, June 6, 2018
A couple of years ago, 2014 I believe, a troop of Girl Scouts visited the building where I live and sang Christmas songs in the lobby–I’m told they’d done this before, but 2014 was my first year in the building. When they were done, one of my neighbors urged me to get my trumpet and return the gift. I did it, and the girls appreciated it.
The next year they came back. This time I was ready. I had my trumpet with me. When they were done singing, I got it out and played some songs for them. After I played, oh, three or four, I decided to play “My Favorite Things”–which John Coltrane had made into a jazz standard.
As I made that decision–I didn’t have a set list, but simply decided what to play song by song–I had some misgivings. Why? “My Favorite Things” is a little more complicated than most Christmas carols and I wasn’t quite sure I’d remember it correctly. Still, I wasn’t very worried. I’m a jazz musician. If things got sketchy I could make something up and things would work out.
As things worked out, the Girl Scouts had a surprise for me. It wasn’t anything I’d planned or that they’d planned. It just happened.
As soon as I started playing they started singing along. Cool! But also, Oh oh! Now the pressure was on, just a bit. Because if I wandered away from the song-as-written, I’d mess them up. And I didn’t want to do that.
Things worked out fine.
Purzycki, B. G., Pisor, A., Apicella, C., Atkinson, Q., Cohen, E., Henrich, J., McElreath, R., et al. (2018). The Cognitive and Cultural Foundations of Moral Behavior. Evolution and Human Behavior, doi: 10.1016/j.evolhumbehav.2018.04.004.
Abstract: Does moral culture contribute to the evolution of cooperation? Here, we examine individuals' and communities' models of what it means to be good and bad and how they correspond to corollary behavior across a variety of socioecological contexts. Our sample includes over 600 people from eight different field sites that include foragers, horticulturalists, herders, and the fully market-reliant. We first examine the universals and particulars of explicit moral models. We then use these moral models to assess their role in the outcome of an economic experiment designed to detect systematic, dishonest rule-breaking favoritism. We show that individuals are slightly more inclined to play by the rules when their moral models include the task-relevant virtues of “honesty” and “dishonesty.” We also find that religious beliefs are better predictors of honest play than these virtues. The predictive power of these values' and beliefs' local prevalence, however, remains inconclusive. In summary, we find that religious beliefs and moral models may help promote honest behavior that may widen the breadth of human cooperation.
* * * * *
Tim Maudlin has a double book review in the Boston Review, "The Defeat of Reason". One book is about quantum mechanics and the other is about Thomas Kuhn. I found the quantum mechanics section more interesting. Here are a few paragraphs:
In 1925 Werner Heisenberg had invented matrix mechanics. Heisenberg’s mathematical formalism got the predictions that Bohr had been seeking. But the central mathematical objects used in his theory were matrices, rectangular arrays of numbers. The predictions came out with wonderful accuracy, but that still left the old puzzle in place: how does the electron get from one orbit to another? You can stare at a matrix from morning to night, but you will not get a clue.
Bohr took an unexpected approach to this question: instead of asking if the theory was too young to be fully understood, he declared that the theory was complete; you cannot visualize what the electron is doing because the microworld of the electron is not, in principle, visualizable (anschaulich). It is unvisualizable (unanschaulich). In other words, the fault lay not in the theory, it lay in us. Bohr took to calling any visualizable object classical. Quantum theory had passed beyond the bounds of classical physics: there is no further classical story to tell. This became a central tenet of the Copenhagen interpretation of quantum theory.
Imagine Bohr’s motivation to adopt this extreme conclusion. For over a decade, he had been seeking exact, visualizable electron trajectories and failed. He concluded that his failure was rooted in the impossibility of the task.
But in 1926 Erwin Schrödinger produced a mathematically different theory, wave mechanics. Schrödinger’s mathematics was essentially just the classical mathematics of waves. The atomic system was not designated by a matrix, it was described by a wavefunction. And waves may not be particles, but they are certainly visualizable objects from everyday life...
So the situation in 1926 was rather confused. Matrix mechanics and wave mechanics were, in some sense, thought to be the same theory, differently expressed. But if you use the mathematics to derive a certain matrix yet have no notion of how the physical situation associated with the matrix would appear, how do you get a prediction about what you will observe? And wave mechanics is not much better off. Waves are certainly visualizable, but the world we live in, the world of laboratory experiments, does not present itself as made of waves. It presents itself, if anything, as made of particles. How do we get from waves to recognizable everyday stuff?
This, in a nutshell, is the central conundrum of quantum mechanics: how does the mathematical formalism used to represent a quantum system make contact with the world as given in experience? This is commonly called the measurement problem, although the name is misleading. It might better be called the where-in-the-theory-is-the-world-we-live-in problem.