Tuesday, August 14, 2018
Dating in Seattle: write a lot of stuff that's kind of negative. https://t.co/QtWzOvkdMq pic.twitter.com/5iQvs74wSQ— Simon DeDeo (@SimonDeDeo) August 14, 2018
The article's abstract:
Elizabeth E. Bruch and M. E. J. Newman, Aspirational pursuit of mates in online dating markets, Science Advances 08 Aug 2018: Vol. 4, no. 8, eaap9815

Romantic courtship is often described as taking place in a dating market where men and women compete for mates, but the detailed structure and dynamics of dating markets have historically been difficult to quantify for lack of suitable data. In recent years, however, the advent and vigorous growth of the online dating industry has provided a rich new source of information on mate pursuit. We present an empirical analysis of heterosexual dating markets in four large U.S. cities using data from a popular, free online dating service. We show that competition for mates creates a pronounced hierarchy of desirability that correlates strongly with user demographics and is remarkably consistent across cities. We find that both men and women pursue partners who are on average about 25% more desirable than themselves by our measures and that they use different messaging strategies with partners of different desirability. We also find that the probability of receiving a response to an advance drops markedly with increasing difference in desirability between the pursuer and the pursued. Strategic behaviors can improve one’s chances of attracting a more desirable mate, although the effects are modest.
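The “hierarchy of desirability” in the paper is a network-centrality measure: being messaged by desirable people makes you desirable, which is the recursive logic of PageRank applied to the directed graph of first messages. Here’s a minimal sketch of that idea; the graph and names are toy illustrations, not the paper’s data or exact method:

```python
# Toy sketch of a desirability ranking in the spirit of Bruch & Newman:
# treat a first message from A to B as a vote that B is desirable, then
# rank users with PageRank on the directed message graph.

def pagerank(edges, damping=0.85, iters=100):
    """edges: list of (sender, receiver) pairs; returns {node: score}."""
    nodes = {n for e in edges for n in e}
    out = {n: [] for n in nodes}
    for s, r in edges:
        out[s].append(r)
    rank = {n: 1.0 / len(nodes) for n in nodes}
    for _ in range(iters):
        new = {n: (1 - damping) / len(nodes) for n in nodes}
        for s in nodes:
            if out[s]:
                share = damping * rank[s] / len(out[s])
                for r in out[s]:
                    new[r] += share
            else:  # dangling node: spread its rank evenly
                for n in nodes:
                    new[n] += damping * rank[s] / len(nodes)
        rank = new
    return rank

# "c" is messaged by everyone, so "c" ranks highest; "d", messaged only
# by the popular "c", inherits some of that status and outranks "a"/"b".
edges = [("a", "c"), ("b", "c"), ("d", "c"), ("c", "d")]
scores = pagerank(edges)
print(max(scores, key=scores.get))  # prints: c
```

The recursive character of the score is what makes it a plausible proxy for desirability: a message from a highly sought-after user counts for more than one from a user nobody contacts.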
I’ve been interested in the mind, computing, brains, and computers ever since my undergraduate years at Johns Hopkins back in the mid-1960s. To that end I’ve educated myself about computation, and computers as well, and psychology of various kinds, neuroscience, literature, music, and the arts. I’ve also looked into philosophical issues on these matters. I’m unimpressed.
I suppose that John Searle’s Chinese Room argument is as well known in this business as any one line of argument is. I’ve read it, in several versions, and arguments against it. I find the arguments on both sides uninteresting. I’ve read a bit of Dreyfus, some Dennett, and somewhere along the way some others as well. I’m unimpressed.
Do I think that the mind is a (digital) computer? No, I don’t. But I do believe that using computation as a conceptual vehicle for thinking about the mind has taught us a lot, and will continue to do so. And the same with brains and computers. But the philosophers aren’t in that game and their arguments don’t contribute to it, one way or any other.
Their arguments seem to be about whether or not the game is worth playing. But how could they know if they don’t know and don’t care about how the game is played? Let us imagine, for example, that one day machine translation will be good enough for legal documents. Who cares whether or not those computers have minds or whether they’re really thinking? Well, I suppose it might matter in issues of liability. If a computer should make a mistake in translation, as human translators sometimes do, then perhaps the computer would be held liable if it is deemed to have a mind, otherwise its owner would be held liable. But beyond that?
Just what have these philosophers been arguing about for the last five or six decades?
Perhaps they are tracing the outer edge, the boundary, of a certain era of thought, an épistémè, to use Foucault’s term. Is that épistémè the long 19th century, or mid-20th century? I don’t know.
Now, I suppose those arguing that, no, the mind/brain is nothing like a computer, are also arguing that any research along those lines, whether in artificial intelligence, or some variety of psychology or linguistics, is so misguided that it cannot possibly be of any value. And in that case ALL such research must be stopped immediately. But that’s a VERY hard line to take given that such research has already produced much that is of value, both in practical technology and theoretical understanding. Still, if THAT’s what they’re arguing, then I suppose those philosophers arguing the opposite line, that the mind is a computer etc., are trying to make room for research. Still, it seems like a waste.
And I don’t think that’s what’s going on. It’s something else.
But I’d like to think that whatever it is, it’s coming to an end. Those arguments are a waste of brain power.
Monday, August 13, 2018
From an article in the NYTimes Magazine about Carl Woese:
What made Woese the foremost challenger and modifier of Darwinian orthodoxy — as Einstein was to Newtonian orthodoxy — is that his work led to recognition that the tree’s cardinal premise is wrong. Branches do sometimes fuse. Limbs do sometimes converge. The scientific term for this phenomenon is horizontal gene transfer (H.G.T.). DNA itself can indeed move sideways, between limbs, across barriers, from one kind of creature into another.

Those were just two of three big surprises that flowed from the work and the influence of Woese — the existence of the archaea (that third kingdom of life) and the prevalence of H.G.T. (sideways heredity). The third big surprise is a revelation, or anyway a strong likelihood, about our own deepest ancestry. We ourselves — we humans — probably come from creatures that, as recently as 41 years ago, were not known to exist. How so? Because the latest news on archaea is that all animals, all plants, all fungi and all other complex creatures composed of cells bearing DNA within nuclei — that list includes us — may have descended from these odd, ancient microbes. Our limb, eukarya, seems to branch off the limb labeled archaea. The cells that compose our human bodies are now known to resemble, in telling ways, the cells of one group of archaea known as the Lokiarcheota, recently discovered in marine ooze, almost 11,000 feet deep between Norway and Greenland near an ocean-bottom hydrothermal vent. It’s a little like learning, with a jolt, that your great-great-great-grandfather came not from Lithuania but from Mars.

We are not precisely who we thought we were. We are composite creatures, and our ancestry seems to arise from a dark zone of the living world, a group of creatures about which science, until recent decades, was ignorant. Evolution is trickier, far more complicated, than we realized. The tree of life is more tangled. Genes don’t just move vertically.
They can also pass laterally across species boundaries, across wider gaps, even between different kingdoms of life, and some have come sideways into our own lineage — the primate lineage — from unsuspected, nonprimate sources. It’s the genetic equivalent of a blood transfusion or (to use a different metaphor preferred by some scientists) an infection that transforms identity. They called it “infective heredity.”
Aim to have the courage of the Bishop of Orlando telling Pope Paul VI that the Moon was part of his diocese pic.twitter.com/9Bx8pFbbq0— Ned Donovan (@Ned_Donovan) August 12, 2018
If the Bishop is correct, the Diocese of Orlando has an area of 15,000,000 square miles, which may explain the Pope’s slightly surprised face. pic.twitter.com/FgD4o5UmmV— Ned Donovan (@Ned_Donovan) August 12, 2018
Fareed Zakaria reviews Adam Tooze, CRASHED: How a Decade of Financial Crises Changed the World (2018):
Tooze calls it a problem in “Western capitalism” intentionally. It was not just an American problem. When it began, many saw it as such and dumped the blame on Washington. In September 2008, as Wall Street burned, the German finance minister Peer Steinbruck explained that the collapse was centered in the United States because of America’s “simplistic” and “dangerous” laissez-faire approach. Italy’s finance minister assured the world that its banking system was stable because “it did not speak English.”

In fact this was nonsense. One of the great strengths of Tooze’s book is to demonstrate the deeply intertwined nature of the European and American financial systems. In 2006, European banks generated a third of America’s riskiest privately issued mortgage-backed securities. By 2007, two-thirds of commercial paper issued was sponsored by a European financial entity. The enormous expansion of the global financial system had largely been a trans-Atlantic project, with European banks jumping in as eagerly and greedily to find new sources of profit as American banks. European regulators were as blind to the mounting problems as their American counterparts, which led to problems on a similar scale. “Between 2001 and 2006,” Tooze writes, “Greece, Finland, Sweden, Belgium, Denmark, the U.K., France, Ireland and Spain all experienced real estate booms more severe than those that energized the United States.”

But while the crisis may have been caused in both America and Europe, it was solved largely by Washington. Partly, this reflected the post-Cold War financial system, in which the dollar had become the hyperdominant global currency and, as a result, the Federal Reserve had truly become the world’s central bank. But Tooze also convincingly shows that the European Central Bank mismanaged things from the start. The Fed acted aggressively and also in highly ingenious ways, becoming a guarantor of last resort to the battered balance sheets of American but also European banks.
About half the liquidity support the Fed provided during the crisis went to European banks, Tooze observes.
Sunday, August 12, 2018
I've posted another working paper to Academia.edu. The title's above. Here's the URL:
Abstract, contents, and introduction below.
* * * * *
Abstract: The Joe Rogan Experience is one of the most popular podcasts on the web, reaching 10s of millions of viewers per month. Rogan discusses a wide variety of topics with a variety of guests, some of them friends who appear regularly: martial arts, comedy, nutrition and health, conspiracy theories, psychedelic drugs, and a variety of science topics. In this paper I discuss several specific podcasts, with: Steven Pinker (his latest book, Enlightenment Now), Michael Pollan (psychedelic drugs), Neil deGrasse Tyson (moon landing), Howard Bloom (music, altered states, space program), and Joey Diaz (Bruce Lee and martial arts). I also include an annotated transcription of the Joey Diaz conversation.
Introduction: Joe Rogan and Socrates in the Agora 2
Grappling at the edges of reality with Joe Rogan 5
The world according to jiujitsu 9
Two notes on psychedelic experience (Michael Pollan on Joe Rogan) 10
Joe Rogan talks with Howard Bloom, Zoooommm! 12
Joe Rogan and Joey Diaz call “Bruce Lee vs. Chuck Norris” – a rough transcript 16
On the conversational construction of cultural reality: Joe Rogan and Joey Diaz discuss Bruce Lee and Chuck Norris in Way of the Dragon 25
Introduction: Joe Rogan and Socrates in the Agora
The idea that Joe Rogan is a modern Socrates is absurd, no? Well...let’s think about it.
Sure, I know, Socrates is the father of Western philosophy, or maybe grandfather, or great uncle, who knows, but one of those ancients. Apparently he’d go into Athens’ public square, the agora, and hold conversations on philosophical matters. I have no idea what mix of strangers and regulars joined in, but there certainly were regulars. Plato was one of them and he modeled his philosophical discourses after Socrates’ mode of talk, thus producing the first substantial body of written intellectual discourse in Western secular thought.
I have no reason to think that Rogan has made or will make any original contributions to contemporary thought, nor does Rogan make any such claims for himself. As far as I can tell he thinks of himself as a guy who likes to have rambling conversations on various topics with various people. Some of these people are friends of his, or at least people he knows personally even if he doesn’t routinely hang out with them. He may have these people on the podcast time and again. Of these, some are comedians, others are martial artists, and others bow hunters, who knows? These conversations typically run two-and-a-half to three hours long and are just casual conversation, though every once in a while they’ll dig in a bit.
But Rogan also has conversations with guests having substantial intellectual expertise of one kind or another, for example: Steve Pinker (linguistics, psychology), Sean Carroll (physics, cosmology), Dennis McKenna (ethnopharmacology, psychedelic drugs), Debra Soh (neurosciences), Brett Weinstein (evolutionary biology), and Adam Frank (physics, astronomy). These conversations are also casual in manner, where Rogan plays the Everyman who is curious about these topics. And so he learns something from these experts, as do his viewers.
And he has lots of viewers, millions of them. Of course they don’t interact with him in the way Socrates’ interlocutors interacted with him – we simply don’t have any venue like the agora of ancient Greece. But they interact with one another in comments to the podcasts and in online forums like Reddit. And, yes, a lot of that interaction is just chitchat, but some of it is more substantive. There is learning going on, ideas exchanged, opinions validated.
Molly Worthen discussed Rogan and others in a recent column in The New York Times, “The Podcast Bros Want to Optimize Your Life”, noting:
Don’t dismiss the podcast bros merely as hucksters promoting self-help books and dubious mushroom coffee. In this secularized age of lonely seekers scrolling social media feeds, they have cultivated a spiritual community. They offer theologies and daily rituals of self-actualization, an appealing alternative to the rhetoric of victimhood and resentment that permeates both the right and the left. “They help the masses identify the hole in the soul,” Karli Smith, 38, a fan who lives in Tooele, Utah, told me. “I do feel the message is creating a community.”

All this continues a long American tradition of self-help and creative, market-minded spirituality. The 19th century brimmed with gurus ready to guide you to other dimensions and prophets of the path from rags to riches. The podcasters’ exhortations to cultivate character and learn from the habits of successful businessmen, scientists and soldiers (whom they invite for interviews that sometimes stretch longer than two hours) could come straight from the pages of Victorian self-improvement manuals.
A bit later:
This is the podcast bro ethos: Ditch your ideologically charged identity. Accept your evolutionary programming. Take responsibility for mastering it, and find a cosmic purpose. “I’m not saying it’s only personal responsibility that matters, but you have to start there,” Mr. Marcus told me.

But wait — how does cutting down carbs and tossing kettlebells set me up to serve the universe? Here is where the podcast bros get metaphysical. Many have a strong interest in spirituality, and see practices like Buddhist meditation or consuming hallucinogenic “plant medicine” as not just a way to improve daily performance, but a path to something deeper. Their metaphysical tastes range from Carl Jung’s psychology to ancient Stoic philosophy, which calls for self-control and transcendence of material wealth.
That’s a pretty interesting mix of materials. That kind of range, and his popularity, is what makes Joe Rogan such an interesting and, I believe, important figure.
That’s why I’ve spent a good many hours over the last three months watching Rogan’s podcasts. I’ve watched, I don’t know, a dozen or two dozen all the way through, and many fragments from others. For Rogan’s fans will cut his podcasts into fragments and post them online, as does Rogan himself (more likely Jamie Vernon, his technical aide-de-camp). And from that I’ve made half a dozen posts to my blog, New Savanna, which I’ve gathered into this working paper.
Because that’s what it is, a work in progress, not finished–and maybe never to be finished. Who knows?
Well over half of this paper is devoted to a single 10 minute conversation Rogan had with one of his regulars, comedian Joey Diaz. They’re discussing the fight between Bruce Lee and Chuck Norris at the end of Way of the Dragon, an important martial arts film. Both Rogan and Diaz are martial artists, so Bruce Lee is a hero to them, perhaps Chuck Norris as well, though he doesn’t have the iconic status that Lee does. I watched that segment relatively early in my dive into the Joe Rogan Experience and the conversation struck me as emanating from the center of Rogan’s worldview. So I included it in the article I wrote for 3 Quarks Daily in late May – “Grappling at the edges of reality with Joe Rogan”, where I also discussed conversations with Steven Pinker and Neil deGrasse Tyson. That piece is the first section of this working paper.
The Rogan/Diaz conversation was still with me after two more months of Joe Rogan, so I decided to transcribe it, not for any “deep” philosophical content, no, not that. But just to see how it went, because it’s that kind of conversation that’s made Rogan a star of the podcast world. In the next to last section of this document – “Joe Rogan and Joey Diaz call ‘Bruce Lee vs. Chuck Norris’ – a rough transcript” – I present the transcript without any comments other than a few introductory remarks. In the last section – “On the conversational construction of cultural reality: Joe Rogan and Joey Diaz discuss Bruce Lee and Chuck Norris in Way of the Dragon” – I add extensive interline commentary to the conversation. Just how do these guys build a sense of reality? That’s what I’m curious about.
The shortest piece in this collection, “The world according to jiujitsu”, consists of a transcription of an impromptu speech that Rogan gave when he was awarded a black belt by Eddie Bravo – also a friend and regular guest on the podcast. That speech, so it seems, expresses Rogan’s core values. The other two pieces are based on specific podcasts, one with Michael Pollan and the other with Howard Bloom (whom I know), and have transcripts of short segments from each. Why transcriptions? Because the back-and-forth is important.
It’s the shuttle and the loom.
The work of @SimonDeDeo has made me realize a seminar on the philosophy of textual difference, vector spaces, and information theory would be amazing. This is the new humanities.— Andrew Piper (@_akpiper) August 8, 2018
Here's a report I prepared in 1985, my last year on the faculty at The Rensselaer Polytechnic Institute (aka RPI). I was one of a handful of faculty given a small summer grant to design a new and interdisciplinary undergraduate course for the School of Humanities and Social Sciences. I imagined that this course would be team taught by faculty representing the three different modes of thought in the human sciences (English for les sciences de l'homme). These three modes of thought don't have standard names and I've never been able to come up with labels I found satisfactory. Still, they are:
1) discursive, hermeneutic, interpretive, such as literary criticism and most history before cliometrics,
2) social/behavioral science, characterized by statistical analysis of data (the corpus linguistics methods of so-called "distant reading" fall under this rubric), and
3) structural, linguistic, even computational, where computation is a model for human thought and activity.
The idea of the course was to expose undergraduate students to these three modes of thought as applied to some one topic. And, incidentally, it might do the same for the faculty teaching the course.
Now THAT would be a new humanities. It's probably not possible within the current institutional setting, but really, it has to happen some day. It's the only thing that makes intellectual sense.
Download the full report: Policy, Strategy, Tactics: Intellectual Integration in the Human Sciences, an Approach for a New Era

The human sciences encompass a wide variety of disciplines: literary studies, musicology, art history, anthropology (cultural and physical), psychology (perceptual, cognitive, evolutionary, Freudian, etc.), sociology, political science, economics, history, cultural geography, and so forth. In this paper I propose to organize courses and curricula so as to include: 1) material from three different methodological styles (interpretive, behavioral or social scientific, and structural/constructive: linguistics, cognitive science), 2) historical and structural/functional approaches, and 3) materials from diverse cultures. The overall scheme is exemplified by two versions of a course on Signs and Symbols, one organized around a Shakespeare play and the other organized around traditional disciplines.
#HistoryofPainting— History of Painting (@TheNewPainting) August 12, 2018
“Beauty awakens the soul to act.” Dante (Durante degli Alighieri)#TheNewPainting
Don't Be Afraid of #Art!
Christian Benjamin Olsen (3 May 1873, Odense - 11 February 1935, Copenhagen) was a Danish painter.#TheFreeExhibition
"After the Rain", before 1929 pic.twitter.com/Q1T1em8S9c
You would prefer maybe this version:
This, near as I can tell, is the "original", more or less as it came out of the camera:
What is it? Snow falling at night. There's a street light above and it appears that we're looking at the edge of a building.
Saturday, August 11, 2018
Galen Strawson reviews Michael Pollan's How To Change Your Mind: The new science of psychedelics in TLS.
What should we call the experience?
There’s a terminally weary group of words used to characterize psychedelic experience. Among them we find (in descending order of association with the supernatural) “holy”, “sacred”, “mystical”, “spiritual”; “transcendence”, “bliss”, “selflessness”, “oneness”. Some are so loaded, and directly question-begging (in the original sense of the term), that it seems best to introduce a new neutral term – “X” – for the purposes of this review. X is whatever it is that is most powerfully positive in psychedelic experience. It is what psychologists try to measure when they administer the “Mystical Experience Questionnaire”, devised in the 1960s. There’s a wide consensus that there is no significant experiential difference between pharmacologically induced X and X that arises as a result of meditative or other spiritual practices.
There is an extraordinary degree of agreement, on the part of those who have successful “trips” under suitably controlled conditions, that the fundamental principle of reality is love.
As the Beatles sang, "Say the word, the word is love". After this and that Strawson observes:
But love requires a lover and a loved (it is logically a two-place relation), and most of those who use the word in an attempt to convey their X experience seem to have something else – a kind of perfectly impersonal blessedness – in mind.

We shouldn’t, then, look for “authenticity” in X experience – if that is supposed to mean that there’s nothing (ultimately) bad in reality. We can leave room for primordial blessedness if it allows for unutterable tragedy. But we should probably look no further than the magnificence of the experience itself. Its significance consists in the fact that it exists.

We can go a little further. There seems to be a deeper psychological formation underneath the experience of love. The best name for it, perhaps, is Acceptance (awarded a capital “A” to match Huxley’s capital-L “Love”): profound, anxiety-dissolving acquiescence in how things are, acceptance of life, acceptance of death. Acceptance, when attained, involves experience of great joy – just as relief from intense pain is (some say) the greatest human pleasure. It is what Nietzsche is after when he speaks of amor fati, loving one’s fate. It’s precisely what he lacked when, in July 1885, he wrote to Franz Overbeck that “my life now consists in the wish that things might be other than I understand them to be, and that someone might make my ‘truths’ appear unbelievable to me”.

Capital-A Acceptance seems tightly linked with the dissolution of one’s sense of self, or at least the elimination of one’s sense of the importance of self, and neuroscientists have not been slow to speculate about this. Scans of the tripping brain show dramatic reduction in the activity in the so-called default mode network or DMN – known to some neuroscientists as “the me network”.
One may doubt all such specific neurological hypotheses, but those who believe that the DMN is a suspect theoretical construct can think simply of activity in, and interaction between, the medial prefrontal cortex, posterior cingulate cortex, inferior parietal lobule, lateral temporal cortex, dorsal medial prefrontal cortex and hippocampus.

Pollan reproduces two diagrams recently published by the Imperial College lab using various scanning technologies. They represent the activity and interconnectivity of a brain under the influence of psilocybin, and a brain after the administering of an “active placebo” (a placebo that causes a strong tingling sensation, so that one feels one may have been given the drug under test). They’re spectacularly different. The psilocybin brain is thick with areas of activity and lines of interconnection; the placebo or everyday brain is almost bare by comparison. One doesn’t have to accept any of the specific neurological explanations to concede that the diagrams point up the richness of psychedelic experience.

Some think that psychedelics simply reactivate earlier capacities. “Babies and children are basically tripping all the time”, in Alison Gopnik’s words. Growing up fits a powerful “reducing valve” onto the great consciousness engine of the brain, as philosophers like Henri Bergson and C. D. Broad once proposed, and as Wordsworth intimated – and St Paul (“now we see through a glass, darkly; then, face to face”). According to this theory, maturation renders the brain fit for purpose in a difficult world; it imposes a mental filter that admits, in Huxley’s words, only the “measly trickle of the kind of consciousness” we need in order to survive. Psychedelic drugs remove the valve or filter. They dissolve the standard self-system, interrupting what Hazlitt called the “long narrowing of the mind to our own particular feelings and interests”.
They return us, in Pollan’s words, to the wonder of “unencumbered first sight, or virginal noticing, to which the adult brain has closed itself. (It’s so inefficient!)”

In ordinary life, as Kant said, the “dear self is always turning up”. Psychedelics takes it offline. In X experience we lose what Iris Murdoch calls the “fat relentless ego”. We quit – again in Murdoch’s words – the “familiar rat-runs of selfish day-dream”. It seems, furthermore – and crucially – that a single dose can have lasting effects.
There's a bit more.
Kunert R, Willems RM, Casasanto D, Patel AD, Hagoort P (2015) Music and Language Syntax Interact in Broca’s Area: An fMRI Study. PLoS ONE 10(11): e0141069. https://doi.org/10.1371/journal.pone.0141069
Abstract: Instrumental music and language are both syntactic systems, employing complex, hierarchically-structured sequences built using implicit structural norms. This organization allows listeners to understand the role of individual words or tones in the context of an unfolding sentence or melody. Previous studies suggest that the brain mechanisms of syntactic processing may be partly shared between music and language. However, functional neuroimaging evidence for anatomical overlap of brain activity involved in linguistic and musical syntactic processing has been lacking. In the present study we used functional magnetic resonance imaging (fMRI) in conjunction with an interference paradigm based on sung sentences. We show that the processing demands of musical syntax (harmony) and language syntax interact in Broca’s area in the left inferior frontal gyrus (without leading to music and language main effects). A language main effect in Broca’s area only emerged in the complex music harmony condition, suggesting that (with our stimuli and tasks) a language effect only becomes visible under conditions of increased demands on shared neural resources. In contrast to previous studies, our design allows us to rule out that the observed neural interaction is due to: (1) general attention mechanisms, as a psychoacoustic auditory anomaly behaved unlike the harmonic manipulation, (2) error processing, as the language and the music stimuli contained no structural errors. The current results thus suggest that two different cognitive domains—music and language—might draw on the same high level syntactic integration resources in Broca’s area.
Uri Hasson, Giovanna Egidi, Marco Marelli, Roel M. Willems, Grounding the neurobiology of language in first principles: The necessity of non-language-centric explanations for language comprehension, Cognition, Volume 180, November 2018, Pages 135-157, DOI: https://doi.org/10.1016/j.cognition.2018.06.018
Abstract: Recent decades have ushered in tremendous progress in understanding the neural basis of language. Most of our current knowledge on language and the brain, however, is derived from lab-based experiments that are far removed from everyday language use, and that are inspired by questions originating in linguistic and psycholinguistic contexts. In this paper we argue that in order to make progress, the field needs to shift its focus to understanding the neurobiology of naturalistic language comprehension. We present here a new conceptual framework for understanding the neurobiological organization of language comprehension. This framework is non-language-centered in the computational/neurobiological constructs it identifies, and focuses strongly on context. Our core arguments address three general issues: (i) the difficulty in extending language-centric explanations to discourse; (ii) the necessity of taking context as a serious topic of study, modeling it formally and acknowledging the limitations on external validity when studying language comprehension outside context; and (iii) the tenuous status of the language network as an explanatory construct. We argue that adopting this framework means that neurobiological studies of language will be less focused on identifying correlations between brain activity patterns and mechanisms postulated by psycholinguistic theories. Instead, they will be less self-referential and increasingly more inclined towards integration of language with other cognitive systems, ultimately doing more justice to the neurobiological organization of language and how it supports language as it is used in everyday life.
#HistoryofPainting#Art is life!♡Art is love!♡Art is freedom!— History of Painting (@TheNewPainting) August 11, 2018
Egon Schiele (12 June 1890 – 31 October 1918) was an Austrian painter. A protégé of Gustav Klimt, Schiele was a major figurative painter of the early 20th century.
Study with flowers or Flower study, before 1900 pic.twitter.com/jinhAUJPwB
Friday, August 10, 2018
A new humanities for the 21st century, really? – Heart of Darkness across multiple media, Or: The times they are a changin’
From the other day, “the new humanities”:
An Open Letter to Dan Everett about Literary Criticism
The work of @SimonDeDeo has made me realize a seminar on the philosophy of textual difference, vector spaces, and information theory would be amazing. This is the new humanities.— Andrew Piper (@_akpiper) August 8, 2018
I replied with this:
A New Humanities implies not only new methods, but new objects of inquiry. What is the current status of, e.g. The Sopranos, The Wire, Mad Men, Deadwood, Breaking Bad, etc. as objects of humanistic inquiry?
And then, after a bit of discussion, we got around to this:
Actually, you can probably find a syllabus somewhere with everything except the Bourdain. This Cliffs Notes section gives you a sense of how often teachers are assigning both novel & film at least. https://t.co/UQ1i8jevI0— Ted Underwood (@Ted_Underwood) August 10, 2018
I picked Heart of Darkness, of course, because I’ve done quite a bit of work on it, Apocalypse Now as well. As for the range of associated texts–a major Hollywood film, a Japanese manga, and whatever-the-hell Bourdain's show is–that allows you to address culture in a more complex and interesting way than you could using just the High Culture literary text. And that's important for understanding how culture works, no?
I cite some of that work on both Heart of Darkness and Apocalypse Now in an open letter I wrote to Dan Everett, who was then Dean of Arts and Sciences at Bentley University (he’s now back to being just a regular faculty member):
An Open Letter to Dan Everett about Literary Criticism
In one section of the letter I talk of teaching Heart of Darkness at the undergraduate level, including those other texts. I’ve reproduced that section below. That’s followed by another discussion of the text, one more oriented toward research, perhaps a graduate level course, one concerned with patterns in the text, including a remarkable pattern that one finds by looking at how long the paragraphs are. You’ll have to read the open letter for that discussion.
Teaching Conrad’s Heart of Darkness
Joseph Conrad’s Heart of Darkness is one of the texts most frequently taught in undergraduate courses. Why is it so popular? Well, it’s relatively short, 40,000 words, which is a consideration, albeit a minor one. Surely it’s popular for its subject matter – roughly, European imperialism in Africa – and, secondarily, Conrad’s impressionist style. Still, why, why do those things matter?
Let’s look at a short statement by J. Hillis Miller, a senior and very respected literary critic. He is old enough to have gotten his degree at Harvard at a time when, in his view, few in the English Department there were much good at interpreting texts. Moreover, he is one of the first generation of deconstructive critics. Shortly after the turn of the millennium the Association of Departments of English honored him for his fifty years in the profession. Of his early days as a faculty member at Johns Hopkins, in the 1950s and 1960s, Miller tells us:
English literature was taken for granted as the primary repository of the ethos and the values of United States citizens, even though it was the literature of a foreign country we had defeated almost two hundred years earlier in a war of independence. That little oddness did not seem to occur to anyone. As the primary repository of our national values, English literature from Beowulf on was a good thing to teach. 
Which is to say, literature is taught as a vehicle for cultural indoctrination. Of course you know that; you don’t need Hillis Miller to tell you that. But I just wanted to get the idea explicitly on the record along with that little (emic) irony about English literature in the United States – Miller had earlier pointed out that, at the time, American literature was marginal in the academy, at least at Hopkins. And what I’m wondering is if the original impetus behind interpretive criticism wasn’t cultural anxiety: Just who are we and what are our values? Let’s set that aside for the moment, though I’ll return to it at the end of this section.
Just a bit more about Heart of Darkness, which is a relatively straightforward story. A pilot, Charles Marlow, needs a gig. He calls on an aunt who gets him an interview with a continental firm, which hires him to pilot a steamer up the Congo River to a trading station that has gone incommunicado. Marlow’s job is to make contact with the station agent, Kurtz, who is regarded as a real up-and-comer, and to recover the ivory that Kurtz has, presumably, been accumulating. Marlow is our narrator. Actually, he tells the story to an unnamed third party, who then tells it to us, but we can skip that detail. That third party presents the bulk of the story to us as Marlow’s own words. Marlow’s steamer is crewed by native Africans and, in addition to personnel from the trading company, there are pilgrims on board.
Marlow is presented as a brilliant and talented man who went to Africa to earn enough money to make him worthy of his Intended; we don’t learn this last detail until late in the story, nor are we ever told her name. We’re also led to believe that Kurtz has gone mad, setting himself up as a demi-god to the natives and taking a native mistress. As for those natives, it is clear that they have been very badly treated by the Europeans.
Whatever else is going on, Heart of Darkness is an indictment of European imperialism in Africa. And yet in 1975 Chinua Achebe, the Nigerian novelist, set off bombshells when he delivered a lecture, “An Image of Africa: Racism in Conrad's Heart of Darkness”. How could Heart of Darkness be racist, people objected, when the text obviously condemns imperialism? Easy, goes the rejoinder, for Conrad deprives Africans of agency, depicts them only as victims, and never has even one of them speak. Now, NOW, we’ve got something to think and talk about. Heart of Darkness may be over a century old, but the issues it embodies are very much alive in this, the 21st century.
This post hit the humanities blogosphere in late July like Little Boy and Fat Man combined:
I recant my 2013 view that there's no humanities crisis in the US. The last five years of degree data have been brutal, and all humanists need to worry about how to deal with the attendant changes to our disciplines.— Benjamin Schmidt (@benmschmidt) July 27, 2018
Post: https://t.co/VpLxPVUFrh pic.twitter.com/ypOYj3gu2Q
What of it?
Two weeks later I offered a pair of tweets:
So, in one story it's sci-biz-tech crushing the humanities & True Culture. Do we need a new narrative: THAT story belongs to an old cultural formation rooted in 19th century binaries. Things have changed & we need a New Story.

Shakespeare is now the New Homer, ancient as the hills. But who’s the New Shakespeare? Or is the question itself a product of that old cultural formation?
My point was that that first story has been around for a long time. I heard it in my undergraduate years in the 1960s, and it’s rooted at least in part in anti-science romanticism dating back to the early 19th century. But things have changed. In the 1960s and 70s the kool kids joined rock bands and created communes. In the 90s they were creating high-tech start-ups; nor should we forget that Steve Jobs visited an ashram in India before he founded Apple with Steve Wozniak.
So, what’s our new story, one crafted for a world where general media literacy is as important as print literacy, one where movies, video, and electronic games occupy much of the cultural space that had belonged to print before the 20th century?
That’s one thing. Here’s another.
When I was an undergraduate at Johns Hopkins I took a course in social theory taught by Arthur Stinchcombe in the Sociology Department. That is, this was a social sciences course, not a humanities course. It was one of those courses designed for upper-level undergraduates and beginning graduate students.
On the first day of class Stinchcombe asked us to do a bit of theorizing. He reported that a recent study had shown that college students were more likely to shift from non-humanities majors to humanities majors than vice versa (remember, this was back in the mid-1960s, the high point of humanities enrollment in the chart above). Why, he asked us, why?
Some of the graduate students offered up accounts that were clever. But I wasn’t buying it, nor was I saying anything. After a while I decided to speak up. I offered that high school students are going to pick a major they know something about. They wouldn’t really know that the humanities are a ‘thing’, but they understand biology and chemistry and engineering or business. So those are the kinds of majors they’re going to declare in their freshman year. But once in college they learn that, yes, the humanities exist and you might even be able to get a job. And so, when they get a chance to change majors, they change into the humanities.
That, it turned out, was what the study had found.
When students switch majors these days, what’s the switch? Not, presumably, to the humanities.
Thursday, August 9, 2018
Naomi Klein says the New York Times blew its massive story about the decade in which we frittered away our chance to halt climate change
Naomi Klein, Capitalism Killed Our Climate Momentum, Not “Human Nature”, The Intercept.
According to Rich, between the years of 1979 and 1989, the basic science of climate change was understood and accepted, the partisan divide over the issue had yet to cleave, the fossil fuel companies hadn’t started their misinformation campaign in earnest, and there was a great deal of global political momentum toward a bold and binding international emissions-reduction agreement. Writing of the key period at the end of the 1980s, Rich says, “The conditions for success could not have been more favorable.”

And yet we blew it — “we” being humans, who apparently are just too shortsighted to safeguard our future. Just in case we missed the point of who and what is to blame for the fact that we are now “losing earth,” Rich’s answer is presented in a full-page callout: “All the facts were known, and nothing stood in our way. Nothing, that is, except ourselves.”

Yep, you and me. Not, according to Rich, the fossil fuel companies who sat in on every major policy meeting described in the piece. (Imagine tobacco executives being repeatedly invited by the U.S. government to come up with policies to ban smoking. When those meetings failed to yield anything substantive, would we conclude that the reason is that humans just want to die? Might we perhaps determine instead that the political system is corrupt and busted?)

This misreading has been pointed out by many climate scientists and historians since the online version of the piece dropped on Wednesday. Others have remarked on the maddening invocations of “human nature” and the use of the royal “we” to describe a screamingly homogenous group of U.S. power players. Throughout Rich’s accounting, we hear nothing from those political leaders in the Global South who were demanding binding action in this key period and after, somehow able to care about future generations despite being human.
The voices of women, meanwhile, are almost as rare in Rich’s text as sightings of the endangered ivory-billed woodpecker — and when we ladies do appear, it is mainly as long-suffering wives of tragically heroic men.

All of these flaws have been well covered, so I won’t rehash them here. My focus is the central premise of the piece: that the end of the 1980s presented conditions that “could not have been more favorable” to bold climate action. On the contrary, one could scarcely imagine a more inopportune moment in human evolution for our species to come face to face with the hard truth that the conveniences of modern consumer capitalism were steadily eroding the habitability of the planet. Why? Because the late ’80s was the absolute zenith of the neoliberal crusade, a moment of peak ideological ascendency for the economic and social project that deliberately set out to vilify collective action in the name of liberating “free markets” in every aspect of life. Yet Rich makes no mention of this parallel upheaval in economic and political thought.
Labels: global warming
In the chapter “e) Charlotte” in Part Five of Kim Stanley Robinson’s New York 2140, there was a vote on whether or not to accept an offer to buy out the Met Life Tower. The vote was close: 1207 against, 1093 for. Charlotte was pissed at those who voted to accept the offer (331-332):
What were they thinking? Did they really imagine that money in any amount could replace what they had made here? It was as if nothing had been learned in the long years of struggle to make lower Manhattan a livable space, a city state with a different plan. Every ideal and value seemed to melt under a drenching of money, the universal solvent. Money money money. The fake fungibility of money, the pretense that you could buy meaning, buy life?

She stood up and Mariolino nodded at her. As chair it was okay for her to speak, to sum things up.

“Fuck money,” she said, surprising herself. “It isn’t all it’s cracked up to be. Because everything is not fungible to everything else. Many things can’t be bought. Money isn’t time, it isn’t security, it isn’t health. You can’t buy any of those things. You can’t buy community or a sense of home. So what can I say. I’m glad the vote went against this bid on our lives. I wish it had been much more lopsided than it was. We’ll go on from here, and I’ll be trying to convince everyone that what we’ve made here is more valuable than this monetary valuation which amounts to a hostile takeover bid of a situation that is already as good as it can get. It’s like offering to buy reality. That’s a rip at any price. So think about that, and talk to the people around you, and the board will meet for its usual scheduled meeting next Thursday. I trust this little incident won’t be on the agenda. See you then.”
I’m wondering if that doesn’t leave room for a counter narrative, one the KSR doesn’t tell, though he hints at it at the very end.
Just how is it that they made their community, their sense of home? Not simply by living together in the Met Life Tower. That’s something, but not enough. It’s only an opportunity. They grow their own food in the tower, at least some of it. That’s getting warmer. And the fact that they eat together in a large hall, that’s getting warmer as well. But is it enough?
In the very last chapter we’re deep below the surface in a nightclub (611):
Everyone in the room is now grooving to the tightest West African pop any of them have ever heard. The guitar players’ licks are like metal shavings coming off a lathe. The vocalists are wailing, the horns are a freight train.
And we learn that one of our crew can’t dance worth spit, is a regular klutz on the dance floor, though another is somewhat better. And then another musician joins the band — “Tall skinny guy, very pale white skin, black beard” — and when he starts to play (612):
The other horn players instantly get better, the guitar players even more precise and intricate. The vocalists are grinning and shouting duets in harmony. It’s like they’ve all just plugged into an electrical jack through their shoes...

Crowd goes crazy, dancing swells the room.
And a deep epiphanic glow pervades the room.
Now THAT’s how you make community. But where’d it come from? That whole scene struck me as being uncharacteristic of the novel. When I read that, at the very end no less, I realized I’d been seeking such scenes all along. Why’d KSR wait until the very end?
And it’s not just “Where in KSR did that come from?” But also, “Where in that world did it come from?” It’s as though KSR didn’t intend for it to happen, it wasn’t part of his world building, but somehow at the very end a different world collided with KSR’s in that underground club and insisted on ending the story.
There’s a counter narrative there, one about how such clubs came to be/continued to be. About the music, the musicians, the dance. This counter narrative goes back at least to the First Pulse. It involves Jes Grew. It involves the Mystic Jewels for the Preservation of all that’s Righteous and Funky. And how the Met Lifers grooved together.
BTW, the name of KSR’s underground club is Mezzrow’s. Mezz Mezzrow was born Milton Mesirow. He played jazz clarinet and was a part of the mid-century traditional jazz scene in America. He also supplied the musicians with reefer. Mezz became slang for marijuana.
Like I said, a counter narrative.
Conjoining uncooperative societies facilitates evolution of cooperation | Nature Human Behaviour https://t.co/FRt3zCSHCT— Steven Strogatz (@stevenstrogatz) August 9, 2018
A bit earlier in this millennium I thought it would be interesting to write a book on the parallel evolution of computer culture and psychedelic culture in the United States from mid-century to the end of the millennium. I wrote up a proposal, called it Dreams of Perfection, and it went nowhere. Subsequently John Markoff and Fred Turner each published parts of the story. Now I’m publishing a slightly revised portion of that old book proposal as an independent document. I’ve cut the marketing portions of the proposal, leaving only the conceptual overview and the chapter summaries. I’d envisioned five longish chapters, one for each decade, each chapter keyed to a different film: Fantasia, Forbidden Planet, 2001: A Space Odyssey, Tron, and The Matrix. I’ve retained that structure in this précis and concluded it with a consolidated chronology of significant dates.
Download at Academia.edu: https://www.academia.edu/37206924/Mind_Hacks_R_Us_Computing_and_Tripping_to_the_Millenniums_End
Here's the consolidated chronology:
1936: Alan Turing publishes “On Computable Numbers, with an application to the Entscheidungsproblem.” This paper established that some computations were, in principle, impossible to perform.
1936: Avant-garde playwright Antonin Artaud travels to Mexico to try mescaline. Artaud’s writings were very influential among the people who created the “happenings” of the 1960s and 1970s.
1938: LSD was first synthesized at Sandoz Pharmaceuticals in Switzerland, though its mind altering properties were not discovered until 1943. LSD, of course, was the major “drug” of the psychedelic 60s.
1945: Vannevar Bush published “As We May Think,” in the Atlantic Monthly. This article, written by a government research administrator, is often regarded as a harbinger of the personal computer revolution of the 1980s.
1945: Hungarian émigré and polymath John von Neumann wrote “First Draft of a Report on the EDVAC,” the document that defined the basic scheme used to implement software in digital computers.
1953: Watson and Crick publish the double helix structure of DNA, thus initiating the age of molecular biology and ushering biology into the information paradigm.
1954: J. R. R. Tolkien publishes The Fellowship of the Ring and The Two Towers.
1954: Thorazine, the first major tranquilizer, is marketed.
1956: The Bathroom of Tomorrow attraction opens at Disneyland.
1959: John McCarthy proposes time-sharing to MIT’s director of computing; time-sharing would make computers much more accessible.
1961: Robert Heinlein publishes Stranger in a Strange Land, which would become a major point of literary reference in the drug and mystical counter-culture of the 1960s.
1966: LSD and other psychedelics are outlawed, making it illegal for people to take trips.
1966: Original Star Trek series debuts on television, starting an entertainment franchise that would produce offspring for the rest of the century.
1967: The Beatles release Sgt. Pepper’s Lonely Hearts Club Band, a triumph of high-tech electronics and the psychedelic imagination.
1969: Herbert Simon publishes The Sciences of the Artificial, a collection of essays that makes fundamental insights of the artificial intelligentsia accessible to a larger audience.
1969: Man lands on the moon, the first time we have traveled to another world from ours – in physical reality.
1969: The Defense Department’s Advanced Research Projects Agency (ARPA) establishes a nationwide network linking computer researchers to one another. This is the original seed of the internet.
1971: President Nixon declares war on drugs.
1977: Star Wars became a motion-picture blockbuster, wedding martial arts mysticism with a space opera story and high tech special effects.
1981: William Bennett appointed to head the National Endowment for the Humanities, his first government post. This positioned him to declare and lead a culture war. Later he would become “Drug Czar” under President Bush.
1981: Ted Nelson publishes Literary Machines, in which he describes an elaborate hypertext system he called Project Xanadu (after the kingdom described in Coleridge’s “Kubla Khan”).
1984: Apple Computer advertised their new Macintosh in a Super Bowl advertisement that cast IBM in the role of Big Brother and the Mac as humankind’s liberator.
1984: William Gibson combines video game imagery with reggae music and mythology and sets it in the future, thus placing psychedelia and computing on equal terms in the cyberpunk world of Neuromancer.
1985: Steve Jobs purchases Pixar, from George Lucas, and establishes it as an independent computer graphics company. Pixar would go on to produce the first completely computer-generated motion pictures.
1990: The world-wide-web was proposed by Tim Berners-Lee at CERN in Switzerland.
1991: Desert Storm, the first war televised live and in real-time. Now real war is at our fingertips, just like a video game.
1994: Netscape was founded, with its IPO in 1995. This is the first company created by the world-wide-web, and it started the dotcom bubble that was to burst in 2001.
1999: Inventor and businessman turned futurist guru Raymond Kurzweil publishes The Age of Spiritual Machines, a manifesto about a future in which machines will surpass the accomplishments of their human creators.
2000: The Detroit Electronic Music Festival, a free event, draws 1.5 million people in its first year; it is dedicated to genres of music that cultivate trance states.
2003: Anti-war organizers used the world-wide-web to organize world-wide protests against the American-led war against Iraq.
2003: Two major books about psychedelic drugs and mysticism are published: John Horgan’s Rational Mysticism and Daniel Pinchbeck’s Breaking Open the Head.
Wednesday, August 8, 2018
Now we’re ready to look at a computational model I developed for a Shakespeare sonnet, 129, “The Expense of Spirit”. When I did this work I had no intention of modeling the whole thing, much less implementing a computer simulation of such a model. I just wanted to do something that felt useful – a vague criterion if ever there was one, but nonetheless real – that afforded some kind of insight. In such a model the nodes in a graph are generally taken to represent concepts while the links between them represent relations between concepts; see my earlier post, Gavin 1.2: From Warren Weaver 1949 to computational semantics, for further remarks.
With that in mind, here’s the sonnet with modernized spelling:
1 The expense of spirit in a waste of shame
2 Is lust in action, and till action, lust
3 Is perjured, murderous, bloody, full of blame,
4 Savage, extreme, rude, cruel, not to trust;
5 Enjoyed no sooner but despised straight,
6 Past reason hunted, and no sooner had,
7 Past reason hated as a swallowed bait
8 On purpose laid to make the taker mad:
9 Mad in pursuit and in possession so,
10 Had, having, and in quest to have, extreme;
11 A bliss in proof, and proved, a very woe,
12 Before, a joy proposed, behind, a dream.
13 All this the world well knows; yet none knows well
14 To shun the heaven that leads men to this hell.
Let’s begin at the beginning. The first line and a half is generally taken as a play on words. In one sense, expense means ejaculation and spirit means semen, making lust in action an act of sexual intercourse. You can’t get more sensorimotor than that.
But the lines can also be mapped into Elizabethan faculty psychology, in which spirit was introduced as a tertium quid between the material body and the immaterial soul(s). The rational soul exerted control over the body through the intellectual spirit or spirits; the sensitive soul worked through the animal spirit; and the vegetative soul worked through the vital spirit. Madness could be rationalized as the loss of intellectual spirit causing a situation in which the rational soul can no longer control the body. The body is consequently under control by man’s lower nature, the sensitive and vegetative souls. That too is lust in action. But the conception is abstract. It is this abstract lust in action that allows uncontrolled pursuit of the physical pleasures (and disappointments) of sex.
The uncontrolled and unfulfilling pursuit of sex can be expressed as a narrative that is at the core of the sonnet. The first 12 lines direct our attention back and forth over the following sequence of actions and mental states:
Desire: Protagonist becomes consumed with sexual desire and pursues the object of that desire using whatever means are necessary: “perjur'd, murderous, bloody . . . not to trust” (ll. 3-4).
Have Sex: Protagonist gets his way, having “a bliss in proof” (l. 11).
Shame: Desire satisfied, the protagonist is consumed with guilt: “despisèd straight” (l. 5), “no sooner had / Past reason hated” (ll. 6-7).
We might diagram that narrative sequence like this, where that loop at the top indicates that the whole sequence repeats:
|Figure 1: Lust Sequence|
Line 4 looks at Desire (“not to trust”), then line 5 evokes Have Sex followed by Shame. Line 6 begins in Desire then moves to Have Sex, followed by Shame at the beginning of line 7, whose second half begins a simile derived from hunting. Line 10 begins by pointing to Shame, then to Have Sex, then to Desire, thus moving through the sequence in reverse order. It concludes by characterizing the whole sordid business as “extreme.” I leave it as an exercise for the reader to trace the sequence in lines 11 and 12.
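The traversals just described are easy to make concrete. Here is a minimal sketch, in Python, of the Figure 1 sequence as a three-node directed graph with its repeat loop; the code and names are my own illustration, not part of the original model:

```python
# The lust sequence of Figure 1: three concept nodes joined in a cycle.
# The Shame -> Desire edge is the loop at the top of the diagram.
EDGES = {
    "Desire": "Have Sex",
    "Have Sex": "Shame",
    "Shame": "Desire",
}

def walk(start, steps):
    """Follow the sequence forward for a given number of steps."""
    node, path = start, [start]
    for _ in range(steps):
        node = EDGES[node]
        path.append(node)
    return path

# Line 10 moves through the sequence in reverse order; inverting the
# edge dictionary gives that backward traversal.
REVERSE = {dst: src for src, dst in EDGES.items()}
```

Because the graph is a single cycle, `walk("Desire", 3)` returns to Desire, mirroring the repetition the poem enacts; `REVERSE` captures the Shame → Have Sex → Desire order of line 10.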
Now consider this Figure 2, in which I have taken lines from the poem and superimposed them on the lust sequence with pointers from words in the text to nodes in the network:
|Figure 2: Words and concepts|
Those words ARE the text of Shakespeare’s sonnet. In the diagram they are physically distinct from the mental machinery postulated to be supporting their meaningfulness. Notice that many words are without pointers. The diagram is incomplete, which is obvious at a glance. That’s a virtue, not the incompleteness, but that it is readily apparent. When pursued properly, such models are brutally unforgiving in such matters.
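One way to make that incompleteness countable is to record the pointers as a mapping from phrases to nodes, with None marking words that lack pointers. The particular attachments below follow the readings of lines 4 and 5 given earlier; the None entries are hypothetical placeholders, not claims about Figure 2:

```python
# Word-to-concept pointers in the spirit of Figure 2. Phrases mapped to
# None have no pointer assigned; the model's coverage of the text is
# then directly measurable.
POINTERS = {
    "not to trust": "Desire",         # line 4
    "Enjoyed no sooner": "Have Sex",  # line 5
    "despised straight": "Shame",     # line 5
    "Past reason hunted": None,       # no pointer assigned yet
    "swallowed bait": None,           # no pointer assigned yet
}

unattached = [phrase for phrase, node in POINTERS.items() if node is None]
coverage = 1 - len(unattached) / len(POINTERS)  # fraction of phrases attached
```

With a representation like this, the diagram's gaps stop being a matter of squinting at arrows; any phrase without a node is listed explicitly, which is exactly the unforgivingness the paragraph above points to.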
Ashutosh Jogalekar over at 3 Quarks Daily (look in the comments) has called my attention to a recent article in which Douglas Hofstadter puts Google Translate through its paces and, surprise! surprise!, finds it wanting. This is the sort of thing he does:
I began my explorations very humbly, using the following short remark, which, in a human mind, evokes a clear scenario:

In their house, everything comes in pairs. There’s his car and her car, his towels and her towels, and his library and hers.

The translation challenge seems straightforward, but in French (and other Romance languages), the words for “his” and “her” don’t agree in gender with the possessor, but with the item possessed. So here’s what Google Translate gave me:

Dans leur maison, tout vient en paires. Il y a sa voiture et sa voiture, ses serviettes et ses serviettes, sa bibliothèque et les siennes.

The program fell into my trap, not realizing, as any human reader would, that I was describing a couple, stressing that for each item he had, she had a similar one. For example, the deep-learning engine used the word “sa” for both “his car” and “her car,” so you can’t tell anything about either car-owner’s gender. Likewise, it used the genderless plural “ses” both for “his towels” and “her towels,” and in the last case of the two libraries, his and hers, it got thrown by the final “s” in “hers” and somehow decided that that “s” represented a plural (“les siennes”). Google Translate’s French sentence missed the whole point.
What any human reader would realize, in this case, falls under the general heading of common sense knowledge, a known failing of AI.
Hofstadter goes on more or less in this vein, I think – for I just skimmed the article – until he arrives at something that captured my attention:
It’s hard for a human, with a lifetime of experience and understanding and of using words in a meaningful way, to realize how devoid of content all the words thrown onto the screen by Google Translate are. It’s almost irresistible for people to presume that a piece of software that deals so fluently with words must surely know what they mean. This classic illusion associated with artificial-intelligence programs is called the “ELIZA effect,” since one of the first programs to pull the wool over people’s eyes with its seeming understanding of English, back in the 1960s, was a vacuous phrase manipulator called ELIZA, which pretended to be a psychotherapist, and as such, it gave many people who interacted with it the eerie sensation that it deeply understood their innermost feelings.
Yes, of course. The illusion is very compelling, both with ELIZA and with Google Translate. But it is just that, an illusion. It seems that our so-called Theory of Mind module is easily fooled. We're more than willing to attribute mindful behavior at the drop of a semi-coherent word or three.
Hofstadter goes on to wax a bit poetic about what he does when he translates:
I am not, in short, moving straight from words and phrases in Language A to words and phrases in Language B. Instead, I am unconsciously conjuring up images, scenes, and ideas, dredging up experiences I myself have had (or have read about, or seen in movies, or heard from friends), and only when this nonverbal, imagistic, experiential, mental “halo” has been realized—only when the elusive bubble of meaning is floating in my brain—do I start the process of formulating words and phrases in the target language, and then revising, revising, and revising. This process, mediated via meaning, may sound sluggish, and indeed, in comparison with Google Translate’s two or three seconds per page, it certainly is—but it is what any serious human translator does.
And so on and so forth etc. Here he states his faith in the theoretical possibility of machines mimicking the mind:
In my writings over the years, I’ve always maintained that the human brain is a machine—a very complicated kind of machine—and I’ve vigorously opposed those who say that machines are intrinsically incapable of dealing with meaning. There is even a school of philosophers who claim computers could never “have semantics” because they’re made of “the wrong stuff” (silicon). To me, that’s facile nonsense. I won’t touch that debate here, but I wouldn’t want to leave readers with the impression that I believe intelligence and understanding to be forever inaccessible to computers.
I take it that the "wrong stuff" remark is a dig principally at John Searle. He goes on:
There’s no fundamental reason that machines might not someday succeed smashingly in translating jokes, puns, screenplays, novels, poems, and, of course, essays like this one. But all that will come about only when machines are as filled with ideas, emotions, and experiences as human beings are. And that’s not around the corner. Indeed, I believe it is still extremely far away. At least that is what this lifelong admirer of the human mind’s profundity fervently hopes.
But what of that last sentence? I conclude with some remarks from a paper David Hays and I wrote some years ago – incidentally, several years before Deep Blue defeated Garry Kasparov at chess:
The computer is similarly ambiguous. It is clearly an inanimate machine. Yet we interact with it through language; a medium heretofore restricted to communication with other people. To be sure, computer languages are very restricted, but they are languages. They have words, punctuation marks, and syntactic rules. To learn to program computers we must extend our mechanisms for natural language.

As a consequence it is easy for many people to think of computers as people. Thus Joseph Weizenbaum, with considerable dis-ease and guilt, tells of discovering that his secretary "consults" Eliza—a simple program which mimics the responses of a psychotherapist—as though she were interacting with a real person (Weizenbaum 1976). Beyond this, there are researchers who think it inevitable that computers will surpass human intelligence and some who think that, at some time, it will be possible for people to achieve a peculiar kind of immortality by "downloading" their minds to a computer. As far as we can tell such speculation has no ground in either current practice or theory. It is projective fantasy, projection made easy, perhaps inevitable, by the ontological ambiguity of the computer. We still do, and forever will, put souls into things we cannot understand, and project onto them our own hostility and sexuality, and so forth.

A game of chess between a computer program and a human master is just as profoundly silly as a race between a horse-drawn stagecoach and a train. But the silliness is hard to see at the time. At the time it seems necessary to establish a purpose for humankind by asserting that we have capacities that it does not. It is truly difficult to give up the notion that one has to add "because . . . " to the assertion "I'm important." But the evolution of technology will eventually invalidate any claim that follows "because." Sooner or later we will create a technology capable of doing what, heretofore, only we could.
Here is the publisher's blurb for Lorraine J. Daston and Peter Galison, Objectivity, MIT Press, 2007:
Objectivity has a history, and it is full of surprises. In Objectivity, Lorraine Daston and Peter Galison chart the emergence of objectivity in the mid-nineteenth-century sciences—and show how the concept differs from its alternatives, truth-to-nature and trained judgment. This is a story of lofty epistemic ideals fused with workaday practices in the making of scientific images.

From the eighteenth through the early twenty-first centuries, the images that reveal the deepest commitments of the empirical sciences—from anatomy to crystallography—are those featured in scientific atlases, the compendia that teach practitioners what is worth looking at and how to look at it. Galison and Daston use atlas images to uncover a hidden history of scientific objectivity and its rivals. Whether an atlas maker idealizes an image to capture the essentials in the name of truth-to-nature or refuses to erase even the most incidental detail in the name of objectivity or highlights patterns in the name of trained judgment is a decision enforced by an ethos as well as by an epistemology.

As Daston and Galison argue, atlases shape the subjects as well as the objects of science. To pursue objectivity—or truth-to-nature or trained judgment—is simultaneously to cultivate a distinctive scientific self wherein knowing and knower converge. Moreover, the very point at which they visibly converge is in the very act of seeing not as a separate individual but as a member of a particular scientific community. Embedded in the atlas image, therefore, are the traces of consequential choices about knowledge, persona, and collective sight. Objectivity is a book addressed to anyone interested in the elusive and crucial notion of objectivity—and in what it means to peer into the world scientifically.
Interesting, very interesting.