Wednesday, October 5, 2022
I spotted this post on LessWrong two months ago: jam-mosig, Paper reading as a Cargo Cult. It was the presence of the word “cult” in the title that caught my attention. Why? Because I think that belief in AI Doom is ‘cult’ behavior – see my post in 3 Quarks Daily, On the Cult of AI Doom – but I’ve got reservations about the word ‘cult.’ Why? Well, I’m not clear on just what the word means, or how to apply it to the world. Here’s the problem, sorta’. Every now and again I’ve seen a discussion of the difference between a cult and a religion in which it is observed that a religion is just a cult that has managed to achieve legitimacy. That is to say, the difference is not a matter of the beliefs, but of their social standing. This post is not the place to attempt to hash that out, but I think it’s a legitimate issue.
However, LessWrong is a place where AI Doom (that is, AI as existential risk) is a legitimate complex of beliefs. It is one of the central sites on the web for discussions of those ideas. Thus it is interesting and significant to see the word used on that site. It’s a sure indication that the issue of broader legitimacy is explicitly recognized in that world.
Note, however, that the post isn’t specifically about belief in AI Doom, but rather about the broader issue of AI alignment, where the possibility that AI presents an existential risk is only one issue among many. But it is the most extreme and distressing one.
Having said that, let’s take a look at the post. Here’s how the post opens:
I have come across various people (including my past self) who meet up regularly to study, e.g., alignment forum posts and discuss them. This helps people bond over their common beliefs, fears, and interests, which I think is good, but in no way is this ever going to lead anyone to find a solution to the alignment problem. In this post I'll reason why this doesn't help, and what I think we should do instead.
Reading good papers can be fun. You learn something interesting and, if the topic is hard but well presented by the authors, you get a kick from finally understanding something complicated. But is what you learned actually useful for the problem at hand? What is the question that drove you to read this paper in such detail?
Yes, you need to regularly skim papers for fun, so you get an idea of what's out there and where to look when you need something. You also need to absorb terminology and good writing practice, so you can communicate your own research. Yet, I believe that fun-reading should only occupy a tiny fraction of your time, as you have more important things to do (see next section).
Despite its relative unimportance, paper reading groups tend to focus a lot on this fun-reading aspect. They are more of a social gathering than a mechanism to boost progress.
The post is, in effect, distinguishing between a serious concern about AI alignment (which includes AI Doom) and a more superficial commitment. This more superficial commitment is (thus) cultlike and centers on social activity, not intellectual investigation.
After some more remarks about ‘the cult,’ jam-mosig takes up the issue of “actual science”:
To drive scientific progress means to do something that nobody else has ever done before. This means that your idea or line of research tends to seem strange to others (at first sight). At the same time, it also tends to seem obvious to you - it's just the natural next step when you take seriously what you've learnt so far.
Before I properly reconcile "strange" and "obvious" here, let me warn you of a trap: It is very easy to have an idea that seems obvious to you, but strange to others, when you are delusional. Especially when you are good at arguing, you can easily make yourself believe that you are right and everybody else is just not seeing it. Beware that trap.
I find that second paragraph interesting because it outlines the epistemological problem presented by cultishness. Cult beliefs are obvious to those in the cult, but strange to others.
While there’s more to the post, though not much more, that’s enough for my purpose, which is simply to show that the issue of cultishness is real within the AI alignment community. The author of the post seems intent on showing that, while they once engaged in this cultish behavior, they’ve since moved beyond it. But those other people, over there...they’ve got to change their ways.
"I consider the Organism, or natural Machine, a machine in which each part is a machine."— Santa Fe Institute (@sfiscience) October 5, 2022
🦠 Energy → Work
"Active matter *employs* control, either internally [e.g., embryogenesis] or externally [e.g., with sheepdogs]."
- @SurajShankar92 pic.twitter.com/4l2NAG3F7C
Tuesday, October 4, 2022
The paper's conclusion:
In this paper, we test the extent to which the representations of language models encode information about the non-linguistic world in terms of their ability to use image representations to perform vision-language tasks. We show through LiMBeR (Linearly Mapping Between Representation spaces) that training a linear (thus, distance-preserving) transformation to connect image features to an LM’s input space is competitive on image captioning and visual question answering benchmarks with similar models like MAGMA that tune both image and text networks. However, we also find that such transfer is highly dependent on the amount of linguistic supervision the image encoder backbone had during its pretraining phase. BEIT, which is a vision-only image encoder, underperforms compared to CLIP, which was pretrained with natural language captions. We explore what conceptual information transfers successfully, and find through probing, clustering, and analysis of generated text that the representational similarity between LMs and vision-only image representations is mostly restricted to coarse-grained concepts of perceptual features. Our findings indicate that large LMs do appear to form models of the visual world along these perceptual concepts to some extent, but are biased to form categorical concepts of words that are not distinguished by vision-only models. We are excited by future work applying LiMBeR to other domains and modalities as a behavioral tool for understanding the representations of LMs and other deep neural networks.
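The core move in that conclusion — learning a single linear map from image features into a language model's input embedding space — is simple enough to sketch. The code below is my own toy illustration, not the authors' code: synthetic features stand in for real image and text encoders, and all names and dimensions are invented.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for paired features: image-encoder outputs (d_img)
# and the LM's input embeddings for the matching captions (d_lm).
n, d_img, d_lm = 500, 64, 128
img_feats = rng.normal(size=(n, d_img))
true_map = rng.normal(size=(d_img, d_lm))
lm_embeds = img_feats @ true_map  # pretend caption embeddings

# The linear map: a least-squares fit of W so that
# img_feats @ W ≈ lm_embeds. Both encoders stay frozen;
# only W is learned.
W, *_ = np.linalg.lstsq(img_feats, lm_embeds, rcond=None)

# Projected image features now live in the LM's input space and
# could be fed to the LM as "soft" visual tokens before a prompt.
projected = img_feats @ W
err = np.abs(projected - lm_embeds).max()
print(err < 1e-8)  # exact recovery on this synthetic linear data
```

The point of the toy is the shape of the method, not the numbers: because the transformation is a single matrix, whatever the LM can do with the projected features it must be doing with geometry already present in the image representations.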
Sunday, October 2, 2022
it is generally thought that fMRI responses are too slow for language decoding. to overcome this low temporal resolution, we developed a bayesian decoder that combines state of the art language models and encoding models to generate rapidly changing word sequences (2/7)— Jerry Tang (@jerryptang) September 30, 2022
the same decoder also worked on brain responses while subjects imagined telling stories, even though the decoder was only trained on perceived speech data. we expect that training the decoder on some imagined speech data will further improve performance (4/7) pic.twitter.com/z63D7Xe3Sa— Jerry Tang (@jerryptang) September 30, 2022
Abstract of linked article:
A brain-computer interface that decodes continuous language from non-invasive recordings would have many scientific and practical applications. Currently, however, decoders that reconstruct continuous language use invasive recordings from surgically implanted electrodes, while decoders that use non-invasive recordings can only identify stimuli from among a small set of letters, words, or phrases. Here we introduce a non-invasive decoder that reconstructs continuous natural language from cortical representations of semantic meaning recorded using functional magnetic resonance imaging (fMRI). Given novel brain recordings, this decoder generates intelligible word sequences that recover the meaning of perceived speech, imagined speech, and even silent videos, demonstrating that a single language decoder can be applied to a range of semantic tasks. To study how language is represented across the brain, we tested the decoder on different cortical networks, and found that natural language can be separately decoded from multiple cortical networks in each hemisphere. As brain-computer interfaces should respect mental privacy, we tested whether successful decoding requires subject cooperation, and found that subject cooperation is required both to train and to apply the decoder. Our study demonstrates that continuous language can be decoded from non-invasive brain recordings, enabling future multipurpose brain-computer interfaces.
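The decoding scheme the abstract describes — a language model proposing word sequences, filtered by an encoding model that predicts brain responses — amounts to a Bayesian update. The sketch below is purely illustrative: the candidate words, predicted responses, and probabilities are all made up, and it compresses the authors' sequence decoder into a single word-level step.

```python
import math

# A language model supplies the prior over the next word
# (numbers invented for illustration).
lm_prior = {"dog": 0.5, "cat": 0.3, "piano": 0.2}

def encoding_likelihood(word, observed):
    # An encoding model maps a candidate word to a predicted brain
    # response; the score is a Gaussian likelihood of the observed
    # response given that prediction. Predictions are made up.
    predicted = {"dog": 1.0, "cat": 0.2, "piano": -0.8}[word]
    return math.exp(-0.5 * (observed - predicted) ** 2)

observed_response = 0.9  # pretend fMRI feature for this time step

# Posterior ∝ LM prior × encoding-model likelihood.
posterior = {w: lm_prior[w] * encoding_likelihood(w, observed_response)
             for w in lm_prior}
total = sum(posterior.values())
posterior = {w: p / total for w, p in posterior.items()}

best = max(posterior, key=posterior.get)
print(best)  # → dog
```

Run over successive time steps with a beam of candidate continuations, this kind of update is one way a slow, noisy signal like fMRI can still constrain rapidly changing word sequences: the language model does most of the work, and the brain data only has to adjudicate among its proposals.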
Friday, September 30, 2022
AI journalism has a hype problem. To move past the cycle of reacting to each story, @sayashk and I analyzed over 50 pieces from 5 prominent outlets. We've compiled a list of recurring pitfalls that readers should watch out for and journalists should avoid. https://t.co/uHFUpHMZ6U— Arvind Narayanan (@random_walker) September 30, 2022
This is the first tweet in the tweet thread.
Thursday, September 29, 2022
Pay attention to audience interaction at 7:07 to the end.
We collect human preference annotations for news summaries generated by current SOTA and zero-shot GPT-3 models. For multiple settings (generic + keyword) and datasets (CNN + BBC), GPT-3 summaries beat prior fine-tuned models!— Tanya Goyal (@tanyaagoyal) September 27, 2022
This also means we can now break away from noisy benchmark datasets, e.g. XSum, that (we observe) cannot produce systems for real settings. Instead, actual use cases and not data availability can now dictate future research directions (task goals, domains, etc.)— Tanya Goyal (@tanyaagoyal) September 27, 2022
Browse examples of generated summaries and human annotations at: https://t.co/vcSeVl5Zwj— Tanya Goyal (@tanyaagoyal) September 27, 2022
Tuesday, September 27, 2022
Jeffrey Funk and Gary N. Smith, No, AI probably won’t revolutionize drug development, Salon, September 24.
The opening paragraphs:
Drug development is expensive, time consuming, and risky. A typical new drug costs billions of dollars to develop and requires more than ten years of work — yet only about 0.02% of the drugs in development ever make it to market.
Some claim that AI, or artificial intelligence, will revolutionize drug development by ushering in much shorter development times and drastically lower costs. Many scientists and business consultants are especially optimistic about AI's ability to predict the shapes of nearly every known protein using DeepMind's AlphaFold, an artificial intelligence tool developed by Google parent company Alphabet; predicting this information with great detail would be key to quickly developing drugs. As one AI company boasts, "We … firmly believe that AI has the potential to transform the drug discovery process to achieve time and cost efficiencies."
The relative unimportance of AI in COVID development is consistent with the conclusion of many scientists that AI is not about to revolutionize drug development. The biggest problem is that clinical trials are the longest and typically most expensive part of the process, and AI cannot replace actual trials. Even AI's impact on drug discovery may be limited. A Science op-ed recently argued that, "[AI] doesn't make as much difference to drug discovery as many stories and press releases have had it…. Protein structure prediction is a hard problem, but even harder ones remain."
An often crucial part of drug development is to determine whether a drug binds to the candidate protein, something that MIT researchers have shown that AlphaFold cannot do and DeepMind admits AlphaFold can't do: "Predicting drug binding is probably one of the most difficult tasks in biology: these are many-atom interactions between complex molecules with many potential conformations, and the aim of docking is to pinpoint just one of them." [...]
One huge hurdle for all AI data mining algorithms is that the data deluge has made the number of promising-but-coincidental patterns waiting to be discovered far, far larger than the number of useful relationships — which means that the probability that a discovered pattern is truly useful is very close to zero.
There is more at the link.
Sunday, September 25, 2022
The NYTimes has another article that feeds my interest in changing attitudes toward work: Lisa Rabasa Roepe, Millennials Want to Retire at 50. How to Afford It Is Another Matter, Sept. 24, 2022. It opens:
Although Devangi Patel, 33, has been working as a cardiothoracic anesthesiologist at a large medical center outside Atlanta for only two years, her goal is to afford to walk away from her job at 50.
“That, to me, is the American dream,” she said.
Dr. Patel is not alone in her quest to become financially independent — and at a relatively early age. It seems that a generational shift is well underway: Many millennial workers don’t aspire to retire in their mid- or late 60s, like their parents. Instead, many with professional careers are seeking to leave their jobs by 50 and work for themselves or take a lower-paying role that is more aligned with their interests, studies are showing and financial advisers are finding.
“I want to get to a point where I don’t have to work for money anymore, and I can work for pleasure,” Dr. Patel said.
But reaching that goal has been harder than Dr. Patel anticipated. Although she contributes to a 401(k) and a Roth individual retirement account, invests in stocks with a brokerage account and maxes out her health savings account, she is also paying off a $250,000 loan for medical school and paying for her wedding in December.
There’s more at the link.
Saturday, September 24, 2022
Two weeks ago I published another essay at 3 Quarks Daily:
It’s a decent essay, in particular I like the capsule history I sketch for the emergence of AI Doom as an issue. But it’s hardly my last word on the subject.
* * * * *
Here’s a TEDx talk from 2017 by a high school student, Andrew Zeitler, entitled, “The Truth Behind Artificial Intelligence.”
It opens with two quick versions of the future, one in which AI is the center of a world of luxury and enjoyment and the other in which AI destroys the human race. At about 14:08 he says the technological singularity will arrive in 2029. He concludes by quoting Edsger Dijkstra: “The question of whether a computer can think is no more interesting than the question of whether a submarine can swim.”
He's bright and engaged but, as far as I can tell, does not have deep technical knowledge of AI. He presents these ideas as obvious truths. I have no idea how many people share his beliefs. But relatively few people are in a position to think these matters through with any degree of sophistication.
I have no idea when these issues will be resolved. But I suspect that, at this level, they are issues being driven by symbolic concerns, not technical ones. We’re working through a cultural mythology for the 21st century.
* * * * *
This is epistemic theater, but also PR (signaling). For those purposes the contents of the essays are irrelevant. How much more such theater is in store for us?
We think it's possible that we're totally out of it on AGI.— Leopold Aschenbrenner (@leopoldasch) September 23, 2022
And we want to learn about it as quickly as possible because it would change how we allocate 100s of millions of $ (or more).
Enter by Dec 23! pic.twitter.com/AxA9wM7ZAo
Saturday, September 10, 2022
Bump: from June 2017. This sort of thing is on my mind these days.
This transformation will result in enormous profits for the companies that develop A.I., as well as for the companies that adopt it. Imagine how much money a company like Uber would make if it used only robot drivers. Imagine the profits if Apple could manufacture its products without human labor. Imagine the gains to a loan company that could issue 30 million loans a year with virtually no human involvement. (As it happens, my venture capital firm has invested in just such a loan company.)

We are thus facing two developments that do not sit easily together: enormous wealth concentrated in relatively few hands and enormous numbers of people out of work. What is to be done?
So if most countries will not be able to tax ultra-profitable A.I. companies to subsidize their workers, what options will they have? I foresee only one: Unless they wish to plunge their people into poverty, they will be forced to negotiate with whichever country supplies most of their A.I. software — China or the United States — to essentially become that country’s economic dependent, taking in welfare subsidies in exchange for letting the “parent” nation’s A.I. companies continue to profit from the dependent country’s users. Such economic arrangements would reshape today’s geopolitical alliances.
Addendum, 6.26.17: Mark Liberman has posted about this over at Language Log, and has some interesting links to remarks by Norbert Wiener as well as an interesting set of replies.
Friday, September 9, 2022
0:00 Sunset and the Mocking Bird
3:51 Lightning Bugs and Frogs
6:45 Le Sucrier Velours
9:33 Northern Lights
13:11 The Single Petal of a Rose
17:19 Apes and a Peacock
In a historic Duke-meets-Queen [in 1958] Ellington served up his famous charm for the monarch. When she asked him whether this was his first visit to Britain, Duke replied that his initial trip to London was in 1933, “way before you were born.” This was out-and-out flattery, because Queen Elizabeth had been born in 1926—but she played along with the game. “She gave me a real American look,” he later recalled, “very cool man, which I thought was too much.”
Give Duke credit for savviness. He understood that even a queen wants to hear how young she looks. Ellington followed up saying that Her Majesty “was so inspiring that something musical would come out of it.” She told him that she would be listening.
According to Ellington’s son Mercer, his father began working on the music to The Queen’s Suite as soon as he got back to his hotel room. He enlisted colleague and collaborator Billy Strayhorn. In addition to royal inspiration, the work also borrowed from the natural world: the opening movement draws on birdsong heard during a Florida visit, another section was a response to an unexpected encounter with “a million lightning bugs” serenaded by a frog. The best known part of the Suite, “The Single Petal of a Rose,” was spurred by a floral display on a piano at a friend’s home. [...]
By early 1959, the finished work was ready for performance. The Queen’s Suite was now a 20-minute work in six movements. The band recorded it over the course of three sessions in February and April 1959. A single golden disc was made, and sent to Buckingham Palace. In order to ensure that no other copies were released, Ellington reimbursed Columbia, his label, some $2,500 in production costs, and thus retained personal ownership of the master tapes.
There's more at the link.
Thursday, September 8, 2022
Prediction as a way of foreclosing the future – And no foreclosure is more absolute than apocalypse [AGI]
I’ve just come across a recent article that sheds light on one of the signal features of the contemporary AI culture, in particular, the subculture devoted to the study and prevention of existential risk from advanced artificial intelligence. The article:
Sun-Ha Hong, Predictions Without Futures, History and Theory, Vol 61, No. 3: July 2022, 1-20. https://onlinelibrary.wiley.com/doi/epdf/10.1111/hith.12269
Abstract: Modernity held sacred the aspirational formula of the open future: a promise of human determination that doubles as an injunction to control. Today, the banner of this plannable future is borne by technology. Allegedly impersonal, neutral, and exempt from disillusionment with ideology, belief in technological change saturates the present horizon of historical futures. Yet I argue that this is exactly how today’s technofutures enact a hegemony of closure and sameness. In particular, the growing emphasis on prediction as AI’s skeleton key to all social problems constitutes what religious studies calls cosmograms: universalizing models that govern how facts and values relate to each other, providing a common and normative point of reference. In a predictive paradigm, social problems are made conceivable only as objects of calculative control—control that can never be fulfilled but that persists as an eternally deferred and recycled horizon. I show how this technofuture is maintained not so much by producing literally accurate predictions of future events but through ritualized demonstrations of predictive time.
From page 4:
Notably, today’s society feverishly anticipates an AI “breakthrough,” a moment when the innate force of technological progress transforms society irreversibly. Its proponents insist that the singularity is, as per the name, the only possible future (despite its repeated promise and deferral since the 1960s—that is, for almost the entire history of AI as a research problem). Such pronouncements generate legitimacy through a sense of inevitability that the early liberals sought in “laws of nature” and that the ancien régime sought in the divine. AI as a historical future promises “a disconnection” from past and present, and it cites that departure as the source of the possibility that even the most intractable political problems can be solved not by carefully unpacking them but by eliminating all of their priors. Thus, virtual reality solves the problems with reality merely by being virtual, cryptocurrency solves every known problem with currency by not being currency, and transhumanism solves the problem of people by transcending humanity. Meanwhile, the present and its teething problems are somewhat diluted of reality: there is less need to worry so much about concrete, existing patterns of inequality or inefficiency, the idea goes, since technological breakthroughs will soon render them irrelevant. Such technofutures saturate the space of the possible with the absence of a coherent vision for society.
That paragraph can be taken as a trenchant reading of long-termism, which is obsessed with prediction and, by focusing attention on the needs of people in the future, tends to empty the present of all substance.
Nor does it matter that the predicted technofuture is, at best, highly problematic (p. 7):
In short, the more unfulfilled these technofutures go, the more pervasive and entrenched they become. When Elon Musk claims that his Neuralink AI can eliminate the need for verbal communication in five to ten years [...] the statement should not be taken as a meaningful claim about concrete future outcomes. Rather, it is a dutifully traditional performance that, knowingly or not, reenacts the participatory rituals of (quasi) belief and attachment that have been central to the very history of artificial intelligence. After all, Marvin Minsky, AI’s original marketer, had loudly proclaimed the arrival of truly intelligent machines by the 1970s. The significance of these predictions does not depend on their accurate fulfillment, because their function is not to foretell future events but to borrow legitimacy and plausibility from the future in order to license anticipatory actions in the present.
Here, “the future . . . functions as an ‘epistemic black market.’” The conceit of the open future furnishes a space of relative looseness in what kinds of claims are considered plausible, a space where unproven and speculative statements can be couched in the language of simulations, innovation, and revolutionary duty. What is being traded here are not concrete achievements or end states but the performative power of the promise itself. In this context, claims do not live or die by specifically prophesized outcomes; rather, they involve a rotating array of promissory themes that create space for optimism and investment. Cars that can really drive themselves without alarming swerves, facial recognition systems that can really determine one’s sexuality, and so on—the final justifications for such totally predictive systems are always placed in the “near” future, partly shielded from conventional tests of viability or even morality.
The prediction of an AI apocalypse is not, however, an optimistic one. But it does affirm the power and potency of the technology. And the notion of the future as an ‘epistemic black market’ points up the idea that the predictive activity is a form of theater, epistemic theater.
It seems fitting, then, to think through technofutures via a concept taken from religious studies by a historian of science. John Tresch describes cosmograms as unified pictures of the world—“central points of reference that enable people to bring themselves into agreement.” This is not to say that a cosmogram boasts an explicit theory of everything, which might then be proven or disproven like a formula. Nor do such “unified pictures” enact a total and dogmatic belief upon their subjects. Technofutures as cosmograms do not insist on a particular technology or technological outcome per se. Rather, predicting the emergence of intelligent robots by a certain year is a way to build and replenish a familiar array of beliefs about human transcendence and technological solutionism while accommodating a wide variety of definitions of intelligence, robots, and progress.
Here I would emphasize the phrase “enable people to bring themselves into agreement.” AI is regarded as a transformative technology. Reaching agreement on just how AI should best transform society would be difficult. But if one believes that AI is likely to pose an existential threat, that presents a much narrower target on which agreement must be reached. All can agree that they do not want humankind to be extinguished.
There’s more in the article.
Another bump to the top. I wrote several posts on Obama's Eulogy for Clementa Pinckney and gathered them into a working paper, Obama’s Eulogy for Clementa Pinckney: Technics of Power and Grace, along with some other posts on the Black church. This is the first of those posts. I argue that the eulogy exhibits ring-form composition. I conclude with the complete text of the eulogy.
1. Prologue: Address to his audience, quoting of a passage from the Bible.
2. Phase 1: Moves from Clementa Pinckney’s life to the significance of the black church in history.
3. Phase 2: The murder itself and the presence of God’s grace.
4. Phase 3: Looks to the nation, the role racism has played, and the need to move beyond it.
5. Closing: Amazing Grace.
A draft of the Charleston eulogy was given to the president around 5 p.m. on June 25 and, according to Mr. Keenan, Mr. Obama spent some five hours revising it that evening, not merely jotting notes in the margins, but whipping out the yellow legal pads he likes to write on — only the second time he’s done so for a speech in the last two years. He would rewrite large swaths of the text.
Mr. Obama expanded on a short riff in the draft about the idea of grace, and made it the central theme of the eulogy: the grace family members of the shooting victims embodied in the forgiveness they expressed toward the killer; the grace the city of Charleston and the state of South Carolina manifested in coming together in the wake of the massacre; the grace God bestowed in transforming a tragedy into an occasion for renewal, sorrow into hope.
We do not know whether the killer of Reverend Pinckney and eight others knew all of this history. But he surely sensed the meaning of his violent act. It was an act that drew on a long history of bombs and arson and shots fired at churches, not random, but as a means of control, a way to terrorize and oppress. (Applause.) An act that he imagined would incite fear and recrimination; violence and suspicion. An act that he presumed would deepen divisions that trace back to our nation's original sin.
"They were still living by faith when they died," Scripture tells us. "They did not receive the things promised; they only saw them and welcomed them from a distance, admitting that they were foreigners and strangers on Earth."
I'm bumping this to the top of the queue on general principle – and – to remind myself that, to the extent that I have a home discipline, it is the study of literature. I originally published this in October of 2015. I've written considerably more about literary study since then.
Wednesday, September 7, 2022
The title of Peter Coy’s Labor Day op-ed caught my attention: Work Is Intrinsically Good. Or Maybe It’s Not?, NYTimes, September 5, 2022. After quoting Thomas Carlyle, Coy observes:
Many of us agree that work is inherently good, character-building and a manifestation of one’s seriousness and reliability. However, others of us make a forceful argument that work for its own sake is pointless and ridiculous. Who’s right?
The antiwork movement is the one that’s getting most of the attention lately. There’s China’s “lying flat” movement. There’s quiet quitting. There’s the Great Resignation. And there’s the fact that lots of people simply don’t want to work anymore. In the United States, the labor force participation rate has fallen for two decades, and there were more than 11 million unfilled jobs on the last day of July this year.
But on this Labor Day, I want to focus on the other group: those who are anti-antiwork (or simply pro-work).
Then he equivocates, though only for a moment:
Why is work valuable for its own sake, though? In textbook economics, after all, leisure is the good stuff and work is the necessary evil. “How have so many humans reached the point where they accept that even miserable, unnecessary work is actually morally superior to no work at all?” the anthropologist David Graeber asked in a 2018 book.
Last week I read a smart piece of psychological research that I think answers Graeber’s valid question. Its title says it all: “The Moralization of Effort.”
Human beings evolved in societies that valued cooperation, the theory goes. People who work hard tend to be team players. So working hard in primitive societies was a costly but effective way of signaling one’s trustworthiness. As a result, our brains today are wired to perceive effort as evidence of morality. “Just as people will engage in unnecessary prosocial behavior to differentiate themselves as a superior cooperative partner,” the paper says, “displays of effort, including economically unnecessary effort, may serve a similar function.”
This concept isn’t new, but the “Moralization of Effort” paper tests it, and finds strong support for it, using seven clever experiments involving hundreds of people in the United States, South Korea and France. The choice of countries is interesting. Koreans are known to be hard workers. They even have a word for death by overwork: gwarosa. The French work fewer hours than most and pride themselves on their savoir vivre.
While the work-is-good mind-set may have had evolutionary advantages, it can backfire in the modern world, the authors contend. Millions of workers may “signal moral worth through structured drudgery,” they write, echoing Graeber. “We fear it has also created harmful incentive structures that reward workaholism and joyless devotion to mundane efforts that produce little value beyond the signal of effortful engagement.”
I think the pro-work camp, while quieter, remains larger and more influential than the antiwork camp that’s drawing all the headlines lately. Just look at the strong opposition to the universal basic income idea championed by Andrew Yang and others.
I count myself among those skeptical of the (strenuous) moralization of work. But I’m not sure what to replace it with.
There’s more at the link.
* * * * *
Two posts on the skeptical side:
Thriving and Jiving Among Friends and Family: The Place of Music in Everyday Life, 3 Quarks Daily, August 15, 2022.
Tuesday, September 6, 2022
Robert Wright interviews David B. Yaden, author of The Varieties of Spiritual Experience.
At about 14:11:
David B. Yaden: Right in the beginning of this project...we thought 'let's run a survey and see how these Freudian views and these Jamesian views hold up.' And it was meant to be footnotes throughout the book. We'd say 'here are some basic descriptive data bearing on this question.' [...] The main thing that's interesting is that these massive polling companies like Gallup, the General Social Survey, and others, have for decades run these big surveys asking people, 'have you ever had a spiritual or mystical experience.' [...] Somewhat shockingly about 35% of populations in the US and the UK endorsed these statements completely.
I'm bumping this to the top to remind me of my intellectual roots in literary criticism.
* * * * *
Hey, diddle, diddle,
The cat and the fiddle,
The cow jumped over the moon;
The little dog laughed
To see such sport,
And the dish ran away with the spoon.
Monday, September 5, 2022
Vehicularization: A Control Principle in a Complex Animal with several levels of Modal Organization (with note on King Kong)
Once Darrow is successfully rescued, Kong is captured, and they're back in New York. Now the goal is simply to get rich, which will happen real soon now, as soon as the money starts rolling in from exhibiting Kong. And, of course, Darrow and Driscoll plan to get married. Important, but it doesn't drive the action. What drives the action is Kong's escape. Now, once again, the goal is immediate survival. Once Kong is dead, the couple can get married and live happily ever after.
Well, vehicularization is about temporal horizons in the control of behavior. It's about the brain, but we can also map it onto the geography of the journey in King Kong, giving us a temporal myth-logic. A similar analysis can be done for Heart of Darkness, though that's going to be tricky because of the double narration, and for Gojira.
The following post consists of a section that David Hays and I removed from our article, Principles and Development of Natural Intelligence (downloadable PDF). The final draft was long and the journal editors asked that we cut it. This is much of what we cut.
For example, you’ve been working hard and, all of a sudden, you notice that you’re ravenously hungry. Your lowest level system, the one that is actually capable of satisfying your hunger, wants to grab some food and start chewing. If a cheeseburger or a head of lettuce is close at hand, you can see it and grab it; the system goes into action and your immediate hunger is sated. If no food is available, however, you turn control over to a higher-level system that then goes looking for food. If you are in your home, you’ll go to the kitchen or the pantry and see what’s available, now.
So, you’re hungry. There’s nothing immediately to hand, but you smell something potentially delicious. You follow your nose and it leads you to a window. There’s no food on the window sill and the window’s filled by a screen you can’t break through. What do you do? If you insist on following your nose, you’re stuck. That’s a local maximum. So you’ve got to stop following your nose and do something else to take you to a place in your environment where following your nose will be more successful.
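The control structure in that hunger story — act locally if you can, hand control upward when you're stuck — can be sketched as nested levels, each with a longer temporal horizon. This is my own toy illustration of the idea in the passage, not the model from the Benzon and Hays article; all the names and the world representation are invented.

```python
# Each level handles a longer temporal horizon; control passes
# upward only when the level below fails.

def grab_and_eat(world):
    # Lowest level: act only on food that is immediately at hand.
    return "sated" if world.get("food_in_reach") else None

def follow_nose(world):
    # Middle level: gradient-following. It fails at a local
    # maximum, e.g. a screened window the smell comes through.
    if world.get("scent") and not world.get("blocked"):
        world["food_in_reach"] = True
        return grab_and_eat(world)
    return None

def search_elsewhere(world):
    # Highest level: abandon the gradient, relocate, then retry
    # the lower levels from the new position.
    world["blocked"] = False
    world["scent"] = True
    return follow_nose(world)

def seek_food(world):
    # Try each level in order, cheapest and most immediate first.
    for level in (grab_and_eat, follow_nose, search_elsewhere):
        result = level(world)
        if result:
            return result
    return "still hungry"

# Scent present but blocked by the screen: the nose-follower is
# stuck at its local maximum, so control passes to the top level.
print(seek_food({"scent": True, "blocked": True}))  # → sated
```

The design point is that the lower levels never know about the screen; escaping the local maximum is the job of the level above, which is exactly the division of labor the passage describes.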