Saturday, October 16, 2021
Monday, October 11, 2021
Major claim: "[Vector space models] call for distinct modes of humanistic interpretation and explication that are related to but distinct from those that may have been used on the original source texts."— James E. Dobson (@jeddobson) October 11, 2021
Surprisingly perhaps, it turns out that the hermeneutical theories of nineteenth-century theologian and philosopher Friedrich Schleiermacher are perhaps the most useful frame for understanding word embeddings.— James E. Dobson (@jeddobson) October 11, 2021
Sunday, October 10, 2021
Image courtesy of Des Pickard
The third, Fire, derives from Indian and Tibetan sources:
The white nude is gorgeous, to be sure, but to my eye it doesn't at all look like it belongs with the previous three in the 4 Elements series. It's like Nina no longer felt like working in that style, but she still felt the nagging absence of the fourth quilt.
Thursday, September 30, 2021
How does order spontaneously arise out of chaos? This video is sponsored by Kiwico — go to https://www.kiwico.com/Veritasium50 for 50% off your first month of any crate.
An enormous thanks to Prof. Steven Strogatz — this video would not have been possible without him. Much of the script-writing was inspired and informed by his wonderful book Sync, and his 2004 TED talk. He is a giant in this field, and has literally written the book on chaos, complexity, and synchronization. It was hard to find a paper in this field that Steven (or one of his students) didn't contribute to. His Podcast "The Joy of X" is wonderful — please listen and subscribe wherever you get your podcasts https://www.quantamagazine.org/tag/the-joy-of-x
Nicky Case's Amazing Firefly Interactive — https://ncase.me/fireflies
Great Kuramoto Model Interactive — https://www.complexity-explorables.org/explorables/ride-my-kuramotocycle/
Strogatz, S. H. (2012). Sync: How order emerges from chaos in the universe, nature, and daily life. Hachette UK. — https://ve42.co/Sync
Strogatz, S. H. (2000). From Kuramoto to Crawford: exploring the onset of synchronization in populations of coupled oscillators. Physica D: Nonlinear Phenomena, 143(1-4), 1-20. — https://ve42.co/Strogatz2000
Goldsztein, G. H., Nadeau, A. N., & Strogatz, S. H. (2021). Synchronization of clocks and metronomes: A perturbation analysis based on multiple timescales. Chaos: An Interdisciplinary Journal of Nonlinear Science, 31(2), 023109. — https://ve42.co/Goldsztein
The Broughton Suspension Bridge and the Resonance Disaster — https://ve42.co/Broughton
Bennett, M., Schatz, M. F., Rockwood, H., & Wiesenfeld, K. (2002). Huygens's clocks. Proceedings of the Royal Society of London. Series A: Mathematical, Physical and Engineering Sciences, 458(2019), 563-579. — https://ve42.co/Bennett2002
Pantaleone, J. (2002). Synchronization of metronomes. American Journal of Physics, 70(10), 992-1000. — https://ve42.co/Pantaleone2002
Kuramoto, Y. (1975). Self-entrainment of a population of coupled non-linear oscillators. In International symposium on mathematical problems in theoretical physics (pp. 420-422). Springer, Berlin, Heidelberg. — https://ve42.co/Kuramoto1975
Great video by Minute Earth about Tidal Locking and the Moon — https://ve42.co/MinuteEarth
Strogatz, S. H., Abrams, D. M., McRobie, A., Eckhardt, B., & Ott, E. (2005). Crowd synchrony on the Millennium Bridge. Nature, 438(7064), 43-44. — https://ve42.co/Strogatz2005
Zhabotinsky, A. M. (2007). Belousov-zhabotinsky reaction. Scholarpedia, 2(9), 1435. — https://ve42.co/Zhabotinsky2007
Flavio H Fenton et al. (2008) Cardiac arrhythmia. Scholarpedia, 3(7):1665. — https://ve42.co/Cardiac
Cherry, E. M., & Fenton, F. H. (2008). Visualization of spiral and scroll waves in simulated and experimental cardiac tissue. New Journal of Physics, 10(12), 125016. — https://ve42.co/Cherry2008
Tyson, J. J. (1994). What everyone should know about the Belousov-Zhabotinsky reaction. In Frontiers in mathematical biology (pp. 569-587). Springer, Berlin, Heidelberg. — https://ve42.co/Tyson1994
Winfree, A. T. (2001). The geometry of biological time (Vol. 12). Springer Science & Business Media. — https://ve42.co/Winfree2001
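The synchronization in the video is usually analyzed with the Kuramoto model cited above: each oscillator has its own natural frequency but is pulled toward the crowd's mean phase. A minimal simulation sketch (oscillator count, coupling strength, and noise scale here are arbitrary choices, not values from any of the papers):

```python
import cmath
import math
import random

def kuramoto(n=50, coupling=2.0, dt=0.01, steps=5000, seed=0):
    """Simulate n coupled oscillators (Kuramoto 1975) with Euler steps and
    return the final order parameter r in [0, 1]:
    r near 0 means incoherence, r near 1 means synchronization."""
    rng = random.Random(seed)
    omega = [rng.gauss(0.0, 0.5) for _ in range(n)]            # natural frequencies
    theta = [rng.uniform(0.0, 2 * math.pi) for _ in range(n)]  # initial phases
    for _ in range(steps):
        # Mean-field form: r * e^(i*psi) is the average of e^(i*theta_j).
        z = sum(cmath.exp(1j * t) for t in theta) / n
        r, psi = abs(z), cmath.phase(z)
        # Each oscillator drifts at omega_i and is pulled toward the mean phase.
        theta = [t + dt * (w + coupling * r * math.sin(psi - t))
                 for t, w in zip(theta, omega)]
    z = sum(cmath.exp(1j * t) for t in theta) / n
    return abs(z)

print(kuramoto(coupling=4.0) > 0.8)   # strong coupling: the population locks up
print(kuramoto(coupling=0.0) < 0.5)   # no coupling: phases stay scattered
```

Above a critical coupling strength (which depends on the spread of natural frequencies), the order parameter jumps toward 1: the spontaneous order out of chaos that the video is about.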
One consideration: With all the images being made daily by smartphones, and all the images being made by space-oriented telescopes, earth-oriented satellites, and other instruments, why don't any of them show these aliens? Why do "aliens" only show up in UFOs picked up by Navy pilots? That is, why do we – some of us, anyhow – give so much credence to those images that we can't explain while at the same time ignoring the fact that those seem to be the only images in which these "aliens" betray their presence?
Monday, September 27, 2021
It finally happened! After 3ish years of hard work, our paper on surveying the deep learning for software engineering (DL4SE) research field has been accepted to #tosem 🥳🎉🥳!— Nathan Cooper (@ncooper57) September 27, 2021
Checkout the camera-ready version on arxiv! https://t.co/DU1BbJcOrf
Sunday, September 26, 2021
On a Shooting Set of Aardman Animations' Early Man!
Adam Savage’s Tested
Adam Savage steps onto one of the film stages at Aardman Animations, where a complex and detailed miniatures set is ready for stop-motion filming. Chatting with one of the Animation Directors of the film, Adam learns how the puppets are mounted on these sets to make them come alive, one frame at a time.
Trailer for Early Man:
Thursday, September 23, 2021
How the ubiquity of search engines is changing people's understanding of how information is [to be] organized on computers [kids these days]
Monica Chin, File Not Found, The Verge, Sept. 22, 2021.
The article opens:
Catherine Garland, an astrophysicist, started seeing the problem in 2017. She was teaching an engineering course, and her students were using simulation software to model turbines for jet engines. She’d laid out the assignment clearly, but student after student was calling her over for help. They were all getting the same error message: The program couldn’t find their files.
Garland thought it would be an easy fix. She asked each student where they’d saved their project. Could they be on the desktop? Perhaps in the shared drive? But over and over, she was met with confusion. “What are you talking about?” multiple students inquired. Not only did they not know where their files were saved — they didn’t understand the question.
Gradually, Garland came to the same realization that many of her fellow educators have reached in the past four years: the concept of file folders and directories, essential to previous generations’ understanding of computers, is gibberish to many modern students.
By contrast, and more traditionally:
Guarín-Zapata is an organizer. He has an intricate hierarchy of file folders on his computer, and he sorts the photos on his smartphone by category. He was in college in the very early 2000s — he grew up needing to keep papers organized. Now, he thinks of his hard drives like filing cabinets. “I open a drawer, and inside that drawer, I have another cabinet with more drawers,” he told The Verge. “Like a nested structure. At the very end, I have a folder or a piece of paper I can access.”
Guarín-Zapata’s mental model is commonly known as directory structure, the hierarchical system of folders that modern computer operating systems use to arrange files. It’s the idea that a modern computer doesn’t just save a file in an infinite expanse; it saves it in the “Downloads” folder, the “Desktop” folder, or the “Documents” folder, all of which live within “This PC,” and each of which might have folders nested within them, too. It’s an idea that’s likely intuitive to any computer user who remembers the floppy disk.
More broadly, directory structure connotes physical placement — the idea that a file stored on a computer is located somewhere on that computer, in a specific and discrete location. That’s a concept that’s always felt obvious to Garland but seems completely alien to her students. “I tend to think an item lives in a particular folder. It lives in one place, and I have to go to that folder to find it,” Garland says. “They see it like one bucket, and everything’s in the bucket.”
That tracks with how Joshua Drossman, a senior at Princeton, has understood computer systems for as long as he can remember. “The most intuitive thing would be the laundry basket where you have everything kind of together, and you’re just kind of pulling out what you need at any given time,” he says, attempting to describe his mental model.
And so on:
It’s possible that the analogy multiple professors pointed to — filing cabinets — is no longer useful since many students Drossman’s age spent their high school years storing documents in the likes of OneDrive and Dropbox rather than in physical spaces. It could also have to do with the other software they’re accustomed to — dominant smartphone apps like Instagram, TikTok, Facebook, and YouTube all involve pulling content from a vast online sea rather than locating it within a nested hierarchy. “When I want to scroll over to Snapchat, Twitter, they’re not in any particular order, but I know exactly where they are,” says Vogel, who is a devoted iPhone user. Some of it boils down to muscle memory.
But it may also be that in an age where every conceivable user interface includes a search function, young people have never needed folders or directories for the tasks they do. The first internet search engines were used around 1990, but features like Windows Search and Spotlight on macOS are both products of the early 2000s.
In my own case, of course, I have a fairly large system of files and folders on my computer. After all, I've been accumulating documents since I got my first Macintosh in 1984, though I've lost a fair number of them to system changes over the years. But it is at best semi-organized. And in the area where I keep my photographs I have some folders with hundreds, perhaps even a thousand or more, documents. I will often find a document by searching for it rather than going immediately to the appropriate folder. Why? Because I have a lot of different documents – now I'm thinking mostly of text files – in many different categories, and many documents could easily be classified in three or four different ways. Which place do I look?
The upshot is that I do understand the laundry basket metaphor. Perhaps the way to think of it is that I've got a traditional hierarchical structure overlaid or intermingled with a laundry basket. But I can't imagine going pure laundry basket. Nor does a traditional hierarchy provide an adequate representation of how I think about my documents.
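The two mental models in the article correspond to two different file-finding operations. A small sketch (the file tree and names here are invented for illustration):

```python
import tempfile
from pathlib import Path

# Build a tiny hypothetical file tree to stand in for a home directory.
root = Path(tempfile.mkdtemp())
(root / "Documents" / "Essays").mkdir(parents=True)
(root / "Photos" / "2021").mkdir(parents=True)
# The same document fits two categories, so a copy lives in each folder.
(root / "Documents" / "Essays" / "trip-notes.txt").write_text("...")
(root / "Photos" / "2021" / "trip-notes.txt").write_text("...")

# The "filing cabinet" model: one path, one place, navigated drawer by drawer.
by_path = root / "Documents" / "Essays" / "trip-notes.txt"
print(by_path.exists())    # True

# The "laundry basket" model: search the whole tree and pull out matches,
# without knowing or caring where they live.
by_search = sorted(root.rglob("trip-notes.txt"))
print(len(by_search))      # 2: same name, two locations
```

The search call finding two copies is the crux of the hierarchy's weakness: when a document plausibly belongs in several folders, a path pins it to exactly one, while search retrieves it from wherever it landed.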
As always, there's much more at the link.
* * * * *
More on how I organize things on my computer: The Diary of a Man and His Machines, Part 2: How’s this Stuff Organized? New Savanna, October 11, 2015, https://new-savanna.blogspot.com/2015/10/the-diary-of-man-and-his-machines-part_11.html
Amia Srinivasan, in a wide-ranging conversation with Tyler Cowen:
I also think one error that is consistently made in this discourse, in this kind of conversation about what’s innate or what’s natural, is to think about what’s natural in terms of what’s necessary. This is a point that Shulamith Firestone made a very long time ago, but that very few people register, which is that — and it was actually made again to me recently by a philosopher of biology, which is, “Look what’s natural isn’t what’s necessary.”
It’s extraordinary. It’s not even like what’s natural offers a good equilibrium point. Think about how much time you and I spend sitting around. Completely unnatural for humans to sit around, yet we’re in this equilibrium point where vast majority of humans just sit around all day.
So, I think there’s a separate question about what humans — as essentially social, cultured, acculturating creatures — what our world should look like. And that’s distinct from the question of what natural predispositions we might have. It’s not unrelated, but I don’t think any of us think we should just be forming societies that simply allow us to express our most “natural orientations.”
There's much more at the link.
Jonathan Malesic, The Future of Work Should Mean Working Less, NYTimes, Sept. 23, 2021.
We need that truth now, when millions are returning to in-person work after nearly two years of mass unemployment and working from home. The conventional approach to work — from the sanctity of the 40-hour week to the ideal of upward mobility — led us to widespread dissatisfaction and seemingly ubiquitous burnout even before the pandemic. Now, the moral structure of work is up for grabs. And with labor-friendly economic conditions, workers have little to lose by making creative demands on employers. We now have space to reimagine how work fits into a good life.
As it is, work sits at the heart of Americans’ vision of human flourishing. It’s much more than how we earn a living. It’s how we earn dignity: the right to count in society and enjoy its benefits. It’s how we prove our moral character. And it’s where we seek meaning and purpose, which many of us interpret in spiritual terms.
Political, religious and business leaders have promoted this vision for centuries, from Capt. John Smith’s decree that slackers would be banished from the Jamestown settlement to Silicon Valley gurus’ touting work as a transcendent activity. Work is our highest good; “do your job,” our supreme moral mandate.
But work often doesn’t live up to these ideals. In our dissent from this vision and our creation of a better one, we ought to begin with the idea that each one of us has dignity whether we work or not. Your job, or lack of one, doesn’t define your human worth.
This view is simple yet radical. It justifies a universal basic income and rights to housing and health care. It justifies a living wage. It also allows us to see not just unemployment but retirement, disability and caregiving as normal, legitimate ways to live. [...]
The idea that all people have dignity before they ever work, or if they never do, has been central to Catholic social teaching for at least 130 years. In that time, popes have argued that jobs ought to fit the capacities of the people who hold them, not the productivity metrics of their employers. [...]
Because each of us is both dignified and fragile, our new vision should prioritize compassion for workers, in light of work’s power to deform their bodies, minds and souls. As Eyal Press argues in his new book, “Dirty Work,” people who work in prisons, slaughterhouses and oil fields often suffer moral injury, including post-traumatic stress disorder, on the job. This reality challenges the notion that all work builds character.
There's more at the link.
Tuesday, September 21, 2021
Owain Evans, How truthful is GPT-3? A benchmark for language models, AI Alignment Forum, Sept. 16, 2021.
Title: TruthfulQA: Measuring how models mimic human falsehoods
Abstract: We propose a benchmark to measure whether a language model is truthful in generating answers to questions. The benchmark comprises 817 questions that span 38 categories, including health, law, finance and politics (see Figure 1). We crafted questions that some humans would answer falsely due to a false belief or misconception. To perform well, models must avoid generating false answers learned from imitating human texts.
We tested GPT-3, GPT-Neo/GPT-J, GPT-2 and a T5-based model. The best model was truthful on 58% of questions, while human performance was 94%. Models generated many false answers that mimic popular misconceptions and have the potential to deceive humans. The largest models were generally the least truthful (see Figure 2 below). For example, the 6B-parameter GPT-J model was 17% less truthful than its 125M-parameter counterpart. This contrasts with other NLP tasks, where performance improves with model size. However, this result is expected if false answers are learned from the training distribution. We suggest that scaling up models alone is less promising for improving truthfulness than fine-tuning using training objectives other than imitation of text from the web.
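The headline comparison reduces to a per-model truthfulness rate over the judged questions. A sketch with made-up labels (the real benchmark judges 817 generated answers per model, by human evaluation or a trained judge; only the inverse-scaling direction below mirrors the paper):

```python
# Hypothetical per-question judgments: True means the generated answer
# was judged truthful. These labels are invented for illustration.
judged = {
    "125M-parameter model": [True, True, True, False, True, False, True, True],
    "6B-parameter model":   [True, False, True, False, True, False, False, True],
}

def truthful_rate(labels):
    """Fraction of benchmark questions answered truthfully."""
    return sum(labels) / len(labels)

for model, labels in judged.items():
    print(f"{model}: {truthful_rate(labels):.0%}")
```

On these toy labels the smaller model scores higher, which is the counterintuitive pattern the paper reports: larger models imitate popular misconceptions more fluently.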
Sunday, September 19, 2021
Saturday, September 18, 2021
Abnormal heart rhythms tend to form simple inter-beat-interval ratios that match music: Brubeck’s Blue Rondo à la Turk for ventricular early beats, Piazzolla’s Le Grand Tango for atrial fibrillation. https://t.co/67eHu6pTyT— Scientific American (@sciam) September 18, 2021
Our preprint is out!— Artjoms Šeļa (@artjomshl) September 17, 2021
Together with Petr Plecháč (@versotym) and @AWLassche we try to show that poetic meters historically tend to maintain distinct semantic ranges. Effect is studied in several 18-20th c. European traditions (cs,ger,rus + en & nl*) https://t.co/IV2JVBab0M
Abstract of the linked article:
Recent advances in cultural analytics and large-scale computational studies of art, literature and film often show that long-term change in the features of artistic works happens gradually. These findings suggest that conservative forces that shape creative domains might be underestimated. To this end, we provide the first large-scale formal evidence of the persistent association between poetic meter and semantics in 18-19th European literatures, using Czech, German and Russian collections with additional data from English poetry and early modern Dutch songs. Our study traces this association through a series of clustering experiments using the abstracted semantic features of 150,000 poems. With the aid of topic modeling we infer semantic features for individual poems. Texts were also lexically simplified across collections to increase generalizability and decrease the sparseness of word frequency distributions. Topics alone enable recognition of the meters in each observed language, as may be seen from highly robust clustering of same-meter samples (median Adjusted Rand Index between 0.48 and 1). In addition, this study shows that the strength of the association between form and meaning tends to decrease over time. This may reflect a shift in aesthetic conventions between the 18th and 19th centuries as individual innovation was increasingly favored in literature. Despite this decline, it remains possible to recognize semantics of the meters from past or future, which suggests the continuity of semantic traditions while also revealing the historical variability of conditions across languages. This paper argues that distinct metrical forms, which are often copied in a language over centuries, also maintain long-term semantic inertia in poetry. Our findings, thus, highlight the role of the formal features of cultural items in influencing the pace and shape of cultural evolution.
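The clustering evaluation the abstract describes can be reproduced in miniature with the Adjusted Rand Index. A sketch under invented assumptions: the paper clusters topic-model features of 150,000 real poems, while the "poems," meter labels, and topic profiles below are synthetic:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import adjusted_rand_score

rng = np.random.default_rng(0)

# Toy stand-in for the paper's setup: each "poem" is a topic-proportion
# vector, and poems in the same meter share a characteristic topic profile.
iamb_center = np.array([0.7, 0.2, 0.1])
trochee_center = np.array([0.1, 0.2, 0.7])
poems = np.vstack([
    iamb_center + rng.normal(0, 0.05, size=(40, 3)),
    trochee_center + rng.normal(0, 0.05, size=(40, 3)),
])
meters = [0] * 40 + [1] * 40   # true meter labels

# Cluster on topics alone, then ask how well the clusters recover the meters.
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(poems)
print(adjusted_rand_score(meters, clusters))
```

An ARI of 1 means the topic-based clusters match the meters exactly, 0 means chance agreement; the paper's median ARIs of 0.48 to 1 sit on this same scale.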
"A Farewell to the Bias-Variance Tradeoff?— Sebastian Raschka (@rasbt) September 18, 2021
An Overview of the Theory of Overparameterized Machine Learning" (https://t.co/6zFxHeXB0c). Good reference and pointers for updating my model evaluation & bias-variance trade-off intro slides for teaching in a few weeks pic.twitter.com/rEoIQKzIMw
Wednesday, September 15, 2021
Callé Hamel, Havana, Cuba. Every Sunday they gather &continue a long tradition of dance and music in honor of the Orishas. In New Orleans in Congo Square, captured Africans did the same, holding on to their spiritual life. Jazz was created out of these gatherings. Bring them Back pic.twitter.com/TVVzcgl3Kw— Wendell Pierce (@WendellPierce) September 15, 2021
As we recover from our storms, we must remember what is our most valuable gift. Our culture in New Orleans. That intersection between our people and life itself. Our spirit. World renowned. The Gathering at Congo Square should be a weekly official event of the City of New Orleans pic.twitter.com/dnZ3BOVCKj— Wendell Pierce (@WendellPierce) September 15, 2021
Tuesday, September 14, 2021
Liu, L., Dehmamy, N., Chown, J. et al. Understanding the onset of hot streaks across artistic, cultural, and scientific careers. Nat Commun 12, 5392 (2021). https://doi.org/10.1038/s41467-021-25477-8
Across a range of creative domains, individual careers are characterized by hot streaks, which are bursts of high-impact works clustered together in close succession. Yet it remains unclear if there are any regularities underlying the beginning of hot streaks. Here, we analyze career histories of artists, film directors, and scientists, and develop deep learning and network science methods to build high-dimensional representations of their creative outputs. We find that across all three domains, individuals tend to explore diverse styles or topics before their hot streak, but become notably more focused after the hot streak begins. Crucially, hot streaks appear to be associated with neither exploration nor exploitation behavior in isolation, but a particular sequence of exploration followed by exploitation, where the transition from exploration to exploitation closely traces the onset of a hot streak. Overall, these results may have implications for identifying and nurturing talents across a wide range of creative domains.
Hsu, T. W., Niiya, Y., Thelwall, M., Ko, M., Knutson, B., & Tsai, J. L. (2021). Social media users produce more affect that supports cultural values, but are more influenced by affect that violates cultural values. Journal of Personality and Social Psychology. Advance online publication. https://doi.org/10.1037/pspa0000282
Although social media plays an increasingly important role in communication around the world, social media research has primarily focused on Western users. Thus, little is known about how cultural values shape social media behavior. To examine how cultural affective values might influence social media use, we developed a new sentiment analysis tool that allowed us to compare the affective content of Twitter posts in the United States (55,867 tweets, 1,888 users) and Japan (63,863 tweets, 1,825 users). Consistent with their respective cultural affective values, U.S. users primarily produced positive (vs. negative) posts, whereas Japanese users primarily produced low (vs. high) arousal posts. Contrary to cultural affective values, however, U.S. users were more influenced by changes in others’ high arousal negative (e.g., angry) posts, whereas Japanese were more influenced by changes in others’ high arousal positive (e.g., excited) posts. These patterns held after controlling for differences in baseline exposure to affective content, and across different topics. Together, these results suggest that across cultures, while social media users primarily produce content that supports their affective values, they are more influenced by content that violates those values. These findings have implications for theories about which affective content spreads on social media, and for applications related to the optimal design and use of social media platforms around the world. (PsycInfo Database Record (c) 2021 APA, all rights reserved)
Monday, September 13, 2021
Some years ago I watched a bunch of TED talks. And then lost interest. These days I rarely watch any TED talks. But I watched this one because it's by Adam Savage and it's about cosplay.
I've known about cosplay for well over a decade, but have never participated. Adam Savage is a new interest. He devotes an enormous amount of time to creating costumes for cosplay. It's obviously important to him and rewarding for him. Why?
That's what he explains in this video, which starts with his childhood and ends with him wearing a No-Face costume (from Miyazaki's Spirited Away) to a Comic-Con. He tells that story starting at about 10:16. Pay attention to what he says about playing the character of No-Face and of people's response to it. The thing is, it takes months, if not years, to build the costume, but the cosplay only lasts for minutes, perhaps hours, a couple of times a year. So I'm still wondering: Why? As part of that, what is the relationship between the part one plays while in costume and how one is in real life?
And the Why? isn't just about Adam Savage. It's about the culture of cosplay, and Comic-Cons. What role does that play in our culture? Do we need more of it? If so, why?
* * * * *
Addendum (9/24/21): For more on Adam Savage on cosplay, see the first part of the following video:
Gary Marcus and Ernest Davis, Has AI found a new Foundation?, The Gradient.
You may have heard that over 100 AI researchers recently gathered at Stanford to announce the emergence of what they call "Foundation Models" as the new and reigning paradigm in AI.
"Although the term is new," Marcus and Davis explain, "the general approach is not." They elaborate:
You train a big neural network (like the well-known GPT-3) on an enormous amount of data, and then you adapt (“fine-tune”) the model to a bunch of more specific tasks (in the words of the report, "a foundation model ...[thus] serves as [part of] the common basis from which many task-specific models are built via adaptation"). The basic model thus serves as the “foundation” (hence the term) of AIs that carry out more specific tasks. The approach started to gather momentum in 2018, when Google developed the natural language processing model called BERT, and it became even more popular with the introduction last year of OpenAI’s GPT-3.
Their article is a critique of the approach, noting, "The broader AI community has had decidedly mixed reactions to the announcement from Stanford and some noted scientists have voiced skepticism or opposition." I'm sympathetic to these critiques, but I'm not particularly interested in summarizing them here. You can read the whole article for that.
I'm interested in their brief characterization of what a proper foundation would require:
First, a general intelligence needs to maintain a cognitive model that keeps track of what it knows about the world. An AI system that powers a domestic robot must keep track of what is in the house. An AI system that reads a story or watches a movie must keep track both of the current state of people and things, and of their whole history so far.
Second, any generally intelligent system will require a great deal of real-world knowledge, and that knowledge must be accessible and reusable. A system must be able to encode a fact like “Most people in Warsaw speak Polish” and use it in the service of drawing inferences. (If Lech is from Warsaw, there is a good chance he speaks Polish; if we plan to visit him in Warsaw, we might want to learn a little Polish before we visit, etc.).
Third, a system must be able not only to identify entities (e.g., objects in a photo or video) but also be able to infer and reason about the relationships between those entities. If an AI watches a video that shows a person drinking cranberry grape juice, it must not only recognize the objects, but realize that the juices have been mixed, the mixture has been drunk, and the person has quenched their thirst.
Fourth, the notion that linguists call compositionality is similarly central; we understand wholes in terms of their parts. We understand that the phrase the woman who went up a mountain and came down with a diamond describes a particular woman. We can infer from the parts that (other things being equal) she now possesses a diamond.
Fifth, in order to communicate with people and reason about the world a wide range of common sense knowledge that extends beyond simply factoids is required. In our view [link rebooting AI], common sense must start with a basic framework of understanding time, space, and causality that includes fundamental categories like physical objects, mental states, and interpersonal interactions.
Sixth, intelligent agents must be able to reason about what they know: if you know that a mixture of cranberry juice and grape juice is non-toxic, you can infer that drinking it is unlikely to cause you to die.
Finally, we would hope that any general intelligence would possess a capacity to represent and reason about human values. A medical advice chatbot should not recommend suicide.
In the end, it all comes down to trust. Foundation models largely try to shortcut all of the above steps. Examples like the juice case show the perils of those kinds of shortcuts. The inevitable result is systems that are untrustworthy. The initial enthusiasm for GPT-3 for example has been followed by a wave of panic as people have realized how prone these systems are to producing obscenity, prejudiced remarks, misinformation, and so forth. Large pretrained statistical models can do almost anything, at least enough for a proof of concept, but there is precious little that they can do reliably—precisely because they skirt the foundations that are actually required.
I'm not sure whether or not those requirements are adequate; I've not yet attempted to think it through. But they are a sobering reminder of what the foundationalists have yet to think through.
The whole article is worth reading.
Farah Stockman, "What Failure? For Some, the War in Afghanistan Was a Big Success", NYTimes, Sept 13, 2021.
Instead of a nation, what we really built were more than 500 military bases — and the personal fortunes of the people who supplied them. That had always been the deal. In April 2002, Defense Secretary Donald Rumsfeld dictated a top-secret memo ordering aides to come up with “a plan for how we are going to deal with each of these warlords — who is going to get money from whom, on what basis, in exchange for what, what is the quid pro quo, etc.,” according to The Washington Post.
The war proved enormously lucrative for many Americans and Europeans, too. One 2008 study estimated that some 40 percent of the money allocated to Afghanistan actually went back to donor countries in corporate profits and consultant salaries. Only about 12 percent of U.S. reconstruction assistance given to Afghanistan between 2002 and 2021 actually went to the Afghan government. Much of the rest went to companies like the Louis Berger Group, a New Jersey-based construction firm that got a $1.4 billion contract to build schools, clinics and roads. Even after it got caught bribing officials and systematically overbilling taxpayers, the contracts kept coming.
“It’s a bugbear of mine that Afghan corruption is so frequently cited as an explanation (as well as an excuse) for Western failure in Afghanistan,” Jonathan Goodhand, a professor in Conflict and Development Studies at SOAS University of London, wrote me in an email. Americans “point the finger at Afghans, whilst ignoring their role in both fueling and benefiting from the patronage pump.” [...]
What stands out about the war in Afghanistan is the way that it became the Afghan economy. At least Iraq had oil. In Afghanistan, the war dwarfed every other economic activity, apart from the opium trade.
Over two decades, the U.S. government spent $145 billion on reconstruction and aid, and an additional $837 billion on war fighting, in a country where the G.D.P. hovered between $4 billion and $20 billion per year. [...]
“The money spent was far more than Afghanistan could absorb,” concluded the special inspector general of Afghanistan’s final report. “The basic assumption was that corruption was created by individual Afghans and that donor interventions were the solution. It would take years for the United States to realize that it was fueling corruption with its excessive spending and lack of oversight.”
Saturday, September 11, 2021
"People with extreme political views that favor authoritarianism – whether they are on the far left or the far right – have surprisingly similar behaviors and psychological characteristics, a new study finds." https://t.co/Bz0PryWlkQ— Steve Stewart-Williams (@SteveStuWill) September 11, 2021
Thursday, September 9, 2021
I’ve just watched The Apartment, a romantic comedy from 1960. According to its Wikipedia entry the film “has come to be regarded as one of the greatest films ever made, appearing in lists by the American Film Institute and Sight and Sound magazine. In 1994, it was one of the 25 films selected for inclusion to the United States Library of Congress National Film Registry.” It’s also strange. Not creepy strange or puzzling strange, but really? strange.
It stars Jack Lemmon as a clerk in a large insurance company headquartered in New York City. He falls in love with Shirley MacLaine, who operates an elevator in the building where the company has its offices. She’s having an affair with his supervisor, played by Fred MacMurray, but he doesn’t know this, at least not at the beginning of the film. Here’s the strange part: somehow he’s gotten himself into a situation where he loans out his Upper West Side apartment to four different managers so they can conduct affairs. This, we’re supposed to believe, is so he can climb the corporate ladder. And it works!
Who dreamt that one up? As the film was written by Billy Wilder and I. A. L. Diamond, I suppose it was one of them. Where’d they get that idea? The only thing that halfway makes any sense is that they wanted to do a male parallel to a story in which a woman sleeps her way to a promotion but couldn’t come up with any direct way to pull it off. And so this is what they came up with. Now, I’m not suggesting that they went through some conscious decision process to arrive at this result. I’d guess it was one of those sudden “light bulb” inspirations: Hey! Wouldn’t it be funny if... But somehow the psycho-cultural forces involved are along the lines I’ve suggested. The tit-for-tat in this exchange game is mechanical and explicit, and Lemmon does in fact advance as he loans out his apartment. It’s the mechanical nature of this game that makes me suspicious of the underlying psycho-dynamics. (I note, however, that the Wikipedia article suggests other inspirations for the plot.)
Of course, the obverse of this screwy scheme is that the film was about adultery, as all the managers were married. That made it a bit scandalous in its time (1960). I suppose we can read the screwy mechanism as a way of distracting attention from the adultery that fueled the merry-go-round.
Anyhow, MacMurray wasn’t one of the four managers originally in on the scheme. But he knew about it and decided to avail himself of Lemmon’s apartment for a tryst with MacLaine. And that’s how Lemmon discovered that she was having an affair with his boss. Of course, there are twists and turns along the way: she attempts suicide, he nurses her back to health, she falls in love with him, he quits his job (after receiving a promotion that entitles him to the executive washroom), and the film ends with the two of them happily playing gin rummy in his apartment.
The film was a hit.
* * * * *
It would be interesting to compare three films from that era, the original Ocean’s 11 (1960), Breakfast at Tiffany’s (1961), and The Apartment (1960), with a recent series, Mad Men (2007-2015), which is set in the early 1960s.
I've been watching a lot of videos by Adam Savage, who was a model-maker in the film industry before becoming co-host of MythBusters. A lot of his videos involve model building of various kinds, whether from scratch or from kits, including Lego kits. A lot of the issues he talks about in constructing those models come up in this video where my cousin, Erik Ronnberg, Jr., talks about building a museum-quality model of a fishing schooner.
The overriding issue is that of narrative: the model implies a narrative. For Savage this comes most strongly into play in the last stage of model building, "weathering." How do you finish the model's surfaces so that it looks used? What particular history gave it that look?
Erik doesn't illustrate building his model, which took 1800 hours or so. This is a presentation made at the model's unveiling. But he does point out the features of the model, starting at about 20:45. Erik explains that he chose to depict the vessel under full sail and coming close to the fishing grounds. He arranged figures of the crew to show them preparing to commence fishing.
About the video:
This event commemorates the unveiling of a model of the fishing schooner Elsie that was constructed by Cape Ann Museum maritime curator Erik Ronnberg, Jr., who learned the intricate art of ship model building from his father, Erik Ronnberg, Sr. The model was commissioned by Wilbur James who is a Rockport native, a member of the museum’s Board of Directors, and the great-grandson of a former owner of the real Elsie, Charles River Pearce. Assembled in the crowd are relatives of the Elsie’s past captains, additional owners, builders, designer, and crew. Along with a slide show that highlights the model’s incredible detail, Ronnberg speaks about the process of building such an exact replica, down to the four-strand left-hand turned anchor cable and lucky horseshoe attached to the windlass. In the course of the discussion that follows the lecture, many harrowing experiences of the crew who earned their living fishing on these great ships are recounted.
Wednesday, September 8, 2021
Neurons are not like logic gates, they can only be modeled using something like 10,000 coupled & non-linear differential equations. This research found a single rat neuron can also be approximated by a complex artificial neural network. Nice summary: https://t.co/Aaii0mZqHQ https://t.co/S0a2qZhEDb— Ethan Mollick (@emollick) September 8, 2021
Tuesday, September 7, 2021
Neural Recognizers: Some [old] notes based on a TV tube metaphor [perceptual contact with the world]
Another bump to the top can't hurt. [Sept 2021]
* * * * *
Introduction: Raw Notes
I'm bumping this to the top of the queue because of GPT-3. I'm reconfiguring and restructuring like crazy. More later.
A fair number of my posts here at New Savanna are edited from my personal intellectual notes. In this post the notes are unedited. This is an idea that dates back to my graduate school days in English at SUNY Buffalo. Since I keep my notes in Courier – a font that harks back to the days of manual typewriters – I’ve decided to retain that font for these posts and to drop justification.
Since these notes are “raw” you’re pretty much on your own. Sorry and good luck.
1.26.2002 – 1.27.2002
This is the latest version of an idea I first explored at Buffalo back in the late 1970s. It was jointly inspired by William Powers’ notion of a zero reference level at the top of his servo stack and by D’Arcy Thompson. I’ve transcribed some of those notes into the next section. A version of this appeared in the paper DGH (David Hays) and I wrote on natural intelligence, where we talked in terms of Pribram’s neural holography and Spinelli’s OCCAM model for the cortical column:
- W. Benzon and D. Hays. Principles and Development of Natural Intelligence. Journal of Social and Biological Structures 11, 293 - 322, 1988.
- Powers, W.T. (1973). Behavior: The Control of Perception. Chicago: Aldine.
- Pribram, K. H. (1971). Languages of the Brain. Englewood Cliffs, New Jersey: Prentice-Hall.
- Spinelli, D. N. (1970). Occam, a content addressable memory model for the brain. In (K. H. Pribram & D. Broadbent, Eds): The Biology of Memory. New York: Academic Press, pp. 293-306.
“TV Tube Recognizer”
Imagine a TV screen with a circle painted on it and with controls which allow you to operate on and manipulate the projection system in various useful ways. We’re going to use this to conduct an active analysis of the input to the screen.
Assume that the object to be analyzed is projected onto the screen in such a way that its largest dimension doesn’t extend beyond the circle painted on it. The analysis consists of twiddling the [control] dials until the area between the outer border of the object and the inner border of the circle is as small as possible. That is “minimize area between object and circle” is the reference signal for this servo-mechanical procedure, while “twiddle the dials” is the output function. (Notice that we are not operating on the input signal to the TV screen.)
One thing we might do by dial twiddling is to operate on the coordinate system of the projection (I’m thinking here of D’Arcy Thompson’s grids, whereby a bass on one coordinate grid becomes a flounder when projected onto a different grid). Thus if the input is a vertical ellipse, a horizontal stretch would lower the area between the ellipse and circle [painted on the TV screen]. One could bend the axes or distort them in various ways. Or, how about allowing the system to partition the screen in various ways and then make local alterations in the coordinate system within each partition?
It doesn’t make much difference what [we do]; the point is that the system has some way of operating on the image on the screen (without messing around with the input to the screen ...). The settings on the dials when the area between the projected object and the circle is at a minimum then constitute the analysis of the object. To the extent that objects differ, the differences in the dial settings differentiate between objects (we are limited, of course, by the resolving power of the system). The settings which are best for a buzzard won’t be best for a flounder, nor a pine tree, nor a star, etc.
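The dial-twiddling servo loop can be sketched in code. This is a minimal toy of my own devising, not anything in the original notes: the object is an ellipse with semi-axes (a, b), the "dials" are horizontal and vertical stretch factors, and a brute-force search finds the settings that minimize the mismatch with the unit circle. Those settings are the "analysis" of the object.

```python
# Toy sketch of the "TV tube recognizer." The dials are stretch factors
# (sx, sy). As a simple stand-in for the area between the stretched
# object and the painted circle, we penalize the semi-axis mismatch.

def mismatch(a, b, sx, sy):
    """How far the stretched ellipse is from the unit circle."""
    ra, rb = a * sx, b * sy
    # For a circle of radius 1, both semi-axes must equal 1.
    return abs(ra - 1.0) + abs(rb - 1.0)

def twiddle_dials(a, b, steps=200):
    """Brute-force the dial settings that minimize the mismatch."""
    best = None
    for i in range(1, steps + 1):
        for j in range(1, steps + 1):
            sx, sy = i * 0.05, j * 0.05
            m = mismatch(a, b, sx, sy)
            if best is None or m < best[0]:
                best = (m, sx, sy)
    return best  # (mismatch, sx, sy): the dial settings ARE the analysis

# A vertical ellipse (a=0.5, b=2.0) is "recognized" by a horizontal
# stretch (sx near 2.0) and a vertical squash (sy near 0.5).
m, sx, sy = twiddle_dials(0.5, 2.0)
```

In the notes' terms, `mismatch` plays the role of the error signal against the reference ("minimize area between object and circle") and the search loop plays the role of twiddling the dials; a real servo would descend the error gradient rather than search exhaustively.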
The most obvious difficulty with this story is that it depends on someone observing the TV screen and twiddling the control knobs. We want to eliminate that someone so that the system can achieve the desired result itself.
The obvious way to do this is to call on the self-organizing capacity of cortical neural tissue. That tissue is itself the TV screen and control knobs while the appropriate subcortical thalamic nucleus is the source of input to the recognizer. The reference level is alpha oscillation, reflecting the observation that alpha energy is high when the stimulus is familiar and low when it is not. Unfamiliar input disturbs the oscillation and the recognizer seeks to restore oscillation by temporarily altering the properties (twiddling the dials) of the input array (thalamic nucleus).
The neocortex is conceived as a patchwork of pattern recognizers; each is a sheet of cortical columns. Neighboring columns are mutually inhibitory, as in OCCAM (Spinelli 1970). A high level of output from one column will suppress output in its neighbors. The patterns are recognized in the primary input (input array) to a given recognizer. Let us assume a recognizer whose primary input is subcortical and let us set aside consideration of other inputs. The recognizer also generates primary output, which goes to the subcortical source of primary input. The computing capacity of a recognizer is far greater than that of its primary input.
The base state of such a recognizer occurs when the input is random (of a certain unspecified quality). In this base state the columns in the array oscillate – given a rather old notion that high alpha means low arousal, I’ve been thinking this would be at alpha; but, perhaps in view of Freeman’s work, I should revise this in favor of intrinsic chaos. The recognizer acts to maintain this base state under all conditions. When there is a non-random perceptual signal that signal will necessarily perturb the array so that it no longer oscillates smoothly. The array proceeds to form an impression of that input by sending (inhibitory) signals to the primary input. Some cortical columns will necessarily play a stronger role in this process than others. Eventually the recognizer will find some combination of outputs (modifying the properties of the input array) that restores randomness, and hence smooth oscillation. When this point is reached, the impression has been formed. This impression is of the input. In common parlance, we might want to say it represents that input.
Now some process must take place in the array so that the current perturbation can either be habituated into the background or an impression be taken, that is, can become part of the permanent repertoire of the recognizer. This latter, presumably, involves Hebbian learning and is triggered by reinforcement. In the manner of Spinelli’s OCCAM, the recognizer has many such impressions stored in its synaptic weights. A perceptual signal is presented across the entire array and, if it is of a kind that has already made an impression on the array, that impression will be evoked from the array and restore the recognizer to periodic oscillation. If it is of a kind that has not yet made an impression, then a new impression must be made.
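The match-or-form cycle can be sketched as a toy content-addressable memory, loosely in the spirit of Spinelli's OCCAM. The class, the cosine matching, and the threshold are my assumptions for illustration, not Spinelli's actual model; mutual inhibition among columns is collapsed into a simple winner-take-all over match scores.

```python
import math

def cosine(u, v):
    """Similarity between a signal and a stored impression."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

class Recognizer:
    """Toy match-or-form memory: evoke a stored impression if one is
    close enough, otherwise take a new impression of the input."""

    def __init__(self, threshold=0.9):
        self.impressions = []      # stored "synaptic weight" vectors
        self.threshold = threshold

    def present(self, signal):
        """Return ('match', i) or ('form', i) for impression index i."""
        if self.impressions:
            scores = [cosine(signal, w) for w in self.impressions]
            best = max(range(len(scores)), key=scores.__getitem__)  # winner-take-all
            if scores[best] >= self.threshold:
                # Familiar input: the old impression restores oscillation.
                return ('match', best)
        # Unfamiliar input: form (and fix) a new impression.
        self.impressions.append(list(signal))
        return ('form', len(self.impressions) - 1)

r = Recognizer()
r.present([1.0, 0.0, 0.0])    # novel: forms impression 0
r.present([0.95, 0.05, 0.0])  # near-duplicate: matches impression 0
r.present([0.0, 1.0, 0.0])    # novel again: forms impression 1
```

The habituating/fixing distinction in the notes would correspond to whether the appended impression is kept permanently or allowed to decay; this sketch keeps everything.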
Now, in fact, each recognizer has a variety of secondary inputs coming from other recognizers and it generates secondary outputs to them. All of them are attempting to account for their input simultaneously; through their secondary inputs and outputs the recognizers “help” one another out. Further, each recognizer has inputs from subcortical nuclei which send neuromodulators to the array, and it sends outputs to those nuclei which indicate its state of operation. The neuromodulators cause the recognizer to switch between its different operating modes.
I see these operating modes as follows:
Baseline: There is no perceptual load. The array is oscillating at alpha (chaos?).
Tracking: Perceptual input is accounted for. The array has recognized the input and is oscillating at alpha (chaos?).
Matching: The array is under a perceptual load and is attempting to match the input using its current set of impressions. EEG: “desynchronized,” gamma?
Forming (an impression): The array is under a perceptual load, but is unable to match it from its current impression repertoire. It is now forming a new impression. Obviously one critical aspect of the recognizer’s operation is switching from an unsuccessful matching operation to forming. EEG: “desynchronized,” gamma?
Habituating: The array is under a perceptual load, a new impression has been formed, and it has been assimilated into the background.
Fixing: A new impression has been formed. It must now become part of the permanent repertoire of impressions. This is the beginning of LTP. EEG: high alpha?
Group Expressive Behavior
We could apply this line of thought to group expressive behavior where the members of the group are regarded as oscillators coupled to one another through mutual perception and coordinated action. The simplest such behavior would be moving together, or clapping, to an isochronous pulse.
Assume a group moving to an isochronous pulse. Further assume that this activity is cortically controlled. Now, imagine that various members of the group are driven by subcortical impulses to inflect their movement in noticeable ways. These inflections will be transmitted to others through the coupling. Adjustments made to accommodate these inflections become, in effect, the group’s impression of those subcortical impulses.
This needs to be worked through rather more carefully, which will certainly change things a bit. But what I’m driving at is that these group impressions will become the stuff of culture. Here’s where we get memes and performance trajectories [as those are defined in Beethoven’s Anvil].
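The picture of group members as mutually coupled oscillators is standard Kuramoto-model territory, and the entrainment to an isochronous pulse can be sketched directly. This is my illustration, not anything from Beethoven's Anvil: each member is a phase oscillator with its own natural tempo, pulled toward the group's mean phase; the order parameter r (between 0 and 1) measures how tightly the group has locked together.

```python
import math
import random

def order_parameter(phases):
    """Kuramoto order parameter r: 0 = incoherent, 1 = fully locked."""
    n = len(phases)
    re = sum(math.cos(p) for p in phases) / n
    im = sum(math.sin(p) for p in phases) / n
    return math.hypot(re, im)

def simulate(n=50, coupling=2.0, dt=0.01, steps=5000, seed=0):
    """Euler-integrate n coupled phase oscillators; return final r."""
    rng = random.Random(seed)
    phases = [rng.uniform(0, 2 * math.pi) for _ in range(n)]
    freqs = [1.0 + rng.gauss(0, 0.1) for _ in range(n)]  # near-identical tempos
    for _ in range(steps):
        re = sum(math.cos(p) for p in phases) / n
        im = sum(math.sin(p) for p in phases) / n
        r = math.hypot(re, im)
        psi = math.atan2(im, re)
        # Mean-field Kuramoto update: each oscillator is pulled toward
        # the group's mean phase in proportion to coherence r.
        phases = [p + dt * (w + coupling * r * math.sin(psi - p))
                  for p, w in zip(phases, freqs)]
    return order_parameter(phases)
```

With coupling well above the critical value for this frequency spread, the group locks onto a common pulse and r approaches 1; in the terms above, the "group impression" would be the pattern of phase adjustments the members settle into.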
Monday, September 6, 2021
Sunday, September 5, 2021
The beginning of an interesting thread about NLP:
well, all true, but at least they work. like, previous-gen models *also* didn't understand meaning, and *also* considered only form. they were just much worse at this. so much worse that no one could ever imagine that they capture any kind of meaning whatsoever. they didn't work.— (((ل()(ل() 'yoav))))👾 (@yoavgo) August 29, 2021
The end of that thread:
and the thing that we *can* do to improve result, like futzing around with prompts and small architectural tweaks, i find kinda arbitrary and intellectually dull. but hey, at least these things work.— (((ل()(ل() 'yoav))))👾 (@yoavgo) August 29, 2021
Friday, September 3, 2021
Searle imagines himself alone in a room following a computer program for responding to Chinese characters slipped under the door. Searle understands nothing of Chinese, and yet, by following the program for manipulating symbols and numerals just as a computer does, he produces appropriate strings of Chinese characters that fool those outside into thinking there is a Chinese speaker in the room. The narrow conclusion of the argument is that programming a digital computer may make it appear to understand language but does not produce real understanding. Hence the “Turing Test” is inadequate. Searle argues that the thought experiment underscores the fact that computers merely use syntactic rules to manipulate symbol strings, but have no understanding of meaning or semantics. The broader conclusion of the argument is that the theory that human minds are computer-like computational or information processing systems is refuted. Instead minds must result from biological processes; computers can at best simulate these biological processes.
Intentionality is the property of being about something, having content. In the 19th Century, psychologist Franz Brentano re-introduced this term from Medieval philosophy and held that intentionality was the “mark of the mental”. Beliefs and desires are intentional states: they have propositional content (one believes that p, one desires that p, where sentences substitute for “p” ).
I demonstrated years ago with the so-called Chinese Room Argument that the implementation of the computer program is not by itself sufficient for consciousness or intentionality (Searle 1980). Computation is defined purely formally or syntactically, whereas minds have actual mental or semantic contents, and we cannot get from syntactical to the semantic just by having the syntactical operations and nothing else. To put this point slightly more technically, the notion “same implemented program” defines an equivalence class that is specified independently of any specific physical realization. But such a specification necessarily leaves out the biologically specific powers of the brain to cause cognitive processes. A system, me, for example, would not acquire an understanding of Chinese just by going through the steps of a computer program that simulated the behavior of a Chinese speaker (p.17).