Tuesday, June 30, 2020

Spots on white [flower]

So you want to be an auctioneer [the art of the chant]

Katy Vine, Do You Have What It Takes to Be a Master Auctioneer?, Texas Monthly, June 24, 2020.

The auctioneer's chant:
Live bid-calling is like a series of contracts, and when an auctioneer says “Sold,” accepting the bid, the highest bidder is on the hook. Therefore, each part of the chant is crucial. “A chant is made of three components: a statement, a question, and a suggestion,” Jones began. The jumbles of syllables between the numbers are called filler words. The class scribbled. The basic chant Jones proposed—the one we would employ for the remainder of the class and that would provide a soundtrack for all our dreams and nightmares—was “One dollar bid, now two, now two, will you give me two?”

This chant was the “Dick and Jane” of the form, the starter set upon which we would build our own auctioneer identities. A chant is as much a trademark to an auctioneer as James Brown’s scream and Bob Wills’s high-pitched holler were to them. All chants, Jones stressed, must be conducive to rhythm, melody, and clarity, exploiting words that easily roll off the tongue. Filler words and phrases like “bid ’em at” and “gimme” and “I’m bid” and “how ’bout” are better than “what about,” since consonants that use the front of the mouth (like b) require less breath than letters like w. Jones asked us to hold our hands in front of our mouths to compare “who” to “gimme” to demonstrate how much more breath was required for the former. The numbers need to be clear; the filler words are there for pleasure—to add an energetic, pressing, hypnotic quality. Filler words should give buyers time to consider their next offers but not so much time that the rhythm of the chant is broken. A clunky chant could lead to a hoarse auctioneer and confused or sluggish audience members, reluctant to bid.

“There is no book,” Jones continued. “There’s no company that says, ‘This is how you do it.’ There is no professor except me, Professor Jones.”
Jones's own chant:
I’ve listened to Jones’s own chant several times; in 1998 it won the International Auctioneer Championship. Best I can tell, it begins, “Five dollar now five gidibid five dollar now ten gidibid ten, ten now fifteen digibigit now fifteen now fifteen gidibid, now twenty?” He varies his hum around a low F key, breaking in with updates like “You’re out here!” and acknowledgments like “Thank you, sir!” The chant strikes the perfect balance between soothing and invigorating, like mall music.

Standing before the class now, he launched into a short example. “I’m at fifty dollars, now seventy-five, will you take it at seventy-five?” he started. Watching an authority chant up close was hardly demystifying. Jones held the microphone with his right hand. He used his left to both acknowledge bids and query potential bidders in the audience—fingers together, palm facing up, reaching high to direct his attention to the back of the room, extending low to indicate a bidder in the front of the room. He was a performer and a conductor at the same time, scanning the crowd for any hint of interest and responding with an inquiring expression. Physically, he exuded calm, steady authority.

“You have to sound melodic, but you don’t have to do a lot. If you roll it out right, it sounds like you’re doing a lot of work, but you’re not, really,” he said. He stressed that the chant needed to fit the situation. “If you’re doing a charity auction and you go too fast, you’ll lose your crowd. In real estate, you’ll scare them to death.”
It's mine!
People often pay more at live and online auctions than they would in a more straightforward transaction. They suffer from afflictions like “the winner’s curse”—when auction winners pay either more than they planned to or more than the item is worth—and “bidder’s heat,” a frenzied mindset as old as ancient Rome (calor licitantis). We’re helpless against the primal pull of competition.

While buyers may come to an auction looking for a bargain, a funny thing happens after they take their seats. “You need to know this, as an auctioneer,” Jones said. “The minute a lot of buyers place a bid on an item, they’re desperate to keep it. They become territorial. They’re thinking, ‘I’m not letting someone else get it’—then someone outbids them on it! ‘How dare you!’ ”

The increments also have a huge effect on competition. Starting low gives the crowd the impression of a deal, which sticks in their minds even as the bids begin to soar. Then, once the auction gets rolling, the increments usually increase: if an auctioneer is raising the bid in ten-dollar upticks and a bidder shouts out a twenty-dollar raise, bids generally keep going up at that rate. That causes other bidders to think, “Hey, this Garfield mug is hot!” They become even more motivated.
It's like being in a secret club:
At an auction, “actual worth” falls away. Worth is dependent upon bidders’ momentary whims—various, unique, personal equations that neutralize the outside world’s opinions as well as the seller’s sentimentality.

The bidder, I saw, was like a spectator at a magic show. Audience members walk in with a skeptical mind, but they secretly crave a portal into another dimension, where anything is possible. Where financial realities are suspended in the face of tantalizing urgency. Whether that old poster is really worth the thousand dollars you paid for it is almost irrelevant. You won!
There's more at the link.

Monday, June 29, 2020

MAGA: We've got the BEST pandemic ever

Music therapy, synch, and biorhythms



At YouTube:
Equipped with both the skills of a computer engineer and the artistry of an accomplished musician, Grace Leslie has developed a technique to use her own brain and body as a musical instrument. Grace's heartbeats, neuroelectric activity, and other biofeedback are collected from interfaces on her body and fed into a computer, which then converts them into flowing waves of sound. Just as a musician might tune a guitar or a piano, Grace must carefully adjust her emotional and physical state to create the right tones during these performances. In turn, these sonic experiences and the lessons they've taught Grace are part of a growing body of research conducted in her lab at Georgia Tech's Center for Music Technology. There, she and a team of researchers explore the ways that biorhythms can be translated into music and, vice versa, how music can create physiological effects in listeners. Ultimately, Grace and her students aim to create new forms of music or sounds that can act as sonic therapies for listeners.

Lavender hydrangea

Three Parties I Have Known

In this time of the coronavirus I'm bumping this to the top of the queue just to remind everyone that fun is real!
If you’ve ever been to a really good party, or a happening nightclub scene, you know what I’m talking about. And I’m not even a party person. But I know a good time when one’s swirling about my legs and crawling up my spine.

Yes I do!

But writing about parties, as I’m about to do, is tricky business, especially if the parties you are writing about happened half a life-time ago and have left few memories which you can recount in appropriate detail. Without the detail, what is there, but the...feeling is not the word, nor ambience, perhaps residue, Benjamin’s aura? Somewhere in there.

I HAVE known some good parties. Some of the best are those where I’ve performed as a musician. But I was not a musician at these parties, though I helped organize the third. These are the parties that stick in my mind as (among) the best I have known.

1960s at Johns Hopkins

I was an undergraduate and then a graduate student (master’s program). I don’t recall whether this party happened during my undergraduate or graduate years. Not that it matters.

It was in the Hopkins gym on the north end of the Homewood campus. So the room was large and the ceiling high. The music was rock, with a bit of a psychedelic vibe. There was a light show, colored oils floating in water in a shallow dish atop an overhead projector – do they still exist, overhead projectors, or have they been completely replaced by digital projectors of various kinds? The light show required a certain ambient darkness which it could then illuminate.

But I forget just where the light show was being projected. One wall, the ceiling, both? Don’t know.

Nor do I recall the music, not in the sense that I can bring it up in my mind’s ear. Though who knows what could come bubbling forth under hypnosis. The band was a trio, guitar, bass, and drums. And I have a vague sense that they were spread apart a bit on the floor rather than being tightly grouped.

It was the guitarist that caught my ear, his expertise with the wah-wah. Sounded like Zappa. I told him so during a break and we had a nice chat. Zappa at Hopkins. What fun.

Somehow it was all magic. The world melted away and we all melted into the party.

1970s at UB

That’s the State University of New York at Buffalo, known as UB (it used to be a private school, the University of Buffalo). This party just happened. No one planned it. It just grew up out of the ground one afternoon.

At that time the English Department was housed in two long, low buildings that had been erected at the north end of the Main Street campus as temporary structures. Somehow they became all but permanent. The buildings ran parallel to one another about, say, 20 or 30 feet apart, with enclosed walkways between them at one or two points. This little complex was surrounded by grass of some useful extent.

It was Spring, of 1977 or 1978, and by then the grass had come back, the leaves were on the trees, and the temperature was comfortable. For some reason, somehow, the faculty decided to throw a party that afternoon, for the graduate students, for the undergraduate majors, for themselves, and, as a practical matter, for anyone who came by. They set up a temporary bar outside on the lawn and started serving drinks.

Where did the tables come from? I don’t know. And the booze? Don’t know that either. Someone had an idea and someone did this and that, and things happened. Word spread and a party was on.

We drank and talked and many of us got pleasantly tipsy, though some, I recall vaguely, drank to the point of nausea and throwing up.

Somehow it was all magic. Some people kissed people they’d hardly even talked to before, even me, though I forget the woman’s name. We had one date after the party, and that was that.

1980s at Grafton, NY

I was on the faculty at the Rensselaer Polytechnic Institute in Troy, New York. My friend Al and I rented a small house on the top of a hill in Grafton, about 15 miles outside of Troy. You could see into Massachusetts from our second floor.

We decided to throw a Halloween party. We sent out invitations – just how, I forget, but this was before email and I don’t recall stuffing a lot of envelopes. We hired some undergraduates to form a jazz band and set them up in a largish downstairs room. One of them wrapped himself in toilet paper so he became a mummy.

I forget what I did for a costume, but Al managed some kind of penguin costume.

A couple of hours before things began Al was cutting veggies for hors d’oeuvres when he managed to slice into the meat of his hand (I forget which one–all this forgetting) down to the bone. The party was set to begin in two hours and I had to drive Al to the emergency room–keep calm. Which I did, and left him there. Someone else would pick him up and bring him to the party.

I returned to Grafton, finished whatever needed to be finished. People began arriving and, by the time our little house was packed, Al had returned from the hospital and was in fine spirits. The joint was rockin’.

I remember one woman, a faculty wife, had costumed herself as an upside-down person: pants pulled on over her arms, some silly head hanging between her legs, which had a shirt pulled over them so they could pretend to be arms as her hands pretended to be feet. Ridiculous. Fun. I mean, when talking to her, where do you direct your voice, to her pretend head (down there) or to her pretend crotch (in your face)?

Somehow it was all magic. The mundane world was gone and the cosmos filled our house.

Sunday, June 28, 2020

If you want to be happy, it helps to be rich

Twenge, J. M., & Cooper, A. B. (2020). The expanding class divide in happiness in the United States, 1972–2016. Emotion. Advance online publication. https://doi.org/10.1037/emo0000774
Abstract: Is there a growing class divide in happiness? Among U.S. adults ages 30 and over in the nationally representative General Social Survey (N = 44,198), the positive correlation between socioeconomic status (SES; including income, education, and occupational prestige) and happiness grew steadily stronger between the 1970s and 2010s. Associations between income and happiness were linear, with no tapering off at higher levels of income. Between 1972 and 2016, the happiness of high-SES White adults was fairly stable, whereas the happiness of low-SES White adults steadily declined. Among Black adults, the happiness of low-SES adults was fairly stable, whereas the happiness of high-SES adults increased. Thus, the happiness advantage favoring high-SES adults has expanded over the decades. Age–period–cohort analyses based on hierarchical linear modeling demonstrate that this effect is primarily caused by time period rather than by birth cohort or age.

Pink clouds over Manhattan


Different though they are, David Graeber and Peter Thiel share a great deal of common ground



This conversation took place in 2015. Peter Thiel is, as I'm sure you know, a venture capitalist (first outside investor in Facebook), a libertarian, a Girardian, and a Trump supporter (at least until recently: I hear, but can offer no citation, that he's not been happy about Trump's handling of the pandemic). Graeber is trained as an anthropologist, is known for his major book, Debt, for supporting the Occupy movement, and is an anarchist. Both Thiel and Graeber feel that we've missed the boat on technology and both regard the university system as a disaster. It's an interesting conjunction of views.

Has Žižek lost his mind or is he just lost in space? [the Singularity strikes again]

Neither, actually. He’s just trying to make sense of nonsense. It’s not clear whether he’s managed to work his way free or remains entangled. He does, however, manage to demonstrate that the idea of a trans-humanist technological Singularity is bad metaphysics masquerading as technico-utopian possibility.
Slavoj Žižek, The Apocalypse of a Wired Brain, Critical Inquiry 46, (Summer 2020) 745-763.

Abstract: When the threat posed by the digitalization of our lives is debated in our media, the focus is usually on the new phase of capitalism called “surveillance capitalism”: a total digital control over our lives exerted by state agencies and private corporations. However, important as this “surveillance capitalism” is, it is not yet the true game changer; there is a much greater potential for new forms of domination in the prospect of direct brain-machine interface (“wired brain”). First, when our brain is connected to digital machines, we can cause things to happen in reality just by thinking about them; then, my brain is directly connected to another brain, so that another individual can directly share my experience. Extrapolated to its extreme, wired brain opens up the prospect of what Ray Kurzweil called Singularity, the divine-like global space of shared awareness. . . . Whatever the (dubious, for the time being) scientific status of this idea, it is clear that its realization will affect the basic features of humans as thinking/speaking beings: the eventual rise of Singularity will be apocalyptic in the complex meaning of the term: it will imply the encounter with a truth hidden in our ordinary human existence, like the entrance into a new post-human dimension, which cannot but be experienced as catastrophic, as the end of our world. But will we still be here to experience our immersion into Singularity in any human sense of the term?
I don’t know what Žižek is up to. His second paragraph (745-46):
Although the predominant image of apocalypse that haunts us today is the nightmare of a global ecological catastrophe, the prospect of a direct link between our brain and a digital machine (what Elon Musk calls a “neuralink”) also implies an apocalyptic dimension; extrapolated to its extreme, a neuralink opens up the prospect of what Ray Kurzweil terms “Singularity,” the divine-like global space of shared awareness. One should resist the temptation to proclaim that such a vision of the wired brain is an illusion or something from the far future—such a view is itself an escape from the fact that something new and unheard of is effectively emerging. We should not underestimate the shattering impact of collectively shared experience; even if such a shared experience will be realized in a much more modest way than today’s grandiose visions of Singularity, everything will change with it. Peter Sloterdijk was right to characterize Kurzweil as a new John the Baptist, a forerunner of a new form of posthumanity. Kurzweil perfectly captured the radical implications of a wired brain; he saw clearly that our entire vision of reality and our role in it will change.
If, if it happens. What’s he mean by that qualifier, “…will be realized in a much more modest way…”? He goes on (746): “What we mean by wired brain is the idea of a direct link between our mental processes and a digital machine—and that, through the mediation of a machine, we will be able to directly share our mental processes (experiences).” With that last assertion, direct sharing of mental processes, he gives the game away. There’s nothing modest about that. If that becomes possible, he’s right, everything will change.

Why grant that much? But what does he mean by “direct sharing”? What does anyone mean by such phrases? I’ve argued that the notion of direct brain-to-brain linkage is incoherent: Why we'll never be able to build technology for Direct Brain-to-Brain Communication. I note that the brain does not have any open I/O (input-output) “ports.” If it did then, of course, impulses coming into such a port would be identified as coming from some Other, their thoughts, directly. And, when we want to send thoughts directly to that Other, we send them out through such a port.

But, as I’ve said, the brain doesn’t have such things. So, regardless of all the technical details, what’s going on is we’re sending impulses back and forth between brains. But – and here’s the crucial point – there’s no way for a neuron to determine whether a pulse comes from the local brain (the one in which the neuron is located) or from a foreign brain. An electrochemical spike is an electrochemical spike; they’re all alike. Similarly, there’s no way a neuron can determine whether a pulse is destined for a terminal in the local brain or in a distant brain. I haven’t got the foggiest idea what massive – and I’m pretty sure it would have to be massive, millions of neurons – direct brain linkage would be like.

Massive confusion? Apocalypse? The end of history? Sure, why not?

Still, even if he grants more technical feasibility than is warranted, Žižek does realize that there is something strange going on (755):
If we are dealing with superposition of multiple experiences that cannot be totalized into an ecstatic One, this means that there is no single Singularity but an inconsistent texture of shared experiences that, for structural reasons, always have to be limited—if these limits are stretched too far, my shared experience explodes into a nightmare. This brings us again to the question of power. Which regulatory mechanism will decide which experiences I will share with others, and who will control this mechanism? One thing is sure: one should discard as utopian the idea that I myself will be able to connect/disconnect my brain. And one should fully accept the fact that a wide all-encompassing link between minds cannot take place at the level of subjective experience but only at an objective level, as a complex network of machines that read my mental states. Such a vast synchronous collective experience is a dangerous myth. Plus, as our brains will be wired without us even being aware of it, a new form of freedom and power will arise that will reside simply in our being able to isolate ourselves from Singularity.
Later (757-58):
In short, the problem with the notion of Singularity is not that it is too radical or utopian but that it is not radical enough. It continues to locate the advent of Singularity within our common universe of intersubjectivity, ignoring how the eventual rise of Singularity will undermine the very basic presupposition of our intersubjective universe, the limitation on which our greatest achievements are based. […] When the direct link of our brain with the digital network passes a certain threshold (a quite realist prospect), the gap that separates our self-awareness from external reality will collapse (because our thoughts will be able to directly affect external reality—and vice versa—plus we will be in direct contact with other minds). Will we thereby lose our singularity (and with it our subjectivity) as well as our distance towards external reality?
And so forth. It is perhaps worth noting that this article about an apocalypse ends with an Auschwitz joke.

David Graeber on Fun at the Heart of Being [common glad impulse]

Except for the addendum, which is new, this post is from 2014. I'm bumping it to the top on general principle.
David Graeber, most widely known as the author of Debt: The First 5000 Years and a theorist of the Occupy movement, has an article in The Baffler arguing that freedom and play are inherent in the nature of things. After a certain amount of opening throat clearing about play among inchworms and lobsters he gets around to the modern economic view of things, according to which all animal behavior (including that of us featherless bipeds) is to be accounted for by appeals to rational self-interest, a view that embraces (if only metaphorically) genes, which are just (odd) components of certain molecules.

For various reasons, which he explains, Graeber's not buying it. This is what he ends up proposing, via self-organization:
Let us imagine a principle. Call it a principle of freedom—or, since Latinate constructions tend to carry more weight in such matters, call it a principle of ludic freedom. Let us imagine it to hold that the free exercise of an entity’s most complex powers or capacities will, under certain circumstances at least, tend to become an end in itself. It would obviously not be the only principle active in nature. Others pull other ways. But if nothing else, it would help explain what we actually observe, such as why, despite the second law of thermodynamics, the universe seems to be getting more, rather than less, complex. Evolutionary psychologists claim they can explain—as the title of one recent book has it—“why sex is fun.” What they can’t explain is why fun is fun. This could.
I'm sympathetic, both with his reservations about economic rationalism, and with his advocacy of ludic freedom.

Common Glad Impulse

Charlie Keil reminds me of: W.H. Hudson, The Naturalist in La Plata, D. Appleton and Company, 1895. Here's a passage:
Birds are more subject to this universal joyous instinct than mammals, and there are times when some species are constantly overflowing with it; and as they are so much freer than mammals, more buoyant and graceful in action, more loquacious, and have voices so much finer, their gladness shows itself in a greater variety of ways, with more regular and beautiful motions, and with melody. But every species, or group of species, has its own inherited form or style of performance; and, however rude and irregular this may be, as in the case of the pretended stampedes and fights of wild cattle, that is the form in which the feeling will always be expressed. If all men, at some exceedingly remote period in their history, had agreed to express the common glad impulse, which they now express in such an infinite variety of ways or do not express at all, by dancing a minuet, and minuet-dancing had at last come to be instinctive, and taken to spontaneously by children at an early period, just as they take to walking "on their hind legs," man's case would be like that of the inferior animals.

I was one day watching a flock of plovers, quietly feeding on the ground, when, in a moment, all the birds were seized by a joyous madness, and each one, after making a vigorous peck at his nearest neighbour, began running wildly about, each trying in passing to peck other birds, while seeking by means of quick doublings to escape being pecked in turn. This species always expresses its glad impulse in the same way; but how different in form is this simple game of touch-who-touch-can from the triplet dances of the spur-winged lapwings, with their drumming music, pompous gestures, and military precision of movement! How different also from the aerial performance of another bird of the same family--the Brazilian stilt--in which one is pursued by the others, mounting upwards in a wild, eccentric flight until they are all but lost to view; and back to earth again, and then, skywards once more; the pursued bird when overtaken giving place to another individual, and the pursuing pack making the air ring with their melodious barking cries! How different again are all these from the aerial pastimes of the snipe, in which the bird, in its violent descent, is able to produce such wonderful, far-reaching sounds with its tail-feathers! The snipe, as a rule, is a solitary bird, and, like the oscillating finch mentioned early in this paper, is content to practise its pastimes without a witness. In the gregarious kinds all perform together: for this feeling, like fear, is eminently contagious, and the sight of one bird mad with joy will quickly make the whole flock mad. There are also species that always live in pairs, like the scissors-tails already mentioned, that periodically assemble in numbers for the purpose of display. 
The crested screamer, a very large bird, may also be mentioned: male and female sing somewhat harmoniously together, with voices of almost unparalleled power: but these birds also congregate in large numbers, and a thousand couples, or even several thousands, may be assembled together: and, at intervals, both by day and night, all sing in concert, their combined voices producing a thunderous melody which seems to shake the earth. As a rule, however, birds that live always in pairs do not assemble for the purpose of display, but the joyous instinct is expressed by duet-like performances between male and female. Thus, in the three South American Passerine families, the tyrant-birds, wood-hewers, and ant-thrushes, numbering together between eight and nine hundred species, a very large majority appear to have displays of this description.

Saturday, June 27, 2020

Hiroshige, Horikiri iris garden, 1857



Diane Disney talks about her dad, Walt

Jim Keller: Moore's Law, Microprocessors, Abstractions, and First Principles | AI Podcast [Lex Fridman]



Program notes:
Jim Keller is a legendary microprocessor engineer, having worked at AMD, Apple, Tesla, and now Intel. He's known for his work on the AMD K7, K8, K12, and Zen microarchitectures and the Apple A4 and A5 processors, and he co-authored the specifications for the x86-64 instruction set and the HyperTransport interconnect. This conversation is part of the Artificial Intelligence podcast.

0:00 - Introduction
2:12 - Difference between a computer and a human brain
3:43 - Computer abstraction layers and parallelism
17:53 - If you run a program multiple times, do you always get the same answer?
20:43 - Building computers and teams of people
22:41 - Start from scratch every 5 years
30:05 - Moore's law is not dead
55:47 - Is superintelligence the next layer of abstraction?
1:00:02 - Is the universe a computer?
1:03:00 - Ray Kurzweil and exponential improvement in technology
1:04:33 - Elon Musk and Tesla Autopilot
1:20:51 - Lessons from working with Elon Musk
1:28:33 - Existential threats from AI
1:32:38 - Happiness and the meaning of life
I found this conversation utterly fascinating – though, truth be told, I also played solitaire while listening. Much of it just whizzed by, but that's OK. Keller designs microprocessors, and has lived through revolutions in processor design. He actually thinks about transistor size in terms of numbers of atoms. Anyone who thinks about computing needs to think about it as a physical process, even if the conversation just whizzes by.

04:00 - 21:00: Starts talking about layers of abstraction at roughly 04:00 and continues. Interesting soundbite: you can execute a program 100 times and get the same answer each time, but have 100 different execution paths.
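That soundbite can be made concrete with a small sketch (my own illustration, not from the podcast): several threads add partial sums to a shared total under a lock. The final answer is deterministic every time, but the order in which the threads reach the lock – the execution path – can vary from run to run.

```python
import threading

def run_once(n_workers=4, items_per_worker=1000):
    """Sum 0..N-1 across several threads.

    The total is deterministic; the order in which workers
    touch the shared state (the 'path') need not be.
    """
    total = 0
    order = []  # which worker updated the shared total, in sequence
    lock = threading.Lock()

    def worker(wid, chunk):
        nonlocal total
        s = sum(chunk)          # local work, outside the lock
        with lock:              # serialized update of shared state
            total += s
            order.append(wid)

    data = list(range(n_workers * items_per_worker))
    threads = [
        threading.Thread(target=worker, args=(w, data[w::n_workers]))
        for w in range(n_workers)
    ]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return total, tuple(order)

# Run it 100 times: one answer...
results = {run_once()[0] for _ in range(100)}
# ...but potentially many different interleavings (paths).
paths = {run_once()[1] for _ in range(100)}
```

Here `results` always collapses to a single value, while `paths` may contain several distinct orderings, depending on how the scheduler interleaves the threads on a given run.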

Friday, June 26, 2020

(Post) Modern medicine

Friday Fotos: Whisps [Hallucinated City]


• • • • •





counterpoint

Reaction Videos, a quick note [Media Notes 40]

I’ve been watching various reaction videos this last week, see examples in this post: Contemporary reactions to blue-eyed soul from 1965, Righteous Brothers [time travel in music]. I’ve been aware of and watching analysis videos (there’s one in that post) for somewhat longer; here someone will offer fairly extensive analysis of a performance in another video. There may be a few analytical remarks in a reaction video, but very few. Remarks in these reaction videos are generally more evaluative, often little more than expressions of pleasure. Wikipedia dates the genre to the mid-2000s to 2011.

As the term suggests, such a video shows people reacting – for the first time! – to another video. These reactors apparently devote their channels to the genre. These reactions, of course, are themselves performances. By this I do not mean to imply that the reaction displayed is fundamentally fake or forced, but only that the reactor is certainly aware of having an audience and is intent on pleasing that audience. Would their visible movements and audible sounds be the same if they were simply listening in private? In at least some cases I suspect the video performance is an exaggerated version of their private reaction.

Why is the genre popular? Obviously people like to see other people reacting. But why? To model or validate their own reactions?

I have certainly taken pleasure in watching all these reactions to “Unchained Melody” by Bobby Hatfield of the Righteous Brothers. It is a powerful performance, but I wouldn’t have watched just the performance so many times. I wonder about regular consumers of the genre. Do they watch many different reactions to the same performance, or is that just something I’m doing because the genre itself interests me? I don’t know.

For that matter, why has this performance received so much attention? Yes, it IS powerful, and is from way back in the ancient days of 1965. Most of the videos I’ve watched are of Black reactors. It is quite clear they are interested in the fact that Hatfield is white but singing in a style that is identifiably Black. That disparity is certainly of interest to them, but to me as well.

But what is the general case of reaction videos? Are the reactors usually responding to music that is beyond their normal range? I’d guess that it is. I’d guess that that is the appeal, to them, and to their audience. But I don’t know.

There’s some interesting research to be done. But not by me. Takers, anyone?

Thursday, June 25, 2020

Coping with COVID-19 in the USA

Social distance


What if dark matter isn't matter at all? What if gravity doesn't work the way we thought it did?

Ramin Skibba, Does Dark Matter Exist? Aeon.
In the early 1980s, the Israeli physicist Mordehai ‘Moti’ Milgrom questioned the increasingly popular dark matter narrative. While working at an institute south of Tel Aviv, he studied measurements by Rubin and others, and proposed that physicists hadn’t been missing matter; instead, they’d been wrongly assuming that they completely understood how gravity works. Since the outer stars and gas clouds orbit galaxies much faster than expected, it makes more sense to try to correct the standard view of gravity than to conjure an entirely new kind of matter.

Milgrom proposed that Isaac Newton’s second law of motion (describing how the gravitational force acting on an object varies with its acceleration and mass) changes ever so slightly, depending on the object’s acceleration. Planets such as Neptune or Uranus orbiting our sun, or stars orbiting close to the centre of our galaxy, don’t feel the difference. But far in the outlying areas of the Milky Way, stars would feel a smaller gravitational force than previously thought from the bulk of matter in the galaxy; adjusting Newton’s law could provide an explanation for the speeds Rubin measured, without needing to invoke dark matter.

Developing the paradigm of a dark-matter-less universe became Milgrom’s life project. At first, he worked mostly in isolation on his proto-theory, which he called Modified Newtonian Dynamics (MOND). ‘For more than a few years, I was the only one,’ he says. But slowly other scientists circled round.

He and a handful of others first focused on rotating galaxies, where MOND accurately describes what Rubin observed at least as well as dark matter theories do. Milgrom and colleagues then expanded the scope of their research, predicting a relationship between how fast the outside of a galaxy rotates and the galaxy’s total mass, minus any dark matter. The astronomers R Brent Tully and J Richard Fisher measured and confirmed just such a trend, which many dark matter models have struggled to explain.

Despite these successes, Milgrom’s modification of Newton’s second law remained just an approximation, causing his ideas to fall short of requirements for a full-fledged theory. [...]

MOND lacked much of a foundation until a few years ago, when the Dutch physicist Erik Verlinde began developing a theory known as ‘emergent gravity’ to explain why gravity was altered. In Verlinde’s view, gravity, including MOND, emerges as a kind of thermodynamic effect, related to increasing entropy or disorder. His ideas build on quantum physics as well, viewing space-time and the matter within it as originating from an interconnected array of quantum bits. When space-time gets curved, it produces gravity, and if it’s curved in a particular way, it creates the illusion of dark matter.
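For reference, Milgrom's modification is usually written with an interpolation function μ and a new fundamental acceleration constant a₀. The following schematic is standard MOND notation, not taken from the article:

```latex
% Schematic form of MOND: Newton's F = ma acquires an interpolation
% function \mu that depends on the ratio of the acceleration a to a_0.
F \;=\; m\,\mu\!\left(\frac{a}{a_0}\right) a,
\qquad
\mu(x)\to 1 \ \text{for}\ x \gg 1,
\qquad
\mu(x)\to x \ \text{for}\ x \ll 1.
% High-acceleration systems (planets, inner stars) recover ordinary
% Newtonian gravity; in the low-acceleration regime (a << a_0) the theory
% yields a flat rotation curve with
v_{\mathrm{flat}}^4 = G\,M\,a_0,
% the mass-velocity relation discussed above.
```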
Dark matter dominance:
Today’s seeming dominance of dark matter wasn’t inevitable. The processes through which scientists develop theories are heavily influenced by all sorts of historical and sociological factors, a point eloquently made by Andrew Pickering, emeritus philosopher of science at the University of Exeter and the author of Constructing Quarks (1984), a 36-year-old book that’s still relevant today.

It’s important to pay attention to who decides which phenomena to study, which research earns major government grants, which big experiments get funded, who gets speaking opportunities at scientific conferences, who is media savvy, who wins prominent fellowships and awards, and who gets promoted to high-profile faculty positions. Different choices sometimes can shape the future trajectory of science. And when choices by theorists and experimentalists coincide symbiotically, Pickering argues, it can be challenging for an upstart theory – such as modified gravity – to get a fair hearing.
The early universe:
The astronomers Arno Penzias and Robert Wilson in the 1960s at first misinterpreted their radio telescope’s faint static as noise – perhaps due to pigeons roosting and leaving droppings on it. But the signal turned out to be real, and they confirmed their discovery of relic radio waves that date back to soon after the Big Bang. Then in the 1980s and ’90s, Soviet and NASA scientists used their own space telescopes, RELIKT-1 and COBE, to spot tiny wiggles in that radiation. John Mather and George Smoot, the physicists who led the COBE research, won the Nobel Prize in Physics 2006 for measuring those little radiation variations, which translate into early density differences that determined where the matter in the Universe collected and structures of galaxies formed.

Mather and Smoot’s successors have now measured the wiggles in relic radio radiation to exquisite precision, and any successful theory has to offer an explanation of them. Dark matter physicists have already shown that their theory could reproduce all of those wiggles quite well, but modified or emergent gravity has failed that critical test – so far. Bekenstein died in 2015, but his successors are still trying to make his modified gravity theory consistent with at least some of the measurements. That would be a big leap forward and a compelling one for skeptics of modified gravity, but it’s a major task that has yet to be done.

Wednesday, June 24, 2020

Look up

Stewart Brand – "We are as gods." [watch the video clip]

From Two Geniuses to the Rest of Us

I originally published this in October 2013, when I was beginning my critique of the MacArthur Fellowship Program. I've now posted the published version of the review essay on Academia.edu: https://www.academia.edu/43426622/A_Tale_of_Two_Geniuses.

* * * * *

With “genius” as the topic du jour here on the new savanna I thought I’d republish this old double book review: “A Tale of Two Geniuses,” Journal of Social and Evolutionary Systems, 17(2): 227-230, 1994. Richard Feynman was one of the geniuses and John von Neumann was the other. But the piece does more than review those two (most fascinating) books. It goes on to speculate, just a bit, about the curious fact that ideas that strained the abilities of von Neumann and Feynman are now comfortably within range of advanced students of college physics and math. Such is the genius of cultural evolution.

* * * * *

Genius: The Life and Science of Richard Feynman, by James Gleick, New York: Pantheon Books, 1992, 532 pp.

John von Neumann, by Norman Macrae, New York: Pantheon Books, 1992, 405 pp.

Students of cognitive evolution and of twentieth century thought are fortunate in the simultaneous appearance of these two biographies. No doubt the simultaneity is mostly coincidence. The physicist Richard Feynman is most widely known, alas, for two autobiographical collections of anecdotes which reveal him to be a waggish and riggish anti-establishment sort; he is most deeply known for his contributions to quantum electrodynamics. John von Neumann was a thoroughly establishment sort - soldiers guarded his hospital room as he lay dying of brain cancer just in case he let out defense secrets in his sleep - and is most widely known as the name which appears in phrases like “computers using the von Neumann architecture.” The two men crossed paths in Los Alamos, where they worked on the atomic bomb. That crossing is a reasonable place to begin our review.

Los Alamos

Feynman was recruited to Los Alamos while still a graduate student. He was in charge of group T-4, Diffusion Problems. The problem was to figure out how neutrons, which drive the fission reaction, diffuse through the explosive core. Knowing the rate and pattern of diffusion was essential to determining the mass and configuration of fissile material. Since the late 30s von Neumann had been working on similar problems in connection with shock waves and explosions in general and so was able to help the Los Alamos effort between 1943 and 1945.

The difficulty was that the relevant equations could not be solved analytically. Rather, it was necessary to simulate neutron diffusion numerically by calculating the step-by-step motion of individual neutrons. That requires lots of calculations, which were performed by a group of people operating calculators. The problems would be broken into components; each person would be responsible for one component, with each problem being passed from person to person as individual components were calculated.
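The procedure just described can be sketched as a tiny Monte Carlo simulation. This is an illustrative toy, not the Los Alamos method: the neutrons here take fixed-length random steps inside a sphere, and the radius, step length, and absorption probability are made-up parameters.

```python
import math
import random

def simulate_neutron(radius, step, p_absorb, rng, max_steps=1000):
    """Random-walk one neutron from the center of a sphere of fissile
    material; return 'escaped' if it leaves the sphere, else 'absorbed'."""
    x = y = z = 0.0
    for _ in range(max_steps):
        # Pick an isotropic random direction and take one step.
        theta = math.acos(rng.uniform(-1.0, 1.0))
        phi = rng.uniform(0.0, 2.0 * math.pi)
        x += step * math.sin(theta) * math.cos(phi)
        y += step * math.sin(theta) * math.sin(phi)
        z += step * math.cos(theta)
        if math.sqrt(x * x + y * y + z * z) > radius:
            return "escaped"
        if rng.random() < p_absorb:   # captured inside the material
            return "absorbed"
    return "absorbed"

def escape_fraction(n, radius, step, p_absorb, seed=0):
    """Fraction of n simulated neutrons that escape the sphere."""
    rng = random.Random(seed)
    escaped = sum(simulate_neutron(radius, step, p_absorb, rng) == "escaped"
                  for _ in range(n))
    return escaped / n

# A larger core holds on to more of its neutrons (made-up units):
small = escape_fraction(2000, radius=2.0, step=1.0, p_absorb=0.1)
large = escape_fraction(2000, radius=6.0, step=1.0, p_absorb=0.1)
```

The human computers effectively evaluated one such trajectory component at a time; the point of the sketch is only that the calculation is a long sequence of simple, mechanical steps.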

Computing and von Neumann

That, of course, is the general way computers solve problems, with the computational plan being an algorithm. But they did not have computers at Los Alamos. Computers came after the war and von Neumann was central to the effort. He understood that the computer is essentially a logical device and clarified that logic with the concepts of the stored program (Macrae, pp. 282-284), the fetch-execute cycle (p. 287), and conditional transfer (see Bernstein 1963, 1964, pp. 60 ff.). That is to say, von Neumann clearly differentiated between the physical structures and connections of the devices from which the computer is constructed and the logical requirements which those devices have to fulfill. For that he is the progenitor of the computer.
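The three ideas credited to von Neumann above can be illustrated with a toy machine. Everything here (the opcode names, the program, the memory layout) is invented for illustration; it is a sketch of the stored-program idea, not any historical design.

```python
def run(memory):
    """Toy stored-program machine: program and data share one memory.

    Fetch-execute cycle: fetch the instruction at the program counter,
    decode it, execute it, repeat. Instructions are (opcode, operand)
    pairs stored in the same list as the data they manipulate."""
    pc = 0    # program counter
    acc = 0   # accumulator
    while True:
        op, arg = memory[pc]          # fetch
        pc += 1
        if op == "LOAD":              # acc <- memory[arg]
            acc = memory[arg]
        elif op == "ADD":             # acc <- acc + memory[arg]
            acc += memory[arg]
        elif op == "SUB":             # acc <- acc - memory[arg]
            acc -= memory[arg]
        elif op == "STORE":           # memory[arg] <- acc
            memory[arg] = acc
        elif op == "JUMP":            # unconditional transfer
            pc = arg
        elif op == "JUMPZ":           # conditional transfer: jump if acc == 0
            if acc == 0:
                pc = arg
        elif op == "HALT":
            return acc

# Sum the integers 5..1 by counting a memory cell down to zero.
memory = [
    ("LOAD", 10),   # 0: acc <- counter
    ("JUMPZ", 8),   # 1: if counter is zero, jump to the wrap-up
    ("ADD", 11),    # 2: acc <- counter + total
    ("STORE", 11),  # 3: total <- acc
    ("LOAD", 10),   # 4: acc <- counter
    ("SUB", 12),    # 5: acc <- counter - 1
    ("STORE", 10),  # 6: counter <- acc
    ("JUMP", 0),    # 7: back to the top of the loop
    ("LOAD", 11),   # 8: acc <- total
    ("HALT", 0),    # 9: stop; result is in acc
    5,              # 10: counter
    0,              # 11: running total
    1,              # 12: the constant 1
]
result = run(memory)   # 5 + 4 + 3 + 2 + 1
```

Because the program sits in the same memory as its data, a program could in principle modify itself; that, together with conditional transfer, is what makes the machine general-purpose rather than a fixed calculator.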

Later on von Neumann initiated the use of computers in weather modeling. This, plus his earlier work on shock waves and the atomic bomb, makes him one of the founders of numerical analysis, a loose collection of techniques important in many scientific and technical fields. While pursuing the conceptual foundations of life, he worked out the concept of the cellular automaton, a highly parallel kind of computational device which is much favored by current theorists of chaos and dynamical systems. His work on game theory created a new field of economic and strategic analysis. Before the war von Neumann did important work on the mathematical foundations of quantum mechanics.
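A cellular automaton is simple enough to show in a few lines. The one-dimensional "elementary" automaton below (rule 90) is a standard textbook example rather than von Neumann's own design, which was two-dimensional with 29 cell states, but it shows the essential idea: every cell updates in parallel from a purely local rule.

```python
def step(cells, rule=90):
    """One update of an elementary cellular automaton.

    Each cell looks at its left neighbor, itself, and its right neighbor
    (wrapping at the edges), treats those three bits as a number 0-7, and
    takes the corresponding bit of the rule number as its next state.
    All cells update simultaneously from the same snapshot."""
    n = len(cells)
    return [
        (rule >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

# A single live cell under rule 90 unfolds into a Sierpinski-like pattern.
row = [0] * 7
row[3] = 1
history = [row]
for _ in range(3):
    row = step(row)
    history.append(row)
```

Rule 90 happens to be the XOR of the two neighbors; the same few lines run any of the 256 elementary rules by changing the `rule` argument.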

Feynman and Quantum Mechanics

And so we segue to Feynman, whose most important work was that which he did in the late 1940s on quantum electrodynamics. The quantum world is notorious as the domain where the fundamental stuff of the universe acts sometimes like a wave, sometimes like a particle. Particles and waves are readily visualized. But how can you visualize something which is both and neither? And, if you can't visualize it, then how do you get the physical intuition which is, for many, so important to scientific thinking (cf. Miller, 1986)? It was Feynman's genius to create diagrammatic conventions for quantum interactions which made physical intuition much easier and facilitated calculation as well. The so-called Feynman diagrams became ubiquitous once Feynman introduced them and, in 1949, Freeman Dyson [father of George Dyson] proved the diagrams to be equivalent to the more rigorously mathematical, and less intuitive, axiomatic approach of Julian Schwinger and Sin-Itiro Tomonaga.

Feynman went on to do important work in superfluids and the weak nuclear force and, while on sabbatical, did some creditable molecular biology. In the wake of the Challenger disaster Feynman received a great deal of attention by performing a simple demonstration with ice water and a rubber O-ring. That simple demonstration unmasked the self-serving bureaucratic disregard for reality which led to the disaster. He also served on the board of directors of Thinking Machines, Inc., whose massively parallel computers are often used to implement models based on von Neumann's concept of the cellular automaton.

Lean down

To Infinity and Beyond: Progress Studies @3QD [Progress Studies]

My latest article for 3 Quarks Daily is up:
I sweated blood to write it and did three preparatory posts earlier that week:
The general idea is to place the proposed Progress Studies into a larger context, relating it to science fiction, Future Studies, and scenario planning.

I was also concerned/puzzled about the optimism expressed within the movement. It felt like a throwback to the past, in particular to the techno-optimism typified by Walt Disney in a 1966 video in which he proposed something he called the Experimental Prototype Community of Tomorrow (E.P.C.O.T.). Disney made the following short film to convince business leaders and Florida politicians (the project was to be located in central Florida) to join with him in this effort:



The first five minutes is an account of Disneyland, which you may skip over if you wish. Uncle Walt starts narrating at about 5:13. The good stuff starts at 9:30, when Disney starts laying out the E.P.C.O.T. concept.

The problem, it seemed to me, was that the world had changed a great deal since Disney delivered that video. We can’t simply reach back into the past and avail ourselves of that optimism. We need to craft our own rationale and drive, grounded in the world as it is now. The 3 Quarks Daily piece sketched out a way of doing that.

Bonus: The article’s subtitle, “Through decadence and beyond” is a parody of Buzz Lightyear’s motto: “To Infinity… and Beyond.”

Tuesday, June 23, 2020

AGI and Superintelligence vs. Humans on Mars – such different kinds of discussions

Every once in a while I find myself thinking about AGI (artificial general intelligence) and superintelligent machines: how long before they emerge, and so on? Color me skeptical.

Beyond that, however, I don’t think we can argue the issue in an interesting and illuminating way, that is, in a way that helps proponents refine their understanding and so advance their intellectual projects. A suitable intellectual framework doesn’t exist. Sure, I’ve got my views, and I have no intention of abandoning them, but I don’t regard my arguments as particularly strong, no more than I find strong arguments in favor of the emergence of AGI or even superintelligence. We’re all just tap-dancing and hand-waving.

In contrast, consider the question of whether or not we should colonize Mars. We have a very rich framework in which to discuss that. We’ve already landed men on the moon and returned them to earth. Astronauts have spent weeks and months, even a year, living in low-earth orbit in the International Space Station. We’ve landed robots on Mars and gotten useful information back. All of that experience is relevant to sending humans to Mars and – here’s the point – we’ve got frameworks in which we can evaluate that experience against the requirements of a manned mission to Mars.

What do we have in the realm of machine intelligence? We’ve got impressive working systems of machine translation. But we wouldn’t use those systems to translate legal documents, for example, and we don’t have any way of evaluating those systems that gives us a detailed sense of what we need to do to create MT systems adequate to legal translation. And so it goes in various domains. And in a few domains, such as chess and Go, the performance of artificial systems is superior to human performance. But individual humans can do all of these things, and more. How do we create a machine that can do that, much less one that can improve itself, say, beyond human capability? We haven’t a clue.

In the case of a manned mission to Mars we have a rich understanding of the mechanical, kinematic, chemical, thermodynamic, electrical, and electronic principles involved in building the devices needed to perform the mission. We also know quite a bit about the biological and psychological requirements of supporting human life for such a mission. But our understanding of the basic principles of intelligence – perception, cognition, reasoning, and so forth – is sadly lacking. We don’t know how humans do it – though we have learned a lot – and don’t know how to design machines that can perform at a human level. We have a rich knowledge of the basic principles involved in a manned mission to Mars, but a poor knowledge of the basic principles involved in constructing AGI or in understanding human intelligence.

Arguments about AGI seem more like science fiction than like doing a feasibility study of a mission we’re considering.

What other domains are more like AGI than a manned mission to Mars? How do we recognize the difference between such domains?

* * * * *

How we go to Mars, episode 1 of ?


The problematic of color [eye and camera]

This is how the image came out of the camera, more or less:



That's not what I saw when I took the picture. Not those colors.

Now I've calibrated the image so that the face of the largest building is not pink:



Good enough. But where's that green come from? I'm certain I didn't see that.

This is what I settled on:



Still, where'd that pink come from? Was it really there?

Here we see pink clouds:



Is the pink on the face of that building a reflection from the clouds, something registered by my camera, but not my eye?

This, of course, solves the problem, but at a price:



For a more detailed discussion of color and color photography, see Color the Subject.

Mad Max: Fury Road – A Straussian reading [Media Notes 39]

I’ve been thinking about Mad Max: Fury Road, the fourth film in the Mad Max franchise, and a film I liked very much. I’d like to offer an interpretation, perhaps even a “Straussian reading”, as Tyler Cowen likes to call such things.

Think about the world in which the film is set. It is our world, that is, our earth, but it is desolate. It is, shall we say, post-apocalyptic. The nature of the apocalypse is not at all evident, but whatever it was, it left enough machinery around that the remnants of humanity can cobble things together and make their way across the desert.

Humans live in small autonomous tribes, none obviously capable of dominating the others. In particular, there is no ginormous empire lording it over everything and all like we have in the Star Wars franchise. In that franchise the Empire is clearly evil while the rebels are clearly good. There is little such moral differentiation in the Mad Max world.

That world centers on Max Rockatansky, who had been a policeman before the apocalyptic event, and who lost his wife and child in the first film. He wanders the world alone. He is our moral center, such as it is.

At the beginning of the film he is captured by a tribe headed by Immortan Joe. Joe is a misshapen thug who hoards water, thereby controlling his tribe, and keeps a harem of young women. That makes him a suitable stand-in for evil. As for the people in his tribe, well, that depends.

He sends one of his lieutenants, Imperator Furiosa, on a run for gasoline and ammunition. She decides to help Joe’s harem flee and hides them in the rig. That makes her a suitable stand-in for good.

As for Max, he starts out as a “blood bag” for Nux, one of Joe’s War Boys. In the chase after Furiosa, Max manages to escape from the War Boys and join up with her. As things develop Nux ends up falling for one of the escaping wives. There is a lot of exciting this and that; Furiosa hooks up with a gang of biker women who’d raised her as a child, followed by more exciting stuff, with Max, Furiosa, and the women defeating Joe and liberating the wives.

Max rides off.

* * * * *

Now, think of Immortan Joe as, say, Donald Trump, and Max Rockatansky as, say, Mark Zuckerberg – though, tbh, he strikes me as more of a Nux, with the chrome silver teeth and all.

Yes, I know, I know. Trump ran for President in 2016 while the film came out in 2015, started photography in 2012 and was a gleam in its father’s eye way back in 1998. And, yes I know, Zuckerberg already got his film in 2010 (The Social Network). Remember, we’re speaking allusively, figuratively. None of this is real, nor is it unreal. It is in suspension, hanging in the Straussian flux.

Max Zuckerberg just wants to move fast on his unicorn while breaking things and staying alive. He doesn’t want to be in thrall to Immortan Joe, Darth Sidious, or Donald Trump. But Immortan Sidious Trump, he wants it all. Why? Because it is always already all his.

All of which is to assert, through implication, that the emergence of digital technology and the web has created a world of actors, individual, corporate, diffuse, and otherwise, that is not well suited to existing institutional structures, which are grounded in the 19th century, if not before. These new actors have imperatives of their own. They are lawless. They are creatures of the Mad Max world. Drained of civilization, the dross, the cream, floats to the top.

* * * * *

What are we to make of the oversight board Facebook has just recently established to make content moderation decisions? Note that while Facebook has set it up and endowed it with a $130-million trust fund, it is an independent legal entity. It would be possible, in theory, for other entities in the digital wasteland to contribute money to and avail themselves of the board’s services. Is it the beginning of a new institutional order on the digital frontier, brokering deals among individuals and tribes in the Mad Max wasteland but beholden to none?

Phylogenesis of the airplane

Monday, June 22, 2020

Helter skelter goes orange



Authority in interpretation, a note quick & crude

What is the source of authority in academic literary criticism?

Authority is in the text

The critic interprets the text without reference to authorial intention and without summoning a specific interpretive vocabulary. The text is esteemed [canonical] and the interpreted meaning is said to reside in the text. The critic’s authority comes from the text.

That was the situation of the New Criticism, which had roots before World War II and emerged to the intellectual and institutional center after the war. Textual canonicity was more or less taken as given; the cultural processes that produced it were not opened for inspection.

Alas, critics often arrived at different interpretations of the texts. How could that be? So…

Authority is in the author

The question of authorial intention was opened for interrogation. Some critics argued that authorial intention should settle the matter. One author = one intention = one meaning for a text. It is not necessary to invoke a special body of interpretive method.

Not everyone bought it. The author was elided (“death of the author”) in favor of a conception that sees the text as a nexus of intersecting systems: semiotic, power, class, race, ethnicity, gender, whatever. This gives rise to so-called Theory.

Authority is in Theory

The critic now invokes some body or bodies of intellectual practice to justify the interpretation: various flavors of psychoanalysis, various flavors of Marxist analysis, various identity criticisms, each with its own body of theory. There is no appeal to authorial intention, no need. Interpretations are authorized by Theory.

Note that since authority is no longer invested in the text or in the author, it is no longer necessary to confine critical attention to canonical texts. The standard canon is opened up to alternative canons – the so-called canon wars – and popular culture is now a legitimate arena for inquiry.

* * * * *

That’s as far as things have gone. There’s nowhere else to go, not in a regime where meaning is the sole center of inquiry. It’s time to look to description and to computation. But I need not elaborate on those in this post. They’re all over New Savanna.

Is high energy physics a case of intellectual decadence?

I mean "decadence" in Ross Douthat's sense of cultural exhaustion, the constant repetition of same old same old without anything new resulting.

For decades, bigger and bigger colliders got funded thanks to past prestige, but prestige faded while costs grew until they strained human resources and time-scales. European physicists saw this problem 60 years ago and pooled national resources to form CERN. The choice paid off: a few decades after World War II, Europe was again the center of high-energy physics. But energies and costs kept growing, and the number of research institutions pushing the energy frontier declined: 6, 5, 4, 3, 2, 1.

Some institutions gave up, others tried. Around 2000 German physicists proposed a new collider, but the answer was nein. Around 2010 Americans tried, but the answer was no. Next the Japanese tried, but the answer was “we express interest,” which in Japanese probably means no. Europeans waited, hoping that new technology would be developed while the Large Hadron Collider discovered new physics and motivated a dedicated new collider, to be financed once the economic crisis was over. Instead of new technology and new physics we got a new virus and a possible new crisis.

The responsibility of being the last dinosaur does not help survival. Innovative colliders would require taking risks, but the unexplored energies have grown so high that the cost of a failure is no longer affordable. This leads to stagnation. CERN has now chosen a non-innovative strategy based on reliability. First, buy time by running the LHC ad nauseam. Second, be, or appear, so nice and reliable that politicians might provide the needed ≈30 billion. Third, build electron-positron and proton-proton circular colliders again, but bigger: 100 km in circumference instead of 27.

As a theorist I would enjoy a 100 TeV pp collider for my 100th birthday.

But would it be good for society? No discovery is guaranteed, and in any case recent discoveries at colliders have had no direct practical applications. Despite this, giving resources to the best scientists often leads to indirect innovations. The problem is that building a 100 km phonograph does not seem like a project that can produce a technological leap toward a small gadget with the same memory. Rather, collider physics has become so gigantic that when somebody has a new idea, the typical answer is no longer “let’s do it” but “let’s discuss it at the next committee.” Committees are filled with people who like discussing, while creative minds seem more attracted to different environments. I see many smart physicists voting with their feet.

Sunday, June 21, 2020

A tale of two cities: Hoboken to New York



COVID-19, lessons for global warming

Richardson Dilworth, Scott Gabriel Knowles, Franco Montalto, Mimi Sheller, COVID-19 Reveals a Path Forward on Climate Change, American Scientist, May 12, 2020.
Violent economic and social upheaval are not prerequisites for the mitigation of climate change. Rather, our work on sustainability and climate change is motivated by a desire to improve the human condition through the design of a carbon-neutral circular economy that simultaneously builds economic, natural, and social capital. We do not want to confuse the consequences of a temporary pause in consumer economies with the structural changes needed to avoid the most devastating effects of climate change. That goal requires a similar reduction in greenhouse gas emissions (about 7.6 percent) every year between 2020 and 2030, and that reduction must be accompanied by equally ambitious efforts to promote social justice and equity. UN Secretary General António Guterres could have been discussing COVID-19 when he pointed out that solutions to the climate crisis that do not benefit everybody will lead to the “survival of the richest.”

What do the voices speaking in this frightening moment tell us about how we need to structure sustainable climate action? Like those in climate denial, individuals concerned principally with the pandemic’s effects on the economy and personal liberties are rushing to suspend life-saving measures (sheltering at home) in favor of “business as usual.” Their emphasis is on the economic “restart,” with suspended environmental regulations and subsidies for fossil fuels. This carbon-committed approach leads directly to the stubborn status quo that would not budge at COP25.

But billions around the world have also accepted an abrupt and radical change to their lives, virtually overnight and amid profound uncertainty, all to slow the spread of the virus. Social science research proves time and again that disasters reveal the human capacity for creativity, improvisation and community formation, and helping others (even strangers) survive, and COVID-19 is no exception. These collective responses to COVID-19 prove that even in today’s complex world we do not need to wait for mitigation and adaptation behavior to emerge....

The agility and speed with which such life-saving policies took hold provide many rich and transferable lessons to the climate crisis, from how science is effectively communicated to the public, to the psychological frames needed to maintain alarm, build social commitment, and drive policy momentum. However, it is also true that this kind of prosocial behavior after disasters is not applied uniformly; it is constrained by factors of time, ideology, geography, and social difference. Social inequities, in particular, represent a severe handicap in our efforts to adapt the human condition for the better, whether fighting a pandemic or mitigating climate change.
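A back-of-the-envelope calculation (mine, not the authors') shows how the 7.6 percent annual figure quoted above compounds over the decade:

```python
# Emissions remaining after a decade of 7.6% annual cuts (2020-2030).
# Rough arithmetic only; the 7.6% rate is the UN figure quoted above.
rate = 0.076
remaining = (1 - rate) ** 10   # fraction of starting emissions left
total_cut = 1 - remaining      # roughly a 55% cut from the starting level
```

In other words, sustaining the pandemic-scale reduction every year would leave emissions at a bit under half their 2020 level by 2030, which is why the authors stress structural change rather than a temporary pause.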

Growing collectivism: irrigation, group conformity and technological divergence

Buggle, J.C. Growing collectivism: irrigation, group conformity and technological divergence. J Econ Growth 25, 147–193 (2020). https://doi.org/10.1007/s10887-020-09178-3
Abstract: This paper examines whether collaboration within groups in pre-industrial agriculture favored the emergence of collectivist rather than individualist cultures. I document that societies whose ancestors jointly practiced irrigation agriculture historically have stronger collectivist norms today. This finding holds across countries, sub-national districts within countries, and migrants, and is robust to instrumenting the historical adoption of irrigation by its geographic suitability. In addition, I find evidence for a culturally-embodied effect of irrigation agriculture on economic behavior. Descendants of irrigation societies innovate less today, and are more likely to work in routine-intensive occupations, even when they live outside their ancestral homelands. Together, my results suggest that historical differences in the need to act collectively have contributed to the global divergence of culture and technology.
H/t Tyler Cowen.

Hydrangeas all the way down