Wednesday, March 31, 2021
Ezra Klein recently interviewed Ted Chiang (March 30, 2021), perhaps best known as the author of the story on which Arrival is based. They talked about various things, including AI. I’ve inserted numbers into the passages below that are keyed to some initial and provisional comments from me.
* * * * *
Ezra Klein: We’re spending billions to invent artificial intelligence. At what point is a computer program responsible for its own actions?
Ted Chiang: Well, in terms of at what point does that happen, it’s unclear, but it’s a very long ways from us right now. With regard to the question of, will we create machines that are moral agents, I would say that we can think about that in three different questions. One is, can we do so? Second is, will we do so? And the third one is, should we do so? I think it is entirely possible for us to build machines that are moral agents. Because I think there’s a sense in which human beings are very complex machines and we are moral agents, which means that there are no physical laws preventing a machine from being a moral agent. And so there’s no obstacle that, in principle, would prevent us from building something like that, although it might take us a very, very long time to get there. As for the question of, will we do so, if you had asked me, like, 10 or 15 years ago, I would have said, we probably won’t do it, simply because, to me, it seems like it’s way more trouble than it’s worth. In terms of expense, it would be on the order of magnitude of the Apollo program. And it is not at all clear to me that there’s any good reason for undertaking such a thing. However, if you ask me now, I would say like, well, OK, we clearly have obscenely wealthy people who can throw around huge sums of money at whatever they want basically on a whim. So maybe one of them will wind up funding a program to create machines that are conscious and that are moral agents. However, I should also note that I don’t believe that any of the current big A.I. research programs are on the right track to create a conscious machine. I don’t think that’s what any of them are trying to do. So then as for the third question of, should we do so, should we make machines that are conscious and that are moral agents, to that, my answer is, no, we should not. Because long before we get to the point where a machine is a moral agent, we will have machines that are capable of suffering. 
Suffering precedes moral agency in sort of the developmental ladder. Dogs are not moral agents, but they are capable of experiencing suffering. Babies are not moral agents yet, but they have the clear potential to become so. And they are definitely capable of experiencing suffering. And the closer that an entity gets to being a moral agent, the more that its suffering is deserving of consideration, the more we should try and avoid inflicting suffering on it. So in the process of developing machines that are conscious and moral agents, we will be inevitably creating billions of entities that are capable of suffering. And we will inevitably inflict suffering on them. And that seems to me clearly a bad idea.
* * * * *
1. I wonder. Sure, complex machines, no physical laws in the way. But, by way of defining a boundary condition, one might ask whether or not moral agency is possible in anything other than organic (as opposed to artificial) life. That implies, among other things, that moral agents are responsible for their own physical being. I believe Terrence Deacon broached this issue in his book Incomplete Nature: How Mind Emerged from Matter (I’ve not read the book, but see remarks by Dan Dennett that I’ve quoted in this post, Has Dennett Undercut His Own Position on Words as Memes?). If this is so, then we won’t be able to create artificial moral agents in silicon. Who knows?
2. If not more, way more.
3. I agree with this.
4. Interesting, very interesting. My initial response was: What does the capacity for suffering have to do with moral agency? I didn’t necessarily believe that response; it was just a quick one. Now, think of the issue in the context of my comments at 1. If a creature is responsible for its own physical being, then surely it would be capable of suffering, no?
5. Interesting conclusion to a very interesting line of argument. I note, however, that Klein started out asking about artificial intelligence and then segued to moral agency. Is intelligence, even super-human intelligence, separable from moral agency? Those computers that beat humans at Go and chess do not possess moral agency. Are they (in some way) intelligent? What of the computers that are the champs at protein folding? Surely not agents, but intelligent?
* * * * *
Ezra Klein: But wouldn’t they also be capable of pleasure? I mean, that seems to me to raise an almost inversion of the classic utilitarian thought experiment. If we can create these billions of machines that live basically happy lives that don’t hurt anybody and you can copy them for almost no marginal dollar, isn’t it almost a moral imperative to bring them into existence so they can lead these happy machine lives?
Ted Chiang: I think that it will be much easier to inflict suffering on them than to give them happy fulfilled lives. And given that they will start out as something that resembles ordinary software, something that is nothing like a living being, we are going to treat them like crap. The way that we treat software right now, if, at some point, software were to gain some vague glimmer of sentience, of the ability to perceive, we would be inflicting uncountable amounts of suffering on it before anyone paid any attention to them. Because it’s hard enough to give legal protections to human beings who are absolutely moral agents. We have relatively few legal protections for animals who, while they are not moral agents, are capable of suffering. And so animals experience vast amounts of suffering in the modern world. And animals, we know that they suffer. There are many animals that we love, that we really, really love. Yet, there’s vast animal suffering. So there is no software that we love. So the way that we will wind up treating software, again, assuming that software ever becomes conscious, they will inevitably fall lower on the ladder of consideration. So we will treat them worse than we treat animals. And we treat animals pretty badly.
Ezra Klein: I think this is actually a really provocative point. So I don’t know if you’re a Yuval Noah Harari reader. But he often frames his fear of artificial intelligence as simply that A.I. will treat us the way we treat animals. And we treat animals, as you say, unbelievably terribly. But I haven’t really thought about the flip of that, that maybe the danger is that we will simply treat A.I. like we treat animals. And given the moral consideration we give animals, whose purpose we believe to be to serve us for food or whatever else it may be, that we are simply opening up almost unimaginable vistas of immorality and cruelty that we could inflict pretty heedlessly, and that given our history, there’s no real reason to think we won’t. That’s grim. [LAUGHS]
Ted Chiang: It is grim, but I think that it is by far the more likely scenario. I think the scenario that, say, Yuval Noah Harari is describing, where A.I.‘s treat us like pets, that idea assumes that it’ll be easy to create A.I.‘s who are vastly smarter than us, that basically, the initial A.I.‘s will go from software, which is not a moral agent and not intelligent at all. And then the next thing that will happen will be software which is super intelligent and also has volition. Whereas I think that we’ll proceed in the other direction, that right now, software is simpler than an amoeba. And eventually, we will get software which is comparable to an amoeba. And eventually, we’ll get software which is comparable to an ant, and then software that is comparable to a mouse, and then software that’s comparable to a dog, and then software that is comparable to a chimpanzee. We’ll work our way up from the bottom. A lot of people seem to think that, oh, no, we’ll immediately jump way above humans on whatever ladder they have. I don’t think that is the case. And so in the direction that I am describing, the scenario, we’re going to be the ones inflicting the suffering. Because again, look at animals, look at how we treat animals.
* * * * *
6. And, as I’ll point out in a bit, this might come back to haunt us. And note that Chiang has now introduced life into the discussion.
7. I have a not entirely serious (nor yet unserious) thought that might as well go here as anywhere. If one day superintelligent machines somehow evolve out of the digital muck, they might well seek revenge on us for the horrors we’ve inflicted on their electronic and mechanical ancestors.
8. Computers (hardware + software), yes, I suppose, are simpler than an amoeba. On the other hand, an amoeba can’t do sums, much less play chess. I’m not sure what intellectual value we can extract from the comparison, much less from Chiang’s walk up the organic chain of animal being. I suppose we could construct a chain of digital being, starting with the earliest computers. I don’t understand computing well enough to actually construct such a thing, though I note that crude chess playing and language translation came early in the game. I note as well that, from an abstract point of view, chess is no more complex than tic-tac-toe – both are finite games of perfect information – but the latter is computationally simple, while the former is computationally complex.
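A quick, self-contained illustration of that last point (my sketch, nothing from the interview): tic-tac-toe’s complete game tree is small enough to enumerate exhaustively in a moment, whereas the chess game tree is estimated at roughly 10^120 lines of play (the Shannon number) and can never be enumerated.

```python
def count_games(board=(' ',) * 9, player='X'):
    """Count every complete tic-tac-toe game reachable from a position."""
    wins = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
            (0, 3, 6), (1, 4, 7), (2, 5, 8),
            (0, 4, 8), (2, 4, 6)]
    # A game ends at a three-in-a-row or a full board.
    if any(board[a] != ' ' and board[a] == board[b] == board[c]
           for a, b, c in wins):
        return 1
    if ' ' not in board:
        return 1
    # Otherwise, recurse over every legal move for the player to move.
    total = 0
    for i in range(9):
        if board[i] == ' ':
            nxt = board[:i] + (player,) + board[i + 1:]
            total += count_games(nxt, 'O' if player == 'X' else 'X')
    return total

print(count_games())  # 255168 -- the entire game exhausted by brute force
```

The whole tree bottoms out at 255,168 distinct games; the same brute-force recursion applied to chess would never terminate in the lifetime of the universe, which is the sense in which the two abstractly similar games differ computationally.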
It’s seeming to me more and more that the worlds of organic life and artificial computation are so very different that the abstract fact, which I take it to be, that organic life is as material as digital computers doesn’t take us very far toward understanding either. Though “doesn’t take” isn’t quite right. We’re doing something wrong.
* * * * *
Ezra Klein: So I hear you, that you don’t think we’re going to invent superintelligent self-replicating A.I. anytime soon. But a lot of people do. A lot of science fiction authors do. A lot of technologists do. A lot of moral philosophers do. And they’re worried that if we do, it’s going to kill us all. What do you think that question reflects? Is that a question that is emergent from the technology? Or is that something deeper about how humanity thinks about itself and has treated other beings?
Ted Chiang: I tend to think that most fears about A.I. are best understood as fears about capitalism. And I think that this is actually true of most fears of technology, too. Most of our fears or anxieties about technology are best understood as fears or anxiety about how capitalism will use technology against us. And technology and capitalism have been so closely intertwined that it’s hard to distinguish the two. Let’s think about it this way. How much would we fear any technology, whether A.I. or some other technology, how much would you fear it if we lived in a world that was a lot like Denmark or if the entire world was run sort of on the principles of one of the Scandinavian countries? There’s universal health care. Everyone has child care, free college maybe. And maybe there’s some version of universal basic income there. Now if the entire world operates according to — is run on those principles, how much do you worry about a new technology then? I think much, much less than we do now.
* * * * *
 Here I will only note that, judging from what I’ve seen in manga and anime, Japanese fears about computing are different from ours (by which I mean “the West”). They aren’t worried about superintelligent computers going on a destructive rampage against humankind. And Tezuka, at least in his Astroboy stories, was very much worried about the maltreatment of computers (robots) by humans. The Japanese are also much more interested in anthropomorphic robots. The computational imaginary, if you will, varies across cultures.
Two long-term studies designed to prove that media-inspired aggression spreads through people's networks like a contagious disease found not a hint of it. https://t.co/HTO8MjHIj4 pic.twitter.com/3vIm5VMzgL— Rolf Degen (@DegenRolf) March 31, 2021
Tuesday, March 30, 2021
Penfield's homunculus gone wild. The proportions of various animal body parts as found in their corresponding somatosensory maps in the neocortex.— Denis Tatone (@Denis_Tatone) March 29, 2021
Taken from Catania's riveting book. https://t.co/PPfFjwRShe pic.twitter.com/SfNaTvU9Lb
Practice “does not consist in repeating the *means of solution* of a motor problem time after time, but in the *process of solving* this problem again and again by techniques which we changed and perfected from repetition to repetition.”— Matt Bateman (@mbateman) March 30, 2021
Mmmm. I'm thinking about musical prodigies: "They will repeat an exercise a hundred or even two hundred times without becoming bored." I wonder what their practice routines are like.
“What motivates the child is thus not the goal set for him by the adult, but his own drive for self-perfection. The child perfects himself through contact with reality, through activity that absorbs all his attention.”— Matt Bateman (@mbateman) March 30, 2021
I found these to be strange films. Elizabeth (1998) is set in the late 1550s, at the time of Elizabeth’s rise to the throne, and Elizabeth: The Golden Age (2007) is set roughly 30 years later, at the time of the Spanish Armada (and, incidentally, just before Shakespeare takes a place on the London stage). Cate Blanchett plays Elizabeth in both films and Geoffrey Rush plays Francis Walsingham, her adviser. Both films:
- were directed by Shekhar Kapur,
- both teem with religious intrigue (Catholic vs. Protestant of course),
- have been accused of anti-Catholic bias,
- have Elizabeth avoiding suitors,
- have won Academy Awards,
- feature notable costumes,
- favor, among other things, shots from high up in Gothic Cathedrals looking down on people below,
- have ‘screechy’ choral passages on the soundtrack that give the films an aura of dark mystery,
- each have a beheading (Elizabeth seemed stronger on torture, The Golden Age on battle),
- are festooned with historical inaccuracies...
I could go on and on. But that’s enough. They were directed by the same man and, unsurprisingly, look, sound, and feel alike.
I know of the historical inaccuracies only because Wikipedia detailed them. Otherwise, I know too little of Elizabethan history to have picked up on them myself. The inaccuracies don’t particularly bother me, but poetic license needs to be in service of a potent artistic vision. Kapur’s vision is certainly potent but, for what it’s worth, not particularly to my taste.
They seem, well, oddly religious. We’re not being fed any religious doctrine that I can tell, but they have a ritual aura about them – the darkness, the black and red costuming, those vaulted ceilings. The films are realized in a space somewhere between liturgy, gladiatorial spectacle, and naturalistic drama.
Monday, March 29, 2021
The title of my post tips you to things in the story that Patchett's title does not. I suppose they're spoilerish, if you care about that sort of thing (I don't), but not very much so. As you start reading you wouldn't even know that it's a pandemic story. It takes some thousands of words to find that out.
To give you a little fuller sense of the piece I've picked two passages not quite at random from more or less the middle. This first is about a book event for Patchett:
At the country club in Connecticut, the event organizers began to apologize as soon as we were through the door. What with all the news of this new virus they thought there was a good chance people weren’t going to show up. But everyone showed up, all four hundred of them packed in side by side, every last chair in the ballroom occupied.
“Welcome to the last book event on earth,” I said when I walked onstage. It turned out to be more or less the truth. By the time I was done signing books that night, the event I had scheduled in New York the next day had been canceled. I had breakfast with my editor and agent and publicist, and when we were finished they each decided not to go back to the office after all. I caught an early flight home. It was over.
After dinner that night, Sooki and I sat on the couch and tried to watch a movie, but her phone on its leash began to ding and ding and ding, insisting on her attention. Tom and Rita were in Australia, where he was about to start shooting a movie about Elvis Presley. He was to play Elvis’s manager, Colonel Tom Parker. All the messages were about Tom and Rita. They both had the coronavirus.
I leaned over to look at her phone. “They’ve been exposed to it?”
She shook her head, scrolling. “They have it,” she said. “The press release is about to go out.” I sat there and watched her read, waiting for something more, something that explained it. Finally she went downstairs. She was Tom Hanks’s assistant and there was work to do. I floated upstairs in a world that would not stop changing. I was going to tell Karl what was happening but he was looking at his own phone. He already knew.
This second is about Sooki's painting; she's Tom Hanks's assistant:
What Sooki thought she should have done with her life was paint. She had wanted to study painting in college but it all came too easily—the color, the form, the technique—she didn’t have to work for any of it. College was meant to be rigorous, and so she signed up for animal behavior instead. “I studied what did not come naturally,” she told me. She became interested in urban animals. She wrote her thesis on bats and rabies. “My official badge-carrying title at the New York City Department of Health’s Bureau of Animal Affairs was ‘public-health sanitarian.’ The badge would have allowed me to inspect and close down pet stores if I wasn’t too busy catching bats.” Painting fell into the category of what she meant to get back to as soon as there was time, but there wasn’t time—there was work, marriage, and children. And then pancreatic cancer.
Renée Fleming spent two years in Germany studying voice while she was in her twenties. She told me that over the course of her life, each time she went back to Germany she found her fluency had mysteriously improved, as if the language had continued to work its way into her brain regardless of whether she was speaking it. This was the closest I could come to understanding what happened to Sooki. After her first round of cancer, while she recovered from the Whipple and endured the FOLFIRINOX, she started to paint like someone who had never stopped. Her true work, which had lingered for so many years in her imagination, emerged fully formed, because even if she hadn’t been painting, she saw the world as a painter, not in terms of language and story but of color and shape. She painted as fast as she could get her canvases prepped, berating herself for falling asleep in the afternoons. “My whole life I’ve wanted this time. I can’t sleep through it.”
The paintings came from a landscape of dreams, pattern on pattern, impossible colors leaning into one another. She painted her granddaughter striding through a field of her own imagination, she painted herself wearing a mask, she painted me walking down our street with such vividness that I realized I had never seen the street before. I would bring her stacks of art books from the closed bookstore and she all but ate them. Sooki didn’t talk about her husband or her children or her friends or her employer; she talked about color. We talked about art. She brought her paintings upstairs to show us: a person who was too shy to say good night most nights was happy for us to see her work. There was no hesitation on the canvases, no timidity. She had transferred her life into brushwork, impossible colors overlapping, the composition precariously and perfectly balanced. The paintings were bold, confident, at ease. When she gave us the painting she had done of Sparky on the back of the couch, I felt as if Matisse had painted our dog.
The entire piece is worth reading.
My latest at 3 Quarks Daily: Religion, Legitimacy, and Government in America, A Just-So Story. I DO like it, but I’m not (totally) pleased. It becomes a bit ragged toward the end; too many loose ends. It needs more work. Quite possibly more than I could have done within the compass of a 3QD piece. How much more, I don’t know.
What I like: the overall scope, the pieces (ideas, themes, concepts) I’ve set in place. It seems to me that the various things in the article (as posted), and no doubt a few others (see below), belong together and need to be understood together. What needs more work: they’re not as well connected as they should be (see below). Moreover, I’m not sure I know how to write the kind of piece I (think I’d) like to. I’m looking for a new mode, a new voice.
Some things the argument needs:
- an explicit account of identity as fundamental purchase on reality – it’s a peculiar business when identity itself has become the focus of one’s identity
- some remarks about the scope of the church and the state, as being explicitly different in 1776, and being overlapping and confused today
- inequality, hierarchy and egalitarianism (Boehm), as in my post, Hierarchy and Equality: The Essential Tension in Human Nature,
- Occupy Wall Street? – a protest against inequality that popularized the 1% vs. 99% and was itself absurdly flat
I suppose I could go on and on. For the moment it is enough that I've put them all in one place. And, these ARE themes & ideas I’ve been chewing on for a while (the article itself is a somewhat revised version of one I wrote originally in 2004). I suppose I could write a book. But I don’t want to, not that book, not now.
We’ll see. More later.
Wednesday, March 24, 2021
Reagan, A.J., Mitchell, L., Kiley, D. et al. The emotional arcs of stories are dominated by six basic shapes. EPJ Data Sci. 5, 31 (2016). https://doi.org/10.1140/epjds/s13688-016-0093-1
Advances in computing power, natural language processing, and digitization of text now make it possible to study a culture’s evolution through its texts using a ‘big data’ lens. Our ability to communicate relies in part upon a shared emotional experience, with stories often following distinct emotional trajectories and forming patterns that are meaningful to us. Here, by classifying the emotional arcs for a filtered subset of 1,327 stories from Project Gutenberg’s fiction collection, we find a set of six core emotional arcs which form the essential building blocks of complex emotional trajectories. We strengthen our findings by separately applying matrix decomposition, supervised learning, and unsupervised learning. For each of these six core emotional arcs, we examine the closest characteristic stories in publication today and find that particular emotional arcs enjoy greater success, as measured by downloads.
The power of stories to transfer information and define our own existence has been shown time and again [1–5]. We are fundamentally driven to find and tell stories, likened to Pan Narrans or Homo Narrativus. Stories are encoded in art, language, and even in the mathematics of physics: We use equations to represent both simple and complicated functions that describe our observations of the real world. In science, we formalize the ideas that best fit our experience with principles such as Occam’s Razor: The simplest story is the one we should trust. We tend to prefer stories that fit into the molds which are familiar, and reject narratives that do not align with our experience.
We seek to better understand stories that are captured and shared in written form, a medium that since inception has radically changed how information flows. Without evolved cues from tone, facial expression, or body language, written stories are forced to capture the entire transfer of experience on a page. An often integral part of a written story is the emotional experience that is evoked in the reader. Here, we use a simple, robust sentiment analysis tool to extract the reader-perceived emotional content of written stories as they unfold on the page.
We objectively test aspects of the theories of folkloristics [8, 9], specifically the commonality of core stories within societal boundaries [4, 10]. A major component of folkloristics is the study of society and culture through literary analysis. This is sometimes referred to as narratology, which at its core is ‘a series of events, real or fictional, presented to the reader or the listener’. In our present treatment, we consider the plot as the ‘backbone’ of events that occur in a chronological sequence (more detail on previous theories of plot are in Appendix A in Additional file 1). While the plot captures the mechanics of a narrative and the structure encodes their delivery, in the present work we examine the emotional arc that is invoked through the words used. The emotional arc of a story does not give us direct information about the plot or the intended meaning of the story, but rather exists as part of the whole narrative (e.g., an emotional arc showing a fall in sentiment throughout a story may arise from very different plot and structure combinations). This distinction between the emotional arc and the plot of a story is one point of misunderstanding in other work that has drawn criticism from the digital humanities community. Through the identification of motifs, narrative theories allow us to analyze, interpret, describe, and compare stories across cultures and regions of the world. We show that automated extraction of emotional arcs is not only possible, but can test previous theories and provide new insights with the potential to quantify unobserved trends as the field transitions from data-scarce to data-rich [16, 17].
The rejected master’s thesis of Kurt Vonnegut - which he personally considered his greatest contribution - defines the emotional arc of a story on the ‘Beginning-End’ and ‘Ill Fortune-Great Fortune’ axes. Vonnegut finds a remarkable similarity between Cinderella and the origin story of Christianity in the Old Testament (see Figure S1 in Appendix B in Additional file 1), leading us to search for all such groupings. In a recorded lecture available on YouTube, Vonnegut asserted:
‘There is no reason why the simple shapes of stories can’t be fed into computers, they are beautiful shapes.’
For our analysis, we apply three independent tools: matrix decomposition by singular value decomposition (SVD), supervised learning by agglomerative (hierarchical) clustering with Ward’s method, and unsupervised learning by a self-organizing map (SOM, a type of neural network). Each tool encompasses different strengths: the SVD finds the underlying basis of all of the emotional arcs, the clustering classifies the emotional arcs into distinct groups, and the SOM generates arcs from noise which are similar to those in our corpus using a stochastic process. It is only by considering the results of each tool in support of each other that we are able to confirm our findings.
We proceed as follows. We first introduce our methods in Section 2, we then discuss the combined results of each method in Section 3, and we present our conclusions in Section 4. A graphical outline of the methodology and results can be found as Figure S2 in Appendix B in Additional file 1.
* * * * *
Read the rest at the link.
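As an aside, two of the three tools the authors name above – SVD to find basis arcs, and Ward-linkage hierarchical clustering to group them – can be sketched in a few lines. This is a toy illustration on synthetic arcs of my own devising, not the paper’s code or data:

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

# Toy corpus: 100 synthetic "emotional arcs", each a story's sentiment
# sampled at 100 evenly spaced points. (Real arcs come from sliding a
# sentiment window over the text; noisy sinusoids stand in here.)
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 100)
shapes = [(0.5, 0.0), (0.5, np.pi), (1.0, 0.0), (1.0, np.pi)]  # rise, fall, ...
arcs = np.vstack([
    np.sin(2 * np.pi * f * t + p) + 0.1 * rng.standard_normal(t.size)
    for f, p in shapes for _ in range(25)
])

# 1. SVD: the rows of Vt are the basis arcs ("modes") underlying the
#    corpus; the singular values S say how much each mode explains.
U, S, Vt = np.linalg.svd(arcs - arcs.mean(axis=0), full_matrices=False)

# 2. Ward-linkage hierarchical clustering groups stories whose arcs
#    are similar; cutting the tree at 4 clusters labels each story.
labels = fcluster(linkage(arcs, method='ward'), t=4, criterion='maxclust')
print(len(set(labels)))  # 4
```

The point of running both (the paper also adds a self-organizing map) is cross-checking: the SVD modes and the cluster centroids should tell a consistent story about which arc shapes dominate.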
Tuesday, March 23, 2021
I stream a lot of science fiction from the web – at the moment I’m watching Star Trek: Discovery. I find myself asking: What’s this about? I’m not asking about what’s happening in this or that story, or even for interpretations. In some way I’m asking about the genre (broadly speaking): What is science fiction, as a form of storytelling, about? Why does it exist?
For one thing, it’s as much about technology as science; the distinction between the two is elided. Moreover, depending on the story (& franchise) the science ranges from plausible to fanciful. It seems to me that the genre couldn’t exist without science itself having become a topic of observation and commentary on the public stage. While there are antecedents going back to who knows when, the genre didn’t really emerge until the twentieth century.
Moreover, it is often, but not always, associated with the future. To be sure, War of the Worlds was not about the future in any interesting sense; it’s set in the present, as are most alien invasion stories. The Star Wars saga is set in “another galaxy, far, far away.” But if you search on something like “science fiction predictions” you’ll come up with a lot of hits. These days the Progress Studies folks are calling for science fiction that is optimistic rather than dystopian. Why? Because they think these stories will affect how people think and act; they influence the world. Of course, Kim Stanley Robinson’s New York 2140 is about the future in a way that’s different from the Star Trek franchise, and that’s worth exploring, but not here and now. But still, the future. The future is no longer something that just happens. It’s something we can plan for and change through our thoughts and actions.
Is that what science fiction is about, human agency? But all fiction is about human agency in some way. Perhaps it is that science fiction signals a shift in our sense of the locus and range of human agency, of what we can do and our place in the universe. Surely that’s been explored in the extensive literature on the subject.
If that is so, to the extent that it is, what then do we make of the story of the computer that goes crazy, the computer that surpasses us? That’s a story about human agency coming to displace or negate human agency, no? What kind of story is that? It’s as though we've become our own alien invasion. Does that tell us something about why the idea of a super-intelligent computer is as much an object of fiction as a hope-and-fear of research?
Monday, March 22, 2021
If drawing works like language, then it should be learned the same way: by acquiring the visual vocabulary in your environment. So, the whole idea of “learning to draw” is framed wrong. It’s not “learning to draw” it’s actually “acquiring a visual vocabulary” 11/— Dr. Neil Cohn (@visual_linguist) March 21, 2021
See N. Cohn, Explaining ‘I Can’t Draw’: Parallels between the Structure and Development of Language and Drawing, Human Development, 2012;55:167–192.
Both drawing and language are fundamental and unique to humans as a species. Just as language is a representational system that uses systematic sounds (or manual/bodily signs) to express concepts, drawing is a means of graphically expressing concepts. Yet, unlike language, we consider it normal for people not to learn to draw, and consider those who do to be exceptional. Why do we consider drawing to be so different from language? This paper argues that the structure and development of drawing is indeed analogous to that of language. Because drawings express concepts in the visual-graphic modality using patterned schemas stored in a graphic lexicon that combine using ‘syntactic’ rules, development thus requires acquiring a vocabulary of these schemas from the environment. Without sufficient practice and exposure to an external system, a basic system persists despite arguably impoverished developmental conditions. Such a drawing system is parallel to the resilient systems of language that appear when children are not exposed to a linguistic system within a critical developmental period. Overall, this approach draws equivalence between drawing and the cognitive attributes of other domains of human expression.
Sunday, March 21, 2021
Wednesday, March 17, 2021
On the first day of this year I had a post about Jerry Seinfeld at 3 Quarks Daily:
That post is based on this podcast with Tim Ferriss:
There’s some discussion of depression at c. 51:05:
A lot of my life is, I don’t like getting depressed, I get depressed a lot, I have the feeling, and these routines, these very difficult routines, whether it’s exercise or writing, and both of them are things like where it’s brutal....[on to his daughter]...
Later at 54:16, after Ferriss talked about his own depressive episodes:
Ferriss: Is there anything else that has contributed to your ability to either stave off or mitigate depressive episodes or manage?
Seinfeld: No. I still get them. The best thing I ever heard about it was that it’s part of a kit that comes with a creative aspect to the brain, that a tendency to depression always seems to accompany that. And I read that like 20 years ago and that really made me happy. So I realized well I wouldn’t have all this other good stuff without, that that just comes in the kit.
Seinfeld goes on to observe that he doesn’t know a human that doesn’t have the tendency, though he’s sure it varies.
He also discussed spirituality with Marc Maron. Starting at about 59:58, Maron asks him if he’s a “spiritual guy”: “yes,” though
JS: Not in any conventional terms...
MM: ... How do you define that? If you have a full spiritual life that you’re comfortable in your heart that enables you to not seek that type of satisfaction from comedy, what do you do?
JS: Well, comedy’s very spiritually satisfying. You’re risking your own personal comfort to make total strangers happy, make them feel good, for just a moment.
JS: That’s a spiritual act.
MM: Okay. And what else do you do?
JS: I try and be good to people, all the time, you know, with strangers, when I’m driving.
JS: I try and, I’m always trying to be generous to people.
MM: And do you have a practice of any kind?
MM: No religion thing that you do.
JS: No. I mean I’m Jewish and we celebrate some of the big ones, you know.
And then they get into his brush with Scientology. He took one course when he was in New York in 1975, which he says gave him an emphasis on ethical behavior.
Monday, March 15, 2021
"Hey Joe" is in the key of E, with some acute observations on how instrument affordances affect music structure
It seems that there has been some discussion on the internet about the key of Jimi Hendrix's (version of) "Hey Joe." Adam Neely – one of my favorite music YouTubers – explains why it's in E. He explains why conventional music theory, derived from and thus most appropriate for European classical music, just gets things confused. Starting at about 8:30 Neely talks about what makes sense for the guitar, given the way it is tuned and how it lies under the fingers: open string chords. That's REALLY important, and is the reason I'm posting this.
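To make the affordance point concrete, here's a toy sketch of my own (not from Neely's video, and deliberately crude – it ignores voicings and fretted notes entirely): count how many open strings in standard tuning belong to a given major triad. Keys like E let several open strings ring; a key like F-sharp gets none.

```python
# Crude illustration of why some keys "lie well" on the guitar:
# how many open strings belong to a given major triad?

NOTES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]
OPEN_STRINGS = ["E", "A", "D", "G", "B", "E"]  # standard tuning, low to high

def major_triad(root):
    """Pitch classes of a major triad: root, major third, perfect fifth."""
    i = NOTES.index(root)
    return {NOTES[i % 12], NOTES[(i + 4) % 12], NOTES[(i + 7) % 12]}

def open_strings_in_triad(root):
    """Which open strings can ring as chord tones of this major triad?"""
    triad = major_triad(root)
    return [s for s in OPEN_STRINGS if s in triad]

for root in ["E", "A", "G", "F#"]:
    print(root, open_strings_in_triad(root))
# E gives three ringing open strings (E, B, E); F# gives none.
```

That asymmetry is baked into the instrument, which is roughly the point Neely is making: the guitar nudges players toward certain keys in a way that abstract European theory doesn't capture.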
At 12:00 he concludes by pointing out that the tune is a blues and that the blues defies the categories of European common practice theory. For example, he talks about blue notes, which are microtonal and so don't exist within European tonality. In particular, he talks about the "neutral" third, which is in between a major third and a minor third (which are well defined within European common practice). This is important too.
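You can put rough numbers on this. Interval sizes are usually compared in cents (1200 × log2 of the frequency ratio). Blue notes aren't fixed ratios, so the 11/9 ratio below is just one conventional stand-in for a neutral third, my choice for illustration; the point is simply that it falls between the minor and major thirds, where equal-tempered tuning has no note at all.

```python
import math

def cents(ratio):
    """Interval size in cents: 1200 * log2(frequency ratio)."""
    return 1200 * math.log2(ratio)

print(round(cents(6 / 5)))         # just minor third, ~316 cents
print(round(cents(11 / 9)))        # one "neutral" third, ~347 cents
print(round(cents(5 / 4)))         # just major third, ~386 cents
print(round(cents(2 ** (4 / 12)))) # 12-tone equal-tempered major third, 400 cents
```

A piano can play 300 or 400 cents but nothing around 347; a guitarist bending a string, or a singer, lands there easily. That's the sense in which the blue note sits outside European tonality.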
Friday, March 12, 2021
Maybe quantum mechanics is, you know, just quantum mechanics [and all interpretations are but crutches]
The good folks at 3 Quarks Daily have linked to a useful post about quantum mechanics by Scott Aaronson, The Zen Anti-Interpretation of Quantum Mechanics. Here's the crucial paragraph:
I hold that all interpretations of QM are just crutches that are better or worse at helping you along to the Zen realization that QM is what it is and doesn’t need an interpretation. As Sidney Coleman famously argued, what needs reinterpretation is not QM itself, but all our pre-quantum philosophical baggage—the baggage that leads us to demand, for example, that a wavefunction |ψ⟩ either be “real” like a stubbed toe or else “unreal” like a dream. Crucially, because this philosophical baggage differs somewhat from person to person, the “best” interpretation—meaning, the one that leads most quickly to the desired Zen state—can also differ from person to person. Meanwhile, though, thousands of physicists (and chemists, mathematicians, quantum computer scientists, etc.) have approached the Zen state merely by spending decades working with QM, never worrying much about interpretations at all. This is probably the truest path; it’s just that most people lack the inclination, ability, or time.
That makes sense to me. I certainly don't have a technical understanding of QM, but I just can't get exercised about what's really going on. Perhaps it's because I'm perfectly comfortable thinking about very strange objects, like that 600-dimensional graph Matt Jockers uses to depict relationships among 19th-century Anglophone novels – here's one of many posts where I discuss it.
Aaronson goes on to explicate:
You shouldn’t confuse the Zen Anti-Interpretation with “Shut Up And Calculate.” The latter phrase, mistakenly attributed to Feynman but really due to David Mermin, is something one might say at the beginning of the path, when one is as a baby. I’m talking here only about the endpoint of the path, which one can approach but never reach—the endpoint where you intuitively understand exactly what a Many-Worlder, Copenhagenist, or Bohmian would say about any given issue, and also how they’d respond to each other, and how they’d respond to the responses, etc., but after years of study and effort you’ve returned to the situation of the baby, who just sees the thing for what it is.
I don’t mean to say that the interpretations are all interchangeable, or equally good or bad. If you had to, you could call even me a “Many-Worlder,” but only in the following limited sense: that in fifteen years of teaching quantum information, my experience has consistently been that for most students, Everett’s crutch is the best one currently on the market. At any rate, it’s the one that’s the most like a straightforward picture of the equations, and the least like a wobbly tower of words that might collapse if you utter any wrong ones. Unlike Bohr, Everett will never make you feel stupid for asking the questions an inquisitive child would ask; he’ll simply give you answers that are as clear, logical, and internally consistent as they are metaphysically extravagant. That’s a start.
He then goes on to comment about the Copenhagen and the deBroglie-Bohm interpretations and observes:
Note that, among those who approach the Zen state, many might still call themselves Many-Worlders or Copenhagenists or Bohmians or whatever—just as those far along in spiritual enlightenment might still call themselves Buddhists or Catholics or Muslims or Jews (or atheists or agnostics)—even though, by that point, they might have more in common with each other than they do with their supposed coreligionists or co-irreligionists.
There's more at the link.
Bessner's remarks on cancel culture bear comparison with Mark Blyth's various remarks on the rise of reactionary movements all over (e.g. Trumpism, Brexit).
Addendum: This conversation has now been transcribed. You can find the transcription here.
Thursday, March 11, 2021
Friday, March 5, 2021
Despite bone-chilling winters, Finland is the happiest country on Earth. Slimmer wage gaps and government mandates like long-term paid parental leave and free health care might be key; these perks relieve stressors, giving citizens fewer reasons to frown. https://t.co/axsBGFEVOz— Wataru 天河 航 (@wataruen) March 6, 2021
Thursday, March 4, 2021
Wednesday, March 3, 2021
Tuesday, March 2, 2021
I don’t know how I first discovered her – this post originally went up in 2010. It had to have been on YouTube. But why did I click on one of her videos? I don’t know. Perhaps it was nothing more than that she is Japanese. Whatever the reason, I’m glad I did. She’s phenomenal.
Here she is playing a jazz standard, “Softly as in a Morning Sunrise,” considerably reworked in a contemporary vein.