Thursday, January 31, 2019

William Benzon, 1912-1998: He Went Out Swinging and Smooth Shaven [born on Jan 31, 1912]


My father, William Benzon, was born on Jan. 31, 1912 in Baltimore, MD, and died November 21, 1998 in Allentown, PA. His parents were Danish immigrants. For about the first six years of his life he lived with his parents and two older sisters, Karen and Signe, on Curtis Bay in Baltimore, in a house which had formerly been a yacht club. The family then moved to Longwood St. He attended Boys' Latin School and, according to his high school yearbook, was regarded as an intellectual and as the 2nd brightest in his class. He was on the boxing, fencing, and football teams, and his favorite expression was "Well blow me down."

He did two years at the University of Virginia, where he managed to run up gambling debts that his father had to satisfy. He completed his college education at Johns Hopkins, graduating in 1934 with a degree in chemical engineering. He then went to work for Bethlehem Steel (initially at Sparrows Point), where he spent his entire career. He entered their "loop" program, which was for hotshot college graduates the company wanted to nurture. I don't know what his initial duties were or how he ended up in the mining division of the company (Bethlehem Mines Corp.). He spent most of his career there and rose to Superintendent of Coal Preparation, a position that was created for him. He stayed in that post until his retirement in 1974, and continued to consult on coal preparation afterward.

Early in his career he moved to Johnstown, PA, where Bethlehem Mines had its headquarters. There he met my mother, Elizabeth Tredennick, and married her in 1940. I was born in 1947 and my sister in 1951.

What was he like?

He was a brilliant man, attaining an international reputation in his field, coal preparation. He was also a loving father, expressing his love in various ways, including making some very fine things for me and my sister. He made furniture for my sister’s dolls, including a high chair and a very elegant play pen with the letters of the alphabet cut into the slats. He made me a gorgeous Indian headdress – feathers of various kinds, ermine tails, an abalone shell ornament, a beaded head band – and various swords and knives as appropriate for various Halloween costumes. He encouraged my sister and me in whatever we wanted to do. When I went off to Hopkins and grew long hair, a beard and mustache, that was OK. And so was going to graduate school in something as impractical as English literature. When, as an adult, I needed to borrow money from him because I was out of work, he loaned me money (even after he had retired and was obviously living on a fixed income). He never ever suggested that I "face reality" and move into the corporate world. He knew my intellectual work was important and helped me pursue it, as he helped my sister pursue her interest in poetry.

He was an intellectual and books were important to him; he had many of them, including many he inherited from his father. I remember a number of Christmas seasons when he read Dickens' A Christmas Carol to the family after dinner, spread over several evenings. And he certainly read stories to me at bedtime when I was a child – I remember him reading to me from Mark Twain, Rafael Sabatini, and others. I was particularly struck by his ability to read dialog so naturally, with expression, the way people would actually have said it. He spent a great deal of time helping me and my sister with our homework. He never just told me answers or worked problems for me. He always asked questions designed to lead me to the answers myself.

My father had an excellent sense of humor, which he slyly attributed to his Danish heritage. He loved Mark Twain, Charles Dickens, Lewis Carroll, Jerome K. Jerome, and the Marx Brothers. He was also quite fond of Victor Borge. For one thing, Borge was from Denmark, where my father's parents were born and raised. Borge's humor was often of a linguistic nature, and my father was interested in language (he owned books by Otto Jespersen, the Danish scholar of the English language, and H. L. Mencken). Borge was a musician, and much of his humor involved committing mayhem on various pieces of classical and not-so-classical music.

An Athlete - Golf

He was a good and dedicated athlete. Beyond his high-school sports he was an excellent swimmer and loved the sea. He was also a cyclist (mostly in his younger years, in Baltimore and in his early days in Johnstown). Above all, though, there was golf.

He took up the sport as an adult and pursued it passionately. During his prime – say from 30 through 55 – he had a single-digit handicap, at times as low as two or three. He kept systematic notes about his game throughout his entire golfing career. One winter he painted golf balls with red nail polish and golfed in the snow. He experimented with putters he either built himself or modified from putters he'd bought. He also had ideas about custom golf shoes, which he left half finished (my sister and I found the half-completed shoes in the basement).

One of the highlights of his life was playing golf at St. Andrews in Scotland (he shot 81 on one round). He read a great deal about the sport, knew its history backwards and forwards, and had a good collection of golf books, including a classic written by a relative of Charles Darwin. It is absolutely clear to me that, as I use music to balance out my intellectuality, so my father used golf to achieve balance in his life.

Caddie's card, tickets, and scorecards for two rounds at St. Andrews

The origins of the Anthropocene and of the modern world in the post-1492 death of indigenous peoples in the Americas

Alexander Koch, Chris Brierley, Mark M. Maslin, Simon L. Lewis, Earth system impacts of the European arrival and Great Dying in the Americas after 1492, Quaternary Science Reviews, Volume 207, 1 March 2019, Pages 13-36, https://doi.org/10.1016/j.quascirev.2018.12.004.
Highlights
  • Combines multiple methods estimating pre-Columbian population numbers.
  • Estimates European arrival in 1492 led to 56 million deaths by 1600.
  • Large population reduction led to reforestation of 55.8 Mha and 7.4 Pg C uptake.
  • 1610 atmospheric CO2 drop partly caused by indigenous depopulation of the Americas.
  • Humans contributed to Earth System changes before the Industrial Revolution.
Abstract: Human impacts prior to the Industrial Revolution are not well constrained. We investigate whether the decline in global atmospheric CO2 concentration by 7–10 ppm in the late 1500s and early 1600s, which globally lowered surface air temperatures by 0.15 °C, was generated by natural forcing or was a result of the large-scale depopulation of the Americas after European arrival, subsequent land use change and secondary succession. We quantitatively review the evidence for (i) the pre-Columbian population size, (ii) their per capita land use, (iii) the post-1492 population loss, (iv) the resulting carbon uptake of the abandoned anthropogenic landscapes, and then compare these to potential natural drivers of global carbon declines of 7–10 ppm. From 119 published regional population estimates we calculate a pre-1492 CE population of 60.5 million (interquartile range, IQR 44.8–78.2 million), utilizing 1.04 ha land per capita (IQR 0.98–1.11). European epidemics removed 90% (IQR 87–92%) of the indigenous population over the next century. This resulted in secondary succession of 55.8 Mha (IQR 39.0–78.4 Mha) of abandoned land, sequestering 7.4 Pg C (IQR 4.9–10.8 Pg C), equivalent to a decline in atmospheric CO2 of 3.5 ppm (IQR 2.3–5.1 ppm CO2). Accounting for carbon cycle feedbacks plus LUC outside the Americas gives a total 5 ppm CO2 additional uptake into the land surface in the 1500s compared to the 1400s, 47–67% of the atmospheric CO2 decline. Furthermore, we show that the global carbon budget of the 1500s cannot be balanced until large-scale vegetation regeneration in the Americas is included. The Great Dying of the Indigenous Peoples of the Americas resulted in a human-driven global impact on the Earth System in the two centuries prior to the Industrial Revolution.
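(A quick sanity check on the abstract's arithmetic, assuming the standard conversion factor of roughly 2.13 Pg C per ppm of atmospheric CO2, a figure not given in the abstract itself: 7.4 Pg C / 2.13 Pg C per ppm ≈ 3.5 ppm, which matches the decline they report.)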

Tuesday, January 29, 2019

Invariance principles of brain connectivity


Monday, January 28, 2019

And this is progress in computing?


Ramiforms in white and green


“NATURALIST” criticism, NOT “cognitive” NOT “Darwinian” – A Quasi-Manifesto

Reposted from The Valve, 31 March 2010, this is an informal manifesto for whatever it is I'm up to, and why I've come to think of it as naturalist criticism. I could link a lot more into this piece now, especially my recent work on ring composition and digital humanities, but I won't. It's a decent map to my work in literature, a place-holder until I have time to do something a bit more formal.



You mean a quasifesto?

Shoo, get out . . .


Fact is, if I’d known then what I know now, I’d never have thought of myself as being in the business of bringing cognitive science to literary criticism, much less represented myself to the world in that way. But I didn’t (know) and I did (represent), so now I seem stuck with the moniker. I’d like to shake it off.

When I finally decided to publish a programmatic and methodological statement, “Literary Morphology: Nine Propositions in a Naturalist Theory of Form,” I adopted naturalism as a label. Fact is, I’d just as soon not think of it as anything but the study of literature. But we live in an age of intellectual brands, so I chose “naturalism” as mine.

Yes, I know that “the natural” is somewhat problematic, but you’ll just have to get past that. No label is perfect and I’m not about to coin a new term. Assuming you can struggle past the word, what does naturalism suggest to you? To me the term conjures up a slightly eccentric investigator wandering about the world examining flora and fauna, writing up notes, taking photos, making drawings, and perhaps even collecting specimens. That feels right to me, except that I’m nosing about poems, plays, novels, films, and other miscellaneous things. Beyond that I’d like the term to suggest some sense of literature as thoroughly in and a part of the world. There’s only one world and literature exists in it.

Beyond that, what does the term suggest? . . . Nothing, that’s what I’d like it to suggest, nothing. But whatever this naturalist criticism is or might become, that it has some kind of name suggests that it’s probably not myth criticism, New Criticism, Marxist criticism, psychoanalytic, deconstructive, archetypal, phenomenological, reader response, or any of the other existing critical brands.

What do the terms “cognitive criticism” or “cognitive rhetoric” suggest? Like many of those other labels, they suggest some body of supplementary knowledge and practice that one brings to the study of literature. Just exactly what the supplementary body is, that may not be terribly clear. But that doesn’t matter. The terms emphasize and draw your attention to the supplement. The same with “Darwinian literary criticism,” only vaguer. The only thing that’s clear about that label is a towering intellectual figure whose work had nothing to do with the study of literature.

None of this should be taken to imply that I’ve lost interest in the newer psychologies, as I like to call them. I haven’t. I believe that future literary studies must take them into account, and other theories, concepts and models as well. I just don’t want to stick those names in my brand label.

Anything else?

Yes, I put the study of form at the center of the enterprise.

So why not label yourself a formalist?

Because that’s already a term of art, and it’s too strongly identified with approaches that treat the text as an autonomous object more or less independent of reader, author, and the larger world. For that matter, many formalist critics are more interested in textual autonomy than in systematically analyzing and describing the manifold formal aspects of literary texts. In the end, they’re as greedy after meaning as most other critics are.

OK, so what do you have in mind with this naturalist criticism that emphasizes form?

Good question. And I’m afraid my best answer is a bit embarrassing. I figure the best way to scope out any literary program is to look at practical criticism. What does it do with an actual text, at some length and in detail? And the best examples I know are, umm, err, from my own work. And that, as I said, is embarrassing. I’d rather point out someone else’s work.

Really? There’s nothing else? Your work is de novo, so to speak?

Well, everyone has precursors and models. I was certainly influenced by the structuralists, Roman Jakobson, Edmund Leach, Jean Piaget, and Lévi-Strauss above all. For that matter I should probably know narratology better than I do. And I’ve enjoyed the detailed analytic work that David Bordwell’s posted on his blog, though I’ve not gotten around to reading any of his books except Making Meaning, which is an analysis and critique of poststructuralist film criticism. If more people analyzed literary texts the way Bordwell analyzes film, that would be good.
[More specifically, check out, e.g. Bordwell’s post on “Tell, don’t show,” or this post on “Kurosawa’s early spring,” other posts tagged as “Film technique,” and this essay, “Anatomy of the Action Picture.”]
OK OK, I get the idea. I’m skeptical, but go on. What’s your best analytic work?

I suppose my recent essays on “Kubla Khan” and “This Lime-Tree Bower My Prison,” but they’re a bit of a slog, long and detailed, with lots of diagrams. I like this old piece on Sir Gawain and the Green Knight too, and this Shakespeare piece, which looks at three plays, Much Ado About Nothing, Othello, and The Winter’s Tale, and even uses evolutionary psychology.

Perhaps the best place to start would be my recent post: Two Rings in Fantasia: Nutcracker and Apprentice. It focuses on form and it’s got some nice screen shots too. It’s relatively short and pretty much free of abstract critical apparatus, though there’s an addendum that heads off into the abstractosphere. Yeah, it’s about film, not literature, but that’s a secondary issue that has no bearing on my main point.

As the title suggests, I consider two episodes in Disney’s Fantasia, the Nutcracker Suite and the Sorcerer’s Apprentice. There are three things going on in that post: 1) the analysis and description of so-called ring forms in the two episodes, which is my main focus, 2) a brief characterization of the spatial worlds in the two episodes, and 3) some informal remarks on what those episodes might mean.

A ring form is a text in which episodes mirror one another around a central point or episode, thus: A, B, C . . . C’, B’, A’. One of these episodes has a conventional narrative (Sorcerer’s) while the other does not. But both are rings. Pending strong arguments to the contrary, I regard that as a fact about them. The ring structure really is there; it’s not something I’m reading into the episodes. The core work in the post is to report the relatively straightforward analytical work needed to establish ring structure as a descriptive fact, as the little sketch below suggests.
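To make that concrete, here is a toy sketch of mine (not anything from the original post; the episode labels are made up) of what verifying a ring amounts to once you've segmented a text into labeled episodes:

    # Toy check for ring composition: a sequence of episode labels
    # A, B, C, ..., X, ..., C', B', A' forms a ring if the labels
    # mirror one another around the central episode.
    def is_ring(episodes):
        n = len(episodes)
        # compare each episode with its mirror counterpart
        return all(episodes[i] == episodes[n - 1 - i] for i in range(n // 2))

    print(is_ring(["A", "B", "C", "X", "C", "B", "A"]))  # True
    print(is_ring(["A", "B", "C", "A", "B", "C"]))       # False

The analytic labor, of course, is in the segmenting and labeling; the mirror check itself is trivial. That's part of the point: once the description is done, the structure is there for anyone to verify.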

In the course of looking at the ring structure I also offer some remarks on the structure of the visual worlds in the two episodes and how the virtual camera moves through them. This plays no role in my argument about ring structure, but it is a formal feature of the episodes and is important in the larger scope of the whole film. Each of the eight episodes has a different theme and subject matter, and each has a different animation style. Somewhere “between” the style and the subject matter you have visual space and movement through it.

Finally, I offer some interpretive comments, some observations about what these episodes might mean. In the case of Nutcracker those suggestions lean toward the Freudian, though I suppose some might argue it has no meaning at all, that it’s just a bunch of pretty pictures set to music. Sorcerer’s is a different case, because here we have an actual story. I suppose I could’ve gone Freudian and worked on Father and Son, but I got stuck on all those industrious brooms parading across the screen and ended up giving a nod toward the Marxists.

Well, OK, OK. I’ve read the paper and it’s a nice piece of work.

Thank you.

But I don’t see anything new in kind.

Well, yes, I didn’t invent anything, but . . . .

Anyone could do it. It’s well within range of a good undergraduate . . .

And did you notice it’s not jam-packed with a lot of conceptual apparatus?

That’s what I mean, it’s almost as if anyone could do it.

Well, I rather doubt that. You do have to have a “feel” for the job, and that takes time and experience. You have to work with texts (or films) to learn how to work with them. You can’t get it by reading books and articles. But the absence of a lot of apparatus, that’s a feature, not a bug. In any event the thing to notice is that formal analysis and description is at the center of the piece.

But that’s not a central focus of practical criticism in the discipline as it is currently practiced. Nor does it seem to be on the radar screen for the cognitivists and the Darwinians. They still treat meaning as the main event.

Group Minds at Wikipedia?


Simon DeDeo, Group Minds and the Case of Wikipedia

(Submitted on 8 Jul 2014)

Abstract: Group-level cognitive states are widely observed in human social systems, but their discussion is often ruled out a priori in quantitative approaches. In this paper, we show how reference to the irreducible mental states and psychological dynamics of a group is necessary to make sense of large scale social phenomena. We introduce the problem of mental boundaries by reference to a classic problem in the evolution of cooperation. We then provide an explicit quantitative example drawn from ongoing work on cooperation and conflict among Wikipedia editors. We show the limitations of methodological individualism, and the substantial benefits that come from being able to refer to collective intentions and attributions of cognitive states of the form "what the group believes" and "what the group values".

Comments: 18 pages, 4 figures
Cite as: arXiv:1407.2210 [q-bio.NC]

* * * * *


Simon DeDeo, Collective Phenomena and Non-Finite State Computation in a Human Social System

PLoS ONE 9(6): e101511.
doi: 10.1371/journal.pone.0101511

Abstract: We investigate the computational structure of a paradigmatic example of distributed social interaction: that of the open-source Wikipedia community. We examine the statistical properties of its cooperative behavior, and perform model selection to determine whether this aspect of the system can be described by a finite-state process, or whether reference to an effectively unbounded resource allows for a more parsimonious description. We find strong evidence, in a majority of the most-edited pages, in favor of a collective-state model, where the probability of a “revert” action declines as the square root of the number of non-revert actions seen since the last revert. We provide evidence that the emergence of this social counter is driven by collective interaction effects, rather than properties of individual users.
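To make the square-root claim concrete, here's a minimal sketch (my illustration; the constant is a made-up placeholder, not the paper's fitted model):

    import numpy as np

    # Collective-state model, roughly: the probability of a revert
    # declines as one over the square root of the number of
    # non-revert actions seen since the last revert.
    c = 0.3  # hypothetical scale constant
    for n in range(1, 10):
        print(f"{n} non-reverts since last revert -> P(revert) ~ {c / np.sqrt(n):.3f}")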

* * * * *

Tuesday, January 22, 2019

Formal limits on machine learning

Ashutosh Jogalekar, Open Borders, 3 Quarks Daily, Jan 21, 2019:
The continuum hypothesis is related to two different kinds of infinities found in mathematics. When I first heard the fact that infinities can actually be compared, it was as if someone had cracked my mind open by planting a firecracker inside it. There is the first kind of infinity, the “countable infinity”, which is defined as an infinite set that maps one-to-one with the set of natural numbers. Then there’s the second kind of infinity, the “uncountable infinity”, a gnarled forest of limitless complexity, defined as an infinity that cannot be so mapped. Real numbers are an example of such an uncountable infinity. One of the staggering results of mathematics is that the infinite set of real numbers is somehow “larger” than the infinite set of natural numbers. The German mathematician Georg Cantor supplied the proof of the uncountable nature of the real numbers, sometimes called the “diagonal proof”. It is like a beautiful gem that has suddenly fallen from the sky into our lap; reading it gives one intense pleasure.

The continuum hypothesis asks whether there is an infinity whose size is between the countable infinity of the natural numbers and the uncountable infinity of the real numbers. The mathematicians Kurt Gödel and – more notably – Paul Cohen were unable to prove whether the hypothesis is correct or not, but they were able to prove something equally or even more interesting: that the continuum hypothesis cannot be decided one way or another within the axiomatic system of number theory. Thus, there is a world of mathematics in which the hypothesis is true, and there is one in which it is false. And our current understanding of mathematics is consistent with both these worlds.

Fifty years later, the computational mathematicians have found a startling and unexpected connection between the truth or lack thereof of the continuum hypothesis and the idea of learnability in machine learning. Machine learning seeks to learn the details of a small set of data and make correlative predictions for larger datasets based on these details. Learnability means that an algorithm can learn parameters from a small subset of data and accurately make extrapolations to the larger dataset based on these parameters. The recent study found that whether learnability is possible or not for arbitrary, general datasets depends on whether the continuum hypothesis is true. If it is true, then one will always find a subset of data that is representative of the larger, true dataset. If the hypothesis is false, then one will never be able to pick such a dataset. In fact in that case, only the true dataset represents the true dataset, much as only an accused man can best represent himself.

This new result extends both set theory and machine learning into urgent and tantalizing territory. If the continuum hypothesis is false, it means that we will never be able to guarantee being able to train our models on small data and extrapolate to large data.
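An aside before the news coverage: Cantor's diagonal construction mentioned above is short enough to act out in code (a sketch of mine; the eight "sequences" are invented and shown only on their first eight bits):

    # Cantor's diagonal argument in miniature. Suppose someone claims to
    # have enumerated all infinite binary sequences; here are the first
    # eight, truncated to eight bits each.
    rows = [
        [0, 1, 0, 1, 0, 1, 0, 1],
        [1, 1, 1, 1, 0, 0, 0, 0],
        [0, 0, 1, 1, 0, 0, 1, 1],
        [1, 0, 0, 0, 0, 0, 0, 0],
        [0, 1, 1, 0, 1, 0, 0, 1],
        [1, 1, 0, 0, 1, 1, 0, 0],
        [0, 0, 0, 1, 1, 1, 0, 1],
        [1, 0, 1, 0, 1, 0, 1, 0],
    ]
    # Flip the diagonal: the result differs from row k at position k,
    # so it cannot be any row in the enumeration.
    diag = [1 - rows[k][k] for k in range(len(rows))]
    print(diag)  # [1, 0, 0, 1, 0, 0, 1, 1]

Run the same construction against any claimed enumeration and you always get a sequence it missed, which is why the real numbers cannot be counted.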
Davide Castelvecchi, Machine learning leads mathematicians to unsolvable problem, Nature, Jan 9, 2019:
In the latest paper, Yehudayoff and his collaborators define learnability as the ability to make predictions about a large data set by sampling a small number of data points. The link with Cantor’s problem is that there are infinitely many ways of choosing the smaller set, but the size of that infinity is unknown.

The authors go on to show that if the continuum hypothesis is true, a small sample is sufficient to make the extrapolation. But if it is false, no finite sample can ever be enough. In this way they show that the problem of learnability is equivalent to the continuum hypothesis. Therefore, the learnability problem, too, is in a state of limbo that can be resolved only by choosing the axiomatic universe.

The result also helps to give a broader understanding of learnability, Yehudayoff says. “This connection between compression and generalization is really fundamental if you want to understand learning.”
And here's the research paper: Shai Ben-David, Pavel Hrubeš, Shay Moran, Amir Shpilka & Amir Yehudayoff, Learnability can be undecidable, Nature Machine Intelligence 1, 44–48 (2019):
Abstract: The mathematical foundations of machine learning play a key role in the development of the field. They improve our understanding and provide tools for designing new learning paradigms. The advantages of mathematics, however, sometimes come with a cost. Gödel and Cohen showed, in a nutshell, that not everything is provable. Here we show that machine learning shares this fate. We describe simple scenarios where learnability cannot be proved nor refuted using the standard axioms of mathematics. Our proof is based on the fact that the continuum hypothesis cannot be proved nor refuted. We show that, in some cases, a solution to the ‘estimating the maximum’ problem is equivalent to the continuum hypothesis. The main idea is to prove an equivalence between learnability and compression.
And the conclusion:
The main result of this work is that the learnability of the family of sets ℱ∗ over the class of probability distributions 𝒫∗ is undecidable. While learning ℱ∗ over 𝒫∗ may not be directly related to practical machine learning applications, the result demonstrates that the notion of learnability is vulnerable. In some general yet simple learning frameworks there is no effective characterization of learnability. In other words, when trying to understand learnability, it is important to pay close attention to the mathematical formalism we choose to use.

How come learnability can neither be proved nor refuted? A closer look reveals that the source of the problem is in defining learnability as the existence of a learning function rather than the existence of a learning algorithm. In contrast with the existence of algorithms, the existence of functions over infinite domains is a (logically) subtle issue.

The advantage of the current standard definitions (that use the language of functions) is that they separate the statistical or information-theoretic issues from any computational considerations. This choice plays a role in the fundamental characterization of PAC learnability by the VC dimension. Our work shows that this set-theoretic view of learnability has a high cost when it comes to more general types of learning.
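For the curious, the flavor of the "estimating the maximum" problem is easy to convey with a toy (my rendering, not the paper's formal EMX setup): from finite samples of an unknown distribution, pick the member of a family of sets that appears to capture the most probability mass.

    import numpy as np

    # Toy "estimating the maximum": the family is a hypothetical grid of
    # intervals [a, a + 0.2]; the distribution is unknown to the learner,
    # who sees only samples.
    rng = np.random.default_rng(1)
    samples = rng.uniform(0, 1, size=1000)
    starts = np.linspace(0, 0.8, 81)
    empirical = [(a, np.mean((samples >= a) & (samples <= a + 0.2))) for a in starts]
    best_a, best_mass = max(empirical, key=lambda t: t[1])
    print(best_a, best_mass)  # any such interval captures ~0.2 of a uniform distribution

The paper's undecidability result concerns families over infinite domains, where whether finite samples suffice turns on the continuum hypothesis; nothing so exotic happens in this finite toy.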

Sunday, January 20, 2019

Syncho-Dog


Saturday, January 19, 2019

Friday Fotos [a day late]: Food (at the Malibu)





Body sway and synchronization in musical performance

Andrew Chang, Haley E. Kragness, Steven R. Livingstone, Dan J. Bosnyak & Laurel J. Trainor, Body sway reflects joint emotional expression in music ensemble performance, Scientific Reports 9: 205 (2019) DOI:10.1038/s41598-018-36358-4.
Abstract: Joint action is essential in daily life, as humans often must coordinate with others to accomplish shared goals. Previous studies have mainly focused on sensorimotor aspects of joint action, with measurements reflecting event-to-event precision of interpersonal sensorimotor coordination (e.g., tapping). However, while emotional factors are often closely tied to joint actions, they are rarely studied, as event-to-event measurements are insufficient to capture higher-order aspects of joint action such as emotional expression. To quantify joint emotional expression, we used motion capture to simultaneously measure the body sway of each musician in a trio (piano, violin, cello) during performances. Excerpts were performed with or without emotional expression. Granger causality was used to analyze body sway movement time series amongst musicians, which reflects information flow. Results showed that the total Granger-coupling of body sway in the ensemble was higher when performing pieces with emotional expression than without. Granger-coupling further correlated with the emotional intensity as rated by both the ensemble members themselves and by musician judges, based on the audio recordings alone. Together, our findings suggest that Granger-coupling of co-actors’ body sways reflects joint emotional expression in a music ensemble, and thus provide a novel approach to studying joint emotional expression.
Granger causality = "is a statistical estimation of the degree to which one time series is predicted by the history of another time series, over and above prediction by its own history. The larger the value of Granger causality, the better the prediction, and the more information that is flowing from one time series to another."

From the introduction:
The performing arts represent one area in which joint emotional expression is essential. Emotional expression is a central goal in music performances [15,16], and performers often depart from the notated score to communicate emotions and musical structure by introducing microvariations in intensity and speed [17,18]. Music ensemble performers therefore must coordinate not only their actions, but also their joint expressive goals [19]. For musicians in an ensemble, sharing a representation of a global performance outcome facilitates joint music performance [20,21]. Interpersonal event-to-event temporal precision has been widely used as a local index of sensorimotor aspects of joint action [22,23,24]. However, this method is likely insufficient to capture higher-order aspects of joint performance, which may involve stylistic asynchronies, complex leader-follower dynamics, and expressive variations in timbre, phrasing, and dynamics, which take place over longer time scales and are not necessarily reflected by event-to-event temporal precision. For example, a previous study examined the inter-onset intervals of piano duet keystrokes, but cross-correlation analysis failed to reveal leader-follower relationships, likely because these depend on aspects of joint performance involving longer time scales [25].

Body sway among co-actors might be a useful measurement of joint emotional expression. Body sway is a domain-general index for measuring real-time, real-world interpersonal coordination and information sharing. Relations between co-actors’ body sway have been associated with joint action performance in many domains, including engaging in motor coordination tasks [26,27], having a conversation [28,29,30], and music ensemble performance [25,31,32,33,34]. Specifically in music performance, it has been associated with melodic phrasing [35], suggesting it reflects the higher-order aspect of music performance, rather than lower-order note-to-note precision.

In a previous study, we experimentally manipulated leadership roles in a string quartet and examined the predictive relationships amongst the performers’ body sway movements [36]. Results showed that leaders’ body sway more strongly predicted other musicians’ body sway than did the body sway of followers, suggesting that body sway coupling reflects directional information flow. This effect was diminished, but still observed, even when musicians could not see each other, suggesting that body sway is, at least in part, a byproduct of psychological processes underlying the planning and production of music. This process is similar to how gestures during talking reflect thoughts and facilitate speech production, in addition to being directly communicative [37]. Furthermore, the total coupling strength in a quartet (averaged amount of total predictive movement across each pair of performers) positively correlated with performers’ self-ratings of performance quality, but it did not necessarily correlate with self-ratings of synchronization. This suggests that body sway coupling might reflect performance factors above and beyond interpersonal temporal precision (synchronization), and might reflect in part emotional expression.

Friday, January 18, 2019

Shrine of the Triceratops: A Graffiti Primer

Or: Indiana Jones and the Green Dinosaur, a Tale of Exploration, Deduction, and Interpretation in the Wilds of Jersey City


This is the first post I made about graffiti. It went up on The Valve on November 1, 2006, over 12 years ago. When I posted this I really didn't know what I was looking at. I didn't even know that this was a name...


...much less that the name was Joe. In time I figured out that it was by Japan Joe, not Jersey Joe, and I found someone who was present when Joe did it. All I knew was that, like Alice, I'd tumbled into something wonderful, something I had to investigate. Note also that I framed the piece with a quote from Sir Gawain and the Green Knight, a great medieval poem. That still seems right.

* * * * *

“Now, indeed,” said Gawain, “here is a wizardy waste,
And this an ugly oratory, still overgrown with weeds;
Well it befits that wight, warped and wrapped in green,
To hold his dark devotions here in the Devil's service!
Now, sure in my five wits I fear it is the foul fiend
Who here has led me far astray, the better to destroy me.

–Sir Gawain and the Green Knight

Though ignorance is not, as the saying implies, a stop on the highway to bliss, it has its uses. In my case, all-but-ignorance of the world of graffiti and hip-hop has allowed me to explore my immediate surroundings as though I were a child discovering neat things in the woods, like the abandoned electrical substation where Steve and I found the raccoon skeleton in one corner of a room, or the strange markings on Timmy's lawn that looked like landing tracks from a flying saucer.

If I had been well-informed about the current state of graffiti I would not have regarded the images I recently blundered into as objects of wonder. I would have known what, and perhaps even why, they were, and thought nothing more of them. Thus I would have been unable to see that I had found a shrine to the spirit of the triceratops. To me it would have just been a large and interesting painting (actually, a “piece”) in a strange location, strange because it is outdoors and thus unprotected, and hidden from public view as well. What sort of artist deliberately does good work in a place where no one will see it?

Tags, Throw-ups, and Pieces

The adventure started about a week ago [remember, I wrote this in November of 2006] when I decided to take some pictures of my neighborhood, Hamilton Park, roughly a third of a mile (as measured on Google Earth) from the Holland Tunnel in downtown Jersey City. It's mostly a residential neighborhood consisting of one-, two-, and three-family attached housing and small apartment buildings. But there are large warehouses nearby, a small abandoned rail yard, a small office building for the Port Authority of New York and New Jersey, and various signs and remnants of more substantial industrial use not so long ago. It's a gentrifying neighborhood where homeless people push their grocery carts on streets where Mazdas and Range Rovers are parked.

While walking the streets taking pictures of this and that I noticed “tags” (see Figure 1) on signs, sidewalks, walls, fire hydrants, dumpsters and other surfaces.

Figure 1: Two tags on the dumpster behind my apartment building.

Figure 2: Two “throw-ups” on the side of a building, Rime and Hael

I also saw some more elaborate graffiti of the kind known as “throw-ups” (see Figure 2) - though I didn't know the term when I started this adventure. They're generally, though not necessarily, larger than tags and have filled letters, with the outline and body in contrasting colors. They take more time to make than tags, but can still be thrown up rather quickly, as is necessary to avoid detection by the authorities.

I uploaded the first batch of photos into my computer and began examining and editing them in Photoshop. The more I looked at them, the more fascinated I became. I decided to roam the neighborhood looking for tags and other graffiti. I didn't know what I would find, but I had every reason to think there might be something interesting out there.

After all, this “piece” (Figure 3) - as they are called, from “masterpiece” - is on an embankment about 100 yards from my apartment building. Notice the lettering to the left and right and the strange long-nosed green creature in the center. Perhaps I would find one or two other pieces like it. I now know that such pieces are common enough, and have been so for years.

Figure 3: A piece on Jersey Avenue between 10th and 11th streets. It faces east toward the Holland Tunnel, the Hudson River, and Manhattan.

For that matter I had seen elaborately painted subway cars back in the 1970s. But I was only a visitor to New York at the time, and such graffiti were not a part of my world. I had read about elaborate graffiti, about competitions between “writers” (the term of art for those who make graffiti) and their “crews,” about graffiti in the high art world of Andy Warhol, about Keith Haring and Jean-Michel Basquiat. But I did not live with the work. I only visited it.

Wednesday, January 16, 2019

Sabine Hossenfelder thinks a bigger collider would be a poor investment

CERN is dreaming of a new and larger particle collider, called the Future Circular Collider (FCC). The cost would be in the low tens of billions (dollars or euros, makes little difference). Hossenfelder concludes:
... investment-wise, it would make more sense to put particle physics on a pause and reconsider it in, say, 20 years to see whether the situation has changed, either because new technologies have become available or because more concrete predictions for new physics have been made.

At current, other large-scale experiments would more reliably offer new insights into the foundations of physics. Anything that peers back into the early universe, such as big radio telescopes, for example, or anything that probes the properties of dark matter. There are also medium and small-scale experiments that tend to fall off the table if big collaborations eat up the bulk of money and attention. And that’s leaving aside that maybe we might be better off investing in other areas of science entirely.

Of course a blog post cannot replace a detailed cost-benefit assessment, so I cannot tell you what’s the best thing to invest in. I can, however, tell you that a bigger particle collider is one of the most expensive experiments you can think of, and we do not currently have a reason to think it would discover anything new. Ie, large cost, little benefit. That much is pretty clear.

No, I did not have dinner at the White House



On the vicissitudes of authors and intentions in literary criticism

John Farrell, Why Literature Professors Turned Against Authors – Or Did They?, Los Angeles Review of Books, 13 January 2019. The opening paragraphs:
SINCE THE 1940s among professors of literature, attributing significance to authors’ intentions has been taboo and déclassé. The phrase literary work, which implies a worker, has been replaced in scholarly practice — and in the classroom — by the clean, crisp syllable text, referring to nothing more than simple words on the page. Since these are all we have access to, the argument goes, speculations about what the author meant can only be a distraction. Thus, texts replaced authors as the privileged objects of scholarly knowledge, and the performance of critical operations on texts became essential to the scholar’s identity. In 1967, the French critic Roland Barthes tried to cement this arrangement by declaring once and for all the “Death of the Author,” adding literary creators to the long list of artifacts that have been dissolved in modernity’s skeptical acids. Authors, Barthes argued, have followed God, the heliocentric universe, and (he hoped) the middle class into oblivion. Michel Foucault soon added the category of “the human” to the list of soon-to-be-extinct species.

Barthes also saw a bright side in the death of the author: it signaled the “birth of the reader,” a new source of meaning for the text, which readers would provide themselves. But the inventive readers who could replace the author’s ingenuity with their own never actually materialized. Instead, scholarly readers, deprived of the author as the traditional source of meaning, adopted a battery of new theories to make sense of the orphaned text. So what Barthes’s clever slogan really fixed in place was the reign in literary studies of Theory-with-a-capital-T. Armed with various theoretical instruments — structuralism, psychoanalysis, Marxism, to name just a few — critics could now pierce the verbal surface of the text to find hidden meanings and purposes unknown to those who created them.

But authorship and authorial intention have proven not so easy to dispose of. The most superficial survey of literary studies will show that authors remain a constant point of reference. The texts upon which theoretically informed readers perform their operations continue for the most part to be edited with the authors’ intentions in mind, and scholars continue to have recourse to background information about authors’ artistic intentions, as revealed in public pronouncements, private papers, and letters, though they do so with ritual apologies for committing the “intentional fallacy.” Politically minded critics, of which there are many, cannot avoid authors and their intended projects. And this is just a hint of the author’s continuing presence. All the while, it goes without saying, scholars continue to insist on their own authorial privileges, highlighting the originality of their insights while duly recording their debts to others. They take the clarity and stability of meaning in their own works as desirable achievements while, in the works created by their subjects, these qualities are presumed to be threats to the freedom of the reader.

Fortunately or unfortunately, it is impossible to get rid of authors entirely because the signs that constitute language are arbitrarily chosen and have no significance apart from their use. The dictionary meanings of words are only potentially meaningful until they are actually employed in a context defined by the relation between author and audience. So how did it happen that professors of literature came to renounce authors and their intentions in favor of a way of thinking — or at least a way of talking — that is without historical precedent, has scant philosophical support, and is to most ordinary readers not only counterintuitive but practically incomprehensible?
Farrell then goes on to sketch out how that happened, beginning with the late 18th century. One thing that happened is that the stock of the author soared to impossible heights:
The elevation of the literary author as the great purveyor of experience had profound effects. Now the past history of literature could be read as the production of superior souls speaking from their own experience. In the minds of Victorian readers, for example, understanding the works of Shakespeare involved following the poet’s personal spiritual and psychological journey, beginning with the bravery of the early histories and the wit of the early comedies, turning in mid-career to the visceral disgust with life evinced in the great tragedies, and arriving, finally, at the high plane of detachment and acceptance that comes into view in the late romances. Not the cause of Hamlet’s suicidal musings but the cause of Shakespeare’s own disillusionment — that was the question that troubled the 19th century. This obsession with Shakespeare’s great soul was wonderfully mocked by James Joyce in the library chapter of Ulysses.

It was not only literary history that could be reinterpreted in the heroic manner. For the boldest advocates of Romantic imagination, all of history became comprehensible now through the biographies of the great men who made it. Poets like Homer, Virgil, Dante, and Milton were no longer spokesmen for their cultures but its creators; as Percy Shelley famously put it, poets were the “unacknowledged legislators of the world.”
And so we arrive at the late 19th and early 20th century:
So, to return to the “Death of the Author,” not only did authors have it coming; they largely enacted their own death by making the renunciation of meaning — or even speech — a privileged literary maneuver. They set themselves above the vulgar garrulity of traditional forms to pursue subtle but evanescent sensations in an almost priestly atmosphere. [...] So the author’s role in the creation of literary meaning suffered a long decline, partly because that role had been inflated and personalized beyond what was sustainable, partly because authors found value in the panache of renouncing it, and partly because critics welcomed the new sources of authority offered by Freudian, Marxist, and other modes of suspicious decoding. Up to this point, the dethroning of the author centered entirely on the relation between authorial psychology and the creation and value of literary works; it did not question that the author’s intentions played an important role in determining a work’s actual meaning.
And then came the intentional fallacy and the New Criticism:
New Criticism offered a standardized method for everyone — poets, students, and critics alike. Eliot called it the “lemon-squeezer school” of criticism. His grand, impersonal stance, which governed the tastes of a generation, had undoubtedly done a great deal to shape the detached attitude of criticism that emerged in the wake of “The Intentional Fallacy,” but his influence as a poet-legislator was also one of that article’s targets. Not only were Eliot’s critical judgments the expression of an unmistakably personal sensibility, but he had inadvertently stirred up trouble by adding his own notes to The Waste Land, the poem that otherwise offered the ideal object for New Critical decipherment. In order to short-circuit the poet’s attempt to control the reading of his own work, Wimsatt and Beardsley argued that the notes to The Waste Land should not be read as an independent source of insight into the author’s intention; instead, they should be judged like any other part of the composition — which amounts to transferring them, implicitly, from the purview of the literary author to that of the poetic speaker. Thus, rather than providing an undesirable clarification of its meaning, the notes were to be judged in terms of the internal drama of the poem itself. Few scholars of Eliot took this advice, showing once again the difficulty of abiding by the intentional taboo. [...]

In hindsight we can see that the long-term result of the trend Barthes called the “Death of the Author” was that meaning emigrated in all directions — to mere texts, to functions of texts like poetic speakers and implied authors, to the structures of language itself apart from speakers, to class and gender ideologies, to the unconscious, and to combinations of all of these, bypassing authors and their intentions. While following these various flights, critics have nonetheless continued to rely upon authorial intention in the editing and reading of texts, in the use of background materials, in the advocacy of political agendas, in the establishing of their own intellectual property, and in many other ways.
And in conclusion:
So why does it matter at this late date if literary scholars continue to reject the notion of intention in theory, given that they no longer avoid it in practice? Of the many reasons, I will note four.

First, the simple contradiction between theory and practice undermines the intellectual coherence of literary studies as a whole, cutting it off both from practitioners of other disciplines and from ordinary readers, including students in the classroom. In an age when the humanities struggle to justify their existence, this does not make that justification any easier.

Second, the removal of the author from the equation of literature, even if only in theory, facilitates the excessive recourse to hidden sources of meaning — linguistic, social, economic, and psychological. It gives license to habits of thought that resemble paranoia, or what Paul Ricoeur has called “the hermeneutics of suspicion.” Just as the New Critics feared the stability of meaning they associated with the reductive language of science, so critics on the left fear the stability of meaning they associate with the continuing power of metaphysics and tradition. Such paranoia is a poor antidote to naïveté. It puts critics in a position of superiority to their subjects, a position as unequal as the hero-worshipping stance of the 19th century, giving free rein to what E. P. Thompson memorably called “the enormous condescension of posterity.”

Third, the question regarding which kinds of authorial intention are relevant to which critical concerns is still a live and pressing one, as the case of Frankenstein suggests.

Fourth and finally, objectifying literary authors as mere functions of the text, or mere epiphenomena of language, is a radically dehumanizing way to treat them. For a discipline that is rightly concerned with recovering suppressed voices and with the ways in which all manner of people can be objectified, acquiescence to the objectification of authors is a temptation to be resisted. As Hegel pointed out long ago in his famous passage on masters and slaves, to degrade the humanity of others with whom we could be in conversation is to impoverish our own humanity.

Space as a framework for representing mental contents in the brain

Jordana Cepelewicz, The Brain Maps Out Ideas and Memories Like Spaces, Quanta Magazine, January 14, 2019. Opening paragraphs:
We humans have always experienced an odd — and oddly deep — connection between the mental worlds and physical worlds we inhabit, especially when it comes to memory. We’re good at remembering landmarks and settings, and if we give our memories a location for context, hanging on to them becomes easier. To remember long speeches, ancient Greek and Roman orators imagined wandering through “memory palaces” full of reminders. Modern memory contest champions still use that technique to “place” long lists of numbers, names and other pieces of information.

As the philosopher Immanuel Kant put it, the concept of space serves as the organizing principle by which we perceive and interpret the world, even in abstract ways. “Our language is riddled with spatial metaphors for reasoning, and for memory in general,” said Kim Stachenfeld, a neuroscientist at the British artificial intelligence company DeepMind.

In the past few decades, research has shown that for at least two of our faculties, memory and navigation, those metaphors may have a physical basis in the brain. A small seahorse-shaped structure, the hippocampus, is essential to both those functions, and evidence has started to suggest that the same coding scheme — a grid-based form of representation — may underlie them. Recent insights have prompted some researchers to propose that this same coding scheme can help us navigate other kinds of information, including sights, sounds and abstract concepts. The most ambitious suggestions even venture that these grid codes could be the key to understanding how the brain processes all details of general knowledge, perception and memory.
And so on and so forth:
This kind of grid network, or code, constructs a more intrinsic sense of space than the place cells do. While place cells provide a good means of navigating where there are landmarks and other meaningful locations to provide spatial information, grid cells provide a good means of navigating in the absence of such external cues. In fact, researchers think that grid cells are responsible for what’s known as path integration, the process by which a person can keep track of where she is in space — how far she has traveled from some starting point, and in which direction — while, say, blindfolded.

“The idea is that the grid code could therefore be some sort of metric or coordinate system,” said Jacob Bellmund, a cognitive neuroscientist affiliated with the Max Planck Institute in Leipzig and the Kavli Institute for Systems Neuroscience in Norway. “You can basically measure distances with this kind of code.” Moreover, because of how it works, that coding scheme can uniquely and efficiently represent a lot of information.

And not just that: Since the grid network is based on relative relations, it could, at least in theory, represent not only a lot of information but a lot of different types of information, too. “What the grid cell captures is the dynamic instantiation of the most stable solution of physics,” said György Buzsáki, a neuroscientist at New York University’s School of Medicine: “the hexagon.” Perhaps nature arrived at just such a solution to enable the brain to represent, using grid cells, any structured relationship, from maps of word meanings to maps of future plans.
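Path integration, mentioned above, is easy to sketch in code (my toy version; actual grid-cell models are far more involved): accumulate self-motion vectors, with no landmarks at all.

    import numpy as np

    # Dead reckoning: track position by summing per-step displacement
    # vectors given only speed and heading -- no external cues.
    def integrate_path(speeds, headings_rad, start=(0.0, 0.0)):
        pos = np.array(start, dtype=float)
        for v, theta in zip(speeds, headings_rad):
            pos += v * np.array([np.cos(theta), np.sin(theta)])
        return pos

    # Walk 3 m east, then 4 m north: end at (3, 4), 5 m from the start.
    end = integrate_path([3.0, 4.0], [0.0, np.pi / 2])
    print(end, np.linalg.norm(end))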
Still further on:
Some researchers are making even bolder claims. Jeff Hawkins, the founder of the machine intelligence company Numenta, leads a team that’s working on applying the grid code not just to explain the memory-related functions of the hippocampal region but to understand the entire neocortex — and with it, to explain all of cognition, and how we model every aspect of the world around us. According to his “thousand brains theory of intelligence,” he said, “the cortex is not just processing sensory input alone, but rather processing and applying it to a location.” When he first thought of the idea, and how grid cells might be facilitating it, he added, “I jumped out of my chair, I was so excited.”
Here's a Hawkins article:
Jeff Hawkins, Marcus Lewis, Mirko Klukas, Scott Purdy and Subutai Ahmad, A Framework for Intelligence and Cortical Function Based on Grid Cells in the Neocortex, Front. Neural Circuits, 11 January 2019, https://doi.org/10.3389/fncir.2018.00121.

How the neocortex works is a mystery. In this paper we propose a novel framework for understanding its function. Grid cells are neurons in the entorhinal cortex that represent the location of an animal in its environment. Recent evidence suggests that grid cell-like neurons may also be present in the neocortex. We propose that grid cells exist throughout the neocortex, in every region and in every cortical column. They define a location-based framework for how the neocortex functions. Whereas grid cells in the entorhinal cortex represent the location of one thing, the body relative to its environment, we propose that cortical grid cells simultaneously represent the location of many things. Cortical columns in somatosensory cortex track the location of tactile features relative to the object being touched and cortical columns in visual cortex track the location of visual features relative to the object being viewed. We propose that mechanisms in the entorhinal cortex and hippocampus that evolved for learning the structure of environments are now used by the neocortex to learn the structure of objects. Having a representation of location in each cortical column suggests mechanisms for how the neocortex represents object compositionality and object behaviors. It leads to the hypothesis that every part of the neocortex learns complete models of objects and that there are many models of each object distributed throughout the neocortex. The similarity of circuitry observed in all cortical regions is strong evidence that even high-level cognitive tasks are learned and represented in a location-based framework.

Tuesday, January 15, 2019

Tulsi Gabbard: Bolton on Iran must be shut down


Yeah, I know it may be creepy. But it's life, or death. Whatever. A cemetery. With a touch of red / life?

Some thoughts about Wikipedia

I subscribe to a listserv devoted to the digital humanities. Recently another subscriber asked us for our thoughts about Wikipedia. Here's my response.

* * * * *

I’ve got three core comments on Wikipedia: 1) I’ve been using it happily for years and am, for the most part, satisfied. 2) I think it’s important to note that it covers a much wider range of topics than traditional encyclopedias. 3) If I were teaching, I would probably have graduate students, and perhaps advanced undergraduates as well, involved in editing Wikipedia.

On the first point, Wikipedia is my default reference work on a wide range of topics (though not philosophy, where I first go to the Stanford Encyclopedia of Philosophy). This seems to be the case for many people. Depending on what I’m interested in at the moment I may consult other sources as well, some referenced in a Wikipedia article, others from a general search. I have seen Wikipedia used as a source in scholarly publications that have been peer reviewed, though I don’t know, offhand, whether or not I’ve done so in any of my publications in the academic literature. But I certainly reference Wikipedia in my blog posts and in the working papers derived from them.

Depending on this and that I may consult the “Talk” page for an article and/or its edit history as well, the former more likely than the latter. For example, I have a particular interest in computational linguistics. Wikipedia has an entry for computational linguistics, but also one for natural language processing (NLP). The last time I checked (several months ago) the “Talk” pages for both articles raised the issue of the relationship between the two articles. Should they in fact be consolidated into one article or is it best to leave them as two? How do we handle the historical relationship between the two? I have no particular opinion on that issue, but I can see that it’s an important issue. Sophisticated users of Wikipedia need to know that such issues exist. Such issues also exist in more traditional reference works, but there’s no way to know about them as there is no way to “look under the hood”, so to speak, to see how the entry came about.

I’ve written one Wikipedia entry from scratch, the one for David G. Hays, the computational linguist. I hesitated about writing the article as I’m a student of his and so can hardly claim to be an unbiased source. But, he was an important figure in the development of the discipline and there was no article about him. So I wrote one. I did that several years ago and so far no one has questioned the article (I haven’t checked it in a month or three). Now maybe that’s an indication that I did a good job, but I figure it’s just as likely an indication that few people are interested in the biography of a dead founder of a rapidly changing technical subject.

I also helped the late Tim Perper on some articles about manga and anime – pervasive in Japanese popular culture and important in the wider world as well. In particular, I'm thinking about the main entry for manga. Tim was an expert on manga, the sort of person you'd want to write the main article. Manga, however, is the kind of topic that attracts legions of enthusiastic fans and, alas, enthusiasm is not an adequate substitute for intellectual sophistication and wide-ranging knowledge and experience. So I got to see a bit of what are sometimes called "edit wars" in Wikipedia. In this case it was more like edit skirmishes. But it was annoying.

After all, anyone can become an editor at Wikipedia; there's no a priori test of knowledge. You just create an account and go to work on entries that interest you. An enthusiastic fan can question and countermand the judgement of an expert (like Tim Perper). If editing disputes become bad enough there are mechanisms for adjudicating them, though I don't know how good they are. For all I know, the entries for, say, Donald Trump and Alexandria Ocasio-Cortez are current battlegrounds. Maybe they're on lockdown because the fighting over them has been so intense. Or maybe everyone with a strong interest in those entries is in agreement. (Ha!)

On the second issue, breadth of coverage, would a traditional encyclopedia have an entry for manga? At this point, most likely yes (I don't really know, as I don't consult traditional reference works any more, except for the Stanford Encyclopedia of Philosophy and the Internet Encyclopedia of Philosophy). But not only does Wikipedia have an entry for manga, it also has entries for various genres of manga, important creators, and important titles. The same for anime. And film. And TV.

At the moment I'm watching "Battlestar Galactica" (the new millennium remake) and "Friday Night Lights", two very different TV series that are available for streaming. "Galactica" has a large fan base and an extensive set of Wikipedia articles, which includes a substantial entry for each episode in the four-year run as well as entries for the series as a whole, an entry that covers the characters, and one that covers the spacecraft. There may be more entries as well. Judging from Wikipedia entries, the fan base for "Friday Night Lights" is not so large. There is an entry for each season (of four), but no entries for individual episodes. But, just as the entry for the newer version of "Battlestar Galactica" links back to the original series (from the previous millennium), so the entry for "Friday Night Lights" links back to the movie and to the book on which the movie is based.

Beyond this, I note that I watch A LOT of streaming video, both movies and TV. And I frequently consult Wikipedia and other online resources. One observation I have is that plot summaries vary from very good to not very reliable. Writing good plot summaries is not easy. It may not require original thinking, but still, it's not easy. This is particularly true when you're dealing with an episode in an ongoing series that follows two or three strands of action. When you write the summary, do you summarize each strand of action in a single 'lump' or do you interleave the strands in the way they are presented in the episode? Offhand I'd prefer to see the latter, but I don't know what I'd think if I actually got that – nor have I kept notes on just how it's done in case after case after case (I've followed tens of them in the past decade or so).

Which brings me to the third point: if I were still teaching, I'd involve students in editing Wikipedia. I know that others have done this; I'm thinking in particular of feminists who are concerned about entries for women, though, alas, I can offer no citations. Still, I'm thinking that writing plot summaries for this, that, or the other would be a useful thing to do, and something within the capacities of graduate students and advanced undergraduates. Not only could they do it, but doing it would be a good way of teaching them to focus on just what happens in a story. But how would you do it?

For example, I'd like to see plot summaries for each episode of "Friday Night Lights". What kind of course would provide a rationale for doing that? Obviously a course devoted to the series. Would I want to teach such a course? I don't know. At the moment I've finished watching the first of four seasons; that's 22 episodes. I find it hard to justify teaching a course, at whatever level, devoted entirely to that series, though I have no trouble imagining a detailed discussion of each episode. But how do you discuss some 80 or 90 episodes of one TV series in a course with, say, 12 to 30 sessions? Does that make any kind of sense at all? And you can repeat the question for any number of TV series, anime series, whatever.

What about the Harry Potter novels, or Stephen King? Of course, one can dismiss these materials as mere popular culture. I’m not sure that is wise.

There’s some kind of opportunity here, but I’m not at all sure of what it is, in detail.

"Everything is subjective? – Really? Do we want to do down that rabbit hole?


No, I don't think we do, though it's all too 'ready at hand' for many humanists.

One thing we should do is read John Searle on objectivity and subjectivity. Alas, that's likely to make things a bit complicated. But the issue is an important one, so we should be willing to shoulder the complexity. See, e.g., these posts:

Monday, January 14, 2019

Some interesting throw-ups that no longer exist because the "canvas" on which they've been painted has been demolished [Jersey City]

On the primacy of music

Some background on the poet Dana Gioia, from the introduction to an interview in Image:
As chair of the National Endowment for the Arts (2003–2009), he created the largest programs in the endowment's history, several of which, including the Big Read, Operation Homecoming, and Poetry Out Loud, continue as major presences in American cultural life. For many years, Gioia served on Image's editorial advisory board, and he has been a guest lecturer for the Seattle Pacific University MFA program in creative writing. In 2010 he won the prestigious Laetare Medal from Notre Dame. Last year, he was appointed the Judge Widney Professor of Poetry and Public Culture at the University of Southern California—his first regular teaching post.
What he says about music:
Image: I once heard you say that if you could only have one art form, it would be music. Why?

Dana Gioia: I could give you reasons, but that would suggest that my response is rational. It isn’t. My choice of music is simply a deep emotional preference. I like the physicality of music. It is a strange art—not only profoundly beautiful, but also communal, portable, invisible, and repeatable. Its most common form is song, a universal human art that also includes poetry.

Image: As a young man, you intended to be a composer. What led to your discovery of poetry as your vocation?

DG: I started taking piano lessons at six, and I eventually also learned to play the clarinet and saxophone. During my teenage years, music was my ruling passion. At nineteen I went to Vienna to study music and German. But living abroad for the first time, I changed direction. I reluctantly realized that I lacked the passion to be a truly fine composer. I was also out of sympathy with the dull and academic twelve-tone aesthetic then still dominant. Meanwhile, I became fascinated with poetry. I found myself spending most of my time reading and writing. Poetry chose me. I couldn’t resist it.

Image: What does it mean to be a poet in a post-literate world? Or to be a librettist in an age where opera is a struggling art form?

DG: It doesn’t bother me much. I wasn’t drawn to poetry or opera because of their popularity. It was their beauty and excitement that drew me. Of course, I would like these arts to have larger audiences, but the value of an art isn’t in the size of its audience. It’s in the truth and splendor of its existence.

All that being said, let me observe that a post-print world is not a bad place for poetry. Poetry is an art that predates writing. It’s essentially an auditory art. A poet today has the potential to speak directly to an audience—through public readings, radio broadcasts, recordings, and the internet. Most people may not want to read poetry, but they do like to hear good poems recited well. I’ve always written mostly for the ear, and I find large and responsive audiences all over the country. The current cultural situation is tough on novelists and critics, but it isn’t all that bad for poets.

Image: Duke Ellington objected to his music being labeled jazz, since he just considered it music. This led me to wonder if you are bothered by the term “New Formalism” being applied to your poetry.

DG: I have never liked the term “New Formalism.” It was coined in the 1980s as a criticism of the new poetry being written by younger poets that employed rhyme, meter, and narrative. I understand the necessity of labels in a crowded and complex culture, but labels always entail an element of simplification, especially when the terms offer an easy dichotomy.

I have always written both in form and free verse. It seems self-evident to me that a poet should be free to use whatever techniques the poem demands. My work falls almost evenly into thirds—one third of it is written in free verse, one third in rhyme and meter, and one third in meter without rhyme. I do believe that all good art is in some sense formal. Every element in a work of art should contribute to its overall expressive effect. That is what form means. Whether the form is regular or irregular, symmetrical or asymmetrical is merely a means of achieving the necessary integrity of the work.
Do I go with music? Can't say, but obviously I'm sympathetic.

Fashion and art cycles

Peter Klimek, Robert Kreuzbauer, Stefan Thurner, Fashion and art cycles are driven by counter-dominance signals of elite competition: quantitative evidence from music styles, 10 Jan 2019, arXiv:1901.03114v1 [physics.soc-ph]
Abstract: Human symbol systems such as art and fashion styles emerge from complex social processes that govern the continuous re-organization of modern societies. They provide a signaling scheme that allows members of an elite to distinguish themselves from the rest of society. Efforts to understand the dynamics of art and fashion cycles have been based on 'bottom-up' and 'top-down' theories. According to 'top-down' theories, elite members signal their superior status by introducing new symbols (e.g., fashion styles), which are adopted by low-status groups. In response to this adoption, elite members would need to introduce new symbols to signal their status. According to many 'bottom-up' theories, style cycles evolve from lower classes and follow an essentially random pattern. We propose an alternative explanation based on counter-dominance signaling. There, elite members want others to imitate their symbols; changes only occur when outsider groups successfully challenge the elite by introducing signals that contrast those endorsed by the elite. We investigate these mechanisms using a dynamic network approach on data containing almost 8 million musical albums released between 1956 and 2015. The network systematically quantifies artistic similarities of competing musical styles and their changes over time. We formulate empirical tests for whether new symbols are introduced by current elite members (top-down), randomness (bottom-up) or by peripheral groups through counter-dominance signals. We find clear evidence that counter-dominance signaling drives changes in musical styles. This provides a quantitative, completely data-driven answer to a century-old debate about the nature of the underlying social dynamics of fashion cycles.
A note on their method:
Empirical tests are then needed to determine which model mechanism best describes the actual evolution of musical styles. To this end we developed a method to quantify musical styles by determining each style's typical instrumentation. From a dataset containing almost eight million albums that have been released since 1950, we extracted information about a user-created taxonomy of fifteen musical genres, 422 musical styles, and 570 different instruments. The instruments that are typically associated with a given genre (or style) were shown to be a suitable approximation to formally describe the characteristics of a style [29]. Therefore, the similarity between styles can be quantified through the similarity of their instrumentation. For instance, in Figure 1A we show an example of four different musical styles (blue circles) that are linked to five instruments (green squares). Here a link indicates that the instrument is (typically) featured in a release belonging to that style. The higher the overlap in instruments between two styles, the higher is their similarity and the thicker is the line that connects the styles in Figure 1A.
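The bookkeeping behind that similarity measure is simple enough to sketch. The excerpt doesn't spell out the exact formula, so the toy version below assumes Jaccard overlap of instrument sets; the styles and instruments are invented for illustration:

```python
# Toy version of the style-instrument network: each style is linked to the
# instruments it typically features, and two styles are compared by the
# overlap of their instrument sets. Jaccard overlap is an assumption here;
# the styles and instruments are invented for illustration.

STYLE_INSTRUMENTS = {
    "rockabilly": {"electric guitar", "upright bass", "drums"},
    "surf rock":  {"electric guitar", "electric bass", "drums"},
    "bebop":      {"saxophone", "trumpet", "piano", "upright bass", "drums"},
}

def jaccard(a, b):
    """Overlap of two instrument sets: |A intersect B| / |A union B|."""
    return len(a & b) / len(a | b)

def style_similarity(s1, s2):
    return jaccard(STYLE_INSTRUMENTS[s1], STYLE_INSTRUMENTS[s2])

print(style_similarity("rockabilly", "surf rock"))  # 0.5   (2 shared of 4 total)
print(style_similarity("rockabilly", "bebop"))      # 0.333 (2 shared of 6 total)
```

In the paper's terms, each dictionary key is one of the blue circles, each instrument a green square, and the Jaccard score plays the role of the line thickness in their Figure 1A.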