Wednesday, January 16, 2019

Sabine Hossenfelder thinks a bigger collider would be a poor investment

CERN is dreaming of a new and larger particle collider, called the Future Circular Collider (FCC). The cost would be in the low tens of billions (dollars or euros, it makes little difference). Hossenfelder concludes:
... investment-wise, it would make more sense to put particle physics on a pause and reconsider it in, say, 20 years to see whether the situation has changed, either because new technologies have become available or because more concrete predictions for new physics have been made.

At current, other large-scale experiments would more reliably offer new insights into the foundations of physics. Anything that peers back into the early universe, such as big radio telescopes, for example, or anything that probes the properties of dark matter. There are also medium and small-scale experiments that tend to fall off the table if big collaborations eat up the bulk of money and attention. And that’s leaving aside that maybe we might be better off investing in other areas of science entirely.

Of course a blog post cannot replace a detailed cost-benefit assessment, so I cannot tell you what’s the best thing to invest in. I can, however, tell you that a bigger particle collider is one of the most expensive experiments you can think of, and we do not currently have a reason to think it would discover anything new. Ie, large cost, little benefit. That much is pretty clear.

No, I did not have dinner at the White House



On the vicissitudes of authors and intentions in literary criticism

John Farrell, Why Literature Professors Turned Against Authors – Or Did They?, Los Angeles Review of Books, 13 January 2019. The opening paragraphs:
Since the 1940s among professors of literature, attributing significance to authors’ intentions has been taboo and déclassé. The phrase literary work, which implies a worker, has been replaced in scholarly practice — and in the classroom — by the clean, crisp syllable text, referring to nothing more than simple words on the page. Since these are all we have access to, the argument goes, speculations about what the author meant can only be a distraction. Thus, texts replaced authors as the privileged objects of scholarly knowledge, and the performance of critical operations on texts became essential to the scholar’s identity. In 1967, the French critic Roland Barthes tried to cement this arrangement by declaring once and for all the “Death of the Author,” adding literary creators to the long list of artifacts that have been dissolved in modernity’s skeptical acids. Authors, Barthes argued, have followed God, the heliocentric universe, and (he hoped) the middle class into oblivion. Michel Foucault soon added the category of “the human” to the list of soon-to-be-extinct species.

Barthes also saw a bright side in the death of the author: it signaled the “birth of the reader,” a new source of meaning for the text, which readers would provide themselves. But the inventive readers who could replace the author’s ingenuity with their own never actually materialized. Instead, scholarly readers, deprived of the author as the traditional source of meaning, adopted a battery of new theories to make sense of the orphaned text. So what Barthes’s clever slogan really fixed in place was the reign in literary studies of Theory-with-a-capital-T. Armed with various theoretical instruments — structuralism, psychoanalysis, Marxism, to name just a few — critics could now pierce the verbal surface of the text to find hidden meanings and purposes unknown to those who created them.

But authorship and authorial intention have proven not so easy to dispose of. The most superficial survey of literary studies will show that authors remain a constant point of reference. The texts upon which theoretically informed readers perform their operations continue for the most part to be edited with the authors’ intentions in mind, and scholars continue to have recourse to background information about authors’ artistic intentions, as revealed in public pronouncements, private papers, and letters, though they do so with ritual apologies for committing the “intentional fallacy.” Politically minded critics, of which there are many, cannot avoid authors and their intended projects. And this is just a hint of the author’s continuing presence. All the while, it goes without saying, scholars continue to insist on their own authorial privileges, highlighting the originality of their insights while duly recording their debts to others. They take the clarity and stability of meaning in their own works as desirable achievements while, in the works created by their subjects, these qualities are presumed to be threats to the freedom of the reader.

Fortunately or unfortunately, it is impossible to get rid of authors entirely because the signs that constitute language are arbitrarily chosen and have no significance apart from their use. The dictionary meanings of words are only potentially meaningful until they are actually employed in a context defined by the relation between author and audience. So how did it happen that professors of literature came to renounce authors and their intentions in favor of a way of thinking — or at least a way of talking — that is without historical precedent, has scant philosophical support, and is to most ordinary readers not only counterintuitive but practically incomprehensible?
Farrell then goes on to sketch out how that happened, beginning with the late 18th century. One thing that happened is that the stock of the author soared to impossible heights:
The elevation of the literary author as the great purveyor of experience had profound effects. Now the past history of literature could be read as the production of superior souls speaking from their own experience. In the minds of Victorian readers, for example, understanding the works of Shakespeare involved following the poet’s personal spiritual and psychological journey, beginning with the bravery of the early histories and the wit of the early comedies, turning in mid-career to the visceral disgust with life evinced in the great tragedies, and arriving, finally, at the high plane of detachment and acceptance that comes into view in the late romances. Not the cause of Hamlet’s suicidal musings but the cause of Shakespeare’s own disillusionment — that was the question that troubled the 19th century. This obsession with Shakespeare’s great soul was wonderfully mocked by James Joyce in the library chapter of Ulysses.

It was not only literary history that could be reinterpreted in the heroic manner. For the boldest advocates of Romantic imagination, all of history became comprehensible now through the biographies of the great men who made it. Poets like Homer, Virgil, Dante, and Milton were no longer spokesmen for their cultures but its creators; as Percy Shelley famously put it, poets were the “unacknowledged legislators of the world.”
And so we arrive at the late 19th and early 20th century:
So, to return to the “Death of the Author,” not only did authors have it coming; they largely enacted their own death by making the renunciation of meaning — or even speech — a privileged literary maneuver. They set themselves above the vulgar garrulity of traditional forms to pursue subtle but evanescent sensations in an almost priestly atmosphere. [...] So the author’s role in the creation of literary meaning suffered a long decline, partly because that role had been inflated and personalized beyond what was sustainable, partly because authors found value in the panache of renouncing it, and partly because critics welcomed the new sources of authority offered by Freudian, Marxist, and other modes of suspicious decoding. Up to this point, the dethroning of the author centered entirely on the relation between authorial psychology and the creation and value of literary works; it did not question that the author’s intentions played an important role in determining a work’s actual meaning.
And then came the intentional fallacy and the New Criticism:
New Criticism offered a standardized method for everyone — poets, students, and critics alike. Eliot called it the “lemon-squeezer school” of criticism. His grand, impersonal stance, which governed the tastes of a generation, had undoubtedly done a great deal to shape the detached attitude of criticism that emerged in the wake of “The Intentional Fallacy,” but his influence as a poet-legislator was also one of that article’s targets. Not only were Eliot’s critical judgments the expression of an unmistakably personal sensibility, but he had inadvertently stirred up trouble by adding his own notes to The Waste Land, the poem that otherwise offered the ideal object for New Critical decipherment. In order to short-circuit the poet’s attempt to control the reading of his own work, Wimsatt and Beardsley argued that the notes to The Waste Land should not be read as an independent source of insight into the author’s intention; instead, they should be judged like any other part of the composition — which amounts to transferring them, implicitly, from the purview of the literary author to that of the poetic speaker. Thus, rather than providing an undesirable clarification of its meaning, the notes were to be judged in terms of the internal drama of the poem itself. Few scholars of Eliot took this advice, showing once again the difficulty of abiding by the intentional taboo. [...]

In hindsight we can see that the long-term result of the trend Barthes called the “Death of the Author” was that meaning emigrated in all directions — to mere texts, to functions of texts like poetic speakers and implied authors, to the structures of language itself apart from speakers, to class and gender ideologies, to the unconscious, and to combinations of all of these, bypassing authors and their intentions. While following these various flights, critics have nonetheless continued to rely upon authorial intention in the editing and reading of texts, in the use of background materials, in the advocacy of political agendas, in the establishing of their own intellectual property, and in many other ways.
And in conclusion:
So why does it matter at this late date if literary scholars continue to reject the notion of intention in theory, given that they no longer avoid it in practice? Of the many reasons, I will note four.

First, the simple contradiction between theory and practice undermines the intellectual coherence of literary studies as a whole, cutting it off both from practitioners of other disciplines and from ordinary readers, including students in the classroom. In an age when the humanities struggle to justify their existence, this does not make that justification any easier.

Second, the removal of the author from the equation of literature, even if only in theory, facilitates the excessive recourse to hidden sources of meaning — linguistic, social, economic, and psychological. It gives license to habits of thought that resemble paranoia, or what Paul Ricoeur has called “the hermeneutics of suspicion.” Just as the New Critics feared the stability of meaning they associated with the reductive language of science, so critics on the left fear the stability of meaning they associate with the continuing power of metaphysics and tradition. Such paranoia is a poor antidote to naïveté. It puts critics in a position of superiority to their subjects, a position as unequal as the hero-worshipping stance of the 19th century, giving free rein to what E. P. Thompson memorably called “the enormous condescension of posterity.”

Third, the question regarding which kinds of authorial intention are relevant to which critical concerns is still a live and pressing one, as the case of Frankenstein suggests.

Fourth and finally, objectifying literary authors as mere functions of the text, or mere epiphenomena of language, is a radically dehumanizing way to treat them. For a discipline that is rightly concerned with recovering suppressed voices and with the ways in which all manner of people can be objectified, acquiescence to the objectification of authors is a temptation to be resisted. As Hegel pointed out long ago in his famous passage on masters and slaves, to degrade the humanity of others with whom we could be in conversation is to impoverish our own humanity.

Space as a framework for representing mental contents in the brain

Jordana Cepelewicz, The Brain Maps Out Ideas and Memories Like Spaces, Quanta Magazine, January 14, 2019. Opening paragraphs:
We humans have always experienced an odd — and oddly deep — connection between the mental worlds and physical worlds we inhabit, especially when it comes to memory. We’re good at remembering landmarks and settings, and if we give our memories a location for context, hanging on to them becomes easier. To remember long speeches, ancient Greek and Roman orators imagined wandering through “memory palaces” full of reminders. Modern memory contest champions still use that technique to “place” long lists of numbers, names and other pieces of information.

As the philosopher Immanuel Kant put it, the concept of space serves as the organizing principle by which we perceive and interpret the world, even in abstract ways. “Our language is riddled with spatial metaphors for reasoning, and for memory in general,” said Kim Stachenfeld, a neuroscientist at the British artificial intelligence company DeepMind.

In the past few decades, research has shown that for at least two of our faculties, memory and navigation, those metaphors may have a physical basis in the brain. A small seahorse-shaped structure, the hippocampus, is essential to both those functions, and evidence has started to suggest that the same coding scheme — a grid-based form of representation — may underlie them. Recent insights have prompted some researchers to propose that this same coding scheme can help us navigate other kinds of information, including sights, sounds and abstract concepts. The most ambitious suggestions even venture that these grid codes could be the key to understanding how the brain processes all details of general knowledge, perception and memory.
And so on and so forth:
This kind of grid network, or code, constructs a more intrinsic sense of space than the place cells do. While place cells provide a good means of navigating where there are landmarks and other meaningful locations to provide spatial information, grid cells provide a good means of navigating in the absence of such external cues. In fact, researchers think that grid cells are responsible for what’s known as path integration, the process by which a person can keep track of where she is in space — how far she has traveled from some starting point, and in which direction — while, say, blindfolded.

“The idea is that the grid code could therefore be some sort of metric or coordinate system,” said Jacob Bellmund, a cognitive neuroscientist affiliated with the Max Planck Institute in Leipzig and the Kavli Institute for Systems Neuroscience in Norway. “You can basically measure distances with this kind of code.” Moreover, because of how it works, that coding scheme can uniquely and efficiently represent a lot of information.

And not just that: Since the grid network is based on relative relations, it could, at least in theory, represent not only a lot of information but a lot of different types of information, too. “What the grid cell captures is the dynamic instantiation of the most stable solution of physics,” said György Buzsáki, a neuroscientist at New York University’s School of Medicine: “the hexagon.” Perhaps nature arrived at just such a solution to enable the brain to represent, using grid cells, any structured relationship, from maps of word meanings to maps of future plans.
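Path integration itself is a simple computation to state, whatever the neural machinery that carries it out. Here is a minimal dead-reckoning sketch in Python – purely illustrative, not a model of grid cells – that accumulates a sequence of (heading, distance) steps into an estimate of displacement from the starting point:

    import math

    def integrate_path(steps):
        # Dead-reckoning path integration: accumulate (heading_deg, distance)
        # steps into an estimated (x, y) displacement from the start point.
        x, y = 0.0, 0.0
        for heading_deg, distance in steps:
            theta = math.radians(heading_deg)
            x += distance * math.cos(theta)
            y += distance * math.sin(theta)
        return x, y

    # Walk 3 m east, then 4 m north; the straight-line displacement is ~5 m.
    dx, dy = integrate_path([(0, 3.0), (90, 4.0)])
    print(round(math.hypot(dx, dy), 2))  # 5.0

The blindfolded walker in the passage above is, in effect, running this computation continuously, with noisy estimates of heading and distance in place of exact numbers.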
Still further on:
Some researchers are making even bolder claims. Jeff Hawkins, the founder of the machine intelligence company Numenta, leads a team that’s working on applying the grid code not just to explain the memory-related functions of the hippocampal region but to understand the entire neocortex — and with it, to explain all of cognition, and how we model every aspect of the world around us. According to his “thousand brains theory of intelligence,” he said, “the cortex is not just processing sensory input alone, but rather processing and applying it to a location.” When he first thought of the idea, and how grid cells might be facilitating it, he added, “I jumped out of my chair, I was so excited.”
Here's a Hawkins article:
Jeff Hawkins, Marcus Lewis, Mirko Klukas, Scott Purdy and Subutai Ahmad, A Framework for Intelligence and Cortical Function Based on Grid Cells in the Neocortex, Front. Neural Circuits, 11 January 2019, https://doi.org/10.3389/fncir.2018.00121.

How the neocortex works is a mystery. In this paper we propose a novel framework for understanding its function. Grid cells are neurons in the entorhinal cortex that represent the location of an animal in its environment. Recent evidence suggests that grid cell-like neurons may also be present in the neocortex. We propose that grid cells exist throughout the neocortex, in every region and in every cortical column. They define a location-based framework for how the neocortex functions. Whereas grid cells in the entorhinal cortex represent the location of one thing, the body relative to its environment, we propose that cortical grid cells simultaneously represent the location of many things. Cortical columns in somatosensory cortex track the location of tactile features relative to the object being touched and cortical columns in visual cortex track the location of visual features relative to the object being viewed. We propose that mechanisms in the entorhinal cortex and hippocampus that evolved for learning the structure of environments are now used by the neocortex to learn the structure of objects. Having a representation of location in each cortical column suggests mechanisms for how the neocortex represents object compositionality and object behaviors. It leads to the hypothesis that every part of the neocortex learns complete models of objects and that there are many models of each object distributed throughout the neocortex. The similarity of circuitry observed in all cortical regions is strong evidence that even high-level cognitive tasks are learned and represented in a location-based framework.

Tuesday, January 15, 2019

Tulsi Gabbard: Bolton on Iran must be shut down


Yeah, I know it may be creepy. But it's life, or death. Whatever. A cemetery. With a touch of red / life?

Some thoughts about Wikipedia

I subscribe to a listserve devoted to the digital humanities. Recently another subscriber asked us for our thoughts about Wikipedia. Here's my response.

* * * * *

I’ve got three core comments on Wikipedia: 1) I’ve been using it happily for years and am, for the most part, satisfied. 2) I think it’s important to note that it covers a much wider range of topics than traditional encyclopedias. 3) If I were teaching, I would probably have graduate students, and perhaps advanced undergraduates as well, involved in editing Wikipedia.

On the first point, Wikipedia is my default reference work on a wide range of topics (though not philosophy, where I first go to the Stanford Encyclopedia of Philosophy). This seems to be the case for many people. Depending on what I’m interested in at the moment I may consult other sources as well, some referenced in a Wikipedia article, others from a general search. I have seen Wikipedia used as a source in scholarly publications that have been peer reviewed, though I don’t know, off hand, whether or not I’ve done so in any of my publications in the academic literature. But I certainly reference Wikipedia in my blog posts and in the working papers derived from them.

Depending on this and that I may consult the “Talk” page for an article and/or its edit history as well, the former more likely than the latter. For example, I have a particular interest in computational linguistics. Wikipedia has an entry for computational linguistics, but also one for natural language processing (NLP). The last time I checked (several months ago) the “Talk” pages for both articles raised the issue of the relationship between the two articles. Should they in fact be consolidated into one article or is it best to leave them as two? How do we handle the historical relationship between the two? I have no particular opinion on that issue, but I can see that it’s an important issue. Sophisticated users of Wikipedia need to know that such issues exist. Such issues also exist in more traditional reference works, but there’s no way to know about them as there is no way to “look under the hood”, so to speak, to see how the entry came about.

I’ve written one Wikipedia entry from scratch, the one for David G. Hays, the computational linguist. I hesitated about writing the article as I’m a student of his and so can hardly claim to be an unbiased source. But, he was an important figure in the development of the discipline and there was no article about him. So I wrote one. I did that several years ago and so far no one has questioned the article (I haven’t checked it in a month or three). Now maybe that’s an indication that I did a good job, but I figure it’s just as likely an indication that few people are interested in the biography of a dead founder of a rapidly changing technical subject.

I also helped the late Tim Perper on some articles about manga and anime – pervasive in Japanese popular culture and important in the wider world as well. In particular, I’m thinking about the main entry for manga. Tim was an expert on manga, the sort of person you’d want to write the main article. Manga, however, is the kind of topic that attracts legions of enthusiastic fans and, alas, enthusiasm is not an adequate substitute for intellectual sophistication and wide-ranging knowledge and experience. So I got to see a bit of what’s sometimes called “edit wars” in Wikipedia. In this case it was more like edit skirmishes. But it was annoying.

After all, anyone can become an editor at Wikipedia; there’s no a priori test of knowledge. You just create an account and go to work on entries that interest you. An enthusiastic fan can question and countermand the judgement of an expert (like Tim Perper). If editing disputes become bad enough there are mechanisms for adjudicating them, though I don’t know how good they are. For all I know, the entries for, say, Donald Trump and Alexandria Ocasio-Cortez are current battlegrounds. Maybe they’re on lockdown because the fighting over the entries has been so intense. Or maybe everyone with a strong interest in those entries is in agreement. (Ha!)

On the second issue, breadth of coverage, would a traditional encyclopedia have an entry for manga? At this point, most likely yes (I don’t really know, as I don’t consult traditional reference works any more, except for the Stanford Encyclopedia of Philosophy and the Internet Encyclopedia of Philosophy). But not only does Wikipedia have an entry for manga, it also has entries for various genres of manga, important creators, and important titles. The same for anime. And film. And TV.

At the moment I’m watching “Battlestar Galactica” (the new millennium remake) and “Friday Night Lights”, two very different TV series that are available for streaming. “Galactica” has a large fan base and an extensive set of Wikipedia articles, which includes a substantial entry for each episode in the four-year run as well as entries for the series as a whole, an entry that covers the characters, and one that covers the spacecraft. There may be more entries as well. Judging from Wikipedia entries, the fan base for “Friday Night Lights” is not so large. There is an entry for each season (of four), but not entries for individual episodes. But, just as the entry for the newer version of “Battlestar Galactica” links back to the original series (from the previous millennium), so the entry for “Friday Night Lights” links back to the movie and to the book on which the movie is based.

Beyond this, I note that I watch A LOT of streaming video, both movies and TV. And I frequently consult Wikipedia and other online resources. One observation I have is that plot summaries vary from very good to not very reliable. Writing good plot summaries is not easy. It may not require original thinking, but still, it’s not easy. This is particularly true when you’re dealing with an episode in an ongoing series that follows two or three strands of action. When you write the summary, do you summarize each strand of action in a single ‘lump’ or do you interleave the strands in the way they are presented in the episode? Offhand I’d prefer to see the latter, but I don’t know what I’d think if I actually got that – nor have I kept notes on just how it’s done in case after case after case (I’ve followed tens of them in the past decade or so).

Which brings me to the third point: if I were still teaching I’d involve students in editing Wikipedia. I know that others have done this; I’m thinking in particular of feminists who are concerned about entries for women, though, alas, I can offer no citations. Still, I’m thinking that writing plot summaries for this, that, or the other would be a useful thing to do, and something within the capacities of graduate students and advanced undergraduates. Not only could they do it, but doing it would be a good way of teaching them to focus on just what happens in a story. But how would you do it?

For example, I’d like to see plot summaries for each episode of “Friday Night Lights”. What kind of course would provide a rationale for doing that? Obviously a course devoted to the series. Would I want to teach such a course? I don’t know. At the moment I’ve finished watching the first of four seasons; that’s 22 episodes. I find it hard to justify teaching a course, at whatever level, devoted entirely to that series, though I have no trouble imagining a detailed discussion of each episode. But how do you discuss some 80 or 90 episodes of one TV series in a course with, say, 12 to 30 sessions? Does that make any kind of sense at all? And you can repeat the question for any number of TV series, anime series, whatever?

What about the Harry Potter novels, or Stephen King? Of course, one can dismiss these materials as mere popular culture. I’m not sure that is wise.

There’s some kind of opportunity here, but I’m not at all sure of what it is, in detail.

"Everything is subjective? – Really? Do we want to do down that rabbit hole?


No, I don't think we do, though it's all too 'ready at hand' for many humanists.

One thing we should do is read John Searle on objectivity and subjectivity. Alas, that's likely to make things a bit complicated. But the issue is an important one, so we should be willing to shoulder the complexity. See, e.g., these posts:

Monday, January 14, 2019

Some interesting throw-ups that no longer exist because the "canvas" on which they've been painted has been demolished [Jersey City]

On the primacy of music

The poet Dana Gioia, interviewed in Image. From the introduction to the interview:
As chair of the National Endowment for the Arts (2003–2009), he created the largest programs in the endowment’s history, several of which, including the Big Read, Operation Homecoming, and Poetry Out Loud, continue as major presences in American cultural life. For many years, Gioia served on Image’s editorial advisory board, and he has been a guest lecturer for the Seattle Pacific University MFA program in creative writing. In 2010 he won the prestigious Laetare Medal from Notre Dame. Last year, he was appointed the Judge Widney Professor of Poetry and Public Culture at the University of Southern California—his first regular teaching post.
What he says about music:
Image: I once heard you say that if you could only have one art form, it would be music. Why?

Dana Gioia: I could give you reasons, but that would suggest that my response is rational. It isn’t. My choice of music is simply a deep emotional preference. I like the physicality of music. It is a strange art—not only profoundly beautiful, but also communal, portable, invisible, and repeatable. Its most common form is song, a universal human art that also includes poetry.

Image: As a young man, you intended to be a composer. What led to your discovery of poetry as your vocation?

DG: I started taking piano lessons at six, and I eventually also learned to play the clarinet and saxophone. During my teenage years, music was my ruling passion. At nineteen I went to Vienna to study music and German. But living abroad for the first time, I changed direction. I reluctantly realized that I lacked the passion to be a truly fine composer. I was also out of sympathy with the dull and academic twelve-tone aesthetic then still dominant. Meanwhile, I became fascinated with poetry. I found myself spending most of my time reading and writing. Poetry chose me. I couldn’t resist it.

Image: What does it mean to be a poet in a post-literate world? Or to be a librettist in an age where opera is a struggling art form?

DG: It doesn’t bother me much. I wasn’t drawn to poetry or opera because of their popularity. It was their beauty and excitement that drew me. Of course, I would like these arts to have larger audiences, but the value of an art isn’t in the size of its audience. It’s in the truth and splendor of its existence.

All that being said, let me observe that a post-print world is not a bad place for poetry. Poetry is an art that predates writing. It’s essentially an auditory art. A poet today has the potential to speak directly to an audience—through public readings, radio broadcasts, recordings, and the internet. Most people may not want to read poetry, but they do like to hear good poems recited well. I’ve always written mostly for the ear, and I find large and responsive audiences all over the country. The current cultural situation is tough on novelists and critics, but it isn’t all that bad for poets.

Image: Duke Ellington objected to his music being labeled jazz, since he just considered it music. This led me to wonder if you are bothered by the term “New Formalism” being applied to your poetry.

DG: I have never liked the term “New Formalism.” It was coined in the 1980s as a criticism of the new poetry being written by younger poets that employed rhyme, meter, and narrative. I understand the necessity of labels in a crowded and complex culture, but labels always entail an element of simplification, especially when the terms offer an easy dichotomy.

I have always written both in form and free verse. It seems self-evident to me that a poet should be free to use whatever techniques the poem demands. My work falls almost evenly into thirds—one third of it is written in free verse, one third in rhyme and meter, and one third in meter without rhyme. I do believe that all good art is in some sense formal. Every element in a work of art should contribute to its overall expressive effect. That is what form means. Whether the form is regular or irregular, symmetrical or asymmetrical is merely a means of achieving the necessary integrity of the work.
Do I go with music? Can't say, but obviously I'm sympathetic.

Fashion and art cycles

Peter Klimek, Robert Kreuzbauer, Stefan Thurner, Fashion and art cycles are driven by counter-dominance signals of elite competition: quantitative evidence from music styles, 10 Jan 2019, arXiv:1901.03114v1 [physics.soc-ph]
Abstract: Human symbol systems such as art and fashion styles emerge from complex social processes that govern the continuous re-organization of modern societies. They provide a signaling scheme that allows members of an elite to distinguish themselves from the rest of society. Efforts to understand the dynamics of art and fashion cycles have been based on 'bottom-up' and 'top down' theories. According to 'top down' theories, elite members signal their superior status by introducing new symbols (e.g., fashion styles), which are adopted by low-status groups. In response to this adoption, elite members would need to introduce new symbols to signal their status. According to many 'bottom-up' theories, style cycles evolve from lower classes and follow an essentially random pattern. We propose an alternative explanation based on counter-dominance signaling. There, elite members want others to imitate their symbols; changes only occur when outsider groups successfully challenge the elite by introducing signals that contrast those endorsed by the elite. We investigate these mechanisms using a dynamic network approach on data containing almost 8 million musical albums released between 1956 and 2015. The network systematically quantifies artistic similarities of competing musical styles and their changes over time. We formulate empirical tests for whether new symbols are introduced by current elite members (top-down), randomness (bottom-up) or by peripheral groups through counter-dominance signals. We find clear evidence that counter-dominance-signaling drives changes in musical styles. This provides a quantitative, completely data-driven answer to a century-old debate about the nature of the underlying social dynamics of fashion cycles.
A note on their method:
Empirical tests are then needed to determine which model mechanism best describes the actual evolution of musical styles. To this end we developed a method to quantify musical styles by determining each style’s typical instrumentation. From a dataset containing almost eight million albums that have been released since 1950, we extracted information about a user-created taxonomy of fifteen musical genres, 422 musical styles, and 570 different instruments. The instruments that are typically associated with a given genre (or style) were shown to be a suitable approximation to formally describe the characteristics of a style [29]. Therefore, the similarity between styles can be quantified through the similarity of their instrumentation. For instance, in Figure 1A we show an example of four different musical styles (blue circles) that are linked to five instruments (green squares). Here a link indicates that the instrument is (typically) featured in a release belonging to that style. The higher the overlap in instruments between two styles, the higher is their similarity and the thicker is the line that connects the styles in Figure 1A.
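The overlap measure itself is easy to sketch. Assuming each style is reduced to the set of instruments typically featured in its releases, one natural way to score the overlap the authors describe is a Jaccard index – shared instruments divided by all instruments either style uses. The paper's exact weighting may differ, and the instrument sets below are invented for illustration:

    def style_similarity(instruments_a, instruments_b):
        # Similarity of two musical styles as the overlap of their typical
        # instrumentation (Jaccard index: shared instruments divided by all
        # instruments used by either style).
        a, b = set(instruments_a), set(instruments_b)
        if not (a or b):
            return 0.0
        return len(a & b) / len(a | b)

    # Toy example with invented instrumentation for two styles:
    surf_rock = {"electric guitar", "bass", "drums", "organ"}
    garage_rock = {"electric guitar", "bass", "drums", "harmonica"}
    print(style_similarity(surf_rock, garage_rock))  # 0.6

On a measure like this, styles whose releases draw on the same pool of instruments score near 1 and styles with disjoint instrumentation score near 0, which is what lets the authors track how close competing styles are and how that distance changes over time.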

Sunday, January 13, 2019

From the diary of a (graffiti) writer in Moscow

Monday

11 a.m. Walk to Red Square, passing by the Kremlin and St. Basil’s Cathedral. Then to the new Zaryadye Park, designed by a New York-based architecture firm. It’s a totally different design aesthetic than the rest of Moscow.

1:30 p.m. Meet with local graffiti artist Cozek, who I consider to have the best style in Moscow. His crew, ADED (All Day Every Day), has been tapped for a collaboration with the fashion label Off-White that’s scheduled to release next week at KM20, a fashion-forward shop in town. We discuss how artists can leverage working with designer brands to benefit their careers. Cozek also has a collaboration with a furniture company debuting next week at the Cosmoscow art fair, and he was hired as the curator for Social Club, a new restaurant and private club opening next week in Patriarch Ponds. He wants to commission me to paint a mural at the restaurant while I’m in town.

2:30 p.m. Cozek gives me a tour of the space. The venue is beautifully designed and he invites me to choose any wall I want. All of the walls are exposed concrete, and if you paint it, there’s no going back. I seem more concerned about that than he does. I’m drawn to a horizontal wall that would be perfect for my work, but also recognize I have an 80-foot mural to paint and have my return flight scheduled for the end of the week. It would be great real estate, as this place will cater to Moscow society, but I’m reluctant to bite off more than I can chew with my limited time in town. [...]

Tuesday

12 p.m. Return to Winzavod to work on the mural. Many of today’s well-known street artists travel with an assistant, if not a team, to help bring their vision to life. Some artists are hands-on, while others don’t even touch the wall themselves. You can call me a perfectionist, or perhaps a masochist, but I typically travel alone, and create my works solely with my own two hands from start to finish. Which, admittedly, is not always most efficient. [...]

Wednesday

10 a.m. After breakfast, head to Winzavod intent on finishing my mural. After all of the letters are filled in, I repaint the background with a fresh coat of black, cleaning up all of the over-spray and dust that accumulated on the wall over the past few days. Once that’s done, it takes hours to refine the edges of the letters, pushing and pulling lines a quarter of an inch — making straight lines straighter and freehanding curves that could easily be mistaken for computer vectors.

Modern art [FDR]

Saturday, January 12, 2019

What do her Democratic colleagues in the House think of AOC?

Obviously, some don't like her anti-establishment ways. From Politico:
Democratic leaders are upset that she railed against their new set of House rules on Twitter the first week of the new Congress. Rank and file are peeved that there’s a grassroots movement to try to win her a top committee post they feel she doesn’t deserve.

Even some progressives who admire AOC, as she’s nicknamed, told POLITICO that they worry she’s not using her notoriety effectively.

“She needs to decide: Does she want to be an effective legislator or just continue being a Twitter star?” said one House Democrat who’s in lockstep with Ocasio-Cortez’s ideology. “There’s a difference between being an activist and a lawmaker in Congress.”

It’s an open question whether Ocasio-Cortez can be checked. She’s barely been in Congress a week and is better known than almost any other House member other than Nancy Pelosi and John Lewis. A media throng follows her every move, and she can command a national audience practically at will.

None of that came playing by the usual rules: Indeed, Ocasio-Cortez’s willingness to take on her party establishment with unconventional guerrilla tactics is what got her here. It’s earned her icon status on the progressive left, it’s where the 29-year-old freshman derives her power — and, by every indication, it’s how she thinks she can pull the Democratic Party in her direction.
See this twitter thread for commentary on that article:

Come on in



AI in China

Read the whole thread.