Thursday, January 31, 2019

The origins of the Anthropocene and of the modern world in the post-1492 death of indigenous peoples in the Americas

Alexander Koch, Chris Brierley, Mark M. Maslin, Simon L. Lewis, Earth system impacts of the European arrival and Great Dying in the Americas after 1492, Quaternary Science Reviews, Volume 207, 1 March 2019, Pages 13-36, https://doi.org/10.1016/j.quascirev.2018.12.004.
Highlights
  • Combines multiple methods estimating pre-Columbian population numbers.
  • Estimates European arrival in 1492 led to 56 million deaths by 1600.
  • Large population reduction led to reforestation of 55.8 Mha and 7.4 Pg C uptake.
  • 1610 atmospheric CO2 drop partly caused by indigenous depopulation of the Americas.
  • Humans contributed to Earth System changes before the Industrial Revolution.
Abstract: Human impacts prior to the Industrial Revolution are not well constrained. We investigate whether the decline in global atmospheric CO2 concentration by 7–10 ppm in the late 1500s and early 1600s which globally lowered surface air temperatures by 0.15°C, were generated by natural forcing or were a result of the large-scale depopulation of the Americas after European arrival, subsequent land use change and secondary succession. We quantitatively review the evidence for (i) the pre-Columbian population size, (ii) their per capita land use, (iii) the post-1492 population loss, (iv) the resulting carbon uptake of the abandoned anthropogenic landscapes, and then compare these to potential natural drivers of global carbon declines of 7–10 ppm. From 119 published regional population estimates we calculate a pre-1492 CE population of 60.5 million (interquartile range, IQR 44.8–78.2 million), utilizing 1.04 ha land per capita (IQR 0.98–1.11). European epidemics removed 90% (IQR 87–92%) of the indigenous population over the next century. This resulted in secondary succession of 55.8 Mha (IQR 39.0–78.4 Mha) of abandoned land, sequestering 7.4 Pg C (IQR 4.9–10.8 Pg C), equivalent to a decline in atmospheric CO2 of 3.5 ppm (IQR 2.3–5.1 ppm CO2). Accounting for carbon cycle feedbacks plus LUC outside the Americas gives a total 5 ppm CO2 additional uptake into the land surface in the 1500s compared to the 1400s, 47–67% of the atmospheric CO2 decline. Furthermore, we show that the global carbon budget of the 1500s cannot be balanced until large-scale vegetation regeneration in the Americas is included. The Great Dying of the Indigenous Peoples of the Americas resulted in a human-driven global impact on the Earth System in the two centuries prior to the Industrial Revolution.
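A quick arithmetic check on those numbers: the standard conversion is roughly 2.12 Pg C per ppm of atmospheric CO2. A minimal sketch, assuming that textbook factor (the paper may use a slightly different value plus full carbon-cycle feedbacks):

```python
# Back-of-the-envelope check: convert the paper's carbon uptake estimates
# (Pg C) into an equivalent atmospheric CO2 drop (ppm), using the textbook
# conversion of ~2.12 Pg C per ppm of atmospheric CO2.
PG_C_PER_PPM = 2.12

for label, uptake_pg_c in [("median", 7.4), ("IQR low", 4.9), ("IQR high", 10.8)]:
    print(f"{label}: {uptake_pg_c} Pg C ~ {uptake_pg_c / PG_C_PER_PPM:.1f} ppm CO2")

# median: 7.4 Pg C ~ 3.5 ppm CO2 -- matching the abstract's 3.5 ppm (IQR 2.3-5.1)
```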

Tuesday, January 29, 2019

Invariance principles of brain connectivity


Monday, January 28, 2019

And this is progress in computing?


Ramiforms in white and green


Group Minds at Wikipedia?


Simon DeDeo

(Submitted on 8 Jul 2014)

Abstract: Group-level cognitive states are widely observed in human social systems, but their discussion is often ruled out a priori in quantitative approaches. In this paper, we show how reference to the irreducible mental states and psychological dynamics of a group is necessary to make sense of large scale social phenomena. We introduce the problem of mental boundaries by reference to a classic problem in the evolution of cooperation. We then provide an explicit quantitative example drawn from ongoing work on cooperation and conflict among Wikipedia editors. We show the limitations of methodological individualism, and the substantial benefits that come from being able to refer to collective intentions and attributions of cognitive states of the form "what the group believes" and "what the group values".

Comments: 18 pages, 4 figures
Cite as: arXiv:1407.2210 [q-bio.NC]
(or arXiv:1407.2210v1 [q-bio.NC] for this version)

* * * * *


Collective Phenomena and Non-Finite State Computation in a Human Social System

Simon DeDeo

PLoS ONE 9(6): e101511.
doi: 10.1371/journal.pone.0101511

Abstract: We investigate the computational structure of a paradigmatic example of distributed social interaction: that of the open-source Wikipedia community. We examine the statistical properties of its cooperative behavior, and perform model selection to determine whether this aspect of the system can be described by a finite-state process, or whether reference to an effectively unbounded resource allows for a more parsimonious description. We find strong evidence, in a majority of the most-edited pages, in favor of a collective-state model, where the probability of a “revert” action declines as the square root of the number of non-revert actions seen since the last revert. We provide evidence that the emergence of this social counter is driven by collective interaction effects, rather than properties of individual users.
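To make the square-root "social counter" concrete, here is a toy simulation. The functional form follows the abstract's description, but the constant and reset rule are my own illustrative assumptions, not DeDeo's fitted model:

```python
import random

def simulate_editing(n_edits=100_000, c=0.5, seed=1):
    """Toy collective-state process: the probability of a revert falls off
    as the inverse square root of the number of non-revert edits seen
    since the last revert (c is an arbitrary illustrative constant)."""
    rng = random.Random(seed)
    since_last_revert = 0
    reverts = 0
    for _ in range(n_edits):
        p_revert = min(1.0, c / (since_last_revert + 1) ** 0.5)
        if rng.random() < p_revert:
            reverts += 1
            since_last_revert = 0   # the collective counter resets on a revert
        else:
            since_last_revert += 1
    return reverts

print(simulate_editing())  # total reverts under the toy model
```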

* * * * *

Tuesday, January 22, 2019

Formal limits on machine learning

Ashutosh Jogalekar, Open Borders, 3 Quarks Daily, Jan 21, 2019:
The continuum hypothesis is related to two different kinds of infinities found in mathematics. When I first heard the fact that infinities can actually be compared, it was as if someone had cracked my mind open by planting a firecracker inside it. There is the first kind of infinity, the “countable infinity”, which is defined as an infinite set that maps one-to-one with the set of natural numbers. Then there’s the second kind of infinity, the “uncountable infinity”, a gnarled forest of limitless complexity, defined as an infinity that cannot be so mapped. Real numbers are an example of such an uncountable infinity. One of the staggering results of mathematics is that the infinite set of real numbers is somehow “larger” than the infinite set of natural numbers. The German mathematician Georg Cantor supplied the proof of the uncountable nature of the real numbers, sometimes called the “diagonal proof”. It is like a beautiful gem that has suddenly fallen from the sky into our lap; reading it gives one intense pleasure.

The continuum hypothesis asks whether there is an infinity whose size is between the countable infinity of the natural numbers and the uncountable infinity of the real numbers. The mathematicians Kurt Gödel and – more notably – Paul Cohen were unable to prove whether the hypothesis is correct or not, but they were able to prove something equally or even more interesting; that the continuum hypothesis cannot be decided one way or another within the axiomatic system of number theory. Thus, there is a world of mathematics in which the hypothesis is true, and there is one in which it is false. And our current understanding of mathematics is consistent with both these worlds.

Fifty years later, the computational mathematicians have found a startling and unexpected connection between the truth or lack thereof of the continuum hypothesis and the idea of learnability in machine learning. Machine learning seeks to learn the details of a small set of data and make correlative predictions for larger datasets based on these details. Learnability means that an algorithm can learn parameters from a small subset of data and accurately make extrapolations to the larger dataset based on these parameters. The recent study found that whether learnability is possible or not for arbitrary, general datasets depends on whether the continuum hypothesis is true. If it is true, then one will always find a subset of data that is representative of the larger, true dataset. If the hypothesis is false, then one will never be able to pick such a dataset. In fact in that case, only the true dataset represents the true dataset, much as only an accused man can best represent himself.

This new result extends both set theory and machine learning into urgent and tantalizing territory. If the continuum hypothesis is false, it means that we will never be able to guarantee being able to train our models on small data and extrapolate to large data.
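The diagonal proof the excerpt praises is short enough to state in compressed form:

```latex
% Cantor's diagonal argument, compressed. Suppose x_1, x_2, x_3, ...
% enumerates every real in (0,1), with decimal expansions
% x_i = 0.d_{i1} d_{i2} d_{i3} ...  Build y = 0.e_1 e_2 e_3 ... by flipping
% the diagonal digits (choosing 4/5 sidesteps the 0.4999... = 0.5 ambiguity):
\[
  e_i =
  \begin{cases}
    5 & \text{if } d_{ii} \neq 5,\\
    4 & \text{if } d_{ii} = 5.
  \end{cases}
\]
% Then y differs from x_i at the i-th digit for every i, so y is not on
% the list. No enumeration of (0,1) exists: the reals are uncountable.
```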
Davide Castelvecchi, Machine learning leads mathematicians to unsolvable problem, Nature, Jan 9, 2019:
In the latest paper, Yehudayoff and his collaborators define learnability as the ability to make predictions about a large data set by sampling a small number of data points. The link with Cantor’s problem is that there are infinitely many ways of choosing the smaller set, but the size of that infinity is unknown.

The authors go on to show that if the continuum hypothesis is true, a small sample is sufficient to make the extrapolation. But if it is false, no finite sample can ever be enough. This way they show that the problem of learnability is equivalent to the continuum hypothesis. Therefore, the learnability problem, too, is in a state of limbo that can be resolved only by choosing the axiomatic universe.

The result also helps to give a broader understanding of learnability, Yehudayoff says. “This connection between compression and generalization is really fundamental if you want to understand learning.”
And here's the research paper: Shai Ben-David, Pavel Hrubeš, Shay Moran, Amir Shpilka & Amir Yehudayoff, Learnability can be undecidable, Nature Machine Intelligence 1, 44–48 (2019):
Abstract: The mathematical foundations of machine learning play a key role in the development of the field. They improve our understanding and provide tools for designing new learning paradigms. The advantages of mathematics, however, sometimes come with a cost. Gödel and Cohen showed, in a nutshell, that not everything is provable. Here we show that machine learning shares this fate. We describe simple scenarios where learnability cannot be proved nor refuted using the standard axioms of mathematics. Our proof is based on the fact that the continuum hypothesis cannot be proved nor refuted. We show that, in some cases, a solution to the ‘estimating the maximum’ problem is equivalent to the continuum hypothesis. The main idea is to prove an equivalence between learnability and compression.
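The ‘estimating the maximum’ (EMX) problem can be stated compactly. This paraphrase of the setup is mine; see the paper for the precise definitions:

```latex
% EMX ("estimating the maximum"), paraphrased. Fix a domain X, a family
% of sets F (subsets of X), and a class of probability distributions over X.
% Given epsilon, delta, and an i.i.d. sample S ~ P^m from an unknown P in
% the class, a learner G must output a set in F that nearly maximizes its
% probability mass:
\[
  \Pr_{S \sim P^m}\!\left[\, P\bigl(G(S)\bigr) \;\ge\; \sup_{F \in \mathcal{F}} P(F) - \varepsilon \,\right] \;\ge\; 1 - \delta .
\]
% Ben-David et al. instantiate this with X = [0,1], F = the finite subsets
% of [0,1], and the finitely supported distributions; whether such a G
% exists then turns on the continuum hypothesis, hence is independent of ZFC.
```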
And the conclusion:
The main result of this work is that the learnability of the family of sets ℱ∗ over the class of probability distributions 𝒫∗ is undecidable. While learning ℱ∗ over 𝒫∗ may not be directly related to practical machine learning applications, the result demonstrates that the notion of learnability is vulnerable. In some general yet simple learning frameworks there is no effective characterization of learnability. In other words, when trying to understand learnability, it is important to pay close attention to the mathematical formalism we choose to use.

How come learnability can neither be proved nor refuted? A closer look reveals that the source of the problem is in defining learnability as the existence of a learning function rather than the existence of a learning algorithm. In contrast with the existence of algorithms, the existence of functions over infinite domains is a (logically) subtle issue.

The advantage of the current standard definitions (that use the language of functions) is that they separate the statistical or information-theoretic issues from any computational considerations. This choice plays a role in the fundamental characterization of PAC learnability by the VC dimension. Our work shows that this set-theoretic view of learnability has a high cost when it comes to more general types of learning.

Sunday, January 20, 2019

Syncho-Dog


Saturday, January 19, 2019

Friday Fotos [a day late]: Food (at the Malibu)





Body sway and synchronization in musical performance

Andrew Chang, Haley E. Kragness, Steven R. Livingstone, Dan J. Bosnyak & Laurel J. Trainor, Body sway reflects joint emotional expression in music ensemble performance, Scientific Reports 9: 205 (2019) DOI:10.1038/s41598-018-36358-4.
Abstract: Joint action is essential in daily life, as humans often must coordinate with others to accomplish shared goals. Previous studies have mainly focused on sensorimotor aspects of joint action, with measurements reflecting event-to-event precision of interpersonal sensorimotor coordination (e.g., tapping). However, while emotional factors are often closely tied to joint actions, they are rarely studied, as event-to-event measurements are insufficient to capture higher-order aspects of joint action such as emotional expression. To quantify joint emotional expression, we used motion capture to simultaneously measure the body sway of each musician in a trio (piano, violin, cello) during performances. Excerpts were performed with or without emotional expression. Granger causality was used to analyze body sway movement time series amongst musicians, which reflects information flow. Results showed that the total Granger-coupling of body sway in the ensemble was higher when performing pieces with emotional expression than without. Granger-coupling further correlated with the emotional intensity as rated by both the ensemble members themselves and by musician judges, based on the audio recordings alone. Together, our findings suggest that Granger-coupling of co-actors’ body sways reflects joint emotional expression in a music ensemble, and thus provide a novel approach to studying joint emotional expression.
Granger causality "is a statistical estimation of the degree to which one time series is predicted by the history of another time series, over and above prediction by its own history. The larger the value of Granger causality, the better the prediction, and the more information that is flowing from one time series to another."
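A minimal two-series sketch of the idea, using statsmodels; the synthetic "sway" series, coupling strength, and lag are invented for illustration, and the paper's trio analysis is considerably more involved:

```python
# Minimal Granger-causality illustration on synthetic "body sway" series.
# The violinist's sway is driven by the cellist's sway one step earlier,
# so the cellist's history should improve prediction of the violinist.
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(0)
n = 500
cellist = rng.normal(size=n)          # leader: white noise
violinist = np.zeros(n)               # follower: lagged copy plus noise
for t in range(1, n):
    violinist[t] = 0.6 * cellist[t - 1] + rng.normal(scale=0.5)

# Test whether the second column (cellist) Granger-causes the first
# (violinist); small p-values mean the other series' history helps.
data = np.column_stack([violinist, cellist])
grangercausalitytests(data, maxlag=2)
```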

From the introduction:
The performing arts represent one area in which joint emotional expression is essential. Emotional expression is a central goal in music performances [15,16], and performers often depart from the notated score to communicate emotions and musical structure by introducing microvariations in intensity and speed [17,18]. Music ensemble performers therefore must coordinate not only their actions, but also their joint expressive goals [19]. For musicians in an ensemble, sharing a representation of a global performance outcome facilitates joint music performance [20,21]. Interpersonal event-to-event temporal precision has been widely used as a local index of sensorimotor aspects of joint action [22,23,24]. However, this method is likely insufficient to capture higher-order aspects of joint performance, which may involve stylistic asynchronies, complex leader-follower dynamics, and expressive variations in timbre, phrasing, and dynamics, which take place over longer time scales and are not necessarily reflected by event-to-event temporal precision. For example, a previous study examined the inter-onset intervals of piano duet keystrokes, but cross-correlation analysis failed to reveal leader-follower relationships, likely because these depend on aspects of joint performance involving longer time scales [25].

Body sway among co-actors might be a useful measurement of joint emotional expression. Body sway is a domain-general index for measuring real-time, real-world interpersonal coordination and information sharing. Relations between co-actors’ body sway have been associated with joint action performance in many domains, including engaging in motor coordination tasks [26,27], having a conversation [28,29,30], and music ensemble performance [25,31,32,33,34]. Specifically in music performance, it has been associated with melodic phrasing [35], suggesting it reflects the higher-order aspect of music performance, rather than lower-order note-to-note precision.

In a previous study, we experimentally manipulated leadership roles in a string quartet and examined the predictive relationships amongst the performers’ body sway movements [36]. Results showed that leaders’ body sway more strongly predicted other musicians’ body sway than did the body sway of followers, suggesting that body sway coupling reflects directional information flow. This effect was diminished, but still observed, even when musicians could not see each other, suggesting that body sway is, at least in part, a byproduct of psychological processes underlying the planning and production of music. This process is similar to how gestures during talking reflect thoughts and facilitate speech production, in addition to being directly communicative [37]. Furthermore, the total coupling strength in a quartet (averaged amount of total predictive movement across each pair of performers) positively correlated with performers’ self-ratings of performance quality, but it did not necessarily correlate with self-ratings of synchronization. This suggests that body sway coupling might reflect performance factors above and beyond interpersonal temporal precision (synchronization), and might reflect in part emotional expression.

Friday, January 18, 2019

Shrine of the Triceratops: A Graffiti Primer

Or: Indiana Jones and the Green Dinosaur, a Tale of Exploration, Deduction, and Interpretation in the Wilds of Jersey City


This is the first post I made about graffiti. It went up on The Valve on November 1, 2006, over 12 years ago. When I posted this I really didn't know what I was looking at. I didn't even know that this was a name...

[Photo: IMGP1338rd.jpg]

...much less that the name was Joe. In time I figured out that it was by Japan Joe, not Jersey Joe, and I found someone who was present when Joe did it. All I knew was that, like Alice, I'd tumbled to something wonderful, something I had to investigate. Note also that I framed the piece with a quote from Sir Gawain and the Green Knight, a great medieval poem. That still seems right.

* * * * *

“Now, indeed,” said Gawain, “here is a wizardly waste,
And this an ugly oratory, still overgrown with weeds;
Well it befits that wight, warped and wrapped in green,
To hold his dark devotions here in the Devil's service!
Now, sure in my five wits I fear it is the foul fiend
Who here has led me far astray, the better to destroy me.

–Sir Gawain and the Green Knight

Though ignorance is not, as the saying implies, a stop on the highway to bliss, it has its uses. In my case, all-but-ignorance of the world of graffiti and hip-hop has allowed me to explore my immediate surroundings as though I were a child discovering neat things in the woods, like the abandoned electrical substation where Steve and I found the raccoon skeleton in one corner of a room, or the strange markings on Timmy's lawn that looked like landing tracks from a flying saucer.

If I had been well-informed about the current state of graffiti I would not have regarded the images I recently blundered into as objects of wonder. I would have known what and perhaps even why they were and thought nothing more of them. Thus I would have been unable to see that I had found a shrine to the spirit of the triceratops. To me it would have just been a large and interesting painting (actually, a “piece”) in a strange location, strange because it is outdoors and thus unprotected, and hidden from public view as well. What sort of artist deliberately does good work in a place where no one will see it?

Tags, Throw-ups, and Pieces

The adventure started about a week ago [remember, I wrote this in November of 2006] when I decided to take some pictures of my neighborhood, Hamilton Park, roughly a third of a mile (as measured on Google Earth) from the Holland Tunnel in downtown Jersey City. It's mostly a residential neighborhood consisting of one-, two-, and three-family attached housing and small apartment buildings. But there are large warehouses nearby, a small abandoned rail yard, a small office building for the Port Authority of New York and New Jersey, and various signs and remnants of more substantial industrial use not so long ago. It's a gentrifying neighborhood where homeless people push their grocery carts on streets where Mazdas and Range Rovers are parked.

While walking the streets taking pictures of this and that I noticed “tags” (see Figure 1) on signs, sidewalks, walls, fire hydrants, dumpsters and other surfaces.

Figure 1: Two tags on the dumpster behind my apartment building.

Figure 2: Two “throw-ups” on the side of a building, Rime and Hael

I also saw some more elaborate graffiti of the kind known as “throw-ups” (see Figure 2) - though I didn't know the term when I started this adventure. They're generally, though not necessarily, larger than tags and have filled letters, with the outline and body in contrasting colors. They take more time to make than tags, but can still be thrown up rather quickly, necessary to avoid detection by the authorities.

I uploaded the first batch of photos into my computer and began examining and editing them in Photoshop. The more I looked at them, the more fascinated I became. I decided to roam the neighborhood looking for tags and other graffiti. I didn't know what I would find, but I had every reason to think there might be something interesting out there.

After all, this “piece” (Figure 3) - as they are called, from “masterpiece” - is on an embankment about 100 yards from my apartment building. Notice the lettering to the left and right and the strange long-nosed green creature in the center. Perhaps I would find one or two other pieces like it. I now know that such pieces are common enough, and have been so for years.

Figure 3: A piece on Jersey Avenue between 10th and 11th streets. It faces east toward the Holland Tunnel, the Hudson River, and Manhattan.

For that matter I had seen elaborately painted subway cars back in the 1970s. But I was only a visitor to New York at the time; such graffiti were not a part of my world. I had read about elaborate graffiti, about competitions between “writers” (the term of art for those who make graffiti) and their “crews,” about graffiti in the high art world of Andy Warhol, about Keith Haring and Jean-Michel Basquiat. But I did not live with the work. I only visited it.

Wednesday, January 16, 2019

Sabine Hossenfelder thinks a bigger collider would be a poor investment

CERN is dreaming of a new and larger particle collider, called the Future Circular Collider (FCC). The cost would be in the low 10s of billions (dollars or Euros, makes little difference). Hossenfelder concludes:
... investment-wise, it would make more sense to put particle physics on a pause and reconsider it in, say, 20 years to see whether the situation has changed, either because new technologies have become available or because more concrete predictions for new physics have been made.

At current, other large-scale experiments would more reliably offer new insights into the foundations of physics. Anything that peers back into the early universe, such as big radio telescopes, for example, or anything that probes the properties of dark matter. There are also medium and small-scale experiments that tend to fall off the table if big collaborations eat up the bulk of money and attention. And that’s leaving aside that maybe we might be better off investing in other areas of science entirely.

Of course a blog post cannot replace a detailed cost-benefit assessment, so I cannot tell you what’s the best thing to invest in. I can, however, tell you that a bigger particle collider is one of the most expensive experiments you can think of, and we do not currently have a reason to think it would discover anything new. Ie, large cost, little benefit. That much is pretty clear.

No, I did not have dinner at the White House



On the vicissitudes of authors and intentions in literary criticism

John Farrell, Why Literature Professors Turned Against Authors – Or Did They?, Los Angeles Review of Books, 13 January 2019. The opening paragraphs:
SINCE THE 1940s among professors of literature, attributing significance to authors’ intentions has been taboo and déclassé. The phrase literary work, which implies a worker, has been replaced in scholarly practice — and in the classroom — by the clean, crisp syllable text, referring to nothing more than simple words on the page. Since these are all we have access to, the argument goes, speculations about what the author meant can only be a distraction. Thus, texts replaced authors as the privileged objects of scholarly knowledge, and the performance of critical operations on texts became essential to the scholar’s identity. In 1967, the French critic Roland Barthes tried to cement this arrangement by declaring once and for all the “Death of the Author,” adding literary creators to the long list of artifacts that have been dissolved in modernity’s skeptical acids. Authors, Barthes argued, have followed God, the heliocentric universe, and (he hoped) the middle class into oblivion. Michel Foucault soon added the category of “the human” to the list of soon-to-be-extinct species.

Barthes also saw a bright side in the death of the author: it signaled the “birth of the reader,” a new source of meaning for the text, which readers would provide themselves. But the inventive readers who could replace the author’s ingenuity with their own never actually materialized. Instead, scholarly readers, deprived of the author as the traditional source of meaning, adopted a battery of new theories to make sense of the orphaned text. So what Barthes’s clever slogan really fixed in place was the reign in literary studies of Theory-with-a-capital-T. Armed with various theoretical instruments — structuralism, psychoanalysis, Marxism, to name just a few — critics could now pierce the verbal surface of the text to find hidden meanings and purposes unknown to those who created them.

But authorship and authorial intention have proven not so easy to dispose of. The most superficial survey of literary studies will show that authors remain a constant point of reference. The texts upon which theoretically informed readers perform their operations continue for the most part to be edited with the authors’ intentions in mind, and scholars continue to have recourse to background information about authors’ artistic intentions, as revealed in public pronouncements, private papers, and letters, though they do so with ritual apologies for committing the “intentional fallacy.” Politically minded critics, of which there are many, cannot avoid authors and their intended projects. And this is just a hint of the author’s continuing presence. All the while, it goes without saying, scholars continue to insist on their own authorial privileges, highlighting the originality of their insights while duly recording their debts to others. They take the clarity and stability of meaning in their own works as desirable achievements while, in the works created by their subjects, these qualities are presumed to be threats to the freedom of the reader.

Fortunately or unfortunately, it is impossible to get rid of authors entirely because the signs that constitute language are arbitrarily chosen and have no significance apart from their use. The dictionary meanings of words are only potentially meaningful until they are actually employed in a context defined by the relation between author and audience. So how did it happen that professors of literature came to renounce authors and their intentions in favor of a way of thinking — or at least a way of talking — that is without historical precedent, has scant philosophical support, and is to most ordinary readers not only counterintuitive but practically incomprehensible?
Farrell then goes on to sketch out how that happened, beginning with the late 18th century. One thing that happened is that the stock of the author soared to impossible heights:
The elevation of the literary author as the great purveyor of experience had profound effects. Now the past history of literature could be read as the production of superior souls speaking from their own experience. In the minds of Victorian readers, for example, understanding the works of Shakespeare involved following the poet’s personal spiritual and psychological journey, beginning with the bravery of the early histories and the wit of the early comedies, turning in mid-career to the visceral disgust with life evinced in the great tragedies, and arriving, finally, at the high plane of detachment and acceptance that comes into view in the late romances. Not the cause of Hamlet’s suicidal musings but the cause of Shakespeare’s own disillusionment — that was the question that troubled the 19th century. This obsession with Shakespeare’s great soul was wonderfully mocked by James Joyce in the library chapter of Ulysses.

It was not only literary history that could be reinterpreted in the heroic manner. For the boldest advocates of Romantic imagination, all of history became comprehensible now through the biographies of the great men who made it. Poets like Homer, Virgil, Dante, and Milton were no longer spokesmen for their cultures but its creators; as Percy Shelley famously put it, poets were the “unacknowledged legislators of the world.”
And so we arrive at the late 19th and early 20th century:
So, to return to the “Death of the Author,” not only did authors have it coming; they largely enacted their own death by making the renunciation of meaning — or even speech — a privileged literary maneuver. They set themselves above the vulgar garrulity of traditional forms to pursue subtle but evanescent sensations in an almost priestly atmosphere. [...] So the author’s role in the creation of literary meaning suffered a long decline, partly because that role had been inflated and personalized beyond what was sustainable, partly because authors found value in the panache of renouncing it, and partly because critics welcomed the new sources of authority offered by Freudian, Marxist, and other modes of suspicious decoding. Up to this point, the dethroning of the author centered entirely on the relation between authorial psychology and the creation and value of literary works; it did not question that the author’s intentions played an important role in determining a work’s actual meaning.
And then came the intentional fallacy and the New Criticism:
New Criticism offered a standardized method for everyone — poets, students, and critics alike. Eliot called it the “lemon-squeezer school” of criticism. His grand, impersonal stance, which governed the tastes of a generation, had undoubtedly done a great deal to shape the detached attitude of criticism that emerged in the wake of “The Intentional Fallacy,” but his influence as a poet-legislator was also one of that article’s targets. Not only were Eliot’s critical judgments the expression of an unmistakably personal sensibility, but he had inadvertently stirred up trouble by adding his own notes to The Waste Land, the poem that otherwise offered the ideal object for New Critical decipherment. In order to short-circuit the poet’s attempt to control the reading of his own work, Wimsatt and Beardsley argued that the notes to The Waste Land should not be read as an independent source of insight into the author’s intention; instead, they should be judged like any other part of the composition — which amounts to transferring them, implicitly, from the purview of the literary author to that of the poetic speaker. Thus, rather than providing an undesirable clarification of its meaning, the notes were to be judged in terms of the internal drama of the poem itself. Few scholars of Eliot took this advice, showing once again the difficulty of abiding by the intentional taboo. [...]

In hindsight we can see that the long-term result of the trend Barthes called the “Death of the Author” was that meaning emigrated in all directions — to mere texts, to functions of texts like poetic speakers and implied authors, to the structures of language itself apart from speakers, to class and gender ideologies, to the unconscious, and to combinations of all of these, bypassing authors and their intentions. While following these various flights, critics have nonetheless continued to rely upon authorial intention in the editing and reading of texts, in the use of background materials, in the advocacy of political agendas, in the establishing of their own intellectual property, and in many other ways.
And in conclusion:
So why does it matter at this late date if literary scholars continue to reject the notion of intention in theory, given that they no longer avoid it in practice? Of the many reasons, I will note four.

First, the simple contradiction between theory and practice undermines the intellectual coherence of literary studies as a whole, cutting it off both from practitioners of other disciplines and from ordinary readers, including students in the classroom. In an age when the humanities struggle to justify their existence, this does not make that justification any easier.

Second, the removal of the author from the equation of literature, even if only in theory, facilitates the excessive recourse to hidden sources of meaning — linguistic, social, economic, and psychological. It gives license to habits of thought that resemble paranoia, or what Paul Ricoeur has called “the hermeneutics of suspicion.” Just as the New Critics feared the stability of meaning they associated with the reductive language of science, so critics on the left fear the stability of meaning they associate with the continuing power of metaphysics and tradition. Such paranoia is a poor antidote to naïveté. It puts critics in a position of superiority to their subjects, a position as unequal as the hero-worshipping stance of the 19th century, giving free rein to what E. P. Thompson memorably called “the enormous condescension of posterity.”

Third, the question regarding which kinds of authorial intention are relevant to which critical concerns is still a live and pressing one, as the case of Frankenstein suggests.

Fourth and finally, objectifying literary authors as mere functions of the text, or mere epiphenomena of language, is a radically dehumanizing way to treat them. For a discipline that is rightly concerned with recovering suppressed voices and with the ways in which all manner of people can be objectified, acquiescence to the objectification of authors is a temptation to be resisted. As Hegel pointed out long ago in his famous passage on masters and slaves, to degrade the humanity of others with whom we could be in conversation is to impoverish our own humanity.

Space as a framework for representing mental contents in the brain

Jordana Cepelewicz, The Brain Maps Out Ideas and Memories Like Spaces, Quanta Magazine, January 14, 2019. Opening paragraphs:
We humans have always experienced an odd — and oddly deep — connection between the mental worlds and physical worlds we inhabit, especially when it comes to memory. We’re good at remembering landmarks and settings, and if we give our memories a location for context, hanging on to them becomes easier. To remember long speeches, ancient Greek and Roman orators imagined wandering through “memory palaces” full of reminders. Modern memory contest champions still use that technique to “place” long lists of numbers, names and other pieces of information.

As the philosopher Immanuel Kant put it, the concept of space serves as the organizing principle by which we perceive and interpret the world, even in abstract ways. “Our language is riddled with spatial metaphors for reasoning, and for memory in general,” said Kim Stachenfeld, a neuroscientist at the British artificial intelligence company DeepMind.

In the past few decades, research has shown that for at least two of our faculties, memory and navigation, those metaphors may have a physical basis in the brain. A small seahorse-shaped structure, the hippocampus, is essential to both those functions, and evidence has started to suggest that the same coding scheme — a grid-based form of representation — may underlie them. Recent insights have prompted some researchers to propose that this same coding scheme can help us navigate other kinds of information, including sights, sounds and abstract concepts. The most ambitious suggestions even venture that these grid codes could be the key to understanding how the brain processes all details of general knowledge, perception and memory.
And so on and so forth:
This kind of grid network, or code, constructs a more intrinsic sense of space than the place cells do. While place cells provide a good means of navigating where there are landmarks and other meaningful locations to provide spatial information, grid cells provide a good means of navigating in the absence of such external cues. In fact, researchers think that grid cells are responsible for what’s known as path integration, the process by which a person can keep track of where she is in space — how far she has traveled from some starting point, and in which direction — while, say, blindfolded.

“The idea is that the grid code could therefore be some sort of metric or coordinate system,” said Jacob Bellmund, a cognitive neuroscientist affiliated with the Max Planck Institute in Leipzig and the Kavli Institute for Systems Neuroscience in Norway. “You can basically measure distances with this kind of code.” Moreover, because of how it works, that coding scheme can uniquely and efficiently represent a lot of information.

And not just that: Since the grid network is based on relative relations, it could, at least in theory, represent not only a lot of information but a lot of different types of information, too. “What the grid cell captures is the dynamic instantiation of the most stable solution of physics,” said György Buzsáki, a neuroscientist at New York University’s School of Medicine: “the hexagon.” Perhaps nature arrived at just such a solution to enable the brain to represent, using grid cells, any structured relationship, from maps of word meanings to maps of future plans.
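As a gloss on Buzsáki's "hexagon": a standard textbook idealization (not a model from any of the studies quoted here) builds a grid cell's firing map by summing three cosine gratings oriented 60 degrees apart, which peaks on a hexagonal lattice:

```python
# Idealized grid-cell firing map: the sum of three plane waves oriented
# 60 degrees apart peaks on a hexagonal lattice. Spacing and phase are
# arbitrary illustrative choices.
import numpy as np

def grid_rate(x, y, spacing=0.5, phase=(0.0, 0.0)):
    rate = 0.0
    k = 4 * np.pi / (np.sqrt(3) * spacing)          # wavevector magnitude
    for theta in (0.0, np.pi / 3, 2 * np.pi / 3):   # 0, 60, 120 degrees
        rate += np.cos(
            k * (np.cos(theta) * (x - phase[0]) + np.sin(theta) * (y - phase[1]))
        )
    return rate   # maxima form a hexagonal grid with the given spacing

xs, ys = np.meshgrid(np.linspace(0, 2, 200), np.linspace(0, 2, 200))
firing_map = grid_rate(xs, ys)   # plot with imshow to see the hexagons
```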
Still further on:
Some researchers are making even bolder claims. Jeff Hawkins, the founder of the machine intelligence company Numenta, leads a team that’s working on applying the grid code not just to explain the memory-related functions of the hippocampal region but to understand the entire neocortex — and with it, to explain all of cognition, and how we model every aspect of the world around us. According to his “thousand brains theory of intelligence,” he said, “the cortex is not just processing sensory input alone, but rather processing and applying it to a location.” When he first thought of the idea, and how grid cells might be facilitating it, he added, “I jumped out of my chair, I was so excited.”
Here's a Hawkins article:
Jeff Hawkins, Marcus Lewis, Mirko Klukas, Scott Purdy and Subutai Ahmad, A Framework for Intelligence and Cortical Function Based on Grid Cells in the Neocortex, Front. Neural Circuits, 11 January 2019 | https://doi.org/10.3389/fncir.2018.00121.

Abstract: How the neocortex works is a mystery. In this paper we propose a novel framework for understanding its function. Grid cells are neurons in the entorhinal cortex that represent the location of an animal in its environment. Recent evidence suggests that grid cell-like neurons may also be present in the neocortex. We propose that grid cells exist throughout the neocortex, in every region and in every cortical column. They define a location-based framework for how the neocortex functions. Whereas grid cells in the entorhinal cortex represent the location of one thing, the body relative to its environment, we propose that cortical grid cells simultaneously represent the location of many things. Cortical columns in somatosensory cortex track the location of tactile features relative to the object being touched and cortical columns in visual cortex track the location of visual features relative to the object being viewed. We propose that mechanisms in the entorhinal cortex and hippocampus that evolved for learning the structure of environments are now used by the neocortex to learn the structure of objects. Having a representation of location in each cortical column suggests mechanisms for how the neocortex represents object compositionality and object behaviors. It leads to the hypothesis that every part of the neocortex learns complete models of objects and that there are many models of each object distributed throughout the neocortex. The similarity of circuitry observed in all cortical regions is strong evidence that even high-level cognitive tasks are learned and represented in a location-based framework.

Tuesday, January 15, 2019

Tulsi Gabbard: Bolton on Iran must be shut down


Yeah, I know it may be creepy. But it's life, or death. Whatever. A cemetery. With a touch of red / life?

Some thoughts about Wikipedia

I subscribe to a listserv devoted to the digital humanities. Recently another subscriber asked us for our thoughts about Wikipedia. Here's my response.

* * * * *

I’ve got three core comments on Wikipedia: 1) I’ve been using it happily for years and am, for the most part, satisfied. 2) I think it’s important to note that it covers a much wider range of topics than traditional encyclopedias. 3) If I were teaching, I would probably have graduate students, and perhaps advanced undergraduates as well, involved in editing Wikipedia.

On the first point, Wikipedia is my default reference work on a wide range of topics (though not philosophy, where I first go to the Stanford Encyclopedia of Philosophy). This seems to be the case for many people. Depending on what I’m interested in at the moment I may consult other sources as well, some referenced in a Wikipedia article, others from a general search. I have seen Wikipedia used as a source in scholarly publications that have been peer reviewed, though I don’t know, off hand, whether or not I’ve done so in any of my publications in the academic literature. But I certainly reference Wikipedia in my blog posts and in the working papers derived from them.

Depending on this and that I may consult the “Talk” page for an article and/or its edit history as well, the former more likely than the latter. For example, I have a particular interest in computational linguistics. Wikipedia has an entry for computational linguistics, but also one for natural language processing (NLP). The last time I checked (several months ago) the “Talk” pages for both articles raised the issue of the relationship between the two articles. Should they in fact be consolidated into one article or is it best to leave them as two? How do we handle the historical relationship between the two? I have no particular opinion on that issue, but I can see that it’s an important issue. Sophisticated users of Wikipedia need to know that such issues exist. Such issues also exist in more traditional reference works, but there’s no way to know about them as there is no way to “look under the hood”, so to speak, to see how the entry came about.

I’ve written one Wikipedia entry from scratch, the one for David G. Hays, the computational linguist. I hesitated about writing the article as I’m a student of his and so can hardly claim to be an unbiased source. But, he was an important figure in the development of the discipline and there was no article about him. So I wrote one. I did that several years ago and so far no one has questioned the article (I haven’t checked it in a month or three). Now maybe that’s an indication that I did a good job, but I figure it’s just as likely an indication that few people are interested in the biography of a dead founder of a rapidly changing technical subject.

I also helped the late Tim Perper on some articles about manga and anime – pervasive in Japanese popular culture and important in the wider world as well. In particular, I’m thinking about the main entry for manga. Tim was an expert on manga, the sort of person you’d want to write the main article. Manga, however, is the kind of topic that attracts legions of enthusiastic fans and, alas, enthusiasm is not an adequate substitute for intellectual sophistication and wide-ranging knowledge and experience. So I got to see a bit of what’s sometimes called “edit wars” in Wikipedia. In this case it was more like edit skirmishes. But it was annoying.

After all, anyone can become an editor at Wikipedia; there’s no a priori test of knowledge. You just create an account and go to work on entries that interest you. An enthusiastic fan can question and countermand the judgement of an expert (like Tim Perper). If editing disputes become bad enough there are mechanisms for adjudicating them, though I don’t know how good they are. For all I know the current entries for, say, Donald Trump and Alexandria Ocasio-Cortez, are current battle grounds. Maybe they’re on lockdown because the fighting over the entries had been so intense. Or maybe everyone with a strong interest in those entries is in agreement. (Ha!)

On the second issue, breadth of coverage, would a traditional encyclopedia have an entry for manga? At this point, most likely yes (I don’t really know as I don’t consult traditional reference works any more, except for the Stanford Encyclopedia of Philosophy and the Internet Encyclopedia of Philosophy). But not only does Wikipedia have an entry for manga, but it has entries for various genres of manga, important creators, and important titles. The same for anime. And film. And TV.

At the moment I’m watching “Battlestar Galactica” (the new millennium remake) and “Friday Night Lights”, two very different TV series that are available for streaming. “Galactica” has a large fan base and an extensive set of Wikipedia articles which includes a substantial entry for each episode in the four-year run as well as entries for the series as a whole, an entry that covers the characters, and one that covers the spacecraft. There may be more entries as well. Judging from Wikipedia entries, the fan base for “Friday Night Lights” is not so large. There is an entry for each season (of four), but not entries for individual episodes. But, just as the entry for the newer version of “Battlestar Galactica” links back to the original series (from the previous millennium), so the entry for “Friday Night Lights” links back to the movie and to the book on which the movie is based.

Beyond this, I note that I watch A LOT of streaming video, both movies and TV. And I frequently consult Wikipedia and other online resources. One observation I have is that plot summaries vary from very good to not very reliable. Writing good plot summaries is not easy. It may not require original thinking, but still, it’s not easy. This is particularly true when you’re dealing with an episode in an ongoing series that follows two or three strands of action. When you write the summary, do you summarize each strand of action in a single ‘lump’ or do you interleave the strands in the way they are presented in the episode? Off hand I’d prefer to see the latter, but I don’t know what I’d think if I actually got that – nor have I kept notes on just how it’s done in case after case after case (I’ve followed 10s of them in the past decade or so).

Which brings me to the third point: if I were still teaching I’d involve students in editing Wikipedia. I know that others have done this; I’m thinking in particular of feminists who are concerned about entries for women, though, alas, I can offer no citations. Still, I’m thinking that writing plot summaries for this, that, or the other would be a useful thing to do, and something within the capacities of graduate students and advanced undergraduates. Not only could they do it, but doing it would be a good way of teaching them to focus on just what happens in a story. But how would you do it?

For example, I’d like to see plot summaries for each episode of “Friday Night Lights”. What kind of course would provide a rationale for doing that? Obviously a course devoted to the series. Would I want to teach such a course? I don’t know. At the moment I’ve finished watching the first of four seasons; that’s 22 episodes. I find it hard to justify teaching a course, at whatever level, devoted entirely to that series, though I have no trouble imagining a detailed discussion of each episode. But how do you discuss some 80 or 90 episodes of one TV series in a course with, say, 12 to 30 sessions? Does that make any kind of sense at all? And you can repeat the question for any number of TV series, anime series, whatever?

What about the Harry Potter novels, or Stephen King? Of course, one can dismiss these materials as mere popular culture. I’m not sure that is wise.

There’s some kind of opportunity here, but I’m not at all sure of what it is, in detail.

"Everything is subjective? – Really? Do we want to do down that rabbit hole?


No, I don't think we do, though it's all too 'ready at hand' for many humanists.

One thing we should do is read John Searle on objectivity and subjectivity. Alas, that's likely to make things a bit complicated. But the issue is an important one, so we should be willing to shoulder the complexity. See, e.g., these posts:

Monday, January 14, 2019

Some interesting throw-ups that no longer exist because the "canvas" on which they've been painted has been demolished [Jersey City]

On the primacy of music

From an interview with Dana Gioia in Image:

As chair of the National Endowment for the Arts (2003–2009), he created the largest programs in the endowment’s history, several of which, including the Big Read, Operation Homecoming, and Poetry Out Loud, continue as major presences in American cultural life. For many years, Gioia served on Image’s editorial advisory board, and he has been a guest lecturer for the Seattle Pacific University MFA program in creative writing. In 2010 he won the prestigious Laetare Medal from Notre Dame. Last year, he was appointed the Judge Widney Professor of Poetry and Public Culture at the University of Southern California—his first regular teaching post.
What he says about music:
Image: I once heard you say that if you could only have one art form, it would be music. Why?

Dana Gioia: I could give you reasons, but that would suggest that my response is rational. It isn’t. My choice of music is simply a deep emotional preference. I like the physicality of music. It is a strange art—not only profoundly beautiful, but also communal, portable, invisible, and repeatable. Its most common form is song, a universal human art that also includes poetry.

Image: As a young man, you intended to be a composer. What led to your discovery of poetry as your vocation?

DG: I started taking piano lessons at six, and I eventually also learned to play the clarinet and saxophone. During my teenage years, music was my ruling passion. At nineteen I went to Vienna to study music and German. But living abroad for the first time, I changed direction. I reluctantly realized that I lacked the passion to be a truly fine composer. I was also out of sympathy with the dull and academic twelve-tone aesthetic then still dominant. Meanwhile, I became fascinated with poetry. I found myself spending most of my time reading and writing. Poetry chose me. I couldn’t resist it.

Image: What does it mean to be a poet in a post-literate world? Or to be a librettist in an age where opera is a struggling art form?

DG: It doesn’t bother me much. I wasn’t drawn to poetry or opera because of their popularity. It was their beauty and excitement that drew me. Of course, I would like these arts to have larger audiences, but the value of an art isn’t in the size of its audience. It’s in the truth and splendor of its existence.

All that being said, let me observe that a post-print world is not a bad place for poetry. Poetry is an art that predates writing. It’s essentially an auditory art. A poet today has the potential to speak directly to an audience—through public readings, radio broadcasts, recordings, and the internet. Most people may not want to read poetry, but they do like to hear good poems recited well. I’ve always written mostly for the ear, and I find large and responsive audiences all over the country. The current cultural situation is tough on novelists and critics, but it isn’t all that bad for poets.

Image: Duke Ellington objected to his music being labeled jazz, since he just considered it music. This led me to wonder if you are bothered by the term “New Formalism” being applied to your poetry.

DG: I have never liked the term “New Formalism.” It was coined in the 1980s as a criticism of the new poetry being written by younger poets that employed rhyme, meter, and narrative. I understand the necessity of labels in a crowded and complex culture, but labels always entail an element of simplification, especially when the terms offer an easy dichotomy.

I have always written both in form and free verse. It seems self-evident to me that a poet should be free to use whatever techniques the poem demands. My work falls almost evenly into thirds—one third of it is written in free verse, one third in rhyme and meter, and one third in meter without rhyme. I do believe that all good art is in some sense formal. Every element in a work of art should contribute to its overall expressive effect. That is what form means. Whether the form is regular or irregular, symmetrical or asymmetrical is merely a means of achieving the necessary integrity of the work.
Do I go with music? Can't say, but obviously I'm sympathetic.

Fashion and art cycles

Peter Klimek, Robert Kreuzbauer, Stefan Thurner, Fashion and art cycles are driven by counter-dominance signals of elite competition: quantitative evidence from music styles, 10 Jan 2019, arXiv:1901.03114v1 [physics.soc-ph]
Abstract: Human symbol systems such as art and fashion styles emerge from complex social processes that govern the continuous re-organization of modern societies. They provide a signaling scheme that allows members of an elite to distinguish themselves from the rest of society. Efforts to understand the dynamics of art and fashion cycles have been based on 'bottom-up' and 'top down' theories. According to 'top down' theories, elite members signal their superior status by introducing new symbols (e.g., fashion styles), which are adopted by low-status groups. In response to this adoption, elite members would need to introduce new symbols to signal their status. According to many 'bottom-up' theories, style cycles evolve from lower classes and follow an essentially random pattern. We propose an alternative explanation based on counter-dominance signaling. There, elite members want others to imitate their symbols; changes only occur when outsider groups successfully challenge the elite by introducing signals that contrast those endorsed by the elite. We investigate these mechanisms using a dynamic network approach on data containing almost 8 million musical albums released between 1956 and 2015. The network systematically quantifies artistic similarities of competing musical styles and their changes over time. We formulate empirical tests for whether new symbols are introduced by current elite members (top-down), randomness (bottom-up) or by peripheral groups through counter-dominance signals. We find clear evidence that counter-dominance-signaling drives changes in musical styles. This provides a quantitative, completely data-driven answer to a century-old debate about the nature of the underlying social dynamics of fashion cycles.
A note on their method:
Empirical tests are then needed to determine which model mechanism best describes the actual evolution of musical styles. To this end we developed a method to quantify musical styles by determining each style’s typical instrumentation. From a dataset containing almost eight million albums that have been released since 1950, we extracted information about a user-created taxonomy of fifteen musical genres, 422 musical styles, and 570 different instruments. The instruments that are typically associated with a given genre (or style) were shown to be a suitable approximation to formally describe the characteristics of a style [29]. Therefore, the similarity between styles can be quantified through the similarity of their instrumentation. For instance, in Figure 1A we show an example of four different musical styles (blue circles) that are linked to five instruments (green squares). Here a link indicates that the instrument is (typically) featured in a release belonging to that style. The higher the overlap in instruments between two styles, the higher is their similarity and the thicker is the line that connects the styles in Figure 1A.
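The overlap idea is simple to sketch. In this toy version the styles and instrument sets are invented, and Jaccard similarity stands in for whatever network measure the authors actually use:

```python
# Sketch of style similarity via shared instrumentation. The styles and
# instrument sets here are invented, and Jaccard similarity is only a
# stand-in for the paper's network measure.
style_instruments = {
    "rockabilly": {"electric guitar", "double bass", "drums", "vocals"},
    "surf rock":  {"electric guitar", "bass guitar", "drums"},
    "bebop":      {"saxophone", "trumpet", "piano", "double bass", "drums"},
}

def jaccard(a, b):
    """Overlap of two instrument sets: |A & B| / |A | B|."""
    return len(a & b) / len(a | b)

styles = sorted(style_instruments)
for i, s1 in enumerate(styles):
    for s2 in styles[i + 1:]:
        sim = jaccard(style_instruments[s1], style_instruments[s2])
        print(f"{s1} ~ {s2}: {sim:.2f}")
```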

Sunday, January 13, 2019

From the diary of a (graffiti) writer in Moscow

Monday

11 a.m. Walk to Red Square, passing by the Kremlin and St. Basil’s Cathedral. Then to the new Zaryadye Park, designed by a New York-based architecture firm. It’s a totally different design aesthetic than the rest of Moscow.

1:30 p.m. Meet with local graffiti artist Cozek, who I consider to have the best style in Moscow. His crew, ADED (All Day Every Day), has been tapped for a collaboration with the fashion label Off-White that’s scheduled to release next week at KM20, a fashion-forward shop in town. We discuss how artists can leverage working with designer brands to benefit their careers. Cozek also has a collaboration with a furniture company debuting next week at the Cosmoscow art fair, and he was hired as the curator for Social Club, a new restaurant and private club opening next week in Patriarch Ponds. He wants to commission me to paint a mural at the restaurant while I’m in town.

2:30 p.m. Cozek gives me a tour of the space. The venue is beautifully designed and he invites me to choose any wall I want. All of the walls are exposed concrete, and if you paint it, there’s no going back. I seem more concerned about that than he does. I’m drawn to a horizontal wall that would be perfect for my work, but also recognize I have an 80-foot mural to paint and have my return flight scheduled for the end of the week. It would be great real estate, as this place will cater to Moscow society, but I’m reluctant to bite off more than I can chew with my limited time in town. [...]

Tuesday

12 p.m. Return to Winzavod to work on the mural. Many of today’s well-known street artists travel with an assistant, if not a team, to help bring their vision to life. Some artists are hands-on, while others don’t even touch the wall themselves. You can call me a perfectionist, or perhaps a masochist, but I typically travel alone, and create my works solely with my own two hands from start to finish. Which, admittedly, is not always most efficient. [...]

Wednesday

10 a.m. After breakfast, head to Winzavod intent on finishing my mural. After all of the letters are filled in, I repaint the background with a fresh coat of black, cleaning up all of the over-spray and dust that accumulated on the wall over the past few days. Once that’s done, it takes hours to refine the edges of the letters, pushing and pulling lines a quarter of an inch — making straight lines straighter and freehanding curves that could easily be mistaken for computer vectors.

Modern art [FDR]

Saturday, January 12, 2019

What do her Democratic colleagues in the House think of AOC?

Obviously, some don't like her anti-establishment ways. From Politico:
Democratic leaders are upset that she railed against their new set of House rules on Twitter the first week of the new Congress. Rank and file are peeved that there’s a grassroots movement to try to win her a top committee post they feel she doesn’t deserve.

Even some progressives who admire AOC, as she’s nicknamed, told POLITICO that they worry she’s not using her notoriety effectively.

“She needs to decide: Does she want to be an effective legislator or just continue being a Twitter star?” said one House Democrat who’s in lockstep with Ocasio-Cortez’s ideology. “There’s a difference between being an activist and a lawmaker in Congress.”

It’s an open question whether Ocasio-Cortez can be checked. She’s barely been in Congress a week and is better known than almost any other House member other than Nancy Pelosi and John Lewis. A media throng follows her every move, and she can command a national audience practically at will.

None of that came from playing by the usual rules: Indeed, Ocasio-Cortez’s willingness to take on her party establishment with unconventional guerrilla tactics is what got her here. It’s earned her icon status on the progressive left, it’s where the 29-year-old freshman derives her power — and, by every indication, it’s how she thinks she can pull the Democratic Party in her direction.
See this twitter thread for commentary on that article:

Come on in



AI in China

Read the whole thread.


Benjamin Wittes on collusion and obstruction of justice centered on Trump

The public understanding of and debate over the Mueller investigation rests on several discrete premises that I believe should be reexamined. The first is the sharp line between the investigation of “collusion” and the investigation of obstruction of justice. The second is the sharp line between the counterintelligence components of the investigation and the criminal components. The third and most fundamental is the notion that the investigation was, in the first place, an investigation of the Trump campaign and figures associated with it.

These premises are deeply embedded throughout the public discussion. When Bill Barr challenges what he imagines to be the predicate for the obstruction investigation, he is reflecting one of them. When any number of commentators (including Mikhaila Fogel and me on Lawfare last month) describe separate investigative cones for obstruction and collusion, they are reflecting it. When the president’s lawyers agree to have their client answer questions on collusion but draw a line at obstruction, they are reflecting it too.

But I think, and the Times’s story certainly suggests, that the story may be more complicated than that, the lines fuzzier, and the internal understanding of the investigation very different along all three of these axes from the ones the public has imbibed.
Yada yada yada... And in conclusion:
First, if this analysis is correct, it mostly—though not entirely—answers the question of the legal basis of the obstruction investigation. The president’s lawyers, Barr in his memo, and any number of conservative commentators have all argued that Mueller cannot reasonably be investigating obstruction offenses based on the president’s actions within his Article II powers in firing Comey; such actions, they contend, cannot possibly violate the obstruction laws. While this position is disputed, a great many other commentators, including me, have scratched their heads about Mueller’s obstruction theory.

But if the predicate for the investigation was rooted in substantial part in counterintelligence authorities—that is, if the theory was not just that the president may have violated the criminal law but also that he acted in a fashion that may constitute a threat to national security—that particular legal puzzle goes away. After all, the FBI doesn’t need a possible criminal violation to open a national security investigation.

The problem does not entirely go away, because as the Times reports, the probe was partly predicated as a criminal matter as well. So the question of Mueller’s criminal theory is still there. But the weight on it is dramatically less. [...]

Second, if it is correct that the FBI’s principal interest in obstruction was not as a discrete criminal fact pattern but as a national security threat, this significantly blurs the distinction between the obstruction and collusion aspects of the investigation. In this construction, obstruction was not a problem distinct from collusion, as has been generally imagined. Rather, in this construction, obstruction was the collusion, or at least part of it. The obstruction of justice statutes become, in this understanding, merely one set of statutes investigators might think about using to deal with a national security risk—specifically, the risk of a person on the U.S. side coordinating with or supporting Russian activity by shutting down the investigation.

It was about Russia. It was always about Russia. Full stop.

Friday, January 11, 2019

The VC wisdom of funding undergraduate education in creative industries

Daniel Davies on Creative arts and investing in systems at Crooked Timber:
The thing about the arts industries is that they’re very hits-driven; talking about what happens to the median person going into them is always going to massively underestimate the value of the system as a whole. They share this characteristic with pharmaceuticals and, famously, the oil industry (as the wildcatter proverb has it, “part of the cost of a gusher is the dry holes you drilled”). You can’t tell ex ante which spotty undergraduate is going to turn into a claymation genius and retrospectively justify the last decade of investment. Importantly, nor can they. As far as I can see, if you were to set it up without subsidy, you would most likely get too few people going into the creative arts, as they would rationally decide that they were more likely to be one of the ones that didn’t make it than one of the Nick Parks.

This is really not all that unorthodox; it’s just the application of venture capital thinking to what people are (wrongly in my opinion) analysing as a debt problem. The undergraduate education subsidy system ought to be thought of as one where the government makes loads and loads of smallish VC investments, effectively buying a roughly 30% shareholding for a five figure investment, with diversification across an entire undergraduate cohort every year. If you’re given that sort of an opportunity, then obviously you go for some moonshots, particularly when you’re the government of a country that famously does very well in creative industries compared to its peers.

But it’s actually possible to push this line of thinking somewhat further into a general point about arts funding, making use of the fact noted two paragraphs ago that not only is it impossible for an outsider to pick winners, it’s usually very difficult for the artists themselves. Where I think that leads is to the conclusion that when you’re looking at the rate of return on arts subsidies, there is no coherent way to measure ROI at any level more disaggregated than the entire system.
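Davies’s hits-driven point can be made with a toy portfolio simulation. Every number below is an illustrative assumption (cohort size, per-student stake, hit rate, and payoffs are invented, not figures from his post); the point is only that a median outcome well below the per-student stake is compatible with a healthy return on the cohort as a whole.

```python
# Toy simulation of a hits-driven portfolio: the median outcome says little
# about the system-level return. All parameters are made-up illustrations.
import random

random.seed(1)

N = 10_000                # cohort size (one "VC fund" of undergraduates)
STAKE = 30_000            # five-figure public investment per student
HIT_RATE = 0.001          # one in a thousand becomes a breakout success
HIT_PAYOFF = 50_000_000   # value created by a hit
MISS_PAYOFF = 5_000       # modest value attributable to everyone else

outcomes = [HIT_PAYOFF if random.random() < HIT_RATE else MISS_PAYOFF
            for _ in range(N)]
outcomes.sort()

median = outcomes[N // 2]
total_cost = STAKE * N
total_value = sum(outcomes)

print(f"median outcome per student: {median:,}")          # looks like a bad deal
print(f"portfolio ROI: {total_value / total_cost:.2f}x")  # can still exceed 1x
```

On these made-up numbers the median student returns a small fraction of the stake while the cohort as a whole returns well over 1x, which is exactly why Davies argues ROI is only coherent at the level of the entire system.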
Check out my post, Chaos in the Movie Biz: A Review of Hollywood Economics.