Tuesday, October 27, 2020

Covid-19 virus has mutated during the course of the pandemic

Further tweets in the stream:

If hamsters are shedding more virus in their upper respiratory tracts, then they are more likely to be contagious by producing infectious droplets and aerosols with higher concentrations of infectious virus. This is the type of data we need to get to understand this question, BUT

Transmission itself wasn't tested. Ideally, they would have set up two hamster cages in the same room or in cages with shared air to see if G614 was transmitted more efficiently or to more animals than D614 virus.

Since transmission studies weren't part of this paper, ultimately whether or not D614G increases transmissibility remains unknown. No doubt those studies are in progress. Ideally these will be performed in multiple transmission models to confirm the findings.

The good news: at least in hamsters, this mutation didn't appear to confer any observable differences in pathogenicity. The virus may be mutating—because that's what RNA viruses do—but it's not becoming more virulent.

The second good news: serum from hamsters infected with D614 virus neutralized G614 virus in culture, suggesting that vaccines (all designed against D614 spike) will work against either variant.

My take home: while the jury's still out on whether this variant is more transmissible in the real world, it can still be neutralized by antibodies against the spike protein that all the vaccine candidates target. It's not mutating into something more dangerous or pathogenic. It's behaving like a normal virus.

Ultimately it doesn't matter whether D614G is more transmissible or not: the message to everyone is to avoid getting EITHER variant of SARS-CoV-2.

Stay the course and stay safe, so that you can be protected against all known variants when a safe, effective vaccine is ready.

"Wall of Fame" [now gone]

Adam Roberts reviews Kim Stanley Robinson's "The Ministry for the Future" – Reimagining how we think about the future

He’s ambivalent:

It’s a righteous novel, and I’m a KSR fan of longstanding, so I expected to like Ministry of the Future. And I did, if only up to a point. Beyond that point I ... didn't, really. Nonetheless, I’d suggest, or I would if it didn’t just look perversely contradictory, that the very reason I didn’t much like The Ministry of the Future is actually an index of its success: its ambition, its throughline and above all its—well, its ministry.

He goes on, “fat book…thin on plot,” which is hardly surprising. The plot is strung between two poles, one located in the Ministry based in Zurich and the other in a clinic in Northern India. Lots of leaden dialog, lots of information dumps, plenty of future tech, some gee-whiz, some not so much. This is all standard book-review stuff and Roberts handles it very well. Where he shines, though, is in his final two paragraphs:

The negative way of spinning all this would be to say that this novel can be a dry read, sometimes positively drought-dry. There are stretches here which are, baldly stated, an effort for the reader to push through. But the positive way to spin it would be to see it as a novel not just about climate change, but about the kind of stories we tell ourselves about disasters like climate change. Those stories are, clearly, not helping. Take ‘eucatastrophe’, Tolkien’s term for a thrilling story in which disaster impends, becomes more and more inevitable and then is averted at the very last moment. It’s a real workhorse of storytelling nowadays, the eucatastrophe, especially in cinema. There is a threat to the whole world! Let’s imagine that as a singular, external thing: an asteroid on collision course, a huge invading alien spaceship. Then let’s draw out the approaching disaster and make it seem like it could never be overcome. Finally, bam: rabbit from hat, the hero saves the day at the last minute.

The Ministry for the Future is, in effect, saying: that’s a bad story—not bad in entertainment terms but bad in verisimilitude terms. It is saying: we are actually, right now, indeed facing a threat to the whole world, but it’s not a single thing it’s a complex and deeply-embedded function of human interrelation and social praxis. It’s not exterior to us, it is us. And it won’t be solved by a single heroic flourish in the nick of time. It will be solved by a congeries of difficult, drawn-out, collective labour, much of which is so inimical to ‘popular narrative’ that we dismiss it as boring. It’s not boring, though: it’s literally life-and-death. And so one part of our large, human task will be: to reconfigure the kinds of stories we are telling ourselves about disaster and how to avert it.

Notice that word, “verisimilitude”, used about a work of fiction pitched in the future, albeit the near-enough future, a future at least some of the readers of the book will live into.

* * * * *

Check out KSR’s recent interview in The Jacobin:

I’ve steeped myself in the utopian tradition. It’s not a big body of literature, it’s easy to read the best hits of the utopian tradition. You could make a list, I mean roughly twenty or twenty-five books would be the highlights of the entire four hundred years, which is a little shocking. And maybe there’s more out there that hasn’t stayed in the canon. But if you talk about the utopian canon, it’s quite small — it’s interesting, it has its habits, its problems, its gaps.

Famously, from Thomas More (Utopia) on, there’s been a gap in the history — the utopia is separated by space or time, by a disjunction. They call it the Great Trench. In Utopia, they dug a great trench across the peninsula so that their peninsula became an island. And the Great Trench is endemic in utopian literature. There’s almost always a break that allows the utopian society to be implemented and to run successfully. I’ve never liked that because one connotation of the word “utopian” is unreality, in the sense that it’s “never going to happen.”

So we have to fill in this trench. When Jameson said it’s easier to imagine the end of the world than the end of capitalism, I think what he was talking about is that missing bridge from here to there. It’s hard to imagine a positive history, but it’s not impossible. And now, yes, it’s easy to imagine the end of the world because we are at the start of a mass extinction event. But he’s talking about hegemony, and a kind of Marxist reading of history, and the kind of Gramscian notion that everybody’s in the mindset that capitalism is reality itself and that there can never be any other way — so it’s hard to imagine the end of capitalism. But I would just flip it and say, it’s hard to imagine how we get to a better system. Imagining the better system isn’t that hard; you just make up some rules about how things should work. You could even say socialism is that kind of utopian imaginary. Let’s just do it this way, a kind of society of mutual aid. And I would agree with anyone who says, “Well, that’s a good system.”

The interesting thing, and also the new stories to tell if you’re a science fiction novelist, if you’re any kind of novelist — almost every story’s been told a few times — but the story of getting to a new and better social system, that’s almost an empty niche in our mental ecology. So I’ve been throwing myself into that attempt. It’s hard, but it’s interesting.

Friday, October 23, 2020

Friday Fotos: Foreign places in the key of burnt orange

Is the fact that ideas are nonrival the key to economic growth in the 21st century? Or: What’s an idea? [the peculiarities of economic models]

I’ve been chewing on one particular paragraph, the final one, of Bloom et al., “Are Ideas Getting Harder to Find?”[1] Why? Because it bears on just what (these particular) economists mean by “idea”. Early in the paper they noted that “ideas are hard to measure” (p. 1108), observing that appropriate units of measure are far from obvious. They went on to note that “in some ways a more accurate title for this paper would be ‘Is Exponential Growth Getting Harder to Achieve?’” Which raises the question of why they didn’t choose that more accurate title. Custom, perhaps? I don’t know.

How do you measure ideas?

I understand the problem. I’m not at all sure that “idea” can even be a properly technical term; perhaps it’s better regarded as an informal common-sense term of only limited use in technical work. In any event, when it comes to actually measuring ideas, the authors use proxies in two of their three case studies. In their study of semiconductor manufacture they use research effort as measured by wages as a proxy for ideas (p. 1129), and in their study of seed lines they use R & D expenditure (pp. 1120-1121). Their measure was more direct in the case of pharmaceutical development; they counted articles in the PubMed database as identified by appropriate key words (pp. 1125-1126).

I have no problem with that. But it does mean they tend to treat ideas as atomic entities with no properties beyond the fact that they can be counted, if only indirectly, and that they can be shared. And that brings us to the final paragraph of the article.

Key insight: Ideas are nonrival

Economists distinguish between goods that are rival and goods that are nonrival. When something is a rival good, only one person or entity can use it. If Amalgamated Mining owns a particular deposit of iron ore, then Universal Minerals, for example, cannot mine that deposit. Ideas, in contrast, are nonrival. The fact that Jim Manley knows Newton’s laws of motion doesn’t preclude anyone else from understanding and using them.

With that in mind, consider the highlighted passage from the final paragraph (p. 1139) of Bloom et al.:

That one particular aspect of endogenous growth theory should be reconsidered does not diminish the contribution of that literature. Quite the contrary. The only reason models with declining research productivity can sustain exponential growth in living standards is because of the key insight from that literature: ideas are nonrival. For example, if research productivity were constant, sustained growth would actually not require that ideas be nonrival; Akcigit, Celik, and Greenwood (2016) shows that rivalrous ideas can generate sustained exponential growth in this case. Our paper therefore suggests that a fundamental contribution of endogenous growth theory is not that research productivity is constant or that subsidies to research can necessarily raise growth. Rather it is that ideas are different from all other goods in that they can be used simultaneously by any number of people. Exponential growth in research leads to exponential growth in A_t. And because of nonrivalry, this leads to exponential growth in per capita income.

The first highlighted passage seems to suggest that the idea that ideas are nonrival is due to the tradition of research on endogenous growth theory. That doesn’t make any sense, since the nonrival nature of ideas follows from the definition of “nonrival,” which is independent of that research tradition.

What’s going on? Two paragraphs earlier they had noted that: 1) endogenous growth theory assumes constant exponential growth given constant research productivity, and 2) their article reports a variety of work showing that, in fact, over the past few decades it has required more and more research to sustain exponential growth. This final paragraph is an effort to reconcile theory with evidence. The rest of the paragraph after the highlighted section does that.

How does it do it? Not very well, it seems to me, not very well. They cite a paper showing that it is possible to get sustained exponential growth from constant research productivity even if ideas are rival. However, it turns out that productivity is not constant (the burden of the article) and, wouldn’t you know, ideas aren’t rival either. Surely that must be why exponential growth remains possible.

Really? I understand that that works within the bounds of endogenous growth theory. But it seems awfully flimsy to me. It amounts to little more than saying exponential growth remains possible because ideas are ideas. And that’s not very helpful. Ideas were always nonrival; it’s not as though that property miraculously emerged in time to allow endogenous growth theory to save the appearances – a phrase, incidentally, that dates back to Plato.

It might be more useful to figure out what it is about the current run of ideas that makes them less productive. That’s what I’ve attempted in my recent working paper, Stagnation and Beyond: Economic growth and the cost of knowledge in a complex world [2]. But there I was concerned with cognitive architecture and the relationship between ideas and the world. I didn’t treat ideas merely as countable atomic units. Whether my argument is going in the right direction, that’s another matter. But it doesn’t depend on a truism.
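To put a number on what "more and more research" means, here is a back-of-the-envelope sketch in Python. The figures are my own illustrative assumptions, not estimates from Bloom et al.: if idea output per researcher halves every 13 years, then holding the growth rate of A_t at a constant 2% per year requires the research workforce to double every 13 years.

target_growth = 0.02     # desired annual growth rate of A_t (assumed for illustration)
productivity_0 = 1e-6    # initial idea output per researcher, arbitrary units (assumed)
halving_years = 13       # assumed halving time for research productivity

for year in (0, 13, 26, 39, 52):
    productivity = productivity_0 * 0.5 ** (year / halving_years)
    researchers_needed = target_growth / productivity
    print(f"year {year:2d}: researchers needed = {researchers_needed:,.0f}")

Run it and the required headcount doubles at each step. That, in miniature, is the sense in which exponential growth is getting harder to achieve.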

References

[1] Nicholas Bloom, Charles I. Jones, John Van Reenen, and Michael Webb, Are Ideas Getting Harder to Find? American Economic Review 2020, 110(4), https://doi.org/10.1257/aer.20180338.

[2] William Benzon, Stagnation and Beyond: Economic growth and the cost of knowledge in a complex world, Version 2, Working Paper, August 2, 2019, 62 pp., https://www.academia.edu/39927897/Stagnation_and_Beyond_Economic_growth_and_the_cost_of_knowledge_in_a_complex_world.

Thursday, October 22, 2020

Mid-day sun on an overcast day, with trees

Environmental change forced humans to become more adaptable 320,000 years ago

Smithsonian Magazine, "Turbulent era sparked leap in human behavior, adaptability 320,000 years ago", October 21, 2020:

For hundreds of thousands of years, early humans in the East African Rift Valley could expect certain things of their environment. Freshwater lakes in the region ensured a reliable source of water, and large grazing herbivores roamed the grasslands. Then, around 400,000 years ago, things changed. The environment became less predictable, and human ancestors faced new sources of instability and uncertainty that challenged their previous long-standing way of life.

The first analysis of a new sedimentary drill core representing 1 million years of environmental history in the East African Rift Valley shows that at the same time early humans were abandoning old tools in favor of more sophisticated technology and broadening their trade networks, their landscape was experiencing frequent fluctuations in vegetation and water supply that made resources less reliably available. The findings suggest that instability in their surrounding climate, land and ecosystem was a key driver in the development of new traits and behaviors underpinning human adaptability.

H/t Tyler Cowen.

The underlying research: Richard Potts et al., Increased ecological resource variability during a critical transition in hominin evolution, Science Advances, Vol. 6, no. 43 (21 Oct 2020), eabc8975, DOI: 10.1126/sciadv.abc8975.

Abstract: Although climate change is considered to have been a large-scale driver of African human evolution, landscape-scale shifts in ecological resources that may have shaped novel hominin adaptations are rarely investigated. We use well-dated, high-resolution, drill-core datasets to understand ecological dynamics associated with a major adaptive transition in the archeological record ~24 km from the coring site. Outcrops preserve evidence of the replacement of Acheulean by Middle Stone Age (MSA) technological, cognitive, and social innovations between 500 and 300 thousand years (ka) ago, contemporaneous with large-scale taxonomic and adaptive turnover in mammal herbivores. Beginning ~400 ka ago, tectonic, hydrological, and ecological changes combined to disrupt a relatively stable resource base, prompting fluctuations of increasing magnitude in freshwater availability, grassland communities, and woody plant cover. Interaction of these factors offers a resource-oriented hypothesis for the evolutionary success of MSA adaptations, which likely contributed to the ecological flexibility typical of Homo sapiens foragers.

Wednesday, October 21, 2020

On the beach

On why comparisons between brains and computers are problematic at best [the brain is analog, not digital]

Matthew Hutson, How Much Can Your Brain Actually Process? Don’t Ask. Slate, March 29, 2016.

This is a useful summary comparison between digital computers and the brain: "Reported estimates of how much data the brain holds in long-term memory range from 100 megabytes to 10 exabytes—in terms of Thriller on MP3, that’s either one album or 100 billion albums. This range alone should give you an immediate sense of how seriously to take the estimates." Here's the good stuff:

The fundamental difference between analog and digital information is that analog information is continuous and digital information is made of discrete chunks. Digital computers work by manipulating bits: ones and zeroes. And operations on these bits occur in discrete steps. With each step, transistors representing bits switch on or off. Jiggle a particular atom on a transistor this way or that, and it will have no effect on the computation, because with each step the transistor’s status is rounded up or down to a one or a zero. Any drift is swiftly corrected.

On a neuron, however, jiggle an atom this way or that, and the strength of a synapse might change. People like to describe the signals between neurons as digital, because a neuron either fires or it doesn’t, sending a one or a zero to its neighbors in the form of a sharp electrical spike or lack of one. But there may be meaningful variation in the size of these spikes and in the possibility that nearby neurons will spike in response. The particular arrangement of the chemical messengers in a synapse, or the exact positioning of the two neurons, or the precise timing between two spikes—these all can have an impact on how one neuron reacts to another and whether a message is passed along.

Plus, synaptic strength is not all that matters in brain function. There are myriad other factors and processes, both outside neurons and inside neurons: network structure, the behavior of support cells, cell shape, protein synthesis, ion channeling, vesicle formation. How do you calculate how many bits are communicated in one molecule’s bumping against another? How many computational “operations” is that? “The complexity of the brain is much higher at the biochemical level” than models of neural networks would have you believe, according to Terrence Sejnowski, the head of the Salk Institute’s Computational Neurobiology Laboratory. “The problem is that we don’t know enough about the brain to interpret the relevant measurement or metric at that level.”

There's more at the link.
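The error-correction point in that passage is easy to see in a toy simulation. This is my own sketch, not anything from the article: a "digital" value gets snapped back to 0 or 1 after every noisy step, so the jiggles never accumulate, while an "analog" value carries its whole history of jiggles with it.

import random

def simulate(steps=1000, noise=0.01, seed=42):
    random.seed(seed)
    digital, analog = 1.0, 1.0
    for _ in range(steps):
        jiggle = random.gauss(0.0, noise)
        # the digital value is re-thresholded to 0 or 1 at every step, so noise is erased
        digital = 1.0 if digital + jiggle >= 0.5 else 0.0
        # the analog value keeps every perturbation
        analog += jiggle
    return digital, analog

print(simulate())  # digital ends at exactly 1.0; analog has wandered away from 1.0

The digital value comes back exactly where it started; the analog value ends up somewhere else, and where it ends up depends on every jiggle along the way. That is the sense in which the brain's analog machinery resists being summarized in bits.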

Fighting the Big Tech ecosystem

Kara Swisher, The Justice Dept.’s Lawsuit Against Google: Too Little, Too Late, Oct. 20, 2020.

There’s no such thing as a single entity called Big Tech, and just saying it exists will not cut it. The challenges plaguing the tech industry are so complex that it is impossible to take action against one without understanding the entire ecosystem, which hinges on many monster companies, with many big problems, each of which requires a different remedy.

Certainly reforming Section 230 could help. But other tools may be needed, like significant fines, as well as new state and federal laws, enforcement of existing regulations and increased funding for agencies like the Federal Trade Commission, along with more aggressive consumer action and media scrutiny.

Apple’s control over the App Store and its developers? Perhaps some fairer rules over how to operate when it comes to fees and approvals, since separating the apps from the phones is a near impossible task.

Amazon’s problem with owning a critical marketplace platform where it sells its own goods alongside third-party retailers? Simply put, should Amazon be allowed to sell its own batteries if it also controls the store for a lot of batteries? It sounds like separating Amazon retail products from the store itself might be a possible solution, as well as establishing much less porous walls between the various Amazon businesses.

Facebook and its damning “land grab” and its “neutralize” emails (referring to squelching rivals), as well as its worrisome domination of the online discourse and news distribution across much of the world? This one is harder, but some breakup of its units, say a cleaving of Instagram and WhatsApp, might be a step in the right direction, along with figuring out a way to make its controversial editorial decisions more transparent and systemic rather than the more random Whatever Mark Zuckerberg Says This Week they have become.

And Google, of course, which is now for the first time ever in a real fight with the United States government? It was in early 2013 that the F.T.C. commissioners decided unanimously to scuttle the agency’s investigation of Google after getting the company to make some voluntary changes to the way it conducted its business. This despite a harsher determination by its own staff in a 160-page report, which came to light in 2015, that Google had done a lot of the things that the Justice Department is now alleging, including that its search and advertising dominance violated federal antitrust laws.

Tuesday, October 20, 2020

How fastText and BERT encode linguistic features

Abstract for the linked article:

Most modern NLP systems make use of pretrained contextual representations that attain astonishingly high performance on a variety of tasks. Such high performance should not be possible unless some form of linguistic structure inheres in these representations, and a wealth of research has sprung up on probing for it. In this paper, we draw a distinction between intrinsic probing, which examines how linguistic information is structured within a representation, and the extrinsic probing popular in prior work, which only argues for the presence of such information by showing that it can be successfully extracted. To enable intrinsic probing, we propose a novel framework based on a decomposable multivariate Gaussian probe that allows us to determine whether the linguistic information in word embeddings is dispersed or focal. We then probe fastText and BERT for various morphosyntactic attributes across 36 languages. We find that most attributes are reliably encoded by only a few neurons, with fastText concentrating its linguistic structure more than BERT.
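To give a feel for what intrinsic probing is after, here is a toy sketch of the general idea. It is not the authors' decomposable Gaussian probe; the synthetic data, the greedy selection, and the simple Gaussian classifier are all my own illustrative stand-ins. The question it asks is the paper's, though: how few embedding dimensions does a simple probe need before accuracy on a morphosyntactic attribute saturates?

import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n, d = 2000, 50
y = rng.integers(0, 2, size=n)      # binary attribute, e.g. singular vs plural (made up)
X = rng.normal(size=(n, d))         # stand-in "embeddings"
X[:, 7] += 2.0 * y                  # plant the signal in a single "focal" dimension

selected, remaining = [], list(range(d))
for _ in range(5):                  # grow the probe one dimension at a time
    scores = [(cross_val_score(GaussianNB(), X[:, selected + [j]], y, cv=3).mean(), j)
              for j in remaining]
    best_score, best_j = max(scores)
    selected.append(best_j)
    remaining.remove(best_j)
    print(f"dims={selected} accuracy={best_score:.3f}")

With the signal planted in one dimension, the first selected dimension does nearly all the work and further dimensions add little; that is what a "focal" encoding looks like. A "dispersed" encoding would keep improving as dimensions are added.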

From Hoboken's Pier 13 to the Chrysler Building

Are Inventors or Firms the Engines of Innovation?

Article by Ajay Bhaskarabhatla, Luis Cabral, Deepak Hegde, and Thomas Peeters in Management Science, published online Oct. 7, 2020.

Abstract: In this study, we empirically assess the contributions of inventors and firms for innovation using a 37-year panel of U.S. patenting activity. We estimate that inventors’ human capital is 5–10 times more important than firm capabilities for explaining the variance in inventor output. We then examine matching between inventors and firms and find highly talented inventors are attracted to firms that (i) have weak firm-specific invention capabilities and (ii) employ other talented inventors. A theoretical model that incorporates worker preferences for inventive output rationalizes our empirical findings of negative assortative matching between inventors and firms and positive assortative matching among inventors.

H/t Tyler Cowen
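For a rough sense of what "explaining the variance in inventor output" with inventor versus firm effects can look like mechanically, here is a generic two-way fixed-effects decomposition on synthetic data. It is my own illustration of the style of exercise, not the authors' estimator, and every number in it is made up.

import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n_inventors, n_firms, n_years = 300, 30, 10

inventor = np.repeat(np.arange(n_inventors), n_years)   # inventor id for each observation
firm = rng.integers(0, n_firms, size=inventor.size)     # firm id for each observation

theta = rng.normal(0.0, 1.0, n_inventors)   # inventor "human capital" (large variance, assumed)
psi = rng.normal(0.0, 0.3, n_firms)         # firm "capability" (small variance, assumed)
y = theta[inventor] + psi[firm] + rng.normal(0.0, 0.5, inventor.size)   # output proxy

# regress output on inventor dummies plus firm dummies (one firm dropped to avoid collinearity)
X = pd.concat(
    [pd.get_dummies(pd.Series(inventor), prefix="inv", dtype=float),
     pd.get_dummies(pd.Series(firm), prefix="firm", drop_first=True, dtype=float)],
    axis=1,
)
beta, *_ = np.linalg.lstsq(X.to_numpy(), y, rcond=None)

inventor_part = X.to_numpy()[:, :n_inventors] @ beta[:n_inventors]
firm_part = X.to_numpy()[:, n_inventors:] @ beta[n_inventors:]
print("variance ratio, inventor effects to firm effects:",
      round(inventor_part.var() / firm_part.var(), 1))

On this synthetic data the inventor effects account for roughly ten times the variance of the firm effects, simply because that is how the data were generated; the abstract's headline number, 5 to 10 times, is the claim that real patenting data show a qualitatively similar pattern.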

Monday, October 19, 2020

The sad truth about social media