Saturday, February 29, 2020

Netflix goes to Africa: Queen Sono

Friday, February 28, 2020

The Hudson, an orange buoy, and the George Washington Bridge

The world is changing, and increasingly uncertain

Farhad Manjoo, Admit It: You Don’t Know What Will Happen Next, NYTimes, Feb 26, 2020.
A projection of certainty is often a crucial part of commentary; nobody wants to listen to a wishy-washy pundit. But I worry that unwarranted certainty, and an under-appreciation of the unknown, might be our collective downfall, because it blinds us to a new dynamic governing humanity: The world is getting more complicated, and therefore less predictable.

Yes, the future is always unknowable. But there’s reason to believe it’s becoming even more so, because when it comes to affairs involving masses of human beings — which is most things, from politics to markets to religion to art and entertainment — a range of forces is altering society in fundamental ways. These forces are easy to describe as Davos-type grand concepts: among others, the internet, smartphones, social networks, the globalization and interdependence of supply chains and manufacturing, the internationalization of culture, unprecedented levels of travel, urbanization and climate change. But their effects are not discrete. They overlap and intertwine in nonlinear ways, leaving chaos in their wake.

In the last couple of decades, the world has become unmoored, crazier, somehow messier. The black swans are circling; chaos monkeys have been unleashed. And whether we’re talking about the election, the economy, or most any other corner of humanity, we in the pundit class would do well more often to strike a note of humility in the face of the expanding unknown. We ought to add a disclaimer to everything we say: “I could be wrong! We all could be wrong!”

Thursday, February 27, 2020

Norman Rockwell gets political

Hudson River, Verrazano-Narrows Bridge in the distance

A guerrilla attack on (bogus) copyright claims over musical melodies: Create them all and release them into the public domain

Two programmer-musicians wrote every possible MIDI melody in existence to a hard drive, copyrighted the whole thing, and then released it all to the public in an attempt to stop musicians from getting sued.

Programmer, musician, and copyright attorney Damien Riehl, along with fellow musician/programmer Noah Rubin, sought to stop copyright lawsuits that they believe stifle the creative freedom of artists.

Often in copyright cases for song melodies, if the artist being sued for infringement could have possibly had access to the music they're accused of copying—even if it was something they listened to once—they can be accused of "subconsciously" infringing on the original content. One of the most notorious examples of this is Tom Petty's claim that Sam Smith's “Stay With Me” sounded too close to Petty's “I Won’t Back Down." Smith eventually had to give Petty co-writing credits on his own chart-topping song, which entitled Petty to royalties.

Defending a case like that in court can cost millions of dollars in legal fees, and the outcome is never assured. Riehl and Rubin hope that by releasing the melodies publicly, they'll prevent a lot of these cases from standing a chance in court.

In a recent talk about the project, Riehl explained that to get their melody database, they algorithmically determined every melody contained within a single octave.
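The article doesn't reproduce their code, but the brute-force idea behind the project is easy to sketch. Here's a minimal Python version; the eight-note melody length and the choice of MIDI notes 60-71 (one chromatic octave) are my illustrative assumptions, not the project's exact parameters, and writing the actual MIDI files is stubbed out:

    from itertools import product

    # Illustrative assumptions (not the project's exact parameters):
    # 8-note melodies drawn from the 12 chromatic pitches of one
    # octave, represented as MIDI note numbers 60-71 (C4 to B4).
    OCTAVE = range(60, 72)
    MELODY_LENGTH = 8

    def all_melodies(pitches=OCTAVE, length=MELODY_LENGTH):
        """Yield every possible melody as a tuple of MIDI note numbers."""
        yield from product(pitches, repeat=length)

    if __name__ == "__main__":
        total = len(OCTAVE) ** MELODY_LENGTH
        print(f"{total:,} melodies to enumerate")  # 12**8 = 429,981,696
        for melody in all_melodies():
            pass  # in the real project, each melody is written out as MIDI

Even at these toy parameters the space runs to hundreds of millions of melodies, which gives a feel for why Riehl and Rubin wrote the results straight to a hard drive and copyrighted the whole thing as a single collection.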

Monday, February 24, 2020

Wynton Marsalis on the difference between African rhythm and jazz rhythm

Ethan Iverson [EI] is interviewing Wynton Marsalis [WM] about his composition Congo Square, which combines the Lincoln Center Jazz Orchestra with Odadaa!, a West African drum ensemble led by Yacub Addy. At various points in the interview Iverson plays a short clip for Marsalis. He did so just before this passage:
EI: Now, what is that break?

WM: Carlos Henriquez showed me that one. If you hear it in 4, it’s easy, but if you hear it in 6, it’s hard. But in 4 it is square, right on the beat, but maybe we “place” them a little bit. We have to adjust to the 6 Odadaa! is playing, especially since they are in the middle of a phrase. As conductor, I adjust to the bell.

EI: That’s a mysterious moment; that’s why I like it so much.

WM: The hardest thing is to get us to play with the bell pattern.

EI: The up-and-down of the beat is not American.

WM: No, it’s not, it’s more like a clave. And like Yacub told me: “In order for us all to play together, y’all will have to play with us.” For me, it was a blessing to have Carlos and Ali [Jackson], who spent a lot of time at night working it out; a real labor of love. They would sit up with me and go through rhythm patterns and say, “No, that’s not it.” Then, eventually, “This is it.”
YES! to this: "The up-and-down of the beat is not American." I learned that from years of playing with the late Ade Knowles when I was living in Troy, New York. Early in his career Ade had toured as a drummer and percussionist with Gil Scott-Heron. I met him when he was an administrator at RPI (Rensselaer Polytechnic Institute) and I was on the faculty. "The Magic of the Bell" is a piece I wrote about a particularly magical rehearsal with Ade.

Abstract words in prose fiction [#DH]

Friday, February 21, 2020

Of telomeres, senescence, cancer, laboratory mice, evolutionary biology, and institutional failure: From the life of Bret Weinstein

This is a long podcast (over two hours), and it takes a while to get off the ground, but it's worth your attention.



About the podcast:
All of our Mice are Broken.

On this episode of The Portal, Bret and Eric sit down alone with each other for the first time in public. There was no plan.

There was however, a remarkable story of science at its both best and worst that had not been told in years. After an initial tussle, we dusted off the cobwebs and decided to reconstruct it raw and share it with you, our Portal audience, for the first time. I don't think it will be the last as we are now again looking for our old notes to tighten it up for the next telling. We hope you find it interesting, and that it inspires you younger and less established scientists to tell your stories using this new medium of long form podcasting. We hope the next place you hear this story will be in a biology department seminar room in perhaps Cambridge, Chicago, Princeton, the Bay Area or elsewhere. Until then, be well and have a listen to this initial and raw version.

Louis Armstrong on the cover of Time Magazine

Thursday, February 20, 2020

The egalitarian proclivities of Louis Armstrong

M.H. Miller, Louis Armstrong, The King of Queens, NYTimes, 20 Feb 2020:
Armstrong was born in New Orleans in 1901, dropped out of school as a child and was a successful touring musician in his early 20s. By 1929, he was living in Harlem, though as one of the most popular recording artists in the country, he traveled about 300 nights a year. In 1939, he met his fourth and final wife, Lucille Wilson, a dancer at Harlem’s Cotton Club. Lucille, who spent part of her childhood in Corona, decided it was time for her husband to settle down in a house, a real house, instead of living out of hotel rooms. (Even their wedding took place on the road, in St. Louis, at the home of the singer Velma Middleton.) One day, when Armstrong was away at a gig, she put a down payment of $8,000 (around $119,000 in today’s money) on 34-56 107th Street. She didn’t tell him she’d done this until eight months later, during which time she made the mortgage payments herself. [...]

From the outside, the two-bedroom, 3,000-square-foot house looks just like any other on the block, which was deliberate. Armstrong often referred to himself as “a salary man” and felt at ease alongside the telephone operators, schoolteachers and janitors of Corona, a neighborhood that, in a testament to how much of his life was spent in jazz clubs, he referred to affectionately as “that good ol’ country life.” One of the earliest integrated areas of New York, Corona was mostly home to middle-class African-Americans and Italian immigrants when the Armstrongs moved in. The demographics would change in the coming decades — Latin Americans began replacing the Italians in the ’60s, and now make up most of the neighborhood — but not much else. There was never a mass wave of gentrification or development here, and Armstrong himself was so concerned with blending in with his working-class neighbors that when his wife decided to give the house a brick facade, Armstrong went door-to-door down the block asking the other residents if they wanted him to pay for their houses to receive the same upgrade. (A few of his neighbors took him up on the offer, which accounts for the scattered presence of brick homes on the street to this day.) [...]

He played behind the Iron Curtain during the Cold War and in the Democratic Republic of Congo during decolonization in 1960, during which both sides of a civil war called a truce to watch him perform, then picked up fighting again once his plane took off. There are few American figures as legendary and beloved, and yet, as Harris told me, a common reaction people have upon entering his home is, “This reminds me of my grandmother’s house.” Certainly the living room recalls a ’60s vision of Modernism with a vaguely minimalist formality.

Wednesday, February 19, 2020

Illustrated Japanese books from 1600-1912

Surveillance tech is deeply flawed [[Surprise! Surprise!]]

Charlie Warzel, All This Dystopia, and for What?, NYTimes, 20 Feb 2020.
The above examples all represent a different, equally troubling brand of dystopia — one full of false positives, confusion and waste. In these examples the technology is no less invasive. Your face is still scanned in public, your online information is still leveraged against you to manipulate your behavior and your financial data is collected to compile a score that may determine if you can own a home or a car. Your privacy is still invaded, only now you’re left to wonder if the insights were accurate.

As lawmakers ponder facial recognition bans and comprehensive privacy laws, they’d do well to consider this fundamental question: Setting aside even the ethical concerns, are the technologies that are slowly eroding our ability to live a private life actually delivering on their promises? Companies like NEC and others argue that outright bans on technology like facial recognition “stifle innovation.” Though I’m personally not convinced, there may be kernels of truth to that. But before giving these companies the benefit of the doubt, we should look deeper at the so-called innovation to see what we’re really gaining as a result of our larger privacy sacrifice.

Friday, February 14, 2020

Romantic kissing is not universal

From pop culture to evolutionary psychology, we have come to take kissing for granted as universally desirable among humans and inseparable from other aspects of affection and intimacy. However, a recent article in American Anthropologist by Jankowiak, Volsche and Garcia questions the notion that romantic kissing is a human universal by conducting a broad cross-cultural survey to document the existence or non-existence of the romantic-sexual kiss around the world.

The authors based their research on a set of 168 cultures compiled from eHRAF World Cultures (128 cultures) as well as the Standard Cross Cultural Sample (27 cultures) and by surveying 88 ethnographers (13 cultures). The report’s findings are intriguing: rather than an overwhelming popularity of romantic smooching, the global ethnographic evidence suggests that it is common in only 46% (77) of the cultures sampled. The remaining 54% (91) of cultures had no evidence of romantic kissing. In short, this new research concludes that romantic-sexual kissing is not as universal as we might presume.

The report also reveals that romantic kissing is most common in the Middle East and Asia, and least common of all among Central American cultures. Similarly, the authors state that “no ethnographer working with Sub-Saharan African, New Guinea, or Amazonian foragers or horticulturalists reported having witnessed any occasion in which their study populations engaged in a romantic–sexual kiss”, whereas it is nearly ubiquitous in northern Asia and North America.

In addition, cross-cultural ethnographic data was used to analyze the relationship between any presence of romantic kissing and a culture’s complexity of social stratification. The report finds that complex societies with distinct social classes (e.g. industrialized societies) have a much more frequent occurrence of this type of kissing than egalitarian societies (e.g. foragers).
More at the link (H/t Tyler Cowen).

Thursday, February 13, 2020

Conjunctions: transportation and graffiti [Jersey City]


Bernie Sanders isn't a socialist

The thing is, Bernie Sanders isn’t actually a socialist in any normal sense of the term. He doesn’t want to nationalize our major industries and replace markets with central planning; he has expressed admiration, not for Venezuela, but for Denmark. He’s basically what Europeans would call a social democrat — and social democracies like Denmark are, in fact, quite nice places to live, with societies that are, if anything, freer than our own.

So why does Sanders call himself a socialist? I’d say that it’s mainly about personal branding, with a dash of glee at shocking the bourgeoisie. And this self-indulgence did no harm as long as he was just a senator from a very liberal state.

But if Sanders becomes the Democratic presidential nominee, his misleading self-description will be a gift to the Trump campaign. So will his policy proposals. Single-payer health care is (a) a good idea in principle and (b) very unlikely to happen in practice, but by making Medicare for All the centerpiece of his campaign, Sanders would take the focus off the Trump administration’s determination to take away the social safety net we already have.

Has "civilization" entered a phase of decadence?

That's what Ross Douthat argues in his new book, The Decadent Society: How We Became the Victims of Our Own Success, which Damon Linker reviews in The Week, February 13, 2020. Decadence?
By calling us "decadent," Douthat doesn't mean that we're succumbing to imminent decline and collapse. Following esteemed cultural critic Jacques Barzun, Douthat instead defines decadence as a time when art and life seem exhausted, when institutions creak, the sensations of "repetition and frustration" are endemic, "boredom and fatigue are great historical forces," and "people accept futility and the absurd as normal."

Douthat goes on to refine the definition:
Decadence refers to economic stagnation, institutional decay, and cultural and intellectual exhaustion at a high level of material prosperity and technological development. It describes a situation in which repetition is more the norm than innovation; in which sclerosis afflicts public institutions and private enterprises alike; in which intellectual life seems to go in circles; in which new developments in science, new exploratory projects, underdeliver compared with what people recently expected. And crucially, the stagnation and decay are often a direct consequence of previous development: the decadent society is, by definition, a victim of its own significant success.
Douthat certainly isn't a favorite of mine, and I've got problems with the word "decadent", but that description is consistent with my own view, based on the theory of cultural ranks that David Hays and I developed, that we're exhausting the cultural resources we've inherited but have not yet managed to invent new modes of thinking, feeling, living, and exploring.

Near the end Linker observes:
Interestingly, one way to describe the populist insurgencies taking place around us is to say that they're a rebellion against the decadence of the post-Cold War world — the sense that history came to an end in 1989, with all significant ideological disputes resolved and politics reduced to the fine-tuning of liberal democratic government. Francis Fukuyama's own high-level punditry on the subject was actually far more ambivalent than it's usually credited with being. Although Fukuyama argued that liberal democracy triumphed over communism because it was more capable of fulfilling humanity's material and spiritual needs than any other political and economic system, he also worried with uncanny prescience that a world in which liberal democracy was the only available option could be marked by boredom, repetition, and sterility — and that the intolerable character of such decadence could inspire anti-liberal movements that aimed to restart history once again.

Douthat's book can be read as a melancholy sequel to Fukuyama's The End of History and the Last Man that confirms the author's darkest predictions but without endorsing (or seriously wrestling with) any of the concrete efforts going on around us to overcome our own malaise by breaking away from decadent liberalism — whether it's Donald Trump's MAGA presidency, the Catholic conservatism of Poland's Law and Justice Party, Marion Maréchal's National Rally in France, the National Conservatism spearheaded by Yoram Hazony, or Viktor Orban's anti-liberal and pro-natalist populism in Hungary. Given that Douthat is a conservative who longs for renewal, rebirth, and revitalization — for an end to the decadence he thinks plagues us — it's surprising that he has so little to say about these efforts in the book. [...]

Douthat sees a lot, and far more than most of our less profoundly discontented commentators. That makes him an excellent pundit — maybe the best of our moment. But in his new book he also avoids a forthright confrontation with the political correlates of his own moral, aesthetic, intellectual, and spiritual dissatisfactions. In its place we find idle speculations about alternative realities. Which may mean that, for all its strengths, Douthat's book about decadence is more than a little decadent itself.
That is to say that Douthat is himself trapped in the same exhausted cultural forms. 

Who among us isn't?

* * * * *

See Douthat's recent NYTimes op-ed (Feb. 7, 2020): The Age of Decadence. Here's how it opens:
Everyone knows that we live in a time of constant acceleration, of vertiginous change, of transformation or looming disaster everywhere you look. Partisans are girding for civil war, robots are coming for our jobs, and the news feels like a multicar pileup every time you fire up Twitter. Our pessimists see crises everywhere; our optimists insist that we’re just anxious because the world is changing faster than our primitive ape-brains can process.

But what if the feeling of acceleration is an illusion, conjured by our expectations of perpetual progress and exaggerated by the distorting filter of the internet? What if we — or at least we in the developed world, in America and Europe and the Pacific Rim — really inhabit an era in which repetition is more the norm than invention; in which stalemate rather than revolution stamps our politics; in which sclerosis afflicts public institutions and private life alike; in which new developments in science, new exploratory projects, consistently underdeliver? What if the meltdown at the Iowa caucuses, an antique system undone by pseudo-innovation and incompetence, was much more emblematic of our age than any great catastrophe or breakthrough?

The truth of the first decades of the 21st century, a truth that helped give us the Trump presidency but will still be an important truth when he is gone, is that we probably aren’t entering a 1930-style crisis for Western liberalism or hurtling forward toward transhumanism or extinction. Instead, we are aging, comfortable and stuck, cut off from the past and no longer optimistic about the future, spurning both memory and ambition while we await some saving innovation or revelation, growing old unhappily together in the light of tiny screens.

The farther you get from that iPhone glow, the clearer it becomes: Our civilization has entered into decadence.

Wednesday, February 12, 2020

Pandemics and cooperation between nation-states

Thomas Bollyky and Samantha Kiernan, No Nation Can Fight Coronavirus on Its Own, Lawfare, February 12, 2020: "Infectious diseases were the first global problem that nation-states realized they could not solve without international cooperation." This came about in the mid-19th century:
For most of human history, plagues, parasites and pests were a domestic affair. Quarantine was the principal means by which nations contained the microbes that were brought by invading armies and the passengers, both human and vermin, on trading ships and caravans.

Those isolation measures proved ineffective, however, against the six pandemics of cholera that swept the United States, the Middle East, Russia and Europe in the 19th century. A terrifying disease that struck seemingly healthy people, cholera killed tens of thousands in the cities of Europe and the United States—and, very likely, many more in India, where the pandemics originated. The economic costs of uncoordinated quarantines hurt nations and merchants alike.

In 1851, European states gathered for the first International Sanitary Conference to discuss cooperation on cholera, plague and yellow fever. That convention, and those that followed, led to the first treaties on international infectious disease control and—in 1902—the International Sanitary Bureau, which later became the Pan American Health Organization. These international initiatives were the early models for later agreements and agencies on other transnational concerns, such as pollution, the opium trade and unsafe labor practices.

Microbes have continued to inspire episodes of cooperation among even bitter rivals. The WHO, the United Nations’ first specialized agency, was created in 1946 in response to the horrors of World War II. Its early days were devoted to international campaigns against the great scourges of that era, such as malaria, smallpox and tuberculosis. At the height of the Cold War, the smallpox immunization campaign motivated the United States and the Soviet Union to join forces in an effort that succeeded in eradicating the disease in 1980. In El Salvador, an international vaccination campaign against pediatric infections led to a pause in the country’s 14-year civil war for the sole purpose of immunizing children.
And the current coronavirus epidemic?
There is much we do not know yet about how easily the virus spreads or its severity. But there is reason to think that the scale of this coronavirus outbreak and the likelihood of epidemics of the virus occurring outside China may inspire more cooperation than even the five previous occasions that the WHO designated as international public health emergencies: the H1N1 influenza pandemic (2009), the re-emergence of polio in several nations (2014), the Ebola outbreak in West Africa (2014), the Zika virus outbreak (2016) and the Ebola virus outbreak in the Democratic Republic of Congo (2019).

In a little over one month, the coronavirus has more than five times the number of laboratory-confirmed cases (43,114 as of Feb. 11) than the outbreak of SARS did in four months (8,096). The novel coronavirus has already spread to at least 26 countries, far more than the current outbreak of the Ebola virus in the Democratic Republic of Congo, its predecessor in West Africa in 2013-2015, or during the resurgence of polio in Afghanistan, Nigeria and Pakistan in 2014. The mortality rate for known cases of the novel coronavirus has been about 2-3 percent, deadlier than the Zika virus or the 2009 H1N1 swine flu. [...]

Perhaps a pandemic of novel coronavirus, if it occurs, would be a sufficiently frightening antagonist to force international cooperation, even at a moment that otherwise has proved inhospitable to global governance. If so, this novel coronavirus will do what climate change, tariff threats and the prospect of nuclear proliferation on the Korean peninsula could not: force nations to work together.

Bird in red and black (flooded by light)


Rodney Brooks on AI and robotics

As you may know, Rodney Brooks is a pioneering robotics researcher and entrepreneur (his company markets the Roomba) who once headed the AI lab at MIT. He has a blog where he's been commenting on AI. One post, Future of Robotics and Artificial Intelligence, links to eight posts on the future of AI and robotics that he published between August 2017 and July 2018. Another, from July 2018, Steps Toward Super Intelligence I, How We Got Here, gives a capsule overview of the history of AI. He lists four main approaches, with approximate start dates:
1. Symbolic (1956)
2. Neural networks (1954, 1960, 1969, 1986, 2006, …)
3. Traditional robotics (1968)
4. Behavior-based robotics (1985)
Neural networks, as you can see, have a spotty history. The basic idea is relatively old (as work in AI goes). 1986 marks the advent of back-propagation along with multilayered networks, while the 2006 date marks some new techniques ("deep learning"), much more computing power, and huge sets of training data. I found this discussion particularly useful. He shows us the following photo:


A Google program was able to generate this caption for it: “A group of young people playing a game of Frisbee”. Brooks goes on to note:
I think this is when people really started to take notice of Deep Learning. It seemed miraculous, even to AI researchers, and perhaps especially to researchers in symbolic AI, that a program could do this well. But I also think that people confused performance with competence (referring again to my seven deadly sins post). If a person had this level of performance, and could say this about that photo, then one would naturally expect that the person had enough competence in understanding the world, that they could probably answer each of the following questions:
  • what is the shape of a Frisbee?
  • roughly how far can a person throw a Frisbee?
  • can a person eat a Frisbee?
  • roughly how many people play Frisbee at once?
  • can a 3 month old person play Frisbee?
  • is today’s weather suitable for playing Frisbee?
But the Deep Learning neural network that produced the caption above can not answer these questions. It certainly has no idea what a question is, and can only output words, not take them in, but it doesn’t even have any of the knowledge that would be needed to answer these questions buried anywhere inside what it has learned.
Brooks' own work has been in the fourth approach, behavior-based robotics, where he is a pioneer. He remarks:
...I started to reflect on how well insects were able to navigate in the real world, and how they were doing so with very few neurons (certainly less than the number of artificial neurons in modern Deep Learning networks). In thinking about how this could be I realized that the evolutionary path that had led to simple creatures probably had not started out by building a symbolic or three dimensional modeling system for the world. Rather it must have begun by very simple connections between perceptions and actions.

In the behavior-based approach that this thinking has led to, there are many parallel behaviors running all at once, trying to make sense of little slices of perception, and using them to drive simple actions in the world. Often behaviors propose conflicting commands for the robot’s actuators and there has to be some sort of conflict resolution. But not wanting to get stuck going back to the need for a full model of the world, the conflict resolution mechanism is necessarily heuristic in nature. Just as one might guess, the sort of thing that evolution would produce.

Behavior-based systems work because the demands of physics on a body embedded in the world force the ultimate conflict resolution between behaviors, and the interactions. Furthermore by being embedded in a physical world, as a system moves about it detects new physical constraints, or constraints from other agents in the world.
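That passage describes an architecture more than an algorithm, but the arbitration idea is easy to sketch. Below is a toy Python illustration of fixed-priority conflict resolution among parallel behaviors, in the spirit of the behavior-based approach; the behaviors, sensor fields, and command strings are invented for the example, not taken from any of Brooks's systems:

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class Sensors:
        obstacle_ahead: bool
        battery_low: bool

    # Each behavior watches its own little slice of perception and
    # either proposes a motor command or stays silent (returns None).
    def avoid(s: Sensors) -> Optional[str]:
        return "turn_left" if s.obstacle_ahead else None

    def recharge(s: Sensors) -> Optional[str]:
        return "seek_dock" if s.battery_low else None

    def wander(s: Sensors) -> Optional[str]:
        return "go_forward"  # always has an opinion

    # The heuristic conflict resolver: a fixed priority ordering.
    # The first behavior with an opinion gets the actuators this tick.
    BEHAVIORS = [avoid, recharge, wander]

    def arbitrate(s: Sensors) -> str:
        for behavior in BEHAVIORS:
            command = behavior(s)
            if command is not None:
                return command
        return "stop"

    print(arbitrate(Sensors(obstacle_ahead=True, battery_low=True)))  # turn_left

Note what's missing: no behavior consults a model of the world. Each one maps its slice of perception directly to an action, and a cheap heuristic settles the conflicts.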
Finally, Brooks has created a predictions scorecard in three areas: self-driving cars, AI and machine learning, and the space industry. He first posted it on January 1, 2018, and has updated it on January 1 of 2019 and again on January 1, 2020. The list contains (I would guess) over 50 specific items distributed over those categories, with specific dates attached. It makes for very interesting reading.

Spain is now the world's healthiest country


Music versus algorithms

Alexis Petridis reviews Ted Gioia's current book, Music: A Subversive History:
In terms of scope, well, put it this way: it starts out talking about a bear’s thighbone that Neanderthal hunters apparently turned into a primitive flute somewhere between 43,000 and 82,000 years ago and ends up, 450 pages later, discussing K-pop and EDM. His central theory: music is a kind of magical, ungovernable force that connects us to ancient shamanistic rituals, it’s primarily fuelled by sex and violence – anyone horrified by the lyrics of drill or death metal should consider that the first instruments were made from body parts and would once have literally dripped with blood – and all attempts to reduce it to mathematical formulae or “quasi-science”, while useful, go against its intrinsic nature. He’s really not keen on Pythagoras, whose mathematical theories about tuning underpin “music as it is taught in every university and conservatory in the world today”.

I didn’t agree with everything Gioia had to say, but something about that central theory stuck with me. For one thing, there is something magical and ungovernable about music: that weird tingling sensation you get when you hear something you love - a friend of mine calls it the Holy Shiver - is involuntary. It just happens. And we live in an era when music has never been more governed by mathematics. Algorithms are supposed to be able to predict everything, from what you want to hear next to whether or not a song’s going to be a hit: the digital strategist who developed the software behind the AI record label that’s just launched was also “involved in the development and marketing of stars such as Avicii, Logic, Mike Posner and Swedish House Mafia”.
For a series of anecdotes illustrating music's power, see my working paper, Emotion & Magic in Musical Performance.

Tuesday, February 11, 2020

This is your brain on art

Monday, February 10, 2020

Fallen angel

When I saw this at night I wondered what it was, perhaps some strange art project?


When I came back the next day I realized that it was simply a decorative angel that had fallen down.

Howard Rheingold on democracy and online media

Howard Rheingold, Democracy is losing the online arms race, February 4, 2020. Opening paragraphs:
Democracy is threatened by an arms race that the forces of deception are winning. While microtargeted computational propaganda, organized troll brigades, coordinated networks of bots, malware, scams, epidemic misinformation, miscreant communities such as 4chan and 8chan, and professionally crafted rivers of disinformation continue to evolve, infest, and pollute the public sphere, the potential educational antidotes – widespread training in critical thinking, media literacies, and crap detection – are moving at a leisurely pace, if at all.

When I started writing about the potential for computer-mediated communication, decades before online communication became widely known as “social media,” my inquiries about where the largely benign online culture of the 1980s might go terribly wrong led me to the concept of the “public sphere,” most notably explicated by the German political philosopher Jurgen Habermas. “What is the most important critical uncertainty about mass adoption of computer mediated communication?” was the question I asked myself, and I decided that the most serious outcome of this emerging medium would have to do with whether citizens gain or lose liberty with the rising adoption of digital media and networks. It didn’t take a lot of seeking to find Habermas’ work when I started pursuing this question.

Although Habermas’ prose is dense, the notion is simple: Democracies are not just about voting for leaders and policy-makers; democratic societies can only take root in populations that are educated enough and free enough to communicate about issues of concern and to form public opinion that influences policy.
Five skillsets for online life:
When I set out to write Net Smart: How to Thrive Online, I decided that five essential skillsets/bodies of lore/skills were necessary to thrive online – and by way of individual thriving, to enhance the value of the commons: literacies of attention, crap detection, participation, collaboration, and network awareness:

· Attention because it is the foundation of thought and communication, and even a decade ago it was clear that computer and smartphone screens were capturing more and more of our attention.

· Crap detection because we live in an age where it is possible to ask any question, any time, anywhere, and get a million answers in a couple seconds – but where it is now up to the consumer of information to determine whether the information is authentic or phony.

· Participation because the birth and the health of the Web did not come about because and should not depend upon the decisions of five digital monopolies, but was built by millions of people who put their cultural creations and their inventions online, nurtured their own communities, invented search engines in their dorm rooms and the Web itself in a physics lab.

· Collaboration because of the immense power of social production, virtual communities, collective intelligence, smart mobs afforded by access to tools and knowledge of how to use them.

· Network awareness because we live in an age of social, political, and technological networks that affect our lives, whether we understand them or not.

In an ideal world, the social and political malignancies of today’s online culture could be radically reduced, although not eliminated, if a significant enough portion of the online population was fluent or at least basically conversant in these literacies – in particular, while it seems impossible to stem the rising tide of crap at its sources, its impact could be significantly reduced if most of the online population was educated in crap detection.
On attention:
I confronted issues of attention in the classroom during my decade of teaching at UC Berkeley and Stanford – as does any instructor who faces a classroom of students who are looking at their laptops and phones in class. Because I was teaching social media issues and social media literacies, it seemed to me to be escaping the issue by simply banning screentime in class – so we made our attention one of our regular activities. I asked my co-teaching teams (I asked teams of three learners to take responsibility for driving conversation during one-third of our class time) to make up “attention probes” that tested our beliefs and behavior. When I researched attentional discipline for Net Smart, I found an abundance of evidence from millennia-old contemplative traditions to contemporary neuroscience for the plasticity of attention. Simply paying attention to one’s attention – the methodology at the root of mindfulness meditation – can be an important first step to control. It doesn’t seem that attention engineers, despite their wild success, have the overwhelming advantage in the arms race with attention education that surveillance capitalists and computational propagandists deploy with their big data, bots, and troll armies.
However:
The lopsided arms race is what leads me to conclude that education in crap detection, attention control, media literacy, and critical thinking are important, but are not sufficient. Regulation of the companies who wield these new and potentially destructive powers will also be necessary.
There's more at the link.

Saturday, February 8, 2020

Wild Child

The hill was covered with strange grassy mounds about the size of molehills. The adults had no idea what they were — which was very exciting to me, realizing that there were things in the world that not even the adults understood. So I filled in the blanks for myself and decided they must be burial mounds for fairies. This was the magical landscape that inspired my book “The Wizards of Once.”

For the wildwood in that book, I took particular inspiration from the ancient wood of Kingley Vale in Sussex. Its trees have gnarled, expressive faces, and roots that embed into the earth with an almost visceral power. The more you learn about trees, the more magical you realize they are. Did you know, for example, that trees can communicate with each other through their roots, even when they are many miles apart?

Trees grow throughout children’s books. From “Peter Pan” to “A Monster Calls,” “The Lord of the Rings” to “Harry Potter,” trees are refuges, prisons and symbols of nature’s potency. They can be a friendly home, like the Hundred Acre Wood in “Winnie-the-Pooh,” or give a sense of menace, like the snowy forest in “The Lion, the Witch and the Wardrobe.” They can also be symbolic, like the cement-filled dying tree in “To Kill a Mockingbird.” The writers I loved when I was a child were similarly inspired by magical landscapes and nature: Ursula K. Le Guin, J.R.R. Tolkien, L. Frank Baum, Diana Wynne Jones, Lloyd Alexander, Robert Louis Stevenson, T.H. White — and so many others.

Today, children have much less unsupervised access to the countryside. I worry that they may never know the magic of the wilderness, the power of trees and the thrilling excitement of exploring nature without an adult hovering behind them. And so I write books for children who will never know what the freedom of my childhood was like.

Friday, February 7, 2020

Squash

“Undecidability, Uncomputability and the Unity of Physics. Part 1.”

That's the title of a post by Tim Palmer at Backreaction. Here's the opening:
Our three great theories of 20th Century physics – general relativity theory, quantum theory and chaos theory – seem incompatible with each other.

The difficulty combining general relativity and quantum theory to a common theory of “quantum gravity” is legendary; some of our greatest minds have despaired – and still despair – over it.

Superficially, the links between quantum theory and chaos appear to be a little stronger, since both are characterised by unpredictability (in measurement and prediction outcomes respectively). However, the Schrödinger equation is linear and the dynamical equations of chaos are nonlinear. Moreover, in the common interpretation of Bell’s inequality, a chaotic model of quantum physics, since it is deterministic, would be incompatible with Einstein’s notion of relativistic causality.

Finally, although the dynamics of general relativity and chaos theory are both nonlinear and deterministic, it is difficult to even make sense of chaos in the space-time of general relativity. This is because the usual definition of chaos is based on the notion that nearby initial states can diverge exponentially in time. However, speaking of an exponential divergence in time depends on a choice of time-coordinate. If we logarithmically rescale the time coordinate, the defining feature of chaos disappears. Trouble is, in general relativity, the underlying physics must not depend on the space-time coordinates.

So, do we simply have to accept that, “What God hath put asunder, let no man join together”? I don’t think so. A few weeks ago, the Foundational Questions Institute put out a call for essays on the topic of “Undecidability, Uncomputability and Unpredictability”. I have submitted an essay in which I argue that undecidability and uncomputability may provide a new framework for unifying these theories of 20th Century physics. I want to summarize my argument in this and a follow-on guest post.
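Palmer's remark about rescaling the time coordinate is easy to check against the textbook definition of chaos. Here's a quick gloss in LaTeX; the notation is mine, not Palmer's:

    % Chaos is usually defined by exponential divergence of nearby
    % trajectories, with Lyapunov exponent \lambda > 0:
    \delta(t) \approx \delta_0 \, e^{\lambda t}

    % Now rescale the time coordinate so that the old time is the
    % logarithm of the new one, t = \ln \tau:
    \delta(\tau) \approx \delta_0 \, e^{\lambda \ln \tau} = \delta_0 \, \tau^{\lambda}

    % The separation now grows only as a power law, and the
    % exponential rate measured against \tau vanishes:
    \lim_{\tau \to \infty} \frac{1}{\tau} \ln \frac{\delta(\tau)}{\delta_0}
      = \lim_{\tau \to \infty} \frac{\lambda \ln \tau}{\tau} = 0

So a quantity that defines chaos in one time coordinate vanishes in another, which is exactly the obstacle Palmer describes for general relativity, where no time coordinate is privileged.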
What interests me is simply the conjunction mentioned in the opening line: general relativity theory, quantum theory and chaos theory. That conjunction featured in a post here in January, The third 20th-century revolution in physics [non-linear dynamics]. Here are two (of three) observations I made at the end of that post:
  • It seems to me that quantum mechanics and relativity are focused on explanatory principles whereas non-linear dynamics tends more toward description, description of a wide variety of phenomena. Moreover quantum mechanics and relativity are most strongly operative in different domains, the microscopic and macroscopic respectively.
  • Computation: in many cases there are various computational paths from the initial state to the completion of the computation. As a simple example, when adding a group of numbers, the order of the numbers doesn't matter; the sum will be the same in each case. In the case of non-linear systems, successive states in the computation 'mirror' successive states in the system being modeled, so the temporal evolution of the computation is intrinsic to the model rather than extrinsic (see the sketch below).
Does that second observation imply that (something like) computation is inherent in the physical nature of the universe and is not merely an intellectual operation carried out by various artificial means?
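Here's a tiny Python sketch of the distinction that second observation draws, contrasting an order-independent computation (summing numbers) with the inherently sequential iteration of a nonlinear system, using the logistic map as a stand-in for any such model:

    import random

    xs = [random.random() for _ in range(10)]

    # Order-independent: many computational paths lead to the same sum;
    # reversing (or shuffling) the inputs changes nothing that matters.
    assert abs(sum(xs) - sum(reversed(xs))) < 1e-9

    # Order-dependent: iterating the logistic map x' = r * x * (1 - x).
    # Each state is computed from the previous one, so the succession of
    # states in the computation mirrors the succession of states in the
    # modeled system; for generic r there is no closed-form shortcut.
    def logistic_trajectory(x0=0.2, r=3.9, steps=10):
        x, trajectory = x0, [x0]
        for _ in range(steps):
            x = r * x * (1 - x)
            trajectory.append(x)
        return trajectory

    print(logistic_trajectory())

In the first case the temporal order of the computation is extrinsic, a mere bookkeeping choice; in the second it is intrinsic, mirroring the time evolution of the system itself.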

Life in the ocean depths, minerals too

Wil S. Hylton, History's Largest Mining Operation Is About to Begin, The Atlantic, January/February 2020. As the title indicates, the article is primarily about mining the ocean depths, but this passage struck me:
Until recently, marine biologists paid little attention to the deep sea. They believed its craggy knolls and bluffs were essentially barren. The traditional model of life on Earth relies on photosynthesis: plants on land and in shallow water harness sunlight to grow biomass, which is devoured by creatures small and large, up the food chain to Sunday dinner. By this account, every animal on the planet would depend on plants to capture solar energy. Since plants disappear a few hundred feet below sea level, and everything goes dark a little farther down, there was no reason to expect a thriving ecosystem in the deep. Maybe a light snow of organic debris would trickle from the surface, but it would be enough to sustain only a few wayward aquatic drifters.

That theory capsized in 1977, when a pair of oceanographers began poking around the Pacific in a submersible vehicle. While exploring a range of underwater mountains near the Galápagos Islands, they spotted a hydrothermal vent about 8,000 feet deep. No one had ever seen an underwater hot spring before, though geologists suspected they might exist. As the oceanographers drew close to the vent, they made an even more startling discovery: A large congregation of animals was camped around the vent opening. These were not the feeble scavengers that one expected so far down. They were giant clams, purple octopuses, white crabs, and 10-foot tube worms, whose food chain began not with plants but with organic chemicals floating in the warm vent water.

For biologists, this was more than curious. It shook the foundation of their field. If a complex ecosystem could emerge in a landscape devoid of plants, evolution must be more than a heliological affair. Life could appear in perfect darkness, in blistering heat and a broth of noxious compounds—an environment that would extinguish every known creature on Earth. “That was the discovery event,” an evolutionary biologist named Timothy Shank told me. “It changed our view about the boundaries of life. Now we know that the methane lakes on one of Jupiter’s moons are probably laden with species, and there is no doubt life on other planetary bodies.”
As for mining:
Deepwater plains are also home to the polymetallic nodules that explorers first discovered a century and a half ago. Mineral companies believe that nodules will be easier to mine than other seabed deposits. To remove the metal from a hydrothermal vent or an underwater mountain, they will have to shatter rock in a manner similar to land-based extraction. Nodules are isolated chunks of rocks on the seabed that typically range from the size of a golf ball to that of a grapefruit, so they can be lifted from the sediment with relative ease. Nodules also contain a distinct combination of minerals. While vents and ridges are flecked with precious metal, such as silver and gold, the primary metals in nodules are copper, manganese, nickel, and cobalt—crucial materials in modern batteries. As iPhones and laptops and electric vehicles spike demand for those metals, many people believe that nodules are the best way to migrate from fossil fuels to battery power.

The ISA has issued more mining licenses for nodules than for any other seabed deposit. Most of these licenses authorize contractors to exploit a single deepwater plain. Known as the Clarion-Clipperton Zone, or CCZ, it extends across 1.7 million square miles between Hawaii and Mexico—wider than the continental United States. When the Mining Code is approved, more than a dozen companies will accelerate their explorations in the CCZ to industrial-scale extraction. Their ships and robots will use vacuum hoses to suck nodules and sediment from the seafloor, extracting the metal and dumping the rest into the water. How many ecosystems will be covered by that sediment is impossible to predict. Ocean currents fluctuate regularly in speed and direction, so identical plumes of slurry will travel different distances, in different directions, on different days. The impact of a sediment plume also depends on how it is released. Slurry that is dumped near the surface will drift farther than slurry pumped back to the bottom. The circulating draft of the Mining Code does not specify a depth of discharge. The ISA has adopted an estimate that sediment dumped near the surface will travel no more than 62 miles from the point of release, but many experts believe the slurry could travel farther. A recent survey of academic research compiled by Greenpeace concluded that mining waste “could travel hundreds or even thousands of kilometers.”