Sunday, June 30, 2019

Grassiness


DNA for reliable long-term data storage


From the article:
DNA strands are tiny and tricky to manage, but the biological molecules can store other data than the genes that govern how a cell becomes a pea plant or chimpanzee. Catalog uses prefabricated synthetic DNA strands that are shorter than human DNA, but uses a lot more of them so it can store much more data.

Relying on DNA instead of the latest high-tech miniaturization might sound like a step backward. But DNA is compact, chemically stable -- and given that it's the foundation of the Earth's biology, it's arguably not as likely to become as obsolete as the spinning magnetized platters of hard drives or CDs that are disappearing today the way floppy drives already vanished. [...]

Catalog uses an addressing system that means customers can use large data sets. And even though DNA stores data in long sequences, Catalog can read information stored anywhere using molecular probes. In other words, it's a form of random-access memory like a hard drive, not sequential access like the spools of magnetic tape you might remember from the heyday of mainframe computers a half century ago.
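The basic idea of storing digital data in DNA is straightforward even if Catalog's actual scheme (prefabricated strands plus an addressing system) is proprietary and far more sophisticated. A minimal sketch, purely illustrative: map every two bits to one of the four nucleotides.

```python
# Illustrative sketch only: a naive 2-bits-per-nucleotide encoding.
# This is NOT Catalog's scheme; real DNA storage adds addressing,
# redundancy, and error correction.

TO_BASE = {"00": "A", "01": "C", "10": "G", "11": "T"}
TO_BITS = {base: bits for bits, base in TO_BASE.items()}

def encode(data: bytes) -> str:
    """Map each byte to four nucleotides (2 bits per base)."""
    bits = "".join(f"{b:08b}" for b in data)
    return "".join(TO_BASE[bits[i:i+2]] for i in range(0, len(bits), 2))

def decode(strand: str) -> bytes:
    """Invert the mapping, four nucleotides back to one byte."""
    bits = "".join(TO_BITS[base] for base in strand)
    return bytes(int(bits[i:i+8], 2) for i in range(0, len(bits), 8))
```

At 2 bits per base this is the theoretical ceiling; practical systems accept lower density in exchange for strands that are chemically easier to synthesize and read.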

More on Fully Automated Luxury Communism (FALC)

Annie Lowry reviews Aaron Bastani's book in The Atlantic, 20 June 2019.
The most ardent advocate for FALC, Aaron Bastani, a London-based media executive and writer, has written a new book on the topic. ... Bastani believes that we are already living through a potentially epochal transformation of the economy, as epochal as the establishment of agriculture and the introduction of engines and electricity. Artificial intelligence, machine learning, and advanced computing might be about to eliminate the need for human labor in no small part, Bastani claims.

That could mean the continued ruination of the planet, as oligarchs throw thought conferences on yachts and the masses struggle to make rent. Or it could mean the healing of the planet and the thriving of all its inhabitants. What it might take is converting the world to solar and other renewable forms of energy, mining asteroids for raw materials, implementing Communist political systems, and guaranteeing everyone basic services. Enter utopia—a healthy world and an economy of abundance, free and accessible to all.

Bastani is certain about the viability of all of this, yet has a topsy-turvy understanding of recent history and the contemporary economy. He fails to give capitalism much credit for moving billions of lives out of poverty, for instance, and fails to recognize the preeminence of race and racism in explaining the success of President Donald Trump or Europe’s far right.
Not exactly a new idea. Lowry mentions Keynes (“Economic Possibilities for Our Grandchildren”) and Shulamith Firestone (“cybernetic communism”). "Yet the most complete picture of FALC or FALGSC might come not from radical leftists or academic economists, but from Star Trek." Yep.
Humans in rich societies could and arguably should work far less than they do, and might thrive far more if they did, FALC argues. There is no need for the world to look like Star Trek for that to become reality.

Were white musicians in on New Orleans jazz from the beginning?

Terry Teachout discusses Samuel Charters’s A Trumpet Around the Corner: The Story of New Orleans Jazz in "All That (White) Jazz", Commentary, Nov. 8, 2008. From the review:
As a result of Jim Crow laws passed in the late 19th century, black and white dance musicians active in New Orleans in and after 1900 were blocked from playing together or socializing. Consequently, two different jazz dialects, one black and one white, emerged out of the musical ferment that led to the music now known as jazz.

The majority of modern-day critics judge the music of such early white ensembles as the Original Dixieland Jazz Band to be derivative of and inferior to the playing of their black contemporaries. By contrast, Charters has come to regard these groups as fully the equals of their black counterparts. For him—as well as for many early black jazzmen, prominently including Louis Armstrong—the music performed by white New Orleans players was distinct in character but comparable in both quality and influence to that played by blacks.

Why did Charters not discuss these white musicians in Jazz: New Orleans? His answer is again admirably straightforward: “In the 1950’s many of us writing about jazz and blues reacted against the institutionalized racism we experienced in the South by placing special emphasis on the African American musical achievement.”

Score one more for the Xanadu meme

From the article:
What sets Xanadu apart is its approach. Many quantum computers require shed-sized chambers where specialized chips are cooled to just above the coldest possible temperature, minus 273.15 Celsius, so subatomic particles in the chips experience the quantum mechanical effects that are needed to unleash their computing power.

Xanadu, which raised $9-million in a 2018 financing also led by OMERS, instead uses a process called “squeezing light,” by firing lasers that enable photons to generate quantum effects on a thumbnail-sized chip. The method, based on Australia-born founder and CEO Christian Weedbrook’s PhD thesis at University of Queensland, happens at room temperature. That means Xanadu – which achieved light-based quantum effects on a chip last September, a feat that no one else has accomplished – believes it can develop its quantum computer much quicker and cheaper than others.
Ah, from "caves of ice" – those "shed-sized chambers" – to the "sunny dome" – lasers!

Here's my paper on the Xanadu meme: One Candle, a Thousand Points of Light: The Xanadu Meme.

Saturday, June 29, 2019

New York 2140, Back to the Future: A working paper


Title above. Links, abstract, contents, and introduction below.

Academia.edu:
https://www.academia.edu/39720382/New_York_2140_Back_to_the_Future
SSRN: https://ssrn.com/abstract=3412134
Research Gate: https://www.researchgate.net/publication/334112757_New_York_2140_Back_to_the_Future

DOI: 10.13140/RG.2.2.24693.01768

Abstract: An extended consideration of Kim Stanley Robinson’s New York 2140 (2017). The story is a heist plot engineered by residents of the Met Life Building in the year 2140, when the seas have risen 50 feet. Topics: Fiction and reality, patterns, transportation infrastructure, formal order encompassing narrative chaos, politics, from the micro-politics of the Met Life Tower, to New York City politics, and beyond that to the nation and the world financial system. Includes photographs of the current New York skyline in which you can see the Met Life Building as it currently exists.

Contents

Patterns: Truth and fiction, cause, agency, and free will – 2
A post-apocalyptic heist: Commentary on a passage from New York 2140 – 5
The New York skyline, two views – 10
New York 2140, Some notes about form and structure – 12
Is the 3D transportation geometry of New York 2140 feasible? – 16
Toward a narrative counterpoint for New York 2140 – 18
The Persistence of political forms and the Emperor’s New Clothes – 20
Through a 3D Glass Starkly, New York 2140 Redux – 22
Postscript: I think I’ve figured out how [to/I] think about space travel – 28
The Met Life Tower, a last view – 30

Patterns: Truth and fiction

When I saw the cover of New York 2140 I was intrigued: Lower Manhattan, but flooded! But then it’s 2140 and the seas have risen. I get it. So I got it. The world hadn’t ended, and those tall buildings in the background, what about them? Someone’s prospering, who?

What if anything could this work of fiction tell me about the future? I know, fiction, made up. But I’d read Robinson’s Mars trilogy, so I knew he made things up with care. What I really wanted to know was what’s going to happen, but I knew that that’s not knowable. Oh, astronomers can predict the future movements of the bodies in our solar system with great accuracy. But that’s a strictly mechanical system, a bit complex, perhaps, but only a bit. As these things go, still pretty simple.

The earth’s climate is a mechanical system, too, but much more complex. Predicting the weather a week ahead of time is iffy. Predicting the climate a century hence, very very iffy. But we have to try. Because it’s heading toward us at a ferocious pace. We can’t avoid it. We’re going to have to live with it. But how?

Truth be told, the climate’s not completely mechanical. For the earth is covered with living creatures, all of whom are acted on by and who act on the climate system. We, of course, are among them. We create climate models so we can plan for the future. And we tell stories, such as the one Robinson tells in New York 2140.

Why tell stories? Entertainment, for sure. Escape, maybe, some of the time. But for truth as well. How can there be truth if it’s made up? Well there are truths and there are truths, and that’s a much larger discussion than I want to entertain here. Let’s be content with the observation that even stories that are made up have patterns within them. And those patterns may well prove true provided, of course, we are able properly to abstract them from the manifold details the author has assembled for us.

So I read the book, looking for patterns. I wrote some of them up in my blog, New Savanna, and at 3 Quarks Daily. That’s what’s in this document, most of those notes.

An online group discussion

I also participated in an online group reading of New York 2140 during July and August of 2018. It was led by Bryan Alexander who, by his own account, is a “futurist, researcher, writer, speaker, consultant, and teacher, working in the field of how technology transforms education” and is currently a senior scholar at Georgetown University.

Here’s the initial post announcing the reading, giving some background, and laying out the reading schedule:


All of the posts are collected at this link:


What’s in the rest of this document

A post-apocalyptic heist: Commentary on a passage from New York 2140: A bit of explication de texte, where I take a single passage from the book (two thirds of the way through) and use it as a lens through which to view the book as a whole. One of the characters, Mutt, observes that they’re plotting “a fucking heist movie!” There’s also talk of the Met Life tower as “a kind of actor network that can do things”, where “actor network” is a term from Bruno Latour.

New York skyline, two views: Two different photographs of the New York skyline, from different points of view. The Met Life Tower is visible in each, though just barely visible in the second photo. I point out other landmark buildings.

New York 2140, Some notes about form and structure: The book is divided into eight parts, each of which is divided into eight (six cases) or nine (two cases) parts. The main parts have thematic names while their component parts are named after the central characters in the book, all of whom live in the Met Life Tower. I outline this structure, which isn’t laid out in the book itself. This rigorous modular structure plays against the opportunistic chaos that pervades the plot. The book is full of quotations located between the various sections.

Is the 3D transportation geometry of New York 2140 feasible?: The city below 50th street is under 50 feet of water. People move about by small boats and by walkable skybridges. I argue that this isn’t a feasible transportation system.

Toward a narrative counterpoint for New York 2140: But just how DID the residents of the Met Life Tower create such a strong sense of community? Robinson doesn’t really show us, but he does provide a hint in the very last chapter.

The Persistence of political forms and the Emperor’s New Clothes: The politics of New York 2140 mirror those in America after the financial crisis of 2008, but with a different ending in the works. But how do you get all those many scattered pockets of protest to rise up together? Robinson provides a very specific answer to that question.

Through a 3D Glass Starkly, New York 2140 Redux: About Hurricane Sandy, the future, the nature of science fiction (about the present) and just how long is the present? A couple of seconds, a year, a decade, a century? Events unfold in nested waves on various time scales. Can the patterns in fiction nonetheless be true, with their truth in some sense depending on the skillful interweaving of fact and imagination? More metaphysics.

Postscript: I think I’ve figured out how [to/I] think about space travel: No, mostly not about New York 2140, though I mention it. But it’s about the future, one very different from the present. We’ll be in a post-Singularity world, not the fantasy nonsense of superintelligent machines, but a world in which our understanding has gone up a level as, say, Newton’s understanding surpassed that of his predecessors.

The Met Life Tower, a last view: Just what it says, a last photograph of the building.

In the garden, an economy of plants



The mystique of compound interest, a brief note about generalizing it to, well, just about everything

I’ve been following Tyler Cowen for years. Marginal Revolution, the blog he runs with Alex Tabarrok, is on my daily rounds of the web. One of Cowen’s themes is bigness: Bigness is good, or at any rate, the right kind of bigness is good. The way to freedom and prosperity for all is to grow the economy. Growth begets growth.

If you read enough by Cowen it becomes clear that the phenomenon of compound interest is at the imaginative center of his belief in bigness. What’s the route to prosperity, to bigness? Compound interest – and I believe Cowen holds that innovation is the way to compound that interest, though that’s a different discussion for a different day.

Compound interest is what happens when you put your money into a savings account and leave it there, year after year. It just grows and grows. How does that work?

You put some amount of money in a savings account. That’s called the principal. After an interval specified in the contract governing that account, the bank adds a bit of money to your principal. That is the money your principal has earned from the bank. We call that interest. If, as you are supposed to do, you refrain from withdrawing from your account, that interest will be compounded. For, when the bank pays interest at the next specified interval, it pays interest on the original principal plus the interest it had added to your account at the previous interval. And it keeps on doing this forever and forever. So, without you having to do anything, your money just grows and grows. And if you make continued contributions of your own to the account, as you are supposed to do, why so much the merrier.
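That mechanism, interest paid on principal plus all previously credited interest, can be sketched in a few lines. A minimal sketch, with illustrative numbers:

```python
# Compound interest: each period, interest is credited on the
# principal PLUS all interest credited in earlier periods.

def compound(principal: float, rate: float, periods: int) -> float:
    """Balance after `periods` intervals at `rate` per interval."""
    balance = principal
    for _ in range(periods):
        balance += balance * rate  # interest on principal + prior interest
    return balance

# The loop is equivalent to the closed form:
#   principal * (1 + rate) ** periods
```

That exponential closed form is the whole mystique: $1,000 at 5% per year becomes about $1,629 in 10 years and about $7,040 in 40, without the depositor lifting a finger.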

Now, it is up to the bank to decide the rate and interval at which it pays interest on a savings account. That is within the bank’s control. As long as the bank is solvent it can continue to do this. The bank’s solvency depends on its ability to negotiate the current business environment.

The bank is able to pay you interest because it takes your money and loans it to others (or otherwise invests it) at a rate that is higher than what it is paying to you. As long as the bank’s money-making activities are successful, everyone is happy. But what happens if the bank is unable to meet its obligations?

Whoops!

It happens, of course it happens. Setting dishonesty and ordinary incompetence aside, bankers don’t control the world. So it might happen that, through no fault of its own, a bank’s management fails to generate enough income to cover its obligations. That’s troublesome. Add incompetence and dishonesty back into the mix and things just get worse.

Now, how do you generalize that relatively self-contained story to society as a whole, because that’s what you need to do if you are going to use the phenomenon of compound interest to animate and illuminate your worldview? Well, wouldn’t you know it, economists have developed models of economic growth – Robert Solow won the 1987 Nobel Prize in Economics for one such model. Of course a model is only a model, it’s not the whole world. Economists know that, we all know that.
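For the curious, here is a hedged sketch of the textbook version of Solow’s model, the kind of model mentioned above. The parameter values are illustrative, not empirical estimates: capital per worker grows through saving and shrinks through depreciation and population growth, converging to a steady state rather than compounding forever.

```python
# A textbook Solow growth model sketch (Cobb-Douglas production,
# f(k) = k**alpha). Parameters are illustrative, not estimates.

def solow_path(k0, s=0.25, alpha=0.3, delta=0.05, n=0.01, years=200):
    """Capital per worker over time: k' = k + s*k**alpha - (n + delta)*k."""
    k = k0
    path = [k]
    for _ in range(years):
        k = k + s * k**alpha - (n + delta) * k
        path.append(k)
    return path

# Steady state: s * k**alpha == (n + delta) * k, so
#   k* = (s / (n + delta)) ** (1 / (1 - alpha))
```

Note the contrast with the savings account: in the basic model, saving alone cannot compound indefinitely; sustained growth has to come from outside the model, from technological progress, which is roughly where Cowen’s emphasis on innovation enters.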

But if the model is sufficiently rich, why it is easy to get lost in it, tinkering around, fitting and refitting, revising, even challenging and opposing. One can get lost in there. What then?

What then?

Bankers can pay compound interest as long as they can control their operating environment. But really, can bankers always and ever control their operating environment? No. The most skillful surfer in the world cannot control the wave, ever. S/he can only ride it, and ride it well.

At what point does the mystique of compound interest turn into a myth of total control?

* * * * *

On a different note, Cowen likes to travel, he likes food, and he likes to have conversations with interesting people. What kind of conversation would he have had with Anthony Bourdain?

Friday, June 28, 2019

Classical Japanese anatomical illustration


Why not computer in a can? Michael Nielsen on the varieties of material existence

I have a friend who is prone to thinking that the route from imagination to implementation isn't worth thinking about. I'd tease him with the idea of computer-in-a-can. It's an ordinary aerosol can. You spray computer-in-a-can on a convenient surface and voilà! you have a keyboard and monitor on that surface. You type whatever you need to type and the computer displays your result on the monitor. Just like that.

Michael Nielsen has written an interesting set of notes, The varieties of material existence (title swiped from William James). It's rather more serious and interesting than computer-in-a-can, but who knows? Some passages:
Using electrons, protons, and neutrons, it is possible to build: a waterfall; a superconductor; a living cell; a Bose-Einstein condensate; a conscious mind; a black hole; a tree; an iPhone; a Jupiter Brain; a working economy; a von Neumann replicator; an artificial general intelligence; a Drexlerian universal constructor (maybe); and much, much else. [...]

We usually think of all these things as separate phenomena, and we have separate bodies of knowledge for reasoning about each. Yet all are answers to the question “What can you build with electrons, protons, and neutrons?” [...]

What are the most interesting states of matter which have not yet been imagined? It’s remarkable that human consciousness, universal computing, superconductors, fractional quantum Hall systems (etc) are all pretty recent arrivals on planet Earth. Each is an amazing step, a qualitative change in what is possible with matter. What other states of matter are possible? What qualitatively new types of phenomena are possible, going beyond what we’ve yet conceived? Can we invent new states of matter as different from what came before as something like consciousness is from other states of matter? What states of matter are possible, in principle? In a sense, this is really a question about whether we can develop an overall theory of design? [...]

Much of my confusion is because the standard classification of matter into phases relies on that matter being at (or near) thermodynamic equilibrium. Parts of the human body are near thermodynamic equilibrium. But much is not. The thing that makes it all go, that makes life life – our metabolism – is all about energy flows that keep things away from equilibrium. [...]

I have two broad (and very different) frameworks for thinking about matter.

One of those frameworks is equilibrium statistical mechanics. This is the framework used by physicists to think about the different phases of matter, and (often) by chemists and materials scientists to think about what new materials are possible. It’s a powerful framework, and most stable matter in the world is of this type.

However, many of the most interesting systems – including universal computers, conscious minds, cells, economies, and others – don’t fit well into this framework. Rather, they have the three properties described above: many static components near thermodynamic equilibrium; many energy flows and dynamic components far from equilibrium; and surprising stability and resilience, often with built in self-healing or error-correction mechanisms.

Two gulls, a crane, and some orange

Has Dennett Undercut His Own Position on Words as Memes?

I'm bumping this post, originally from April 2015, to the top of the queue because it has elements, which I've highlighted, that are reminiscent of Rodney Brooks's recent talk in which he argues that the computer metaphor is not adequate to understanding what the mind/brain does. Brooks invokes the idea of adaptation and examples from biology. Dennett brings up thermodynamics. Does biology have an advantage there? Powerful computers use huge amounts of energy. Can electronic devices match the energy efficiency of neural tissue? See this remark by Mark P. Mills:

But it’s important to keep in mind that Moore’s Law is, as we’ve noted, fundamentally about finding ways to create ever tinier on-off states. In that regard, in the words of one of the great physicists of the 20th century, Richard Feynman, “there’s plenty of room at the bottom” when it comes to logic engines. To appreciate how far away we still are from a “bottom,” consider the Holy Grail of computing, the human brain, which is at least 100 million times more energy efficient than the best silicon logic engine available.
* * * * *


Early in 2013 Dan Dennett had an interview posted at John Brockman’s Edge site, The Normal Well-Tempered Mind. He opened by announcing that he’d made a mistake early in his career, that he opted for a conception of the brain-as-computer that was too simple. He’s now trying to revamp his sense of what the computational brain is like. He said a bit about that in that interview, and a bit more in a presentation he gave later in the year: If brains are computers, what kind of computers are they? He made some remarks in that presentation that undermine his position on words as memes, though he doesn’t seem to realize that.

Here’s the abstract of that talk:
Our default concepts of what computers are (and hence what a brain would be if it was a computer) include many clearly inapplicable properties (e.g., powered by electricity, silicon-based, coded in binary), but other properties are no less optional, but not often recognized: Our familiar computers are composed of millions of basic elements that are almost perfectly alike – flipflops, registers, or-gates – and hyper-reliable. Control is accomplished by top-down signals that dictate what happens next. All subassemblies can be designed with the presupposition that they will get the energy they need when they need it (to each according to its need, from each according to its ability). None of these is plausibly mirrored in cerebral computers, which are composed of billions of elements (neurons, astrocytes, ...) that are no-two-alike, engaged in semi-autonomous, potentially anarchic or even subversive projects, and hence controllable only by something akin to bargaining and political coalition-forming. A computer composed of such enterprising elements must have an architecture quite unlike the architectures that have so far been devised for AI, which are too orderly, too bureaucratic, too efficient.
While there’s nothing in that abstract that seems to undercut his position on memes, and he affirmed that position toward the end of the talk, we need to look at some of the details.

The Material Mind is a Living Thing

The details concern Terrence Deacon’s recent book, Incomplete Nature: How Mind Emerged from Matter (2013). Rather than quote from Dennett’s remarks in the talk, I’ll quote from his review, "Aching Voids and Making Voids" (The Quarterly Review of Biology, Vol. 88, No. 4, December 2013, pp. 321-324). The following passage may be a bit cryptic, but short of reading the relevant chapters in Deacon’s book (which I’ve not done) and providing summaries, there’s not much I can do, though Dennett says a bit more both in his review and in the video.

Here’s the passage (p. 323):
But if we are going to have a proper account of information that matters, which has a role to play in getting work done at every level, we cannot just discard the sender and receiver, two homunculi whose agreement on the code defines what is to count as information for some purpose. Something has to play the roles of these missing signal-choosers and signal-interpreters. Many—myself included—have insisted that computers themselves can serve as adequate stand-ins. Just as a vending machine can fill in for a sales clerk in many simplified environments, so a computer can fill in for a general purpose message-interpreter. But one of the shortcomings of this computational perspective, according to Deacon, is that by divorcing information processing from thermodynamics, we restrict our theories to basically parasitical systems, artifacts that depend on a user for their energy, for their structure maintenance, for their interpretation, and for their raison d’être.
In the case of words the signal choosers and interpreters are human beings and the problem is precisely that they have to agree on “what is to count as information for some purpose.” By talking of words as memes, and of memes as agents, Dennett sweeps that problem under the conceptual rug.

Thursday, June 27, 2019

The meaning of life?


Population size and human culture, some quick and dirty reflections

Economist Tyler Cowen has a jones for size. He’s just published a book in praise of big business and he believes that the best way to bring prosperity to all is to encourage economic growth. Color me skeptical, but note as well that I check his website daily because, often enough, there is good stuff there.

Anyhow, one day I asked myself whether or not there was one thing that requires a large population. I suppose you can think of lots of things. But I quickly thought of cultural diversity. Culture lives in human populations. Any given cultural practice (of whatever kind) is going to require a population to practice and sustain it. If you want to listen to lots of different kinds of music, you need populations of musicians to create it. Any given musician is going to be able to play different kinds of music with some degree of proficiency. But no musician can play them all.

I then asked: What’s the minimal population the world needs to sustain all existing bodies of cultural practice? How do you even begin to think about that one? For example, there are people worried about the demise of languages. Now, it’s one thing to document a language so that we can continue to study it when there’s no one left to speak it. But how many people are required to sustain a language as a living thing? And what’s the cost of maintaining that minimal population? And you can ask that question for every kind of cultural practice: clothing, cuisine, poetry, transportation, housing, and so forth. Of course, there are complex networks of interdependencies among cultural practices.

One might also ask just which practices are worth maintaining. That’s a whole different kettle of fish. Many practices have been lost. The pyramids of Egypt remain, but their methods of construction are matters of conjecture. But then whole civilizations have been lost, some leaving artifacts and some, perhaps, leaving none.

One can go through a similar exercise for biological species, as some are. I hear that species are going extinct at an alarming rate, pushed out of existence by the expansion of humankind. But extinctions of course are not new. It’s the way of the biosphere, always has been. And then a big asteroid hits. Boom! Hundreds of thousands of species obliterated.

And so it goes.

Let’s go about this a different way. Imagine that a mysterious virus wipes out all but a million people. What would those million people be capable of sustaining? Obviously a lot depends on the capability of those people. Do they all have one language in common? Does it make any difference what language it is, say Mandarin, English, or Farsi? Note that language capability determines access to libraries. If they don’t have at least one language in common, what then?

Would a population of a million be able to support the use of smart phones? Note that this is very different from asking how many people, in the current world, are devoted to manufacturing and maintaining smart phones. We’re talking about a world where there are only a million people. Would smart phones be one of the many things those people keep up and running? If I had to guess, I’d guess they couldn’t. What about automobiles? Electric lighting?

It’s a strange question.

Jump and Kong take a ride on an elephant

Wednesday, June 26, 2019

Norman Mailer's "space-operatic heebie-jeebies" about Apollo 11

Life magazine commissioned Norman Mailer to write about Apollo 11. They paid very well and he needed the cash. So he did. He wrote three mega-installments and collected them into a book, Of a Fire on the Moon. James Parker writes about that book for the July 2019 issue of The Atlantic, ‘A Work of Art Designed by the Devil’.
This is the glory of Of a Fire on the Moon—the fidelity of Aquarius [that is, Mailer] to his apprehensions; his space-operatic heebie-jeebies; his perverse, obsessive sense that under the achievement, something is dying. Plenty of people regarded the moonshot as a monstrous misallocation of resources. Aquarius alone—or alone in mass-market magazines—was ready to declare it a metaphysical catastrophe. In his stagy rhetoric, his mangled-by-moonbeams prose, he laments the lunar trespass by “strange, plasticized, half-communicating Americans,” and what it portends down here on Earth. Apollo’s success, he declares, “set electronic engineers and computer programs to dreaming of ways to attack the problems of society as well as they had attacked the problems of putting men on the moon.”

Horrific prospect. Midway through his dispatches, Aquarius has a sleepless night in Houston. The abyss gapes. Futuristic vistas assail him; by the witch’s light of insomnia he sees an America “gassed by the smog of computer logic,” where reason has become a higher insanity, and irrationalism a sanctuary.

Godzilla has doubled in size since it appeared in 1954. Why?


One proposal is that it reflects increasing anxiety: Nathaniel J. Dominy, Ryan Calsbeek, Godzilla’s extraordinary growth over time mirrors an increase in Anthropocene angst, Science, 28 May, 2019:
[Susan] Sontag argued that our taste for disaster films is constant and unchanging. On the contrary, we suggest that Godzilla is evolving in response to a spike in humanity’s collective anxiety. Whether reacting to geopolitical instability, a perceived threat from terrorists, or simply fear of “the other,” many democracies are electing nationalist leaders, strengthening borders, and bolstering their military presence around the world.
Oleg Sobchuk is not buying it, The “Science” of Godzilla in Science, Medium, 26 June 2019:
Personally, I think that another — simpler and less dramatic — hypothesis is more likely. More likely because it concerns not a particular film but a general pattern of the cultural evolution of art.

This hypothesis has to do with a common, though not very well studied, phenomenon of the intensification of artistic stimuli over time. Many artistic “devices” — techniques of manipulating pleasant emotions in the audience — tend to be used in more intense ways during their evolution. Godzilla is a device, similarly to King Kong or any other monster in the kaiju genre featuring giant monsters: somehow, the size on its own may be enough to evoke some pleasant fear in us.
Makes sense to me. Spreading anxiety is a reasonable way to think about the spread of Godzilla films, but stimulus intensification is a better way to talk about the monster's size.

Tuesday, June 25, 2019

Xanadu Swingers

Granted, there's that line about the woman wailing for her demon lover; still, I doubt this is what Coleridge had in mind.



If you're curious, here's a link that will take you there, though you'll have to jump through a few hoops: Xanadu-Manchester. Note, though, that according to this and that on the web, this club may no longer be open, and it wasn't much anyway. And then there's Xanadu of Greenville, SC (NSFW). I suppose if you look around you may find out something about it as well.

Surrogates



Fictional commitments – Star Trek DS9 S2 E16: Shadowplay [Media Notes 4]

From the opening of the Wikipedia plot summary:
Dax and Odo detect an unusual particle field emanating from a planet in the Gamma Quadrant, so they beam down to investigate. They discover the field is coming from a small village's power generator, but when a villager named Colyus discovers them, he is suspicious of them. Once Odo convinces Colyus of their intentions, Colyus explains that the village is a Yaderan colony and 22 people have disappeared in the past few days without a trace. Odo and Dax offer to help investigate the disappearances, but the village leader, Rurigan, seems unconvinced that the villagers will ever be found.
This that and the other happens, but we can skip over all that. Here’s the concluding paragraph of the plot summary:
When the Dominion arrived on Yadera Prime, Rurigan explains, it destroyed life as he knew it, so he escaped to an abandoned planet and recreated the world he had lost. He has been living in this illusion for over 30 years, and now he admits that none of it was real. However, Odo points out that were it not real Rurigan would not have been able to develop feelings for the villagers - after all, they are only holograms. He argues that Taya and the others are real and deserve a real chance to live. Dax and Rurigan repair the reactor, restoring the village, including the missing people. Before he and Dax leave, Odo realizes how close he has grown to Taya and the two share a heartfelt goodbye. Taya thanks him for finding her mother and wishes him luck in finding his own parents. Before they leave, Odo demonstrates his abilities by morphing into a toy that Taya played with earlier.
This is a world of holographic people that Rurigan had created for himself. It is an extreme version of a theme that recurs in the Star Trek universe.

There is, of course, the holodeck, which is always available and is used mostly in a self-contained way, for recreation, but also for work. But sometimes things get out of hand, as in the sequence of Next Generation episodes involving the Sherlock Holmes world. Moriarty discovers that he is an imaginary, a created, being and that there is a “real” world “out there.” He even manages to take over the Enterprise and escape into that world but, as I recall, is negotiated back into the holodeck on the promise that one day, when the real people figure it out, he’ll be released into the real world (there is a love interest tangled up in this as well). And then we have the episodes where Reginald Barclay all but loses himself in the holodeck (TNG) and, in Voyager, uses it to establish something real: communication between Earth and Voyager.

All of this, of course, is playing on the meddlesome problem of fiction and reality, meddlesome because we know so much about reality through mediated representations of it – writing, recordings, and images of various kinds. How do we distinguish between representations of the real and representations of the imaginary? And it’s not simply a matter of epistemology, but of emotional commitment.

Then, in a more focused way, we have Data (TNG), an android with dreams of becoming fully human. And the holographic Doctor in Voyager who ends up fighting for the rights of holograms.

Stepping out of the Star Trek universe we find, for example, the Spielberg/Kubrick film A.I. Artificial Intelligence. It presents us with a future world in which Mecha (androids) serve humans in various subordinate capacities. The Swintons accept young David, a Mecha created specifically as a surrogate child. Complications follow.

Finally, in real life in the here and now we have companion robots, some intended as companions for young children, others as companions for old people, and then we have, of course, sexbots. Except for the sexbots, most of these are only loosely anthropomorphic, or not at all – though some are animal-like.

Monday, June 24, 2019

A Jamaican $50 note


Should we do away with the categories of Linnaean classification in biology?

Christie Wilcox, What’s in a Name? Taxonomy Problems Vex Biologists, Quanta Magazine, June 24, 2019.
Carl Linnaeus was probably not the first scientist to realize the inherent connectedness of life on this planet. But he articulated and codified it. In the 10th edition of his Systema Naturae, published in 1758, he established a system of naming and organizing life that endures to this day — what we still call Linnaean taxonomy, although today’s system is somewhat different from the five-rank hierarchy he proposed. The principle is the same, though: Life is organized into nested ranks, with each higher tier representing a larger group of related organisms to which the species at the bottom belong.

This ranked taxonomy — domain, kingdom, phylum, class, order, family, genus, species — is foundational to biology pedagogy. Every student learns it, often through a mnemonic like “Didn’t Know Popeyes Chicken Offered Free Gizzard Strips” or “Dear King Phillip Came Over For Great Spaghetti.”

But a growing number of researchers think it’s time for taxonomy to move away from these ranks, or even abandon them altogether. “When a student has to learn it, it also suggests to the student that there’s something special about these groups,” said Andreas Hejnol, a comparative developmental biologist at the University of Bergen in Norway. Yet there isn’t.

The problem that Hejnol sees with the whole system is that the ranks don’t mean anything specific or uniform across all groups of life. Even though species is arguably the most important rank across multiple fields of biology, there are dozens of species concepts in use — and biologists working with different groups of organisms can’t seem to agree on just one. You might think that the other end of the hierarchy would be more settled, but it wasn’t so long ago that domains simply didn’t exist — the three domains we use today (Archaea, Bacteria and Eucarya) were only proposed in 1990. At that time, the top rank was kingdom, and there were five of those; now there are at least six, though some say there should be as many as 32. Similar ambiguities plague all the taxonomic ranks in between — even those often considered to be major, distinct and unambiguous, like phyla.
Color me sympathetic, and I haven't even read the article. But I do know something about classification and I've thought about this off and on. Glad to see that some biologists are at least thinking about it. 

Still, given how classification pervades the discipline, it's obvious that changing things, however sensible it seems in the abstract, would be an enormous pain in the posterior. Just how do you get everyone to agree to a new system? How would you propagate such a change and what would happen to the existing literature?
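The structural point is easy to make concrete: a ranked taxonomy is a fixed-depth nesting, one value per rank. Here's an illustrative Python sketch (the rank names are from the article; the gray wolf classification is my example, not something the article discusses):

```python
# The eight Linnaean ranks, broadest to narrowest, as listed in the article.
RANKS = ["domain", "kingdom", "phylum", "class", "order", "family", "genus", "species"]

def classify(values):
    """Pair each rank with its value; the fixed depth is the point of contention."""
    if len(values) != len(RANKS):
        raise ValueError("a ranked taxonomy demands exactly one value per rank")
    return dict(zip(RANKS, values))

# An illustrative classification: the gray wolf.
taxonomy = classify(["Eucarya", "Animalia", "Chordata", "Mammalia",
                     "Carnivora", "Canidae", "Canis", "Canis lupus"])
print(taxonomy["family"])  # Canidae
```

The rank-free alternative the critics favor would drop the fixed-depth constraint entirely and work only with the nesting itself, since nothing biological guarantees that "family" means the same thing in beetles and in birds.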

Helicopter, cranes, trees, sky, the city


Literacy and the human mind/brain

Reading and writing is something most of us take for granted. Grabbing a pen to jot something down or using our smartphone to read or answer a text message or email is something to which we don’t give even a moment’s thought. Reading and writing, however, are amazingly complex skills – as Falk Huettig from the Max Planck Institute for Psycholinguistics in Nijmegen and his colleagues Régine Kolinsky and Thomas Lachmann outline in the foreword to “The effects of literacy on cognition and brain functioning”, a special issue of Language, Cognition and Neuroscience. To read and write, the brain must coordinate numerous perceptual and cognitive functions. These include, for example, basic visual skills, phonological perception, long-term memory and working memory. That’s why it takes years of practice for reading and writing to become so deeply engrained as to be effortless. Learning to read and write in turn modifies the structure and function of the brain.

Research in this area focuses on two fundamental questions. What conditions need to be in place for us to be able to learn to read and write? How does this complex skill affect our perception and cognition? In order to answer these questions, comparisons are particularly useful: for example, looking at the differences between good adult readers and adults who have never learnt to read; or the difference between children who learn to read easily and children who have more difficulty with this, or may have dyslexia.

In the case of dyslexia in particular, it is often hard to differentiate whether the associated deficits are causative or whether they occur because readers of comparable age have trained these cognitive skills through reading. Some years ago, José Morais from the University of Brussels determined that reading significantly improves phonological awareness (the ability to recognise specific sound structures of a language). People with dyslexia often find it difficult to distinguish these structures. John F. Stein from the University of Oxford argues that this is merely a side effect, rather than the cause, of reading difficulties.
More at the link. Here's the introduction to a special issue of Language, Cognition and Neuroscience:

Falk Huettig, Régine Kolinsky & Thomas Lachmann (2018) The culturally co-opted brain: how literacy affects the human mind, Language, Cognition and Neuroscience, 33:3, 275-277, DOI: 10.1080/23273798.2018.1425803

Lost worlds at 3QD, Rider Haggard to Anthony Bourdain

I’ve published another article at 3 Quarks Daily:
Think of what’s happened in the world since the publication of Rider Haggard’s She in 1886 to the 2013 airing of the Congo episode of Anthony Bourdain’s Parts Unknown, a span of one and a quarter centuries. The European Scramble for Africa was heating up in consequence of the Berlin Conference of 1884 so that by 1914 90% of Africa was under some European heel. By that time World War I, aka the Great War, exploded and Europe was in flames and choking with poisonous gas. Then Europe took a break as Japan armed itself, invaded Manchuria and then China and before we knew it, World War II. Then Korea, then Vietnam, Iraq, Afghanistan, and at the moment Iran’s looking iffy.

But that’s not what the article’s about. Between She and Congo we have Heart of Darkness (Conrad) and Apocalypse Now (Coppola). But it’s the link between She and Heart of Darkness that most interests me at the moment. For one thing, I’ve already thought and written about Apocalypse Now and Heart of Darkness:
Anthony Bourdain is a different [kind of] story. Perhaps one day I’ll attempt to do him justice.

Back to She and Heart of Darkness, it’s only in the last week that I’ve been thinking about that (prompted by an article I found on my hard drive). There’s a connection there, a deep one that I’ve just barely limned over there at 3QD.

The basic idea is that you take She, strip the romance (that is, the romantic mystification of imperial conquest?) away, and you get Heart of Darkness. In She we have a trio: Ayesha herself, pining away for her beloved Kallikrates, who reappears in the form of his descendant Leo Vincey. They’re replaced by Kurtz and the Intended in Heart of Darkness, and the romantic mystification disappears in the process. How does that work? The only thing that seems clear to me is Kurtz’s motive for going to Africa in the first place: to acquire wealth so that he’ll be worthy of his Intended (or rather, her family). Conrad doesn’t make a big deal of this, but it’s there. As for the romantic mystification, it’s been shoveled into Kurtz’s report for the International Society for the Suppression of Savage Customs, the one he glossed with “Exterminate all the brutes!”.

(There, I like that, foisting the burden of romantic mystification on Kurtz so that he’s rather like Horace Holly, Leo Vincey and, well, H. Rider Haggard.)


I’ll leave you with a section from the Wikipedia entry on She:
Feminist literary historians have tended to define the figure of She as a literary manifestation of male alarm over the "learned and crusading new woman". In this view, Ayesha is a terrifying and dominant figure, a prominent and influential rendering of the misogynistic "fictive explorations of female authority" undertaken by male writers that ushered in literary modernism. Ann Ardis, for instance, views the fears Holly harbours over Ayesha's plan to return to England as being "exactly those voiced about the New Woman's entrance in the public arena". According to the feminist interpretation of the narrative, the death of She acts as a kind of teleological "judgement" of her transgression of Victorian gender boundaries, with Ardis likening it to a "witch-burning". However, to Rider Haggard, She was an investigation into love and immortality and the demise of Ayesha the moral end of this exploration:
When Ayesha in the course of ages grows hard, cynical, and regardless of that which stands between her and her ends, her love yet endures ... when at last the reward was in her sight ... she once more became (or at the moment imagined that she would become) what she had been before disillusion, disappointment, and two thousand wretched years of loneliness had turned her heart to stone ... and in her lover's very presence she is made to learn the thing she really is, and what is the end of earthly wisdom and of the loveliness she prised so highly.
Indeed, far from being a radical or threatening manifestation of womanhood, recent academics have noted the extent to which the character of She conforms to traditional conceptions of Victorian femininity; in particular her deferring devotion to Kallikrates/Leo, whom she swears wifely obedience to at the story's climax: "'Behold!' and she took his [Leo's] hand and placed it upon her shapely head, and then bent herself slowly down till one knee for an instant touched the ground – 'Behold! in token of submission do I bow me to my lord! Behold!' and she kissed him on the lips, 'in token of my wifely love do I kiss my lord'." Ayesha declares this to be the "first most holy hour of completed womanhood".

Sunday, June 23, 2019

Strange remains

The mind of the octopus

Yes, you read that right: the mind of an octopus. Why not?

Amia Srinivasan reviews two recent books in "The Sucker, the Sucker!", London Review of Books, 7 September 2017.
Other Minds: The Octopus and the Evolution of Intelligent Life by Peter Godfrey-Smith
Collins, 255 pp, £20.00, March 2017, ISBN 978 0 00 822627 5

The Soul of an Octopus: A Surprising Exploration into the Wonder of Consciousness by Sy Montgomery, Simon & Schuster, 272 pp, £8.99, April 2016, ISBN 978 1 4711 4675 6
From the review:
The octopus threatens boundaries. Its body, a boneless mass of soft tissue, has no fixed shape. Even large octopuses – the largest species, the Giant Pacific, has an arm span of more than six metres and weighs a hundred pounds – can fit through an opening an inch wide, or about the size of its eye. This, combined with their considerable strength – a mature male Giant Pacific can lift thirty pounds with each of its 1600 suckers – means that octopuses are difficult to keep in captivity. [...]

Peter Godfrey-Smith is a philosopher and diver who has been studying octopuses and other cephalopods in the wild, mostly off the coast of his native Sydney, for years. The alienness of octopuses, in his view, provides an opportunity to reflect on the nature of cognition and consciousness without simply projecting from the human example. Because of their evolutionary distance from us, octopuses are an ‘independent experiment in the evolution of large brains and complex behaviour’. Insofar as we are able to make intelligent contact with them – to understand octopuses and have them understand us – it is ‘not because of a shared history, not because of kinship, but because evolution built minds twice over’. The potential worry is that the evolutionary chasm between us and the octopus is too great to make mutual intelligibility possible. In that case the octopus will have something to teach us about the limits of our own understanding. [...]

Octopuses are indeed glutinous; according to Sy Montgomery, author of the splendid Soul of an Octopus, the slime on an octopus’s skin feels like a cross between drool and snot. But the octopus’s will is far from malignant, at least when it comes to humans. Octopuses do occasionally attack people, giving a venomous nip or stealing an underwater camera when threatened or annoyed, but in general they are gentle, inquisitive creatures. (Fishermen, by contrast, often kill octopuses by biting out their brains, and in many countries they are eaten alive.) Octopuses encountering divers in the wild will frequently meet them with a probing arm or two, and sometimes lead them by the hand on a tour of the neighbourhood. Aristotle, mistaking curiosity for a lack of intelligence, called the octopus a ‘stupid creature’ because of its willingness to approach an extended human hand. Octopuses can recognise individual humans, and will respond differently to different people, greeting some with a caress of the arms, spraying others with their siphons. This is striking behaviour in an animal whose natural life cycle is deeply antisocial. Octopuses live solitary lives in single dens and die soon after their young hatch. Many male octopuses, to avoid being eaten during mating, will keep their bodies as far removed from the female as possible, extending a single arm with a sperm packet towards her siphon, a manoeuvre known as ‘the reach’. [...]

What does it feel like to be an octopus? Does it feel like anything at all? Or are octopuses, as Godfrey-Smith puts it, ‘just biochemical machines for which all is dark inside’? This form of question – ‘what is it like to be a bat?’ Thomas Nagel asked in a hugely influential paper in 1974 – is philosophical shorthand for asking whether a creature is conscious. [...]

Godfrey-Smith starts with the conviction that consciousness is an evolved thing, and accepts the conclusion that it has more primitive precursors: that it comes in degrees after all. Consciousness – the possession of an ‘inner’ model of the ‘outer’ world, or the sense of having an integrated, subjective perspective on the world – is, on his view, just a highly evolved form of what he calls ‘subjective experience’. Many animals, Godfrey-Smith thinks, have some degree of subjective experience, even if it falls short of full-blown consciousness. He points to what the physiologist Derek Denton called the ‘primordial emotions’: thirst, lack of air, physical pain. These sensations intrude on our more complex mental processes, refusing to be dismissed. They hark back to a more rudimentary form of experiencing the world – a form, Godfrey-Smith thinks, that does not require a sophisticated inner model of the world. ‘Do you think,’ he asks, that pain, thirst or shortness of breath ‘only feel like something because of sophisticated cognitive processing in mammals that has arisen late in evolution? I doubt it.’
FWIW, I was quite taken with cephalopods (squid, cuttlefish, and octopi) when I was young and wrote a term paper about them for 10th grade biology.

"Welcome to the artificial intelligence bullshit-industrial complex."

Mike Mallazzo, The BS-Industrial Complex of Phony A.I., Medium, June 12, 2019:
The core feature of a B.S.-industrial complex is that every member of the ecosystem knows about the charade, but is incentivized to keep shoveling. It’s not so much that we reach a point where we convince ourselves our bullshit is true; it’s that the difference between truth and bullshit has become purely semantic. The definition of something, like artificial intelligence, becomes so jumbled that any application of the term becomes defensible.

Let’s break down the key components:

The marketers know it’s bullshit. At some point, it probably began innocently enough: A clever product marketer, looking to differentiate a technology that three of his competitors were also hawking, likely started out by declaring that his email capture tool was powered by dragon’s tears. When that failed, he said it was powered by artificial intelligence. The next week, customer relationship management solutions became A.I., then sales outreach platforms and eventually… bodegas. Then it became a demand-side problem. Requests for proposal began to ask how technology vendors “leverage A.I.” while investors began to inquire around how incorporating artificial intelligence at scale would reduce churn.

Saturday, June 22, 2019

Japanese fireflies


Frame-grabs from Ninja Scroll:


Brain-to-brain synchrony in the classroom

Dikker et al., Brain-to-Brain Synchrony Tracks Real-World Dynamic Group Interactions in the Classroom, Current Biology 27, 1375–1380, May 8, 2017
Highlights
  • We report a real-world group EEG study, in a school, during normal class activities
  • EEG was recorded from 12 students simultaneously, repeated over 11 sessions
  • Students’ brain-to-brain group synchrony predicts classroom engagement
  • Students’ brain-to-brain group synchrony predicts classroom social dynamics
Summary

The human brain has evolved for group living [1]. Yet we know so little about how it supports dynamic group interactions that the study of real-world social exchanges has been dubbed the “dark matter of social neuroscience” [2]. Recently, various studies have begun to approach this question by comparing brain responses of multiple individuals during a variety of (semi-naturalistic) tasks [3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15]. These experiments reveal how stimulus properties [13], individual differences [14], and contextual factors [15] may underpin similarities and differences in neural activity across people. However, most studies to date suffer from various limitations: they often lack direct face-to-face interaction between participants, are typically limited to dyads, do not investigate social dynamics across time, and, crucially, they rarely study social behavior under naturalistic circumstances. Here we extend such experimentation drastically, beyond dyads and beyond laboratory walls, to identify neural markers of group engagement during dynamic real-world group interactions. We used portable electroencephalogram (EEG) to simultaneously record brain activity from a class of 12 high school students over the course of a semester (11 classes) during regular classroom activities (Figures 1A–1C; Supplemental Experimental Procedures, section S1). A novel analysis technique to assess group-based neural coherence demonstrates that the extent to which brain activity is synchronized across students predicts both student class engagement and social dynamics. This suggests that brain-to-brain synchrony is a possible neural marker for dynamic social interactions, likely driven by shared attention mechanisms. This study validates a promising new method to investigate the neuroscience of group interactions in ecologically natural settings.
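The core idea — group synchrony as a summary of how similarly everyone's signal is moving — can be illustrated with a toy calculation. A minimal Python sketch, with the caveat that the paper's actual measure is a coherence-based "total interdependence" computed on EEG, not the plain correlation used here, and the signals below are invented:

```python
import math

def pearson(x, y):
    """Pearson correlation between two equal-length signals."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def group_synchrony(signals):
    """Mean pairwise similarity across all students' signals."""
    n = len(signals)
    pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]
    return sum(pearson(signals[i], signals[j]) for i, j in pairs) / len(pairs)

# Toy data: three "students"; the first two track each other, the third drifts.
s1 = [0.0, 1.0, 0.0, -1.0, 0.0, 1.0, 0.0, -1.0]
s2 = [0.1, 0.9, 0.0, -1.1, 0.1, 1.0, -0.1, -0.9]
s3 = [1.0, 1.0, 0.5, 0.5, 0.0, 0.0, -0.5, -0.5]
print(round(group_synchrony([s1, s2, s3]), 2))
```

The paper's claim, in these terms, is that this group-level number, tracked class by class, predicts engagement and social dynamics.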

17 Versions of "Summertime"


A Rare African Flower, Saved

Here's another post about Rider Haggard, also from The Valve, from 28 May 2009. Like She (1886) it too centers on an exotic white race at the heart of Africa (though this post is not about that aspect of the story).
When flowers are not being flowers, they are sometimes put to use as symbols. I’m interested in one such usage, in Rider Haggard’s Allan Quatermain, though I don’t think it’s quite symbolic. Or rather, yes, it is easy to read it as symbolic, but to say so would be to paper over the fact that I don’t really understand how this usage works.

Regardless of exactly how the flower imagery works, it is being recruited to sexual service as suggested by the word “deflower,” but the potential victim is a 10-year old girl. While the girl escapes unharmed, Haggard has her in jeopardy for three chapters, three chapters where the reader doesn’t know what has happened or what might happen to her. Why does Haggard put the reader through this?

First I tell this girl’s story absent almost all of the flower imagery. Then I go back and present that imagery, not to analyze it in detail, but just to lay it out, to show how much of it there is and how specifically it is connected to the girl and her plight. Finally, I confront the question: Why?

Flossie Saved from the Masai

Allan Quatermain (1887) is a sequel to King Solomon’s Mines (1885), with the same three men – Allan Quatermain, Capt. John Good, and Sir Henry Curtis – traveling to a lost world deep inside Africa. The story opens in England three years after the trio had returned from the mines with a small pouch of diamonds and a large stash of adventuresome memories. They’re bored with civilization and itching for adventure. As Quatermain told himself:
And so in my trouble, as I walked up and down the oak-paneled vestibule of my house there in Yorkshire, I longed once more to throw myself into the arms of Nature. Not the Nature which you know, the Nature that waves in well-kept woods and smiles out in corn-fields, but Nature as she was in the age when creation was complete, undefiled as yet by any human sinks of sweltering humanity. I would go again where the wild game was, back to the land whereof none know the history, back to the savages, whom I love, although some of them are almost as merciless as Political Economy.
And so, acting on a vague tale about a lost white race, the trio returns to East Africa, heading toward Mt. Kenya by way of the Tana River. Their immediate goal is a mission station run by a Scotsman, Mr. Mackenzie, whom they believe may have more specific information about this lost race.

They arrive at the mission after the usual dangers, which included a bunch of Masai warriors who had been set upon them to settle a score. They escape the Masai – which means that the score has not been settled. The Masai will return.

Local exotic [sun]

Friday, June 21, 2019

Emotion, the final frontier of AI?

Meredith Somers, Emotion AI, explained, MIT Sloan, March 8, 2019.
Emotion AI is a subset of artificial intelligence (the broad term for machines replicating the way humans think) that measures, understands, simulates, and reacts to human emotions. It’s also known as affective computing, or artificial emotional intelligence. The field dates back to at least 1995, when MIT Media Lab professor Rosalind Picard published “Affective Computing.”

Javier Hernandez, a research scientist with the Affective Computing Group at the MIT Media Lab, explains emotion AI as a tool that allows for a much more natural interaction between humans and machines. “Think of the way you interact with other human beings; you look at their faces, you look at their body, and you change your interaction accordingly,” Hernandez said. “How can [a machine] effectively communicate information if it doesn’t know your emotional state, if it doesn’t know how you’re feeling, it doesn’t know how you’re going to respond to specific content?”

While humans might currently have the upper hand on reading emotions, machines are gaining ground using their own strengths. Machines are very good at analyzing large amounts of data, explained MIT Sloan professor Erik Brynjolfsson. They can listen to voice inflections and start to recognize when those inflections correlate with stress or anger. Machines can analyze images and pick up subtleties in micro-expressions on humans’ faces that might happen even too fast for a person to recognize.
Interesting. I'm pretty sure, though, that this tech wouldn't make Commander Data jealous. This isn't about computers having emotion; it's about computers being able to recognize human affective states.

H/t 3QD.

I wonder if companion robots can read the affective states of their humans?

Is Rider Haggard’s She taught in university, either at the undergraduate or graduate level?

It certainly wouldn’t be taught under a rubric based on the premise that students must study the (very) best narratives. That justification may have reigned up to the middle of the 20th century, but it has since been supplemented by an interest in popular culture. And in THAT context She seems unavoidable. According to the Wikipedia article on She, it has been enormously popular and influential, having sold 83 million copies and influenced Rudyard Kipling, Henry Miller, Graham Greene, J.R.R. Tolkien, and Margaret Atwood. Moreover, it’s been adapted into at least 11 films. And it seems to have some purchase in the scholarly literature. It may not be a great novel, but it's a culturally important one.

All of which is well and good. But is the text taught to undergraduates, or graduate students? I can’t see it being taught in a course on the 19th-century British novel, though that is what it is. But a course on fantasy, or adventure? What about a course that includes one or more of the Indiana Jones films?

If so, under what circumstances? If not, why not?

The politics of order in informal markets: Evidence from Lagos

A paper by Shelby Grossman, June 18, 2019, forthcoming at World Politics.
Abstract

Property rights are important for economic exchange, but in much of the world they are not publicly guaranteed. Private market associations can fill this gap by providing an institutional structure to enforce agreements, but with this power comes the ability to extort from group members. Under what circumstances do private associations provide a stable environment for economic activity? Using survey data collected from 1,179 randomly sampled traders across 199 markets in Lagos, I find that markets maintain institutions to support trade not in the absence of government, but rather in response to active government interference. I argue that associations develop pro-trade institutions when threatened by politicians they perceive to be predatory, and when the organization can respond with threats of its own; the latter is easier when traders are not competing with each other. In order to maintain this balance of power, the association will not extort because it needs trader support to maintain the credibility of its threats to mobilize against predatory politicians.

The meanderings of the Mississippi


Friday Fotos: More flowers





On finding Donkey Kong transistors in a MOS 6502 microprocessor chip – Whoops! the methods of the neurosciences have problems, no?

A couple of days ago I posted a conversation with Rodney Brooks on the limitations of the computing metaphor as a vehicle for understanding the brain. Brooks mentioned an article, "Could a Neuroscientist Understand a Microprocessor?". The point of the article is that if you attempt to understand a microprocessor using the same methods neuroscientists use to understand the brain you're going to come up with gibberish.

I've located that article along with an informal account of the work in The Atlantic. I conclude with some observations of my own.

* * * * *

Jonas E, Kording KP (2017) Could a Neuroscientist Understand a Microprocessor? PLoS Comput Biol 13(1): e1005268. https://doi.org/10.1371/journal.pcbi.1005268
Abstract

There is a popular belief in neuroscience that we are primarily data limited, and that producing large, multimodal, and complex datasets will, with the help of advanced data analysis algorithms, lead to fundamental insights into the way the brain processes information. These datasets do not yet exist, and if they did we would have no way of evaluating whether or not the algorithmically-generated insights were sufficient or even correct. To address this, here we take a classical microprocessor as a model organism, and use our ability to perform arbitrary experiments on it to see if popular data analysis methods from neuroscience can elucidate the way it processes information. Microprocessors are among those artificial information processing systems that are both complex and that we understand at all levels, from the overall logical flow, via logical gates, to the dynamics of transistors. We show that the approaches reveal interesting structure in the data but do not meaningfully describe the hierarchy of information processing in the microprocessor. This suggests current analytic approaches in neuroscience may fall short of producing meaningful understanding of neural systems, regardless of the amount of data. Additionally, we argue for scientists using complex non-linear dynamical systems with known ground truth, such as the microprocessor as a validation platform for time-series and structure discovery methods.

Author Summary

Neuroscience is held back by the fact that it is hard to evaluate if a conclusion is correct; the complexity of the systems under study and their experimental inaccessibility make the assessment of algorithmic and data analytic techniques challenging at best. We thus argue for testing approaches using known artifacts, where the correct interpretation is known. Here we present a microprocessor platform as one such test case. We find that many approaches in neuroscience, when used naïvely, fall short of producing a meaningful understanding.

* * * * *

Ed Yong, Can Neuroscience Understand Donkey Kong, Let Alone a Brain? The Atlantic, June 2, 2016. From the article:
The human brain contains 86 billion neurons, underlies all of humanity’s scientific and artistic endeavours, and has been repeatedly described as the most complex object in the known universe. By contrast, the MOS 6502 microchip contains 3510 transistors, runs Space Invaders, and wouldn’t even be the most complex object in my pocket. We know very little about how the brain works, but we understand the chip completely. [...]

Even though the duo knew everything about the chip—the state of each transistor and the voltage along every wire—their inferences were trivial at best and seriously misleading at worst. “Most of my friends assumed that we’d pull out some insights about how the processor works,” says Jonas. “But what we extracted was so incredibly superficial. We saw that the processor has a clock and it sometimes reads and writes to memory. Awesome, but in the real world, this would be a millions-of-dollars data set.”

Last week, the duo uploaded their paper, titled “Could a neuroscientist understand a microprocessor?” after a classic from 2002. It reads like both a playful thought experiment (albeit one backed up with data) and a serious shot across the bow. And although it has yet to undergo formal peer review, other neuroscientists have already called it a “landmark paper”, a “watershed moment”, and “the paper we all had in our minds but didn't dare to write”. “While their findings will not necessarily be surprising for a chip designer, they are humbling for a neuroscientist,” wrote Steve Fleming from University College London on his blog. “This kind of soul-searching is exactly what we need to ensure neuroscience evolves in the right direction.”
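One of the paper's central moves is a lesion study: disable one component at a time and see which behaviors break, the way neuroscientists lesion brain regions. A toy version of the idea can be sketched on something far smaller than a 6502, say a one-bit full adder (the circuit, gate names, and counting scheme below are my own illustration, not the authors'):

```python
# Toy "lesion study" in the spirit of Jonas & Kording: force each gate
# of a 1-bit full adder to output 0, and count how many input patterns
# change the circuit's behavior.

def full_adder(a, b, cin, lesion=None):
    """Return (sum, carry); 'lesion' names one gate whose output is forced to 0."""
    def gate(name, value):
        return 0 if name == lesion else value
    x1 = gate("xor1", a ^ b)
    s = gate("xor2", x1 ^ cin)
    c1 = gate("and1", a & b)
    c2 = gate("and2", x1 & cin)
    cout = gate("or1", c1 | c2)
    return s, cout

def lesion_effects():
    """For each gate, count the input patterns where lesioning it changes the output."""
    inputs = [(a, b, c) for a in (0, 1) for b in (0, 1) for c in (0, 1)]
    return {g: sum(full_adder(*i) != full_adder(*i, lesion=g) for i in inputs)
            for g in ("xor1", "xor2", "and1", "and2", "or1")}
```

Even in this five-gate circuit, the lesion counts alone don't tell you what a gate *does*; they only tell you how often removing it matters. Scaled up to thousands of transistors, that gap between "lesioning changes behavior" and "understanding function" is exactly the paper's point.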
Five Observations

First, I've noted at various times that I find philosophical arguments about the human mind/brain and computing to be rather empty, mainly because they don't usefully engage the ideas actually used in investigating the mind or the brain. They don't provide pointers for doing better, whether in computational or other terms. I suspect that some of my humanist colleagues are attracted to these arguments because they don't want to entertain, even at a distance, any explicit account of mental operations. Some of them might balk at mind-body dualism as an explicit intellectual program, but they are, effectively, mind-body dualists and therefore mysterians as well. That is, they want the mind to be shrouded in mystery. Is humanistic thought (in that view) such that it cannot in principle embrace or approach explicit accounts of the mind? If so, why? Is this a methodological or a metaphysical commitment (and does it matter)?

Second, if the computer metaphor isn't adequate, does it nonetheless have some role to play in understanding the mind? For instance, I've been arguing that language is the simplest thing humans do that involves computation. In this view computation is a very high-level brain process. That implies, of course, that we're going to need other concepts for understanding the brain – such as control theory and complex dynamics. [I note in passing that the basic arithmetic computation we learn in primary school is a highly constrained and specialized form of language.]
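To make the contrast concrete, here is a minimal sketch of the kind of description control theory offers (the setup is purely illustrative): a proportional feedback loop that regulates a quantity toward a setpoint. Nothing in it is symbol manipulation; it is continuous error correction of the sort often invoked for motor control:

```python
# A minimal feedback-control sketch: a proportional controller nudging
# a state toward a setpoint. The description here is one of error and
# gain, not of symbols and rules.

def regulate(setpoint=1.0, gain=0.5, steps=50):
    """Drive state x toward setpoint by repeatedly correcting a fraction of the error."""
    x = 0.0
    for _ in range(steps):
        error = setpoint - x
        x += gain * error  # each step closes half the remaining gap
    return x
```

With a gain of 0.5 the error halves on every step, so the state converges geometrically to the setpoint. Explanations of this sort, in terms of gains, errors, and stability, may turn out to be the right vocabulary for much of what the brain does below the level of language.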

Third, we do know quite a bit about biological mechanisms at the molecular and cellular levels. But we don't yet know how those processes "add up to" a brain, though we're working on it. See, for example, the OpenWorm project, which is an attempt to simulate the roundworm Caenorhabditis elegans at the cellular level. C. elegans has 959 cells, including 302 neurons and 95 muscle cells. Then we have the Blue Brain Project, which is an attempt to simulate a rodent brain at some neuronal level. What's going on in these simulations? I note that, while Searle said nothing about biology in his original formulation of his well-known Chinese Room argument (against the computational view of mind), he has some more recent observations in which he explicitly references biology, wondering how much of biological mechanism is "essential for duplicating the causal powers of the original."

Fourth, the upshot of Searle's Chinese Room argument is that computation is a purely syntactic process. It cannot encompass meaning. It lacks intention and so cannot be about anything. All of which is to say, it cannot connect with a world outside itself. A Universal Turing Machine certainly seems to be that kind of thing, doesn't it?
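It is easy to see why a Turing machine invites that reading. Here is a minimal interpreter (my own illustrative sketch) together with a two-rule machine that appends a mark to a unary string; the machine consults a lookup table and shuffles symbols, and nothing in its operation refers to what the symbols stand for:

```python
# A minimal Turing-machine interpreter. It rewrites tape symbols by table
# lookup -- pure syntax, in Searle's sense, with no access to meaning.

def run_tm(tape, rules, state="start", pos=0, max_steps=1000):
    """Run a one-tape Turing machine; '_' is the blank symbol."""
    cells = dict(enumerate(tape))
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(pos, "_")
        write, move, state = rules[(state, symbol)]  # table lookup
        cells[pos] = write
        pos += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells)).strip("_")

# Unary increment: scan right over 1s, write one more 1, halt.
INCREMENT = {
    ("start", "1"): ("1", "R", "start"),
    ("start", "_"): ("1", "R", "halt"),
}
```

Whether the tape holds a count of sheep or a count of anything else is invisible to the machine; only we, standing outside it, read "111" as the number three. That is the intuition Searle's argument trades on.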

Fifth, it is nonetheless interesting and telling that computers give us a way of simulating anything we can describe in sufficient detail of just the right kind. Including neurons and brains.