Monday, July 16, 2018

Shoot for the stars

20180707-_IGP2366

Continuity and discontinuity in literary history, H vs. DH? (Literary Lab 4) – Tales from the Twitterverse [#DH]

If, a few years ago, you’d asked me whether or not a half dozen scholars could have an interesting and fruitful conversation 280 characters at a time, I’d have said “Are you freakin’ kidding me?”–or words to that effect. But it happens, not all the time, but often enough, and it even happened when we were restricted to 140 characters at a time. Such is life in the academic Twittersphere.

Ted Underwood kicked one off on Friday (the 13th, FWIW) and it continued on into Saturday. A half dozen or so joined in and who knows how many followed along–a dozen, 30, 50, more, who knows? I don’t know what Ted expected when he threw that first tweet into the maelstrom, nor does that much matter. What matters is what happened, and that was unplanned, spontaneous, even fun.

Of course, it requires interlocutors who understand the issues at hand well enough that they can speak in code. And they need to read one another charitably and in good faith. But, given those conditions, it is possible to do some good work.

But enough of this. Let’s get to it.

In the next section of this post I reproduce three tweets from that conversation and add some commentary. These set the stage for the next two sections, where I use the well-known Stanford Literary Lab Pamphlet 4 to interrogate the issues raised in the first section; this fleshes out an argument I tossed into the conversation in a five-tweet string.

Dropping science in the Twitterverse

Here’s the tweet that started things off:

So, yeah, it’s a bit aggressive: maybe the received wisdom ain’t necessarily so (as the song goes). And periodization is just dropped in there at the end, as a for-example. Of course, it’s an important case because literary studies is more or less organized according to periods: fields of study, journals, conferences, professional organizations, coursework, all organized by period. To be sure, period isn’t the only parameter of organization; we’ve also got author, genre, and a blither of theoretical proclivities. But it’s an important one.

And it’s one that Underwood has investigated. Here’s the abstract of an article he and Jordan Sellers recently published, The Longue Durée of Literary Prestige [1]:
A history of literary prestige needs to study both works that achieved distinction and the mass of volumes from which they were distinguished. To understand how those patterns of preference changed across a century, we gathered two samples of English-language poetry from the period 1820–1919: one drawn from volumes reviewed in prominent periodicals and one selected at random from a large digital library (in which the majority of authors are relatively obscure). The stylistic differences associated with literary prominence turn out to be quite stable: a statistical model trained to distinguish reviewed from random volumes in any quarter of this century can make predictions almost as accurate about the rest of the period. The “poetic revolutions” described by many histories are not visible in this model; instead, there is a steady tendency for new volumes of poetry to change by slightly exaggerating certain features that defined prestige in the recent past.
Those “poetic revolutions” imply period boundaries, but those boundaries didn’t show up. To be sure, the model did show change, but it was gradual and seemed to have a historical direction–which is a whole different kettle of conceptual and perhaps even ideological fish.
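To make the shape of that method concrete, here’s a toy sketch of the kind of cross-period test the abstract describes–my own reconstruction, not Underwood and Sellers’s actual code, and the data layout and file name are hypothetical. Train a model to tell reviewed from random volumes using word frequencies from one quarter-century, then see how it does on the rest of the period:

```python
# Toy cross-period prediction test (hypothetical data, not the
# authors' code): if prestige markers were period-bound, accuracy
# should collapse outside the training window.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

# Assumed columns: 'year', 'reviewed' (1 = reviewed, 0 = random),
# plus one column per word-frequency feature.
df = pd.read_csv("poetry_volumes.csv")  # hypothetical file
features = [c for c in df.columns if c not in ("year", "reviewed")]

train = df[(df.year >= 1845) & (df.year < 1870)]  # one quarter-century
test = df[(df.year < 1845) | (df.year >= 1870)]   # the rest of the period

model = LogisticRegression(max_iter=1000)
model.fit(train[features], train.reviewed)

print("out-of-period accuracy:",
      accuracy_score(test.reviewed, model.predict(test[features])))
```

Underwood and Sellers found that this kind of accuracy holds up almost as well out of period as in period, which is what licenses the “no revolutions” reading.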

I rather liked that work, and took a close look at it [2]. I particularly liked the apparent directionality it revealed, but let’s set that aside. As for the lack of period boundaries...well, I don’t think periodization is simply a matter of disciplinary hallucination. So I’m inclined to think that the method Underwood and Sellers used simply isn’t sensitive to such matters. But that’s not an argument; it’s merely a casual assertion. Underwood is right, there IS work to be done.

And that’s what caught people’s attention. Daniel Shore entered and exchanged tweets with Ted. Then I suggested color perception as an analogy:

In making that suggestion I had more in mind than the mere imposition of categories on a continuum. Color perception is tricky, and much studied. We know, and have known for quite a while, that there isn’t a simple and direct relationship between perceived color and the wavelength of light. This is not the place for a primer on color vision (the Wikipedia article is a reasonable place to begin). But I can offer a few remarks.

Blue, for example, does not map directly to a fixed region of the electromagnetic spectrum, with violet and green in adjacent fixed regions. The color of a given region in the visual field is calculated over input from three kinds of retinal receptors (cones) having different sensitivities. Moreover, the color of one region is “normalized” against adjacent regions. The upshot is that a red apple will appear to be red under a wide range of viewing conditions. Different illumination means different wavelengths incident on the apple; hence the light that the apple reflects to the eye varies from one situation to the next. Because the brain normalizes, the apple’s color remains constant.
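Here’s a toy illustration of that normalization step–a bare-bones, von Kries-style adaptation of my own devising, not a model of actual cortical processing. Divide each cone response by the scene-wide average for that receptor type, and the illuminant cancels out:

```python
import numpy as np

# Toy von Kries-style color constancy: rescale each cone channel by
# the scene average for that channel, so an object's normalized
# "color" is stable across changes in illumination.

def normalize(scene):
    """scene: (n_patches, 3) array of L, M, S cone responses."""
    return scene / scene.mean(axis=0)

apple = np.array([0.8, 0.3, 0.1])         # apple's reflectance at L, M, S cones
background = np.array([[0.4, 0.4, 0.4],
                       [0.5, 0.6, 0.3]])  # other surfaces in view

for illuminant in (np.array([1.0, 1.0, 1.0]),    # white light
                   np.array([1.4, 1.0, 0.6])):   # reddish light
    scene = np.vstack([apple, background]) * illuminant  # raw input varies...
    print(normalize(scene)[0])  # ...but the apple's normalized values don't
```

In this toy the illuminant cancels exactly; in real vision the constancy is only approximate, but the principle is the same.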

We’ll return to part of that story a bit later.

Let’s return to the Twitter conversation with a pair of remarks by Ryan Heuser:

The conversation continued on. Others joined. It forked here and there. And somewhere in there I introduced a string of tweets about Lit Lab 4.

Is it moral to raise children in a doomed world?

I cried two times when my daughter was born. First for joy, when after 27 hours of labor the little feral being we’d made came yowling into the world, and the second for sorrow, holding the earth’s newest human and looking out the window with her at the rows of cars in the hospital parking lot, the strip mall across the street, the box stores and drive-throughs and drainage ditches and asphalt and waste fields that had once been oak groves. A world of extinction and catastrophe, a world in which harmony with nature had long been foreclosed.
What to do?
Take the widely cited 2017 research letter by the geographer Seth Wynes and the environmental scientist Kimberly Nicholas, which argues that the most effective steps any of us can take to decrease carbon emissions are to eat a plant-based diet, avoid flying, live car free and have one fewer child — the last having the most significant impact by far. Wynes and Nicholas argue for teaching these values in high school, thus transforming society through education. On its face, this proposal might seem sensible. But when values taught in the classroom don’t match the values in the rest of society, the classroom rarely wins. The main problem with this proposal isn’t with the ideas of teaching thrift, flying less or going vegetarian, which are all well and good, but rather with the social model such recommendations rely on: the idea that we can save the world through individual consumer choices. We cannot.
Tragedy?
Of course, nobody really needs to have children. It just happens to be the single strongest drive humans have, the fundamental organizing principle of every human society and the necessary condition of a meaningful human world. Procreation alone makes possible the persistence of human culture through time.

To take Wynes and Nicholas’s recommendations to heart would mean cutting oneself off from modern life. It would mean choosing a hermetic, isolated existence and giving up any deep connection to the future. Indeed, taking Wynes and Nicholas’s argument seriously would mean acknowledging that the only truly moral response to global climate change is to commit suicide. There is simply no more effective way to shrink your carbon footprint. [...]

When my daughter was born I felt a love and connection I’d never felt before: a surge of tenderness harrowing in its intensity. I knew that I would kill for her, die for her, sacrifice anything for her, and while those feelings have become more bearable since the first delirious days after her birth, they have not abated. And when I think of the future she’s doomed to live out, the future we’ve created, I’m filled with rage and sorrow.

Every day brings new pangs of grief. Seeing the world afresh through my daughter’s eyes fills me with delight, but every new discovery is haunted by death.
Read the rest and think on it.

REFLEX

20180712-_IGP2495

Sunday, July 15, 2018

Google AI in Ghana

The question everyone was asking after Jeff Dean (the Google AI and Google Brain team lead) announced that Google AI Lab was coming to Accra was "Why Ghana?"

The answer to that has been clear for a while now but has eluded most people.

When Barack Obama decided to make Ghana the first African country he visited during his presidency, it was the same question most people asked - "Why Ghana?"

The answer is simple; Ghana is the future of Africa.

When I decided to move away from Nigeria to Ghana almost a decade ago, most people could not understand why, because they had not visited Ghana. I had, and I fell in love, literally.

I married a Fanti woman. I had already fallen for the country before I met my wife because the place was different.

It didn't have the hardcore market edge of places like Nigeria and South Africa, but it was a place where I could live and work.

Ghana has relatively stable electricity, relative security, and decent internet infrastructure. It also has some of the best tourist destinations in the developing world. All of this is present without any hype. I moved our business there and haven't looked back since.

This choice is in spite of the challenges the country has gone through in recent times. I have remained and will continue to do so.

Google, however, probably has its own reasons for choosing Ghana; Jeff Dean explained that they had to do with the country's robust network of academic institutions as well as its infrastructure.

Google has been a significant investor in strengthening those institutions and the infrastructure around them.

An Alphabet subsidiary named CSquared, spun out of Google, has quietly been laying an extensive fiber optic backbone in Accra and Kampala to help solve the last-mile internet problem that Eric Schmidt mentioned in Barcelona.

The internet speeds I get in the office and at home in Accra are now comparable with California speeds.

Ghana has also become a melting pot for education in the sub-region over the years because of the relative stability of the country and the high standards of its institutions, such as the highly-regarded Ashesi University.

Jordan Peterson interviews Nina Paley


Nina Paley is an animator and artist who makes unbelievably beautiful films. We discussed her life, her views, and her work, interspersing her animation throughout. Nina has done a particularly brilliant job of animating Exodus as a feature-length film (see www.sedermasochism.com, as well as her Vimeo channel https://vimeo.com/user2983855).
The interview took place sometime last year, before Seder-Masochism was finished. They also discuss Sita Sings the Blues, religion, her artistic process, and copyright. I've got a bunch of posts on SSTB.

BTW, Nina has a prayer to her muse. She recites it at about 50:20:
Our idea, which art in the ether, that cannot be named, thy vision come, thy will be done, on earth as it is in abstraction. Give us this day, our daily spark, and forgive us our criticism, as we forgive those who critique against us. And lead us not into stagnation, but deliver us from ego; for thine is the vision, the power, and the glory forever. Amen.
Peterson remarks:
I would interpret that as a mantra that opens up the gateway between you and this transcendent force that allows people religious inspiration. And you're doing something like clearing out your ego. And I think it is very interesting that it is associated with something like The Lord's Prayer.

Well, hello there!

20180707-_IGP2373

Two books on universal basic income (UBI)

Annie Lowrey, GIVE PEOPLE MONEY: How a Universal Basic Income Would End Poverty, Revolutionize Work, and Remake the World. 263 pp. Crown. $26.

Andrew Yang, THE WAR ON NORMAL PEOPLE: The Truth About America’s Disappearing Jobs and Why Universal Basic Income Is Our Future. 284 pp. Hachette Books. $28.
From the review:
The two books cover so much of the same terrain that I’m tempted to wonder whether they were written by the same robot, programmed for slightly different levels of giddy enthusiasm. Both cite Martin Luther King Jr., Richard Nixon and Milton Friedman as early supporters of a U.B.I. Both urge that a U.B.I. be set at $1,000 a month for every American. Both point out that with poverty currently defined as an income for a single adult of less than $12,000 a year, such a U.B.I. would, by definition, eliminate poverty for the 41 million Americans now living below the poverty line. It would also improve the bargaining power of millions of low-wage workers — forcing employers to increase wages, add benefits and improve conditions in order to retain them. If a U.B.I. replaced specific programs for the poor, it would also reduce government bureaucracy, minimize government interference in citizens’ lives and allow people to avoid the stigma that often accompanies government assistance. By virtue of being available to all, a U.B.I. would not only guarantee the material existence of everyone in a society; it would establish a baseline for what membership in that society means.

U.B.I.’s critics understandably worry that it would spur millions to drop out of the labor force, induce laziness or at least rob people of the structure and meaning work provides. Both Yang and Lowrey muster substantial research to rebut these claims. I’m not sure they need it. After all, $12,000 a year doesn’t deliver a comfortable life even in the lowest-cost precincts of America, so there would still be plenty of incentive to work. Most of today’s jobs provide very little by way of fulfillment or creativity anyway.

A U.B.I. might give recipients a bit more time to pursue socially beneficial activities, like helping the elderly or attending to kids with special needs or perhaps even starting a new business. Yang suggests it would spur a system of “social credits” in which people trade their spare time by performing various helpful tasks for one another. (I.R.S. be warned.) Surely a U.B.I. would help compensate many people — especially women — for the unpaid labor they already contribute. As Lowrey points out, some 40 million family caregivers in America provide half a trillion dollars of unpaid adult care annually. Child care has become so expensive that one of every three stay-at-home mothers today lives below the poverty line (compared with 14 percent in 1970). [...]

Whatever the source of funds, it seems a safe bet that increased automation will allow the economy to continue to grow, making a U.B.I. more affordable. A U.B.I. would itself generate more consumer spending, stimulating additional economic activity. And less poverty would mean less crime, incarceration and other social costs associated with deprivation.
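A quick back-of-envelope on the scale involved–my arithmetic, not a figure from either book or the review, and the population count is a rough 2018 estimate:

```python
# Gross annual cost of a $1,000/month UBI for every American adult
# (back-of-envelope; the population figure is a rough 2018 estimate).
monthly_ubi = 1_000
us_adults = 250_000_000  # Americans 18 and over, approximately

annual_cost = monthly_ubi * 12 * us_adults
print(f"${annual_cost / 1e12:.1f} trillion per year")  # ~$3.0 trillion
```

That’s a gross figure, of course, before netting out the existing programs a U.B.I. might replace.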

Saturday, July 14, 2018

Hasui Kawase, Japanese print-maker

Friday, July 13, 2018

Subjectivity vs. Objectivity in the epistemic and ontological senses (John Searle)

I bumped this to the top of the queue because I added another (very interesting) video at the very end, where Searle sets up an analogy between building an artificial heart and building an honest-to-dog artificial brain.

Here’s a video in which John Searle discusses AI with Luciano Floridi. I haven’t watched Floridi’s section, nor the discussion afterward. Here and now I’m interested in a distinction Searle makes, starting at about 9:12:



He points out that the distinction between objectivity and subjectivity has both an ontological and an epistemic aspect, which is generally neglected. This is VERY important. He uses it to clarify what’s going on when we worry about whether, or in what respect, computers can be said to think, or to be intelligent. That’s a complicated and worthwhile discussion and, if the topic interests you, by all means listen to what Searle has to say.

My immediate interest is somewhat more restricted. For some time now I’ve been complaining that, unfortunately, “subjective” has come to mean something like “idiosyncratically variable among different individuals.” But a more basic meaning is simply, “of or pertaining to subjects.” Well, Searle informs me that I’m distinguishing between “subjective” in the epistemic sense (idiosyncratically variable) and “subjective” in the ontological sense.

Ontologically, subjectivity has to do with a mode of existence: being experienced by a subject. The experience of a literary text is certainly subjective in this sense. And, as the meaning of texts depends on experiencing them, meaning must be subjective as well. This is a matter of ontology.

Claims about the meaning of a text must necessarily be observer relative and so those claims are epistemically subjective. There are no objective claims about the meanings of texts, though some claims may well be intersubjectively held among some group of readers (an interpretive community in Fish’s sense?).

My claim about literary form is that it is an objective property of the interaction between texts and readers. It is thus not subjective in either the ontological or epistemic senses. By studying the form, however, we can learn about how literary subjectivity works. For literary subjectivity is a real phenomenon of the human world.

Friday Fotos: Wild Art in Jersey City

20180712-_IGP2503

20180712-_IGP2483

20180712-_IGP2504

20180712-_IGP2492

20180712-_IGP2496

Training your mind, Michael Nielsen on Anki and human augmentation

Michael Nielsen, an AI researcher at Y Combinator Research, has written a long essay, Augmenting Long-term Memory, which is about Anki, a computer-based tool for training long-term memory.
In this essay we investigate personal memory systems, that is, systems designed to improve the long-term memory of a single person. In the first part of the essay I describe my personal experience using such a system, named Anki. As we'll see, Anki can be used to remember almost anything. That is, Anki makes memory a choice, rather than a haphazard event, to be left to chance. I'll discuss how to use Anki to understand research papers, books, and much else. And I'll describe numerous patterns and anti-patterns for Anki use. While Anki is an extremely simple program, it's possible to develop virtuoso skill using Anki, a skill aimed at understanding complex material in depth, not just memorizing simple facts.

The second part of the essay discusses personal memory systems in general. Many people treat memory ambivalently or even disparagingly as a cognitive skill: for instance, people often talk of “rote memory” as though it's inferior to more advanced kinds of understanding. I'll argue against this point of view, and make a case that memory is central to problem solving and creativity. Also in this second part, we'll discuss the role of cognitive science in building personal memory systems and, more generally, in building systems to augment human cognition. In a future essay, Toward a Young Lady's Illustrated Primer, I will describe more ideas for personal memory systems.

The essay is unusual in style. It's not a conventional cognitive science paper, i.e., a study of human memory and how it works. Nor is it a computer systems design paper, though prototyping systems is my own main interest. Rather, the essay is a distillation of informal, ad hoc observations and rules of thumb about how personal memory systems work. I wanted to understand those as preparation for building systems of my own. As I collected these observations it seemed they may be of interest to others. You can reasonably think of the essay as a how-to guide aimed at helping develop virtuoso skills with personal memory systems. But since writing such a guide wasn't my primary purpose, it may come across as a more-than-you-ever-wanted-to-know guide.

To conclude this introduction, a few words on what the essay won't cover. I will only briefly discuss visualization techniques such as memory palaces and the method of loci. And the essay won't describe the use of pharmaceuticals to improve memory, nor possible future brain-computer interfaces to augment memory. Those all need a separate treatment. But, as we shall see, there are already powerful ideas about personal memory systems based solely on the structuring and presentation of information.
The method of loci is well-known, and I'm sure you can come up with a lot of information just by googling the term. You might even come up with my encyclopedia article, Visual Thinking, where I treat it as one form of visual thinking among others.

Before returning to Nielsen and Anki, I want to digress to a different form of mental training. When I was young, people didn’t have personal computers, nor even small hand-held electronic calculators. If you had to make a lot of calculations, you might have used a desktop mechanical calculator, or a slide rule–my father had become so fluent with his that he didn’t even have to look at it while doing complex multi-step calculations–or you might have mastered mental calculation.

Some years ago I reviewed biographies of John von Neumann and Richard Feynman; both books mentioned that their subjects were wizards of mental calculation. I observed:
Feynman and von Neumann worked in fields where calculational facility was widespread and both were among the very best at mental mathematics. In itself such skill has no deep intellectual significance. Doing it depends on knowing a vast collection of unremarkable calculational facts and techniques and knowing one's way around in this vast collection. Before the proliferation of electronic calculators the lore of mental math used to be collected into books on mental calculation. Virtuosity here may have gotten you mentioned in "Ripley's Believe it or Not" or a spot on a TV show, but it wasn't a vehicle for profound insight into the workings of the universe.

Yet, this kind of skill was so widespread in the scientific and engineering world that one has to wonder whether there is some connection between mental calculation, which has largely been replaced by electronic calculators and computers, and the conceptual style, which isn't going to be replaced by computers anytime soon. Perhaps the domain of mental calculations served as a matrix in which the conceptual style of Feynman, von Neumann, (and their peers and colleagues) was nurtured.
Then, citing the work of Jean Piaget, I suggested ever so briefly why that might be so. However, once powerful handheld calculators became widely available, skill in mental calculation was no longer necessary. These days one may hear of savants who have such skills, but that's pretty much it.

Returning to Nielsen and Anki: as his essay evolves, he suggests that more than mere memory is at stake. After explaining Anki basics he describes how he used Anki to help him learn enough about AlphaGo–the first computer system to beat the best human experts at Go–to write an article for Quanta Magazine. Alas:
I knew nothing about the game of Go, or about many of the ideas used by AlphaGo, based on a field known as reinforcement learning. I was going to need to learn this material from scratch, and to write a good article I was going to need to really understand the underlying technical material.
He then explains what he did. The upshot:
This entire process took a few days of my time, spread over a few weeks. That's a lot of work. However, the payoff was that I got a pretty good basic grounding in modern deep reinforcement learning. This is an immensely important field, of great use in robotics, and many researchers believe it will play an important role in achieving general artificial intelligence. With a few days work I'd gone from knowing nothing about deep reinforcement learning to a durable understanding of a key paper in the field, a paper that made use of many techniques that were used across the entire field. Of course, I was still a long way from being an expert. There were many important details about AlphaGo I hadn't understood, and I would have had to do far more work to build my own system in the area. But this foundational kind of understanding is a good basis on which to build deeper expertise.
He then explains how he used Anki to do shallow reads of papers. I'm not going to excerpt or summarize that material, but I'll point out that doing shallow reads is a very useful skill. When I was in graduate school I prepared abstracts of the current literature for The Journal of Computational Linguistics. While some articles and tech reports had good abstracts, many did not. In those cases I'd have to read the article and write an abstract; I gave myself an hour, perhaps a bit more, to write a 250-word abstract. I gave those articles a shallow read. How'd I do it? Hmmmm... I'll get back to you on that. It's quite possible that Nielsen's Anki process is better than the one I used.

Yet:
Really good resources are worth investing time in. But most papers don't fit this pattern, and you quickly saturate. If you feel you could easily find something more rewarding to read, switch over. It's worth deliberately practicing such switches, to avoid building a counter-productive habit of completionism in your reading. It's nearly always possible to read deeper into a paper, but that doesn't mean you can't easily be getting more value elsewhere. It's a failure mode to spend too long reading unimportant papers.
My process was certainly good enough to make that go/no-go decision.
Nielsen then goes on to discuss this and that use of Anki, suggesting:
Anki isn't just a tool for memorizing simple facts. It's a tool for understanding almost anything. It's a common misconception that Anki is just for memorizing simple raw facts, things like vocabulary items and basic definitions. But as we've seen, it's possible to use Anki for much more advanced types of understanding. My questions about AlphaGo began with simple questions such as “How large is a Go board?”, and ended with high-level conceptual questions about the design of the AlphaGo systems – on subjects such as how AlphaGo avoided over-generalizing from training data, the limitations of convolutional neural networks, and so on.

Part of developing Anki as a virtuoso skill is cultivating the ability to use it for types of understanding beyond basic facts. Indeed, many of the observations I've made (and will make, below) about how to use Anki are really about what it means to understand something.
That's the good stuff.
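For what it’s worth, the machinery underneath all this is simple. Anki’s scheduling descends from the SM-2 family of spaced-repetition algorithms: answer a card correctly and its review interval grows multiplicatively; fail it and the interval resets. Here’s a stripped-down sketch–my simplification, not Anki’s actual code:

```python
from dataclasses import dataclass

# Stripped-down SM-2-style spaced repetition: a successful review
# multiplies the interval by an "ease" factor; a failed one resets
# the interval and makes future growth a little slower.

@dataclass
class Card:
    question: str
    answer: str
    interval_days: float = 1.0
    ease: float = 2.5

def review(card: Card, remembered: bool) -> None:
    if remembered:
        card.interval_days *= card.ease          # wait longer next time
    else:
        card.interval_days = 1.0                 # forgot: see it again tomorrow
        card.ease = max(1.3, card.ease - 0.2)    # and grow intervals more slowly

card = Card("How large is a Go board?", "19 x 19")
for outcome in (True, True, True):
    review(card, outcome)
print(card.interval_days)  # 1 -> 2.5 -> 6.25 -> 15.625 days
```

That multiplicative growth is what makes the practice cheap: a mature card comes up only every few months, so even a large collection stays manageable.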

Where's he going? Human augmentation:
The human-computer interaction (HCI) community has tried to achieve it in the systems they build, not just for memory, but for augmenting human cognition in general. But I don't think it's worked so well. It seems to me that they've given up a lot of boldness and imagination and aspiration in their design. (As an outsider, I'm aware this comment won't make me any friends within the HCI community. On the other hand, I don't think it does any good to be silent, either. When I look at major events within the community, such as the CHI conference, the overwhelming majority of papers seem timid when compared to early work on augmentation. It's telling that publishing conventional static papers (pdf, not even interactive JavaScript and HTML) is still so central to the field.) At the same time, they're not doing full-fledged cognitive science either – they're not developing a detailed understanding of the mind. Finding the right relationship between imaginative design and cognitive science is a core problem for work on augmentation, and it's not trivial.

In a similar vein, it's tempting to imagine cognitive scientists starting to build systems. While this may sometimes work, I think it's unlikely to yield good results in most cases. Building effective systems, even prototypes, is difficult. Cognitive scientists for the most part lack the skills and the design imagination to do it well.

This suggests to me the need for a separate field of human augmentation. That field will take input from cognitive science. But it will fundamentally be a design science, oriented toward bold, imaginative design, and building systems from prototype to large-scale deployment.
* * * * *

See also my post, Beyond "AI" – toward a new engineering discipline, in which I excerpt Michael Jordan, "Artificial Intelligence — The Revolution Hasn’t Happened Yet". Jordan discusses human augmentation under the twin rubrics of "Intelligence Augmentation" and "Intelligence Infrastructure".

Thursday, July 12, 2018

Stairway to the sun

20171207-_IGP1500

Innovative governance, side-slipping the nation-state

Mark Lutter, Local Governments Are Changing the World, Cato Unbound, July 11, 2018.
The innovative governance movement is interested in improving governance via the creation of new jurisdictions with significant degrees of autonomy. These new jurisdictions could import successful institutions to create the conditions for catch up growth. Or the new jurisdictions could experiment with new forms of governance, to push the frontier. The overarching thesis of innovative governance is that the existing equilibrium of political units is overly resistant to change, and small, new jurisdictions, particularly on greenfield sites, are an effective mechanism to institutional improvements.

The modern innovative governance movement was launched ten years ago when Patri Friedman and Wayne Gramlich created the Seasteading Institute. Critical of the lack of success of traditional libertarian attempts at social change, the Seasteading Institute argued that new societies, “seasteads,” could be created in international waters. Seasteads would provide a blank slate for institutional innovation and experimentation. Successful models of governance could attract new residents, while unsuccessful ones would fail. This iterative, evolutionary process of governance improvements could help push the frontier of the optimal type of government. [...]

Historically, the innovative governance movement has been heavily influenced, and arguably led, by techno-libertarians, with Romer being the obvious exception. This is beginning to change. However, while the techno-libertarian attitude was arguably important for the vision, it hampered the development of more practical capacities necessary for the creation of charter cities. [...]

Luckily, things are changing, making charter cities more viable than they were ten years ago. A handful of influential groups are beginning to think about charter cities. That said, they’re coming at it from different angles, and few have the full vision. However, with proper coordination, it’s possible to rapidly, within 2 to 3 years, create the environment within which several charter city projects can be launched. Let’s consider some of the perspectives at hand.

Economists: Most development economists are sympathetic to charter cities. While some are strongly critical, there is nevertheless a general sense that charter cities are an idea worth trying. The downside is that economists don’t get career points for discussing charter cities. For example, Romer, who is frequently listed as a contender for the Nobel Prize, gave a TED talk on charter cities rather than publishing an academic article.

Silicon Valley: Silicon Valley is interested in cities. YCombinator made a big splash about building a city, though it was later toned down to research. Seasteading is big enough to be made fun of on HBO’s Silicon Valley. Multiple unicorn founders have told me they are building up a war chest such that they can build a city when they exit.

Humanitarians: While the refugee crisis has dropped out of the news recently, there remains interest in improving refugee camps via special economic zones and creating charter cities as a mechanism for economic development to lower the demand for emigration. The Jordan Compact gives aid and favorable grants to Jordan in exchange for work rights for refugees and increasing refugee participation in special economic zones. Kilian Kleinschmidt, who formerly ran the Za’atari refugee camp in Jordan and is on the Board of Advisers of my nonprofit, the Center for Innovative Governance Research, argues for special development zones for migrants. And of course, there is the aforementioned nonprofit Refugee Cities. Michael Castle-Miller is developing the legal and institutional frameworks for these charter cities via his teams at Politas Consulting.

New-city projects: There are dozens of new city projects around the world. These new city projects are real estate plays, building satellite cities of 50,000 or more residents. Investments in these new cities are rarely under $1 billion. Nkwashi, a new city project in Zambia, is one of my favorite examples. Mwiya Musokotwane, the CEO, is on the Board of Advisers for the Center for Innovative Governance Research.

Some of the new city projects are beginning to think about governance, which is a natural path as their revenues are based on land values.

A final interesting development, which is hard to place or categorize, is that Anders Fogh Rasmussen, the former Prime Minister of Denmark and former Secretary General of NATO, also has an interest in charter cities and special economic zones. He recently launched a new foundation, the Alliance of Democracies Foundation. One of the key initiatives of the Foundation is Expeditionary Economics, which is focusing, as previously mentioned, on charter cities and special economic zones.

Wednesday, July 11, 2018

Changizi on what the arts reveal about the mind

TH: Stepping back for a moment, how do you conceive of the relationship between the arts and sciences in general? Are there genuine tensions there, or is that idea a mere artifact of history and institutional traditions and ruts?

MC: Often this question ends up about whether science can ever come to understand the arts. But I think this misses the more important arrow here. It’s not about how science can illuminate the arts, but the other way around . . . science is simple. Experimentally, we’re ingenious in our controls, but even so the complexity of stimuli is ridiculously simple. The number of parameters that we can play with is only a handful at a time. If some fantastically complex stimulus turns out to hit some sweet spot for us, our careful lab manipulations won’t ever find it.

But artists can find and have found these sweet spots. Not any artist alone, necessarily. But together they act as a cultural evolutionary process that finds sweet spots of stimuli that evoke humans in some way, and in ways science would never find. It’s the arts that discovered music, not science. In this light, the arts is a massive laboratory experiment of sorts. Not well-controlled in the usual lab sense. But one capable, in a sense, of making scientific discoveries. This is why, in my own research, I often use massive amounts of artistic stimuli as data. What have these cultural-artistic tendencies discovered? What does it tell us about the brain, or about the innate mechanisms we possess?
A few years ago I wrote a pair of posts that speak to this: