Thursday, August 22, 2019

A window of opportunity?

A critique of pure learning and what artificial neural networks can learn from animal brains


The article linked in the tweet: Anthony M. Zador, A critique of pure learning and what artificial neural networks can learn from animal brains, Nature Communications:
Abstract: Artificial neural networks (ANNs) have undergone a revolution, catalyzed by better supervised learning algorithms. However, in stark contrast to young animals (including humans), training such networks requires enormous numbers of labeled examples, leading to the belief that animals must rely instead mainly on unsupervised learning. Here we argue that most animal behavior is not the result of clever learning algorithms—supervised or unsupervised—but is encoded in the genome. Specifically, animals are born with highly structured brain connectivity, which enables them to learn very rapidly. Because the wiring diagram is far too complex to be specified explicitly in the genome, it must be compressed through a “genomic bottleneck”. The genomic bottleneck suggests a path toward ANNs capable of rapid learning.
From the article, on learning:
In ANN research, the term “learning” has a technical usage that is different from its usage in neuroscience and psychology. In ANNs, learning refers to the process of extracting structure—statistical regularities—from input data, and encoding that structure into the parameters of the network. These network parameters contain all the information needed to specify the network. For example, a fully connected network with 𝑁 neurons might have one parameter (e.g., a threshold) associated with each neuron, and an additional 𝑁^2 parameters specifying the strengths of synaptic connections, for a total of 𝑁+𝑁^2 free parameters. Of course, as the number of neurons 𝑁 becomes large, the total parameter count in a fully connected ANN is dominated by the 𝑁^2 synaptic parameters.
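To make the parameter arithmetic concrete, here is a minimal sketch of my own (not from Zador's paper) of the N + N^2 accounting for a fully connected network; the example sizes are arbitrary:

```python
# A minimal sketch (my illustration, not from Zador's paper) of the
# N + N^2 parameter count for a fully connected network of N neurons.
def fully_connected_param_count(n_neurons: int) -> int:
    thresholds = n_neurons        # one threshold per neuron
    synapses = n_neurons ** 2     # one weight per ordered pair of neurons
    return thresholds + synapses

for n in (10, 1_000, 100_000):
    total = fully_connected_param_count(n)
    print(f"N = {n:>7,}: {total:,} parameters "
          f"({n**2 / total:.2%} of them synaptic)")
```

As N grows, the synaptic weights swamp the thresholds, which is the sense in which the wiring, not the per-neuron parameters, carries nearly all the information.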

There are three classic paradigms for extracting structure from data, and encoding that structure into network parameters (i.e., weights and thresholds). In supervised learning, the data consist of pairs—an input item (e.g., an image) and its label (e.g., the word “giraffe”)—and the goal is to find network parameters that generate the correct label for novel pairs. In unsupervised learning, the data have no labels; the goal is to discover statistical regularities in the data without explicit guidance about what kind of regularities to look for. For example, one could imagine that with enough examples of giraffes and elephants, one might eventually infer the existence of two classes of animals, without the need to have them explicitly labeled. Finally, in reinforcement learning, data are used to drive actions, and the success of those actions is evaluated based on a “reward” signal.

Much of the progress in ANNs has been in developing better tools for supervised learning. If a network has too many free parameters, the network risks “overfitting” data, i.e. it will generate the correct responses on the training set of labeled examples, but will fail to generalize to novel examples. In ANN research, this tension between the flexibility of a network (which scales with the number of neurons and connections) and the amount of data needed to train the network (more neurons and connections generally require more data) is called the “bias-variance tradeoff” (Fig. 1). A network with more flexibility is more powerful, but without sufficient training data the predictions that network makes on novel test examples might be wildly incorrect—far worse than the predictions of a simpler, less powerful network. To paraphrase “Spiderman”: With great power comes great responsibility (to obtain enough labeled training data). The bias-variance tradeoff explains why large networks require large amounts of labeled training data.
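To see the tradeoff in miniature, here is a toy sketch of my own (not from the paper): with only eight noisy training points, a flexible degree-7 polynomial fits the training set almost perfectly but typically predicts novel points worse than a simple line. All of the numbers are assumptions chosen for illustration.

```python
# Toy bias-variance illustration (my sketch, not Zador's): a flexible model
# fits scarce training data better but generalizes worse than a simple one.
import numpy as np

rng = np.random.default_rng(0)

def noisy_line(x):                      # "true" process: y = 2x + noise
    return 2 * x + rng.normal(0, 0.5, size=x.shape)

x_train = np.linspace(0, 1, 8)          # scarce labeled data
y_train = noisy_line(x_train)
x_test = np.linspace(0, 1, 200)         # novel examples
y_test = noisy_line(x_test)

for degree in (1, 7):                   # simple model vs. very flexible model
    coeffs = np.polyfit(x_train, y_train, degree)
    train_mse = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test_mse = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    print(f"degree {degree}: train MSE {train_mse:.3f}, test MSE {test_mse:.3f}")
```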
Much later:
In this view, supervised learning in ANNs should not be viewed as the analog of learning in animals. Instead, since most of the data that contribute to an animal’s fitness are encoded by evolution into the genome, it would perhaps be just as accurate (or inaccurate) to rename it “supervised evolution.” Such a renaming would emphasize that “supervised learning” in ANNs is really recapitulating the extraction of statistical regularities that occurs in animals by both evolution and learning. In animals, there are two nested optimization processes: an outer “evolution” loop acting on a generational timescale, and an inner “learning” loop, which acts on the lifetime of a single individual. Supervised (artificial) evolution may be much faster than natural evolution, which succeeds only because it can benefit from the enormous amount of data represented by the life experiences of quadrillions of individuals over hundreds of millions of years.
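The nested-loop picture lends itself to a schematic. Below is a deliberately cartoonish sketch of my own (not Zador's model; every function is a made-up stand-in): an outer loop selects "genomes" across generations, and an inner loop does a little lifetime learning before fitness is scored.

```python
# Schematic of two nested optimization loops (my toy sketch, not Zador's):
# an outer "evolution" loop over innate parameters, and an inner "learning"
# loop over a single lifetime. Every function here is a made-up stand-in.
import random

random.seed(0)

def random_genome():
    return [random.gauss(0, 1) for _ in range(8)]      # compressed "wiring rules"

def mutate(genome):
    return [g + random.gauss(0, 0.1) for g in genome]

def develop(genome):
    return list(genome)                                 # genome -> innate parameters

def lifetime_learning(params, steps=50):
    target = [0.5] * len(params)                        # stands in for "experience"
    for _ in range(steps):
        params = [p + 0.05 * (t - p) for p, t in zip(params, target)]
    return params

def fitness(params):
    return -sum((p - 0.5) ** 2 for p in params)         # higher is better

population = [random_genome() for _ in range(20)]
for generation in range(30):                            # outer loop: generations
    scored = sorted(population,
                    key=lambda g: fitness(lifetime_learning(develop(g))),
                    reverse=True)
    parents = scored[:5]                                 # select the fittest genomes
    population = [mutate(random.choice(parents)) for _ in range(20)]

print("best post-learning fitness:", fitness(lifetime_learning(develop(scored[0]))))
```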
And so:

Cutting the financial sector down to size [it's in the air]

That’s the provisional title I used for my latest piece in Inside Story. Peter Browne, the editor, gave it the longer and clearer title “Want to reduce the power of the finance sector? Start by looking at climate change”.

The central idea is a comparison between the process of decarbonizing the world economy and that of definancialising it, by reducing the power and influence of the financial sector. Both seemed almost impossible only a decade ago, but the first is now well under way.

There’s also an analogy in economists’ favored approach to both problems: reliance on price-based measures such as carbon taxes and Tobin taxes. Despite the theoretical appeal of such measures, it looks as if regulation will end up doing much of the heavy lifting.
Note yesterday's post about Farhad Manjoo making pretty much the same call.

Wednesday, August 21, 2019

Farhad Manjoo sees revolution on the horizon, around the bend, and at the end of the tunnel

Really?

Here's his NYTimes column, "C.E.O.s Should Fear a Recession. It Could Mean Revolution." He's been reflecting on the recent announcement by the Business Roundtable (CEOs of 200 megacorporations):
that the era of soulless corporatism was over. The Business Roundtable once held that a corporation’s “paramount duty” was to its shareholders. Now, the Roundtable is singing a new, more inclusive tune. A corporation, it says, should balance the interests of its shareholders with those of other “stakeholders,” including customers, employees, suppliers and local communities.
He thinks that's empty PR nonsense. I think he's right.

He believes they're scared: "A recession looms". And they may well be scared. But revolution? He points out that, while many people lost their homes and rural areas were devastated in the wake of the 2008 financial implosion,
Corporate profits grew as if there were no tomorrow, but they didn’t trickle down to everyone else. Instead, dividends and stock buybacks got bigger while C.E.O. pay went through the rose-gold roof. The rest of us got smartphones, money-losing conveniences — Uber, WeWork, Netflix and meal delivery apps — and mountains of student debt.
What happens when the next recession hits? Who knows, but we'll find out soon enough. But here's what Manjoo thinks/hopes:
And so, when recession comes, we’ll be right to ask: Was that it? Is this the best it gets? And if so, isn’t it time to go full Elizabeth Warren — to make some fundamental, radical changes to how the American economy works, so that we might prevent decades more of growth that disproportionally benefits the titans among us?
And, so he thinks, we the people will revolt. Just how we'll do that, he doesn't say.

Now, as some of you may know, Kim Stanley Robinson wrote a book about that, New York 2140. As the title indicates, it's set in the future, a very different world where the seas have risen 50 feet. But the institutional structure of that (imagined) world is much like that of our current world. Disaster strikes and millions of people are saddled with housing debt they can't pay. They go on rent strikes and so forth, and this time the banks get nationalized as a condition of bailout (p. 602): "Finance was now for the most part a privately operated public utility." The revolt worked. Perhaps. Robinson ends the book at that point, so we don't know how things worked out.

But I don't see that happening now. I don't think the organizational infrastructure is in place to make it happen.

Alas.

But who knows?

Synaptic overload, orthogonal views along the galectic


This is your brain on stories – "the representation of language semantics is independent of the sensory modality through which the semantic information is received"


The article cited: Deniz et al., "The representation of semantic information across human cerebral cortex during listening versus reading is invariant to stimulus modality":
Abstract

An integral part of human language is the capacity to extract meaning from spoken and written words, but the precise relationship between brain representations of information perceived by listening versus reading is unclear. Prior neuroimaging studies have shown that semantic information in spoken language is represented in multiple regions in the human cerebral cortex, while amodal semantic information appears to be represented in a few broad brain regions. However, previous studies were too insensitive to determine whether semantic representations were shared at a fine level of detail rather than merely at a coarse scale. We used fMRI to record brain activity in two separate experiments while participants listened to or read several hours of the same narrative stories, and then created voxelwise encoding models to characterize semantic selectivity in each voxel and in each individual participant. We find that semantic tuning during listening and reading are highly correlated in most semantically-selective regions of cortex, and models estimated using one modality accurately predict voxel responses in the other modality. These results suggest that the representation of language semantics is independent of the sensory modality through which the semantic information is received.

SIGNIFICANCE STATEMENT

Humans can comprehend the meaning of words from both spoken and written language. It is therefore important to understand the relationship between the brain representations of spoken or written text. Here we show that although the representation of semantic information in the human brain is quite complex, the semantic representations evoked by listening versus reading are almost identical. These results suggest that the representation of language semantics is independent of the sensory modality through which the semantic information is received.

Tuesday, August 20, 2019

Those were the days, 51 Pacific, Fall 2012

Outside & Up Top (w/ flare) (Bayonne in the distance)

Inside, Work in Progress (reference material, yes, why not?)

Deep Inside, contemplation ("she came in thru the bathroom window")

Deirdre McCloskey on liberalism (in the classical sense)

Eric Wallach interviews economist Deirdre McCloskey in The Politic, February 2019.
The Yale Politic: According to Bourgeois Equality, the average U.S. resident’s real income per head increased from $3/day in 1800 to $132/day in 2010– an increase of 44x. You attribute this ‘betterment’ to the ideas of dignity and liberty. What do you mean more concretely?

Deirdre McCloskey: It’s not exactly “according to [that excellent volume of 2016] Bourgeois Equality.” It’s rather “according to the solid scientific consensus of economic historians.” Concretely I mean that the bizarre 18th-century idea of liberalism—which is the theory of a society composed entirely of free people, liberi, and no slaves—gave ordinary people the notion that they could have a go. And go they did. In the earliest if hesitatingly liberal societies such as Britain and France, and among the liberi in societies still fully dominated by traditional hierarchies such as Russia and much of Italy, or the slave states of the United States, the turn of the 19th century saw a sharp rise of innovation. “Innovation” means new ideas in technology and organization and location, ranging from the electric motor to shipping containers to opening a new hairdressing salon in town, or to moving to Chicago away from Jim Crow and sharecropping. Since 1800, with no believable signs of letting up, it has improved the material lives of the poorest among us by startling percentages—4,300 percent in some places (that factor of 44), or 10,000 percent including improvements in quality, or at worst 1,000 percent worldwide by conventional measures including stagnant places, in a world in which rises of 100 percent had been rare and on Malthusian grounds temporary.
This reminded me of a passage from Health of Nations (Basic Books 1987, p. 184), by Leonard Sagan:
The history of rapid health gains in the United States is not unique; the rate at which death rates have fallen is even more rapid in more recently modernizing countries. The usual explanations for this dramatic improvement—better medical care, nutrition, or clean water—provide only partial answers. More important in explaining the decline in death worldwide is the rise of hope ... [through] the introduction of the transistor radio and television, bringing into the huts and shanties of the world the message that progress is possible, that each individual is unique and of value, and that science and technology can provide the opportunity for fulfillment of these hopes.
I note as well that "the bizarre 18th-century idea of liberalism" is a Rank 3 idea, to invoke the account of cultural ranks that David Hays and I have developed.

Continuing with the interview:
In Bourgeois Equality, you caution: “But in any event the safety net, with or without holes, is not the main lifter of the poor in the United States, the Netherlands, Switzerland, Japan, Sweden, or the others. The way to lift is the Great Enrichment.” To what extent is the social safety net instrumentally valuable for ‘betterment?’ What might the U.S. look like with no safety net whatsoever, and what might it look like with a perfect safety net?

It’s unwise to turn the issue of helping the poor into an on/off, none/perfect, exist/not question. We need to be seriously quantitative about such matters. On/off doesn’t answer the important question, which is always more/less. People think they are making a clever remark against liberalism by saying, “Well, we need some government.” Yes, certainly. But how much? (Will Rogers in the 1920s used to say, “Just be glad you don’t get the government you pay for.”) And the liberals think they are making a clever remark in reply when they say, “But look at such-and-such an example of governmental failure.” Neither makes a lot of sense. We need to know How Much, how much the market fails, how much the government fails, what number between zero and 100 percent should be run by experts in Washington as against you in your neighborhood and business and home. I have an essay a few years ago making the point in technical economics, “The Two Movements in Economic Thought, 1700-2000: Empty Economic Boxes Revisited.”

But from the non-technical point of view one can assemble the ethical justification for liberalism by honoring both versions of the Golden Rules (and not Trump’s version: “Those who have the gold, rule”). The late first-century BCE Jewish sage Hillel of Babylon put it negatively yet reflexively: “Do not do unto other what you would not want done unto yourself.” It’s masculine, a guy-liberalism, a gospel of justice, roughly the so-called Non-Aggression Axiom as articulated by libertarians 1.0 since the word “libertarian” was coined in the 1950s. [...]

On the other hand, the early first-century CE Jewish sage Jesus of Nazareth put it positively: “Do unto others as you would have them do unto you.” It’s gal-liberalism, a gospel of love, placing upon us an ethical responsibility to do more than pass by on the other side. Be a good Samaritan. Be nice. It is “positive” liberty, which Berlin and I think is a misuse of “liberty,” yet agree that some amount of it is anyway an ethical responsibility. No woman is an island, entire of herself.

In treating others, a humane libertarianism 2.0 attends to both Golden Rules. The one corrects a busybody pushing around. The other corrects an inhumane selfishness.

So here’s what a Liberalism 2.0 favors. It favors a social safety net, which is to say a clean transfer of money from you and me to the very poor in distress, a hand up so they can take care of their families. It favors financing pre- and post-natal care and nursery schools for poor kids, which would do more to raise health and educational standards than almost anything we can do later. It favors compulsory measles vaccination, to prevent the big spillover of contagion that is happening now in Clark County, Washington. It favors compulsory school attendance, financed by you and me, though not the socialized provision of public schools. The Swedes have since the 1990s had a national voucher system, liberal-style. It favors a small army/coast-guard to protect as against the imminent threat of invasion by Canada and Mexico, and a pile of nuclear weapons and delivery systems to prevent the Russians or Chinese or North Koreans from extorting us. All this is good, and would result in the government at all levels taking and regulating perhaps 10 percent of the nation’s production. Put me down for 10 percent slavery to government. Not the 30 to 55 percent at present that rich countries enslave.
On the Nordic Model: 
People who dote on the Nordic Model need to realize that such folk are, well, Nordic, with astonishingly high standards of integrity in public administration by world standards. It’s not genetic, but may be Lutheran, and is certainly historical. I have a professor friend in Gothenburg who served on an ad hoc committee to look into a terrible case of corruption in the city. The corruption? A company had bought a city councilor a luncheon.

Transparency International in Berlin ranks annually the 190 or so countries in the world in perceived integrity from the top (New Zealand, Denmark) to the bottom (North Korea, Zimbabwe). Suppose we take the top 30 of the 190 in 2016 as capable of running an efficient safety net without horrible malfeasance, in the net and elsewhere. Portugal is on the margin. Italy, sadly, is ranked 79th. The U.S. makes it into the top 30, but many individual states—my own Illinois, for example—would rank lower. All right, what is the percentage of the world’s population wretchedly governed in what everyone agrees is a horribly incompetent and corrupt fashion? Ninety percent. Such are the governments to which you wish to give more money and power. Gothenburg, sure. Des Moines, OK. Chicago, not. Palermo and Moscow–are you nuts?
On inequality:
Yes, I know, we do lament inequality, by confusing it with poverty, which poverty all should in liberal justice lament. The Liberal Lady Glencora Palliser (charmingly, née M’Cluskie) in Anthony Trollope’s political novel Phineas Finn (1867–1868) declares, “Making men and women all equal. That I take to be the gist of our political theory,” as against the Conservative delight in rank and privilege. But Joshua Monk, one of the novel’s radicals in the Cobden-Bright-Mill mold, sees the ethical point more clearly, and replies to her: “Equality is an ugly word, and frightens,” as indeed it had long frightened the political class in Britain, traumatized by wild French declarations for égalité, and by the example of American egalitarianism (well . . . egalitarianism for male, straight, white, Anglo, middle-aged, educated, high-income, nonimmigrant, Republican, New-England mainline Protestants). The motive of the true liberal, Monk continues, should not be equality but “the wish of every honest man . . . to assist in lifting up those below him.” (“Honest” at the time also meant “honorable.”) That’s right. Lifting up the poor, following the philosopher John Rawls, is what we should focus on, and that is achieved chiefly by 4,300% increases in average income, out of innovation, which might well earn a Steve Jobs a bundle, too. That we pay to see Wilt Chamberlain make jump shots, as the philosopher Robert Nozick pointed out, and Wilt therefore ends up richer than we are, is not an ethical problem.
H/t Tyler Cowen.

Orange wind sock

Monday, August 19, 2019

Behind the scenes at Davos: The coming revolution shall be celebrated, defanged, and neutered

Jonah Bennett, A Trip Behind The Spectacle At Davos, Palladium, Feb. 9, 2019.
There’s a great NowThis video that’s been circulating around from the Forum of first-time attendee and historian Rutger Bregman explicitly calling out the use of ‘saving the planet’ rhetoric, even as elites attend Davos in private jets, and noting how philanthropy is just masking the real problem, namely that elites aren’t paying their fair share of taxes. Oxfam called for more taxes, too.

The Davos audience loudly applauded. This is actually quite typical and unsurprising. One characteristic of Davos attendees is that they love being called out in a safe and defanged manner, and they love safe and defanged activism. It’s a comfortable dialectic. [...]

As another example of this dynamic, sixteen-year-old climate activist Greta Thunberg camped out in -18 °C weather, rather than staying in a hotel. She spoke on a panel alongside Salesforce CEO Marc Benioff, Bono, Jane Goodall, and will.i.am, where she called out members of the audience as being directly responsible for contributing to climate change.

Here, too, the reaction was not of genuine fear regarding a substantial threat to existing power structures. Instead, the response was, “Oh, how sweet and lovely that she cares. Yes, of course we must remember to save the Earth.”

This is what activism looks like when it’s highly normalized, defanged, and incorporated into the power structure’s mode of being. It is not a direct challenge to power from a wholly oppositional force, but rather an acceleration of shared pieties. A couple of CEOs complained about proposals for high marginal tax rates, and elsewhere a separate audience laughed at the idea of rates as high as 70%, but these seemed to be disagreements about means, not ends. The dangers of rising inequality have become clear to nearly all involved. Besides, self-preservation is one of the primary activities of power.

Whether those participating are conscious of it or not, this praise is a key part of recuperation: the process of co-opting potentially threatening radical movements and discourses. This has been long-standing practice throughout the 20th and 21st centuries. In our walks through Davos, panels and exhibits on everything from women’s issues to ecological crisis to cryptocurrency testified to the successful pacification of groups and causes once thought irreconcilable with elite power. Political energy directed against the ruling class weakens as some proportion of the radical movement defects in exchange for personal and ideological advancement.
This is rather like what Marcuse called repressive desublimation.

The bottom line, as it were:
The great thing about Davos is that it has more prestige than a Washington, D.C. trade show. After that, attendance is what you make of it. Most of the panels lack insight, but walk into the right party and it’s possible to encounter real talent.

Machine creativity Zuckerberg style: "move fast and break things"


The paper has 34 authors and is entitled: "The Surprising Creativity of Digital Evolution: A Collection of Anecdotes from the Evolutionary Computation and Artificial Life Research Communities". Here's the abstract:
Abstract: Evolution provides a creative fount of complex and subtle adaptations that often surprise the scientists who discover them. However, the creativity of evolution is not limited to the natural world: artificial organisms evolving in computational environments have also elicited surprise and wonder from the researchers studying them. The process of evolution is an algorithmic process that transcends the substrate in which it occurs. Indeed, many researchers in the field of digital evolution can provide examples of how their evolving algorithms and organisms have creatively subverted their expectations or intentions, exposed unrecognized bugs in their code, produced unexpected adaptations, or engaged in behaviors and outcomes uncannily convergent with ones found in nature. Such stories routinely reveal surprise and creativity by evolution in these digital worlds, but they rarely fit into the standard scientific narrative. Instead they are often treated as mere obstacles to be overcome, rather than results that warrant study in their own right. Bugs are fixed, experiments are refocused, and one-off surprises are collapsed into a single data point. The stories themselves are traded among researchers through oral tradition, but that mode of information transmission is inefficient and prone to error and outright loss. Moreover, the fact that these stories tend to be shared only among practitioners means that many natural scientists do not realize how interesting and lifelike digital organisms are and how natural their evolution can be. To our knowledge, no collection of such anecdotes has been published before. This paper is the crowd-sourced product of researchers in the fields of artificial life and evolutionary computation who have provided first-hand accounts of such cases. It thus serves as a written, fact-checked collection of scientifically important and even entertaining stories. In doing so we also present here substantial evidence that the existence and importance of evolutionary surprises extends beyond the natural world, and may indeed be a universal property of all complex evolving systems.

Invitation to a discussion

Joe Rogan and his middle-bro audience

Devin Gordon, Why is Joe Rogan so Popular? The Atlantic, August 2019.

The podcast:
Rogan’s podcast gushes like a mighty river of content—approximately three episodes a week, usually more than two hours per episode, consisting of one marathon conversation with a subject of his choosing. Over the course of about 1,400 episodes and counting, his roster of guests can be divided roughly three ways: (1) comedians, (2) fighters, and (3) “thinkers,” which requires air quotes because it encompasses everyone from Oxford scholars and MIT bioengineers to culture drivers such as the marketing entrepreneur Hotep Jesus and the rapper turned radio co-host Charlamagne tha God all the way across the known intellectual galaxy to conspiracy theorists like Rogan’s longtime buddy and Sandy Hook denier Alex Jones. Also Dr. Phil. And David Lee Roth. And B-Real from Cypress Hill.

It’s impossible to be a Joe Rogan completist, so most of his fans pick a few tributaries. The rest may as well not exist. Who can keep track? Rogan is a key figure in the rise of MMA—Dana White once called him “the best fight announcer who has ever called a fight in the history of fighting”—but I don’t care about fighting, so I didn’t listen to any of Joe’s podcasts with fighters. I also didn’t listen to Dr. Phil, and I’m sure I’m not the only one who skipped it, which is just another way of saying there’s no real way to describe “Joe Rogan fans.” They’re not aligned around any narrow set of curiosities or politics. They’re aligned around Joe.
The audience:
As popular as he seems to be with quote-unquote regular guys, that’s how unpopular Joe Rogan is with the quote-unquote prestige wing of popular culture—Emmy voters, HBO subscribers, comedy nerds. Thought leaders. Thought followers. There are plenty of Joe Rogan fans among them, too, but they tend not to bring it up. [...]

The bedrock issue, though, is Rogan’s courting of a middle-bro audience that the cultural elite hold in particular contempt—guys who get barbed-wire tattoos and fill their fridge with Monster energy drinks and preordered their tickets to see Hobbs & Shaw. Joe loves these guys, and his affection has none of the condescension and ironic distance many people fall back on in order to get comfortable with them. He shares their passions and enthusiasms at a moment when the public dialogue has branded them childish or problematic or a slippery slope to Trumpism. Like many of these men, Joe grumbles a lot about “political correctness.” He knows that he is privileged by virtue of his gender and his skin color, but in his heart he is sick of being reminded about it. Like lots of other white men in America, he is grappling with a growing sense that the term white man has become an epithet. And like lots of other men in America, not just the white ones, he’s reckoning out loud with a fear that the word masculinity has become, by definition, toxic.

Most of Rogan’s critics don’t really grasp the breadth and depth of the community he has built, and they act as though trying is pointless.
Technique and craft:
The hard truth for some of Rogan’s critics in the media is that he is much better at captivating audiences than most of us, because he has the patience and the generosity to let his interviews be an experience rather than an inquisition. And, go figure, his approach has the virtue of putting his subjects at ease and letting the conversation go to poignant places, like the moment when Musk reflected on what it was like to be Elon Musk as a child—his brain a set of bagpipes that blared all day and all night. He assumed he would wind up in a mental institution. “It may sound great if it’s turned on,” he said in his blunt mechanical way, “but what if it doesn’t turn off?”
Limitations:
And a key thing Joe and his fans tend to have in common is a deficit of empathy. He seems unable to process how his tolerance for monsters like Alex Jones plays a role in the wounding of people who don’t deserve it. Jones’s recent appearance on the podcast came after he was sued by families of children and educators murdered in the Sandy Hook massacre—a mass shooting that Jones falsely claimed was a hoax, which families of the victims say prompted his gang of fans to harass them. (Jones has since acknowledged that the Sandy Hook massacre occurred.) So is Joe really nurturing a generation of smarter, healthier, more worldly men, or an army of conspiracy theorists and alt-right super soldiers? At the very least, he shows too much compassion for bad actors, and not enough for people on the receiving end of their attacks. [...] More revealing is who he invites onto his podcast, and what subjects he chooses to feast on in his stand-up specials. And if you cast a wide enough net, clear patterns emerge. If there’s a woman or a person of color (or both) on Joe’s podcast, the odds are high that person is a fighter or an entertainer, and not a public intellectual.
And so:
My Joe Rogan experience ended because he wore me out. He never shuts up. He talks and talks and talks. He doesn’t seem to grasp that not every thought inside his brain needs to be said out loud. It doesn’t occur to him to consider whether his contributions have value. He just speaks his mind. He just whips it out and drops it on the table.

Through the broccoli leaves

Richard Macksey at 3QD [TALENT SEARCH]

I’ve taken my Richard Macksey post from July 30, tweaked it a bit here and there, and posted it at 3 Quarks Daily: https://www.3quarksdaily.com/3quarksdaily/2019/08/it-got-adults-off-your-back-richard-macksey-remembered.html. Why repost at 3QD? Because 3QD has a much larger readership than New Savanna, and people need to know about Dick Macksey.

Macksey is one of the three smartest and most creative people I’ve ever worked with. David Hays and Zeal Greenberg are the other two. They are very different men.

Both Hays and Macksey were academics; both were polymaths and both were interested in language. Unlike Hays, Macksey never really developed a line of thought that was his own. He published a bit, read ferociously across many disciplines, but he was mostly a teacher and an editor. Hays, in contrast, had his own lines of thought and published about them. He and I collaborated closely from the time I met him as a graduate student until he died, two decades later, of lung cancer.

Zeal never went to college; he was a businessman. I met him long after I’d met the other two; in fact Hays had been dead for some years by the time I met Zeal, in November of 2003 or 2004. By then Zeal had retired from business and was pushing a quixotic project he called World Island – “a permanent world’s fair for a world that’s permanently fair”. I worked quite closely with him on that project for a couple of years, but then things tapered off. But I still keep in touch.

Here’s the question: How come I believe these are the three smartest and most creative people I’ve ever worked with? What’s my criterion of judgment? There’s really no way to compare them directly. Zeal is not at all an academic and so doesn’t have the kind of learning Macksey or Hays had, though he is immensely curious and knows many things. Macksey didn’t have a body of original thought as Hays did, and Hays, though broadly learned, was not like Macksey – no one I’ve ever met was. I have no idea how these three would compare on any of the various tests of ability, and I wouldn’t pay any attention to those numbers if I had them.

That is to say, I wouldn’t value those numbers beyond the intuitive sense I’d developed through working with these men. That would be foolish. And that’s my basis of judgement, intuition based on direct experience. I certainly don’t claim any particular objectivity for that judgement. But remember, I’m only talking about people I’ve worked with directly. As for all those other people, how would I know?

It’s a strange business, this matter of talent and ability. We just don’t know.

Addendum, a day later: Walter Freeman may be as smart and creative as Macksey, Hays, and Zeal. I had extensive email correspondence with him while I was writing Beethoven's Anvil and for a year or three after, but I didn't work with him as much as I had with the other three.

Sunday, August 18, 2019

Are we having fun yet?

What happened to childhood?

Kim Brooks, We Have Ruined Childhood, NYTimes, 17 August 2019:
According to the psychologist Peter Gray, children today are more depressed than they were during the Great Depression and more anxious than they were at the height of the Cold War. A 2019 study published in the Journal of Abnormal Psychology found that between 2009 and 2017, rates of depression rose by more than 60 percent among those ages 14 to 17, and 47 percent among those ages 12 to 13. This isn’t just a matter of increased diagnoses. The number of children and teenagers who were seen in emergency rooms with suicidal thoughts or having attempted suicide doubled between 2007 and 2015.
Children's lives are too regimented:
No longer able to rely on communal structures for child care or allow children time alone, parents who need to work are forced to warehouse their youngsters for long stretches of time. School days are longer and more regimented. Kindergarten, which used to be focused on play, is now an academic training ground for the first grade. Young children are assigned homework even though numerous studies have found it harmful. STEM, standardized testing and active-shooter drills have largely replaced recess, leisurely lunches, art and music.

And so for many children, when the school day is over, it hardly matters; the hours outside school are more like school than ever. Children spend afternoons, weekends and summers in aftercare and camps while their parents work. The areas where children once congregated for unstructured, unsupervised play are now often off limits. And so those who can afford it drive their children from one structured activity to another. Those who can’t keep them inside. Free play and childhood independence have become relics, insurance risks, at times criminal offenses.

Saturday, August 17, 2019

On the beach


For Barack Hussein Obama, who was once upon a time The Prez [It's 1619 all over again]

III. As you may know, the NYTimes is now publishing its 1619 Project, which is about "the country's true origin." While this poem has been hanging out on the web for over two decades, I figure it belongs at the top of the queue as a contribution to that project.
* * 
I. I wrote this poem well over two decades ago and published it in Meanderings in 1995. Now that we have a black man as President, it seems appropriate to dust it off and publish it here at New Savanna.
Note: For each item highlighted in gray there is an endnote at the end (down there at the bottom). 
* * 
II. Now that Donald Trump has the gig, it seems more important than ever that this rise to the top of the queue here at New Savanna. You might also check out Election Special: The Blues House, in which I reproduce the stump speech that John Birks "Dizzy" Gillespie toured on in 1964.

Independence Day, 2001: In Which a President Finally Frees His Slave Mistress

I. Summer

When Thomas Jefferson dreamed of Bessie Smith
Lincoln was shot and Michael Jackson got a nose job,
Atlanta was burned and Rosa Parks welcomed Neil Armstrong to the moon,
While hooded Klansmen invaded Star Wars with their laser whips
And FloJo embarrassed Hitler in Berlin.

The dream stained his sheets, the pleasure embarrassing.
Yet Tom needed his sweets and wouldn't dream of his wife.
She was the mother of his children and the apple of his eye,
But Bessie knew other things, secret hidden ways to sing
The blues, who do the voodoo? the long snake moan.

II. Autumn

When Bill Handy had dinner with Mozart
Malcolm X traveled to Mecca and Lennon gave peace a chance,
The Declaration of Independence was signed and Haiti was born,
Bobby Kennedy was shot while chatting with Nat Turner at Trader Vic's,
And Elvis became the King so he could buy his mamma a house.

It was a good evening. Amadeus sure could tickle the ivories,
And old Bill liked to tickle people, white folks too.
Wolfgang taught him the secret arts of notation so he could gather
Songs for Bessie to sing. That's how the blues propagated.
Now old Tom could buy records and learn to dance.

III. Winter

When Jack Johnson escorted Marilyn Monroe to the theatre
Hiroshima was atomized so Nipponese could sing doo-wop in blackface
Chinese ghosts still haunting the Union Pacific.
Sequoya created his alphabet so the Cherokee could read Booker T.
And Augustine became a Christian before Aretha's first was born.

The show depicted a familiar tableau:
Leontyne sang Aida in gold lame while Caliban
Fiddled with Queen Bess who couldn't believe
That Tom had finally taken Shine's advice and
Decided to jump ship and haul ass for New Jerusalem.

IV. Spring

When Bessie played with Martin Luther
Sometimes the magic worked, and sometimes it didn't.
The writers of those manuals couldn't cover everything.
Still, when Bird called and Louis Moreau played bamboula
Nijinsky would dance so fast he heated Chano's skins.

Tom liked to watch but finally got hip to participatory democracy.
He embraced equality and burned his wig,
Freeing himself to perform unspoken acts
With his wife while the children were asleep,
Dreaming of genies and their magic lamps.


Friday, August 16, 2019

Friday Fotos: Storefronts in the 'hood





Haunted by Love and Marriage: She and Heart of Darkness

Rider Haggard’s She and Joseph Conrad’s Heart of Darkness are very different books. She is an adventure romance of well over 100,000 words while Heart of Darkness has somewhat less than 40,000 words and is grim and impressionistic. And yet in 1983 Allan Hunter [1] suggested that Heart of Darkness is a parody of She. A few years after that Murray Pittock [2] published a list of correspondences between the two. More recently Johan Warodell [3] has given us a detailed comparison of Conrad’s Kurtz and Haggard’s Ayesha. This paragraph from Stephen Tabachnick [4], citing Pittock, will give you an idea of the similarity between these two superficially very different texts (p. 190):
Pittock specifically notes several parallels of plot and character, as follows. First, both stories “concern journeys undertaken to meet a mysterious character in the heart of Africa, in both cases white, recalling the legends of Prester John” or of Mujaji, the white-skinned queen of the Lovedu tribes in the Transvaal. Second, “Both journeys into the interior are by river, and, like Marlow’s, the helmsman of Holly and Leo is killed as a result of the action of the natives before either of the protagonists can reach his destination.” Third, in both works, the great age of Africa is part of the mystery. Fourth, the “technological superiority” of the contemporary explorers is “a feature of both books.” Fifth, “Marlow’s first sight of Kurtz echoes Holly’s last sight of Ayesha in She, as one terribly aged: ‘I could see the cage of his ribs [and] the bones of his arms moving.’” Sixth, both Holly and Marlow witness secret rites: Marlow “encounters Kurtz in the wood during the rites of the African sorcerer..., which echoes Holly’s solitary witnessing of Ayesha cursing her dead rival, Amenartas.” And last, neither Kurtz’s soul nor Ayesha’s knows any restraint and both “yearn for power.”
Beyond that I would note that both books are haunted by marriage.

Before we ever see Kurtz we see a painting he made of his Intended; moreover, we learn that he journeyed into Africa’s interior so that he could make his fortune and thus be worthy of his Intended, at least be worthy in the eyes of her family. This is from Marlow’s conversation with that woman:
And the girl talked, easing her pain in the certitude of my sympathy; she talked as thirsty men drink. I had heard that her engagement with Kurtz had been disapproved by her people. He wasn't rich enough or something. And indeed I don't know whether he had not been a pauper all his life. He had given me some reason to infer that it was his impatience of comparative poverty that drove him out there.
Kurtz sought to enrich himself through empire so he could return home and get married. That's what interests me, the relationship between empire and marriage that marks the outer boundary of Heart of Darkness.

It’s there in She as well. In Chapter 22, “Job Has a Presentiment”, Ayesha is talking with Leo and Holly:
“Almost dost thou begin to love me, Kallikrates [that is, Leo],” she answered, smiling. “And now tell me of thy country–‘tis a great people, is it not? with an empire like that of Rome! Surely thou wouldst return thither, and it is well, for I mean not that thou shouldst dwell in these caves of Kôr. Nay, when once thou art even as I am, we will go hence–fear not but that I shall find a path–and then shall we journey to this England of thine, and live as it becometh us to live. Two thousand years have I waited for the day when I should see the last of these hateful caves and this gloomy-visaged folk, and now it is at hand, and my heart bounds up to meet it like a child's towards its holiday. For thou shalt rule this England––“

“But we have a queen already,” broke in Leo, hastily.

“It is naught, it is naught,” said Ayesha; “she can be overthrown.”
Of course, things didn’t work out that way at all. Ayesha died in the Pillar of Life and Leo went off to Tibet in search of wisdom. But the whole preposterous story is inscribed in the relationship between empire and marriage.

What could that relationship be, in either text? It’s not causal in either direction. And to say it’s (merely) symbolic is not quite right. Both stories tell of men adventuring into the unknown. But the territory where they conduct their adventure, Africa, is available to them through empire, and the goal of the adventuring is linked to marriage. Marlow is in search of Kurtz, who wanted to get rich so he could get married; Leo and Holly are in search of Leo’s ancestry, where they find a woman who wants to marry Leo.

I think Leslie Fiedler’s classic book, Love and Death in the American Novel (1966), can give us some idea of what’s at stake. Fiedler argues that, while the 18th- and 19th-century European novel is focused on courtship and marriage, the American novel — which is necessarily based on European prototypes — is about “a man on the run, harried into the forest [e.g., Cooper’s Natty Bumppo] and out to sea [e.g., Melville’s Ahab], down the river [e.g., Twain’s Huck Finn] or into combat [e.g., Crane’s Henry Fleming] — anywhere to avoid ‘civilization,’ which is to say, the confrontation of a man and a woman which leads to the fall to sex, marriage, and responsibility” (p. 26). Of the European novel Fiedler remarks (p. 25):
The novel, however, was precisely the product of the sentimentalizing taste of the eighteenth century; and a continuing tradition of prose fiction did not begin until the love affair of Lovelace and Clarissa (a demythologized Don Juan and a secularized goddess of Christian love) had been imagined. The subject par excellence of the novel is love, or more precisely–in its beginnings at least–seduction and marriage; and in France, Italy, Germany, and Russia, even in England, spiritually so close to America, love in one form or another has remained the novel’s central theme, as necessary and as expected as battle in Homer or revenge in the Renaissance drama. But our great Romantic Unroman, our typical anti-novel, is the womanless Moby Dick.
Haggard and Conrad are writing at the end of that version of the European novel. Both are stories of men lighting out for the territory like American novels, but both are as well haunted by love and marriage.

The analytic trick, it seems to me, is to figure out how the imaginative objects and forces got reconfigured as we move from She to Heart of Darkness. Ayesha becomes the Intended, Leo becomes Kurtz, Holly becomes Marlow, and the unnamed editor of She becomes the unnamed auditor on the deck of the Nellie who then conveys Marlow’s story to us. And that’s just the beginning of the analytic job. Ayesha is a much more active character than the Intended; the relationship between Holly and Leo is different from that between Marlow and Kurtz; motives are different; one story traffics in the supernatural, the other does not; and so forth. Completing that analysis is a job for another day, and a somewhat longer and more tortured post.

And that analysis is, in turn, preliminary to a consideration of the order of publication. Obviously, She, published in 1886, came before Heart of Darkness, 1899-1900. Why? Is that a matter of mere historical contingency, or does it reflect the formal and thematic economy, if you will, of the Anglophone literary system? There is, of course, one respect in which Heart of Darkness could not have been written before She. It is set in the Belgian Congo. King Leopold had only created the Congo Free State in 1885; it would take a few years to administer the atrocities Conrad depicts (he sailed the Congo in 1890). But, by that time, European colonialism had centuries' worth of depredations to its credit. A narrative like Heart could have been centered elsewhere.

What I’m after, and what I believe to be the case, is an argument that narrative is subject to a psycho-cultural economy such that She has to have been written before Heart of Darkness. Both texts arose in the same cultural system, and She reflects an earlier state of that system than Heart of Darkness. This is so, not because of the often contingent events of history, but because of the internal dynamics of that system.

What I’m thinking is that, in this case, the dynamic is that of parody, perhaps even of anxious influence in one of the senses Harold Bloom has theorized. We know that Conrad detested Haggard’s fiction. Did he create the expressive technique of Heart as a way of writing an Africa novel that was utterly different from Haggard’s?

References

[1] Hunter, Allan. Joseph Conrad and the Ethics of Darwinism: The Challenges of Science. London: Croom Helm, 1983.


[2] Pittock, Murray. “Rider Haggard and Heart of Darkness”. Conradiana, Vol. 19, No. 3, 1987.


[3] Warodell, Johan. “Twinning Haggard’s Ayesha and Joseph Conrad’s Kurtz”. Yearbook of Conrad Studies (Poland), Vol. 18, pp. 57-68 Kraków 2011.

[4] Tabachnick, Stephen E. “Two Tales of Gothic Adventure: She and Heart of Darkness”. English Literature in Transition, 1880-1920, Vol. 56, No. 2, 2013, pp. 189-200.

[5] Fiedler, Leslie. Love and Death in the American Novel. New York: Stein and Day, 1966.

Thursday, August 15, 2019

Stake-out in the light

What’s an idea? An informal case study about an industrial idea that was awarded two patents [Stagnation]

In my working paper, Stagnation and Beyond: Economic growth and the cost of knowledge in a complex world [1], I took a close look at two cases examined by Bloom et al., Are Ideas Getting Harder to Find? [2]. In discussing their conceptual framework they noted (p. 5):
ideas are hard to measure. Even as simple a question as “What are the units of ideas?” is troublesome. We follow much of the literature and define ideas to be in units so that a constant flow of new ideas leads to constant exponential growth in A. For example, each new idea raises incomes by a constant percentage (on average), rather than by a certain number of dollars. This is the standard approach in the quality ladder literature on growth: ideas are proportional improvements in productivity.
That makes sense to me. It doesn’t take much reflection to understand that this is so. We can count words, for example, but the relationship between words and ideas is not at all clear.
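Before turning to the example, it may help to spell out the bookkeeping in the quoted passage. Here is a small sketch of my own (not Bloom et al.'s model; the numbers are assumptions): if each idea is a proportional improvement, a constant flow of ideas yields constant exponential growth in productivity A.

```python
# My illustration of the quality-ladder bookkeeping quoted above (not code
# from Bloom et al.): each idea multiplies productivity A by (1 + gain),
# so a constant flow of ideas per year gives constant exponential growth.
ideas_per_year = 3        # assumed constant research output
gain_per_idea = 0.01      # assumed: each idea raises A by 1%

A = 1.0
for year in range(1, 11):
    A *= (1 + gain_per_idea) ** ideas_per_year
    print(f"year {year:2d}: A = {A:.4f}")

# The annual growth rate is constant: (1.01 ** 3) - 1, about 3.03% per year.
```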

Still, I think it would be useful to consider a real example of ideas, just to get a feel for the phenomenon. The example I choose, however, is not one that could have occurred in the three cases Bloom et al. have examined in detail (chip production, drug discovery, seed yields). Rather it is one I know something about, just barely.

It’s an idea my father had in the mid-1970s. He had spent most of his career working for Bethlehem Mines Corporation, the mining subsidiary of Bethlehem Steel Corporation. At that time he was Superintendent of Coal Preparation and, as such, was responsible for the design of plants that cleaned coal. He referred to his idea as “evaporative cooling”. Judging from how he talked about it, it was one idea in his mind [3]. He was awarded two patents for the idea and the company built a cleaning plant based on it.

Background: Cleaning Coal

Let us work backwards: Steel is made from iron, and iron is made from iron ore, which is heated to a high temperature by coke. Coke is made from coal by driving off water and volatile organic material. Before that can be done, however, the coal must be cleaned of impurities, mostly sulfur-bearing rock. Most cleaning techniques take advantage of the fact that the rock is denser than coal. So, you crush the raw coal until all the particles measure less than, say, an eighth of an inch in diameter. Then you float the crushed coal in some medium – generally, but not always, water – and take advantage of the fact that the rock sinks faster than the coal. There are several ways you can do that, but whichever technique you use, you end up with wet coal when you’re done.

You need to dry the coal. The old drying techniques – drying ovens – leave a lot of coal dust in the air. Coal dust is dirty nasty stuff.

Starting early in the 1970s environmental regulators demanded that the output of coal dust be severely limited. The common method of doing this was to exhaust the dust-laden air through very tall chimneys lined with electrostatic precipitators. The precipitators used charged plates to attract the dust particles and draw them out of the air. They thus used a great deal of electricity in the process. Further, these precipitators sometimes emitted sparks, which then triggered explosions in the chimneys, filled, as they were, with fine coal particles in suspension. It was a messy and expensive business.

Evaporative Cooling

During the mid to late 1970s my father designed a new cleaning plant based on a new process. After doing some testing (in his basement), he discovered that, by using water heated to about 200° Fahrenheit (I believe) for the slurry (crushed coal in water), you could dry the coal through evaporation. The heat in the water was enough to evaporate it off the coal. When the impurities had been removed, you simply filtered most of the water out and then dumped the still-wet coal on a conveyor belt. The conveyor then moved the coal to the storage pile. By the time it arrived at the pile the water had evaporated.

No drying ovens. No tall chimneys. No electrostatic precipitators. No explosions. And the air’s cleaner.

To do this, of course, we must heat the water. That costs money. But that one cost allows considerable savings. And there is a specific piece of new apparatus that must be constructed and operated (as we’ll see in the next section). But you no longer need drying ovens, chimneys or precipitators. That eliminates capital costs, operating costs, and maintenance costs. Further it turns out that heated slurry flows through the system more efficiently; so the plant works better.

Two patents

This idea resulted in two patents. While the patents were granted to my father, I assume that he had assigned the rights to them to his employer, The Bethlehem Mines Corporation, which built a cleaning plant based on this idea in Cambria County, Pennsylvania. The two patents:
Method of Cleaning Raw Ore, Patent No. 4,072,539, Feb. 7, 1978.

Abstract: A method of cleaning a raw ore product such as coal is disclosed wherein the ore product temperature is increased up to about 200°-212° F. before water is separated therefrom whereby the moisture content of the cleaned product is controlled. The ore product is passed through a bath of hot water, then surface water is removed before the ore product is moved through an evaporative cooler in a downward direction while being subjected to air at ambient temperature and at an air capacity of between 10,000 and 15,000 cubic feet per ton of ore product.

Method and Apparatus for Drying and Cooling Products of a Granular Nature, Patent No. 4,141,155, Feb. 27, 1979.

Abstract: Apparatus and method for drying and cooling products of a granular nature such as coal as they move downwardly through two flow paths by directing ambient air horizontally through the flow paths into a chamber between the flow paths are disclosed. The chamber is partitioned, and air flow is controlled, so that more air flows through the upper part of the chamber as compared with the lower part of the chamber.
Patent 4,072,539 is for the process flow while 4,141,155 is for a specific piece of apparatus used in that flow. Notice that both patents treat coal as a specific example of something to be cleaned and dried rather than as the sole substance at issue. In addition to the abstract each patent cites prior relevant patents, has a short section of background and a somewhat longer section in which we have a narrative of disclosures along with appropriate diagrams.

This is the first diagram from 4,072,539, which is called “a block diagram of the method of the present invention”:


Something like that, along with the accompanying narrative, is what existed in my father’s mind as “evaporative cooling”. To him it was one thing, one thing that then unfolded into various components.

Rusted iron and green [Mystrigue]

Jeff Hawkins on the brain [Thousand Brains Theory of Intelligence]

Hawkins invented the Palm Pilot and then took his money and became a neuroscientist (he founded Numenta). Steve2152 over at LessWrong summarizes a podcast Hawkins gave about his Thousand Brains Theory of Intelligence (cf. my post on the Busy Bee Brain). Here are a couple of paragraphs I found particularly interesting:
He brought up his paper Why do neurons have thousands of synapses?. Neurons have anywhere from 5 to 30,000 synapses. There are two types. The synapses near the cell body (perhaps a few hundred) can cause the neuron to fire, and these are most similar to the connections in ANNs. The other 95% are way out on a dendrite (neuron branch), too far from the neuron body to make it fire, even if all 95% were activated at once! Instead, what happens is if you have 10-40 of these synapses that all activate at the same time and are all very close to each other on the dendrite, it creates a "dendritic spike" that goes to the cell body and raises the voltage a little bit, but not enough to make the cell fire. And then the voltage goes back down shortly thereafter. What good is that? If the neuron is triggered to fire (due to the first type of synapses, the ones near the cell body), and has already been prepared by a dendritic spike, then it fires slightly sooner, which matters because there are fast inhibitory processes, such that if a neuron fires slightly before its neighbors, it can prevent those neighbors from firing at all.

So, there are dozens to hundreds of different patterns that the neuron can recognize—one for each close-together group of synapses on a dendrite—each of which can cause a dendritic spike. This allows networks of neurons to do sophisticated temporal predictions, he says: "Real neurons in the brain are time-based prediction engines, and there's no concept of this at all" in ANNs; "I don't think you can build intelligence without them".
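
Just to fix the mechanism in my own head, here’s a toy sketch in Python – not Numenta’s code; the cell counts, segment sizes, and spike threshold are made-up placeholders – in which a cell primed by a dendritic spike fires first and fast inhibition then silences the rest of its minicolumn:

import random

DENDRITIC_SPIKE_THRESHOLD = 10   # placeholder: coincident distal synapses needed for a dendritic spike

class Cell:
    def __init__(self, n_distal_inputs, n_segment_synapses=40, n_segments=3):
        # each distal segment is a set of input indices this cell happens to "recognize"
        self.segments = [set(random.sample(range(n_distal_inputs), n_segment_synapses))
                         for _ in range(n_segments)]
        self.predicted = False   # depolarized by a dendritic spike on the previous step

    def update_prediction(self, active_inputs):
        # enough coincident synapses on one segment -> dendritic spike:
        # the cell is primed ("predicted") but does not fire
        self.predicted = any(len(seg & active_inputs) >= DENDRITIC_SPIKE_THRESHOLD
                             for seg in self.segments)

def minicolumn_step(cells, proximal_drive, prev_active_inputs):
    # proximal drive makes the whole minicolumn want to fire; primed cells fire a hair
    # sooner and fast inhibition spares only them; with no primed cell, everything fires
    for c in cells:
        c.update_prediction(prev_active_inputs)
    if not proximal_drive:
        return []
    primed = [c for c in cells if c.predicted]
    return primed if primed else list(cells)

if __name__ == "__main__":
    random.seed(0)
    cells = [Cell(n_distal_inputs=1000) for _ in range(8)]
    context = set(random.sample(range(1000), 60))                    # some prior pattern of activity
    cells[0].segments[0] = set(random.sample(sorted(context), 40))   # one cell has "learned" that pattern
    active = minicolumn_step(cells, proximal_drive=True, prev_active_inputs=context)
    print(f"{len(active)} of {len(cells)} cells fired")

Run as is, only the cell that recognized the prior context fires (barring an unlucky random match); delete the line that teaches it and the whole minicolumn bursts. That, as I understand it, is the sense in which the neuron is a time-based prediction engine: the dendritic spike is a prediction, and being right means firing first.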
You can run on over to Numenta for a paper on this theory, A Framework for Intelligence and Cortical Function Based on Grid Cells in the Neocortex:
Rather than learning one model of the world, the Thousand Brains Theory of Intelligence states that every part of the neocortex learns complete models of objects and concepts. Long range connections in the neocortex allow the models to work together to create your perception of the world.
Here are the high points of the theory:
  • Every cortical column has a location signal that we propose is implemented by grid cells.
  • We propose an extension of grid cells, called “displacement cells”. Displacement cells enable us to learn how objects are composed of other objects, also known as object compositionality.
  • Learning an object’s behavior is simply learning the sequence of movements tracked by displacement cells.
  • A location-based framework can be applied to concepts and high-level thought in the same way it can to physical objects.
  • We discuss how the “what” and “where” pathways of the brain can be thought of as performing the same computations, but modeling different object centered and body centered location spaces.
  • Our hypothesis that every cortical column can learn complete models and the brain creates thousands of models simultaneously, rather than one big model of the world, leads to a rethinking of hierarchy in the cortex. We refer to this idea as the Thousand Brains Theory of Intelligence.
Color me skeptical about location signals, but YES to the idea that concepts and high-level thought are implemented in the same neural constructs as physical objects (and processes). 
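
And to see what the voting story amounts to in the simplest possible terms, here’s a toy sketch of my own (not Numenta’s code; the objects and features are invented for the example): each “column” holds complete models of every object but senses only one local feature, and a vote across columns settles the percept.

from collections import Counter

OBJECTS = {
    "cup":    {"handle", "rim", "curved_surface"},
    "bowl":   {"rim", "curved_surface"},
    "pencil": {"point", "hexagonal_shaft", "eraser"},
}

class Column:
    # each column learns complete models of all objects (here just copied in),
    # but at any moment it senses only one feature at its patch of skin or retina
    def __init__(self):
        self.models = {name: set(feats) for name, feats in OBJECTS.items()}

    def guesses(self, sensed_feature):
        # every object whose model contains the sensed feature is a candidate
        return [name for name, feats in self.models.items() if sensed_feature in feats]

def perceive(columns, sensed_features):
    # "long-range connections" reduced to a simple vote across columns
    votes = Counter()
    for col, feat in zip(columns, sensed_features):
        votes.update(col.guesses(feat))
    return votes.most_common(1)[0][0]

if __name__ == "__main__":
    columns = [Column() for _ in range(3)]
    print(perceive(columns, ["rim", "curved_surface", "handle"]))   # -> cup

Two of the three columns are ambiguous between cup and bowl on their own; the vote across all three settles on cup. A cartoon, obviously, but it makes the “thousands of models that vote” picture concrete.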


    Tuesday, August 13, 2019

    It's a small (purple) world after all

    Is America too large? (Heck Yeah!)

    Eli Dourado Wonders: Maybe America is Simply too Big (2016):
    But I want to focus on something else. I can’t shake the idea that we’re way out of equilibrium in terms of optimal country size. If this idea is correct, then at least some of our problems could be the result of a mismatch between reality and the unexamined assumption that we all have to be in this together.
    He goes on to summarize a classic paper on optimal country size, concluding:
    ...if economic integration prevails regardless of political integration—say, tariffs are low and shipping is cheap—then political integration doesn’t buy you much. Many of the other public goods that governments provide—law and order, social insurance, etc.—don’t really benefit from large populations beyond a certain point. If you scale from a million people to 100 million people, you aren’t really better off.

    As a result, if economic integration prevails, the optimal country size is small, maybe even a city-state.
    The number of independent nations in the world has roughly tripled over the last century. As for the United States:
    In his book American Nations, Colin Woodard argues that North America is actually composed of 11 distinct cultures, each dominant in different parts of the continent. Many of our internal political divisions—over gun control, the death penalty, abortion, the welfare state, immigration, and more—may actually reflect these cultural differences.
    Therefore:
    Given what we know about optimal country size, a monolithic America makes less sense today than it did a century ago. What made America into the superpower that it is today is its massive internal free trade area. Now that trade barriers have declined worldwide, this is less of an advantage than ever before. It’s not at all clear that this diminishing advantage outweighs the cost of our divisive politics based on unshared cultural assumptions.
    All of which argues for a look at a pamphlet I edited, with some help from Charlie Keil, Thomas Naylor's Paths to Peace: Small Is Necessary (Local Paths to Peace Today).

    Tricky light, 432 Park in the distance

    What is AI? (And what could it be?)

    From a long post by David Chapman, How should we evaluate progress in AI?, at Meaningness:
    Because AI investigates artificial intelligence, its central questions are not necessarily scientifically interesting. They are interesting for biology only to the extent that AI systems deliberately model natural intelligence; or to the extent that you can argue that there is only one sort of computation that could perform a task, so biology and artificial intelligence necessarily coincide. This may be true of the early stages of visual processing, for example.

    AI is mostly not about what nature does compute (science), nor about what we can compute today (engineering), nor about what could in principle be computed with unlimited resources (mathematics). It is about what might be computed by machines we might realistically build in the not-too-distant future. As this essay goes along, I will suggest that AI’s criterion of interestingness is therefore closer to that of philosophy of mind than to those of science, engineering, or mathematics.
    I like that, the way it situates AI betwixt and between. That's why this post exists; the rest is gravy.

    Chapman goes on to assert:
    The problem—in both psychology and AI—is not bad scientists. It is that the communities have had bad epistemic norms: ones that do not reliably lead to new truths. Individual researchers do what they see other, successful researchers doing. We can’t expect them to do otherwise—not without a social reform movement.
    OK. Though... I don't know about psychology, but maybe AI needs to take its project up a whole cultural rank, to Rank 5 – which is, as far as I can tell, mostly a gleam in various folks' eyes.

    Later on:
    On the other hand, analytic philosophy of mind’s criterion for what counts as “interesting” largely coincides with, and formed, that of AI. From its founding, AI has been “applied philosophy” or “experimental philosophy” or “philosophy made material.” The hope is that philosophical intuitions could be demonstrated technically, instead of just argued for, which would be far more convincing. I share that hope.

    Two fundamental intuitions most analytic philosophers of mind want to prove are:
    1. Materialism (versus mind/body dualism): mental stuff is really just physical stuff in your brain.
    2. Cognitivism (versus behaviorism): you have beliefs, consider hypotheticals, make plans, and reason from premises to conclusions.
    These are apparently contradictory. “Hypotheticals” do not appear to be physical things. It is difficult to see how the belief “Gandalf was a wizard” could both be in your head and about Gandalf, as a physical fact. And so on.

    This tension generated the problem space for GOFAI. The intuition of all cognitive scientists (including me! until 1986) was that this conflict must be resolvable; and that its resolution could be proven, beyond all possibility of doubt, via technical implementation. [...]

    How did we go so wrong for so long with GOFAI? I think it was by inheriting a pattern of thinking from analytic philosophy: trying to prove metaphysical intuitions with narrative arguments. We knew we were right, and just wanted to prove it. And the way we went about proving it was more by argument than experiment.

    Eventually, obstacles to the GOFAI agenda appeared to be matters of principle, not just matters of limited technical or scientific know-how, and it collapsed.
    And so forth. Chapman goes on to suggest that AI is more like architectural design than engineering. Engineering starts with a well-defined problem. Architectural design not so much:
    Design, like engineering, aims to produce useful artifacts. Unlike engineering, design addresses nebulous (poorly characterized) problems; is not confined to explicit, rational methods; and develops snazzy—not optimal—solutions. [...]

    Design concentrates on synthesis, more than analysis. Since the problem statement is nebulous, it doesn’t provide helpful guiding implications; but neither does it strongly constrain final solutions. Design, from early in the process, constructs trial solutions from plausible pieces suggested by the concrete problem situation. Analysis is less important, and comes mostly late in the process, to evaluate how good your solution is.

    Since design problems are nebulous, there is no such thing as an optimal solution. The evaluation criterion might be called “snazziness” instead. A good design is one people like. It should make you go “whoa, cool!” An outstanding design amazes. Design success means not that you solved a specific problem as given, but that you produced something both nifty and useful in a general vicinity. (The products of design, unlike those of art, have to work as well as wow.)
    A bit later, in the context of empirical studies of design practice:
    First, a designer maintains contact with the messy concrete specifics of the problem throughout the process. An engineer, by contrast, operates primarily in a formal domain, abstracted from the mess.
    This I like, "maintains contact..."

    And so forth and so on:
    Analogously, I believe there is significantly less to current spectacular demos of “deep learning” than meets the eye. This is not mainly general cynicism about spectacles, nor skepticism about AI demos in general, nor dislike of deep learning in particular. (Although the deep learning field’s relative lack of interest in explanation does make it easier for researchers to fool themselves.) Primarily, it’s based on my guesses about specifically how these systems accomplish the tasks they are shown performing in the demos; and from that, how likely they are to accomplish tasks that may appear similar but aren’t.