Thursday, March 31, 2022

It won't be long

Scaling in deep learning: compute, parameters, dataset [makes my head spin]

Abstract from the linked article:

We investigate the optimal model size and number of tokens for training a transformer language model under a given compute budget. We find that current large language models are significantly undertrained, a consequence of the recent focus on scaling language models whilst keeping the amount of training data constant. By training over 400 language models ranging from 70 million to over 16 billion parameters on 5 to 500 billion tokens, we find that for compute-optimal training, the model size and the number of training tokens should be scaled equally: for every doubling of model size the number of training tokens should also be doubled. We test this hypothesis by training a predicted compute-optimal model, Chinchilla, that uses the same compute budget as Gopher but with 70B parameters and 4× more data. Chinchilla uniformly and significantly outperforms Gopher (280B), GPT-3 (175B), Jurassic-1 (178B), and Megatron-Turing NLG (530B) on a large range of downstream evaluation tasks. This also means that Chinchilla uses substantially less compute for fine-tuning and inference, greatly facilitating downstream usage. As a highlight, Chinchilla reaches a state-of-the-art average accuracy of 67.5% on the MMLU benchmark, greater than a 7% improvement over Gopher.
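The rule of thumb buried in that abstract is easy to play with. A common approximation (not from the paper itself) is that training compute is roughly C ≈ 6·N·D FLOPs for N parameters and D tokens; if N and D are to be scaled in equal proportion, each grows as the square root of the compute budget. Here's a minimal sketch of my own, with reference values only loosely in the vicinity of Chinchilla:

```python
# Toy illustration of equal parameter/token scaling under a compute budget.
# Assumes the common approximation C ~= 6 * N * D FLOPs; figures are illustrative.
import math

def compute_optimal(c_flops, ref_n=70e9, ref_d=1.4e12):
    """Scale a reference (N, D) pair to budget c_flops, keeping N/D fixed."""
    ref_c = 6 * ref_n * ref_d            # compute implied by the reference pair
    scale = math.sqrt(c_flops / ref_c)   # equal scaling: N and D each grow as sqrt(C)
    return ref_n * scale, ref_d * scale

for budget in (1e22, 1e23, 1e24):
    n, d = compute_optimal(budget)
    print(f"C = {budget:.0e} FLOPs -> N ≈ {n:.2e} params, D ≈ {d:.2e} tokens")
```

Quadrupling the budget doubles both the parameter count and the token count, which is the "double the model, double the data" rule stated in the abstract.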

Wednesday, March 30, 2022

Can you spot the AI rogue? [Psst! He's the one in charge.]

AI has yet to take over radiology

Dan Elton, AI for medicine is overhyped right now: Should we update our timelines to AGI as a result? More is Different, March 29, 2022.

AI for medicine has a lot of promise, but it's also really overhyped right now. This post explains why and asks if we should update our timelines to existentially dangerous AI as a result (tldr: personally I do not).

Back in 2016 deep learning pioneer Geoffrey Hinton famously said "people should stop training radiologists now - it's just completely obvious within 5 years deep learning is going to do better than radiologists. It might be 10 years, but we've got plenty of radiologists already."

Hinton went on to make similar statements in numerous interviews, including one in The New Yorker in 2017. It’s now been 4.5 years since Hinton's prediction, and while there are lots of startups and hype about AI for radiology, in terms of real world impact not much has happened. There's also a severe shortage of radiologists, but that's besides the point.

Don’t get me wrong, Hinton’s case that deep learning could automate much of radiology remains very strong.[1] However, what is achievable in principle isn’t always easy to implement in practice. Data variability, data scarcity, and the intrinsic complexity of human biology all remain huge barriers.

I’ve been working on researching AI for radiology applications for three years including with one of the foremost experts in the field, Dr. Ronald Summers, at the National Institutes of Health Clinical Center. When I discuss my work with people outside the field, I find invariably that people are under the impression that AI is already being heavily used in hospitals. This is not surprising given the hype around AI. The reality is that AI is not yet guiding clinical decision making and my impression is there are only around 3-4 AI applications that are in widespread use in radiology. These applications are mainly used for relatively simple image analysis tasks like detecting hemorrhages in MRI and hospitals are mainly interested in using AI for triage purposes, not disease detection and diagnosis.[2]

Whoops!

As an insider, I hear about AI systems exhibiting decreased performance after real world deployment fairly often. For obvious business reasons, these failings are mostly kept under wraps. [...] Finally, we have IBM’s Watson Health, a failure so great that rumors of a “new AI winter” are starting to float around. Building off the success of their Watson system for playing Jeopardy, about 10 years ago IBM launched Watson Health to revolutionize healthcare with AI. [...] Well, earlier this year IBM sold off all of Watson Health “for parts” for about $1 billion.

Knowledge helps:

Radiologists have a model of human anatomy in their head which allows them to easily figure out how to interpret scans taken with different scanning parameters or X-ray images taken from different angles. It appears deep learning models lack such a model.

If robust radiological diagnosis requires such a model, that's going to be very hard for a machine to acquire. Not only does the machine have to acquire the model, it has to know how to use it. Those are not very well-defined tasks.

There's more at the link.

H/t Tyler Cowen.

Mind Hacks 3: 1956 – The Forbidden Planet Within [Media Notes 59]

I’m bumping this post from 2010 to the top of the queue because it is relevant to my thinking about current fears of rogue AI. Those fears are projective fantasy, just as the Monster from the Id in Forbidden Planet is a projection from the mind of a central character, Dr. Morbius. The mechanism is obvious in the movie: Morbius has been hooking himself up to advanced mind technology and it has, in turn, created the monster that stalks the planet.

Skynet in the Terminator films is thus a cultural descendant of that Monster from the Id. The same is true of all that crazy advanced technology that threatens the human race in so many science fiction films. And the same is true of those fears of out-of-control AI that real people have, real people who should know better. I’m thinking of people like Bill Gates, Elon Musk, Bill Joy, Eliezer Yudkowsky, Nick Bostrom, and others. It’s a bit scary to realize that these businessmen and ‘thought leaders’ are allowed, even encouraged, to indulge in projective fantasy so openly and transparently.

We are seeing that the development of ‘mind technology,’ that is, artificial intelligence, has this side effect, that the ‘dark side of the mind’ is being projected into policy discussions in the civic sphere.

This thought needs to be refined and developed – as I have done in a post at 3 Quarks Daily, From “Forbidden Planet” to “The Terminator”: 1950s techno-utopia and the dystopian future. I note only, as I indicate in the post below, that Forbidden Planet is based on Shakespeare’s The Tempest and that the Monster from the Id is the cultural descendant of Caliban. What is the mechanism through which these creatures of fantasy have been transmuted into real social mechanisms and forces?

* * * * *

 Forbidden Planet, Robbie the Robot on the right.

Based loosely on Shakespeare’s The Tempest, Forbidden Planet takes us on a Freudian trip to another world where we meet Robbie the Robot, the progenitor of George Lucas’s R2-D2 and C-3PO, and the Monster from the Id, the progenitor of those irrational computers that crop up in science fiction. This use of Shakespeare underscores the point that the imaginative devices used publicly to comprehend and present computers often have old cultural roots. Part of the world that Disney had carved out for an indeterminate audience is now being crafted to fit the needs of young adults through the guise of science fiction and the fantasy fiction of J. R. R. Tolkien. Disney’s abstract imagery becomes the stuff of special effects. Similar imagery would be reported by subjects of the LSD experiments that were conducted by the CIA – in search of truth drugs and agents for psychological warfare – and by various clinicians in the United States and Canada.

Forbidden Planet, the super-hot Monster from the Id melting the door at the left.

The term “artificial intelligence” was coined at a 1956 conference held at Dartmouth; the Russians launched Sputnik a year later. Noam Chomsky vanquished behaviorism and revolutionized linguistics by making the study of syntax into a technical discipline modeled on the notion of abstract computation. The human mind was declared to be fundamentally computational in nature.

In the literary world Aldous Huxley initiated modern writing on drug experiences with The Doors of Perception, while Jack Kerouac, Allen Ginsberg, and William Burroughs all published major drug-influenced works. At the same time the banker R. Gordon Wasson reached the general public with an ecstatic article in Life magazine about having discovered that the hallucinogenic mushroom, Amanita muscaria, was the root of much religious experience throughout the world. Psychiatric experts were predicting great things of LSD-aided psychotherapy.

Forbidden Planet, Morbius (seated at the table) using the mind-amplification technology of an advanced, but dead, civilization.

The Josiah Macy Foundation was funding conferences in both arenas, cybernetics and psychedelics. It seemed as though, at last, we were on the verge of discovering the material basis of the human mind and harnessing it to our will. Yet, however this played out in the press, the “real goods” were restricted to relatively small and elite groups of people. Computers were very large and expensive devices that had to be kept in environmentally controlled rooms; very few people saw or worked directly with them. Similarly, these wonderful psychedelic drugs were not readily available; one had to travel to Mexico, or one had to live in a big city and know the right psychiatrist.

People were wishing upon a distant star, imagining a future world over which they, in fact, had no control and for which they had little responsibility. That safe remoteness was about to change. In the 1960s psychedelics became freely available on many college campuses and in their surrounding neighborhoods while the 1970s would see the emergence of personal computers, computers small enough and cheap enough that individuals could own them.

Selected Milestones:
  • 1953: Watson and Crick publish the double helix structure of DNA and thus initiate the age of molecular biology, bringing biology into the information paradigm.
  • 1954: J. R. R. Tolkien publishes The Fellowship of the Ring and The Two Towers.
  • 1954: Thorazine, the first major tranquilizer, is marketed.
  • 1956: Dartmouth hosts the first conference on artificial intelligence. The Bathroom of Tomorrow attraction opens at Disneyland.
  • 1959: John McCarthy proposes time-sharing to MIT’s director of computing; time-sharing would make computers much more accessible.
  • 1961: Robert Heinlein publishes Stranger in a Strange Land, which would become a major point of literary reference in the drug and mystical counter-culture of the 1960s.

Tuesday, March 29, 2022

This way up

Adam Roberts on the future as imaginative territory

In a piece that is otherwise about Dickens's ghost stories, Adam Roberts writes:

The frame here is that the nineteenth-century saw a new kind of future-imagining. It’s Paul Alkon’s argument, advanced in The Origins of Futuristic Fiction (University of Georgia Press 1987) — that, basically, there was no such thing as a future-set fiction until the end of the eighteenth-century. He discusses a few, scattered earlier titles notionally set in the future, before expanding upon the success of Louis-Sébastien Mercier’s L’An 2440, rêve s’il en fut jamais (‘The Year 2440: A Dream If Ever There Was One’ 1771) as the book that really instituted the mode of ‘futuristic fiction’ as such. Mercier’s book is a utopian reimagining of France set in the titular year, and it was in its day extremely popular. Indeed, its popularity led to a large number of imitators. Broadly: before Mercier utopias tended to be set today, but in some distant place; after Mercier utopias tended to be set in the future.

Through the early nineteenth-century loads of books were set in ‘the future’, many of them utopian works in exactly this Mercerian mode, such as Vladimir Odoyevsky’s The Year 4338 (1835) and Mary Griffith’s Three Hundred Years Hence (1836). But alongside this ‘new’ version of futuristic fiction was a vogue for a second kind of futuristic fiction, secularised (to some extent) versions of the old religious-apocalyptic future-imagining. The big hit in this idiom was Jean-Baptiste Cousin de Grainville’s prose-poem Le Dernier Homme (1805) — the whole of humanity bar one dies out, leaving Omegarus alone, to wander the earth solus, meditating upon mortality and finitude. Eventually Old Adam himself makes an appearance, and the novel ends with the graves giving up their dead and various other elements from St John’s revelation.

There were a great many of these ‘last man’ fictions, some reworkings of Grainville’s text, others ringing changes upon his theme — Auguste Creuzé de Lesser poeticised and expanded Grainville’s novel as Le Dernier Homme, poème imité de Grainville (1832), adding-in various materialist SFnalities (those flying cities and projects to explore other planets I mentioned). And husband-and-wife team Etienne-Paulin Gagne (L’Unitéide ou la Femme messie 1858) and Élise Gagne (Omégar ou le Dernier Homme 1859) reworked and expanded the Grainvillean original in respectively more spiritual and more materialised ways. In Britain, Byron’s striking and gloomy blank verse ‘Darkness’ (1816) and Mary Shelley’s novel The Last Man (1826) both engage the theme. Shelley’s novel is interesting, though pretty turgid it is: a tale of political intrigue, war, and a deadly pandemic set in AD 2100 that details the extinction of the entire human race, save only the titular ‘last man’, Lionel Verney, who wanders the depopulated landscapes and muses on nature, mortality and the universe.

We might style these two modes of imagining the future as spinning ‘positive’ (utopian) and ‘negative’ (apocalyptic) valences out of their futurism, but let’s not do that, actually. It would be clumsily over-simplistic of us. I’m more interested in the way the two modes feed into one another. Grainville’s novel ends with a more-or-less straight retelling of the Revelation of St John. Creuzé de Lesser’s poetic rewriting of Le Dernier Homme follows Grainville’s lead, but adds in a number of materialist and secular ‘futuristic’ details: a flying city, a plan to build craft and explore the solar system. Shelley’s Last Man jettisons the religious element altogether. We’re deep into the tradition that, through Verne and Wells and the US Pulps and Golden Age, broadens into science fictional futurism — a mode that remains, I suggest, broadly secularised but in ways that retain a significant, latent religious component.

Why does the Mercier-ist style of ‘futuristic fiction’ come so late? Why does it arrive, specifically, at the end of the 18th-century? Darko Suvin argues that it is to do with the American and French Revolutions — revolutionary thinking requires a secularised sense of a future that can be planned, and which improves on the present — as opposed to the apocalyptic sense of a future that wraps-up and ends this mortal world so common in the older religious traditions. Novels like Mercier’s map that secularised future fictively.

There may be something in this.

But it’s also worth noting that the futurism of L’An 2440 and Shelley’s Last Man is, qua futurism, pretty weak-beer. You can see from this frontispiece to a later edition of Mercier’s novel that France in the 25th-century is, in all respects save its more utopian social organisation, basically the France of the 18th-century.

Later:

But it is not until later in the nineteenth-century that secular future-fiction began styling its imagined worlds as, in multiple key ways, different to the world out of which they were written — differently furnished, technology-wise, its characters differently dressed, its social and personal weltanschauung differently construed. Perhaps the first book to do this was Edward Bellamy’s prodigiously successful Looking Backward 2000–1887 (1888), second only to Uncle Tom’s Cabin in the ‘bestselling American novel of the 19th-century’ stakes. The point here is that Bellamy takes an 1880s individual, ‘Julian West’, into his AD 2000 such that West’s contemporary preconceptions can be repeatedly startled by the technological and social-justice alterations that the future has effected. [...]

By the end of the century, most notably with the variegated futuristic fictions of H G Wells, the notion that the future would be in substantive ways different to the present had bedded itself into the emergent genre, such that it is — now — a core aspect of science fiction’s many futures. Nowadays ‘futuristic fiction’ simply comes with the sense, more or less axiomatically, that the future will be different to the present, not just in the old utopian-writing sense that a notional 1776, or 1789, will usher in a new form of social justice and harmony (according to whichever utopian crotchet or social-reform king-charles-head happens to be yours), but rather that change will happen across multiple fronts, have intricate and widespread ramifications. That the future will be a different country, and that they, that we, will do things differently there.

Still later:

For most of human history, that distinctive mental capacity we possess to project ourselves imaginatively into the to-come has been put at the service of a set of limited and particular things, a realm of possibility and planning conceived along, basically, one axis. The original futurists were the first farmers — a development from the timeless intensity of the hunt, in which humans no more have need for elaborate futurological skills than do lions and eagles, cats and dogs. Farmers, though, must plan. We cannot farm unless we know that the seasons will change and that we must plan for that change. We plan, though, on the basis that the future will be, essentially, the same as the past — that spring will follow winter, that next year will be basically like this year.

And still later, skipping most of the Dickens argument, we have:

What Dickens is saying (what Dickens is innovating) is the capacity of the future to haunt us in both a material and a spiritual sense — a new kind of futurity, captured here via the mode of the ghost story.

There's more at the link.

See this post from a year ago, When did the future become a site for human habitation like, say, crossing the ocean to colonize the New World? See also Tim Morton's notion of futurality ("...the possibility that things could be different") in Reading Spacecraft 4: The future.

Monday, March 28, 2022

Banister's end

Ramble: Will computers ever be able to think like humans? Do I care?

Honestly, I don’t know. Something’s been going on in how I think about this kind of question, but I’m not quite sure what it is. It seems mostly intuitive and inarticulate. This is an attempt at articulation.

I’m sure that I’m at least irritated at the idea that computers able to think like humans will inevitably emerge. The level of irritation goes up when people start predicting when this will happen, especially when the prediction is, say, 30 to 50 years out. Late last year the folks at Open Philanthropy conducted an elaborate exercise in predicting when “human-level AI” would emerge. The original report is by Ajeya Cotra and is 1969 pages long; I’ve not read it. But I’ve read summaries: Scott Alexander at Astral Codex Ten and Open Philanthropy’s Holden Karnofsky at Cold Takes. Yikes! It’s a complicated intellectual object with lots of order-of-magnitude estimates and charts. But as an effort at predicting the final – big fanfare here – emergence of artificial general intelligence (AGI), it strikes me as being somewhere between overkill and pointless. No doubt the effort itself reinforces their belief in the coming of AGI, which is a rather vague notion.

And yet I find myself reluctant to say that some future computational system will never “think like a human being.” I remember when I first read John Searle’s famous Chinese Room argument about the impossibility of artificial intelligence. At the time, 1980 or so, I was still somewhat immersed in the computational semantics work I’d done with David Hays and was conversant with a wide range of AI literature. Searle’s argument left it untouched. Such a wonder would lack intentionality, Searle argued, and without intentionality there can be no meaning, no real thought. At the time “intentionality” struck me as being a word that was a stand-in for a lot of things we didn’t understand. And yet I didn’t think that, sure, someday computers will think. I just didn’t find Searle’s argument convincing. Not then, and not now, not really.

Of course, Searle isn’t the only one to argue against the machine. Hubert Dreyfus did so twenty years before Searle, and on much the same grounds, and others have done so after. I just don’t find the argumentation very interesting.

It seems to me that those arguing strongly against AI implicitly depend on the fact that we don’t have such systems yet. They also have a strong sense of the difference between artificial inanimate systems, like computers, and living beings, like us. Those arguing in favor are depending on the fact that we cannot know the future (no matter how much they try to predict it), and the future is when these things will happen. They also believe that while, yes, animate and inanimate systems are different, they are both physical systems. It’s the physicality that counts.

None of that strikes me as a substantial basis for strong claims for or against the possibility of machines thinking at the level of humans.

Meanwhile, the actual history of AI has been full of failed predictions and unexpected developments.

We just don’t know what’s going on.

Addendum, 3.29.22: Is the disagreement between two views constructed within the same basic conceptual framework (“paradigm”), or is the disagreement at the level of the underlying framework? 

See also, A general comment concerning arguments about computers and brains, February 16, 2022.

Seinfeld and AI @ 3QD

My latest post at 3 Quarks Daily:

Analyze This! AI meets Jerry Seinfeld, https://3quarksdaily.com/3quarksdaily/2022/03/analyze-this-ai-meets-jerry-seinfeld.html

I took one of my Seinfeld posts from 2021 and reworked it: Analyze This! Screaming on the flat part of the roller coaster ride [Does GPT-3 get the joke?].

The back-and-forth between the human interrogator, Phil Mohun (in effect, standing in for Seinfeld), and GPT-3 is pretty much the same, but I framed it differently. I like the reframing, especially the ending:

As I already pointed out, GPT-3 doesn’t actually know anything. It was trained by analyzing swathes and acres and oceans of text sucked in off the web. Those texts are just naked word forms and, as we’ve observed, there are no meanings in word forms. GPT-3 is able to compute relationships between word forms as they appear, one after the other, in linguistic strings, and from that produce a simulacrum of sensible language when given a cue.

How could that possibly work? Consider this passage from a famous memorandum (PDF) written by the scientist and mathematician Warren Weaver in 1949 when he was head of the Natural Sciences division of the Rockefeller Foundation:

If one examines the words in a book, one at a time as through an opaque mask with a hole in it one word wide, then it is obviously impossible to determine, one at a time, the meaning of the words. "Fast" may mean "rapid"; or it may mean "motionless"; and there is no way of telling which.

But if one lengthens the slit in the opaque mask, until one can see not only the central word in question, but also say N words on either side, then if N is large enough one can unambiguously decide the meaning of the central word. The formal truth of this statement becomes clear when one mentions that the middle word of a whole article or a whole book is unambiguous if one has read the whole article or book, providing of course that the article or book is sufficiently well written to communicate at all.

That’s a start, and only that. But, by extending and elaborating on that, researchers have developed engines that weave elaborate webs of contextual information about relations between word form tokens. It is through such a web – in billions of dimensions! – that GPT-3 has been able to create an uncanny simulacrum of language. Hanging there, poised between chaos and order, the artificial texts tempt us into meaning.
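To make Weaver's sliding window a bit more concrete, here is a toy sketch of my own devising – nothing like how GPT-3 is actually trained – that guesses the sense of "fast" from the words within N positions on either side. The cue lists are invented for the example; the point is only that word forms in context carry a surprising amount of information.

```python
# Toy version of Weaver's window: disambiguate "fast" from nearby word forms.
# Hand-rolled illustration only; GPT-3 learns its contextual web, it isn't given one.
from collections import Counter

SENSE_CUES = {
    "rapid":      {"ran", "sprinted", "car", "race", "quickly"},
    "motionless": {"held", "rope", "anchor", "firm", "stuck"},
}

def guess_sense(tokens, target="fast", n=4):
    """Vote among sense cues found within n words of each occurrence of target."""
    votes = Counter()
    for i, tok in enumerate(tokens):
        if tok != target:
            continue
        window = tokens[max(0, i - n):i] + tokens[i + 1:i + 1 + n]
        for sense, cues in SENSE_CUES.items():
            votes[sense] += len(cues.intersection(window))
    return votes.most_common(1)[0][0] if votes else "unknown"

print(guess_sense("she ran fast to catch the race".split()))             # rapid
print(guess_sense("the rope held the boat fast to the anchor".split()))  # motionless
```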

Still, even those with a deep technical understanding of how GPT-3 works – I’m not one of them – don’t understand what’s going on. How could that be? What they understand well (enough) is the process by which GPT-3 builds a language model. That’s called learning mode. What they don’t understand is how the model works once it has been built. That’s called inference mode.

You might think: It’s just computer code. Can’t they “open it up” and take a look? Sure, they can. But there’s a LOT of it and it’s not readily intelligible.

But then, that’s how it is with the human brain and language. We can get glimmerings of what’s going on, but we can’t really observe brain operation directly and in detail. Even if we could, would we be able to make sense of it? The brain has 86 billion neurons and each of them has, on average, 10,000 connections to other neurons. That’s a LOT of detail. Even if we could make a record of what each one is doing, millisecond by millisecond, how would we examine that record and make sense of it?
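Just to put numbers on that "LOT of detail," multiplying out the two figures above (the millisecond sampling rate is my own assumption, purely for the sake of the estimate):

```python
# Back-of-the-envelope scale of a full brain recording, using the figures above.
neurons     = 86e9                 # ~86 billion neurons
connections = neurons * 1e4        # ~10,000 connections each -> ~8.6e14 synapses
samples     = neurons * 1000       # one state sample per neuron per millisecond

print(f"{connections:.1e} connections, {samples:.1e} samples per second")
```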

So, we’ve got two mysterious processes: 1) humans creating and understanding language, 2) GPT-3 operating in inference mode. The second is derivative of and dependent on the first. Beyond that…

Saturday, March 26, 2022

Composition with a banister

Star Trek, the Original Series: Go nuts, young man, go nuts! [Media Notes 69]

I decided to take a look at the original Star Trek series, the one that started in 1966, though I’m not sure that I saw it then. I was in college at the time and didn’t have a TV, though I might have watched the show during summer reruns. In any event, I’ve certainly seen the whole series, though not in years.

I’ve only watched four episodes, the original pilot, The Cage, which wasn’t aired until the late 1980s, and then the first three episodes that actually aired. I note, first of all, and rather obviously, that special effects have changed a lot since then. You don’t have to watch the show to know this, but still, it’s rather striking to see it. Second, we haven’t really seen the fabled Star Trek computer, the one that Spock asks to do this, that, or the other and, shazam! it coughs up the answer a few seconds later. Computers weren’t nearly as prominent in people’s lives back then as they are now. Few people had ever seen one, though of course many had seen those (blasted) punch cards. In one or another of these episodes there was a passing reference to tapes, the major form of external memory storage in those days. Perhaps that computer doesn't really show up until The Next Generation?

What caught my attention is that both the second and third episodes were about someone who goes nuts because they’ve acquired superpowers. The second episode, Charlie X, is about a 17-year-old boy they take on board. It turns out that he’d acquired super mental powers that allow him to read minds, control minds, and take over the ship. Fortunately the aliens who’d endowed him with those powers come by before he destroys everything. Just why and how is secondary. What matters is that he has superpowers and goes nuts.

In the third episode, Where No Man Has Gone Before, it’s an adult crewman who acquires the superpowers. Different superpowers from the boy’s in the second episode, and a different way of going nuts as well. A different escape as well: no deus ex machina to save the USS Enterprise. Kirk manages to do it in hand-to-hand combat.

Why does this interest me? Because I’ve been thinking about the fears prevalent in certain high-tech quarters that once human-level artificial intelligence emerges – as inevitably it must do (so these folks believe) – these super-smart computers will go nuts and threaten human existence. Of course, an out-of-control computer is different from an out-of-control adolescent or an out-of-control man, but still, out-of-control is out-of-control. I’m wondering if this anxiety about out-of-control computers isn’t a kissing cousin to anxiety about super-powered humans.

Addendum, 3.29.22: The computer showed up in episodes 11 and 12, The Menagerie. It wasn’t asked to do anything very complicated, though the communication was in natural language.

And, I should note more generally, Shatner over-acts, as I remember, but so do others.

Bessie Smith in “St. Louis Blues” from 1929

From Wikipedia:

St. Louis Blues is a 1929 American two-reel short film starring Bessie Smith. The early sound film features Smith in an African-American speakeasy of the prohibition era singing the W. C. Handy standard, "St. Louis Blues". Directed by Dudley Murphy, it is the only known film of Bessie Smith, and the soundtrack is her only recording not controlled by Columbia Records.

Bessie Smith had a hit on the song in 1925 and Handy himself asked Bessie Smith to appear in the movie. Handy co-authored the film and was the musical director. The film was a dramatization of the song, a woman left alone by her roving man. It features a band that included James P. Johnson on piano, Thomas Morris and Joe Smith on cornet, Bernard Addison on guitar and banjo, as well as the Hall Johnson Choir with some thrilling harmonies at the end.

The film has an all African-American cast. Bessie Smith co-stars with Jimmy Mordecai as the boyfriend and Isabel Washington as the other woman.

It was filmed in June 1929 in Astoria, Queens. The film is about 16 minutes long. In 2006, this version was selected for preservation in the United States National Film Registry by the Library of Congress as being "culturally, historically, or aesthetically significant".

Notice the quote from Rhapsody in Blue at about 13:25 as Bessie’s ne’er-do-well boyfriend slinks away with her money.

Friday, March 25, 2022

Tim Burke: What happened to belief in progress?

Timothy Burke, The News: The Broken Promises of Containment, Eight by Seven, March 23, 2022. From the essay:

...containment worked: the Soviet Union fell apart. But by the 1980s, the US and much of Western Europe had stopped being high on their own supply, e.g., stopped actually believing themselves in the core propositional ideals that containment’s most optimistic architects really did think were attributes of modern Western liberal democracies. The Cold War always had a horrific stench of hypocrisy around it, but most mainstream liberals and conservatives at its height could still reasonably assert that liberal democratic societies were better foundations for human flourishing than the East Bloc’s regimes and that progress towards liberal democratic ideals was still possible and even likely in the US and Europe. European empires were ending, the civil rights movement was leading the way towards internal reform followed by feminism and other social movements, social democracies were taming the excesses of capitalism.

In the 1980s, however, Reagan and Thatcher turned their backs on that vision of progress, and then “Third Way” liberal-progressive parties in the US, UK, Spain and elsewhere followed in that direction in the 1990s. As we enter the second decade of the 21st Century, the consequences of that abandonment are brutally clear. Nobody believes in progress any longer, no political leadership has a vision of a future where human beings are freer, happier and more secure than they are today. The best we get from most political parties and leaders is a vision of holding on to what we’ve got with a few modest incremental improvements around the edges. The entire global economy is strangling under the weight of runaway inequality and shadowy wealth accumulation that is beyond the ability of any nation to regulate or control. Democratic governance is threatened at every level, from school boards to national leadership. The entire planet just performed a resounding pratfall in the face of a global pandemic and almost no government is willing to honestly and thoroughly commit to the scale of response needed to face climate change.

Containment’s architects never really dreamed that the West would need to do anything if somehow their strategic plan paid off. The bad guys would crumble, the unfree nations would become free. Mission accomplished. The staggering complacency of their thought is perhaps most compactly expressed in Francis Fukuyama’s spectacularly terrible 1992 book The End of History, which gets mocked largely for the ineptitude of its predictive imagination but should instead be condemned for its provision of aid and comfort to the bipartisan complacency that seized most liberal democracies at the end of the Cold War. Fukuyama laid it out: there is nothing left that you need to do. Liberal democracy is the final form of human political life, nothing can arise to challenge it, and it is sufficient unto itself as it stands.

This was completely wrong, and we’ve all paid the price for it. For containment to have worked, the moment it succeeded in bringing down the great enemy of the Cold War it needed to free the West to do everything left that needed doing, to reallocate the grotesque and squalid costs of containment to the unfinished business of making liberal democratic societies fulfill their promises. Those costs weren’t just in making weapons and sustaining armies, they were in the proxy wars, the supported dictators, the covert actions, the we-can’t-fix-this-now-because excuses.

There's more at the link. 

Contrast with the somewhat different views of Jason Crawford, which I've excerpted here.

Typical worker in a utopian society

Margaret Atwood on utopias

Ezra Klein, Margaret Atwood on Stories, Deception, and the Bible, The New York Times, March 25, 2022. From the interview:

Atwood: Well, now. Now we’re getting into it. Now we’re getting into the problem, OK. 19th century was a century of utopias. So many of them were written that Gilbert and Sullivan write an opera called “Utopia Limited,” which is a satire on it. But you only satirize something that’s a thing, you know, that’s become a vogue. Why did they write so many utopias? Because they’d already made so many amazing discoveries that had changed things.

So germs, who knew about them? We know about them now. And look what we can do now that we know about germs. Maybe now we’ll wash our hands before delivering babies and giving everybody puerperal fever the way we had been doing before. Steam engines, wow, this is amazing. Steam machinery and factories, look at that. Sewing machines, wow. Oh, and before that was all hand sewing. And what might be coming? Jules Verne writing about submarines on the way, air travel, “Around the World in 80 Days.” So it was just going to get better.

There were some problems like the woman problem, but the utopias usually solved those by giving the women a better deal and less clothing and all different kinds. And they solved overpopulation various ways. One of them was the future people just wouldn’t be interested in sex. So I read a lot of those when I was a Victorianist, and then people start writing them in the 20th century. Why? Because too many of them were tried in real life on a grand scale.

So Soviet Union comes in as a utopia. Hitler’s Germany comes in as a utopia, though only for certain people. Soviet Union tried to be more inclusive. But first, you had to kill those people like the Cossacks and Kulaks and what have you. But then you could have the utopia. And Mao’s China comes in as a utopia, and lots of others. And then it’s not great. So instead, we get “We” by Yevgeny Zamyatin. We get “1984.” We get “Fahrenheit 451.” It’s not great, and it becomes very difficult to write a utopia because nobody believes in it anymore. And they’d seen the results.

But I think we’re getting back to if not let’s have utopia, but first we have to kill all those people, I think we’re getting to the point where we’re saying, unless we improve the way we’re living, unless we change the way we’re living, goodbye, homo sapiens sapiens. You cannot continue on a planet as a mid-sized, land-based, oxygen-breathing mammal if there isn’t enough oxygen, which is what will happen if we kill the oceans and cut down all the trees.

So we are looking into the barrel of a gun as a species. And the big debate now is, OK, how much, how soon can it be done? And will people even go for it? And meanwhile, you’ve got all of these other problems that the problem you’re trying to solve is causing. So cascading series of events — can it be reversed? So, some of the thinking is being directed towards, yes, it can, because unless you do, yes, it can, you’re going to do, no, it can’t. And if it’s, no, it can’t, goodbye, us.

Thursday, March 24, 2022

A very interesting discussion of China in the world and in relation to America by Robert Wright & Kishore Mahbubani

China and America (and Taiwan and Ukraine) | Robert Wright & Kishore Mahbubani | The Wright Show

0:58 Kishore’s diplomatic career and recent book, Has China Won?
2:27 Kishore: America is walking towards a cliff when it comes to handling China
12:10 Is the US making a Chinese invasion of Taiwan more likely?
20:51 China’s perception of the Uyghur situation
34:41 How anti-China sentiment in America increases Chinese nationalism
49:11 Diagnosing the degeneration of US-China relations since 2012
1:02:22 Bob: Has technology empowered the Chinese to petition their government?
1:10:28 Good news! China has no plans to export its political system
1:18:00 Assessing the threat Beijing poses to freedom in America
1:26:54 China’s view of the Ukraine invasion

Robert Wright (Bloggingheads.tv, The Evolution of God, Nonzero, Why Buddhism Is True) and Kishore Mahbubani (Has China Won?). Recorded February 17, 2022 and March 22, 2022.

Mahbubani has just published The Asian 21st Century, which is available as a free download. I've downloaded but not read it.

About Kishore Mahbubani, Distinguished Fellow at the Asia Research Institute (ARI), National University of Singapore (NUS):

Mr Mahbubani has been privileged to enjoy two distinct careers, in diplomacy (1971 to 2004) and in academia (2004 to 2019). He is a prolific writer who has spoken in many corners of the world.

In diplomacy, he was with the Singapore Foreign Service for 33 years (1971 to 2004). He had postings in Cambodia, Malaysia, Washington DC and New York, where he twice was Singapore’s Ambassador to the UN and served as President of the UN Security Council in January 2001 and May 2002. He was Permanent Secretary at the Foreign Ministry from 1993 to 1998. As a result of his excellent performance in his diplomatic career, he was conferred the Public Administration Medal (Gold) by the Singapore Government in 1998.

Mr Mahbubani joined academia in 2004, when he was appointed the Founding Dean of the Lee Kuan Yew School of Public Policy (LKY School), NUS. He was Dean from 2004 to 2017, and a Professor in the Practice of Public Policy from 2006 to 2019. In April 2019, he was elected as an honorary international member to the American Academy of Arts and Sciences, which has honoured distinguished thinkers, including several of America’s founding fathers, since 1780.

Wednesday, March 23, 2022

AI Rogue #1

Ted Chiang on ethical blindness, capitalism and the rogue impulses of future AI

Ted Chiang, Silicon Valley Is Turning Into Its Own Worst Fear, BuzzFeed, December 18, 2017:

This summer, Elon Musk spoke to the National Governors Association and told them that “AI is a fundamental risk to the existence of human civilization.” Doomsayers have been issuing similar warnings for some time, but never before have they commanded so much visibility. Musk isn’t necessarily worried about the rise of a malicious computer like Skynet from The Terminator. Speaking to Maureen Dowd for a Vanity Fair article published in April, Musk gave an example of an artificial intelligence that’s given the task of picking strawberries. It seems harmless enough, but as the AI redesigns itself to be more effective, it might decide that the best way to maximize its output would be to destroy civilization and convert the entire surface of the Earth into strawberry fields. Thus, in its pursuit of a seemingly innocuous goal, an AI could bring about the extinction of humanity purely as an unintended side effect.

This scenario sounds absurd to most people, yet there are a surprising number of technologists who think it illustrates a real danger. Why? Perhaps it’s because they’re already accustomed to entities that operate this way: Silicon Valley tech companies.

Consider: Who pursues their goals with monomaniacal focus, oblivious to the possibility of negative consequences? Who adopts a scorched-earth approach to increasing market share? This hypothetical strawberry-picking AI does what every tech startup wishes it could do — grows at an exponential rate and destroys its competitors until it’s achieved an absolute monopoly. The idea of superintelligence is such a poorly defined notion that one could envision it taking almost any form with equal justification: a benevolent genie that solves all the world’s problems, or a mathematician that spends all its time proving theorems so abstract that humans can’t even understand them. But when Silicon Valley tries to imagine superintelligence, what it comes up with is no-holds-barred capitalism. [...]

I used to find it odd that these hypothetical AIs were supposed to be smart enough to solve problems that no human could, yet they were incapable of doing something most every adult has done: taking a step back and asking whether their current course of action is really a good idea. Then I realized that we are already surrounded by machines that demonstrate a complete lack of insight, we just call them corporations. Corporations don’t operate autonomously, of course, and the humans in charge of them are presumably capable of insight, but capitalism doesn’t reward them for using it. On the contrary, capitalism actively erodes this capacity in people by demanding that they replace their own judgment of what “good” means with “whatever the market decides.”

A bit later:

There have been some impressive advances in AI recently, like AlphaGo Zero, which became the world’s best Go player in a matter of days purely by playing against itself. But this doesn’t make me worry about the possibility of a superintelligent AI “waking up.” (For one thing, the techniques underlying AlphaGo Zero aren’t useful for tasks in the physical world; we are still a long way from a robot that can walk into your kitchen and cook you some scrambled eggs.) What I’m far more concerned about is the concentration of power in Google, Facebook, and Amazon.

And:

The fears of superintelligent AI are probably genuine on the part of the doomsayers. That doesn’t mean they reflect a real threat; what they reflect is the inability of technologists to conceive of moderation as a virtue. Billionaires like Bill Gates and Elon Musk assume that a superintelligent AI will stop at nothing to achieve its goals because that’s the attitude they adopted.

There's more at the link.

Tuesday, March 22, 2022

Geometry, symbols, and the human mind

Siobhan Roberts, Is Geometry a Language That Only Humans Know? NYTimes, Mar. 22, 2022.

Last spring, Dr. Dehaene and his Ph.D. student Mathias Sablé-Meyer published, with collaborators, a study that compared the ability of humans and baboons to perceive geometric shapes. The team wondered: What was the simplest task in the geometric domain — independent of natural language, culture, education — that might reveal a signature difference between human and nonhuman primates? The challenge was to measure not merely visual perception but a deeper cognitive process. [...]

In the experiment, subjects were shown six quadrilaterals and asked to detect the one that was unlike the others. For all the human participants — French adults and kindergartners as well as adults from rural Namibia with no formal education — this “intruder” task was significantly easier when either the baseline shapes or the outlier were regular, possessing properties such as parallel sides and right angles.

The researchers called this the “geometric regularity effect” and they hypothesized — it’s a fragile hypothesis, they admit — that this might provide, as they noted in their paper, a “putative signature of human singularity.”

The baboons live at a research facility in the South of France, beneath the Montagne Sainte-Victoire (a favorite of Cézanne’s), and they are fond of the testing booths and their 19-inch touch-screen devices. (Dr. Fagot noted that the baboons were free to enter the testing booth of their choice — there were 14 — and that they were “maintained in their social group during testing.”) They mastered the oddity test when training with nongeometric images — picking out an apple, say, among five slices of watermelon. But when presented with regular polygons, their performance collapsed.

Fruit, Flower, Geometry

[...] Probing further, the researchers tried to replicate the performance of humans and baboons with artificial intelligence, using neural-network models that are inspired by basic mathematical ideas of what a neuron does and how neurons are connected. These models — statistical systems powered by high-dimensional vectors, matrices multiplying layers upon layers of numbers — successfully matched the baboons’ performance but not the humans’; they failed to reproduce the regularity effect. However, when researchers made a souped-up model with symbolic elements — the model was given a list of properties of geometric regularity, such as right angles, parallel lines — it closely replicated the human performance.

These results, in turn, set a challenge for artificial intelligence. “I love the progress in A.I.,” Dr. Dehaene said. “It’s very impressive. But I believe that there is a deep aspect missing, which is symbol processing” — that is, the ability to manipulate symbols and abstract concepts, as the human brain does. This is the subject of his latest book, “How We Learn: Why Brains Learn Better Than Any Machine … for Now.”

Yoshua Bengio, a computer scientist at the University of Montreal, agreed that current A.I. lacks something related to symbols or abstract reasoning. Dr. Dehaene’s work, he said, presents “evidence that human brains are using abilities that we don’t yet find in state-of-the-art machine learning.”

That’s especially so, he said, when we combine symbols while composing and recomposing pieces of knowledge, which helps us to generalize.

There's more at the link.

Sunday, March 20, 2022

Lamppost, the river, and sky

Physical scale, tiny robots, and life

Sabine Hossenfelder has an interesting post (and video) about small, smaller, and vanishingly small robots, These tiny robots could work inside your body:

Here's a passage:

No, the problem with tiny robots is a different one. It’s that, regardless of whether the prefix is nano, micro, or xeno, at such small scales, different laws of physics become relevant. You can’t just take a human sized robot and scale it down, that makes no sense.

For tiny robots, forces like friction and surface tension become vastly more important than they are for us. That’s why insects can move in ways that humans can’t, like walking on water, or walking upside-down on the ceiling, or like, flying. Tiny robots can indeed fly entirely without wings. They just float on air like dust grains. Tiny robots need different ways of moving around, depending on their size and the medium they’re supposed to work in, or on.
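Here's a rough way to see the point, using nothing more than the schoolbook square-cube argument (my gloss, not Hossenfelder's): weight scales with volume, roughly the cube of a robot's length, while friction, drag, and surface tension scale more like area, so shrinking the robot makes surface forces loom ever larger relative to its weight.

```python
# Square-cube sketch: shrink a robot by a factor k and compare force scalings.
# Purely schematic; real friction, drag, and surface tension follow messier laws.
for k in (1, 10, 100, 1000):          # 1 = human scale, 1000 = roughly millimeter scale
    weight_like  = 1 / k**3           # volume-dependent forces ~ L^3
    surface_like = 1 / k**2           # area-dependent forces  ~ L^2
    print(f"shrink x{k:>4}: surface/weight ratio grows {surface_like / weight_like:.0f}x")
```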

Scale played an important role in a paper David Hays and I published some years ago:

William Benzon and David G. Hays, A Note on Why Natural Selection Leads to Complexity, Journal of Social and Biological Structures 13: 33-40, 1990, https://www.academia.edu/8488872/A_Note_on_Why_Natural_Selection_Leads_to_Complexity https://ssrn.com/abstract=1591788

Here’s the relevant passage:

We live in a world in which “evolutionary processes leading to diversification and increasing complexity” are intrinsic to the inanimate as well as the animate world (Nicolis and Prigogine 1977: 1; see also Prigogine and Stengers 1984: 297-298). That this complexity is a complexity inherent in the fabric of the universe is indicated in a passage where Prigogine (1980: xv) asserts “that living systems are far-from-equilibrium objects separated by instabilities from the world of equilibrium and that living organisms are necessarily ‘large,’ macroscopic objects requiring a coherent state of matter in order to produce the complex biomolecules that make the perpetuation of life possible.” Here Prigogine asserts that organisms are macroscopic objects, implicitly contrasting them with microscopic objects.

Prigogine has noted that the twentieth-century introduction of physical constants such as the speed of light and Planck's constant has given an absolute magnitude to physical events (Prigogine and Stengers 1984: 217-218). If the world were entirely Newtonian, then a velocity of 400,000 kilometers per second would be essentially the same as a velocity of 200,000 kilometers per second. That is not the universe in which we live. Similarly, a Newtonian atom would be a miniature solar system; but a real atom is quite different from a miniature solar system.

Physical scale makes a difference. The physical laws which apply at the atomic scale, and smaller, are not the same as those which apply to relatively large objects. That the pattern of physical law should change with scale, that is a complexity inherent in the fabric of the universe, that is a complexity which does not exist in a Newtonian universe. At the molecular level life is subject to the quantum mechanical laws of the micro-universe. But multi-celled organisms are large enough that, considered as homogeneous physical bodies, they exist in the macroscopic world of Newtonian mechanics. Life thus straddles a complexity which inheres in the very structure of the universe.

Do the people have any influence on national politics in America?

Daniel Bessner, The End of Mass Politics, Foreign Exchanges, Jan. 5, 2022:

If social media is any indication—and, despite what people might say or hope, it very much is—the major feelings experienced by most Americans are dislocation and unease. In the last twenty years, the US government has lurched from failure to failure, to the point where a repetition of American-led disasters—Afghanistan, Iraq, Katrina, the Great Recession, Libya, Trump—reads like a negative doxology. After living through Washington’s inability and/or unwillingness to address adequately the COVID-19 pandemic either at home or abroad over the past two years, many Americans have resigned themselves to a reality in which their lives are getting, and will continue to get, worse.

Ordinary people, it seems, have almost no influence on politics. Despite clear majority support for ideas like public health care or universal childcare, little headway has been made in getting these or similar programs enacted.

We live in an era when the institutions of mass politics—mass-based political parties, the mass media, mass protests, and social media—have proven themselves unable to serve as vehicles of the people’s will. This, I believe, is a major if under-appreciated reason why Americans feel so disconnected from their government and their communities, and why so many of us are depressed: we are told, and we trust, that there are productive ways to channel our political desires, but in actuality this is simply not true.

Despite our many interactions with the Democratic Party; despite our taking to the streets to protest war or police violence; despite our manifold op-eds decrying the present state of affairs; and despite our innumerable posts on Twitter or Facebook, those who wield power have displayed little interest in heeding the advice or the will of the demos. Ours is an individualistic, neoliberal era overlaid with mass institutions and forms that, for all intents and purposes, are atavistic, toothless, and ineffective, unable to force the power elite’s hand in meaningful ways.

How did this come to be? As I discussed in an earlier column, there has been a century-long, and largely successful, elite project to remove ordinary people from the decision-making process.

There's more at the link.

Thursday, March 17, 2022

Not another version of that iris!

Programming for the masses?

Craig S. Smith, ‘No-Code’ Brings the Power of A.I. to the Masses, NYTimes, March 15, 2022:

Mr. Cusack is part of a growing army of “citizen developers,” who use new products that allow anyone to apply artificial intelligence without having to write a line of computer code. Proponents of the “no-code” A.I. revolution believe it will change the world: It used to require a team of engineers to build a piece of software, and now users with a web browser and an idea have the power to bring that idea to life themselves.

“We are trying to take A.I. and make it ridiculously easy,” said Craig Wisneski, a no-code evangelist and co-founder of Akkio, a start-up that allows anyone to make predictions using data.

A.I. is following a familiar progression. “First, it’s used by a small core of scientists,” Jonathan Reilly, Akkio’s other co-founder, said. “Then the user base expands to engineers who can navigate technical nuance and jargon until, finally, it’s made user-friendly enough that almost anyone can use it.”

Just as clickable icons have replaced obscure programming commands on home computers, new no-code platforms replace programming languages with simple and familiar web interfaces. And a wave of start-ups is bringing the power of A.I. to nontechnical people in visual, textual and audio domains.

There's more at the link.

I make no predictions about the future of this technology. It seems likely to improve, increase its range of competence, and be used by more and more people. But how far and how many more? I note, however, that the world of computation is AI's natural environment and so is readily accessible to it. It's a "natural" environment for machine learning.

The thing is, this is not software that's demonstrating human-level intelligence, this is not artificial general intelligence (AGI). It's what Michael Jordan calls Intelligence Augmentation. I think we're going to see more of this in the future.

Of course, there are also those who believe that AGI is inevitable, even that it will arrive within half a century or less. Some among them fear that AGI will go rogue. I don't know how much time and energy is devoted to those fears, but I can't help but think it blinds people to what is actually happening.

Is the academy a scam that somehow manages to produce knowledge?

Tuesday, March 15, 2022

Flush with thought [What's computation?]

And then there's this thread:

Near, far, high, and bright

Beyond AGI (Artificial General Intelligence)

Tyler Cowen recently posted "Holden Karnofsky emails me on transformative AI" over at Marginal Revolution.

I made two comments. One was similar to my recent post on Rodney Brooks, but much shorter – see this Brooks post as well. I'm posting the other one here as a place-holder. I may or may not elaborate it into a more substantial post at some future time.

Just what IS transformative AI? And AGI, what's that? Or superintelligence? They're highly abstract ideas that can easily be decked out in a grand way.

I've been reading around and about in the material at and linked to that "most important century" page. I certainly haven't read it all. It's interesting stuff. But, for better or worse, I find that Kim Stanley Robinson's science fiction novel, New York 2140, presents a more tangible and believable picture of the future. I'm not saying I agree with it, much less like it, only that it feels more credible to me than a world in which, for example, completely digital people are zipping around in a big pile of compute sometime later in this century.

Now, here's a page listing mostly recent open access articles about robotics and AI that have been published in the Nature family of journals. I've not read any of them nor even the abstracts. My remarks are offered solely on the basis of these titles:

A wireless radiofrequency-powered insect-scale flapping-wing aerial vehicle.
Enhancing optical-flow-based control by learning visual appearance cues for flying robots.
Highly accurate protein structure prediction with AlphaFold.
Advancing mathematics by guiding human intuition with AI.
A mobile robotic chemist.
Neuro-inspired computing chips.
Designing neural networks through neuroevolution.
Deep learning robotic guidance for autonomous vascular access.

I'm pretty sure that none of them assert that they've achieved AGI and I'd be surprised if any of them even hinted that AGI will be popping up in mid-century. I see no reason why these kinds of things won't keep coming and coming and coming. Some will eventuate in practical technologies. Some, likely most, won't. But, what's the likelihood that the cumulative result will be transformative, without, however, producing such wonders as fully digital people?

You might want to take a look at Michael Jordan, "Artificial Intelligence — The Revolution Hasn’t Happened Yet", Medium, 4/19/2018. He suggests that beyond what he calls "human-imitative AI" (such as digital people) we should recognize "Intelligence Augmentation" and "Intelligent Infrastructure." He concludes:

Moreover, we should embrace the fact that what we are witnessing is the creation of a new branch of engineering. The term “engineering” is often invoked in a narrow sense — in academia and beyond — with overtones of cold, affectless machinery, and negative connotations of loss of control by humans. But an engineering discipline can be what we want it to be.

In the current era, we have a real opportunity to conceive of something historically new — a human-centric engineering discipline.

I will resist giving this emerging discipline a name, but if the acronym “AI” continues to be used as placeholder nomenclature going forward, let’s be aware of the very real limitations of this placeholder. Let’s broaden our scope, tone down the hype and recognize the serious challenges ahead.

Friday, March 11, 2022

Advice on playing ballads [jazz]

Distant helicopter in the sky

A Neural Programming Language for the Reservoir Computer

Check out the whole thread.

Here's the abstract for the linked article:

From logical reasoning to mental simulation, biological and artificial neural systems possess an incredible capacity for computation. Such neural computers offer a fundamentally novel computing paradigm by representing data continuously and processing information in a natively parallel and distributed manner. To harness this computation, prior work has developed extensive training techniques to understand existing neural networks. However, the lack of a concrete and low-level programming language for neural networks precludes us from taking full advantage of a neural computing framework. Here, we provide such a programming language using reservoir computing -- a simple recurrent neural network -- and close the gap between how we conceptualize and implement neural computers and silicon computers. By decomposing the reservoir's internal representation and dynamics into a symbolic basis of its inputs, we define a low-level neural machine code that we use to program the reservoir to solve complex equations and store chaotic dynamical systems as random access memory (dRAM). Using this representation, we provide a fully distributed neural implementation of software virtualization and logical circuits, and even program a playable game of pong inside of a reservoir computer. Taken together, we define a concrete, practical, and fully generalizable implementation of neural computation. 
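For readers who haven't met reservoir computing before, here's a minimal echo state network in NumPy. It is not the paper's neural machine code, just a sketch of the "simple recurrent neural network" the abstract refers to: a fixed random reservoir is driven by an input signal, and only a linear readout is trained (here by ridge regression) to predict the signal one step ahead.

```python
# Minimal echo state network (reservoir computer) sketch in NumPy.
# Generic illustration only -- not the paper's neural machine code.
import numpy as np

rng = np.random.default_rng(0)
n_res, n_steps = 300, 2000

# Input signal: a sine wave; the task is one-step-ahead prediction.
u = np.sin(0.2 * np.arange(n_steps + 1))

# Fixed random weights. The recurrent matrix is rescaled so its spectral
# radius is below 1 -- the usual "echo state" heuristic.
W_in = rng.uniform(-0.5, 0.5, size=n_res)
W = rng.normal(size=(n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))

# Drive the reservoir with the signal and collect its states.
x = np.zeros(n_res)
states = np.zeros((n_steps, n_res))
for t in range(n_steps):
    x = np.tanh(W @ x + W_in * u[t])
    states[t] = x

# Train only the linear readout (ridge regression) to predict u[t + 1].
targets = u[1:]
ridge = 1e-6
W_out = np.linalg.solve(states.T @ states + ridge * np.eye(n_res),
                        states.T @ targets)

pred = states @ W_out
print("mean squared prediction error:", np.mean((pred - targets) ** 2))
```

The point of the design is that the recurrent weights are never trained; all of the "programming" happens in how you drive the reservoir and read it out, which is the gap the paper's symbolic decomposition is meant to close.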

Thursday, March 10, 2022

Wednesday, March 9, 2022

An interesting Twitter thread on the Russian economy

After a bunch of tweets, including a series on Mexican avocado cartels, we get this:

And so forth. H/t Tyler Cowen.

On the relationship between theory of consciousness and methodology

Abstract of the linked article:

Understanding how consciousness arises from neural activity remains one of the biggest challenges for neuroscience. Numerous theories have been proposed in recent years, each gaining independent empirical support. Currently, there is no comprehensive, quantitative and theory-neutral overview of the field that enables an evaluation of how theoretical frameworks interact with empirical research. We provide a bird’s eye view of studies that interpreted their findings in light of at least one of four leading neuroscientific theories of consciousness (N = 412 experiments), asking how methodological choices of the researchers might affect the final conclusions. We found that supporting a specific theory can be predicted solely from methodological choices, irrespective of findings. Furthermore, most studies interpret their findings post hoc, rather than a priori testing critical predictions of the theories. Our results highlight challenges for the field and provide researchers with an open-access website (https://ContrastDB.tau.ac.il) to further analyse trends in the neuroscience of consciousness.
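The striking claim, that which theory a study ends up supporting can be predicted from methodological choices alone, is mechanically just a classification problem. Here's a sketch of that sort of analysis on invented data; the features, the generating rule, and the labels are all hypothetical, and the real numbers live in the ContrastDB database linked above.

```python
# Sketch: predict which theory a study supports from coded methodological
# features alone. Data and feature names are invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 200

# Coded features for n imaginary studies, e.g. 0/1 flags for "uses a
# report paradigm", "uses fMRI", "analyses a frontal region of interest".
X = rng.integers(0, 2, size=(n, 3)).astype(float)

# Invented rule: studies that analyse a frontal region tend to be coded
# as supporting theory 1, the rest theory 0, plus noise.
y = (X[:, 2] + rng.normal(0.0, 0.4, n) > 0.5).astype(int)

clf = LogisticRegression()
scores = cross_val_score(clf, X, y, cv=5)
print("cross-validated accuracy from methodology alone:",
      scores.mean().round(2))
```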

Another wacked-out iris

What is cognitive science?

While I’m thinking about definitions, I came up against the problem of defining “cognitive science” when I wrote my dissertation, “Cognitive Science and Literary Theory,” back in the Ancient Days. Cognitive science was rather new and I couldn’t expect literary critics to know what it was. For that matter, no one really knew what it was nor, to this day, does anyone quite know what it is. It’s just a bunch of ideas and themes about the mind, originally inspired by computing.

Anyhow, I chose to say that cognitive science was the study of the relationships between: 

1) behavior,
2) computation,
3) neuroanatomy and physiology,
4) ontogeny, and
5) phylogeny

Roughly speaking, behavior is what animals do. Computation is, shall we say (I’ll say a bit more shortly), what nervous systems do in linking perception and action. Neuroanatomy is about the regions of the brain and the peripheral nervous system. Cognitive neuroscience, then, is about the relations among neuroanatomy & physiology, behavior, and computation. Ontogeny is the maturation of animals from birth to adulthood – I read a great deal about human developmental psychology early in my career. Phylogeny is the evolutionary history of a species or lineage.

As for computing, consider a passage from a most interesting book by Peter Gärdenfors, Conceptual Spaces (MIT 2000) p. 253:

On the symbolic level, searching, matching of symbol strings, and rule following are central. On the subconceptual level, pattern recognition, pattern transformation, and dynamic adaptation of values are some examples of typical computational processes. And on the intermediate conceptual level, vector calculations, coordinate transformations, as well as other geometrical operations are in focus. Of course, one type of calculation can be simulated by one of the others (for example, by symbolic methods on a Turing machine). A point that is often forgotten, however, is that the simulations will, in general, be computationally more complex than the process that is simulated.

I rather suspect that all of these kinds of processes take place in the human brain.
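To give each of those levels a minimal computational face, here is a toy sketch with examples of my own devising; nothing in it comes from Gärdenfors.

```python
import numpy as np

# Symbolic level: matching symbol strings and following a rewrite rule.
def rewrite(s: str) -> str:
    """Apply a single production rule: every 'AB' becomes 'BA'."""
    return s.replace("AB", "BA")

print("symbolic:", rewrite("CABAB"))

# Conceptual level: a geometric operation in a two-dimensional space --
# rotate a point by 90 degrees with a coordinate transformation.
point = np.array([1.0, 0.0])
rotate_90 = np.array([[0.0, -1.0],
                      [1.0,  0.0]])
print("conceptual:", rotate_90 @ point)

# Subconceptual level: a distributed pattern transformation by a small
# (untrained) two-layer network -- no symbols, just numbers flowing.
rng = np.random.default_rng(0)
W1, W2 = rng.normal(size=(8, 2)), rng.normal(size=(2, 8))
pattern = np.array([0.3, -0.7])
print("subconceptual:", W2 @ np.tanh(W1 @ pattern))
```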

In my way of thinking, though, only the symbolic level processes are irreducibly computational as implemented in the human brain. The other processes are implemented in some non-computational way. That is to say, we can model or simulate anything with computing given an appropriate description of the phenomenon. But that doesn’t mean that the phenomenon is itself computational. The simulation of an atomic explosion is very different from an explosion itself. The former is computational, the latter is not.

Thus, I have been attracted by Walter Freeman’s account of neural ensembles as complex dynamical systems. He did computer simulations as part of his overall investigation, but that doesn’t mean he thought of those ensembles as performing computations. No, he thought of them as physical systems with many interacting parts.
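By way of illustration, here is a small simulation of coupled oscillators of the broad kind used to model neural ensembles as dynamical systems. It is a generic Kuramoto-style sketch, not Freeman's own equations: the program computes a description of the ensemble's trajectory, but the ensemble it describes is a physical process, not a computer.

```python
# Generic Kuramoto-style simulation of coupled oscillators -- a toy
# stand-in for modelling a neural ensemble as a dynamical system.
import numpy as np

rng = np.random.default_rng(1)
n, coupling, dt, steps = 50, 1.5, 0.01, 5000

omega = rng.normal(1.0, 0.1, n)        # natural frequencies
theta = rng.uniform(0, 2 * np.pi, n)   # initial phases

for _ in range(steps):
    # Each oscillator is pulled toward the phases of the others.
    dtheta = omega + (coupling / n) * np.sum(
        np.sin(theta[None, :] - theta[:, None]), axis=1)
    theta = (theta + dt * dtheta) % (2 * np.pi)

# Order parameter: near 1.0 means the ensemble has synchronized.
r = np.abs(np.mean(np.exp(1j * theta)))
print("synchronization (order parameter):", round(r, 3))
```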

So, that second element in the five-way correspondence likely involves more than computing. Just what, though, is not at all clear.

A useful definition of AGI (artificial general intelligence)

The term, I believe, was coined sometime in the 1990s because, by then, work in AI had concentrated on narrow application domains of one sort or another (e.g. chess). So, artificial general intelligence (AGI) is not narrow. It is broad. Here's a definition supplied by Tom Davidson about a year ago:

By AGI, I mean computer program(s) that can perform virtually any cognitive task as well as any human, for no more money than it would cost for a human to do it. The field of AI is largely understood to have begun in Dartmouth in 1956, and since its inception one of its central aims has been to develop AGI.

OK. I note that the scope is very broad, which is no surprise to me.

But what does it mean? Yes, I know what the words mean and all that. But "any cognitive task"? Would you care to enumerate them? If not, then just how are we to work with that phrase?