Saturday, October 31, 2020

There’s more to progress than (mere) productivity: What about diversity of occupational specialization?

The literature on economic growth seems to have settled on productivity as the defining measure of progress, which is not unreasonable. But there is a largely empirical literature from the second half of the previous century that was interested in cultural complexity. That literature was mostly an anthropological interest in preliterate societies, though there was some archaeological work on ancient civilizations as well. That work, as far as I know, has been given almost no consideration in economists’ study of growth.

In the mid-1990s the late David Hays undertook a review and synthesis of the literature on cultural complexity: The Measurement of Cultural Evolution in the Non-Literate World (1998). I would like to quote a passage from that book. Though I quote it without explaining the context, the import should be obvious enough (p. 203):

Using the 1953 Yellow Pages for central Los Angeles, Naroll estimated that there must exist 500 craft specialties in that area, and guessed at 1000 or more in “the entire Los Angeles settlement” (1956:702). That would be 10-20 times as many as his regression line predicted:

What is involved may be a curved line of regression or it may be a step function, a jump from one allometric line to another reflecting a sudden fundamental change in developmental dynamics . . .

[…] The number of occupational specialties recognized by the United States government in its official classification is in the tens of thousands. But something different again can be seen in contemporary life. Whereas middle-aged workers who lose factory jobs have much difficulty preparing for a new kind of work, educated young persons switch from one specialty to another spontaneously. The concept of a lifetime dedication to a single craft specialty, which may be as old as burnt pottery and woven cloth, is perhaps about to lose its hold on economic life.

Two things: 1) that regression line was based on studies of non-literate cultures. The number of craft specialties would seem to represent a fundamentally different kind of social organization, rather than simply more and more of the same old same old. 2) What about the proliferation of craft/occupational specialties in the last quarter to half a century?

That may not represent growth as economists understand it, but it surely represents an important aspect of cultural change. Perhaps the economists need to take account of it.
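Naroll’s “allometric line,” incidentally, is just a linear regression in log-log space. A minimal sketch (all data invented for illustration; these are not Naroll’s actual samples or coefficients) shows how a modern settlement can land an order of magnitude above a line fitted to small non-literate societies:

```python
import numpy as np

# Invented data: settlement population vs. number of craft specialties
# for small non-literate societies (illustration only, not Naroll's sample).
pop = np.array([50, 200, 1_000, 5_000, 20_000])
crafts = np.array([4, 6, 9, 14, 20])

# Allometric fit: log(crafts) = a + b * log(pop)
b, a = np.polyfit(np.log(pop), np.log(crafts), 1)

# Extrapolate the fitted line to a city of two million and compare with
# Naroll's guess of 1000 specialties for greater Los Angeles.
predicted = np.exp(a + b * np.log(2_000_000))
observed = 1_000
print(f"line predicts ~{predicted:.0f} specialties; "
      f"observed ~{observed} ({observed / predicted:.0f}x the prediction)")
```

With these made-up numbers the observed count comes out at roughly an order of magnitude above the extrapolated line, which is the shape of the discrepancy Naroll noticed; whether the truth is a curved regression or a step function is exactly the question Hays quotes him on.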

More Caven Point – trees, leaves, bushes, grass, buildings, sky [urban pastoral]

Some notes on the fourth photo, reading it from bottom to top. The blurry stuff at the bottom is plants relatively near to me while I'm focused on those skyscrapers in the distance. That's a deliberate feature of my aesthetic. I have no interest in pretending that we're looking directly at the world; those blurs are an artifact of the photographic process. But they are also explicit markers of distance. What appears to be a horizontal canopy above the bottom edge is just that, a canopy over a walkway along the water's edge. My guess is that it is there to protect people from stray golf balls from Liberty National Golf Course, immediately above.

You can see a green to the right, then a bit of sand trap, and then behind that some rough, and some shrubs, and the roof-line of some building (I don't know what). Then you see the tops of four mid-line buildings belonging to an apartment complex called The Beacon. That complex started life as a hospital built by Boss Hague back in the 1930s. The two skyscrapers are quite new and were built by the Kushner Corporation. Jared Kushner is Trump's son-in-law and, as far as I can tell, more or less runs the Federal Government.

Finally, the owner of Liberty National is trying to enlarge the course by taking over Caven Point so he can place three holes there. The dispute between him and the Friends of Liberty State Park over Caven Point has gotten quite nasty.

The Simulation Hypothesis, a reductio ad absurdum [where have all the good minds gone, stark raving mad]

Tyler Cowen has an interesting take on the simulation hypothesis (the idea that we’re living in a computer simulation):

As you may already know, my view is that there is no proper external perspective, and the concept of “living in a simulation” is not obviously distinct from living in a universe that follows some kind of laws, whether natural or even theological. The universe is simultaneously the simulation and the simulator itself! Anything “outside the universe doing the simulating” is then itself “the (mega-)universe that is simultaneously the simulation and the simulator itself.” etc.

I agree with him on that first clause: there really is no proper external perspective (a problem that Kant, among others, wrestled with). As for the rest of it, well, OK, why not?

But it seems to me that it does empty Bostrom’s thought experiment of most of its juice. As Bostrom originally proposed the simulation hypothesis there is a carefully prepared frame and set-up (Are You Living in a Computer Simulation?, Philosophical Quarterly, 2003, vol. 53, No. 211, 243-255). First he argues that we ourselves are not so very far from having the computational capacity to simulate a full-on human consciousness. If we can do one, then surely we’ll be able to do 100s, 1000s, millions and billions! At this point a probabilistic argument places us in the future watching over the shoulders of superintelligent beings with super-duper-intelligent computers running scads of simulations of the world and somewhere in one of those simulations we’ll find ourselves.

Except of course we won’t. We’ll just get dizzy.

That is, the simulation hypothesis is not just the bare idea that we’re electrons spinning around in a cosmic computation. It’s the whole frame that contextualizes that idea. And that frame holds out the hope that out/up there somewhere are beings who see and understand all.

Except of course they don’t. As they are us.

What kind of blinders must one assume in order to find such puzzles intellectually worthwhile?

I note that, for all the criticism and ridicule it has received, post-modern literary criticism hasn’t produced anything sillier than the simulation hypothesis and its siblings, the various conceptions and contraptions of transhumanism. And then we have the foundations of physics with its five decades of empirically fruitless theorizing. Has the whole world gone mad?

* * * * *

For more on the simulation hypothesis see these older posts:

My problem with the simulation argument: It’s too idealist in its assumptions (all mind, no matter), July 25, 2018

Two silly ideas: We live in a simulation & our computers are going to have us for lunch, July 22, 2018

Friday, October 30, 2020

Does the future of AI belong to deep pockets? [Gwern addendum]

As far as I can tell, there's no particularly good reason to think that deep pockets belong to those with greater imagination. And, I must admit, my biases are rather in the other direction. I suspect that those with deep pockets are likely to seek out intellectually conservative investigators in order to protect their investment. See, for example, Gwern on the scaling hypothesis:

The blessings of scale in turn support a radical theory: an old AI paradigm held by a few pioneers in connectionism (early artificial neural network research) and by more recent deep learning researchers, the scaling hypothesis. The scaling hypothesis regards the blessings of scale as the secret of AGI: intelligence is ‘just’ simple neural units & learning algorithms applied to diverse experiences at a (currently) unreachable scale. As increasing computational resources permit running such algorithms at the necessary scale, the neural networks will get ever more intelligent.

When? Estimates of Moore’s law-like progress curves decades ago by pioneers like Hans Moravec indicated that it would take until the 2010s for the sufficiently-cheap compute for tiny insect-level prototype systems to be available, and the 2020s for the first sub-human systems to become feasible, and these forecasts are holding up. (Despite this vindication, the scaling hypothesis is so unpopular an idea, and difficult to prove in advance rather than as a fait accompli, that while the GPT-3 results finally drew some public notice after OpenAI enabled limited public access & people could experiment with it live, it is unlikely that many entities will modify their research philosophies, much less kick off an ‘arms race’.)

More concerningly, GPT-3’s scaling curves, unpredicted meta-learning, and success on various anti-AI challenges suggests that in terms of futurology, AI researchers’ forecasts are an emperor sans garments: they have no coherent model of how AI progress happens or why GPT-3 was possible or what specific achievements should cause alarm, where intelligence comes from, and do not learn from any falsified predictions. Their primary concerns appear to be supporting the status quo, placating public concern, and remaining respectable. As such, their comments on AI risk are meaningless: they would make the same public statements if the scaling hypothesis were true or not.

Depending on what investments are made into scaling DL, and how fast compute grows, the 2020s should be quite interesting—sigmoid or singularity?

Gwern has a more favorable view of AI than I do, and greater faith in the efficacy of mere scaling, but I do share his skepticism about motivation. These people, whatever they think of themselves, are not intellectual adventurers setting off for adventure in parts unknown. They are well-paid clerks in search of titillation.

Friday Fotos: FR8s, graffiti at Caven Point

The Ranch [all in the family | Media Notes 51]

I started watching The Ranch when it first started streaming in 2018, stuck with it for, say, a season and a half, and then dropped it for whatever reason. I got back to it a few weeks ago and am now well into the fourth and final season. As the title implies, it’s set on a small family-run ranch in Colorado. Willa Paskin noted in Slate:

The Ranch is a red-state sitcom, though it takes place in the swing state of Colorado, and is good enough to be watched by people of any political affiliation. The goodness sneaks up on you. It is a sitcom that is meatier than it is funny, unusually in touch with the painful, disappointing aspects of life. […] But The Ranch is sophisticated in pursuit of its audience, preferring to dignify the Bennetts’ hardscrabble circumstances than to raise anxiety by fixating on their precariousness.

That’s a fair assessment.

The ranch is run by the patriarch, Beau Bennett (Sam Elliott), and his two sons, Colt (Ashton Kutcher) and Rooster (Danny Masterson), while Maggie Bennett (Debra Winger) runs a local bar. She’s estranged from, then divorces, her husband, Beau, but they maintain more or less cordial relations. Relations between Beau and his two sons, Colt and Rooster, are at best problematic. While the individuals try hard, the Bennetts are a dysfunctional family. In a way, the harder they try, the more dysfunctional they are.

There’s some good work to be done analyzing just how this family functions, but that’s more than I want to take on in this brief note. I note that Beau, Colt, and Rooster drink a lot and, as I already mentioned, Maggie runs a bar. Though she doesn’t have a drinking problem, she does like marijuana, as do others in the community. And at least one character, who becomes central in the later seasons, has a serious addiction problem. The show harbors no pretense that this rural community is a drug-free zone.

The show runs a continuous narrative, rather than a succession of self-contained half-hour episodes, and the Bennetts seem to live at the edge of financial disaster. So the plot has the feel of free-form improvisation; these people are just making things up as they go along. That’s one thing.

The other is that, because the Iron River Ranch is a family farm, family life and work life are intertwined. Much of the conflict within the family, especially between the three men, but not only them, is around and about roles in running the farm: Who is competent at what tasks and who has what rights and obligations? This dynamic is set off against the Neumann’s Hill ranching corporation, which plays a more prominent role as the series moves on. Neumann’s Hill wants to dominate and buy up everything it can while the Bennetts want to remain free and independent. As the fourth season moves along – I’m only about half way through it – Neumann’s Hill seems to be winning.

These two issues, the lack of security and constancy in life, and the relationship between family and business life, seem to define the overall scope of this series. How do those issues speak to the American public? I don’t know. I’d really like to know the demographics of the audience. How will that audience be voting in this coming presidential election? There’s an obvious appeal to Trump’s America, but I’m not in Trump’s America and I like the show. Yet, it’s one thing for me to say that the show appeals to Trump’s America, but does it? I was right about NCIS, am I right about this?

I note that the show makes no reference to current presidential politics, so one can only guess at how, say, Beau Bennett voted in 2016 or how he’d vote in 2020. I’d like to think that, though he’s a gun-toting conservative and proud of it, he couldn’t stomach Trump. But he certainly couldn’t stomach Biden either. Perhaps he’d write in Reagan, which he’s done before.

Is American exceptionalism at core a detachment from reality?

John Gray reviews Bruno Maçães, History Has Begun: The Birth of a New America, in The New Statesman, October 28, 2020.

So:

For the Portuguese former diplomat Bruno Maçães, however, the decoupling of American culture from the objective world is a portent of great things to come. Finally shedding its European inheritance, America is creating a truly new world, “a new, indigenous American society, separate from modern Western civilisation, rooted in new feelings and thoughts”. The result, Maçães suggests, is that American politics has become a reality show. The country of Roosevelt and Eisenhower was one in which, however lofty the aspiration, there was always a sense that reality could prove refractory. The new America is built on the premise that the world can be transformed by reimagining it. Liberals and wokeists, conservatives and Trumpists are at one in treating media confabulations as more real than any facts that may lie beyond them.

Maçães welcomes this situation, since it shows that American history has finally begun. As he puts it at the end of this refreshingly bold and deeply thought-stirring book, “For America the age of nation-building is over. The age of world-building has begun.”

And:

American thought has always tended to a certain solipsism, a trait that has become more prominent in recent times. If Fukuyama and his neoconservative allies believed the world was yearning to be remade on an imaginary American model, the woke movement believes “whiteness” accounts for all the evils of modern societies. America’s record of slavery and racism is all too real. Even so, passing over in silence the repression and enslavement of peoples outside the West – Tibetans, Uighurs and now Mongols in China, for example – because they cannot be condemned as crimes of white supremacy reveals a wilfully parochial and self-absorbed outlook.

Wokery is the successor ideology of neo-conservatism, a singularly American world-view. That may be why it has become a powerful force only in countries (such as Britain) heavily exposed to American culture wars. In much of the world – Asian and Islamic societies and large parts of Europe, for example – the woke movement is marginal, and its American prototype viewed with bemused indifference or contempt.

In contrast, Russia:

Reflected in varying degrees throughout the west, America’s immersion in self-invented worlds contrasts starkly with Russian practice. Like the US, Russia conceals awkward facts behind a media-created veil. Unlike those in the US, Russia’s ruling elites know this virtual world is deceptive. The point is not to create a new reality but to obscure what is actually happening. When Vladimir Putin asserted that Russian forces had not entered Ukraine, no one apart from a handful of anti-western ultra-leftists believed him. When the Kremlin denies Russian pilots are targeting schools and hospitals in Syria, there is well-founded disbelief. When officials deny that the Russian state had any hand in the 2018 Novichok attack in Salisbury and the poisoning of opposition leader Alexei Navalny, hardly anyone believes this is true. Nonetheless, the continuous repetition of these falsehoods has succeeded in clouding perception of the behaviour of Putin’s regime.

The Chinese project of cultural homogenization is borrowed from the West:

Regime-friendly Chinese intellectuals are fond of telling western visitors that China is not a nation state but a “civilisation state”, and there has been a shift towards touting the merits of Confucian governance. Yet in many ways Xi’s regime is copying the homogenising national states constructed in Europe after the French Revolution. Like them, it aims to impose a monoculture where different ways of life existed before. In Revolutionary France, which under the ancien régime contained many languages and peoples, this was achieved through military conscription and a national education system. Another, more violent process of nation-building by ethnic cleansing occurred in central and eastern Europe after the collapse of the Hapsburg empire.

Following these precedents, Xi is using the state machine to fabricate an immemorial Chinese nation and obliterate minority cultures. As in its pursuit of maximal economic growth, China is building a future imported from the Western past. ... How curious if, as the 21st century staggers on, a hyper- authoritarian China emerges as the only major state still governed by an Enlightenment faith in progress.

H/t Tyler Cowen.

Thursday, October 29, 2020

UFO Events, a Thought Experiment about the Evolution of Language

I'm bumping this to the top on general principle. And because of this recent string of tweets involving me, Dan Everett, and Mark Changizi. See also this more recent post, Gestalt Switch in the Emergence of Human Culture.
The problem of human origins, of which language origins is one aspect, is deep and important. It is also somewhat mysterious. If we could travel back in time at least some of those mysteries could be cleared up. One that interests me, for example, is whether or not the emergence of language was preceded by the emergence of music, or more likely, proto-music. Others are interested in the involvement of gesture in language origins.

Some of the attendant questions could be resolved by traveling back in time and making direct observations. Still, once we’d observed what happened and when it happened, questions would remain. We still wouldn’t know the neural and cognitive mechanisms, for they are not apparent from behavior alone. But our observations of just what happened would certainly constrain the space of models we’d have to investigate.

Unfortunately, we can’t travel back in time to make those observations. That difficulty has the peculiar effect of reversing the inferential logic of the previous paragraph. We find ourselves in the situation of using our knowledge of neural and cognitive mechanisms to constrain the space of possible historical sequences.

Except, of course, that our knowledge of neural and cognitive mechanisms is not very secure. And large swaths of linguistics are mechanism free. To be sure, there may be an elaborate apparatus of abstract formal mechanism, but just how that mechanism is realized in step-by-step cognitive and neural processes, that remains uninvestigated, except among computational linguists.

The upshot of all this is that we must approach these questions indirectly. We have to gather evidence from a wide variety of disciplines – archeology, physical and cultural anthropology, cognitive psychology, developmental psychology, and the neurosciences – and piece it together. Such work entails a level of speculation that makes well-trained academicians queasy.

What follows is an out-take from Beethoven’s Anvil, my book on music. It’s about a thought experiment that first occurred to me while in graduate school in the mid-1970s. Consider the often astounding and sometimes absurd things that trainers can get animals to do, things they don’t do naturally. Those acts are, in some sense, inherent in their neuro-muscular endowment, but not evoked by their natural habitat. But place them in an environment ruled by humans who take pleasure in watching dancing horses, and . . . Except that I’m not talking about horses.
It seems to me that what is so very remarkable about the evolution of our own species is that the behavioral differences between us and our nearest biological relatives are disproportionate to the physical and physiological differences. The physical and physiological differences are relatively small, but the behavioral differences are large.

In thinking about this problem I have found it useful to think about how at least some chimpanzees came to acquire a modicum of language. The early attempts to teach chimps to speak all ended in failure. In the most intense of these efforts, Keith and Cathy Hayes raised a baby chimp in their household from 1947 to 1954. But even that close and sustained interaction with Vicki, the young chimp in question, was not sufficient. Then in the late 1960s Allen and Beatrice Gardner began training a chimp, Washoe, in Ameslan, a sign language used among the deaf. This effort was far more successful. Within three years Washoe had a vocabulary of 85 Ameslan signs and she sometimes created signs of her own.
 
The results startled the scientific community and precipitated both more research along similar lines—as well as work where chimps communicated by pressing iconically identified buttons on a computerized panel—and considerable controversy over whether or not ape language was REAL language. That controversy is of little direct interest to me, though I certainly favor the view that this interesting behavior is not really language. What is interesting is the fact that these various chimps managed even the modest language that they did.

The string of earlier failures had led to a cessation of attempts. It seemed impossible to teach language to apes. It would seem that they just didn’t have the capacity. Upon reflection, however, the research community came to suspect that the problem might have more to do with vocal control than with central cognitive capacity. And so the Gardners acted on that supposition and succeeded where others had failed. It turns out that whatever chimpanzee cognitive capacity was, it was capable of surprising things.

Note that nothing had changed about the chimpanzees. Those that learned some Ameslan signs, and those that learned to press buttons on a panel, were of the same species as those that had earlier failed to learn to speak. What had changed was the environment. The (researchers in the) environment no longer asked for vocalizations; the environment asked for gestures, or button presses. These the chimps could provide, thereby allowing them to communicate with the (researchers in the) environment in a new way.

It seemed to me that this provided a way to attack the problem of language origins from a slightly different angle. So I imagined that a long time ago groups of very clever apes – more so than any extant species – were living on the African savannas. One day some flying saucers appeared in the sky and landed. The extra-terrestrials who emerged were extraordinarily adept at interacting with those apes and were entirely benevolent in their actions. These creatures taught the apes how to sing and dance and talk and tell stories, and so forth. Then, after thirty years or so, the ETs left without a trace. The apes had absorbed the ETs’ lessons so well that they were able to pass them on to their progeny generation after generation. Thus human culture and history were born.

Now, unless you actually believe in UFOs, and in the benevolence of their crews, this little fantasy does not seem very promising, for it is a fantasy about things that certainly never happened. Further, even if this had happened, it does seem to remove the mystery from language’s origins. Instead of something from nothing we have language handed to us on a platter. We learned it from some other folks, perhaps they were little short fellows with green skin, or perhaps they were the modern style aliens with pale complexions, catlike pupils in almond eyes and elongated heads.

But, and here is where we get to the heart of the matter, what would have to have been true in order for this to have worked? Just as the chimps before Ameslan were genetically the same as those after, so the clever apes before alien instruction were the same as the proto-humans after. The species has not changed, the genome is the same – at least for the initial generation. The capacity for language would have to have been inherent in the brains of those clever apes. All the aliens did was activate that capacity. Once that happened the newly emergent proto-humans were able to sustain and further develop language on their own. Thus the critical event is something that precipitates a reconfiguration of existing capabilities.

However, language origins is not our problem. We are searching for the origins of music. So, instead of alien instruction in Hebrew or Sanskrit we can imagine alien instruction in samba or polka. The basic configuration and dynamics of the story remain the same. However, to make it real we have to get rid of those aliens and their instruction. Instead of the aliens we have only our group of clever apes. They are going to have to instruct one another. What we are looking for is a way to get a gestalt switch in group dynamics that supports new modes of neural dynamics in the brains of individuals who are interacting with one another in a group.

Let us call this the Gestalt Origins Hypothesis:
Gestalt Origins: The precursor to music arose when groups of hominids interacted in a way that triggered a new configuration of operation in their existing nervous system.
Notice that I talk of a precursor to music. I don’t think we can get from ape to music in a single bound. We need at least one precursor, something that is rhythmic, like music, but not yet fully formed. In order to get even that far, so my argument goes, our proto-humans need better control over their vocal cords than apes have, they need more rhythmic sophistication, and greater mimetic capacity.

Notice that this story says nothing about the adaptive value of music. I do intend to get around to that toward the end of the chapter, but that’s not my primary concern. My primary concern is getting our ancestors to the point where a gestalt switch can happen that will bring about a precursor to music, something we can call musicking. In order for that to happen we need to solve an adaptive problem or two. But those adaptations are not about music; they are about its precursors. Once music-making is going along smoothly we need another gestalt switch to differentiate it into language and music proper.

Tuesday, October 27, 2020

Covid-19 virus has mutated during the course of the pandemic

Further tweets in the stream:

If hamsters are shedding more virus in their upper respiratory tracts, then they are more likely to be contagious by producing infectious droplets and aerosols with higher concentrations of infectious virus. This is the type of data we need to get to understand this question, BUT

Transmission itself wasn't tested. Ideally, they would have set up two hamster cages in the same room or in cages with shared air to see if G614 was transmitted more efficiently or to more animals than D614 virus.

Since transmission studies weren't part of this paper, ultimately whether or not D614G increases transmissibility remains unknown. No doubt those studies are in progress. Ideally these will be performed in multiple transmission models to confirm the findings.

The good news: at least in hamsters, this mutation didn't appear to confer any observable differences in pathogenicity. The virus may be mutating—because that's what RNA viruses do—but it's not becoming more virulent.

The second good news: serum from hamsters infected with D614 virus neutralized G614 virus in culture, suggesting that vaccines (all designed against D614 spike) will work against either variant.

My take home: while the jury's still out on whether this variant is more transmissible in the real world, it still can be neutralized by antibodies against all the vaccine candidates. It's not mutating into something more dangerous or pathogenic. It's behaving like a normal virus

Ultimately it doesn't matter whether D614G is more transmissible or not: the message to everyone is avoid getting EITHER variant of SARS-CoV-2.

Stay the course and stay safe, so that you can be protected against all known variants when a safe, effective vaccine is ready.

"Wall of Fame" [now gone]

Adam Roberts reviews Kim Stanley Robinson's "Ministry of the Future" – Reimagining how we think about the future

He’s ambivalent:

It’s a righteous novel, and I’m a KSR fan of longstanding, so I expected to like Ministry of the Future. And I did, if only up to a point. Beyond that point I ... didn't, really. Nonetheless, I’d suggest, or I would if it didn’t just look perversely contradictory, that the very reason I didn’t much like The Ministry of the Future is actually an index of its success: its ambition, its throughline and above all its—well, it’s ministry.

He goes on, “fat book…thin on plot,” which is hardly surprising. The plot is strung between two poles, one located in the Ministry based in Zurich and the other in a clinic in Northern India. Lots of leaden dialog, lots of information dumps, plenty of future tech, some gee-whiz, some not so much. This is all standard book-review stuff and Roberts handles it very well. Where he shines, though, is in his final two paragraphs:

The negative way of spinning all this would be to say that this novel can be a dry read, sometimes positively drought-dry. There are stretches here which are, baldly stated, an effort for the reader to push through. But the positive way to spin it would be to see it as a novel not just about climate change, but about the kind of stories we tell ourselves about disasters like climate change. Those stories are, clearly, not helping. Take ‘eucatastrophe’, Tolkien’s term for a thrilling story in which disaster impends, becomes more and more inevitable and then is averted at the very last moment. It’s a real workhorse of storytelling nowadays, the eucatastrophe, especially in cinema. There is a threat to the whole world! Let’s imagine that as a singular, external thing: an asteroid on collision course, a huge invading alien spaceship. Then let’s draw out the approaching disaster and make it seem like it could never be overcome. Finally, bam: rabbit from hat, the hero saves the day at the last minute.

The Ministry for the Future is, in effect, saying: that’s a bad story—not bad in entertainment terms but bad in verisimilitude terms. It is saying: we are actually, right now, indeed facing a threat to the whole world, but it’s not a single thing it’s a complex and deeply-embedded function of human interrelation and social praxis. It’s not exterior to us, it is us. And it won’t be solved by a single heroic flourish in the nick of time. It will be solved by a congeries of difficult, drawn-out, collective labour, much of which is so inimical to ‘popular narrative’ that we dismiss it as boring. It’s not boring, though: it’s literally life-and-death. And so one part of our large, human task will be: to reconfigure the kinds of stories we are telling ourselves about disaster and how to avert it.

Notice that word, “verisimilitude”, used about a work of fiction pitched in the future, albeit the near-enough future, a future at least some of the readers of the book will live into.

* * * * *

Check out KSR’s recent interview in The Jacobin:

I’ve steeped myself in the utopian tradition. It’s not a big body of literature, it’s easy to read the best hits of the utopian tradition. You could make a list, I mean roughly twenty or twenty-five books would be the highlights of the entire four hundred years, which is a little shocking. And maybe there’s more out there that hasn’t stayed in the canon. But if you talk about the utopian canon, it’s quite small — it’s interesting, it has its habits, its problems, its gaps.

Famously, from Thomas More (Utopia) on, there’s been a gap in the history — the utopia is separated by space or time, by a disjunction. They call it the Great Trench. In Utopia, they dug a great trench across the peninsula so that their peninsula became an island. And the Great Trench is endemic in utopian literature. There’s almost always a break that allows the utopian society to be implemented and to run successfully. I’ve never liked that because one connotation of the word “utopian” is unreality, in the sense that it’s “never going to happen.”

So we have to fill in this trench. When Jameson said it’s easier to imagine the end of the world than the end of capitalism, I think what he was talking about is that missing bridge from here to there. It’s hard to imagine a positive history, but it’s not impossible. And now, yes, it’s easy to imagine the end of the world because we are at the start of a mass extinction event. But he’s talking about hegemony, and a kind of Marxist reading of history, and the kind of Gramscian notion that everybody’s in the mindset that capitalism is reality itself and that there can never be any other way — so it’s hard to imagine the end of capitalism. But I would just flip it and say, it’s hard to imagine how we get to a better system. Imagining the better system isn’t that hard; you just make up some rules about how things should work. You could even say socialism is that kind of utopian imaginary. Let’s just do it this way, a kind of society of mutual aid. And I would agree with anyone who says, “Well, that’s a good system.”

The interesting thing, and also the new stories to tell if you’re a science fiction novelist, if you’re any kind of novelist — almost every story’s been told a few times — but the story of getting to a new and better social system, that’s almost an empty niche in our mental ecology. So I’ve been throwing myself into that attempt. It’s hard, but it’s interesting.

Friday, October 23, 2020

Friday Fotos: Foreign places in the key of burnt orange

Is the fact that ideas are nonrival the key to economic growth in the 21st century? Or: What’s an idea? [the peculiarities of economic models]

I’ve been chewing on one particular paragraph, the final one, of Bloom et al., “Are Ideas Getting Harder to Find?”[1] Why? Because it bears on just what (these particular) economists mean by “idea”. Early in the paper they observed that “ideas are hard to measure” (p. 1108) and that appropriate units of measure are far from obvious. They went on to note that “in some ways a more accurate title for this paper would be ‘Is Exponential Growth Getting Harder to Achieve?’” Which raises the question: why didn’t they choose that more accurate title? Custom, perhaps? I don’t know.

How do you measure ideas?

I understand the problem. I’m not at all sure that “idea” can even be a properly technical term; perhaps it’s better regarded as an informal common-sense term with only limited use in technical work. In any event, when it comes to actually measuring ideas, the authors use proxies in two of their three case studies. In their study of semiconductor manufacture they use research effort as measured by wages as a proxy for ideas (p. 1129), and in their study of seed lines they use R & D expenditure (pp. 1120-1121). Their measure was more direct in the case of pharmaceutical development; they counted articles in the PubMed database as identified by appropriate key words (pp. 1125-1126).

I have no problem with that. But it does mean they tend to treat ideas as atomic entities with no properties beyond the fact that they can be counted, if only indirectly, and that they can be shared. And that brings us to the final paragraph of the article.

Key insight: Ideas are nonrival

Economists distinguish between goods that are rival and goods that are nonrival. When something is a rival good, only one person or entity can use it. If Amalgamated Mining owns a particular deposit of iron ore then, for example, Universal Minerals cannot mine that deposit. Ideas, in contrast, are nonrival. The fact that Jim Manley knows Newton’s laws of motion doesn’t preclude anyone else from understanding and using them.
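The rival/nonrival distinction can be put in a few lines of code. This is a toy sketch of my own, with made-up names and numbers (the 60 tons, the list of users), not anything drawn from Bloom et al.:

```python
# Toy illustration: a rival good is used up by one user;
# a nonrival idea can be used by everyone at once.

ore_deposit = 100  # tons of iron ore; a rival good

def mine(tons_available, tons_taken):
    """Mining by one firm leaves less ore for every other firm."""
    return tons_available - tons_taken

ore_deposit = mine(ore_deposit, 60)   # Amalgamated Mining takes 60 tons
remaining_for_others = ore_deposit    # only 40 tons left for Universal Minerals

newtons_second_law = "F = m * a"      # a nonrival idea

# Any number of users can hold and apply the idea; nobody's copy is diminished.
users = ["Jim Manley", "an engineer in Lagos", "a student in Osaka"]
applications = {user: newtons_second_law for user in users}

print(remaining_for_others)             # 40
print(len(set(applications.values())))  # 1 -- everyone uses the very same idea
```

The point of the toy: the mine’s arithmetic is zero-sum, while the dictionary hands every user the same idea at no cost to anyone else.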

With that in mind, consider the highlighted passage from the final paragraph (p. 1139) of Bloom et al.:

That one particular aspect of endogenous growth theory should be reconsidered does not diminish the contribution of that literature. Quite the contrary. The only reason models with declining research productivity can sustain exponential growth in living standards is because of the key insight from that literature: ideas are nonrival. For example, if research productivity were constant, sustained growth would actually not require that ideas be nonrival; Akcigit, Celik, and Greenwood (2016) shows that rivalrous ideas can generate sustained exponential growth in this case. Our paper therefore suggests that a fundamental contribution of endogenous growth theory is not that research productivity is constant or that subsidies to research can necessarily raise growth. Rather it is that ideas are different from all other goods in that they can be used simultaneously by any number of people. Exponential growth in research leads to exponential growth in At. And because of nonrivalry, this leads to exponential growth in per capita income.

The first highlighted passage seems to suggest that the idea that ideas are nonrival is due to the tradition of research on endogenous growth theory. That doesn’t make any sense, since the nonrival nature of ideas follows from the definition of “nonrival,” which is independent of that research tradition.

What’s going on? Two paragraphs earlier they had noted that: 1) endogenous growth theory assumes constant exponential growth given constant research productivity, and 2) their article reports a variety of work showing that, in fact, over the past few decades it has required more and more research to sustain exponential growth. This final paragraph is an effort to reconcile theory with evidence. The rest of the paragraph after the highlighted section does that.

How does it do it? Not very well, it seems to me, not very well. They cite a paper showing that it is possible to get sustained exponential growth from constant productivity even if ideas are rival. However, it turns out that productivity is not constant (the burden of the article) and, wouldn’t you know, ideas aren’t rival either. Surely that must be why exponential growth remains possible.
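To see how the pieces of that reconciliation fit together, here is a stylized simulation. It is entirely my own sketch, with invented parameter values (theta, beta, the 4 percent effort growth), not the authors’ calibration. It uses an idea production function in which the growth rate of the idea stock A is g = theta * S / A**beta: research productivity per unit of effort S falls as A accumulates, so constant effort yields decaying growth while exponentially growing effort can prop growth up.

```python
# Stylized sketch of declining research productivity (invented parameters):
# the growth rate of the idea stock A is g = theta * S / A**beta, so each
# unit of research effort S buys less growth as A accumulates.

def simulate(effort_growth, periods=50, theta=0.05, beta=0.5):
    """Return the growth rate of the idea stock A in the final period."""
    A, S = 1.0, 1.0
    for _ in range(periods):
        g = theta * S / A**beta   # research productivity declines in A
        A *= 1.0 + g              # idea stock grows
        S *= 1.0 + effort_growth  # research effort grows at a fixed rate
    return g

flat = simulate(effort_growth=0.0)     # constant effort: growth decays
rising = simulate(effort_growth=0.04)  # growing effort: growth is propped up

print(f"final growth, constant effort: {flat:.4f}")
print(f"final growth, rising effort:   {rising:.4f}")
```

Nonrivalry enters off-stage: whatever growth in A the research sector manages is available to every producer simultaneously, which is how it translates into growth in per capita income rather than in one firm’s output.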

Really? I understand that that works within the bounds of endogenous growth theory. But it seems awfully flimsy to me. It amounts to little more than saying exponential growth remains possible because ideas are ideas. And that’s not very helpful. Ideas were always nonrival; it’s not as though that property miraculously emerged in time to allow endogenous growth theory to save the appearances – a phrase, incidentally, that dates back to Plato.

It might be more useful to figure out what it is about the current run of ideas that makes them less productive. That’s what I’ve done in my recent working paper, Stagnation and Beyond: Economic growth and the cost of knowledge in a complex world [2]. But there I was concerned with cognitive architecture and the relationship between ideas and the world. I didn’t treat ideas merely as countable atomic units. Whether my argument is going in the right direction, that’s another matter. But it doesn’t depend on a truism.

References

[1] Nicholas Bloom, Charles I. Jones, John Van Reenen, and Michael Webb, Are Ideas Getting Harder to Find? American Economic Review 2020, 110(4), https://doi.org/10.1257/aer.20180338.

[2] William Benzon, Stagnation and Beyond: Economic growth and the cost of knowledge in a complex world, Version 2, Working Paper, August 2, 2019, 62 pp., https://www.academia.edu/39927897/Stagnation_and_Beyond_Economic_growth_and_the_cost_of_knowledge_in_a_complex_world.

Thursday, October 22, 2020

Mid-day sun on an overcast day, with trees

Environmental change forced humans to become more adaptable 320,000 years ago

Smithsonian Magazine, "Turbulent era sparked leap in human behavior, adaptability 320,000 years ago", October 21, 2020:

For hundreds of thousands of years, early humans in the East African Rift Valley could expect certain things of their environment. Freshwater lakes in the region ensured a reliable source of water, and large grazing herbivores roamed the grasslands. Then, around 400,000 years ago, things changed. The environment became less predictable, and human ancestors faced new sources of instability and uncertainty that challenged their previous long-standing way of life.

The first analysis of a new sedimentary drill core representing 1 million years of environmental history in the East African Rift Valley shows that at the same time early humans were abandoning old tools in favor of more sophisticated technology and broadening their trade networks, their landscape was experiencing frequent fluctuations in vegetation and water supply that made resources less reliably available. The findings suggest that instability in their surrounding climate, land and ecosystem was a key driver in the development of new traits and behaviors underpinning human adaptability.

H/t Tyler Cowen.

The underlying research, Richard Potts et al. Increased ecological resource variability during a critical transition in hominin evolution, Science Advances 21 Oct 2020: Vol. 6, no. 43, eabc8975, DOI: 10.1126/sciadv.abc8975

Abstract: Although climate change is considered to have been a large-scale driver of African human evolution, landscape-scale shifts in ecological resources that may have shaped novel hominin adaptations are rarely investigated. We use well-dated, high-resolution, drill-core datasets to understand ecological dynamics associated with a major adaptive transition in the archeological record ~24 km from the coring site. Outcrops preserve evidence of the replacement of Acheulean by Middle Stone Age (MSA) technological, cognitive, and social innovations between 500 and 300 thousand years (ka) ago, contemporaneous with large-scale taxonomic and adaptive turnover in mammal herbivores. Beginning ~400 ka ago, tectonic, hydrological, and ecological changes combined to disrupt a relatively stable resource base, prompting fluctuations of increasing magnitude in freshwater availability, grassland communities, and woody plant cover. Interaction of these factors offers a resource-oriented hypothesis for the evolutionary success of MSA adaptations, which likely contributed to the ecological flexibility typical of Homo sapiens foragers.

Wednesday, October 21, 2020

On the beach

On why comparisons between brains and computers are problematic at best [the brain is analog, not digital]

Matthew Hutson, How Much Can Your Brain Actually Process? Don’t Ask. Slate, March 29, 2016.

This is a useful summary comparison between digital computers and the brain: "Reported estimates of how much data the brain holds in long-term memory range from 100 megabytes to 10 exabytes—in terms of Thriller on MP3, that’s either one album or 100 billion albums. This range alone should give you an immediate sense of how seriously to take the estimates." Here's the good stuff:

The fundamental difference between analog and digital information is that analog information is continuous and digital information is made of discrete chunks. Digital computers work by manipulating bits, ones, and zeroes. And operations on these bits occur in discrete steps. With each step, transistors representing bits switch on or off. Jiggle a particular atom on a transistor this way or that, and it will have no effect on the computation, because with each step the transistor’s status is rounded up or down to a one or a zero. Any drift is swiftly corrected.

On a neuron, however, jiggle an atom this way or that, and the strength of a synapse might change. People like to describe the signals between neurons as digital, because a neuron either fires or it doesn’t, sending a one or a zero to its neighbors in the form of a sharp electrical spike or lack of one. But there may be meaningful variation in the size of these spikes and in the possibility that nearby neurons will spike in response. The particular arrangement of the chemical messengers in a synapse, or the exact positioning of the two neurons, or the precise timing between two spikes—these all can have an impact on how one neuron reacts to another and whether a message is passed along.

Plus, synaptic strength is not all that matters in brain function. There are myriad other factors and processes, both outside neurons and inside neurons: network structure, the behavior of support cells, cell shape, protein synthesis, ion channeling, vesicle formation. How do you calculate how many bits are communicated in one molecule’s bumping against another? How many computational “operations” is that? “The complexity of the brain is much higher at the biochemical level” than models of neural networks would have you believe, according to Terrence Sejnowski, the head of the Salk Institute’s Computational Neurobiology Laboratory. “The problem is that we don’t know enough about the brain to interpret the relevant measurement or metric at that level.”

There's more at the link.

Fighting the Big Tech ecosystem

Kara Swisher, The Justice Dept.’s Lawsuit Against Google: Too Little, Too Late, Oct. 20, 2020.

There’s no such thing as a single entity called Big Tech, and just saying it exists will not cut it. The challenges plaguing the tech industry are so complex that it is impossible to take action against one without understanding the entire ecosystem, which hinges on many monster companies, with many big problems, each of which requires a different remedy.

Certainly reforming Section 230 could help. But other tools may be needed, like significant fines, as well as new state and federal laws, enforcement of existing regulations and increased funding for agencies like the Federal Trade Commission, along with more aggressive consumer action and media scrutiny.

Apple’s control over the App Store and its developers? Perhaps some fairer rules over how to operate when it comes to fees and approvals, since separating the apps from the phones is a near impossible task.

Amazon’s problem with owning a critical marketplace platform where it sells its own goods alongside third-party retailers? Simply put, should Amazon be allowed to sell its own batteries if it also controls the store for a lot of batteries? It sounds like separating Amazon retail products from the store itself might be a possible solution, as well as establishing much less porous walls between the various Amazon businesses.

Facebook and its damning “land grab” and its “neutralize” emails (referring to squelching rivals), as well as its worrisome domination of the online discourse and news distribution across much of the world? This one is harder, but some breakup of its units, say a cleaving of Instagram and WhatsApp, might be a step in the right direction, along with figuring out a way to make its controversial editorial decisions more transparent and systemic rather than the more random Whatever Mark Zuckerberg Says This Week they have become.

And Google, of course, which is now for the first time ever in a real fight with the United States government? It was in early 2013 that the F.T.C. commissioners decided unanimously to scuttle the agency’s investigation of Google after getting the company to make some voluntary changes to the way it conducted its business. This despite a harsher determination by its own staff in a 160-page report, which came to light in 2015, that Google had done a lot of the things that the Justice Department is now alleging, including that its search and advertising dominance violated federal antitrust laws.

Tuesday, October 20, 2020

How fastText and BERT encode linguistic features

Abstract for the linked article:

Most modern NLP systems make use of pretrained contextual representations that attain astonishingly high performance on a variety of tasks. Such high performance should not be possible unless some form of linguistic structure inheres in these representations, and a wealth of research has sprung up on probing for it. In this paper, we draw a distinction between intrinsic probing, which examines how linguistic information is structured within a representation, and the extrinsic probing popular in prior work, which only argues for the presence of such information by showing that it can be successfully extracted. To enable intrinsic probing, we propose a novel framework based on a decomposable multivariate Gaussian probe that allows us to determine whether the linguistic information in word embeddings is dispersed or focal. We then probe fastText and BERT for various morphosyntactic attributes across 36 languages. We find that most attributes are reliably encoded by only a few neurons, with fastText concentrating its linguistic structure more than BERT.

From Hoboken's Pier 13 to the Chrysler Building

Are Inventors or Firms the Engines of Innovation?

Article by Ajay Bhaskarabhatla, Luis Cabral, Deepak Hegde, and Thomas Peeters in Management Science, published online Oct. 7, 2020.

Abstract: In this study, we empirically assess the contributions of inventors and firms for innovation using a 37-year panel of U.S. patenting activity. We estimate that inventors’ human capital is 5–10 times more important than firm capabilities for explaining the variance in inventor output. We then examine matching between inventors and firms and find highly talented inventors are attracted to firms that (i) have weak firm-specific invention capabilities and (ii) employ other talented inventors. A theoretical model that incorporates worker preferences for inventive output rationalizes our empirical findings of negative assortative matching between inventors and firms and positive assortative matching among inventors.

H/t Tyler Cowen

Monday, October 19, 2020

The sad truth about social media

The Boys: Superheroes for the Trump Era? [Media Notes 50]

Just watched the Amazon series, The Boys, which came out in 2019 and is based on a comic book from 2006-2008, about which I know nothing. Here’s the basic premise (Wikipedia):

The Boys is set in a universe where superpowered individuals are recognized as heroes by the general public and work for the powerful corporation Vought International, which markets and monetizes them. Outside of their heroic personas, most are arrogant and corrupt. The series primarily focuses on two groups: the Seven, Vought's premier superhero team, and the eponymous Boys, vigilantes looking to bring down Vought and its corrupt superheroes.

The Boys are led by Billy Butcher, who despises all superpowered people (whom he calls “Supes”) and the Seven are led by the narcissistic and violent Homelander. At the start of the series, the Boys are joined by Hughie Campbell after a member of the Seven accidentally kills his girlfriend, while the Seven are joined by Annie January, a young and hopeful heroine forced to face the truth about those she admires. Other members of the Seven include the disillusioned Queen Maeve, drug-addicted A-Train, insecure the Deep, the mysterious Black Noir, and narcissistic Homelander. The Boys are rounded out by tactical planner Mother's Milk, weapons specialist Frenchie, and superpowered test subject Kimiko. Overseeing the Seven is Vought executive Madelyn Stillwell, who is later succeeded by publicist Ashley Barrett.

That’s the general idea. We get quite a bit about the personal lives of, say, ten of the major characters, including, in a number of cases, tangled relationships with parents or parent surrogates.

But I wish I had a more extensive and fine-grained knowledge of superheroes against which to read this series. How original is the corporate premise? I know, from a movie or three, that the Avengers have some kind of corporate identity beyond merely teaming up time and again. And, while they have their personal idiosyncrasies, they aren’t vain, self-serving, two-faced and corrupt in the way the Seven are. There’s nothing new about evil mega-corporations, of course, but the linkage between Vought and the Seven does seem new.

During the first episode or two I saw the possibility of turning such a series into a critique of the business of superhero films and comics, but nothing like that has appeared, nor did I expect it. Still, if you just cranked the right knob up to eleven, that’s where it would take the series.

The overall plot seems driven by three arcs: 1) Butcher’s desire for revenge, 2) some kind of interaction between Hughie and Annie, and 3) a drive toward, well, world domination by (some of) the Supes. It’s this last drive, coupled with the Supes’ need for the adulation of the crowd, that makes the series seem particularly Trump-like. Since the series was made during Trump’s presidency that seems possible, whether through deliberate intent by the creators or simply through osmotic awareness of the current scene. How much of that is there in the comic books? All of it, or only some?

Of course Trump did not come out of nowhere, even if his 2016 victory was unexpected. The cultural forces were there, and those forces have certainly been alive in popular culture.

Above all I’m thinking about the role of superheroes in people’s imaginative lives. There is a need to make sense of the world to the extent – in time, space, and causal nexus – that we are aware of it. Superheroes, even or perhaps especially corrupt ones, are mediating figures in this process. In the first place they are concrete individuals, not abstract actors like corporations and governments. But these individuals are powerful enough that they can, as individuals, motivate the actions of corporations and governments. Moreover, as individuals, they have their merely personal stories, which we can identify with. In this series the story of Homelander in particular is filled with personal details, some of the sort that suggest or even require a psychoanalytic reading.

Finally, I note that the series has scenes of extravagant violence.

BTW, what is it with Fresca?

Graffiti, bold is gold

Investment and the conditions for exponential growth [#Progress Studies | Tech Evol]

At the end of their article, “Are Ideas Getting Harder to Find?” (2020), Nicholas Bloom et al. begin to draw some conclusions, thus (p. 1134):

The evidence presented in this paper concerns the extent to which a constant level of research effort can generate constant exponential growth, either in the economy as a whole or within relatively narrow categories, such as a firm or a seed type or a health condition. We provide consistent evidence that the historical answer to this question is “no”: as summarized in Table 7, research productivity is declining at a substantial rate in virtually every place we look.

A bit later (p. 1138):

This analysis has implications for the growth models that economists use in our own research, like those cited in the introduction. The standard approach in recent years employs models that assume constant research productivity, in part because it is convenient and in part because the earlier literature has been interpreted as being inconclusive on the extent to which this is problematic. We believe the empirical work we have presented speaks clearly against this assumption. A first-order fact of growth empirics is that research productivity is falling sharply.

Future work in the growth literature should determine how best to understand this fact.

In my 2019 working paper commenting on this article, Stagnation and Beyond: Economic growth and the cost of knowledge in a complex world, I argue that what they see as a decline in research productivity reflects the (unavoidably) increasing cost of obtaining knowledge about the world, and I briefly outline the account of cultural ranks that David Hays and I have developed. It is my impression that the growth literature centers on the growth accompanying the Industrial Revolution, which, in terms of our account, is a reflection of Rank 3 modes of thought. Perhaps the usual growth models, where “a constant level of research effort can generate constant exponential growth,” were valid under the conditions obtaining during the Industrial Revolution but fail because those conditions no longer obtain. We are moving into a different world, which may well require different models.

That is to say, it is inappropriate to search for a growth model that is valid everywhere and always. There is no such thing. Rather different material and cultural circumstances require different models. Note that I am not proposing potentially hundreds or even tens of models. I am proposing only a handful, corresponding to the handful of cultural ranks we have proposed.

And that brings me to another excerpt from Hays’s The Evolution of Technology Through Four Cognitive Ranks (1993). This excerpt is the opening section of Chapter 6, “Investment; with a life-cycle cost analysis of one individual human being.” Hays uses the idea of the factory to characterize rank 3 economic production, though of course he realizes that agriculture continues and that there are rank 3 innovations other than the factory system. I follow this excerpt with a brief comment about the specific case studies Bloom et al. develop in their article.

As I have mentioned before in this series of excerpts, Hays wrote the book for an online course in the 1990s and distributed the book to students on an MS-DOS disk – chosen by the school, I assume, as the least common denominator among personal computer operating systems. Thus it was written as a text file in a simple hypertext system. It was never published in hardcopy. I reproduce it below more or less as Hays created it, in a mono-spaced font.

Investment

Capital is a concept of rank 3, but investment in skill and land were necessary in ranks 1 and 2. Sapients who believe in the future will attempt to prevent the net worth of Earth from declining.

6.1. INVESTMENT

6.1.1. Stone Tools and Hides Require Skill
6.1.2. The Land
6.1.3. The Factory System

The main investment in rank 1 is the acquisition of lore. For rank 2, the improvement of land for agriculture requires large investment. In rank 3, capital is assembled to invest in the ensembles of machines that produce large quantities of goods – and to invest also in transportation, communication, urban systems, and education.

Day by day a person works and uses the fruits of his (or her) labor for subsistence (food, clothing, shelter), for ritual, for pleasure, and so on. Whatever the rank, if each day's consumption equals each day's income, there is no investment.

Investment is the conversion of a portion of income into capital. The purpose of investment is to increase expected future income. Consumption yields instant gratification; the investor has to postpone gratification. The future increment must be larger than the present decrement to justify investment. Maybe I will put aside a dollar a day this year, but I want to get back two dollars a day next year!

The theory of rank says that the nature of capital changes from rank to rank, according to the technology. Figure 6.1 contains a sort of chronology of investments.

6.1.1. Stone Tools and Hides Require Skill

Rank 1 lives by hunting and gathering. In fact, most people of rank 1 seem to live, or to have lived, mostly by gathering with occasional feasts when the hunters get lucky. Emphasis on hunting appears to be sexist; women do the bulk of gathering. What do rank 1 people have that they do not consume on the day they get it? Some weapon-tools, a little clothing made from hides, and very simple shelter. My impression is that they are willing to leave these things behind and make new ones. An igloo does not pin down an Eskimo as your home holds you.

So rank 1 has almost no material capital; but it does have skill. The best over-simplification is probably that the rank 1 person's head is as full as yours or mine. Our knowledge is more specialized, and I assert that our knowledge is more abstract. One of us can teach video, one can repair automobiles, and so on; I can't teach video, and I suppose that only a few of us can. The rank 1 person is also a specialist, in a way that is not so obvious: The rank 1 person's concrete skills are specific to an environment, a climate, a range of vegetation and animal life, an area on a map.

But these skills are considerable, and take time to acquire. Little or none of them are acquired in situations comparable to schooling. Mostly the children watch adults, help, imitate, and play. No doubt they get some verbal guidance, orally since there is no writing. But not a great deal. The investment necessary to acquire skills is made in childhood. Children work less than adults, or at least less productively. A rank 1 society does not put its children to doing routine simple tasks and thus preclude their acquiring adult skills, as Britain did during the worst part of the Industrial Revolution. But at puberty the children must be ready to play adult roles. The rank 1 adult's skills are as fixed as our speech patterns. Few of us learn a second language after puberty with the exact speech patterns of a native speaker, and no one in rank 1 acquires a new skill after puberty.

Japanese temple

Sunday, October 18, 2020

Long term storage in the human brain [ANNs pale in comparison]

Some cross street in Hoboken

Cultural Evolution 8: Language Games 1, Speech

I'm bumping this post, from 2010, to the top of the queue for two reasons: 1) the section "Language Games and Game Theory" is germane to my recent post, Why do we need a genotype-phenotype distinction for cultural evolution? That post proposed as an answer: minds are built from the inside. From that it follows that we can't read one another's minds, which is my point of departure in this post. 2) The following section, "What is a language and what are the memes?," is where I first worked out my current approach to the genetic component of culture, which I have since come to call coordinators. The rest of the posts in this particular series are gathered under the tag CE workshop. Note: You might want to read the comments for this post.

* * * * *

The key to the treasure is the treasure.
– John Barth

But I’m not talking of language games in Wittgenstein’s sense, though the Wittgenstein of the Tractatus had a considerable influence on me as an undergraduate. No, I’m thinking of game theory, not something I’ve studied, though I did have an undergraduate course on decision theory taught by R. B. Braithwaite. But I’m getting ahead of the game.

As the title says, this post is about language. There’s been a fair amount of work done on language from an evolutionary point of view, which is not surprising, as historical linguistics has well-developed treatments of language lineages and taxonomy, the “stuff” of large-scale evolutionary investigation. While this work is directly relevant to a consideration of cultural evolution, I will not be reviewing or discussing it, for it doesn’t deal with the theoretical issues that most concern me in these posts, namely a conceptualization of the genetic and phenotypic entities of culture. This literature is empirically oriented in a way that doesn’t depend on such matters.

The Arbitrariness of the Sign
 
In particular, I want to deal with the arbitrariness of the sign. Given my approach to memes, that arbitrariness would appear to eliminate the possibility that word meanings could have memetic status. For, as you may recall, I’ve defined memes to be perceptual properties – albeit sometimes very complex and abstract ones – of physical things and events. Memes can be defined over speech sounds, language gestures, or printed words, but not over the meanings of words. Note that by “meaning” I mean the mental or neural event that is the meaning of the word, what Saussure called the signified. I don’t mean the referent of the word, which, in many cases, but by no means all, would have perceptible physical properties. I mean the meaning, the mental event. In this conception, it would seem that word meaning cannot be memetic.

That seems right to me. Language is different from music and drawing and painting and sculpture and dance; it plays a different role in human society and culture. On that basis one would expect it to come out fundamentally different on a memetic analysis.

This, of course, leaves us with a problem. If word meaning is not memetic, then how is it that we can use language to communicate, and very effectively over a wide range of cases? Not only language, of course, but everything that depends on language. Literature obviously – which I’ll take up in the next post – but much else as well.

Speech as a Means of Communication
 
Willard van Orman Quine has given us a classic thought experiment that points up the problem of word meaning. He broaches the issue by considering the problem of radical translation, “translation of the language of a hitherto untouched people” (Quine 1960, 28). He asks us to consider a “linguist who, unaided by an interpreter, is out to penetrate and translate a language hitherto unknown. All the objective data he has to go on are the forces that he sees impinging on the native’s surfaces and the observable behavior, focal and otherwise, of the native.” That is to say, he has no direct access to what is going on inside the native’s head, but utterances are available to him. Quine then asks us to imagine that “a rabbit scurries by, the native says ‘Gavagai’, and the linguist notes down the sentence ‘Rabbit’ (or ‘Lo, a rabbit’) as tentative translation, subject to testing in further cases” (p. 29). And thus begins one of the best known intellectual romps in the philosophy of language.

Quine goes on to argue that, in thus proposing that initial translation, the linguist is making illegitimate assumptions. He begins his argument by noting that the native might, in fact, mean “white” or “animal,” and later on offers more exotic possibilities, the sort of things only a philosopher would think of. Quine also notes that whatever gestures and utterances the native offers as the linguist attempts to clarify and verify will be subject to the same problem. Quine’s argument is thorough and convincing.

When he did that work, however, he did not, of course, have access to a range of more recent work in cognitive anthropology and evolutionary psychology that indicated that our adapted minds have a preferred way of parsing the world, as do baboons. To be sure, this is “overwritten” and augmented in culture-specific ways, but those underlying perceptual and cognitive systems do not disappear. To consider a specific example, the work on folk taxonomy (Berlin 1992) suggests that there is a so-called basic level of designation, and that is at the level of “rabbit” and not “animal” (in fact, many languages don’t even have a word at that level of generality). So the linguist is reasonable in assuming “rabbit” is a more likely translation than “animal.” Other considerations are likely to rule out “white” or Quine’s other suggestions. I have no reason to believe that this cognitive architecture so constrains matters that there is only one possible referent for “Gavagai.” But I do think that it is likely to turn out that, all other things being equal, “rabbit” is in fact the best guess.
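The reasoning here can be put in simple Bayesian terms: if the linguist’s priors favor basic-level categories, then a few observed scenes quickly leave “rabbit” the best guess. The sketch below is my own toy construction, not Quine’s argument or Berlin’s data; the prior weights and scenes are invented for illustration.

```python
# Toy illustration (my own construction, not Quine's or Berlin's):
# a prior skewed toward basic-level terms plus a couple of observed
# scenes leaves "rabbit" the best translation for "Gavagai".

# Assumed prior: the adapted mind privileges basic-level categories
# ("rabbit") over superordinates ("animal") and properties ("white").
prior = {"rabbit": 0.6, "animal": 0.25, "white": 0.15}

# Likelihood of the native saying "Gavagai" in each scene, under each
# hypothesis. A white rabbit fits all three hypotheses; a brown
# rabbit rules out "white".
scenes = [
    {"rabbit": 1.0, "animal": 1.0, "white": 1.0},  # white rabbit scurries by
    {"rabbit": 1.0, "animal": 1.0, "white": 0.0},  # brown rabbit scurries by
]

posterior = dict(prior)
for scene in scenes:
    posterior = {h: posterior[h] * scene[h] for h in posterior}
    total = sum(posterior.values())
    posterior = {h: p / total for h, p in posterior.items()}

print(posterior)  # "rabbit" ends up most probable; "white" drops to zero
```

Nothing here settles the philosophical point; it just shows how a cognitive prior can make one translation overwhelmingly more likely than the others without uniquely determining it.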

This situation, of course, is rather different from that of ordinary speech between people who share a common language. In the common situation both parties would know the meaning of “Gavagai.” Yet, however effective it is, ordinary speech sometimes fails to secure understanding between people and, where such understanding is achieved, that achievement has required back-and-forth speech. The mutual understanding is achieved through a process of negotiation. As William Croft reiterates in chapter 4 of Explaining Language Change, we cannot get inside one another’s heads and so must negotiate meanings in conversation.

That is to say, communication through language is not a matter of sending information through a pipeline. It does not happen according to what Michael Reddy (1993) has called the conduit metaphor. Reddy’s article is based on 53 example sentences. Here are the first three (p. 166):
1. Try to get your thoughts across better
2. None of Mary’s feelings came through to me with any clarity
3. You still haven’t given me any idea of what you mean
Reddy’s argument is that many of our statements about communication seem to be based on the notion of sending something (the thought, idea, feeling) through a conduit, hence he calls it the conduit metaphor. He knows that communication doesn’t work that way, but that’s not his central issue. His central concern is to detail the way we use the conduit metaphor to structure our thinking about communication.

Reddy’s argument is reminiscent of a somewhat earlier argument by Paul de Man, “Form and Intent in the American New Criticism” (1983, first published in 1971). Consider this passage (p. 25):
“Intent” is seen, by analogy with a physical model, as a transfer of a psychic or mental content that exists in the mind of the poet to the mind of a reader, somewhat as one would pour wine from a jar into a glass. A certain content has to be transferred elsewhere, and the energy necessary to effect the transfer has to come from an outside source called intention.
De Man’s point was that, when we read a text, the intention (de Man uses the term in its somewhat rarified philosophical sense) that gives life to those signs on the page is our intention, not the author’s. And he is right.

De Man’s insight, and similar ones by Derrida, Barthes, Foucault and others, had an electrifying effect on literary critics in the United States, leading to a tremendously fertile period in academic literary criticism that, however, became increasingly sclerotic in the 1990s. But that story’s neither here nor there. My point is simply that these thinkers were attempting to deal with a real problem and, ultimately, they failed.

What, for example, could Derrida (1976, p. 158) have possibly meant by proclaiming “There is nothing outside of the text”? What he did not mean is that the world is nothing but a text and a text created by more or less arbitrary social conventions. Read sympathetically, and in context, the phrase seems to mean something to the effect that there is no way we can “step outside” language so as to examine, in full omniscient and transcendental objectivity, the relationship between language and the world. And that, it seems to me, is true. We’re always going to be immersed in “language,” whether natural or the various languages of science and mathematics.

How, then, do we fly free of the bottle? We play games.

Language Games and Game Theory
 
Where de Man argues that intent cannot be transmitted from one speaker to another like pouring wine from a jar, William Croft points out that linguistic communication is tricky “precisely because our thoughts cannot leave our heads” (2000, p. 111). Croft is a linguist who has undertaken to explain language change using an evolutionary approach. He defines a language to be “the population of utterances in a speech community” (p. 26), thus focusing our attention, not on some abstract language system, but on the concrete production of speech.

How does Croft deal with the fact that we cannot transmit thoughts directly to another’s mind? He argues that meaning is negotiated in the back-and-forth of conversation and draws on game theory to make his argument (p. 95):
There is a problem here: the hearer cannot read the speaker’s mind, and she can’t read his. This is what is called a COORDINATION PROBLEM. In speaking and understanding, speaker and hearer are trying to coordinate on the same meaning.
Croft then introduces the notion of a third-party Schelling game in which two players “are presented by a third party with a set of stimuli” which helps them converge on the same meaning. Sometimes it works, sometimes not. One possibility, he argues, is to use “natural perceptual or cognitive distinctiveness [as] a COORDINATION DEVICE” (p. 96). That gives us the adapted mind that I invoked in discussing Quine’s problem. Croft goes on to discuss a variety of linguistic devices as non-conventional coordination devices.

While the details are interesting and important – I recommend his discussion to you – we need not worry about them now.

Save one. Croft notes that, in order for speaker and hearer to reach agreement in conversation their mental states “need not be identical, though it is assumed that they are systematically related” (p. 99). Later on he notes that (114):
successful communication involves not the recovery of an original, ‘correct’ interpretation of the speaker’s original intention, but instead an interpretation that evolves over the course of the conversation, and is assessed by the success or failure of the higher social-interactional goals that the interlocutors are striving to achieve.
One reason why this effort is not doomed to failure from the beginning is the fact that although we cannot read each other’s minds, we do inhabit a shared world.

Croft’s general point, then, is simple: speech communication is a two-way interaction, not the one-way transmission of meaning, information, whatever, through a channel. De Man’s problem is thus solved for the case of face-to-face interaction, a common case, and surely the most basic one. Note that this solution does not involve recourse to a transcendental signified nor to stepping outside the text, nothing like that. It involves the ordinary and obvious means of interactive speech. In this sense, the key to the treasure is the treasure. Nothing else is required.
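The dynamics of such a coordination game can be made concrete with a toy simulation. The sketch below is my own minimal construction, not Croft’s model: two agents repeatedly guess at the meaning of a single word, and a shared success reinforces that meaning for both, so they converge without either ever reading the other’s mind.

```python
import random

# Toy Schelling-style coordination game (my own sketch, not Croft's):
# two agents repeatedly try to land on the same meaning for one word.

MEANINGS = ["rabbit", "animal", "white"]

def play(rounds=200, seed=0):
    rng = random.Random(seed)
    # Each agent starts with its own preference weights over meanings.
    weights = [{m: 1.0 for m in MEANINGS} for _ in range(2)]

    def choose(w):
        # Sample a meaning in proportion to its current weight.
        total = sum(w.values())
        r = rng.uniform(0, total)
        for m, v in w.items():
            r -= v
            if r <= 0:
                return m
        return MEANINGS[-1]

    successes = 0
    for _ in range(rounds):
        a = choose(weights[0])
        b = choose(weights[1])
        if a == b:
            # A shared success reinforces that meaning for both agents.
            weights[0][a] += 1.0
            weights[1][a] += 1.0
            successes += 1
    return successes

# Reinforcement drives both agents toward one meaning, so successes
# accumulate far faster than the 1-in-3 chance baseline would allow.
print(play())
```

The point of the toy is only this: no meaning is transmitted through any conduit; convergence emerges from the history of interaction itself, which is the game-theoretic version of “the key to the treasure is the treasure.”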

But what, you may ask, of written communication, where direct interaction is not possible? After all, de Man was a literary critic, writing about the reading of written texts. What about that?

Good question. I’m going to punt on it. But I observe that some written communication – correspondence – does involve interaction, but at a slower pace than conversation, often much slower. In the case of literary texts, yes, readers cannot ordinarily interact with authors, but they can interact with one another. I’ll say a little about that in the next post. Beyond that, yes, there are issues, serious issues. But this is not the place to address them. My concern here is just to get things started.
Note: Mathematician and psychologist Mark Changizi (1999) has an interesting argument about why vagueness of word meaning is essential to the proper functioning of language. His argument is grounded in considerations of computability and I recommend it to you. It makes an interesting complement to the game-theoretic conception of speaking.
Addition: See subsequent post reporting an experiment that David Hays did at RAND in the mid-1950s. It’s relevant to the game theoretic treatment of conversation.
What is a language and what are the memes? 
 
Now I want to shift gears a bit and work my way back to the physical “side” of the linguistic sign, because that’s where we’re going to go looking for memetic entities.

Throughout this post I’ve been assuming that we know what a language is. Now I want to get picky. Here’s what Sidney Lamb has to say in Pathways of the Brain. He’s talking about Roman Jakobson, the great linguist (p. 41):
Using the term language in a way it is commonly used . . . we could say that he spoke six languages quite fluently: Russian, Czech, German, English, Swedish, and French, and he had varying amounts of skill in a number of others. But each of them except Russian was spoken with a thick accent. It was said of him that “He speaks six languages, all of them in Russian.” . . . the evidence indicates that from a neurocognitive point of view there is no such unit as a language. What exists from a neurocognitive point of view is not so much one linguistic system as a group of interconnected systems, relatively independent from one another.
Lamb goes on to assert that (p. 42):
Professor Jakobson’s internal linguistic information included a single phonological system, that of his native Russian, together with separate systems of grammar and lexicon for Russian, Czech, English, German, French, and Swedish – with some overlap in these grammars and lexicons . . . along with his more limited abilities in various additional languages; plus a conceptual system connected to them all.
So far we’ve been concerned with how meaning is negotiated, where meaning is a matter of the conceptual system. That’s on one “side” of the arbitrary sign, the side inside the brain. Now we’re going to look at the other “side” of the sign, the side that’s in public view, the physical sign. It’s that physical side that most differs among languages.

The question before us is: How do we conceptualize the memetic elements of language? In glossing the emic/etic distinction in a comment to John Wilkins I remarked that (now I’m simply repeating that comment) the distinction originates in linguistics, in the distinction between phonetics and phonemics. The former is about the psychophysics of speech sound while the latter is about phoneme systems. These are obviously very closely related matters, but they aren’t the same. We tend to perceive the speech stream as consisting of discrete sound entities, syllables and phonemes; this is the domain of phonemics. But the speech signal is, in fact, continuous. If you look at a sonogram of some chunk of speech, you don’t draw a series of vertical lines through it separating one phoneme from another; nor can you snip a tape recording into phoneme-long or syllable-long segments and reassemble it into something that sounds like natural speech. The aspects of the speech stream which are phonemically active differ from one language to another, which is why foreign languages all sound like “Greek.” Independently of the fact that you don’t know what the words mean or how the syntax works, you can’t even hear the phonemes in the speech stream.

Now, that’s the distinction I’m after, between phonemes and the raw speech stream. That’s the distinction I drew in my discussion of music (third post). Phonemes are those properties of the speech stream that are linguistically active. We need, however, to distinguish between segmental phonemes and suprasegmental phonemes. The segmental phonemes are roughly parallel to the letters of an alphabetic writing system. Suprasegmentals include tone, stress, and prosodic patterns. And then we need to consider ordering as well, as the order in which elements occur is certainly a property of the speech stream, and a most important one.

Before thinking about order, though, we need to think a bit more about what’s going on. Roughly speaking, two things need to be extracted from the speech signal: 1) word identities (to be somehow linked to word meanings), and 2) the relations between the words (syntax). My quick take on matters – I’m not a linguist and I’ve not thought this through – is that both segmental and suprasegmental phonemes are involved in both of those processes. Relations between words are often indicated by word affixes, which are realized through segmental phonemes. Word identities are certainly realized by segmental phonemes, but tone and accent are involved as well.

Beyond this, relations between words are signaled by word order. In linguistic typology, typical word order is the primary trait on which classification is based. Thus one has SVO languages (subject-verb-object), VSO languages (verb-subject-object), and so forth. As those designations suggest, word order indicates grammatical function, that is, relations between words.
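The point that linear order alone can carry grammatical relations is easy to make concrete. The sketch below is my own toy, not a real parser: in a rigidly SVO language, position in the string is all you need to assign roles, which is why the same three words in a different order describe a different event.

```python
# Toy sketch (my own, not from any linguistics library): in a rigid
# SVO language, linear position alone assigns grammatical roles.
def parse_svo(sentence):
    subject, verb, obj = sentence.split()
    return {"subject": subject, "verb": verb, "object": obj}

print(parse_svo("dog bites man"))  # subject is "dog"
print(parse_svo("man bites dog"))  # same three words, roles reversed
```

A VSO language would assign the same roles from the same words by a different positional rule; either way, ordering is a perceptible property of the speech stream doing grammatical work.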

Thus between word order and phonemes we’ve got a rich set of memetic elements. We could consider morphology here as well. Taken together these aspects of the speech signal seem to be as memetically rich and abstract as the musical properties we looked at in discussing Rhythm Changes (first post).