Monday, December 10, 2018

Was post-war anti-communism ginned up as a device to support "We're Number One" as an FP goal?

FP=foreign policy.

Israel Kamakawiwo'ole on 3 Quarks Daily, plus more IZ


Here’s some more IZ. First, here he is in Hawaiian nationalist mode, “Hawaii ‘78”:



Notice how he supplies a new melody for “Twinkle Twinkle Little Star” and translates the lyrics into Hawaiian.



"White Sandy Beach":


Jubilee! Bronze Age economics (perhaps we should give it a go, no?)

... the Bronze Age and early Western civilization was shaped so differently from what we think of as logical and normal, that one almost has to rewire one’s brain to see how different the archaic view of economic survival and enterprise was.

Credit economies existed long before money and coinage. These economies were agricultural. Grain was the main means of payment – but it was only paid once a year, at harvest time. You can imagine how awkward it would be to carry around grain in your pocket and measure it out every time you had a beer.

We know how Sumerians and Babylonians paid for their beer (which they drank through straws, and which was cleaner than the local water). The ale-woman marked it up on the tab she kept. The tab had to be paid at harvest time, on the threshing floor, when the grain was nice and fresh. The ale-woman then paid the palace or temple for its advance of wholesale beer for her to retail during the year.

If the crops failed, or if there was a flood or drought, or a military battle, the cultivators couldn’t pay. So what was the ruler to do? If he said, “You owe the tax collector, and can’t pay. Now you have to become his slave and let him foreclose on your land.”

Suddenly, you would have had a slave society. The cultivators couldn’t serve in the army, and couldn’t perform their corvée duties to build local infrastructure.

To avoid this, the ruler simply cancelled the debts (most of which were owed ultimately to the palace and its collectors). The cultivators didn’t have to pay the ale-women. And the ale-women didn’t have to pay the palace. [...]

This concept is very hard for Westerners to understand. Yet it was at the center of the Old and New Testaments, in the form of the Jubilee Year – taken out of the hands of kings and placed at the center of Judaic religion.
A contemporary analogy:
A bad international loan to a government is one that the government cannot pay except by imposing austerity on the economy to a degree that output falls, labor is obliged to emigrate to find employment, capital investment declines, and governments are forced to pay creditors by privatizing and selling off the public domain to monopolists.

The analogy in Bronze Age Babylonia was a flight of debtors from the land. Today from Greece to Ukraine, it is a flight of skilled labor and young labor to find work abroad.

No debtor – whether a class of debtors such as students or victims of predatory junk mortgages, or an entire government and national economy – should be obliged to go on the road to economic suicide and self-destruction in order to pay creditors. The definition of statehood – and hence, international law – should be to put one’s national solvency and self-determination above foreign financial attacks. Ceding financial control should be viewed as a form of warfare, which countries have a legal right to resist as “odious debt” under moral international law.

The basic moral financial principle should be that creditors should bear the hazard for making bad loans that the debtor couldn’t pay — like the IMF loans to Argentina and Greece. The moral hazard is their putting creditor demands over the economy’s survival.

Sunday, December 9, 2018

Two contemplatives considering the question of whether they've been naughty or nice

Chaos in the Movie Biz: A Review of Hollywood Economics [#DH]

New bump, this time for Tyler Cowen and those interested in "the great stagnation". On Nov. 9, 2004 De Vany gave a presentation at Harvard Business School, "Hollywood Economics: Dealing with 'Wild' Uncertainty in the Movies and Pharmaceuticals". The pharmaceutical industry is one of three case studies in a recent paper, Are Ideas Getting Harder to Find? De Vany argued that both industries, movies and pharmaceuticals, had broadly similar characteristics despite their very different underlying natures: many 'ideas' are proposed, a few make it into production, of those some break even and become profitable, and among those a very few become blockbusters (and keep the boat afloat). Success is as much a matter of luck as of positive knowledge and deliberate intent. Among De Vany's wry observations: "It's not a Gaussian world out there."
* * * * *
I'm bumping this to the top of the queue for my friends in computational criticism (aka distant reading). When they analyze a corpus, they treat all books in the corpus alike. All are linked to their publication date and all are treated as though they had the same sales and readership. Yet some were read by a few and forgotten a year or two after they were published, while others were read by thousands and tens of thousands over many years. De Vany tracks movies that came out in the 1990s. Most all but disappear within weeks of release. But some go on to make a profit and a few of these become blockbusters. (And who knows how many of those 1990s movies will be watched in 2030?)
* * * * *
Arthur De Vany, Hollywood Economics: How Extreme Uncertainty Shapes the Film Industry, Routledge, 2004.
De Vany presents a profound and imaginative treatment of the economics of the movie business, one that has implications, not only for similar businesses such as publishing and music (and even pharmaceuticals), but for our understanding of the dynamics of culture. When Richard Dawkins coined the term "meme" he unwittingly paved the way for tons and tons of sexy but shallow commentary on human culture. Though that is not what De Vany set out to do – "meme" never shows up in his book – he has given mathematical form to the behavior of movie memes and has demonstrated that it is the people who are in charge, not the memes.

I want to underscore this point as many of my humanist colleagues have spent the last several decades castigating Hollywood for its hegemonic hold over the subject masses who have little choice but to submit to having their brains scrambled by Hollywood nonsense. It’s not that simple. Hollywood would dearly love to have such control over the audience, for it would make for a much more profitable business. Alas, as De Vany demonstrates, the Hollywood suits and moguls don’t have that kind of power. The oppressed masses do, in fact, have quite a bit of autonomy in their actions. No movie can succeed without word-of-mouth recommendations, and those words cannot be dictated from on high.

Nobody Knows

In the words of [the late] screenwriter William Goldman, “nobody knows anything” about what happens to movies once they are released to the theaters. Most movies don't even break even, much less make a profit – not in theatrical release, which is what De Vany investigates. (These days, movies make money on DVDs and TV, but that's another story, told by Edward Jay Epstein.) That's no way to run a business, but the problems are inherent in the nature of movies as a business venture. The deep and ineradicable condition of the business is that there is no reliable way to estimate the market appeal of a movie short of putting it on screens across the country and seeing if people come to watch.

Does having “bankable” names on the marquee guarantee that the movie will make bank? No. Does opening big on thousands of screens with PR from here to the moon guarantee that the movie will make bank? No. Does a small opening mean the film is doomed? No. Hence Goldman's remark.

But all is not chaos. Or rather it is, but chaos of the mathematical kind. De Vany shows that about 3 or 4 weeks into circulation, the trajectory of movie dynamics (that is, people coming to theaters to watch a movie) hits a bifurcation. Most movies enter a trajectory that leads to diminishing attendance and no profits. But a few enter a trajectory that leads to continuing attendance and, eventually, a profit. Among these, a very few become blockbusters.

And these few come to dominate the statistics of movie economics. From the point of view of statistics based on the normal distribution, those few movies are outliers and should be discounted. De Vany develops a statistical framework – he calls it the stable Paretian model – that gives proper attention to those blockbusters. The model is stable in the sense that it exhibits the same structure at all scales.
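
To get a feel for what a non-Gaussian world means here, consider a toy simulation (mine, not De Vany's model; every number is invented for illustration). In a Gaussian world the top films earn a modest share of total revenue; in a heavy-tailed Pareto world a handful of blockbusters carry the whole industry:

    import random

    random.seed(42)
    N = 1000  # hypothetical number of films in release

    # Gaussian world: revenues cluster around the mean; no film strays far.
    gaussian = [max(0.0, random.gauss(30.0, 10.0)) for _ in range(N)]

    # Heavy-tailed world: a Pareto with alpha < 2 has infinite variance,
    # the regime De Vany argues theatrical revenues inhabit.
    alpha = 1.5
    pareto = [10.0 * random.paretovariate(alpha) for _ in range(N)]

    for name, revenues in (("Gaussian", gaussian), ("Pareto", pareto)):
        top10 = sum(sorted(revenues, reverse=True)[:10])
        share = 100.0 * top10 / sum(revenues)
        print(f"{name} world: top 10 of {N} films take {share:.1f}% of all revenue")

Run it and the Gaussian top ten take a few percent of the total, while the Pareto top ten take a hugely disproportionate share – the outliers are the story, not noise to be discounted.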

Structure of the Industry

De Vany devotes particular attention to the structure of the movie business. During its glory years from the 1920s through the mid-1950s the industry was organized by the so-called studio system. The studios owned both the means of production and the means of distribution. Stars, directors, writers, and craftspeople, all were on staff at the studios. When it came time to release films, a studio's distribution system went to work and the films went out to theaters owned by the studio and to independent theaters with long-term booking arrangements. The system worked well.

But in the 1950s an anti-trust action was brought against the studios and they were ordered to divest themselves of their theaters and stop the cozy booking arrangements. In consequence the studios lost the stars, directors, writers, and producers – all of whom became independent contractors – and the costs of production went up. Those increased costs were passed on to the movie-goer.

De Vany argues, convincingly, that the studios were not a cartel that drove up prices for their own benefit. Rather, their ownership of theaters helped them cope with the extreme uncertainty of the business. They had just enough direct control over exhibition practices to stabilize their income so that they could afford to keep the talent on staff. Once that stability was taken from them, they had to let the talent go. And that, in turn, required that, each time a film was to be made, someone had to go out into the marketplace and put the team together, thus incurring transaction costs that didn't exist in the studio system.

* * * * *

This is an excellent book. Note that it is thick with mathematics. But it also has lots of charts. You can read those even if you can't make sense of the equations.

Saturday, December 8, 2018

Two versions of weeds



Angels hide and entrepreneurs seek [TALENT SEARCH]

Merwan H. Engineer, Paul Schure, Dan H. Vo, Hide and seek search: Why angels hide and entrepreneurs seek, Journal of Economic Behavior & Organization, Available online 6 December 2018, https://doi.org/10.1016/j.jebo.2018.10.007.

Highlights
  • Traditional search theory assumes search frictions are part of the technology; search frictions are imposed by the environment.
  • We introduce the idea that search frictions may be the consequence of a deliberate choice.
  • Our leading example is the angel capital market, a large market for arm’s-length entrepreneurial finance.
  • In our model entrepreneurs engage in costly search for financiers as these financiers choose to make themselves hard to find.
  • Entrepreneurs signal (high) productivity by engaging in costly search for financiers.
Abstract

The angel capital market poses a puzzle for search theory. Angel investors (“angels”) are often described in the literature as if they were hiding from entrepreneurs that seek angel capital investment. Such behavior by angels forces entrepreneurs to engage in costly search for angels. In our model, a separating equilibrium exists in which hiding by angels discourages search by low-productivity entrepreneurs who would inundate any visible angels. Only high-productivity entrepreneurs incur the time and effort costs of search to signal their type and avoid the lemons problem in the visible capital market. As the search market generates higher quality, hence more profitable matches, social surplus may increase despite the costs of hiding and searching. Hide and seek search contrasts with standard search theory where agents choose strategies to mitigate inherent physical and informational search frictions.
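
The incentive logic in that abstract can be seen in a toy check (my illustration with invented payoffs, not the authors' model): hiding imposes a search cost that only high-productivity entrepreneurs find worth paying, so search itself becomes the signal.

    # Toy incentive-compatibility check for the hide-and-seek story.
    # All payoff numbers are invented for illustration.

    search_cost = 3.0            # time and effort needed to find a hidden angel
    visible_payoff = 1.0         # lemons-discounted payoff in the visible market
    match_surplus = {"high": 10.0, "low": 2.0}  # value of an angel match, by type

    for etype, surplus in match_surplus.items():
        searches = (surplus - search_cost) > visible_payoff
        print(f"{etype}-productivity entrepreneur searches: {searches}")

    # high: 10 - 3 = 7 > 1  -> searches, and the search itself signals type
    # low:   2 - 3 = -1 < 1 -> stays visible; hiding has screened out low types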

Friday, December 7, 2018

Friday Fotos: Seven for my birthday (one per decade)

I've decided to re-render some photos from 2006.

This was an important photo for me. I liked the composition and the energy, but it was marked by motion blur. What should I do? I decided to go with it. [Click on a photo to embiggen.]




Here's the "birthday cake" photo, bright pastels on hard brick:




The same wall, but at a distance. The sky of course.




And around the corner. The writing's a bit obscure: "Say no 2 war":




Along Jersey Ave. heading to Hoboken:




Before we get there, though, we hang a right and head toward a housing development, Holland Gardens (not many gardens here):




One of my favorites:


Computation in language, Turing machine edition


So, I’m exploring the idea that language is the simplest thing humans do that involves computation. Thus, in my current view, whatever it is that goes on in the brain of a chimpanzee, a chameleon, or a roundworm, for example, it isn’t computation. Just what it is, that’s not my concern at this point. It follows as well that linguistic computation is grounded in something else, likely several things, none of which are my direct concern here. To be clear on this point, I reject the view that individual neurons are the basic elements of mental computation, as was suggested by McCulloch and Pitts in 1943, “A logical calculus of the ideas immanent in nervous activity.” Of course neurons and neuronal circuits can be simulated by a computer, but that’s something else. Computers can simulate atomic explosions too, but we don’t take that as evidence that atomic explosions are computational phenomena.

But what do I mean by computation? I mean a Turing machine, albeit one of somewhat limited capacity. As language is first of all speaking, that’s where we start. The vocal system writes to the tape while the auditory system reads from it; taken together they are the head of the device. The brain contains that table of instructions – I’m indifferent at this point as to whether those instructions are symbolic, pre-symbolic, or both – and the state register. The speech stream itself is the tape.

Therein lies the limitation of this Turing machine. In standard Turing machines the tape can move over the head in either direction. This “tape” moves in only one direction. The tape in a universal Turing machine is indefinitely long. This tape is quite limited in length. Experiments on the length of short-term memory put it at about 3 to 4 seconds. That’s the length of this one-directional tape. It can carry a single line of poetry.

Those limitations give this Turing device the character of a very specialized input-output system. It’s a way, of course, for people to exchange symbolic “information” with one another, sending outputs to others and receiving inputs from them. Most interestingly, and very curiously, it allows us to exchange inputs and outputs with ourselves in ways otherwise impossible. Just why and how that is so is something I’ve thought about a great deal over the years, but do not understand. And I don’t think I’ll get there now. But I note it.
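
Here is a minimal sketch of such a device, purely illustrative (the buffer size and words-per-second rate are invented stand-ins, and nothing here is a claim about neural mechanism):

    from collections import deque

    SECONDS = 4          # rough span of auditory short-term memory
    WORDS_PER_SEC = 2    # invented speech rate, for illustration only

    class OneWayTape:
        """A write-forward, read-forward tape: it never reverses, and old
        material simply falls off the far end as new material arrives."""
        def __init__(self):
            self.buffer = deque(maxlen=SECONDS * WORDS_PER_SEC)

        def write(self, word):       # the vocal system writes to the tape...
            self.buffer.append(word)

        def read(self):              # ...the auditory system reads what remains
            return list(self.buffer)

    tape = OneWayTape()
    for word in "in xanadu did kubla khan a stately pleasure dome decree".split():
        tape.write(word)

    # Only the last few seconds' worth survives: roughly one line of poetry.
    print(tape.read())
    # ['did', 'kubla', 'khan', 'a', 'stately', 'pleasure', 'dome', 'decree']

The point of the sketch is only the shape of the constraint: nothing can be re-read, and anything older than the buffer is gone.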

Rather than go on and on I’ll conclude with a passage about the poetic line from my 2003 paper, “Kubla Khan” and the Embodied Mind. Note, in particular, the few lines about syntax and its relation to semantics:
Considered as a unit of analysis, the line is a conjunction of units of thought, or sense, and units of physical realization – speaking and hearing.

The significance of the poetic line is easily demonstrated by the common experiment of taking some fragment of ordinary prose and breaking it into separate lines. The result is rarely good poetry, but the poetry-like presentation invites one to consider each line as a unit by itself in addition to its connections with the lines before and after. The quasi-autonomy of the poetic line belongs to the cultural conventions governing how we read poetry. The psychological, not to mention the neural, underpinnings of this effect are, as far as I know, obscure.

Nonetheless, the linguist Wallace Chafe has quite a bit to say about what he calls an intonation unit, and that seems germane to any consideration of the poetic line. In Discourse, Consciousness, and Time Chafe asserts that the intonation unit is “a unit of mental and linguistic processing” (Chafe 1994, pp. 55 ff., 290 ff.). He begins developing the notion by discussing breathing and speech (p. 57): “Anyone who listens objectively to speech will quickly notice that it is not produced in a continuous, uninterrupted flow but in spurts. This quality of language is, among other things, a biological necessity.” He goes on to observe that “this physiological requirement operates in happy synchrony with some basic functional segmentations of discourse,” namely “that each intonation unit verbalizes the information active in the speaker’s mind at its onset” (p. 63).

While it is not obvious to me just what Chafe means here, I offer a crude analogy to indicate what I understand to be the case. Speaking is a bit like fishing; you toss the line in expectation of catching a fish. But you do not really know what you will hook. Sometimes you get a fish, but you may also get nothing, or an old rubber boot. In this analogy, syntax is like tossing the line while semantics is reeling in the fish, or the boot. The syntactic toss is made with respect to your current position in the discourse (i.e. the current state of the system). You are seeking a certain kind of meaning in relation to where you are now.

Chafe identifies three different kinds of intonation units. Substantive units tend to be roughly five words long on average and, as the term suggests, present the substance of one’s thought. Regulatory units are generally a word or so long (e.g. and then, maybe, mhm, oh, and so forth), and serve to regulate the flow of ideas, rather than to present their substance. Given these durations, a single line of poetry can readily encompass a substantive unit or both a substantive and a regulatory unit.

The third kind of unit, fragmentary, results when one of the other types is aborted in mid-execution. That is to say, one is always listening to one’s own speech and is never quite sure, at the outset of a phrase, whether or not one’s toss of the syntactic line will reel-in the right fish. If things do not go as intended, the phrase may be aborted. Fragments do not concern us, as we are dealing with a text that has been thought-out and, presumably, edited, rather than with free speech, which is what Chafe studied.

Chafe’s notion is consistent with an observation made initially by Ernst Pöppel. After reviewing studies by others and offering some of his own, Pöppel concluded that our awareness of the present extends roughly three to four seconds. That suggested that lines of poetry last no longer than that and that, where written lines appeared to take longer to read, they have a strong break in the middle. Working with a poet and critic, Frederick Turner, Pöppel found evidence for these notions in the poetry of several cultures, thus showing how versification technique deals with this constraint (cf. Turner and Pöppel 1983, Pöppel 1985, pp. 75-82).

Thursday, December 6, 2018

Remember that green shoe?


Four ways of managing a rock and roll band

Ian Leslie, A rocker’s guide to management, The Economist, December/January 2019. An interesting article. In part because:
If rock groups are businesses, businesses are getting more like rock bands. Workplaces are far more informal than they used to be, with less emphasis on protocol, rank and authority. Many firms try to cultivate the creativity that can come from close collaboration. Employers attempt to engineer personal chemistry, hiring coaches to fine-tune team dynamics and sending staff on team-building exercises. Employees are encouraged to share lunch, play table tennis and generally hang out. As the founder of Hubble, a London office-space company, put it, “We hope that our team will become friends first, and colleagues second.” [...]

Successful startups have to make a difficult transition from being a gang of friends working on a cool idea to being managers of a complex enterprise with multiple stakeholders. It’s a problem familiar to rock groups, which can go quickly from being local heroes to global brands, and from being responsible only for themselves to having hundreds of people rely on them for income. In both cases, people who made choices by instinct and on their own terms acquire new, often onerous responsibilities with barely any preparation. Staff who were hired because they were friends or family have their limitations exposed under pressure, and the original gang can have its solidarity tested to destruction. A study from Harvard Business School found that 65% of startups fail because of “co-founder conflict”. For every Coldplay, there are thousands of talented bands now forgotten because they never survived contact with success.

The history of rock groups can be viewed as a vast experimental laboratory for studying the core problems of any business: how to make a group of talented people add up to more than the sum of its parts. And, once you’ve done that, how to keep the band together. Here are four different models.
Here they are, the four types:
FRIENDS

“We can work it out”

The Beatles invented the idea of the band as a creative unit in the 1960s. John Lennon’s and Paul McCartney’s artistic partnership enabled them to vertically integrate the hitherto separate functions of songwriting and performing. The band had no designated frontman; all four Beatles were capable of singing lead. Though Lennon was the de facto leader in the early years, one of the band’s innovations was not to call itself “Johnny and the Beatles”, as was conventional at the time. Partly because promoters and journalists found this new entity hard to grasp, friendship became central to the band’s image. John, Paul, George and Ringo were presented to the world as a gang of inseparable buddies. Their voices blended thrillingly. They cut their hair and dressed in the same style. They talked – oh how they talked – in synchrony. “We’re really all the same person,” said McCartney in 1969. “We’re just four parts of the one.” [...]

AUTOCRACIES

“I won’t back down”

Tom Petty and the Heartbreakers were formed in 1976 by five musicians from Gainesville, Florida, who had moved to Los Angeles in search of stardom. Petty was the group’s lead singer, songwriter and driving force, but the band split its income equally. Petty was talented enough to make it alone, but he loved being in a band: it gave him a sense of belonging after a fraught childhood scarred by violence. The Heartbreakers had an ethos of all for one, and one for all. By 1978 they had released two albums that sold well. Their next, “Damn the Torpedoes”, would go triple platinum and propel them into the big league. But before that happened, the band’s leader faced a tough decision.

The Heartbreakers had a new manager, Elliot Roberts, who, at 35, was already a grizzled veteran of the industry, having managed Neil Young and Joni Mitchell. The first thing Roberts did was sit down with Petty and tell him that he needed to be more selfish. “You can’t do this deal where you’re giving every­body in the band an equal cut of money,” Roberts said, “because there’s going to be a big problem at some point. You’re going to feel really bitter and used. I’ve been down this road with bands before. It explodes, and everyone walks away.” Petty listened. The days of equal shares were over. [...]

DEMOCRACIES

“Everybody hurts”

In 1979, Michael Stipe, a college student in Athens, Georgia, was browsing in a downtown record store called Wuxtry when he got talking to the clerk, a college dropout and amateur guitarist called Peter Buck. The two men bonded over a love of underground rock and soon decided to form a band, recruiting two fellow students, Bill Berry and Mike Mills. Thirty-two years later, their band, R.E.M., broke up amicably, ending one of the happiest collaborations in rock history.

Another regular at Wuxtry Records was Bertis Downs, a law student. An early fan of the band, Downs became R.E.M.’s legal adviser and manager. He told me that R.E.M. operated as an Athenian democracy. “They all had equal say. There was no pecking order.” This was not majority rule: “Everyone had a veto, which meant everyone had to buy into every decision, business or art. They hashed things out until they reached a consensus. And they said ‘No’ a lot.” [...]

FRENEMIES

“It’s only rock ’n’ roll”

Charlie Watts’s forceful rebuke to Mick Jagger came at a difficult time for the Rolling Stones. In the 1980s they came as close to splitting as they ever have. Their last album, “Undercover”, had sold disappointingly. Jagger embarked on a solo career and seemed to be seeking an escape from the band, possibly because he was tired of dealing with Richards, who had shaken off a debilitating dependence on heroin only to replace it with one on alcohol. But Jagger’s solo albums flopped, and he returned to his old partner. The two came to an accommodation. By the end of the decade, the Stones were back on the road again, promoting a successful new album. They have been touring – and the money has kept pouring in – ever since.

“In bands that survive a long time, there’s often an agreement to disagree,” says Simon Napier-Bell, a manager of multiple bands, including the Yardbirds and Wham! “People who don’t get on can get on in an interesting way.” It was possible for the Stones to come to such an arrangement precisely because they were never as close as the Beatles. It’s not that Jagger and Richards weren’t friends, but friendship was never as central to their image. When it comes down to it, they are there to work.
H/t Tyler Cowen.

Sequence detection in the cerebellum as a driver in the rise of Homo sapiens

Larry Vandervert, How Prediction Based on Sequence Detection in the Cerebellum Led to the Origins of Stone Tools, Language, and Culture and, Thereby, to the Rise of Homo sapiens, Front. Cell. Neurosci., 13 November 2018, https://doi.org/10.3389/fncel.2018.00408

This article extends Leiner et al.'s watershed position that cerebellar mechanisms played prominent roles in the evolution of the manipulation and refinement of ideas and language. First it is shown how the cerebellar mechanism of sequence-detection may lead to the foundational learning of a predictive working memory in the infant. Second, it is argued how this same cerebellar mechanism may have led to the adaptive selection toward the progressively predictive phonological loop in the evolution of working memory of pre-humans. Within these contexts, cerebellar sequence detection is then applied to an analysis of leading anthropologists Stout and Hecht's cerebral cortex-based explanation of the evolution of culture and language through the repetitious rigors of stone-tool knapping. It is argued that Stout and Hecht's focus on the roles of areas of the brain's cerebral cortex is seriously lacking, because it can be readily shown that cerebellar sequence detection importantly (perhaps predominantly) provides more fundamental explanations for the origins of culture and language. It is shown that the cerebellum does this in the following ways: (1) through prediction-enhancing silent speech in working memory, (2) through prediction in observational learning, and (3) through prediction leading to accuracy in stone-tool knapping. It is concluded, in agreement with Leiner et al., that the more recently proposed mechanism of cerebellar sequence-detection has played a prominent role in the evolution of culture, language, and stone-tool technology, the earmarks of Homo sapiens. It is further concluded that through these same mechanisms the cerebellum continues to play a prominent role in the relentless advancement of culture.

Wednesday, December 5, 2018

A note on the measurement of socio-cultural complexity and rankshift

There is a sophisticated literature on the measurement of socio-cultural complexity that dates mostly to the third quarter of the previous century. Raoul Naroll was one of the most active investigators on that issue. At the bottom of this post you’ll see an index he compiled in the late 1960s [1]. The 50 societies were chosen to represent all geographic areas and constitute a standard sample often used in such research.

He developed the index using measures for division of labor (how many different occupational specialties?), scale of settlement (size of largest settlement), and social ramification – “more hierarchical levels, more councils of advisors, and more staffs to enhance the organization’s power to enforce its decrees (Teams)” [2]. Once he’d gotten raw scores, he normalized them to the interval between 1 and 100. Those are the scores you see in the middle column below. Note that the lowest normalized score is 12 and the highest is 84. The right-hand column assigns an ordinal rank; where two or more societies had the same score, they’re assigned the same rank.

Now, look at the red numbers, which I added. Starting at the bottom, notice the interval between successive scores. Where that interval is larger than one I’ve indicated the interval distance with a red number, up to a point. That point is the distance between the Amhara, at 43, and the Burmese, at 56, where the difference is 13. That is by far the largest interval distance on the chart. The next largest distance is 4, between the Egyptians and the Irish.

What’s interesting about that place in the chart is that every culture from the Burmese through the Austrians is literate, while everything from the Semang through the Tongans is illiterate. What about the Amhara, you ask? They’re literate.

That is, setting the Amhara aside, all the cultures with complexity values up through 42 are Rank 1 cultures while all those with values of 56 or above are Rank 2 or Rank 3. I asked Naroll about the Amhara and he replied something to the effect, “Oh, they’re a puzzle.” But I forget what he said afterward.

Assuming that there is a good account for the Amhara, what that 13 point gap suggests is that there is an interesting difference between the societies below that interval and the societies above. The difference between Ranks 1 and 2 has a qualitative aspect to it that shows up in this quantitative measure as a dramatic quantitative difference.
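
The gap-hunting itself is easy to mechanize. In the sketch below, the scores are invented stand-ins for Naroll's column – only the 43, the 56, and the next-largest gap of 4 are taken from the text above:

    # Find the largest jump between successive normalized complexity scores.
    scores = [12, 14, 15, 17, 20, 23, 26, 29, 33, 36, 39, 42, 43,
              56, 60, 63, 66, 70, 73, 77, 80, 84]

    gaps = [(hi - lo, lo, hi) for lo, hi in zip(scores, scores[1:])]
    width, low, high = max(gaps)
    print(f"largest gap: {width} points, between {low} and {high}")
    # -> largest gap: 13 points, between 43 and 56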


[1] I found it in James M. Schaefer, “A Comparison of Three Measures of Cultural Complexity”, American Anthropologist 71, 1969, 706-708, https://anthrosource.onlinelibrary.wiley.com/doi/pdf/10.1525/aa.1969.71.4.02a00090.

[2] David G. Hays, The Measurement of Cultural Evolution in the Non-Literate World: Homage to Raoul Naroll, Metagram Press, 1998, p. 28, https://www.academia.edu/37163326/The_Measurement_of_Cultural_Evolution_in_the_Non-Literate_World.

God loves

Computation in Language

For some time now I’ve been pursuing the idea that language is the basic locus of computation in the human mind and, correlatively, that it is grounded in processes that are non-computational in kind. This might not seem strange to a computational linguist, but others of course have other ideas. Many humanists are at best skeptical, if not horrified, at the idea that the mind is computational in some way. And many cognitive scientists would like to think it’s computation all the way down to individual neurons. Humanistic skepticism is simply out of date, while the cognitive scientists are over-eager, especially now that we have other ways of thinking about basic neural processes (e.g. complex dynamics).

More specifically, I identify computation with establishing the mapping between a language string and elements of meaning, whether in comprehension or production. The problem, of course, is that the elements of the string are all there together in one place, one after the other. But the elements of meaning are scattered all over the place in neuro-mental space. Here and there might be a pair of contiguous elements, but for the most part, not. We need a computational process to establish the coupling.
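
Here’s a toy sketch of the coupling problem, purely illustrative (the “addresses” are arbitrary stand-ins for locations in neuro-mental space, not a claim about how brains do it):

    # The string: contiguous, ordered elements.
    utterance = "dog chased ball"

    # A toy "neuro-mental space": meaning elements at scattered, arbitrary
    # addresses with their own local connections (all invented for illustration).
    meanings = {
        "dog":    {"addr": 90210, "links": ["animal", "pet"]},
        "chased": {"addr": 17,    "links": ["pursuit", "motion"]},
        "ball":   {"addr": 5150,  "links": ["toy", "round"]},
    }

    def couple(utterance):
        """Bind each position in the contiguous string to a scattered node."""
        bindings = []
        for position, word in enumerate(utterance.split()):
            node = meanings.get(word)
            bindings.append((position, word, node["addr"] if node else None))
        return bindings

    # Positions 0, 1, 2 are adjacent; addresses 90210, 17, 5150 are anything but.
    print(couple(utterance))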

But why? Or, conversely, why doesn’t this problem exist for any other activity?

Questions questions questions.

Tuesday, December 4, 2018

Could mainland China force itself on Taiwan through military power? Perhaps not.

Tanner Greer, Taiwan Can Win a War With China, Foreign Policy, September 25, 2018:
China has already ratcheted up economic and diplomatic pressure on the island since the 2016 election of Tsai Ing-wen and the independence-friendly Democratic Progressive Party. Saber-rattling around the Taiwan Strait has been common. But China might not be able to deliver on its repeated threats. Despite the vast discrepancy in size between the two countries, there’s a real possibility that Taiwan could fight off a Chinese attack—even without direct aid from the United States.

Two recent studies, one by Michael Beckley, a political scientist at Tufts University, and the other by Ian Easton, a fellow at the Project 2049 Institute, in his book The Chinese Invasion Threat: Taiwan’s Defense and American Strategy in Asia, provide us with a clearer picture of what a war between Taiwan and the mainland might look like. Grounded in statistics, training manuals, and planning documents from the PLA itself, and informed by simulations and studies conducted by both the U.S. Defense Department and the Taiwanese Ministry of National Defense, this research presents a very different picture of a cross-strait conflict than that hawked by the party’s official announcements.

Chinese commanders fear they may be forced into armed contest with an enemy that is better trained, better motivated, and better prepared for the rigors of warfare than troops the PLA could throw against them. A cross-strait war looks far less like an inevitable victory for China than it does a staggeringly risky gamble.