Friday, August 14, 2020

Friday Fotos: Where it all started, the old junk spot beneath Christ Hospital in Jersey City – Those were the days

The battle is joined: Fortnite vs. Apple and Google

Jack Nicas, Kellen Browning and Erin Griffith, Fortnite Creator Sues Apple and Google After Ban From App Stores, NYTimes, Aug. 13, 2020:
Apple’s and Google’s spats with app developers over their cut of revenues exploded into a high-stakes clash on Thursday when the tech giants kicked the wildly popular game Fortnite out of their app stores and the game’s maker hit back with lawsuits.

The fight began on Thursday morning with a clear provocation. Epic Games, the maker of Fortnite, started encouraging Fortnite’s mobile-app users to pay it directly, rather than through Apple or Google. The companies require that they handle all such app payments, so they can collect a 30 percent commission, a policy that has been at the center of antitrust complaints against the companies.

Hours later, Apple responded, removing the Fortnite app from its App Store. ...

Within an hour, Epic opened a multifront war against Apple that appeared months in the making.

First, it sued Apple in federal court, accusing the company of violating antitrust laws by forcing developers to use its payment systems.

“Apple’s removal of Fortnite is yet another example of Apple flexing its enormous power in order to impose unreasonable restraints and unlawfully maintain its 100% monopoly over the” market for in-app payments on iPhones, Epic said in its 62-page lawsuit.

Then Epic rolled out a sophisticated public-relations campaign that depicted Apple, one of the industry’s most image-conscious companies, as the stodgy old guard trying to stifle the upstart. To do so, it used Apple’s own imagery against it, mimicking Apple’s iconic “1984” ad from its own fight against IBM 36 years ago. This time, Fortnite characters were defying Apple’s totalitarian regime. Within hours, #FreeFortnite was the top trend on Twitter.

Later on Thursday, Google also removed the Fortnite app from its official Android app store, the Google Play Store, saying the app violated Google’s policies. Epic replied with a similar lawsuit.

Apple’s confrontation with Epic has much higher stakes than Google’s because Fortnite remains available for Android devices. Google’s Android software allows people to download apps outside Google’s app store, unlike Apple’s approach with iPhones, and Epic had added Fortnite to the Play store only in April....

Epic, a North Carolina company that is valued at roughly $17 billion and is partly owned by the Chinese internet giant Tencent, now appears poised to sacrifice millions of dollars in revenue in a fight that will keep Fortnite off iPhones. That immediately makes Apple’s flagship devices far less attractive to millions of people across the world — just ahead of Apple’s most prominent iPhone introduction in years....

For Apple, the world’s most valuable company, there are few easy options. Apple has largely staked its future on its services business, which has become its second-largest source of revenue after sales of the iPhone, at $51.7 billion over the past year. But that business is mostly built on its cut of other apps’ sales, so enforcing its 30 percent commission is crucial to keeping its business growing.
Others have complained about Apple's policies:
Apple has had a series of recent spats with app makers. The music service Spotify has complained to regulators in Europe and the United States. Blix, which makes an email app that competes with Apple's service, also sued Apple on antitrust grounds last year. And last week, Microsoft ended a pilot of its mobile gaming app and Facebook watered down its gaming app on iPhones because of Apple’s rules.

Apple has said that all app developers are subject to the same rules, and that its commission is fair. Apple has argued that it spends billions of dollars on the App Store and iPhone technology, creating business opportunities for companies like Epic.

Thursday, August 13, 2020

Five posts, a sinking pier, the river, and some birds

Stagnation, Redux: It’s the way of the world [good ideas are not evenly distributed, no more so than diamonds]

Tyler Cowen has just posted a conversation with Nicholas Bloom, a Stanford economist interested in economic growth. That gives me a chance to revise a working paper I’d written last year, Stagnation and Beyond: Economic growth and the cost of knowledge in a complex world [2], which is based on a paper Bloom had published along with three colleagues, Are Ideas Getting Harder to Find? [3]. I took two of their case studies, Moore’s law and drug discovery, examined them in an informal manner and concluded that (much of) the increasing difficulty of discovering new ideas can be attributed to the cost of finding out more about the world.

At the time I wrote that paper I was unaware of Paul Romer’s 1992 article, Two Strategies for Economic Development [4]. He develops his argument with a toy model that is similar in spirit to the idea I’d advanced in my working paper. The objective of this post is to update that idea in view of Romer’s model by deploying some toys of my own.

First I set the stage with some passages from Cowen’s conversation with Bloom. Then I introduce Romer, after which I offer my elaborations: some simple diagrams through which we can visualize the relationship between our conceptual systems (maps) and the world itself (territory). I conclude by anchoring that story in time and space, making it, in principle, a story about the evolution of human society over time.

Bloom’s conversation with Cowen

Bloom sets the stage:
The big picture — just to make sure everyone’s on the same page — is, if you look in the US, productivity growth . . . In fact, I could go back a lot further. It’s interesting — you go much further, and you think of European and North American history. In the UK that has better data, there was very, very little productivity growth until the Industrial Revolution. Literally, from the time the Romans left in whatever, roughly 100 AD, until 1750, technological progress was very slow.

Sure, the British were more advanced at that point, but not dramatically. The estimates were like 0.1 percent a year, so very low. Then the Industrial Revolution starts, and it starts to speed up and speed up and speed up. And technological progress, in terms of productivity growth, peaks in the 1950s at something like 3 to 4 percent a year, and then it’s been falling ever since.
Then you ask that rate of fall — it’s 5 percent, roughly. It would have fallen if we held inputs constant. The one thing that’s been offsetting that fall in the rate of progress is we’ve put more and more resources into it. Again, if you think of the US, the number of research universities has exploded, the number of firms having research labs.

Thomas Edison, for example, was the first lab about 100 years ago, but post–World War II, most large American companies have been pushing huge amounts of cash into R&D. But despite all of that increase in inputs, actually, productivity growth has been slowing over the last 50 years. That’s the sense in which it’s harder and harder to find new ideas. We’re putting more inputs into labs, but actually productivity growth is falling.
Cowen responds by saying, “Let’s say paperwork for researchers is increasing, bureaucratization is increasing. How do we get that to be negative 5 percent a year as an effect?” A bit later he’ll offer:
Doesn’t the explanation have to be that scientific efforts used to be devoted to public goods much more, and now they’re being devoted to private goods? That’s the only explanation that’s consistent with rising wages for science but a declining social output from her research, her scientific productivity.
In his response to Tyler, Bloom makes two suggestions that are consistent with my hypothesis:
Why is it happening at the aggregate level? I think there are three reasons going on. One is actually come back to Ben Jones, who had an important paper, which is called, I believe, “[Death of the] Renaissance Man.” This came out 15 years ago or something. The idea was, it takes longer and longer for us to train.

Just in economics — when I first started in economics, it was standard to do a four-year PhD. It’s now a six-year PhD, plus many of the PhD students have done a pre-doc, so they’ve done an extra two years. We’re taking three or four years longer just to get to the research frontier. There’s so much more knowledge before us, it just takes longer to train up. That’s one story.

A second story I’ve heard is, research is getting more complicated. I remember I sat down with a former CEO of SRI, Stanford Research Institute, which is a big research lab out here that’s done many things. For example, Siri came out of SRI. He said, “Increasingly it’s interdisciplinary teams now.”
His third suggestion repeats one of Cowen’s:
Then finally, as you say, I suspect regulation costs, various other factors are making it harder to undertake research. A lot of that’s probably good. I’d have to look at individual regulations. Health and safety, for example, is probably a good idea, but in the same way, that is almost certainly making it more expensive to run labs…
I am most interested in pursuing the notion that ideas are getting harder to find because that’s just how the world is. Even assuming that I am correct, one must still show that that factor is stronger than the ones Cowen favors and, for that matter, other factors one might suggest. That’s a different kind of argument and one I won’t pursue here.

Romer’s model

Let’s start with Paul Romer’s toy model from 1992. Romer sets it up with a metaphor that is standard in economics, that of the factory (p. 67):
One of the great successes of neoclassical economics has been the elaboration and extension of the metaphor of the factory that is invoked by a production function. To be explicit about this image, recall the child's toy called the Play-Doh Fun Factory. To operate the Fun Factory, a child puts Play-Doh (a form of modeling compound) into the back of the toy and pushes on a plunger that applies pressure. The Play-Doh is extruded through an opening in the front of the toy. Depending on the particular die used to form the opening, out come solid Play-Doh rods, Play-Doh I-beams, or lengths of hollow Play-Doh pipe.

We use the Fun Factory model or something just like it to describe how capital (the Fun Factory) and labor (the child's strength) change the characteristics of goods, converting them from less valuable forms (lumps of modeling compound) into more valuable forms (lengths of pipe).
But Romer isn’t interested in the production of physical goods. He’s interested in the production of ideas. The child’s chemistry set proves useful (p. 68):
Another child's toy is a chemistry set. For this discussion, the set can be represented as a collection of N jars, each containing a different chemical element. From the child's point of view, the excitement of this toy comes from trying to find some combination of the underlying chemicals that, when mixed together and heated, does something more impressive than change colors (explode, for example). In a set with N jars, there are 2^N–1 different mixtures of K elements, where K varies between 1 and N. (There are many more mixtures if we take account of the proportions in which ingredients can be mixed and the different pressures and temperatures that can be used during mixing.)


As N grows, what computer scientists refer to as the curse of dimensionality sets in. The number of possible mixtures grows exponentially with N, the dimension of this system. For a modestly large chemistry set, the number of possible mixtures is far too large for the toy manufacturer to have directly verified that no mixture is explosive. If N is equal to 100, there are about 10^30 different mixtures that an adventurous child could conceivably put in a test tube and hold over a flame. If every living person on earth (about 5 billion) had tried a different mixture each second since the universe began (no more than 20 billion years ago), we would still have tested less than 1 percent of all the possible combinations. […]
Continuing on, Romer observes (pp. 68-69):
The potential for continued economic growth comes from the vast search space that we can explore. The curse of dimensionality is, for economic purposes, a remarkable blessing. To appreciate the potential for discovery, one need only consider the possibility that an extremely small fraction of the large number of possible mixtures may be valuable.
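Romer’s arithmetic is easy to check. Here’s a minimal sketch in Python – mine, not Romer’s – that simply plugs in his numbers from p. 68:

```python
# Checking Romer's chemistry-set arithmetic (his numbers, my sketch).
N = 100                                # jars in the chemistry set
mixtures = 2**N - 1                    # nonempty subsets of the N elements
print(f"possible mixtures: {mixtures:.2e}")         # ~1.27e+30, i.e. "about 10^30"

people = 5e9                           # world population Romer assumes
seconds = 20e9 * 365.25 * 24 * 3600    # seconds in 20 billion years
tested = people * seconds              # one mixture per person per second
print(f"fraction tested: {tested / mixtures:.2%}")  # ~0.25%, "less than 1 percent"
```
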
What interests me is that this vast search space of possibilities is mostly empty of useful ideas, combinations of chemical elements in this case. Moreover I want to distinguish between the space itself and the strategy we have for searching that space. That strategy is based on our theory or theories about that space, or domain as I sometimes like to call it. I want to examine the relationship between the world, on the one hand, and our ideas of it on the other. To do so I must enter the den of the metaphysician, to borrow a phrase from Warren McCulloch.

On the relationship between the world and our ideas of it

The background space in the following simple diagram represents the space of potential chemical combinations (in this case) while the black dots indicate the useful ones.

Figure 1: Useful ideas in the field of all possible ideas
Notice that some areas are more sparsely populated than others. Now let us superimpose a mental map on it – the orange grid is that map:

Figure 2: Our theory of the domain (in orange)
We can now see that while, yes, some regions of the territory are less populated than others, on the whole the mental map – our theory of the domain – is highly congruent with the territory. Most “bins” in our map contain a useful idea, though it isn’t in the same place in each bin. If we search each bin, then, we will very likely find a useful idea. Some searches will take longer than others, but most searches will be rewarded.
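
Here’s a toy simulation of that claim, a sketch of my own – the grid size, the number of ideas, and the clustering are arbitrary assumptions, nothing from Romer or Bloom. It scatters “useful ideas” unevenly over a unit square, overlays a grid as the map, and counts how many bins hold at least one idea:

```python
import random

# Toy version of Figures 1 and 2 (my sketch; all parameters are arbitrary):
# unevenly scattered dots ("useful ideas") with an overlaid grid ("the map").
random.seed(42)
G, IDEAS = 10, 250   # G x G grid, IDEAS dots

# Uneven density: half the ideas clustered in one quadrant, the rest spread out.
points = [(random.random() * 0.5, random.random() * 0.5) for _ in range(IDEAS // 2)]
points += [(random.random(), random.random()) for _ in range(IDEAS - len(points))]

# A search of a bin is rewarded if the bin contains at least one useful idea.
occupied = {(int(x * G), int(y * G)) for x, y in points}
print(f"{len(occupied)} of {G * G} bins hold at least one useful idea")
# With these parameters most bins are occupied: search any bin and you will
# probably be rewarded. That is what congruence of map and territory buys us.
```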

Leaves and light


Wednesday, August 12, 2020

REALM, a new open-source method for language model pre-training

Prestige bias in cultural transmission

Reflections on and current status of my GPT-3 project

As I noted at the beginning of the month, the project began with a long comment posted to Marginal Revolution on July 19 [see below for a copy of that comment]. My original idea was to elaborate on that comment in a series of posts. It soon became clear, however, that things were going to be more complicated.

On August 5 I issued the first working paper in the project, with the expectation that there would be a second one. That first paper is entitled, GPT-3: Waterloo or Rubicon? Here be Dragons. At that time I expected to issue a second working paper to cover the rest of the material from that original comment.

And then things became even more complicated. What happened is that I started thinking over the material in the first working paper and reading more about GPT-3. It was like when I first came to terms with topic models. The hardcore technical literature is a bit beyond me, but the surrounding explanatory material wasn’t doing it for me. For a couple of days I didn’t know whether I was coming or going.

A Plan, three more papers

Now things have cleared up. I think. At any rate I now have a plan, which is to issue not one, but three more working papers, two shorter ones and a longer one. The longer one will cover the rest of the material from the original comment below while the other two will go into greater depth on specific issues. This is the plan:
GPT-3: The Star Trek computer, and beyond
GPT-3: Bounding the Space, toward a theory of minds
Why GPT-X will fail in creating literature
I’ve been working on all three, but my current plan is to issue the future-oriented one – Star Trek computer – next, thereby covering the full scope of that original comment. I will issue the other two papers as they are ready.

But who knows, things may change. There’s no telling where a mind will go once it’s got the scent. Here are brief notes on the other two working papers.

GPT-3: Bounding the Space, toward a theory of minds

This is really re-working and expanding on two sections from the first paper: 3. The brain, the mind, and GPT-3: Dimensions and conceptual spaces, and 5. Engineered intelligence at liberty in the world. I’ll be making sense out of this:
1. Symbolic AI: Construct a model of what the mind’s doing and run that model.

2. Machine learning: Construct a learning architecture (e.g. GPT-3), feed it piles of examples, and let it figure out what’s going on inside.

3. The question I’ve been getting to: What’s the world have to be like in order for 2 to work at all?

4. And 3 reflects back on 1: If THAT’s how the world is, what kind of (symbolic) model will produce output such that 2 will work.

And so forth
The third proposition is particularly important. That’s where the semantics of Peter Gärdenfors comes into play.

Why GPT-X will fail in creating literature

There’s a pro forma discussion of that issue: GPT-3 is not human, doesn’t have emotion, and is not creative. I suppose we could think of that as Commander Data’s problem, since he was forever fretting about it.

I suppose it’s true enough. But it doesn’t interest me. I have a much narrower and more specific issue in mind: GPT-3 can’t do rhyme and neither will GPT-X. It’s a limitation that is inherent in the technology. Rhyme is a feature of how a text sounds, and the text base on which GPT-3 is built doesn’t have sound in it, nor is it at all obvious how that deficiency can be remedied.
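
To see what’s missing, consider what a rhyme check actually requires. Here’s a toy sketch of my own – the pronunciations are hand-entered in ARPAbet style, the kind of data a pronouncing dictionary such as CMUdict supplies, and none of it comes from GPT-3. Rhyme is decided from the last stressed vowel onward, and spelling, which is all the training text provides, is a poor proxy for that:

```python
# Toy rhyme checker (my sketch): a few hand-entered ARPAbet-style
# pronunciations, because spelling alone won't tell you what rhymes.
PHONES = {
    "cough":   ["K", "AO1", "F"],
    "dough":   ["D", "OW1"],
    "through": ["TH", "R", "UW1"],
    "blue":    ["B", "L", "UW1"],
}

def rhyme_part(phones):
    """Phonemes from the last stressed vowel onward (digits mark stress)."""
    for i in range(len(phones) - 1, -1, -1):
        if phones[i][-1] in "12":
            return tuple(phones[i:])
    return tuple(phones)

def rhymes(a, b):
    return rhyme_part(PHONES[a]) == rhyme_part(PHONES[b])

print(rhymes("through", "blue"))  # True:  different spellings, same sound
print(rhymes("cough", "dough"))   # False: near-identical spellings, different sounds
```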

If it can’t do rhyme, then it can’t do meter either. Nor can it do prose rhythm, which also depends, if not directly on sound, certainly on timing. Without these, GPT-X cannot do literature. At least it can’t do good literature, much less great literature. Oh, it can crank out wacky language by the bucketful, but that’s not what poetry is, and it can tell stories too. But stories are only a beginning point, not the end.

Think about it: Computers play the best chess in the world, Go too. But it’s not at all clear whether or not they’ll ever produce anything more than mediocre literature. And that, I suppose, brings us back to the fact that computers aren’t human.

And they aren’t. They’re computers.

Who are these guys and what do they have to do with Jersey City's Berry Lane Skate Park?

Zero- and few-shot learning: Transformers on the cheap

The problematics of text and form and the transmutation zone between symbol and thought [once more into the methodological breach]

On the one hand we have those simple diagrams Mark Rose drew back in 1972, the ones he apologized for. Why?

Because they’re over the line, ever so slightly. But over, definitely over.

On the other hand, we have these recurring discussions: What is the text? What is the form? They go nowhere; that is, there is no effort to reach consensus on what those things are. On the contrary, these occasions serve to keep the questions alive: Ah, we don’t agree, but that’s OK. Maybe someday we will. Someday.

Someday my ass. The latent purpose of these challenge sessions, if you will, is to make sure that that day never comes. As long as the discipline can continue asking these questions, it can remain blind to its founding shortcomings, the blindness that allows it to pursue interpretation with single-minded vigor.

But what’s that have to do with those poor diagrams of Mark Rose? They blow the whistle on the whole thing. They follow from a simple conception of the text – as a string of words – and a simple conception of form – as the arrangement of items in that string. The profession can’t let such heresy spread!

Let’s take a closer look.

Mark Rose apologizes for simple diagrams

Mark Rose’s slender volume, Shakespearean Design [1], has been on my mind off and on for several years, as it speaks to two hobby horses of mine: 1) literary form and 2) its description. Rose is interested in the design of Shakespeare’s plays, not in interpreting them, not in telling us what they mean.

This is about an incidental remark in the Preface, something of an apology (p. viii):
A critic attempting to talk concretely about Shakespearean structure has two choices. He can create an artificial language of his own, which has the advantage of precision; or he can make do with whatever words seem most useful at each stage in the argument, which has the advantage of comprehensibility. In general, I have chosen the latter course.

The little charts and diagrams may initially give a false impression. I included these charts only reluctantly, deciding that, inelegant as they are, they provide an economical way of making certain matters clear. The numbers, usually line totals, sprinkled throughout may also give a false impression of exactness. I indicate line totals only to give a rough idea of the general proportions of a particular scene or segment.
Here are two of those offending diagrams, from the Hamlet chapter, pages 97 and 103 respectively; you can see the line counts in parentheses:

Rose 97

Rose 113

What’s the fuss about? And there aren’t many of them. These are simple diagrams and, yes, without them, Rose’s accounts would be more difficult to understand. Indeed, without them, a reader would be tempted to sketch their own diagrams on convenient scraps of paper.

I figure Rose’s misgivings about the numbers are about humanistic ideology (we don’t do numbers), though his use of numbers is, as he says, slight. His willies about the diagrams may be that as well, but I think there’s something more there. The diagrams themselves are problematic simply because they ARE diagrams. They intrude too deeply into the inner workings of the humanistic mind. It’s like throwing a spanner into a machine; it gums up the works.

It’s one thing to have pictures in an illustrated edition of, say, Hamlet, pictures depicting a scene in the play. That’s fine, for it’s consistent with the narrative flow. And it’s fine to have illustrations in, say, an article about the Elizabethan theatre, where you need to depict the stage layout or the relationship between the stage and the seating. Such illustrations are consistent with the ongoing flow of thought.

Those diagrams are different. It’s not that they’re inconsistent with the flow of thought. They’re not. They’re essential to it. But they indicate that this kind of thinking is not quite kosher. Why not? How do these simple diagrams intrude on the humanistic mind while the more elaborate images mentioned in the previous paragraph are fine?

The problematics of text and form

The concepts of text and form are central to literary criticism. Texts are the things we study and form is what somehow makes them special. And yet we have no consensus views of either concept. In a sense, we don’t know what we’re talking about, though we talk about it incessantly.

The following statement is typical. It is by Frances Ferguson and John Brenkman, introducing papers from the 2013 English Institute on form:
A second irony is that the recently renewed interest in questions of literary form has proved quite amorphous. Perhaps, though, that has been the predicament and vitality of the topic all along. Georg Lukács inaugurated modern literary theory with a collection of essays called Soul and Form, a title that would be impossible today unless it were for a critical reflection on jazz. Among theorists preoccupied with form, there is a recurrent conflict between nonformalist and formalist conceptions of form: Bakhtin as against Shklovsky; Jameson as against Frye; or the Barthes of S/Z over against the Barthes of “Introduction to the Structural Analysis of Narrative.” There is also a conflict, cutting across these competing methods, between form as a feature of literary works and form as constitutive of literary works. The New Critics are often the benchmark of formalism in American discussions, but they did very little to illuminate literary forms compared to the Russian Formalists or, say, Lévi-Strauss and Jakobson’s classic essay on Baudelaire’s “Les chats.” And yet even the surest markers of literary forms fail to define form when it comes to actual works. The form of the sonnet, for example, is readily defined by the number of lines and the stanza organization, but does that account for a particular sonnet’s form any more than a rectangle accounts for a painting’s form? Vertical for portraits, horizontal for landscapes! And, finally, is formalism itself based on the idea that literary works are purely form, or on the idea that the vocation of literary criticism lies in formalization, that is, in its capacity to create categories at a level of abstraction applicable to the widest variety of literary phenomena?
They end with this:
The polarity between Clark’s appeal to human-sensuous activity and Macpherson’s call for a conception of form that holds good even on the assumption of human extinction suggests the philosophical extremities to which the question of form gives rise. So, too, the polarity between the richly historical texture of formal analysis in Jones, Martin, and Butterfield and the radically formalist analysis of Jarvis exemplifies how critical practice, not definition, is where the question of form is most fruitfully fought out.
So, this discipline for which the concept of form is central is still fighting over the concept after how many years? Fifty, a hundred?

Tuesday, August 11, 2020

Hot clouds over Hoboken

Sedentary hunter-gatherers built complex societies – "Recognizing this... has big implications for how we narrate history"

Mark Moffett discusses similar societies in The Human Swarm. See my post: Reading The Human Swarm 1: Hunter-Gatherers and the Plant Trap.

Monday, August 10, 2020

Robot researchers

Racial attitudes CAN change [Black GIs in the UK during WWII]

David Schindler, Mark Westcott, Shocking Racial Attitudes: Black G.I.s in Europe, The Review of Economic Studies, rdaa039, https://doi.org/10.1093/restud/rdaa039
Abstract: Can attitudes towards minorities, an important cultural trait, be changed? We show that the presence of African American soldiers in the U.K. during World War II reduced anti-minority prejudice, a result of the positive interactions which took place between soldiers and the local population. The change has been persistent: in locations in which more African American soldiers were posted there are fewer members of and voters for the U.K.’s leading far-right party, less implicit bias against blacks and fewer individuals professing racial prejudice, all measured around 2010. Our results point towards intergenerational transmission from parents to children as the most likely explanation.

City organic by the river