Saturday, December 15, 2018

Stagnation 1: The phenomenon and a simple-minded model with some remarks on search (pharmaceuticals) and process re-engineering (semiconductors)

Summary: Bloom, Jones, Van Reenen, and Webb (2018) have examined three cases of R&D productivity: 1) Moore’s Law in semiconductor production, 2) crop yields, and 3) drug discovery for cancer and heart disease. In each case R&D costs rise more rapidly than productivity does. I introduce an informal spatial model, the White House Easter egg hunt, as a way of thinking about the problem. Then I consider two of their cases and suggest real-world interpretations for that model. In the case of Moore’s Law we face process re-engineering costs imposed by the fact that the character of physical phenomena changes as scale decreases (with different laws coming into play in the quantum realm). In the case of drug discovery we’re up against search through a high-dimensional space sparsely populated in an irregular pattern. These two factors seem rather general and not specific to these particular cases.
* * * * *

I don’t remember just when I read about stagnation (in the production of new ideas), but it would have been on the Marginal Revolution blog and it would have been in a post by Tyler Cowen, who is deeply interested in the problem. Nor do I remember my reaction, but it was likely some version of “so what else is new?” Sometime late in the previous century I’d observed the same thing in the academic disciplines I know best, literary criticism and the cognitive sciences. The basic ideas were on the table by, say, the 1970s and 80s, and it’s been a matter of dotting i’s and crossing t’s since then. John Horgan had made a similar point across a number of scientific disciplines in his 1996 book, The End of Science. Cowen was simply informing me that it’s a very general problem.

The topic came up again in Cowen’s recent conversation with Paul Romer, a Nobel laureate with an interest in economic development, where they cited a recent and apparently rather thorough paper on the subject:
Nicholas Bloom, Charles I. Jones, John Van Reenen, and Michael Webb, Are Ideas Getting Harder to Find? March 5, 2018.
I decided to read the paper (which I shall identify as BJRW) and see if I had anything to add to the discussion.

I begin by stating some reservations about this effort; after all, I am not an economist. Then I summarize and abstract the main result of BJRW, followed by a simple, if not quite simple-minded, spatial model for the increasing difficulty of coming across new “ideas”. Next come some observations about two cases, pharmaceuticals and semiconductor chips, which I use to suggest real-world interpretations of the spatial model. I conclude with some brief remarks of a broadly historical character. In a second post I will address the larger question: Are we stuck here? Quick answer, no. In that post I will take an approach that is evolutionary in kind, thus treating the stagnation results as being about an episode in the socio-cultural evolution of humankind.

Note: You might want to pour yourself a drink as this is going to take a while.

Calibration

That I am not an economist imposes (at least) two limitations on my understanding of this material, BJRW in particular: 1) it’s a technical subject and I don’t know the math, and 2) it’s embedded in a disciplinary history that I don’t know. On the first, that’s not a new kind of problem; I’ve been facing it my entire career. I’m used to reading technical material and I’m happy to take BJRW as technically competent and reasonable. And I ask questions; thus as I’ve been thinking this through I’ve queried Tyler Cowen and he’s answered. That’s been helpful. He is not, of course, responsible for any shortcomings in my understanding or exposition.

In the video, Paul and Tyler point out some problems with the narrative of ideas and increasing returns. It seems to imply that economic growth will just get faster and faster, as we have more people combining more ideas. But measured economic growth, while still positive, appears to have slowed in recent decades. Also, since ideas can be used without being used up, why are some countries so backward in their use of ideas?

The answers to these sorts of questions take us out of the realm of typical economic factors. At one point, Paul quotes Robert Solow as saying that these discussions end up in a “blaze of amateur sociology.”

Solow’s name comes up in any discussion of economic growth. In the Solow model, the economic driver of productivity is savings. But there is a “residual” driver of economic growth, that Solow equates to pure advances in technology.
My own view is that intangible factors are important determinants of economic outcomes. I believe that they have become increasingly important in recent years. This limits our ability to explain economic growth on the basis of the measurable components of the Solow model.
There’s some of the context: economic growth, not just economic growth in the United States, but economic growth in general, in Africa, the Far East, the whole world and, why not, throughout history.

At the center of this interest in economic growth is the idea of exponential growth. I suppose the paradigm example there would be something we all learned about in middle school, compound interest in a savings account. Each month you put a relatively small amount in a savings account. As you are doing this the bank pays you interest on your money, which also goes into the account. Thus it pays interest upon interest; thus your interest is compounded. You do this year after year, decade upon decade and in time all the money in the world miraculously ends up in your savings account – well, maybe not all, but you get the idea.
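That middle-school arithmetic can be sketched in a few lines of Python. The deposit amount, interest rate, and time span below are made up purely for illustration:

```python
def compound_balance(monthly_deposit, annual_rate, years):
    """Balance after depositing monthly_deposit each month,
    with interest compounded monthly at annual_rate."""
    balance = 0.0
    monthly_rate = annual_rate / 12
    for _ in range(years * 12):
        balance += monthly_deposit   # this month's deposit
        balance *= 1 + monthly_rate  # interest on everything, past interest included
    return balance

# $100 a month at 5% for 40 years: $48,000 in deposits
# grows to roughly three times that.
print(round(compound_balance(100, 0.05, 40)))
```

Most of the final balance is interest on interest, which is the whole point: the growth compounds.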

That, put rather crudely, seems to be what’s hanging over this research. It’s haunted by the spirit of compound interest. That’s why there’s a reluctance to accept research results, such as those in BJRW, at face value, as empirical statements about the nature of the world. If the world in general doesn’t reward hard work in the way that banks reward regular savings, there must be some reason. What is it?

As for those blazes of amateur sociology, well yes, discussions of ideas and culture can seem like that and, alas, that’s what some of them are. On the other hand, sticking fudge factors into your models and giving them technical names doesn’t accomplish much even in the cases where you can supply empirically derived numerical values for the fudge. What’s your pleasure, the Scylla of amateur sociology or the Charybdis of scientistic fudge? Perhaps, if we’re careful and patient we can avoid both, though it’s tricky and there are no guarantees.

Stagnation, the problem

As I see it the economists have a black box which we can call, say, the innovation engine. They don’t really know what goes on inside, but they are sure there are ideas in there, whatever they are. They’ve got some fairly good ideas about how this innovation engine fits into the economy and, in particular, the role it plays in economic development. And they have ways of measuring things that go into the box and things that come out. In one case considered by BJRW what comes out is element density on computer chips, in another case it is crop yields, and in a third case it’s extra years lived. They take these outputs as proxy measures of those ideas, whatever ideas are. They also measure what goes into the box in each of those three cases. And they also consider some more aggregate measures.

In every case, and going back through several decades of research, we have the same result: while the output continues to rise year after year (a good thing), the input rises at a much steeper rate (not so good). Idea generation inside the box doesn’t seem to be a direct linear function of input to the box. So the economists revise the model of what’s happening inside the box by adding an elasticity factor β, which accounts for diminishing returns. BJRW estimated β for each of their cases.
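A toy illustration, not BJRW’s actual specification: if we pretend idea output scales as input raised to an elasticity below one, the wedge between input growth and output growth falls out immediately. The value 0.5 here is invented for the example:

```python
beta = 0.5  # hypothetical elasticity, chosen purely for illustration

def idea_output(research_input, beta=beta):
    """Toy production function: output grows as input ** beta."""
    return research_input ** beta

# With beta = 0.5, quadrupling the input only doubles the output:
print(idea_output(4.0) / idea_output(1.0))
```

The smaller β is, the faster input must grow just to keep output rising at a steady clip.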

What we’d really like to know, though, is what’s behind β. For it is in effect a measure of something that’s going on inside that box that we don’t understand. What?

But the big issue, the really big one, the one that is motivating this research, is this: Are we stuck here? Is economic development (growth) going to become harder and harder and harder until – The horror! the horror! – it grinds to a halt because it’s too expensive? No, I suspect not. Before that happens the world is going to change. It’s done so in the past and there’s no reason to believe that the process will stop. I’ll save that one for the next post, though attentive readers of New Savanna already know how I’m going to approach it, because we’ve been there before.

What about ideas?

Measuring ideas is of course difficult. Memeticists have discussed the problem, to no avail. BJRW take a standard approach: “ideas are proportional improvements in productivity” (p. 5). They are in effect treating ideas as atomic units having no particular character and no relationships or dependencies among one another. I have no reason to believe that they actually believe that, for even the most casual reflection on their intellectual activities would tell them otherwise. But it’s a facilitating assumption. It allows them to get on with their research.

As we’ve already noted, this line of investigation has been telling us the same thing, year after year: good ideas are getting harder and harder to find. What is it about ideas, or about ideas and their relationship with the world, that makes things that way?

I propose to develop some intuitions by starting with a crude analogy: the White House Easter egg hunt. I haven’t got the foggiest idea of how the hunt is actually set up and conducted, but for our purposes let us assume that the lawn is divided into a grid of one-meter squares and an egg is hidden somewhere in each square. Let’s assume that each egg is an idea. In that situation it should take roughly the same amount of effort to find each egg and the number of eggs you find should be roughly proportional to the amount of time and effort you spend looking for them.

Call that the First, or Uniform, Model. That, of course, is not what the research tells us. What distribution of eggs would give us the results we see in the literature? Let’s imagine a Second, or Uneven, Model.

Let’s imagine that somewhere near the center of the lawn is a region we’ll simply call The Beginning. We’ll hide the eggs fairly densely near The Beginning, with the density dropping off as we get further from The Beginning (TB). At this stage of the argument I have no principled way to transport the children from their locations outside the White House to TB without having to cross the lawn, so let us assume they have access to a Star Trek transporter to position them there at the start of the hunt – I’ll return to this peculiar aspect of the model at the very end. What happens? At first they find eggs quickly and easily. But as the hunt goes on it gets harder and harder. At some point the hunt may get so difficult that children band together in teams and agree to share eggs among the team members.

Now our little egg hunt is yielding results like those in the development literature. Why? Because I rigged it that way. The increased distance between the eggs is, in effect, that elasticity factor, β, that the economists added to the growth model to account for diminishing returns.
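The rigging can be made concrete with a quick simulation, all numbers invented: a one-dimensional lawn where egg density falls off as 1/(1+x) with distance x from The Beginning, and a hunter who sweeps outward at a constant pace.

```python
import random

random.seed(0)  # fixed seed so the run is repeatable

def hide_eggs(n_eggs, max_x):
    """Place eggs on [0, max_x), denser near The Beginning (x = 0)."""
    eggs = []
    while len(eggs) < n_eggs:
        x = random.uniform(0, max_x)
        if random.random() < 1 / (1 + x):  # acceptance falls with distance
            eggs.append(x)
    return sorted(eggs)

eggs = hide_eggs(2000, 100)

# Sweep outward; count eggs found in each of ten equal-effort slices.
slices = [sum(1 for e in eggs if 10 * i <= e < 10 * (i + 1)) for i in range(10)]
print(slices)  # the counts fall off: early effort is far more productive
```

Equal slices of effort yield steadily fewer eggs, which is the shape of the BJRW result.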

Let’s make a little adjustment and then move on. BJRW need a more accurate title for their paper. I suggest “Are Effective Ideas Getting Harder to Find?” Ideas are a dime a dozen; it’s the effective ones, the ones that make fruitful connections with the world, that are hard to find and getting harder. What BJRW are in effect measuring might better be called ideational units or ideational effort. Ideas themselves, whatever they are (you knew I was going to say that, didn’t you?), remain mysterious. BJRW are discovering that it takes more and more effort to gain deeper purchase on the world. The world does not simply open itself to our inspection. We have to work for understanding.

Discourse about the relationship between our ideas and the world has traditionally been the province of the philosophical study of epistemology. The effort required to create effective ideas has not, so far as I know, been included in that study. Now it is.

And that gives this discussion a distinctly philosophical cast, one in the mold of Bruno Latour. He began his career, as you may know, as an anthropologist of scientific practice. From that work he concluded that if you really want to understand just how scientific thought informs us of the world, you have to understand and take into account how that thinking is constructed; the apparatus, the data, the analysis and abstraction, all of it is part of the process. All of that is not merely incidental to and only contingently related to epistemology but rather is internal to it.

BJRW are measuring the productivity of the process of knowledge acquisition. To be sure, they aren’t working in the realm of pure science. The science they examine is very much applied (nor do they even call it science). But if you want pure science, look at the costs involved in, for example, the physics of the very small or the astronomy of the very large. We have huge scientific instruments costing billions of dollars that are designed, built, maintained and operated by tens of thousands of people. Consider, for example, the relationship between the cost of CERN’s Large Hadron Collider and the results it has produced. We have the 2012 discovery of the Higgs boson, proposed in 1964. What else?

* * * * *

Now I want to offer two interpretations of the Uneven version of the Easter egg model: one where the governing R&D process is search through a sparse space and one where the governing process is process re-engineering in the face of changes in physical scale. For that I will take a quick and informal look at two of the case studies BJRW presented, 1) the effort required to sustain Moore’s Law growth in semiconductors (re-engineering) and 2) the effort required to effect mortality improvements through the discovery of new drugs for cancer and heart disease (search). I note in passing that both search and changes in physical scale are general enough that they factor into many manufacturing processes, so the suggestions I offer may have general applicability.

I have no particular expertise in either area, so I’m mostly making this up out of general knowledge and an excursion or two into Wikipedia. In the case of pharmaceuticals I consulted a friend of mine, Rich Fritzson, who did IT work in the industry, though his experience is now a decade out of date. I of course take responsibility for the statements in this post.

Process re-engineering in chip fabrication

Moore’s Law, as you know, is an empirical generalization stated by Gordon Moore in a 1965 paper where he asserted that the number of components on an integrated circuit doubles yearly; a decade later he revised the generalization so the doubling period is two years. Ever since then people have been wondering when the law (that is, the generalization) would fail. So far it hasn’t, but it has taken more and more effort to achieve the doubling. Why?
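The generalization itself is simple arithmetic, which a few lines of Python make plain. The only real number below is the roughly 2,300 transistors of the 1971 Intel 4004; the rest is just the doubling rule run forward:

```python
def moores_law(n0, years, doubling_period=2):
    """Component count after `years`, doubling every `doubling_period` years."""
    return n0 * 2 ** (years / doubling_period)

# Starting from ~2,300 transistors in 1971, two-year doubling predicts
# hundreds of millions of transistors by 2007, roughly what happened.
print(int(moores_law(2300, 2007 - 1971)))
```

The point of the exercise: the output side of the law is a clean exponential, while, as BJRW show, the input side has been growing faster still.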

What we’re doing is making circuits smaller and smaller. The logical structure of the circuits is scale-free. And the instruction set of a CPU remains the same regardless of the chip’s size. No doubt the physical layout of chips has to be adjusted as the circuits get smaller and smaller. But I suspect that the major R&D costs come from figuring out the new tooling and processes.

Intuitively it doesn’t seem particularly mysterious that making things smaller and smaller should impose difficulties. Let’s take a quick example from the everyday world. My father was a skilled woodworker who had a well-equipped shop. At one time he made a table for my electric trains. I don’t know the exact dimensions, but I figure the top was roughly the size of two 4 ft. by 8 ft. sheets of plywood arranged in an L configuration. It stood, say, 30 inches off the floor, with legs and supports made from, say, ordinary 2 by 4 lumber. This wasn’t fine ‘furniture grade’ carpentry; but it was sturdy, functional, and well made. He also made my sister a playpen for her dolls. It was furniture grade. It was also somewhat smaller than that layout table, say 30 inches square by 18 inches high. The frame and bottom were hinged so you could fold it up. And on the slats he cut the letters of the alphabet; that is, he sawed through the wood so as to cut out the shapes of the letters.

I have no idea how long either project took him, but if one took longer than the other, I’d say it was the playpen. It may have had more parts, but the difficult and time-consuming part of the project was doing those letter cut-outs. I don’t know whether he used a jig-saw or a coping saw – he was skilled with both – but in either case it was painstaking work.

Still and all, these two projects are at roughly the same scale and, except for the coping saw or the jig-saw, use the same tool set.

I’ve got a cousin who makes museum-class ship models, mostly of wooden sailing ships. While the finished models may range in length from less than a foot to three feet or more, they consist of hundreds of small parts. While some of my cousin’s tool set overlaps with my father's, because most of his work is so small, he needs different tools. In addition to standard woodworking tools he uses dental tools and jeweler’s tools, makes custom jigs, and has fittings custom cast. At times he has had to undertake R&D to figure out how to make things.

We’re now getting into the range where scale imposes noticeable costs. Of course, in these cases we’re talking mostly about manufacturing costs, not about developing the tools and processes for manufacturing. But that, developing the process, is what I figure much of that semiconductor R&D is about. Let me suggest that if we lived in a scale-free universe this wouldn’t matter. You just take the same process and shrink it down, easy as pie.

Scale-free? In a scale-free universe there would be no essential difference between moving at, say, 10,000 miles per second and 200,000 miles per second. Sure, the higher speed takes more energy and you wouldn’t want to move at either speed in the earth’s atmosphere. But that’s it. However, our universe is not scale-free. The speed of light sets an upper limit on how fast you can move, roughly 186,000 miles per second, and 200,000 miles per second is above that limit. Similarly, in a scale-free universe atoms would be miniature solar systems. But they’re not at all like that. The laws of the quantum world are different from those of the macroworld. If the world were scale-free, then a human one inch tall would be just like a six-footer, only shorter, and a ten-foot bumblebee would be just like a regular one, only larger. The world isn’t like that. Scale makes a difference.

And that’s what chip makers are up against, physical scale. In a scale-free universe you simply build a new smaller tool set on the model of your first and you’re up and running. [Maybe have some 1 millimeter humans do the process design and tool fabrication.] That won’t do in our universe. The tool set and processes have to be re-thought. Wikipedia mentions an increase in fabrication times from six to eight weeks to 11-13 weeks. That implies more process steps and likely more tooling. All of that has to be developed. Just why development costs go up exponentially while size increments decrease linearly, that I don’t know. That it should be that way, though, doesn’t seem strange. That’s just how the world is.

Returning to the Easter egg hunt as a crude model, if the universe were scale-free the development of greater chip densities would proceed according to the Uniform Model. But the universe isn’t scale-free, so instead development becomes more expensive in accordance with the Uneven Model where the increasing distance between the eggs corresponds to the ever smaller scale of circuit elements. We can think of that special beginning zone (TB, The Beginning) as being the world of ordinary electrical circuits of the sort in radios and televisions based on old tube technology, which was also employed in the earliest digital computers.

Searching for drugs

The pharmaceutical industry is different. There the big problem is to search through zillions of substances that are potentially useful as drugs to find just that very small handful that are therapeutically effective. I have no idea about the ratio between the number of items considered and the number that make it into production, but it must be horrendous. Does the increasing cost of coming up with effective treatments reflect a worsening of the ratio, an increase in the cost of testing items, or both? I don’t know.

Let’s think about this a bit more. In order to conduct a search you need some kind of strategy, some kind of map. If you had a really good map you wouldn’t need to search at all; you’d just go to X and you’ve got your drug. But what do I mean by map? I just mean some ordering, some kind of molecule space if you will. Obviously we’ve got some understanding of the substances that are likely to be useful; we’ve got a well-developed organic chemistry. We’ve got some ordering of these substances into families and affinity groups, and some sense of where these things sit in what we will call molecule space – think of it as a high-dimensional space that is unevenly and on the whole sparsely populated. Note that the structure of this molecule space is a function of our best current understanding, which may not be congruent with the deepest physical principles actually involved.

Unfortunately this ordering doesn’t pinpoint which chemicals would make effective interventions in some disease pathway. Just as useful mineral substances are not evenly and perspicuously distributed in the earth’s crust – think of crude oil, iron ore, gold, what have you – so useful molecules are not evenly distributed in molecule space. So we have to search it. And that takes time.
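The cost of that search can be sketched directly. In the crude model below, every number invented, a fraction p of candidates in the current neighborhood of molecule space are “hits”; blind screening then costs an expected 1/p candidates per hit, so cost per discovery climbs as the neighborhood gets sparser:

```python
import random

random.seed(1)  # fixed seed for repeatability

def trials_until_hit(p):
    """Screen candidates at random until one is a hit (probability p each)."""
    trials = 0
    while True:
        trials += 1
        if random.random() < p:
            return trials

def mean_cost(p, runs=20000):
    """Average screening cost per hit; comes out near 1 / p."""
    return sum(trials_until_hit(p) for _ in range(runs)) / runs

print(mean_cost(0.1))   # dense neighborhood: roughly 10 screens per hit
print(mean_cost(0.01))  # sparse neighborhood: roughly 100 screens per hit
```

A tenfold drop in hit density means a tenfold rise in screening cost, with no change at all in the skill of the searchers.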

This brings me back to BJRW. They take PubMed publication statistics as their measure of research input while years of life saved is their measure of output. They did this for all cancers, for breast cancer alone, and for heart disease. They found that research productivity dropped between 1975 and 2006 in the three cases. However, “between 1975 and the mid-1980s, research productivity for these two cancer research categories increased quite substantially” before then dropping off (p. 35). Thus “the production function for new ideas is obviously complicated and heterogeneous.” In the interpretation I am offering here, this suggests that in the case of cancer, the search for drugs started in a region of the space that was relatively highly populated and so “strikes” came more quickly. Once research had moved through that region strikes became harder to come by.

Now let’s think back to our simple little Easter egg model. If we had an effective map of molecule space, drug discovery would proceed according to the Uniform Model. We don’t have such a map. So we have to search the space, and discovery proceeds according to a modified version of the Uneven Model. While on the whole the density of eggs decreases as we get farther from The Beginning, there are local increases in density scattered about. As for The Beginning, let’s say it corresponds roughly to the pharmacopeia at the beginning of the 20th century.

Some informal observations about our ancient roots

While the literature on economic stagnation deals with relatively recent history, the cases considered in BJRW have very old roots, comparable to those philosophical ideas originating in footnotes to Plato, if not older. The pharmaceutical industry is recent but we’ve been ingesting herbs, nostrums, medicinals, and psychedelics since, if not forever, certainly before recorded history, long before. And so it is with agriculture (their third case, which I’ve left unexamined), which dates back 5,000 to 10,000 years depending on your criteria; and before that we were of course gathering plant stuffs.

The semiconductor industry is much more recent, dating from after World War II. Silicon is the most common substrate used for chip wafers. And we have been fabricating things out of silicon-rich stone, silicon being among the commonest elements in rocks, for over two million years. I’m thinking of course of those old hand axes, as they’re called (we don’t really know what they were for), which are among the oldest evidence of our cultural life.

And THIS, more or less, is what I really had in mind in the Uneven Easter egg model with that special Beginning zone where the eggs were most densely hidden and thus discovered with the least effort. The industries these economists have been examining are all relatively recent. They’ve got a history behind them. It took us quite a while to get there.

That is, we evolved with certain capacities for perception and action, capacities that allowed us to act effectively in the world. Those capacities allowed us to find and gather food, find interesting and useful drugs, and fabricate various things. That’s the true Beginning Zone. Through a long process of cultural evolution we increased our capacities in all areas and thus moved outward from that beginning. But those increases in capacity were not free. We’ve had to pay for them in ever increasing time and effort.

That’s where we’ll pick things up in the next post. I have no intention of telling that story in any great detail. But there is a way of moving through it quickly and in a way that should motivate my conviction that, no, we’re not mired in a slough of ever-decreasing productivity. As Cowen has observed in Stubborn Attachments, and as others have observed as well, there have been a relatively small number of major transformations of human life in our history. I will review those transformations in a later post and offer some speculations about the next one.

* * * * *

See this post, Innovation, stagnation, and the construction of ideas and conceptual systems, from November 18, 2018, where I discuss an article by Patrick Collison and Michael Nielsen, “Science Is Getting Less Bang for Its Buck” (The Atlantic, Nov 16, 2018).
