
Sunday, November 18, 2018

Innovation, stagnation, and the construction of ideas and conceptual systems

Patrick Collison and Michael Nielsen recently published an article that’s been getting a lot of attention in this neck of the woods, “Science Is Getting Less Bang for Its Buck” (The Atlantic, Nov 16, 2018). The purpose of this post is to set their idea in the context of ideas about cognitive evolution in culture that David Hays and I have developed.

I’ve addressed the issue of stagnation in some previous posts. There is a short post from 2014, “Why has progress slowed down?”, where I talk of Roman numerals and their limitations. More recently there is an appendix to “Notes Toward a Naturalist Cultural History (with a new addendum on the paradoxical stagnation of our era)”. All of this speculation takes place, as I’ve indicated, within the general account of the cultural evolution of successive “ranks” of cognitive systems that David Hays and I developed between the mid-1970s and 1990s. I’ve written a general overview of that work HERE; the fundamental paper is “The Evolution of Cognition” (Journal of Social and Behavioral Structures 13(4), 297-320, 1990).

First I comment on a metaphor Collison and Nielsen introduce, that of geographic exploration. Then I offer a metaphor of my own, that of physical construction. I conclude by extending that metaphor to the construction of conceptual systems.

The metaphor of exploration

In developing their argument they offer a metaphor which I’ve used to a similar end:
Suppose we think of science—the exploration of nature—as similar to the exploration of a new continent. In the early days, little is known. Explorers set out and discover major new features with ease. But gradually they fill in knowledge of the new continent. To make significant discoveries explorers must go to ever-more-remote areas, under ever-more-difficult conditions. Exploration gets harder. In this view, science is a limited frontier, requiring ever more effort to “fill in the map.” One day the map will be near-complete, and science will largely be exhausted. In this view, any increase in the difficulty of discovery is intrinsic to the structure of scientific knowledge itself.
However, I would stop with “Exploration gets harder” and elaborate a bit on what happens once initial exploration is complete: pioneering settlers move in, communities are established, the land becomes more densely settled, and so forth. That’s a matter of detail. The real issue, though, is to move beyond the metaphor and talk directly about ideas and innovation.

I want to build up to that. But for the moment let’s stay with Collison and Nielsen, who continue with this paragraph:
An archetype for this point of view comes from fundamental physics, where many people have been entranced by the search for a “theory of everything,” a theory explaining all the fundamental particles and forces we see in the world. We can only discover such a theory once. And if you think that’s the primary goal of science, then it is indeed a limited frontier.
Are we discovering theories or constructing them? “Discovering” implies that they’re out there independent of us and all we have to do is walk the right path, turn the right corner, and there it will be, the theory. “Construction” places the emphasis on our activities, our tools, materials, and concepts, whatever it is we use in constructing theories. But these need not be opposed ideas. To extend the metaphor of exploration, it was impossible for us to discover the geography of the Moon’s far side until we’d constructed the means of investigating it. In this case discovery and construction go hand in hand. Without the proper tools, discovery is impossible.

They go on:
But there’s a different point of view, a point of view in which science is an endless frontier, where there are always new phenomena to be discovered, and major new questions to be answered. The possibility of an endless frontier is a consequence of an idea known as emergence. Consider, for example, water. It’s one thing to have equations describing the way a single molecule of water behaves. It’s quite another to understand why rainbows form in the sky, or the crashing of ocean waves, or the origins of the dirty snowballs in space that we call comets. All these are “water,” but at different levels of complexity. Each emerges out of the basic equations describing water, but who would ever have suspected from those equations something so intricate as a rainbow or the crashing of waves?
But how do you get from the equations for a single molecule of water to the equations for the crashing of ocean waves? I’m guessing that the mathematics is by no means self-evident, that quite a bit of construction is necessary. Where do the construction techniques come from?

The metaphor of construction

If you give a competent engineer a set of plans and a pile of materials, she should be able to determine whether or not the device can be built with those materials. Any number of things can be built with a given set of materials, and any given device can be constructed in various ways. But the possibilities are not endless. There must be a match between the materials and the device.

This is obvious enough in the case of material devices, whether they be relatively simple things like axes and clay pots, mechanical devices like a watch or a steam engine, or buildings of all shapes and sizes. I contend that the same is true for ideas of all kinds, but we have only a poor understanding of how ideas are constructed. So let’s continue with the physical world for just a bit.

What do you need to build a skyscraper? Well, of course, there are skyscrapers of all kinds and sizes. But it seems unlikely that we could construct even a small 10-story building out of adobe, or even out of wood – at least wood in its natural state as opposed to the various engineered wood materials that are now being made. And you can’t place a skyscraper just anyplace. The ground must be able to support the weight.

But it’s not just about materials and construction techniques. You also need elevators. They don’t play a role in holding the building up, but they make tall buildings functionally useful. When a building gets beyond six, seven, or eight stories or so, stairs become impractical. Not only does it take too much time to go up and down stairs in a tall building, but climbing stairs is physically challenging.

Twenty years ago I had a room on the 14th floor of a building. There was an emergency that required evacuation. Coming down 14 flights of stairs wasn’t bad but, for some reason, the elevators were unavailable when we were allowed back in. Climbing those 14 flights was a challenge. At the time I was an out-of-shape, middle-aged man; had I been in shape the climb wouldn’t have been so bad. Twenty years later I’m 50 pounds heavier and I’m not sure I could do the climb at all, at least not without stopping so often that it would take over an hour. Elevators eliminate that problem, one that simply doesn’t exist for lower buildings.

What other problems do skyscrapers present that don’t exist for lower buildings?

My larger point, though, is that conceptual systems are like these physical systems. They consist of parts of various kinds combined in various ways to perform functions. We just don’t know much about the nature of the parts and how they go together. But we know something.

Mercantilism, arithmetic, logarithms, and the clockwork universe

The Wikipedia tells me that mercantilism “was dominant in modernized parts of Europe from the 16th to the 18th centuries.” Fine. Wikipedia tells us that mercantilism “promotes Government regulation of a nation’s economy for the purpose of augmenting state power at the expense of rival national powers. Mercantilism includes a national economic policy aimed at accumulating monetary reserves through a positive balance of trade, especially of finished goods. Historically, such policies frequently led to war and also motivated colonial expansion.” Again, fine.

What made mercantilism possible? Lots of things, I presume. For example:
Mercantilism developed at a time of transition for the European economy. Isolated feudal estates were being replaced by centralized nation-states as the focus of power. Technological changes in shipping and the growth of urban centres led to a rapid increase in international trade. Mercantilism focused on how this trade could best aid the states. Another important change was the introduction of double-entry bookkeeping and modern accounting. This accounting made extremely clear the inflow and outflow of trade, contributing to the close scrutiny given to the balance of trade. Of course, the impact of the discovery of America cannot be ignored... New markets and new mines propelled foreign trade to previously inconceivable volumes, resulting in “the great upward movement in prices” and an increase in “the volume of merchant activity itself”.
There’s a lot of stuff in that one paragraph. I note the importance of double-entry bookkeeping. You can’t run a complex mercantile economy if you can’t keep track of your money.
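As an aside for readers who haven’t seen it in action, here is a minimal sketch, in Python, of what double-entry bookkeeping buys you; the accounts and amounts are invented for illustration, not drawn from any historical ledger. Every transaction is entered twice, once as a debit and once as a credit, so the ledger always balances and the inflows and outflows are laid out explicitly.

from collections import defaultdict

ledger = []  # list of (debit_account, credit_account, amount) entries

def record(debit, credit, amount):
    # Each transaction touches two accounts: one debited, one credited.
    ledger.append((debit, credit, amount))

# A merchant buys goods for cash, then sells them abroad at a profit.
record("inventory", "cash", 100)      # purchase: goods in, cash out
record("cash", "inventory", 100)      # sale: goods out, cash back in...
record("cash", "trading profit", 40)  # ...plus the gain on the sale

balances = defaultdict(int)
for debit, credit, amount in ledger:
    balances[debit] += amount
    balances[credit] -= amount

# Debits and credits cancel across the whole ledger, so the totals sum to zero.
assert sum(balances.values()) == 0
for account, balance in balances.items():
    print(account, balance)

Run it and cash ends up 40 ahead, offset by a 40 credit sitting in the trading-profit account; scale the idea up and the balance of trade becomes something a state can actually see and scrutinize.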

I note as well the discovery of America. Would it have been possible to exploit that discovery in a world where all calculation was done using Roman numerals? I suggest that it would have been very difficult.

Why?

Logarithms, common logarithms.

I have no idea just how the Vikings first made their way to the New World in the 10th century, but I’m pretty sure logarithms had nothing to do with it, as they weren’t discovered until the 17th century. Columbus made it to the New World without logarithms as well.

But tables of common logarithms became important to celestial navigation in the 17th century (Wikipedia), and accurate celestial navigation was necessary for the many long-distance sea voyages on which mercantilism depended. Common logarithms, as you know, are logarithms taken to the base 10. And that, in turn, implies the use of a counting system based on positional notation, which was only introduced to Europe in the early 13th century from the Arab world through Leonardo Fibonacci’s 1202 work, the Liber Abaci. Were these tables of logarithms absolutely necessary for celestial navigation? No. But they made it much easier and more accurate. That surely must have been a factor in the rise of mercantilism. How much of a factor I cannot say.
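Since the value of a printed log table may not be obvious to readers raised on calculators, here is a small illustration in Python of the basic trick; the numbers are arbitrary, and this is in no way a reconstruction of how a navigator actually worked a sight. With common (base-10) logarithms, a multiplication becomes two table lookups plus an addition, which is far faster, and far less error-prone, to do by hand.

import math

# A toy "table" of common logarithms, rounded to 4 places as a printed table might be.
log_table = {x: round(math.log10(x), 4) for x in (2.736, 5.914)}

# Multiply 2.736 * 5.914 directly...
direct = 2.736 * 5.914

# ...and by adding logarithms and taking the antilogarithm (10 ** sum).
via_logs = 10 ** (log_table[2.736] + log_table[5.914])

print(direct)    # 16.1807...
print(via_logs)  # agrees to about four significant figures

Chain a few dozen such multiplications and divisions together, as the navigational computations of the period did, and the savings in time and error are enormous.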

But the introduction of the Arabic notation, as it is called, was of far more general significance. As David Hays and I remarked in “The Evolution of Cognition”:
It is easy enough to see that algorithms were important in the eventual emergence of science, with all the calculations so required. But they are important on another score. For algorithms are the first purely informatic procedures which had been fully codified. Writing focused attention on language, but it never fully revealed the processes of language (we're still working on that). A thinker contemplating an algorithm can see the complete computational process, fully revealed.

The amazing thing about algorithmic calculation is that it always works. If two, or three, or four, people make the calculation, they all come up with the same answer. This is not true of non-algorithmic calculation, where procedures were developed on a case-by-case basis with no statements of general principles. In this situation some arithmeticians are going to get right answers more often than others, but no one can be sure of hitting on the right answer every time.

This ad hoc intellectual style, moreover, would make it almost impossible to sense the underlying integrity of the arithmetic system, to display its workings independently of the ingenious efforts of the arithmetician. The ancients were as interested in magical properties of numbers as in separating the odd from the even (Marrou 179-181). By interposing explicit procedures between the arithmetician and his numbers, algorithmic systems contribute to the intuition of a firm subject-object distinction. The world of algorithmic calculations is the same for all arithmeticians and is therefore essentially distinct from them. It is a self-contained universe of objects (numbers) and processes (the algorithms). The stage is now set for experimental science. Science presents us with a mechanistic world and adopts the experimental test as its way of maintaining objectivity. A theory is true if its conceptual mechanism (its "algorithm") suggests observations which are subsequently confirmed by different observers. Just as the results of calculation can be checked, so can theories.
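To make “fully codified” a little more concrete, here is a minimal sketch in Python, mine rather than anything from the paper, of the schoolbook addition algorithm on positional digits. The point is not the answer but the fact that the entire computational process is written out explicitly, so anyone who follows the steps gets the same result.

def add_positional(a_digits, b_digits, base=10):
    """Add two numbers given as lists of digits, least significant digit first."""
    result, carry = [], 0
    for i in range(max(len(a_digits), len(b_digits))):
        a = a_digits[i] if i < len(a_digits) else 0
        b = b_digits[i] if i < len(b_digits) else 0
        total = a + b + carry
        result.append(total % base)  # write down the current digit...
        carry = total // base        # ...and carry the rest to the next column
    if carry:
        result.append(carry)
    return result

# 479 + 864 = 1343, with digits stored least significant first.
print(add_positional([9, 7, 4], [4, 6, 8]))  # [3, 4, 3, 1]

There is nothing here for individual ingenuity to do; the procedure carries the whole burden, which is exactly the point of the passage above.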
We go on to remark:
The world of classical antiquity was altogether static. The glories of Greece were Platonic ideals and Euclidean geometry, Phidias's sculptures and marble temples. Although Mediterranean antiquity knew the wheel, it did not know mechanism. Water mills were tried, but not much used. Hero of Alexandria invented toys large and small with moving parts, but nothing practical came of them. Historians generally assert that the ancients did not need mechanism because they had surplus labor, but it seems to us more credible to say that they did not exploit mechanisms because their culture did not tolerate the idea. With the little Renaissance, the first machine with two co-ordinated motions, a sawmill that both pushed the log and turned the saw blade, turned up (White 1978: 80). Was it something in Germanic culture, or the effect of bringing together the cultures of Greece and Rome, of Islam and the East, that brought a sense of mechanism? We hope to learn more about this question, but for the moment we have to leave it unanswered.

What we can see is that generalizations of the idea of mechanism would be fruitful for technology (and they were), but that it would take an abstraction to produce a new view of nature. The algorithm can be understood in just this way. If its originators in India disregarded mechanism, and the north European developers of mechanism lacked the abstraction, it would only be the accidental propinquity of the two that generated a result. Put the abstract version together in one culture with a host of concrete examples, and by metaphor lay out the idea of the universe as a great machine. What is characteristic of machines is their temporality; a static machine is not a machine at all. And, with that, further add the co-ordination of motions as in the sawmill. Galileo discovered that force alters acceleration, not velocity (a discovery about temporality) and during the next few centuries mechanical clocks were made successfully. The notion of a clockwork universe spread across Europe (note that the Chinese had clockworks in the 11th Century, but never developed the notion of a clockwork universe, cf. Needham 1981). For any machine, it is possible to make functional diagrams and describe the relative motions of the parts; and the theories of classical science can be understood as functional diagrams of nature, with descriptions of the relative motions of the parts.
Could it be that our current stagnation reflects the lack of some idea as fundamental to future intellectual progress as arithmetic calculations, algorithms, and the clockwork universe proved to be for early modern and industrial Europe?

It is impossible for me to answer that question YES or NO, but my intuitions in these matters, such as they are, point in that direction.

To return to the exploration metaphor proposed by Collison and Nielsen, we have been exploring the world with a set of techniques grounded in basic ideas set in place by, say, the middle of the previous century. We’ve pretty much described the phenomena and constructed the technologies reachable by those means. Beyond that I believe that we’ve flushed out many hints of things beyond those means, but we have yet to consolidate those hints into a conceptual device as powerful and basic as the clockwork universe proved to be in an earlier era. When that new conception finally emerges we’ll move forward into yet another brave new world.

* * * * *

Addendum on calculation 11.19.18: I have no idea how pervasive and important numerical calculation was in the ancient world, but it is absolutely central to the modern world, by which I mean the post-1600 world. Thus the invention of logarithms and, in particular, common logarithms (to base 10) was tremendously important. The computation of logarithms took untold hours, but once the results were computed they were compiled into tables which were printed and distributed, thus saving vastly more hours of calculation for those who used them. The calculation and dissemination of logarithms led directly to the invention of the slide rule, which remained pervasive in engineering and scientific circles up through the 1970s.

See Wikipedia, History of logarithms: https://en.wikipedia.org/wiki/History_of_logarithms.
