
Friday, August 25, 2017

Metagram Software - A New Perspective on the Art of Computation


Back in the ancient days of 1980 or thereabouts, my teacher, the late David Hays, got a small grant from the Air Force to study metagramming. What, you may ask, is metagramming? Why, I reply, as the report says, it’s a new perspective on the art of computation.


Huh?
* * * * *

The construction of reliable software is difficult, more difficult, it would seem, than the building of reliable computational hardware. Software projects are notorious for being late, over budget, and, all too often, for failing. The most public example in recent memory is the roll-out of Obamacare (aka the Affordable Care Act). That snafu is as much a matter of bureaucratic and administrative incompetence as technical weakness, but in practice it is hard to disentangle the two. For a classic statement of the problem dating back to the Jurassic Era of computing, we have Fred Brooks, The Mythical Man-Month: Essays on Software Engineering (1975). We are now several generations of software development methodology down the road from Brooks, but, so far as I know, the problem persists.

The problem is not whether superb programmers can develop high quality (that is, [relatively] bug-free) software. They can. But can teams populated by ordinary developers do so? Alas, it would seem, that’s a dicier proposition.

That’s the problem Hays set out to tackle. Hays was not himself a programmer. He was a theorist of computation and language who had, early in his career, run a machine translation project for the RAND Corporation. But he worked with programmers and allied practitioners through much of his career, and he did so on this project.

So, what sort of thing did Hays have in mind? Something like the difference between calculating with Roman numerals and computing with Arabic numerals. Consider this passage from our article, The Evolution of Cognition:
Marrou, in describing education in the Hellenistic period, writes
Strange though it may seem at first, it is nevertheless quite clear that addition, subtraction, multiplication and division ... were, in antiquity, far beyond the horizon of any primary school. The widespread use of calculating-tables and counting-machines shows that not many people could add up--and this goes on being true to a much later date, even in educated circles. (1956: 158)
In an additional note (p. 410), Marrou remarks that adults would often write out multiplication tables for themselves, presumably because they could not obtain answers out of their heads. Without a good system of notation the formulation of algorithms is so difficult that a complete set wasn't created for any number system other than the Indo-Arabic. Before these procedures were gathered and codified the calculations our children routinely make required the full attention of educated adults, who solved them on a case-by-case basis:
The mathematical texts are simply concrete examples of different problems worked out in full. They illustrate to the reader how to do sums of various kinds. But by themselves such series of examples could hardly suffice to enlighten a novice as to new methods nor impart to him fresh knowledge. They must have been intended as supplements to oral instruction. (Childe 1936/1951, 152-153)
But Childe has no evidence about the oral instruction, and Marrou seems to believe that there was none. In the twentieth century we have taught psychiatry, business management, and the law by the method of case study. What has to be accepted as fact, however "Strange though it may seem at first," is that up to the Renaissance elementary arithmetic was taught in just that way, and, we hold, for the same reason: The kind of thinking that was available in the culture could just manage the substance of the matter but could not rise above it to abstract and rationalize the principles.
Hays believed that computer programming was in a phase comparable to calculation before it had been rationalized through the use of place notation and zero in the Arabic notation system.
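
To make the contrast concrete, here is a toy sketch of my own (not anything from the report): the schoolbook addition algorithm. The whole point of place notation is that it reduces the addition of arbitrarily large numbers to a short mechanical routine over single digits and a carry, a routine that Roman numerals never supported.

def add_place_value(a: str, b: str) -> str:
    """Schoolbook addition over decimal digit strings.

    Place notation reduces addition of arbitrarily large numbers
    to a fixed routine over single digits plus a carry -- the kind
    of mechanical procedure Roman numerals never supported.
    """
    result, carry = [], 0
    # Pad to equal length, then walk both numbers right to left,
    # one column at a time.
    for da, db in zip(reversed(a.rjust(len(b), "0")),
                      reversed(b.rjust(len(a), "0"))):
        carry, digit = divmod(int(da) + int(db) + carry, 10)
        result.append(str(digit))
    if carry:
        result.append(str(carry))
    return "".join(reversed(result))

print(add_place_value("478", "1694"))  # -> 2172

Try writing the analogous routine over CDLXXVIII and MDCXCIV; the notation gives you nothing to get a mechanical grip on.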

Was he, is he, right? I don’t know. I must admit that, in retrospect, I find the confidence expressed in the report to be rather breath-taking. Still, why try if you aren’t confident? In any event, the objective of the metagram project was to point the way toward that kind of change in programming methodology.

Hays wrote the body of the report while I wrote a long appendix discussing the analysis of strategic intelligence in terms of the theoretical framework used in the report (and, a bit later, in that article on cognition).


I’ve reproduced the report’s abstract and two summaries (one for the body, one for the appendix) below.

* * * * *

Metagram Software - A New Perspective on the Art of Computation

Final Technical Report,
RADC-TR-81-118,
October 1981

Rome Air Development Center,
Air Force Systems Command,
Griffiss Air Force Base,
New York 13441

Abstract

The report documents the results of a six-month R&D effort consisting of a critical examination and feasibility test of the metagramming technique to assess its innovative utility in providing improved access to databases in the COINS* network. The introduction briefly describes current problems in software development/management and outlines metagramming principles. The first chapter illustrates state-of-the-art limitations of conventional programming. The second chapter elucidates the conceptual foundations of metagramming (multi-level abstraction, cognitive processes) and describes a three-level computational system based on metagramming. The third chapter discusses a continuous evolutionary growth of cognition to progressively higher strata described as a sequence of cognitive jumps, each of them characterized by a greater control over complexity than its predecessor. The historical evolution of computational technology is described in the fourth chapter, prior to highlighting the role of higher-level abstractions and the “universal executive” inherent in the metagramming strategy of computation. The fifth chapter envisions the development of metagramming technology as a series of successively easier-to-use machines. The problem of control in metagramming processes is addressed in the sixth chapter. The seventh chapter discusses computational requirements associated with progressively more complex world models inherent in the evolution of metagramming from the initial system (level 0) to a multi-system (level 6). The last chapter deals with the applicability of metagramming to intelligence needs as a means of substantially enhancing the analytic competence of the intelligence community. A discussion of metagramming in the context of intelligence requirements is provided in the Appendix.

* COINS = COunterinsurgency INtelligence Surveillance

Summary of Metagramming
David G. Hays

Metagramming offers a new perspective on information systems. Its aim is to achieve conceptual control at a level that would reduce the software problem that slows the growth of computation. Such a level of control would also serve well in the analysis of military and political intelligence, and in other areas where highly abstract methods of thought serve practical ends.

Contemporary methods of programming include at least two levels of abstraction. One is a standard level of instructions, subroutines, and so on, conceptually very closely related to the fetch-execute cycle. The higher level is exemplified by a few systems such as LISP and FORTH, which provide the user with a concept of functions. These systems include executives which define the most abstract level in relation to the lower.

Metagramming concerns itself with the executive. A universal format for executives constitutes a model for the relation between a more concrete and a more abstract system. This is the metarelation.
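
To give a rough feel for what that means, here is a toy sketch of my own, not anything from the report: a FORTH-style executive in which an abstract vocabulary of named words gets its meaning by being mapped onto a handful of concrete stack primitives. The run() loop plus the dictionary is the executive; the mapping it embodies is a metarelation in miniature.

# A toy FORTH-style executive: the abstract level is a vocabulary of
# named "words"; the concrete level is a handful of stack primitives.
# The executive -- the run() loop plus the dictionary -- is what
# relates the two levels: a metarelation in miniature.

stack = []

primitives = {
    "+":   lambda: stack.append(stack.pop() + stack.pop()),
    "*":   lambda: stack.append(stack.pop() * stack.pop()),
    "dup": lambda: stack.append(stack[-1]),
}

# Abstract words are defined in terms of lower-level words.
dictionary = {"square": ["dup", "*"]}

def run(program):
    for word in program:
        if isinstance(word, int):
            stack.append(word)          # literals go on the stack
        elif word in dictionary:
            run(dictionary[word])       # expand an abstract word
        else:
            primitives[word]()          # execute a concrete primitive

run([3, "square", 4, "square", "+"])    # 3^2 + 4^2
print(stack.pop())                      # -> 25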

The evolution of human culture can be interpreted as a sequence of stages, each with new metarelations establishing control over the fundamental processes of the prior stage. All of the previous stages are represented in the contemporary population: computer software, military command systems, and other practical affairs must meet the condition that persons with varying levels of cognitive power have to operate their several components. Hence the problem of software design is the use of metasystems to construct worlds (databases, computing languages) adapted to the various levels of users. The present lack of intermediate grades, between the universal and the specific (between FORTRAN and word processors, say) forces many persons to operate in worlds that suit them badly, reducing their productivity and inducing error.
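
Continuing the toy executive sketched above (again my illustration, with invented names, not the report’s), a metasystem might construct such a “world” by selecting the vocabulary a given class of users sees; the missing intermediate grades would then be intermediate vocabularies.

# Builds on the toy executive above: a metasystem constructs a "world"
# by choosing which vocabulary a given class of users may invoke.
def make_world(allowed):
    def restricted_run(program):
        for word in program:
            if not isinstance(word, int) and word not in allowed:
                raise NameError(f"'{word}' does not exist in this world")
        run(program)
    return restricted_run

# A novice's world: only whole words, no raw stack primitives.
novice = make_world({"square"})
novice([5, "square"])        # fine: abstract vocabulary only
# novice([5, "dup", "*"])    # NameError: the primitives are hidden here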

Metastructures are needed in intelligence because several levels of abstraction are involved in a political or military system, because analysis entails a complex interplay between presuppositions and conclusions, and because international relations is an intricate mixture of mutual trust and mutual deceit.

The initial implementation of metagramming systems requires neither a large expense nor a long period of development. Even the initial systems should have practical value. If this proves to be the case, then benefits should increase with time.

Summary: Computation and Strategic Intelligence, Notes on Sherman Kent, Double Contingency, Gibsonian Psychology, and Metagramming
William Benzon

The theoretical expansiveness of metagramming paradoxically serves the purpose of reducing the problem of writing software to one which can reliably be solved.

The goal of the intelligence analyst is to detect and analyze substantive problems against a background of descriptive, reportorial, and speculative-evaluative knowledge; these forms of knowledge are stratified, with higher strata being meta to lower strata.

Gibsonian psychology frees us from Cartesian solipsism and doubt and provides a criterion for ascertaining the reality of our perceptions: If new information becomes available when the object is inspected, then it is real; otherwise it is a figment of someone’s imagination. Invariance detection is the medium of perceptual interchange with the environment and conservation principles are invariance principles.

Just as the search space in which chess is played is meta to the one generated by the rules of chess, so the space in which computers can reliably be metagrammed is meta to the one in which computers have heretofore been designed and built. The higher level space is related to the lower level through representation functions which work well with complex irregular objects.**

Metagramming works by inducing an ecological closure over an otherwise unbounded search space. An account of the application domain (e.g. strategic intelligence) is crucial to formulating the closure.

** Comment from 2017: The machine learning techniques that have flourished in the past decade have had some success in dealing with “complex irregular objects”, for example, visual objects. These programs have been more successful with visual recognition than the hand-coded programs of the classical era of symbolic AI. These techniques didn’t exist at the time this report was written. Might they play a role in the regime imagined in this report?

2 comments:

  1. Wpdoyle1@earthlink.net, August 25, 2017 at 7:47 AM

    Bill, great fun to read Metagramming. I think you were spot on with higher level control structures for programming. However, it became clear first with expert systems and now with 2017 cognitive knowledge systems that software system functionality is only partly a result of the code. When the code interacts with knowledge, new function emerges. Or as Mustapha at Cal Tech remarked when asked how his neural net had decided to deny loans to some people: "I don't know, nobody knows; the algorithm figured it out."

    Replies
    1. Yeah, the interesting and fruitful thing about machine learning is that the code 'tells' the machine how to learn, but the machine itself 'figures out' just what to learn.
