Friday, January 3, 2020

Does biological evolution operate on algorithmic complexity?

Jordana Cepelewicz, Mathematical Simplicity May Drive Evolution’s Speed, Quanta Magazine, November 28, 2019.

By way of analogy, consider those infamous monkeys pounding away on keyboards. Imagine that, instead of producing Hamlet, they're after the digits of pi.
The chances of a monkey typing out the first 15,000 digits of pi are absurdly slim — and those chances decrease exponentially as the desired number of digits grows.

But if the monkey’s keystrokes are instead interpreted as randomly written computer programs for generating pi, the odds of success, or “algorithmic probability,” improve dramatically. A program for generating the first 15,000 digits of pi in the programming language C, for instance, can be as short as 133 characters.
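The point is about program length, not the particular language. By way of illustration (in Python rather than C, and not the 133-character program the article has in mind), here is Gibbons' unbounded spigot algorithm, which streams the digits of pi from about a dozen lines:

```python
def pi_digits():
    """Yield the decimal digits of pi one at a time (Gibbons' unbounded spigot)."""
    q, r, t, k, n, l = 1, 0, 1, 1, 3, 3
    while True:
        if 4 * q + r - t < n * t:
            yield n
            q, r, t, n = 10 * q, 10 * (r - n * t), t, (10 * (3 * q + r)) // t - 10 * n
        else:
            q, r, t, k, n, l = (q * k, (2 * q + r) * l, t * l, k + 1,
                                (q * (7 * k + 2) + r * l) // (t * l), l + 2)

# First 50 digits: 3.1415926535...
gen = pi_digits()
print(str(next(gen)) + "." + "".join(str(next(gen)) for _ in range(49)))
```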

In other words, algorithmic information theory essentially says that the probability of producing some types of outputs is far greater when randomness operates at the level of the program describing them rather than at the level of the outputs themselves, because that program will be short. In this way, complex structures — fractals, for instance — can be more easily produced by chance.
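To make the fractal example concrete: a complex-looking structure can have a tiny generating program. This sketch is my own illustration, not the article's; it prints the Sierpinski triangle from a one-line rule (cell (i, j) is filled exactly when the binary digits of j are a subset of those of i, equivalently, when the binomial coefficient C(i, j) is odd):

```python
# Sierpinski triangle: a few dozen characters of program, versus hundreds
# of character positions that random typing would almost never align.
size = 32
for i in range(size):
    print("".join("#" if (i & j) == j else " " for j in range(i + 1)))
```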
The algorithmic complexity of some object is called Kolmogorov complexity, after Andrey Kolmogorov, and is equal to the length of the shortest program needed to compute it.
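Kolmogorov complexity is uncomputable, so in practice it is approximated from above. The simplest proxy, and the one assumed in the sketches here (Zenil's group uses more refined estimators), is the length of a compressed encoding:

```python
import os
import zlib

def complexity_upper_bound(data: bytes) -> int:
    """Crude upper bound on Kolmogorov complexity: compressed length in bytes."""
    return len(zlib.compress(data, 9))

structured = b"01" * 5000        # regular: a very short program generates it
random_ish = os.urandom(10000)   # incompressible with overwhelming probability

print(complexity_upper_bound(structured))   # a few dozen bytes
print(complexity_upper_bound(random_ish))   # close to the raw 10,000 bytes
```

So, we do a bit of this and that, which the article discusses, and we end up here: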
Despite its problems, algorithmic information does hold some appeal in the realm of biology. Traditionally, the mathematical framework used to describe evolutionary dynamics is population genetics — statistical models of how frequently genes may appear in a population. But population genetics has limitations: It can’t account for the origin of life and other major biological transitions, for instance, or for the emergence of entirely new genes. “A notion that sort of got lost in this lovely mathematical theory is the notion of biological creativity,” Chaitin said. But if we take algorithmic information into account, he said, “creativity fits in naturally.”

So does the idea that the evolutionary process itself is improving over time and becoming more efficient. “I’m quite convinced that evolution does intrinsically learn,” said Daniel Polani, a computer scientist and professor of artificial intelligence at the University of Hertfordshire in England. “And I would not be surprised if this would be expressible by algorithmic complexity going down asymptotically.”

Zenil and his team set out to explore experimentally the biological and computational implications of the algorithmic complexity framework. Using the same complexity approximation technique they had developed to analyze and perturb networks, they “evolved” artificial genetic networks toward certain targets — matrices of ones and zeros meant to represent interactions between genes — by biasing the mutations in favor of those that produced matrices with lower algorithmic complexity. In other words, they selected for greater structure.
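Their published experiments estimate complexity with the Block Decomposition Method (built on the Coding Theorem Method), and their selection scheme differs in detail. Purely as an illustration, though, the shape of the experiment can be sketched with the crude compression proxy from above. Everything below (matrix size, bias strength, hill-climbing rule, the striped target) is my assumption, not the paper's setup:

```python
import random
import zlib

def est_complexity(matrix):
    """Compressed length as a crude stand-in for algorithmic complexity
    (the paper uses the Block Decomposition Method instead)."""
    return len(zlib.compress(bytes(b for row in matrix for b in row), 9))

def mutate(matrix):
    """Return a copy of a binary matrix with one randomly chosen bit flipped."""
    m = [row[:] for row in matrix]
    i, j = random.randrange(len(m)), random.randrange(len(m[0]))
    m[i][j] ^= 1
    return m

def distance(a, b):
    """Hamming distance between two binary matrices."""
    return sum(x != y for ra, rb in zip(a, b) for x, y in zip(ra, rb))

def evolve(target, steps=30000, biased=True):
    """Hill-climb toward `target`; if `biased`, mutations that raise the
    estimated complexity are usually discarded before selection."""
    n = len(target)
    current = [[random.randint(0, 1) for _ in range(n)] for _ in range(n)]
    for step in range(steps):
        candidate = mutate(current)
        if biased and est_complexity(candidate) > est_complexity(current) \
                and random.random() < 0.9:
            continue  # the algorithmically improbable mutation is suppressed
        if distance(candidate, target) <= distance(current, target):
            current = candidate  # ordinary selection on fitness
        if distance(current, target) == 0:
            return step
    return steps

random.seed(0)
# A structured, hence algorithmically simple, target: horizontal stripes.
target = [[(i // 2) % 2] * 16 for i in range(16)]
print("biased steps:  ", evolve(target, biased=True))
print("unbiased steps:", evolve(target, biased=False))
```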
[Photo: Hector Zenil, a computer scientist at the Karolinska Institute in Sweden, seeks to analyze evolving biological networks in terms of their algorithmic (or Kolmogorov) complexity.]

They recently reported in Royal Society Open Science that, compared to statistically random mutations, this mutational bias caused the networks to evolve toward solutions significantly faster. Other features also emerged, including persistent, regular structures — sections within the matrices that had already achieved a degree of simplicity that new generations were unlikely to improve on. “Some regions were more prone or less prone to mutation, simply because they may have evolved some level of simplicity,” Zenil said. “This immediately looked like genes.” That genetic memory, in turn, yielded greater structure more quickly — implying, the researchers propose, that algorithmically probable mutations can lead to diversity explosions and extinctions, too.
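In the sketch above, one could look for such regions by scoring sub-blocks separately; low, stable scores would mark parts of the matrix that have already settled into simple structure. Again this assumes the compression proxy, not the paper's block scoring:

```python
import zlib

def est_complexity(matrix):
    """Compressed length as a crude complexity proxy, as in the sketch above."""
    return len(zlib.compress(bytes(b for row in matrix for b in row), 9))

def block_scores(matrix, k=4):
    """Estimated complexity of each k-by-k sub-block of a binary matrix."""
    n = len(matrix)
    return [[est_complexity([row[j:j + k] for row in matrix[i:i + k]])
             for j in range(0, n, k)]
            for i in range(0, n, k)]
```

Blocks whose scores stop changing from one generation to the next would be the analogue of the mutation-resistant "genes" Zenil describes.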
However, the implications are not clear:
Zenil hopes to explore whether biological evolution operates according to the same computational rules, but most experts have their doubts. It’s unclear what natural mechanism could be responsible for approximating algorithmic complexity or putting that kind of mutational bias to work. Moreover, “thinking of life totally encoded in four letters is wrong,” said Giuseppe Longo, a mathematician at the National Center for Scientific Research in France. “DNA is extremely important, but it makes no sense if [it is] not in a cell, in an organism, in an ecosystem.” Other interactions are at play, and this application of algorithmic information cannot capture the extent of that complexity.
Stay tuned.

And read an article David Hays and I published some years ago, which might, on the basis of its title, seem to point in the opposite direction (but not really): William Benzon and David G. Hays, A Note on Why Natural Selection Leads to Complexity, Journal of Social and Biological Structures 13: 33-40, 1990, https://www.academia.edu/8488872/A_Note_on_Why_Natural_Selection_Leads_to_Complexity, or, https://ssrn.com/abstract=1591788.

Addendum: If you are interested in the brain, see Kevin Mitchell, How much innate knowledge can the genome encode? January 4, 2020.

1 comment:

  1. It's not exactly "unclear what natural mechanism could be responsible". The obvious sort of mechanism would be some algorithmic component in the DNA copying process. Well, in fact, there is a little bit of that, present as error checking of various types. The catch, I suppose, is that error checking doesn't inventively try to compress new genes as they come along; if anything it's evolving more slowly than the "core genome" that it's checking. (Well, there's probably not a core genome; checkers might well check one another.)
