Friday, October 6, 2023

Towards Self-Assembling Artificial Neural Networks

Elias Najarro, Shyam Sudhakaran, Sebastian Risi, Towards Self-Assembling Artificial Neural Networks through Neural Developmental Programs, arXiv:2307.08197v1 [cs.NE]

Abstract: Biological nervous systems are created in a fundamentally different way than current artificial neural networks. Despite its impressive results in a variety of different domains, deep learning often requires considerable engineering effort to design high-performing neural architectures. By contrast, biological nervous systems are grown through a dynamic self-organizing process. In this paper, we take initial steps toward neural networks that grow through a developmental process that mirrors key properties of embryonic development in biological organisms. The growth process is guided by another neural network, which we call a Neural Developmental Program (NDP) and which operates through local communication alone. We investigate the role of neural growth on different machine learning benchmarks and different optimization methods (evolutionary training, online RL, offline RL, and supervised learning). Additionally, we highlight future research directions and opportunities enabled by having self-organization driving the growth of neural networks.

From the introduction:

The study of neural networks has been a topic of great interest in the field of artificial intelligence due to their ability to perform complex computations with remarkable efficiency. However, despite significant advancements in the development of neural networks, the majority of them lack the ability to self-organize, grow, and adapt to new situations in the same way that biological neurons do. Instead, their structure is often hand-designed, and learning in these systems is restricted to the optimization of connection weights.

Biological networks, on the other hand, self-assemble and grow from an initial single cell. Additionally, the amount of information it takes to specify the wiring of a sophisticated biological brain directly is far greater than the information stored in the genome (Breedlove and Watson, 2013). Instead of storing a specific configuration of synapses, the genome encodes a much smaller number of rules that govern how to grow a brain through a local and self-organizing process (Zador, 2019). For example, the 100 trillion neural connections in the human brain are encoded by only around 30 thousand active genes. This outstanding compression has also been called the “genomic bottleneck” (Zador, 2019), and neuroscience suggests that this limited capacity has a regularizing effect that results in wiring and plasticity rules that generalize well.

In this paper, we take first steps in investigating the role of developmental and self-organizing algorithms in growing neural networks instead of manually designing them, which is an underrepresented research area (Gruau, 1992; Nolfi et al., 1994; Kowaliw et al., 2014; Miller, 2014). Even simple models of development such as cellular automata demonstrate that growth (i.e. unfolding of information over time) can be crucial to determining the final state of a system, which cannot directly be calculated (Wolfram, 1984). The grand vision is to create a system in which neurons self-assemble, grow, and adapt, based on the task at hand.
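The cellular-automata point is easy to see concretely. Here is a minimal illustration of my own (not anything from the paper): an elementary 1D cellular automaton whose final pattern can only be obtained by actually running the local update rule step by step, i.e. by letting the information unfold over time.

```python
# Elementary 1D cellular automaton (rule 110 shown): each cell updates
# from only its local neighborhood, yet the global pattern that emerges
# must be computed by running the system forward -- it cannot be read
# off directly from the rule.

def step(cells, rule=110):
    """Apply an elementary CA rule to a 1D binary state (wrapping edges)."""
    n = len(cells)
    out = []
    for i in range(n):
        # Encode the (left, self, right) neighborhood as a 3-bit index,
        # then look up the new cell value in the rule's bit pattern.
        idx = (cells[(i - 1) % n] << 2) | (cells[i] << 1) | cells[(i + 1) % n]
        out.append((rule >> idx) & 1)
    return out

state = [0] * 15
state[7] = 1  # start from a single "on" cell
for _ in range(8):
    print("".join(".#"[c] for c in state))
    state = step(state)
```

Even this toy system grows a nontrivial structure from a single active cell purely through local interactions, which is the intuition the paper scales up to growing neural networks.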

Towards this goal, we present a graph neural network type of encoding, in which the growth of a policy network (i.e. the neural network controlling the actions of an agent) is controlled by another network running in each neuron, which we call a Neural Developmental Program (NDP). The NDP takes as input information from the connected neurons in the policy network and decides if a neuron should replicate and how each connection in the network should set its weight. Starting from a single neuron, the approach grows a functional policy network, solely based on the local communication of neurons. Our approach differs from methods like NEAT (Stanley and Miikkulainen, 2002), which grow neural networks during evolution: here, networks grow during the lifetime of the agent. While not implemented in the current NDP version, this will ultimately allow the neural network of the agents to be shaped based on their experience and environment.
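To make the mechanism concrete, here is a highly simplified sketch of the NDP idea. Everything in it (the `ndp` function, its toy replication and weight rules, the graph representation) is my own hypothetical stand-in, not the authors' implementation: the point is only the structure of the loop, in which every neuron runs the same small program, sees only its local neighborhood, and decides whether to replicate and what weights its connections get.

```python
# Hypothetical sketch of NDP-style growth: a shared local program run by
# every neuron grows a network from a single starting node.
import random

random.seed(0)  # deterministic toy run

def ndp(state, neighbor_states):
    """Shared 'developmental program': local information in, decisions out."""
    mean_nbr = sum(neighbor_states) / max(len(neighbor_states), 1)
    replicate = state + mean_nbr > 0.5   # stand-in growth rule
    weight = state - mean_nbr            # stand-in weight rule
    return replicate, weight

# Graph: node id -> scalar state; edges: (src, dst) -> weight.
nodes = {0: 0.8}   # start from a single neuron
edges = {}

for generation in range(3):
    for nid, state in list(nodes.items()):
        # Each neuron sees only the states of neurons it connects to.
        nbrs = [nodes[d] for (s, d) in edges if s == nid]
        replicate, w = ndp(state, nbrs)
        for (s, d) in edges:
            if s == nid:
                edges[(s, d)] = w            # neuron sets its outgoing weights
        if replicate:
            new_id = max(nodes) + 1
            nodes[new_id] = random.random()  # child neuron with fresh state
            edges[(nid, new_id)] = 0.0       # connect parent to child

print(f"grew {len(nodes)} neurons, {len(edges)} connections")
```

No neuron ever consults a global view of the graph; the network's final topology emerges from repeated local decisions, which is the property the paper argues distinguishes development from hand-designed architectures.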

I speculate that such a regime could eliminate the problem that current models cannot readily be modified once they have been trained; instead, they must be completely retrained if one wishes to add new information to them. This represents a step toward polyviscosity.
