Saturday, June 10, 2023

A morning walk along the river

How network structure shapes decision-making for bio-inspired computing

Schirner, M., Deco, G. & Ritter, P. Learning how network structure shapes decision-making for bio-inspired computing. Nat Commun 14, 2963 (2023).

Abstract: To better understand how network structure shapes intelligent behavior, we developed a learning algorithm that we used to build personalized brain network models for 650 Human Connectome Project participants. We found that participants with higher intelligence scores took more time to solve difficult problems, and that slower solvers had higher average functional connectivity. With simulations we identified a mechanistic link between functional connectivity, intelligence, processing speed and brain synchrony, in which excitation-inhibition balance governs the trade-off between accuracy and speed. Reduced synchrony led decision-making circuits to quickly jump to conclusions, while higher synchrony allowed for better integration of evidence and more robust working memory. Strict tests were applied to ensure reproducibility and generality of the obtained results. Here, we identify links between brain structure and function that make it possible to learn connectome topology from noninvasive recordings and map it to inter-individual differences in behavior, suggesting broad utility for research and clinical applications.
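The paper's personalized brain-network models aren't reproduced here, but the speed-accuracy trade-off the abstract describes can be illustrated with a standard drift-diffusion model of evidence accumulation (not the authors' model; the drift, noise, and bound parameters below are purely illustrative). A low decision bound corresponds to circuits that "jump to conclusions" quickly but err more often; a high bound integrates more evidence, trading speed for accuracy:

```python
import random

def ddm_trial(drift=0.1, noise=1.0, bound=1.0, dt=0.01, rng=random):
    """One drift-diffusion trial: accumulate noisy evidence until a bound is hit.
    Returns (correct, reaction_time); drift > 0 makes the upper bound correct."""
    x, t = 0.0, 0.0
    while abs(x) < bound:
        x += drift * dt + noise * (dt ** 0.5) * rng.gauss(0, 1)
        t += dt
    return (x > 0, t)

def speed_accuracy(bound, n=2000, seed=0):
    """Estimate accuracy and mean reaction time for a given decision bound."""
    rng = random.Random(seed)
    trials = [ddm_trial(bound=bound, rng=rng) for _ in range(n)]
    accuracy = sum(correct for correct, _ in trials) / n
    mean_rt = sum(rt for _, rt in trials) / n
    return accuracy, mean_rt

# Low bound: fast, error-prone. High bound: slower, more accurate.
acc_low, rt_low = speed_accuracy(bound=0.5)
acc_high, rt_high = speed_accuracy(bound=2.0)
print(f"low bound:  accuracy={acc_low:.2f}, mean RT={rt_low:.2f}s")
print(f"high bound: accuracy={acc_high:.2f}, mean RT={rt_high:.2f}s")
```

Raising the bound reliably increases both accuracy and reaction time, which is the qualitative pattern the abstract links to higher synchrony and higher intelligence scores.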

Saturday, June 3, 2023

Neural Networks and the Chomsky Hierarchy

Grégoire Delétang, Anian Ruoss, Jordi Grau-Moya, Tim Genewein, Li Kevin Wenliang, Elliot Catt, Chris Cundy, Marcus Hutter, Shane Legg, Joel Veness, Pedro A. Ortega, Feb 28, 2023, arXiv:2207.02098v3 [cs.LG].

Abstract: Reliable generalization lies at the heart of safe ML and AI. However, understanding when and how neural networks generalize remains one of the most important unsolved problems in the field. In this work, we conduct an extensive empirical study (20'910 models, 15 tasks) to investigate whether insights from the theory of computation can predict the limits of neural network generalization in practice. We demonstrate that grouping tasks according to the Chomsky hierarchy allows us to forecast whether certain architectures will be able to generalize to out-of-distribution inputs. This includes negative results where even extensive amounts of data and training time never lead to any non-trivial generalization, despite models having sufficient capacity to fit the training data perfectly. Our results show that, for our subset of tasks, RNNs and Transformers fail to generalize on non-regular tasks, LSTMs can solve regular and counter-language tasks, and only networks augmented with structured memory (such as a stack or memory tape) can successfully generalize on context-free and context-sensitive tasks.
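The benchmark idea is easy to sketch: each task is a string-to-string function sitting at a known level of the Chomsky hierarchy, trained on short inputs and tested on strictly longer ones, so any success on the test split is length generalization. The three toy tasks below are modeled on tasks of the kind the paper uses (parity is regular, string reversal is context-free, string duplication is context-sensitive); the function names and length ranges here are illustrative, not the paper's exact setup:

```python
import random

# Each task maps a bit string to a target token sequence.
def parity(bits):      # regular: is the count of 1s even or odd?
    return [str(sum(bits) % 2)]

def reverse(bits):     # context-free: solvable with a stack
    return [str(b) for b in reversed(bits)]

def duplicate(bits):   # context-sensitive: emit the input twice
    return [str(b) for b in bits] * 2

TASKS = {"parity": parity, "reverse": reverse, "duplicate": duplicate}

def make_split(task, lengths, n, seed=0):
    """Generate n (input, target) pairs with input lengths drawn from `lengths`."""
    rng = random.Random(seed)
    pairs = []
    for _ in range(n):
        bits = [rng.randint(0, 1) for _ in range(rng.choice(list(lengths)))]
        pairs.append((bits, TASKS[task](bits)))
    return pairs

# Train on short strings, test out-of-distribution on strictly longer ones.
train = make_split("reverse", lengths=range(1, 41), n=1000)
test = make_split("reverse", lengths=range(41, 501), n=1000, seed=1)
```

Grouping tasks this way is what lets the paper predict failures: an architecture without stack-like memory can fit the short training strings of `reverse` perfectly yet never generalize to the longer test split.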

The New World Disorder | Robert Wright & Thomas Friedman

1:31 China’s role in Saudi-Iran rapprochement
7:10 The legacy of Trump’s withdrawal from the Iran nuclear deal
12:25 Does Israel have a Palestine plan?
24:12 Tensions resurface in Kosovo
29:03 NATO expansion and Russia’s invasion of Ukraine
35:31 Has the US mismanaged its relationship with Russia?
43:53 Prospects for peace in Ukraine—or for massive military escalation

The whole discussion is excellent, and depressing.

Friday, June 2, 2023

From the breakfast table

Why AI can't be conscious

Abstract of the article linked above:

Interactions with large language models have led to the suggestion that these models may be conscious. From the perspective of neuroscience, this position is difficult to defend. For one, the architecture of large language models is missing key features of the thalamocortical system that have been linked to conscious awareness in mammals. Secondly, the inputs to large language models lack the embodied, embedded information content characteristic of our sensory contact with the world around us. Finally, while the previous two arguments can be overcome in future AI systems, the third one might be harder to bridge in the near future. Namely, we argue that consciousness might depend on having 'skin in the game', in that the existence of the system depends on its actions, which is not true for present-day artificial intelligence.

Thursday, June 1, 2023

Pier 13, Hoboken [Empire State Building]

Duke Ellington on Rock and Roll