Project Details

Theoretical framework and bifurcation analysis for deep recurrent neural networks inferred from neural measurements

Subject Area: Experimental and Theoretical Network Neuroscience; Cognitive, Systems and Behavioural Neurobiology
Term: since 2022
Project identifier: Deutsche Forschungsgemeinschaft (DFG) - Project number 502196519
 
In theoretical neuroscience, we construct mathematical models of neural circuits to gain insight into the computational principles and dynamical mechanisms underlying experimental measurements. Recent advances in deep learning may help to automate this process to some degree: rather than explicitly building a model that could explain the data based on the theoretician’s insights, we may train recurrent neural networks (RNNs) directly on data to reproduce, predict, or freely generate the observed recordings. While this is a promising approach to which our group has contributed substantially in recent years, it also comes with new challenges. Since such models are designed by an algorithm rather than by ourselves, analyzing their computational and dynamical properties is generally more demanding. Moreover, current RNN training algorithms usually assume that the underlying system is stationary (with parameters constant across time), an assumption that will often be violated in neuroscience, especially in tasks that require any form of learning or adaptive behavior. Allowing for adaptive parameter changes in RNNs (or any dynamical system), in turn, will lead to abrupt transitions in system dynamics at certain critical points, so-called bifurcations. Bifurcations are hugely important both from a theoretical perspective and for explaining many observations in neural systems and psychiatry.

The present proposal addresses these challenges. In WP1, we develop a mathematical framework for a special class of RNNs inferred from experimental data by deep learning, based on concepts from nonlinear dynamical systems and bifurcation theory. This framework will enable us to open the “black box” and thoroughly characterize the dynamical behavior of empirically inferred RNNs within different parameter regimes, and thereby to deduce important computational and functional properties of the system the RNN was trained on.

In WP2, we will extend existing RNN training algorithms to allow for parameter changes across time or experimental trials. Together with the mathematical advances planned in WP1, this will enable us to thoroughly characterize bifurcations (including the specific type of bifurcation) in experimental data.

In WP3, we will use the mathematical concepts and algorithms developed in WP1 and WP2 to address a long-standing (but still unresolved) hypothesis about rule learning in rodents: by analyzing RNNs trained directly on neural recordings, we will test whether the empirically observed abrupt changes in neural population activity during learning of a new behavioral rule are indeed due to bifurcations. We will also determine which type of bifurcation may underlie the observed changes in neural activity, as this has important functional implications. The methods and concepts developed here will be applicable in many scientific and engineering areas, far beyond neuroscience.
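As a rough illustration of the kind of analysis envisaged in WP1 and WP2, the following Python sketch (not part of the proposal, and not the group’s actual tooling) takes a small RNN of a generic piecewise-linear form, numerically locates its fixed points, and scans how their number and stability change as a single parameter is varied. The model form, the swept "gain" parameter, and the scipy-based root finding are all illustrative assumptions, not methods taken from the project.

```python
# Minimal sketch (assumed setup, for illustration only): tracking how the fixed
# points of a small RNN change as one parameter is varied, a crude one-parameter
# "bifurcation scan" of the sort one might run on an empirically inferred model.
# Model form z_{t+1} = A z + gain * W relu(z) + h is a generic piecewise-linear
# RNN parameterisation chosen here only for demonstration.
import numpy as np
from scipy.optimize import fsolve

rng = np.random.default_rng(0)
D = 3                                   # latent dimensionality
A = np.diag(rng.uniform(0.3, 0.9, D))   # diagonal linear dynamics
W = 0.5 * rng.standard_normal((D, D))   # nonlinear coupling
np.fill_diagonal(W, 0.0)
h = rng.standard_normal(D)              # bias term

def step(z, gain):
    """One RNN update; `gain` is the (hypothetical) parameter being swept."""
    return A @ z + gain * (W @ np.maximum(z, 0.0)) + h

def find_fixed_points(gain, n_starts=50):
    """Solve z = step(z) from many random initial guesses, keep unique solutions."""
    fps = []
    for _ in range(n_starts):
        z0 = 5.0 * rng.standard_normal(D)
        z_star, info, ier, msg = fsolve(
            lambda z: step(z, gain) - z, z0, full_output=True
        )
        if ier == 1 and not any(np.allclose(z_star, f, atol=1e-4) for f in fps):
            fps.append(z_star)
    return fps

def is_stable(z_star, gain):
    """Stable if all eigenvalues of the Jacobian lie inside the unit circle."""
    J = A + gain * W @ np.diag((z_star > 0).astype(float))
    return bool(np.all(np.abs(np.linalg.eigvals(J)) < 1.0))

# Sweep the gain and report how many (stable) fixed points exist; abrupt changes
# in these counts indicate parameter values where a bifurcation occurs.
for gain in np.linspace(0.5, 2.0, 7):
    fps = find_fixed_points(gain)
    n_stable = sum(is_stable(f, gain) for f in fps)
    print(f"gain={gain:.2f}  ->  {len(fps)} fixed point(s), {n_stable} stable")
```

In this toy setting, a change in the number or stability of fixed points between neighbouring gain values flags a candidate bifurcation; the proposal’s actual analysis would instead operate on RNN parameters inferred from neural recordings and on the time- or trial-dependent parameters introduced in WP2.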
DFG Programme: Research Grants
 
 
