Project Details

Principled, efficient and stable event-based learning in spiking neural networks

Applicant Dr. Christian Klos
Subject Area Experimental and Theoretical Network Neuroscience
Methods in Artificial Intelligence and Machine Learning
Statistical Physics, Nonlinear Dynamics, Complex Systems, Soft and Fluid Matter, Biological Physics
Term since 2026
Project identifier Deutsche Forschungsgemeinschaft (DFG) - Project number 580311533
 
Unlike conventional artificial neural networks, spiking neural networks (SNNs) rely on sparse, asynchronous communication through discrete spikes, reflecting the fundamental mode of biological neuronal interaction. This communication mode enables more faithful models of neural computation and substantially more energy-efficient artificial intelligence when employed on neuromorphic hardware. To realize the potential of SNNs, efficient and performant training methods are essential. The prevailing approach relies on timestep-based simulations combined with surrogate gradient descent, which modifies gradient computation to ensure non-zero gradients despite the discreteness of spikes. However, this approach fails to exploit the sparse communication and yields inaccurate gradients.

Event-based methods offer a compelling alternative: they simulate SNNs exactly by iterating over spikes, thereby exploiting communication sparsity and enabling exact gradient computation by treating spike times as differentiable quantities. Yet, current event-based approaches are hindered by computational inefficiency when spike counts are high and suffer from exploding and vanishing gradients.

This project aims to establish exact event-based simulation and learning as practically useful methods for training SNNs. It consists of three parts. First, I will design efficient algorithms for event-based training on modern GPUs. To reduce memory requirements, I propose a novel combination of time-reversed simulations with reuse of minimal stored information from the forward pass. To reduce runtime, I will exploit sparse network connectivity and explore simultaneous processing of multiple spikes. Second, I will address gradient instabilities using rigorous tools from dynamical systems theory.
Using tasks of controllable temporal complexity, I will characterize gradient stability in terms of gradient norm, gradient dimensionality, and the spectrum of Lyapunov exponents, and identify key neuronal and network features that determine trainability.

Third, I will apply these insights to construct novel SNN models and architectures that are both computationally efficient and trainable, and benchmark them on demanding neuromorphic datasets not yet tackled by event-based methods. The results will deliver a principled understanding of which classes of SNNs can be trained effectively and how this can be done efficiently. Furthermore, they will open new avenues for designing next-generation SNNs for use in neuromorphic computing and computational neuroscience.
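The contrast between timestep-based and event-based simulation can be illustrated with a toy sketch: for a leaky integrate-and-fire neuron with instantaneous (delta) synapses, the membrane potential decays in closed form between input spikes, so the state can be advanced exactly from event to event rather than in fixed timesteps. This is a minimal illustration under assumed simplifications (a single neuron, delta synapses, hard reset), not the project's algorithm; all names and parameter values are hypothetical.

```python
import math

def lif_event_sim(input_spike_times, w, tau=10.0, v_th=1.0):
    """Event-based simulation of one leaky integrate-and-fire neuron.

    Between input spikes the membrane potential obeys tau * dv/dt = -v,
    whose solution v(t) = v0 * exp(-(t - t0) / tau) lets us jump exactly
    from one event to the next instead of stepping on a fixed grid.
    input_spike_times must be sorted; each spike adds weight w instantly.
    Returns the list of output spike times.
    """
    v, t_last = 0.0, 0.0
    out = []
    for t in input_spike_times:
        v *= math.exp(-(t - t_last) / tau)  # exact decay since last event
        v += w                              # instantaneous synaptic jump
        t_last = t
        if v >= v_th:                       # threshold crossing at event
            out.append(t)
            v = 0.0                         # hard reset
    return out
```

With delta synapses the potential only increases at input events, so checking the threshold at events is exact; with smoother synaptic kernels the crossing time between events must instead be found by root solving, which is where much of the computational cost of event-based methods arises.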
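The link between Lyapunov exponents and gradient stability invoked in the second part can be made concrete: backpropagated gradients are products of the same Jacobians whose average logarithmic growth rate defines the Lyapunov spectrum, so a positive largest exponent implies exploding gradients while strongly negative exponents imply vanishing ones. The following is a generic one-dimensional estimator demonstrated on a toy chaotic map, not tied to SNN dynamics; names and parameters are illustrative.

```python
import math

def largest_lyapunov(f, df, x0, n=5000, burn=100):
    """Estimate the largest Lyapunov exponent of the map x_{t+1} = f(x_t)
    by averaging log|f'(x_t)| along a trajectory. These are the same
    per-step Jacobian factors that multiply a backpropagated gradient.
    """
    x = x0
    for _ in range(burn):                   # discard the transient
        x = f(x)
    acc = 0.0
    for _ in range(n):
        # clamp |f'| away from zero so an exact tangency cannot blow up the log
        acc += math.log(max(abs(df(x)), 1e-12))
        x = f(x)
    return acc / n

# Toy check: the logistic map at r = 4 is chaotic with exponent ln 2 ~ 0.693
r = 4.0
lam = largest_lyapunov(lambda x: r * x * (1 - x),
                       lambda x: r * (1 - 2 * x),
                       x0=0.2)
```

For networks the scalar derivative becomes a Jacobian matrix, and the full spectrum is obtained by averaging log-norms of the columns of periodically re-orthonormalized (QR-decomposed) tangent vectors.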
DFG Programme Fellowship
International Connection USA
 
 
