Project Details

Autonomous and Efficiently Scalable Deep Learning

Subject Area Image and Language Processing, Computer Graphics and Visualisation, Human Computer Interaction, Ubiquitous and Wearable Computing
Term from 2014 to 2020
Project identifier Deutsche Forschungsgemeinschaft (DFG) - Project number 260197604
 
Neural circuits of humans and animals extract meaningful high-level information from sensory stimuli in successive stages of information processing. Neural processing is highly autonomous, flexible, interactive, and efficient at very large scales. In contrast, Deep Learning algorithms as currently developed by the Machine Learning community, first, require strong involvement of researchers and, second, are relatively small in scale and inefficient compared, e.g., to mammalian sensory systems. The dependence on researchers stems from large sets of free parameters that have to be hand-tuned and large amounts of data that have to be hand-labeled. The scalability of Deep Learning depends on learning autonomy and on the technical implementation of learning. Our goals in this project are therefore: (A) to minimize the dependency of Deep Learning on the developing researchers, and (B) to scale Deep Learning efficiently to large networks. To achieve our first goal (A), we will study Deep Learning with few free parameters and internal self-tuning. Our investigations will build on Poisson mixtures, which are directed probabilistic graphical models with exact closed-form learning equations and fully interpretable hidden states. They are functionally competitive on weakly labeled data and provide sophisticated uncertainty information. This uncertainty feedback will be used to further increase autonomy through active interaction with the environment. To achieve our second goal (B), we apply efficient approximation methods in combination with very compact, locally implementable learning in neural circuits. Such circuits can be shown, theoretically and empirically, to approximate optimal learning in Poisson mixtures. They are inherently parallel, learn unsupervised and online, and their implementation on GPU clusters and analog VLSI is straightforward. In summary, we seek to develop the most autonomous and most efficiently scalable Deep Learning systems to date.
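The closed-form learning equations mentioned for Poisson mixtures can be illustrated with standard expectation-maximization (EM) updates. The following is a minimal sketch on synthetic count data, not the project's actual model; the component count, rates, and data below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic count data drawn from two Poisson components (rates 3 and 12).
data = np.concatenate([rng.poisson(3.0, 200), rng.poisson(12.0, 200)])

# Illustrative initial parameters: mixing weights pi and component rates lam.
K = 2
pi = np.full(K, 1.0 / K)
lam = np.array([1.0, 10.0])

for _ in range(50):
    # E-step: posterior responsibility of each component for each data point
    # (log Poisson likelihood up to a per-point constant, then normalized).
    log_p = data[:, None] * np.log(lam) - lam + np.log(pi)
    log_p -= log_p.max(axis=1, keepdims=True)
    resp = np.exp(log_p)
    resp /= resp.sum(axis=1, keepdims=True)

    # M-step: exact closed-form updates for weights and rates.
    Nk = resp.sum(axis=0)
    pi = Nk / len(data)
    lam = (resp * data[:, None]).sum(axis=0) / Nk

print(np.sort(lam))  # estimated rates, expected near 3 and 12
```

Because both the E-step and M-step are closed-form and local, updates of this kind can also be applied online per data point, which is what makes parallel and neuromorphic implementations attractive.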
DFG Programme Priority Programmes
 
 