Project Details

Multimodal and Multivariate Machine Learning Methods for Nonlinearly Coupled Oscillatory Systems

Subject Area Image and Language Processing, Computer Graphics and Visualisation, Human Computer Interaction, Ubiquitous and Wearable Computing
Human Cognitive and Systems Neuroscience
Software Engineering and Programming Languages
Theoretical Computer Science
Term from 2013 to 2016
Project identifier Deutsche Forschungsgemeinschaft (DFG) - Project number 236447838
 
Learning appropriate representations, or extracting useful features from data, is one of the fundamental problems of machine learning. Recently, multimodal neuroimaging has become an important tool for basic research and clinical diagnosis. By utilizing methods from machine learning, it has been possible to further our understanding of multimodal neural data and to extract novel insights from the multitude of high-dimensional data, such as obtained from EEG/EMG recordings and simultaneous measures of hemodynamics (e.g. NIRS or fMRI). However, the analysis methods currently in use are not able to optimally extract the underlying common factors (latent sources) if the coupling between the modality-specific dynamics is nonlinear. This is because these methods are either not truly multimodal or do not fully take into account established generative models and the nonlinear nature of the coupling between modalities. Furthermore, today's methods suffer from a trade-off between accuracy (e.g. errors in terms of prediction or quality of regression) and interpretability (i.e. the ability to interpret the resulting representations with respect to the hidden causes/sources).

The proposed project is organized in two parts. In the first (analytical) part we will develop novel multivariate methods for the simultaneous extraction of nonlinearly interacting sources from multimodal imaging data. In particular, we will focus on domain-specific generative models in order to find low-dimensional representations of the multimodal data that maximally explain the coupled dynamics of the underlying system. At the same time, the extracted sources will adhere to the domain-specific generative model assumptions and will therefore be interpretable therein. Specifically, we will develop novel multimodal and multivariate spatial filtering methods that uncover common sources in multimodal neuroimaging data whose dynamics are nonlinearly and non-instantaneously coupled. By seeking common source-space representations, we anticipate that our methods will not only provide excellent performance in terms of establishing a connection between measurement modalities, but will also overcome the aforementioned trade-off between accuracy and interpretability.

In the second part of the project we will apply the newly developed methods to open questions from the fields of (computational) neuroscience and neurotechnology. We expect to contribute substantially to questions concerning (i) the mechanisms underlying the generation of event-related potentials (ERPs), (ii) novel unsupervised training methods for Brain-Computer Interfaces (BCIs), and (iii) a better understanding of the common dynamics in EEG spectral power and NIRS measurements, which in turn will lead to superior performance of BCI applications.
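To make the notion of a "common source-space representation" concrete, the following minimal Python sketch shows a standard linear baseline, not the method proposed in this project: canonical correlation analysis (CCA) learns one spatial filter per modality so that the projected EEG band-power and NIRS time courses are maximally correlated. All data, channel counts, and variable names are illustrative assumptions; a linear, instantaneous model of this kind is precisely what the proposed nonlinear, non-instantaneous filtering methods are meant to go beyond.

# Minimal CCA baseline for extracting a shared source between two modalities.
# This is an illustrative sketch with simulated data, not the project's method.
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(0)
n_samples, n_eeg_ch, n_nirs_ch = 2000, 32, 16   # assumed, illustrative sizes

# Simulate one shared latent source that drives both modalities linearly,
# plus modality-specific noise.
latent = rng.standard_normal(n_samples)
eeg_power = np.outer(latent, rng.standard_normal(n_eeg_ch)) \
    + 0.5 * rng.standard_normal((n_samples, n_eeg_ch))
nirs = np.outer(latent, rng.standard_normal(n_nirs_ch)) \
    + 0.5 * rng.standard_normal((n_samples, n_nirs_ch))

# Fit two-view CCA: each component is a pair of spatial filters, one per
# modality, whose projected time courses are maximally correlated.
cca = CCA(n_components=2)
eeg_sources, nirs_sources = cca.fit_transform(eeg_power, nirs)

corr = np.corrcoef(eeg_sources[:, 0], nirs_sources[:, 0])[0, 1]
print(f"Canonical correlation of first shared component: {corr:.2f}")

Because CCA assumes a linear and instantaneous relation between the two views, it cannot recover sources whose coupling is nonlinear or delayed, which is the setting targeted by the spatial filtering methods described above.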
DFG Programme Research Grants
 
 
