Project Details

Regularization strategies for interpretable deep models and robust explanations with application to genomics

Subject Area Medical Informatics and Medical Bioinformatics
Image and Language Processing, Computer Graphics and Visualisation, Human Computer Interaction, Ubiquitous and Wearable Computing
Term since 2022
Project identifier Deutsche Forschungsgemeinschaft (DFG) - Project number 459422098
 
One of the prevailing concepts for achieving transparency in already trained machine learning models is explanation through attribution. Its aim is to explain which input dimensions were most important for the model to arrive at its prediction for an individual sample. Such explanations can be visualized and provided to the human user for verification and interpretation. While attribution has been used successfully in many applications, the relevance scores assigned to each input variable convey only limited information and may be affected by various types of noise, rendering them insufficient for gaining deeper insights into the complex relations in the data and the functioning of the model. Additionally, explanations may be correct yet not intuitively understandable to humans and therefore of limited use, e.g., when the model operates on an input domain that does not correspond to human sensory input.

The goal of this project is to develop novel methods for making explanations more robust and more readable for the human expert. We will address both the impact of the model and that of the explanation method itself. First, we will investigate the effect of different parameters guiding model training, as well as the use of model regularization techniques, for improving explanations with respect to desirable properties such as (group) sparsity or robustness. Next, we will directly compare current explainable AI techniques and suggest improvements, e.g., using methods from robust statistics. We will also develop post-processing methods that make explanations more readable for the human expert, either by improving their quality and information content or by providing information on the population level.

In another research direction, we will connect uncertainty with explanation in order to improve the latter. In particular, we will reconsider the currently deterministic form of explaining predictions and combine the concepts of uncertainty and explanation to generate explanation distributions, showing the whole range of different strategies for arriving at a prediction.

Our interest and application lie in molecular sequence data, specifically in identifying and understanding sequence patterns that influence gene expression via different mechanisms of gene regulation. Here, (redundant) combinations of features may lead to an observed effect; while a single explanation would relate only to part of the underlying biology, an explanation distribution provides the full picture. In summary, by developing novel regularization techniques and global interpretation methods, we expect this project to provide new techniques that lead to more robust and complete explanations, as well as to humanly accessible insights into the model's prediction strategies.
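
To make the attribution concept above concrete, the following is a minimal sketch of one common attribution method, gradient x input, assuming PyTorch and a toy convolutional model over a one-hot-encoded DNA sequence. The model architecture, the random input, and all parameters (sequence length, kernel size, number of filters) are illustrative placeholders, not the project's actual setup.

import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy regression model: one-hot DNA (4 channels) -> scalar "expression" score.
model = nn.Sequential(
    nn.Conv1d(4, 8, kernel_size=5, padding=2),
    nn.ReLU(),
    nn.AdaptiveAvgPool1d(1),
    nn.Flatten(),
    nn.Linear(8, 1),
)

# One random one-hot-encoded sequence of length 50 (batch size 1).
seq_len = 50
bases = torch.randint(0, 4, (seq_len,))
x = torch.nn.functional.one_hot(bases, num_classes=4).float().T.unsqueeze(0)
x.requires_grad_(True)

# Forward pass, then backpropagate the scalar prediction to the input.
model(x).squeeze().backward()

# Gradient x input: per-position relevance, summed over the four base channels.
relevance = (x.grad * x).sum(dim=1).squeeze(0)
top_positions = torch.topk(relevance.abs(), k=5).indices
print("Most relevant positions:", sorted(top_positions.tolist()))

Running this prints the five positions with the largest absolute relevance; in a genomics setting such positions would be the candidates for sequence patterns that drive the model's prediction.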
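
The explanation distributions mentioned above could, for instance, be approximated by repeating the attribution under stochastic forward passes. The sketch below uses Monte-Carlo dropout for this purpose; again, the model, the dropout rate, and the number of samples are illustrative assumptions rather than the project's actual method.

import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy model as before, but with a dropout layer that is kept stochastic.
model = nn.Sequential(
    nn.Conv1d(4, 8, kernel_size=5, padding=2),
    nn.ReLU(),
    nn.Dropout(p=0.2),
    nn.AdaptiveAvgPool1d(1),
    nn.Flatten(),
    nn.Linear(8, 1),
)
model.train()  # keep dropout active to sample different sub-networks

seq_len, n_samples = 50, 100
bases = torch.randint(0, 4, (seq_len,))
x = torch.nn.functional.one_hot(bases, num_classes=4).float().T.unsqueeze(0)

# Repeat the gradient x input attribution under stochastic forward passes.
samples = []
for _ in range(n_samples):
    xi = x.clone().requires_grad_(True)
    model(xi).squeeze().backward()
    samples.append((xi.grad * xi).sum(dim=1).squeeze(0).detach())

relevance = torch.stack(samples)          # shape: (n_samples, seq_len)
mean, std = relevance.mean(dim=0), relevance.std(dim=0)
print("Highest mean relevance at position:", int(mean.abs().argmax()))
print("Most uncertain relevance at position:", int(std.argmax()))

High variance across the sampled attributions would then point to positions whose relevance depends on which of several (redundant) features a given stochastic pass relies on, which is exactly the kind of information a single deterministic explanation cannot show.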
DFG Programme Research Units
 
 
