Project Details

Verification of Anomaly Detectors

Subject Area: Theoretical Computer Science; Software Engineering and Programming Languages; Technical Thermodynamics
Term: since 2022
Project identifier: Deutsche Forschungsgemeinschaft (DFG) - Project number 459419731
 
Anomaly detectors have to work safely and reliably when deployed in safety-critical applications such as chemical plants. Failure to report anomalies may result in imminent hazards to the environment and to human life, while false alarms during regular operation may cause unnecessary plant downtime and thus substantial financial or scientific costs. The overall goal of this project is therefore to develop efficient methods to verify (i.e., to prove formally) the safety and reliability of the artificial neural networks used for anomaly detection (Project A1) and data generation (Project A4).

While the general topic of verifying artificial neural networks has recently started to receive increasing attention, there is a lack of methods for the types of neural networks that arise in this research unit: networks over multi-modal sequential data that are trained in unsupervised learning settings. With this project, we will contribute to closing this gap and thereby substantially advance the state of the art in neural network verification.

First, we will elicit the correctness properties that neural networks need to satisfy to operate safely and reliably, both in the context of this research unit and beyond. To this end, we will closely collaborate with Projects A1 and A4 as well as with Projects B1 and B2.

Second, we will formalize the elicited correctness properties. This task entails developing a novel temporal logic, provisionally called Neural Temporal Logic (NTL), tailored specifically to neural networks over multi-modal sequential data. With this new logic, it will also become possible, for the first time, to express (and verify) "fuzzy" properties such as "an alarm has to be triggered when the gas in a given pipe starts to condense".

Third, we will develop efficient algorithms to verify neural networks against correctness properties expressed in NTL. We will consider both qualitative verification (proving or disproving that a network satisfies a property) and quantitative verification (computing the probability that a network satisfies a property). For both settings, we will develop two orthogonal types of verification algorithms: one operating on the actual neural network and one computing property-preserving abstractions. Since one cannot, in general, expect every neural network to satisfy all correctness properties immediately, we also intend to develop techniques that use counterexamples (obtained from failed verification attempts) to "repair" neural networks so that they fulfill the correctness properties. We will evaluate our algorithms empirically on the neural networks produced in Projects A1 and A4.
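
To illustrate the kind of property NTL is meant to capture, the condensation example above could be written in a signal-temporal-logic style over the sensor streams of a pipe. The formula below is a hedged sketch only: NTL is yet to be defined in this project, and the signal names T_t (temperature), p_t (pressure), the dew-point function T_dew, the delay bound Delta, and the output signal alarm_t are hypothetical names introduced purely for illustration.

    \mathbf{G}\,\big(\, T_t \le T_{\mathrm{dew}}(p_t) \;\rightarrow\; \mathbf{F}_{[0,\Delta]}\, \mathit{alarm}_t \,\big)

Read informally: globally (G), whenever the measured temperature drops to or below the dew point at the current pressure, an alarm must eventually (F) be raised within Delta time steps. A logic tailored to neural networks would additionally have to refer to the detector's inputs and outputs and to tolerate the fuzziness of predicates such as "starts to condense".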
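As a concrete, if simplified, example of the abstraction-based route, the sketch below (in Python, assuming NumPy) uses interval bound propagation to soundly over-approximate the outputs of a small ReLU feed-forward network over a box of inputs. This is a standard technique from the verification literature, shown here only to illustrate what a property-preserving abstraction can do; it is not the project's own method, and the network weights, input region, and output threshold are placeholders.

    import numpy as np

    def ibp_layer(lo, hi, W, b):
        # Split W into positive and negative parts so the box [lo, hi]
        # can be propagated soundly through the affine map W x + b.
        W_pos, W_neg = np.maximum(W, 0.0), np.minimum(W, 0.0)
        new_lo = W_pos @ lo + W_neg @ hi + b
        new_hi = W_pos @ hi + W_neg @ lo + b
        # ReLU is monotone, so applying it to the bounds stays sound.
        return np.maximum(new_lo, 0.0), np.maximum(new_hi, 0.0)

    def verify_output_below(layers, lo, hi, threshold):
        # Soundly check that every input in [lo, hi] yields outputs <= threshold.
        # True means "verified"; False only means "unknown" (abstraction too coarse).
        for W, b in layers:
            lo, hi = ibp_layer(lo, hi, W, b)
        return bool(np.all(hi <= threshold))

    # Illustrative usage with random placeholder weights (not a real detector).
    rng = np.random.default_rng(0)
    layers = [(rng.normal(size=(8, 4)), rng.normal(size=8)),
              (rng.normal(size=(1, 8)), rng.normal(size=1))]
    lo, hi = np.zeros(4), 0.1 * np.ones(4)
    print(verify_output_below(layers, lo, hi, threshold=10.0))

If verify_output_below returns True, every input in the box provably satisfies the output bound; a False result only means the abstraction was too coarse to decide, which is where refinement, or algorithms operating on the actual network, would come in.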
DFG Programme: Research Units
 
 
