Descriptive Complexity of Learning
Final Report Abstract
Descriptive complexity theory explains the computational complexity of algorithmic problems in terms of the language resources required to define them. In this project, we extended the descriptive complexity approach to machine learning problems: we characterised efficient learnability in terms of the descriptive complexity of the model, that is, the language resources required to define the hypotheses to be learned. To connect the abstract model-theoretic framework of our descriptive-complexity-theoretic approach with modern deep learning, we explored the theoretical foundations of graph neural networks as a deep learning architecture for graphs and other discrete structures. We were able to establish surprisingly tight connections between logic and graph neural networks. Furthermore, we developed a learning architecture, based on graph neural networks, for the logical formalism of constraint satisfaction problems.
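The connection between graph neural networks and logic runs through the Weisfeiler-Leman algorithm: message-passing GNNs distinguish exactly the graphs that 1-dimensional Weisfeiler-Leman colour refinement distinguishes, which in turn matches the expressive power of two-variable counting logic. The following minimal sketch of colour refinement is for illustration only; the function names and example graphs are ours, not taken from the publications below.

```python
def color_refinement(adj):
    """1-dimensional Weisfeiler-Leman (colour refinement).

    adj: dict mapping each node to a list of its neighbours.
    Returns a stable colouring as a dict node -> colour id.
    Each round refines a node's colour by the multiset of its
    neighbours' colours -- the same local information a
    message-passing GNN layer aggregates.
    """
    colors = {v: 0 for v in adj}  # start with a uniform colouring
    for _ in range(len(adj)):     # stabilises after at most n rounds
        # new colour = (own colour, sorted multiset of neighbour colours)
        signatures = {
            v: (colors[v], tuple(sorted(colors[u] for u in adj[v])))
            for v in adj
        }
        # relabel distinct signatures with small integer ids
        ids, new_colors = {}, {}
        for v, sig in signatures.items():
            if sig not in ids:
                ids[sig] = len(ids)
            new_colors[v] = ids[sig]
        stable = len(set(new_colors.values())) == len(set(colors.values()))
        colors = new_colors
        if stable:
            break
    return colors

# A classic pair 1-WL cannot tell apart: one 6-cycle vs. two triangles.
cycle6 = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
triangles = {0: [1, 2], 1: [0, 2], 2: [0, 1],
             3: [4, 5], 4: [3, 5], 5: [3, 4]}
hist = lambda c: sorted(c.values())
print(hist(color_refinement(cycle6)) == hist(color_refinement(triangles)))  # True
```

Both example graphs are 2-regular, so colour refinement (and hence any plain message-passing GNN) assigns every node the same colour and cannot separate them, even though the graphs are not isomorphic.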
Publications
- M. Grohe, C. Löding, and M. Ritzert. "Learning MSO-definable hypotheses on strings". In: International Conference on Algorithmic Learning Theory, ALT 2017, 15-17 October 2017, Kyoto University, Kyoto, Japan. Ed. by S. Hanneke and L. Reyzin. Vol. 76. Proceedings of Machine Learning Research. PMLR, 2017, pp. 434-451.
- S. van Bergerem. "Learning Concepts Definable in First-Order Logic with Counting". In: 34th Annual ACM/IEEE Symposium on Logic in Computer Science, LICS 2019. IEEE, 2019, pp. 1-13.
- C. Morris, M. Ritzert, M. Fey, W. L. Hamilton, J. E. Lenssen, G. Rattan, and M. Grohe. "Weisfeiler and Leman Go Neural: Higher-Order Graph Neural Networks". In: Proceedings of the AAAI Conference on Artificial Intelligence 33.01 (2019), pp. 4602-4609.
- E. Grienenberger and M. Ritzert. "Learning Definable Hypotheses on Trees". In: 22nd International Conference on Database Theory, ICDT 2019, March 26-28, 2019, Lisbon, Portugal. Ed. by P. Barceló and M. Calautti. Vol. 127. LIPIcs. Schloss Dagstuhl - Leibniz-Zentrum für Informatik, 2019, 24:1-24:18.
- J. Tönshoff, M. Ritzert, H. Wolf, and M. Grohe. "Graph Neural Networks for Maximum Constraint Satisfaction". In: Frontiers in Artificial Intelligence 3.
- M. Grohe. "The Logic of Graph Neural Networks". In: 36th Annual ACM/IEEE Symposium on Logic in Computer Science, LICS 2021. IEEE, 2021, pp. 1-17.
- R. Abboud, İ. İ. Ceylan, M. Grohe, and T. Lukasiewicz. "The Surprising Power of Graph Neural Networks with Random Node Initialization". In: Proceedings of the Thirtieth International Joint Conference on Artificial Intelligence, IJCAI 2021. International Joint Conferences on Artificial Intelligence Organization, 2021, pp. 2112-2118.
- S. van Bergerem and N. Schweikardt. "Learning Concepts Described By Weight Aggregation Logic". In: 29th EACSL Annual Conference on Computer Science Logic, CSL 2021, January 25-28, 2021, Ljubljana, Slovenia (Virtual Conference). Ed. by C. Baier and J. Goubault-Larrecq. Vol. 183. LIPIcs. Schloss Dagstuhl - Leibniz-Zentrum für Informatik, 2021, 10:1-10:18.
- M. Ritzert. "Learning on graphs with logic and neural networks". PhD thesis. RWTH Aachen University, Germany, 2021.
- S. van Bergerem, M. Grohe, and M. Ritzert. "On the Parameterized Complexity of Learning First-Order Logic". In: Proceedings of the 41st ACM SIGMOD-SIGACT-SIGAI Symposium on Principles of Database Systems, PODS 2022. ACM, 2022, pp. 337-346.
- S. van Bergerem. "Descriptive complexity of learning". PhD thesis. RWTH Aachen University, Germany, 2023.
