Project Details

Teaching Computer Programming using Deep Recursive Neural Networks

Subject Areas: Image and Language Processing, Computer Graphics and Visualisation, Human Computer Interaction, Ubiquitous and Wearable Computing; General and Domain-Specific Teaching and Learning
Term: from 2019 to 2020
Project identifier: Deutsche Forschungsgemeinschaft (DFG) - Project number 424460920
 
Final Report Year: 2021

Final Report Abstract

Computer programming is a hard skill to learn, and beginners often get stuck when trying to solve practical programming tasks. It would therefore be helpful to support novice programmers with a hint about what a good next step might be. Support from a human teacher would be the gold standard, but is infeasible in large classes or massive online courses. In this project, we developed an automatic pipeline which receives a current program as input and outputs a slightly modified version that gets closer to a correct solution, based on the programming processes of past students.

The first module of our pipeline is an auto-encoder: an artificial neural network that can translate a computer program into a vector (that is, an array of numbers) and translate such a vector back into a computer program. Importantly, our auto-encoder takes the grammar of the programming language into account, meaning that it follows the syntactic structure of a program and ensures that all generated programs are syntactically correct. This unique combination of grammar knowledge and artificial neural networks makes our auto-encoder particularly suitable for computer programs, yielding a lower auto-encoding error than several baselines.

Because artificial neural networks need a lot of training data, we forged a collaboration with the Australian e-learning company Grok Learning, which kindly provided us with access to a large, anonymized dataset of beginners' computer programs. After training on this data, we made our auto-encoder openly available for future research. For scenarios where only little training data is available, we also developed a variation of our neural network which requires much less training.

The second module of our pipeline is a predictor that estimates the most likely next step from the current program. Importantly, this predictor needs training data for each specific programming task.
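The grammar-constrained decoding idea can be illustrated with a minimal sketch. This is a hypothetical toy example, not the project's actual model: a hand-written arithmetic grammar and hand-set rule scores stand in for the neural decoder. The key point it demonstrates is that the decoder never emits raw tokens; it only selects production rules, so every output is syntactically valid by construction.

```python
# Toy context-free grammar for arithmetic expressions (illustration only).
GRAMMAR = {
    "Expr": [["Term", "+", "Term"], ["Term"]],
    "Term": [["Num", "*", "Num"], ["Num"]],
    "Num":  [["1"], ["2"], ["3"]],
}

def decode(symbol, scores, depth=0, max_depth=4):
    """Expand `symbol` by picking the highest-scoring production rule.

    `scores` maps (symbol, rule_index) -> float and stands in for the
    numbers a neural decoder would compute from an encoding vector.
    """
    rules = GRAMMAR.get(symbol)
    if rules is None:
        # Terminal symbol: emit it as-is.
        return symbol
    if depth >= max_depth:
        # Past the depth limit, pick the shortest rule so decoding terminates.
        best = min(range(len(rules)), key=lambda i: len(rules[i]))
    else:
        best = max(range(len(rules)), key=lambda i: scores.get((symbol, i), 0.0))
    return " ".join(decode(s, scores, depth + 1, max_depth) for s in rules[best])

# Hand-set scores favor the additive rule for Expr and the number "2".
scores = {("Expr", 0): 1.0, ("Term", 1): 1.0, ("Num", 1): 1.0}
program = decode("Expr", scores)
print(program)  # prints "2 + 2", a syntactically valid expression by construction
```

A full auto-encoder would additionally learn an encoder that maps programs to vectors and a decoder network that turns those vectors into rule scores; here the scores are fixed for illustration.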
Such task-specific data is scarce for new tasks; sometimes, only a single teacher demonstration is available. Fortunately, our auto-encoder makes prediction much easier: it translates programs into vectors of numbers, so the predictor only needs to tell us how these numbers should change. For this purpose, a simple linear model suffices. Such a model requires little data, is fast to train, and comes with a mathematical guarantee that its predictions get closer to a correct solution. This approach made hint generation much faster than existing methods while retaining a similar prediction error.

Importantly, the applications of our methods are not limited to hint generation. We have also used the encodings to visualize students' progress through the space of possible programs, to cluster typical ways of solving a programming task, and to recognize very unusual or novel attempts at solving the task. These additional techniques can help teachers make sense of student data and design their courses accordingly. In the longer term, our project may provide the basis for automatic detectors of programming knowledge which can be used to track and support students' learning over time.

Beyond education, our tools may be useful in chemical and medical research to find novel compounds and drugs with desired properties without having to synthesize every possible molecule in a lab. Further, our predictive methods could be used to analyze changes in other settings, such as traffic networks, social networks, or epidemiological networks. We facilitate such applications by providing all our software as open-source packages and all our scientific publications as openly available preprints.

The Covid-19 pandemic required us to adapt and perform our research in a fully digital fashion. This prevented us from conducting classroom studies, as we had originally planned.
Nonetheless, the dedicated teamwork of all participating researchers enabled us to achieve the project goals and, in part, to go beyond them. A striking and surprising success was the first place in an international educational data mining challenge. Overall, our project provided a novel and openly accessible set of tools for programming education and beyond, and we are excited to see future applications.
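The next-step prediction in encoding space can be sketched as follows. This is a hypothetical minimal version: the two-dimensional "encodings" and the fixed step size are invented for illustration, and the update rule shown (moving a fixed fraction of the way toward a correct solution's encoding) merely demonstrates why a linear, contraction-style model provably gets closer to the solution with every step.

```python
def predict_next(current, solution, alpha=0.5):
    """Linear next-step prediction in encoding space.

    Moves the current encoding a fraction `alpha` (0 < alpha < 1) of the
    way toward the solution encoding; this is a contraction, so each
    prediction is strictly closer to the solution than its predecessor.
    """
    return [c + alpha * (s - c) for c, s in zip(current, solution)]

def distance(a, b):
    """Euclidean distance between two encoding vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

student = [0.0, 4.0]    # toy encoding of the student's current program
solution = [2.0, 0.0]   # toy encoding of a correct solution

hint = predict_next(student, solution)
assert distance(hint, solution) < distance(student, solution)  # guaranteed progress
print(hint)  # prints [1.0, 2.0]
```

In a full pipeline, the predicted vector would then be passed back through the decoder to obtain a concrete, syntactically valid hint program.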

Publications

  • McBroom, Jessica and Benjamin Paaßen (2020). "Assessing the Quality of Mathematics Questions Using Student Confidence Scores". Ed. by Jack Wang et al. Winning contribution for task 3 of the NeurIPS 2020 Education Challenge.
  • Paaßen, Benjamin, Irena Koprinska, and Kalina Yacef (2020). "Recursive Tree Grammar Autoencoders".
  • Paaßen, Benjamin, Daniele Grattarola, et al. (2021). "Graph Edit Networks". In: Proceedings of the Ninth International Conference on Learning Representations (ICLR 2021). Ed. by Shakir Mohamed et al.
  • Paaßen, Benjamin, Jessica McBroom, et al. "Mapping Python Programs to Vectors using Recursive Neural Encodings". In: Journal of Educational Datamining.
    (See online at https://doi.org/10.5281/zenodo.5634224)
 
 
