Project Details

Neuro-symbolic graph-to-text generation

Subject Area Image and Language Processing, Computer Graphics and Visualisation, Human Computer Interaction, Ubiquitous and Wearable Computing, Applied Linguistics, Computational Linguistics
Term from 2022 to 2024
Project identifier Deutsche Forschungsgemeinschaft (DFG) - Project number 492792184
 
There is growing interest in computers that can communicate with users in natural human language. For example, smart home devices talk with their users, and computers write automatic summaries or answer questions. This kind of interaction includes the challenging step of generating human language with a computer. This project contributes to the stage where the computer already knows what to say, but not how to say it: it has built an abstract version of what it wants to express, in a format it can work with internally, but it must still express this in language, choosing words and sentence structure and getting the grammar right.

For complex text, current methods rely on neural networks, powerful machine learning models trained on large amounts of data, that generate surprisingly human-like text. However, these neural networks can have a mind of their own, dropping or inventing content, so that the generated text does not exactly express what the computer meant to say. Neural networks also tend to be opaque, since their inner workings essentially consist of large amounts of hard-to-interpret numbers.

In this project, I will add explicit linguistic structures to this process. This will make the model more transparent, since there will be interpretable structures to inspect that explain the model's decisions. It will also provide a scaffolding along which the neural generation system can generate the sentence, enabling it to produce text that stays closer to the original, abstract representation of what the computer wanted to express.

The first of two central technical contributions is to work out exactly how the linguistic structures and the neural networks should interact, so that together they create fluent, natural-sounding text with exactly the right meaning. The second contribution is a method to learn the linguistic structures from data that contains only pairs of abstract representations and corresponding sentences. Such pairs are the standard type of data used in the field, so a method that can learn the "hidden" linguistic structures will be very useful in practice.

I will also create a visualization interface, so that we can actually see how the model works. Finally, I will test the developed method in, and optimize it for, applications such as human-robot dialogue.
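To make the setting concrete, here is a toy sketch of what a pair of "abstract representation" and sentence can look like, and how a purely symbolic realizer might walk such a graph. Everything here (the dict-based graph format, the role names `pred`/`agent`/`theme`, the inflection rule) is a hypothetical illustration for this summary, not the project's actual model or data format:

```python
def realize(node):
    """Turn a toy meaning-graph node into an English phrase.

    Hypothetical rules: nodes without arguments are entities and
    become noun phrases; nodes with arguments are events and become
    clauses, with a naive third-person-singular inflection.
    """
    if "agent" not in node and "theme" not in node:
        return "the " + node["pred"]          # entity -> noun phrase
    parts = []
    if "agent" in node:
        parts.append(realize(node["agent"]))  # subject
    parts.append(node["pred"] + "s")          # naive 3sg verb form
    if "theme" in node:
        parts.append(realize(node["theme"]))  # object
    return " ".join(parts)

# One training pair: an abstract meaning graph and its sentence.
graph = {"pred": "chase",
         "agent": {"pred": "dog"},
         "theme": {"pred": "cat"}}
sentence = realize(graph)
print(sentence)  # the dog chases the cat
```

A neural generator learns such mappings from many (graph, sentence) pairs instead of hand-written rules; the project's idea is to let learned, interpretable linguistic structures play the scaffolding role that the hard-coded rules play in this sketch.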
DFG Programme WBP Fellowship
International Connection Netherlands, United Kingdom
 
 
