
A Generalised Approach to Learning Models of Human Behaviour for Activity Recognition from Textual Instructions

Subject Area Image and Language Processing, Computer Graphics and Visualisation, Human Computer Interaction, Ubiquitous and Wearable Computing
Term from 2016 to 2019
Project identifier Deutsche Forschungsgemeinschaft (DFG) - Project number 314457946
 
Final Report Year 2019

Final Report Abstract

Computational models for activity recognition aim at recognising user actions and goals based on precondition-effect rules. One problem such approaches face is how to obtain the model structure. To reduce the need for domain experts or sensor data during model building, methods for learning models of human behaviour from textual data have been investigated. Existing approaches, however, make various simplifying assumptions during the learning process, which renders the resulting models inapplicable to activity recognition problems.

To address this problem, this project aimed at developing a generalised methodology for learning the model structure from textual instructions. The methodology combines existing and novel methods for model learning. Given a textual input, it first generates a situation model, from which computational state space models (CSSMs) are derived. A situation model is a semantic structure representing the relevant elements discovered in the textual description (actions, objects, locations, properties of objects, and abstractions of objects) together with the causal, spatial, functional, and abstraction relations between these elements. Based on this semantic structure, the methodology then generates precondition-effect rules describing the actions that can be executed in the problem domain, the initial state of the problem, and the possible goal states.

The generated CSSMs are used for activity recognition tasks in the domain of daily activities. As the generated models are relatively general, they may fail to correctly recognise the executed activities because too many action options remain plausible. To address this problem, an optimisation phase follows, in which the action weights are adjusted based on existing plan traces. The generated models are compared to hand-crafted models for the same problem domains.
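The pipeline described above can be illustrated with a minimal sketch of precondition-effect rules and forward state expansion in a CSSM. All action and fact names (take_cup, cup_on_table, etc.) are hypothetical illustrations, not the project's actual model format:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Action:
    """A precondition-effect rule generated from a textual instruction."""
    name: str
    preconditions: frozenset  # facts that must hold before execution
    add_effects: frozenset    # facts that become true afterwards
    del_effects: frozenset    # facts that stop being true

def applicable(state: frozenset, action: Action) -> bool:
    # An action is executable when all its preconditions hold.
    return action.preconditions <= state

def apply(state: frozenset, action: Action) -> frozenset:
    # Successor state in the computational state space model.
    return (state - action.del_effects) | action.add_effects

# Toy domain, as might be extracted from an instruction like
# "take the cup, then fill it with water" (illustrative only).
take_cup = Action("take_cup",
                  preconditions=frozenset({"cup_on_table", "hand_free"}),
                  add_effects=frozenset({"holding_cup"}),
                  del_effects=frozenset({"cup_on_table", "hand_free"}))
fill_cup = Action("fill_cup",
                  preconditions=frozenset({"holding_cup"}),
                  add_effects=frozenset({"cup_full"}),
                  del_effects=frozenset())

initial = frozenset({"cup_on_table", "hand_free"})
goal = frozenset({"cup_full"})

state = initial
for a in (take_cup, fill_cup):
    assert applicable(state, a)
    state = apply(state, a)
assert goal <= state  # the goal state is reached
```

Representing states as frozen sets keeps successor computation side-effect free, which matches how state space models enumerate reachable states during recognition.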
Not surprisingly, the results show that models generated from texts cannot capture implicit common-sense knowledge. In other words, we as humans add knowledge to the models in order to encode relevant context information or to specialise the model. This gap is partly closed by the optimisation phase, after which the manually built models only slightly outperform the generated models on activity recognition tasks. The generated models are, however, unable to provide the additional contextual information that humans encode in hand-crafted models. This poses the challenging research question of how to combine multiple heterogeneous sources of information in order to generate rich and accurate computational models for activity recognition.
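The optimisation phase mentioned above can be sketched in its simplest form: estimating action weights from how often actions occur in observed plan traces. This uses a plain relative-frequency estimate with add-one smoothing as an assumed stand-in; the project's actual optimisation procedure may differ, and all trace contents are hypothetical:

```python
from collections import Counter

def learn_action_weights(traces, actions):
    """Estimate a weight for each action from its frequency in the
    plan traces.  Add-one smoothing keeps a nonzero weight for
    actions the traces never exercise."""
    counts = Counter(a for trace in traces for a in trace)
    total = sum(counts[a] + 1 for a in actions)
    return {a: (counts[a] + 1) / total for a in actions}

# Illustrative traces of a daily activity (hypothetical names).
traces = [
    ["take_cup", "fill_cup", "drink"],
    ["take_cup", "fill_cup", "drink"],
    ["take_cup", "put_cup"],
]
actions = ["take_cup", "fill_cup", "drink", "put_cup"]
weights = learn_action_weights(traces, actions)

# Frequently observed actions now carry higher weight, so the model
# prefers them when several actions fit the sensor observations.
assert weights["take_cup"] > weights["put_cup"]
```

Specialising an overly general model this way narrows the recognition hypotheses toward behaviour actually seen in the traces, without requiring a domain expert to hand-tune the model.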

Publications

  • From Textual Instructions to Sensor-based Recognition of User Behaviour. In Companion Proceedings of the ACM International Conference on Intelligent User Interfaces (IUI 2016), Sonoma, CA, pp. 67–73. 2016
    Kristina Yordanova
    (See online at https://doi.org/10.1145/2876456.2879488)
  • A Simple Model for Improving the Performance of the Stanford Parser for Action Detection in Textual Instructions. In Proceedings of Recent Advances in Natural Language Processing, Varna, Bulgaria, pp. 831–838. 2017
    Kristina Yordanova
    (See online at https://doi.org/10.26615/978-954-452-049-6_106)
  • Automatic Generation of Situation Models for Plan Recognition Problems. In Proceedings of Recent Advances in Natural Language Processing, Varna, Bulgaria, pp. 823–830. 2017
    Kristina Yordanova
    (See online at https://doi.org/10.26615/978-954-452-049-6_105)
  • Knowledge Extraction from Task Narratives. In Proceedings of 4th International Workshop on Sensor-based Activity Recognition and Interaction, Rostock, Germany, pp. 7:1–7:6. 2017
    Kristina Yordanova, Carlos Monserrat, David Nieves, José Hernández-Orallo
    (See online at https://doi.org/10.1145/3134230.3134234)
  • TextToHBM: A Generalised Approach to Learning Models of Human Behaviour for Activity Recognition from Textual Instructions. In AAAI Workshop Proceedings (PAIR 2017), 2017
    Kristina Yordanova
  • Creating and Exploring Semantic Annotation for Behaviour Analysis. In Sensors, 18(9): 2778. 2018
    Kristina Yordanova and Frank Krüger
    (See online at https://doi.org/10.3390/s18092778)
  • Extracting Planning Operators from Instructional Texts for Behaviour Interpretation. In German Conference on Artificial Intelligence, Berlin, Germany, pp. 215–228. 2018
    Kristina Yordanova
    (See online at https://doi.org/10.1007/978-3-030-00111-7_19)
  • Analysing Cooking Behaviour in Home Settings: Towards Health Monitoring. In Sensors, 19(3): 646. 2019
    Kristina Yordanova, Stefan Lüdtke, Samuel Whitehouse, Frank Krüger, Adeline Paiement, Majid Mirmehdi, Ian Craddock, Thomas Kirste
    (See online at https://doi.org/10.3390/s19030646)
  • Automatic Detection of Everyday Social Behaviours and Environments from Verbatim Transcripts of Daily Conversations. In Proceedings of IEEE International Conference on Pervasive Computing and Communications. Kyoto, Japan. pp. 1–10, 2019
    Kristina Y. Yordanova, Burcu Demiray, Matthias R. Mehl, Mike Martin
    (See online at https://doi.org/10.1109/PERCOM.2019.8767403)
  • Challenges Providing Ground Truth for Pervasive Healthcare Systems. In IEEE Pervasive Computing, 18(2): 100–104. 2019
    Kristina Yordanova
    (See online at https://doi.org/10.1109/MPRV.2019.2912261)
 
 
