Human Manipulation Actions: Neurophysiological Validation of their Formal Characterization
Cognitive, Systems and Behavioural Neurobiology
Final Report Abstract
This project consisted of two parts: a theory-oriented part, in which we sought a better insight into how humans “understand” the manipulation actions of others, and an experimental part, in which we tested hypotheses arising from the theoretical investigations in behavioral as well as fMRI experiments. The theoretical investigations directly led to a behavioral experiment in which we tested specific predictions in virtual reality. The theoretical predictions were further addressed in two fMRI experiments, supported by a behavioral test-retest study.
The main results are as follows. We performed a theoretical analysis of human manipulation actions focusing on changes in the spatiotemporal relations between the objects (including hands) in a scene. The underlying algorithm is based on computer vision and is objective in the sense that it operates without human intervention. Actions are recognized at the moments when specific events occur, and we showed experimentally that humans in most cases use the same events to recognize an action. This result was deepened by a test-retest study, which revealed that human action segmentations are consistent with the computer vision-based touching and untouching events. Modelling fMRI activity during observation of the same actions, we found that segmentation was announced by a strong increase of visual activity at touching events, followed by the engagement of frontal, hippocampal and insular regions at subsequent untouching events, signaling the updating of expectations. By replacing real objects with dough balls, we additionally investigated the role of objects in action segmentation. In this case, segmentation judgments were even more strongly related to the algorithmically determined structure of touching and untouching events, and movement-related brain areas played a stronger role than when real objects were present. Finally, using representational similarity and multidimensional scaling analyses, we showed that the same event structure is predictive of action classification, as reflected in both behavioral and brain data.
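The event-based characterization described above can be illustrated with a minimal sketch: the core idea of the (enriched) Semantic Event Chain approach is to monitor, frame by frame, which objects in the scene touch which others, and to register a touching or untouching event whenever a contact relation changes. The following Python sketch is our own illustration, not the published implementation; the function name detect_events and the toy data are hypothetical.

    from itertools import combinations

    def detect_events(frames, labels):
        """Detect touching/untouching events from per-frame contact matrices.

        frames: list of symmetric 0/1 matrices; frames[t][i][j] is 1 if
                objects i and j touch at time step t.
        labels: names of the tracked objects (hands count as objects).
        Returns a list of (time, event, object_a, object_b) tuples.
        """
        events = []
        n = len(labels)
        for t in range(1, len(frames)):
            for i, j in combinations(range(n), 2):
                before, now = frames[t - 1][i][j], frames[t][i][j]
                if not before and now:
                    events.append((t, "touching", labels[i], labels[j]))
                elif before and not now:
                    events.append((t, "untouching", labels[i], labels[j]))
        return events

    # Toy example: a hand grasps a cup, lifts it off the table, releases it.
    labels = ["hand", "cup", "table"]
    frames = [
        [[0, 0, 0], [0, 0, 1], [0, 1, 0]],  # cup rests on table
        [[0, 1, 0], [1, 0, 1], [0, 1, 0]],  # hand touches cup
        [[0, 1, 0], [1, 0, 0], [0, 0, 0]],  # cup lifted off table
        [[0, 0, 0], [0, 0, 0], [0, 0, 0]],  # hand releases cup
    ]
    for t, event, a, b in detect_events(frames, labels):
        print(f"t={t}: {event} ({a}, {b})")

An action is then characterized by the resulting ordered event sequence, which is the structure that both the algorithm and, as the behavioral experiments indicated, human observers appear to exploit.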
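Similarly, the representational similarity analysis mentioned at the end compares pairwise action dissimilarities predicted by the event-chain model with those measured in brain or behavioral data. Below is a minimal sketch, assuming NumPy/SciPy and using toy dissimilarity matrices of our own; rsa_score is a hypothetical helper, not part of the published pipeline.

    import numpy as np
    from scipy.stats import spearmanr
    from scipy.spatial.distance import squareform

    def rsa_score(model_rdm, data_rdm):
        """Spearman correlation between the upper triangles of two
        representational dissimilarity matrices (RDMs)."""
        # squareform extracts the condensed upper triangle of a
        # symmetric matrix with zero diagonal.
        rho, p = spearmanr(squareform(model_rdm), squareform(data_rdm))
        return rho, p

    # Toy example with 4 action conditions: dissimilarities from the
    # event-chain model vs. (hypothetical) neural pattern dissimilarities.
    model_rdm = np.array([[0, 1, 3, 4],
                          [1, 0, 2, 3],
                          [3, 2, 0, 1],
                          [4, 3, 1, 0]], dtype=float)
    data_rdm = model_rdm + np.random.default_rng(0).normal(0, 0.5, (4, 4))
    data_rdm = (data_rdm + data_rdm.T) / 2   # enforce symmetry
    np.fill_diagonal(data_rdm, 0)            # zero diagonal

    rho, p = rsa_score(model_rdm, data_rdm)
    print(f"model-data RDM correlation: rho = {rho:.2f} (p = {p:.3f})")

A high correlation between the model RDM and the data RDM indicates that the event-chain structure is predictive of how actions are distinguished in behavior and brain activity.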
Publications
- Ziaeetabar, F., Kulvicius, T., Tamosiunaite, M. & Wörgötter, F. (2018). Recognition and prediction of manipulation actions using Enriched Semantic Event Chains. Robotics and Autonomous Systems, 110, 173-188.
- Wörgötter, F., Ziaeetabar, F., Pfeiffer, S., Kaya, O., Kulvicius, T. & Tamosiunaite, M. (2020). Humans Predict Action using Grammar-like Structures. Scientific Reports, 10(1).
- Ziaeetabar, F., Pomp, J., Pfeiffer, S., El-Sourani, N., Schubotz, R. I., Tamosiunaite, M. & Wörgötter, F. (2020). Using enriched semantic event chains to model human action prediction based on (minimal) spatial information. PLOS ONE, 15(12), e0243829.
- Pomp, J., Heins, N., Trempler, I., Kulvicius, T., Tamosiunaite, M., Mecklenbrauck, F., Wurm, M. F., Wörgötter, F. & Schubotz, R. I. (2021). Touching events predict human action segmentation in brain and behavior. NeuroImage, 243, 118534.
