
Empirical and computational investigation of the integration of speech and iconic gestures, taking into account their development in preschool children (EcoGest)

Subject area: General and Comparative Linguistics, Experimental Linguistics, Typology, Non-European Languages;
Image and Language Processing, Computer Graphics and Visualization, Human-Computer Interaction, Ubiquitous and Wearable Computing
Funding: 2017 to 2022
Project identifier: Deutsche Forschungsgemeinschaft (DFG) - Project number 322326507
 
Year of creation: 2024

Summary of project results

The aim of the project was to study the development of iconic communication in speech and gesture, reflecting the mental representations processed during a communicative act. The main hypothesis was that, in children of the considered age group (4-5 years), the integration of gesture with speech depends on (1) the context of use as well as (2) their developing spatial and cognitive skills. We conducted a study with N = 55 children in two consecutive sessions. In the first session, children performed three communicative tasks; in the second session, a nonverbal intelligence test (SON-R) was administered. Due to the COVID-19 pandemic, only 16 of the children participated longitudinally.

Context was operationalized as different genres: explaining, retelling, and illustrating. It was hypothesized that speech-gesture integration differs across these communicative tasks with regard to the frequency and patterns of the iconic gestures employed. In a joint effort, we developed and applied a fine-grained coding scheme describing different iconic gestural patterns across the three tasks, based on an existing taxonomy (Cartmill et al., 2017); an illustrative sketch of such an annotation follows below. We found that more iconic gestures were elicited when children were prompted by a story presented on video rather than in a book; the frequency of gestures thus appears to be modulated by the cognitive processing of moving stimuli. We also found differences in how many aspects of the events children verbalized: children's verbal behavior was richest in Explaining. When controlling for this difference in richness of events, the tasks of Retelling and Illustrating were comparable with regard to the occurrence of hand-as-hand and hand-as-object gestures, whereas Explaining seemed to elicit more hand-as-neutral gestures. Further experiments balancing the richness of events are necessary to verify the context-dependency of gestural patterns.

We also found support for some general mechanisms. As children interacted with their caregivers, we could observe how caregivers scaffolded the children's performance when comparing Retelling to Explaining. In both tasks, we found a significant negative relation between the children's individual contribution and their caregivers' scaffolding: the less children could perform individually, the more support they received from their caregivers. This relation evidences a fine-tuned interactive support system. In addition, the results reveal a strong relation between higher genre competence and an increased frequency of iconic gestures: a qualitative in-depth analysis showed that children with high discourse competence used significantly more iconic gestures during narration than children with lower discourse competence. Further support comes from our longitudinal data, which reveal that the more events children verbalize, the more they gesture at the age of 4. At the age of 5, there seems to be a trend toward more verbosity, especially in the task of Retelling.
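As a purely illustrative aside, the kind of record produced by a coding scheme like the one described above can be pictured as a small data structure. This is a minimal sketch in Python; all identifiers are hypothetical, and only the task names and gestural categories are taken from the summary, not from the project's actual coding materials.

```python
from dataclasses import dataclass
from enum import Enum

class Task(Enum):
    """The three communicative tasks (genres) used in the study."""
    EXPLAINING = "explaining"
    RETELLING = "retelling"
    ILLUSTRATING = "illustrating"

class HandRole(Enum):
    """Iconic gestural patterns named in the summary (after Cartmill et al., 2017)."""
    HAND_AS_HAND = "hand-as-hand"        # the hand depicts a hand acting
    HAND_AS_OBJECT = "hand-as-object"    # the hand depicts an object itself
    HAND_AS_NEUTRAL = "hand-as-neutral"  # no specific hand-object mapping

@dataclass
class IconicGestureAnnotation:
    """One coded iconic gesture from a child's task performance (hypothetical schema)."""
    child_id: str
    age_years: int      # 4 or 5 in the studied age group
    task: Task
    hand_role: HandRole

# Example record: a hand-as-object gesture observed during Retelling.
example = IconicGestureAnnotation(
    child_id="C001", age_years=4,
    task=Task.RETELLING, hand_role=HandRole.HAND_AS_OBJECT,
)
```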
Concerning the children's spatial and cognitive skills, we assumed an influence on how meaning is integrated and distributed across gesture and speech. Based on previous literature, we investigated (1) when and how children contribute information via iconic gestures that is not conveyed verbally, and (2) whether children employ character-viewpoint gestures (enacting something from a first-person perspective) or observer-viewpoint gestures (depicting an event from a third-person perspective). We found that, across the three studied genres, children's scores on visuospatial skills were predictive of the frequency of observer-viewpoint gestures. In addition, children scoring higher on both spatial and general cognitive skills conveyed significantly more meaning solely via gesture and less via speech. These findings support a gesture-as-simulated-action account, but extend it to relations with situational factors such as genre and with the children's developing skills.

Our extensive empirical studies were carried out in collaboration among all project parts (Psycholinguistics, Linguistics, Computer Science). This work included the development of AI-based methods to prepare and quantitatively analyze the collected empirical data (e.g., using speech embeddings, NLG techniques for incremental and child-like language generation, or a representation formalism for the underlying visuospatial meaning of objects and actions in the domain). These are important steps toward a computational cognitive model of speech-gesture integration, providing an empirically grounded basis for the future development of an integrated process model of speech-gesture production in children.
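To make the mention of speech embeddings concrete: the summary does not specify which models or tools were used, so the following is only a hedged sketch, assuming the sentence-transformers library and a generic pretrained model as stand-ins; the utterances and the cross-task comparison are invented for illustration.

```python
# Illustrative sketch only: assumes the sentence-transformers library and a
# generic pretrained model; the project's actual embedding pipeline is not
# described in the summary.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")  # hypothetical model choice

# Hypothetical transcribed child utterances from two of the tasks.
retelling = ["the frog jumped out of the jar", "and then the dog fell down"]
explaining = ["you have to turn the handle", "then the box opens up"]

def mean_direction(embeddings: np.ndarray) -> np.ndarray:
    """Unit-length mean embedding, so dot products are cosine similarities."""
    m = embeddings.mean(axis=0)
    return m / np.linalg.norm(m)

emb_retell = model.encode(retelling, normalize_embeddings=True)
emb_explain = model.encode(explaining, normalize_embeddings=True)

# Cosine similarity between the tasks' mean utterance embeddings, e.g. as a
# rough measure of how semantically distinct the verbal material of two
# genres is.
similarity = float(np.dot(mean_direction(emb_retell), mean_direction(emb_explain)))
print(f"mean cross-task similarity: {similarity:.3f}")
```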

