Mechanisms of spatial context learning through touch and vision
Final Report Abstract
The focus of this project was crossmodal plasticity in spatial context learning. We devised a novel visual-tactile search paradigm suited to investigating context learning in unimodal and multimodal search tasks, and we employed behavioral and neuroscience methods in addition to mathematical modeling. The behavioral experiments provided converging evidence that consistent visual-tactile relations can be (incidentally) acquired in a multimodal search environment and then guide the search for a visual or a tactile target. For crossmodal contextual associations to develop and guide visual or tactile search, the tactile spatial configuration must be processed sufficiently ahead of the visual stimuli. Critically, this processing involves remapping the somatotopically sensed tactile stimuli into common external coordinates shared with the visual stimuli; that is, the crossmodal contextual-cueing effect is supported by an external, most likely visuospatial, representation.

These findings were corroborated by the EEG studies. In visual search, both uni- and crossmodal context cues benefited the same visual processing stages, namely those related to the selection and subsequent analysis of the search target. In contrast, when the predictive context was tactile or visual in a tactile odd-one-out search task, both somatosensory and visual cortical regions contributed to the more efficient processing of the tactile target in repeated stimulus arrays, with their involvement weighted differentially depending on the sensory modality carrying the predictive information. Further, drift-diffusion modeling suggested that multisensory (relative to unisensory) training enhances contextual cueing primarily by facilitating the attentional-guidance stage of the search process, as evidenced by a higher rate of evidence accumulation towards the required decision (a toy simulation of this relationship is sketched at the end of this abstract).

The project's findings also raise several further research questions. One challenging issue in multisensory contextual learning is the asymmetry of multisensory remapping: tactile stimuli are routinely and efficiently remapped from a somatotopic into an external format, although it takes time for the sensation to become stably localized in external space, whereas there are no comparable routines for remapping visual stimuli from an external into a somatotopic format. Visual guidance might therefore benefit more from multisensory learning than guidance of the to-be-remapped tactile modality. In a redundant visual-tactile search, the visual items may even be 'distracting' and conflict with the somatotopic-frame-based tactile search, so a more dedicated design might be needed to observe a multisensory training benefit in tactile search. Another important issue concerns where and how visual-tactile associations are represented in the brain, so as to understand the functional and structural mechanisms underlying crossmodal contextual learning. The facilitation of search by repeated crossmodal contexts may result from multisensory integration of information from different sensory cortices on a supra-modal, attention-guiding priority map of space. This likely involves top-down feedback from higher brain circuits, such as the medial temporal lobe and in particular the hippocampus, which is known to play a role in signaling cross-modal predictions, thereby realizing the unimodally predicted distractor-target relations on the supra-modal priority map. Visuospatial memories, in particular, also depend strongly on the integrity of the hippocampus. Accordingly, the observed enhancement of electrocortical responses over visual and tactile areas might result from top-down feedback signals transmitted by the hippocampus, or by other higher-order regions concerned with processing the tactile target.
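To make the drift-diffusion account above concrete, the following minimal sketch simulates a standard two-boundary Wiener diffusion process and contrasts a lower with a higher drift rate. It is an illustration only, not the project's fitted model: the drift rates, boundary separation, non-decision time, and noise level are arbitrary placeholder values chosen for demonstration.

```python
# Minimal illustrative simulation of a two-boundary drift-diffusion process,
# assuming standard Wiener-diffusion dynamics. NOT the project's fitted model:
# all parameter values below are arbitrary placeholders for demonstration.
import numpy as np

rng = np.random.default_rng(0)

def simulate_ddm(drift, boundary=1.0, non_decision=0.3,
                 noise=1.0, dt=0.001, n_trials=2000):
    """Return mean RT (s) and accuracy of a simple diffusion process.

    Evidence starts at 0 and accumulates at mean rate `drift` (plus Gaussian
    noise) until it reaches +boundary ("correct") or -boundary ("error").
    """
    rts, hits = [], []
    for _ in range(n_trials):
        x, t = 0.0, 0.0
        while abs(x) < boundary:
            x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
            t += dt
        rts.append(t + non_decision)
        hits.append(x >= boundary)
    return float(np.mean(rts)), float(np.mean(hits))

# A higher drift rate (hypothetically, search in repeated, context-cued
# displays after multisensory training) yields faster and more accurate
# responses than a lower drift rate (novel displays), with no change to the
# response boundary or the non-decision time.
for label, drift in [("lower drift (novel context)", 1.0),
                     ("higher drift (repeated context)", 1.5)]:
    rt, acc = simulate_ddm(drift)
    print(f"{label}: mean RT = {rt:.3f} s, accuracy = {acc:.2f}")
```

In this framework, a selective increase in the drift rate, with boundary and non-decision time held constant, is the standard signature of more efficient attentional guidance towards the target rather than of changes at later, response-related stages.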
Publications
- Chen, Siyi; Shi, Zhuanghua; Zang, Xuelian; Zhu, Xiuna; Assumpção, Leonardo; Müller, Hermann J. & Geyer, Thomas: Crossmodal learning of target-context associations: When would tactile context predict visual search? Attention, Perception, & Psychophysics, 82(4), 1682-1694.
- Chen, Siyi; Shi, Zhuanghua; Müller, Hermann J. & Geyer, Thomas: Multisensory visuo-tactile context learning enhances the guidance of unisensory visual search. Scientific Reports, 11(1).
- Chen, Siyi; Shi, Zhuanghua; Müller, Hermann J. & Geyer, Thomas: When visual distractors predict tactile search: The temporal profile of cross-modal spatial learning. Journal of Experimental Psychology: Learning, Memory, and Cognition, 47(9), 1453-1470.
- Zang, Xuelian; Zinchenko, Artyom; Wu, Jiao; Zhu, Xiuna; Fang, Fang & Shi, Zhuanghua: Contextual cueing in co-active visual search: Joint action allows acquisition of task-irrelevant context. Attention, Perception, & Psychophysics, 84(4), 1114-1129.
- Chen, Siyi; Shi, Zhuanghua; Zinchenko, Artyom; Müller, Hermann J. & Geyer, Thomas: Cross-modal contextual memory guides selective attention in visual-search tasks. Psychophysiology, 59(7).
- Chen, Siyi; Geyer, Thomas; Zinchenko, Artyom; Müller, Hermann J. & Shi, Zhuanghua: Multisensory Rather than Unisensory Representations Contribute to Statistical Context Learning in Tactile Search. Journal of Cognitive Neuroscience, 34(9), 1702-1717.
