Value-driven Crossmodal Attention
Final Report Abstract
This collaborative research project investigated the dynamics of value-based attention along several dimensions: its transfer between sensory modalities, the processing stages it affects, its impact on search performance when value associations attach to task-irrelevant stimuli, its interplay with individual preferences, and its interaction with the brain's attention networks. The investigations showed that value-driven attention transfers across sensory modalities. Studies using visual-tactile search tasks found that value-based attention predominantly influences the later stage of target identification, a finding supported by experiments that separated the search and identification stages within each modality. The research also examined how reward associations can form at different levels (feature, response, and task set), with evidence that both feature-based and task-set-based associations are viable, provided the task set remains continuous between training and testing phases. Further studies highlighted the relationship between reward association and individual color-valence preferences, particularly for the commonly used red and green stimuli. Individual preferences for certain color-valence combinations enhanced search performance even when the preferred color appeared as a distractor, suggesting an overarching facilitation effect, likely mediated by the brain's general alerting system, that enhances both attentional selection and target processing. Neuroimaging studies using EEG and fMRI underscored these findings, showing that high-reward targets require greater inhibitory control to counteract stronger automatic responses. This inhibition involves the motor cortex, the medial frontal cortex, and the frontobasal-ganglia network.
Together, these findings provide a comprehensive view of how value-based attention operates across sensory modalities and processing stages, its behavioral implications, and the underlying neural mechanisms.
Publications
-
Neural Dynamics of Reward-Induced Response Activation and Inhibition. Cerebral Cortex, 29(9), 3961-3976.
Wang, Lihui; Chang, Wenshuo; Krebs, Ruth M.; Boehler, C. Nico; Theeuwes, Jan & Zhou, Xiaolin
-
Crossmodal learning of target-context associations: When would tactile context predict visual search? Attention, Perception, & Psychophysics, 82(4), 1682-1694.
Chen, Siyi; Shi, Zhuanghua; Zang, Xuelian; Zhu, Xiuna; Assumpção, Leonardo; Müller, Hermann J. & Geyer, Thomas
-
A value-driven McGurk effect: Value-associated faces enhance the influence of visual information on audiovisual speech perception and its eye movement pattern. Attention, Perception, & Psychophysics, 82(4), 1928-1941.
Luo, Xiaoxiao; Kang, Guanlan; Guo, Yu; Yu, Xingcheng & Zhou, Xiaolin
-
Microsaccadic Eye Movements but not Pupillary Dilation Response Characterizes the Crossmodal Freezing Effect. Cerebral Cortex Communications, 1(1).
Chen, Lihan & Liao, Hsin-I.
-
Reward facilitates response conflict resolution via global motor inhibition: Electromyography evidence. Psychophysiology, 58(10).
Wang, Lihui; Luo, Xiaoxiao; Yuan, Ti‐Fei & Zhou, Xiaolin
-
Reward makes the rhythmic sampling of spatial attention emerge earlier. Attention, Perception, & Psychophysics, 83(4), 1522-1537.
Su, Zhongbin; Wang, Lihui; Kang, Guanlan & Zhou, Xiaolin
-
Reward-based distractor interference: associative learning and interference stage. Dissertation, LMU München: Graduate School of Systemic Neurosciences (GSN)
Li, Bing
-
When visual distractors predict tactile search: The temporal profile of cross-modal spatial learning. Journal of Experimental Psychology: Learning, Memory, and Cognition, 47(9), 1453-1470.
Chen, Siyi; Shi, Zhuanghua; Müller, Hermann J. & Geyer, Thomas
-
Behavioral and neural mechanisms underlying selective attention in anxiety and value-driven selection modulated by associative learning. Dissertation, LMU München: Faculty of Psychology and Educational Sciences
Stanković, Miloš
-
Cross‐modal contextual memory guides selective attention in visual‐search tasks. Psychophysiology, 59(7).
Chen, Siyi; Shi, Zhuanghua; Zinchenko, Artyom; Müller, Hermann J. & Geyer, Thomas
-
Perceptual learning across saccades: Feature but not location specific. Proceedings of the National Academy of Sciences, 120(43).
Grzeczkowski, Lukasz; Shi, Zhuanghua; Rolfs, Martin & Deubel, Heiner
-
Statistical context learning in tactile search: Crossmodally redundant, visuo-tactile contexts fail to enhance contextual cueing. Frontiers in Cognition, 2.
Chen, Siyi; Shi, Zhuanghua; Vural, Gizem; Müller, Hermann J. & Geyer, Thomas
-
Task-irrelevant valence-preferred colors boost visual search for a singleton-shape target. Psychological Research, 88(2), 417-437.
Stanković, Miloš; Müller, Hermann J. & Shi, Zhuanghua
-
Influences of temporal and probabilistic expectation on subjective time of emotional stimulus. Quarterly Journal of Experimental Psychology, 77(9), 1824-1834.
Karaaslan, Aslan & Shi, Zhuanghua
