Project Details

Crossmodal Temporal Integration in Apparent Motion

Subject Area General, Cognitive and Mathematical Psychology
Term from 2011 to 2014
Project identifier Deutsche Forschungsgemeinschaft (DFG) - Project number 194001222
 
Final Report Year 2015

Final Report Abstract

The focus of this project was dynamic crossmodal temporal integration of multisensory information, with a particular emphasis on motion perception. We used both implicit measures of temporal processing, based on Ternus apparent motion, and explicit measures, based on duration reproduction.

In the first stage of the research, two important factors in multisensory temporal integration were identified: the crossmodal interval and perceptual grouping. Several studies provided convergent evidence that crossmodal temporal integration determines the temporal ventriloquist effect. Asymmetric crossmodal or intramodal perceptual grouping, on the other hand, may abolish the temporal ventriloquist effect. In addition, interval (duration) integration plays a critical role in crossmodal apparent motion and in sensorimotor timing. The reproduced duration, for example, is a mixture of motor and perceptual time, with the weights of the perceptual and motor components depending on the variability of the corresponding estimates. Moreover, when a sensory feedback delay is introduced, the reproduced duration relies heavily on the onset of the feedback as well as on the offset of the motor action. Using quantitative measures and Bayesian approaches, crossmodal temporal integration was shown to follow the maximum-likelihood estimation (MLE) model with some modifications: incorporating biases explicitly into the model yields accurate MLE predictions for crossmodal perceptual duration integration and for sensorimotor duration reproduction.

The results of the research project also raised further research questions. One challenging issue in multisensory temporal integration is biased temporal estimates. It is common knowledge that time perception can easily be distorted by a variety of factors. Given that time processing is distributed, differential biases in the time estimates of different senses may cause an internal conflict between time representations. The brain must therefore continuously calibrate the related sensory estimates to maintain internal consistency. Such calibration of internal priors by prediction errors has been proposed within a generative Bayesian framework, which has successfully predicted various types of multisensory temporal integration. We summarized recent progress on multisensory temporal integration and calibration, and the related Bayesian-inference approaches, in a recent review paper.
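As a rough illustration of the reliability-weighted (MLE) combination described above, the following Python sketch fuses a perceptual and a motor duration estimate, weighting each by its inverse variance after removing a modality-specific constant bias. This is a minimal sketch of the general technique, not the project's actual model: the function name mle_fuse, the bias-correction step, and all numbers are illustrative assumptions.

```python
import numpy as np

def mle_fuse(estimates, variances, biases=None):
    """Reliability-weighted (MLE) fusion of duration estimates.

    Each estimate is optionally corrected by a modality-specific
    constant bias before fusion; the weights are proportional to
    inverse variance, so the less variable estimate dominates.
    """
    estimates = np.asarray(estimates, dtype=float)
    variances = np.asarray(variances, dtype=float)
    if biases is not None:
        estimates = estimates - np.asarray(biases, dtype=float)
    weights = (1.0 / variances) / np.sum(1.0 / variances)
    fused = np.sum(weights * estimates)
    # MLE signature: the fused variance is below each input variance.
    fused_var = 1.0 / np.sum(1.0 / variances)
    return fused, fused_var, weights

# Hypothetical example: a perceptual estimate of 900 ms (SD 80 ms)
# and a motor estimate of 1050 ms (SD 120 ms) of the same interval,
# each carrying an assumed constant bias (in ms).
fused, fused_var, w = mle_fuse(
    estimates=[900.0, 1050.0],
    variances=[80.0**2, 120.0**2],
    biases=[-30.0, 40.0],
)
print(f"weights: {w.round(2)}, fused duration: {fused:.0f} ms "
      f"(SD {fused_var**0.5:.0f} ms)")
```

In this toy example the fused estimate (about 955 ms, SD about 67 ms) is more reliable than either unimodal estimate; this reduction in variance is one of the quantitative predictions against which MLE-style models of temporal integration are typically tested.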

Publications

  • (2011). Interaction of Perceptual Grouping and Crossmodal Temporal Capture in Tactile Apparent-Motion. PLoS ONE, 6(2), e17130
    Chen, L., Shi, Z., & Müller, H. J.
    (See online at https://doi.org/10.1371/journal.pone.0017130)
  • (2012). Duration reproduction with sensory feedback delay: differential involvement of perception and action time. Frontiers in Integrative Neuroscience, 6(October), 1–11
    Ganzenmüller, S., Shi, Z., & Müller, H. J.
    (See online at https://doi.org/10.3389/fnint.2012.00095)
  • (2012). Modulation of tactile duration judgments by emotional pictures. Frontiers in Integrative Neuroscience, 6(May), 24
    Shi, Z., Jia, L., & Müller, H. J.
    (See online at https://doi.org/10.3389/fnint.2012.00024)
  • (2012). Motion extrapolation in the central fovea. PLoS ONE, 7(3), e33651
    Shi, Z., & Nijhawan, R.
    (See online at https://doi.org/10.1371/journal.pone.0033651)
  • (2012). Non-spatial sounds regulate eye movements and enhance visual search. Journal of Vision, 12(5), 2, 1–18
    Zou, H., Müller, H. J., & Shi, Z.
    (See online at https://doi.org/10.1167/12.5.2)
  • (2013). Bayesian optimization of time perception. Trends in Cognitive Sciences
    Shi, Z., Church, R. M., & Meck, W. H.
    (See online at https://doi.org/10.1016/j.tics.2013.09.009)
  • (2013). Concurrent emotional pictures modulate spatial-separated audiotactile temporal order judgments. Brain Research, 1537, 156–163
    Jia, L., Shi, Z., Zang, X., & Müller, H. J.
    (See online at https://doi.org/10.1016/j.brainres.2013.09.008)
  • (2013). Reducing Bias in Auditory Duration Reproduction by Integrating the Reproduced Signal. PLoS ONE, 8(4), e62065
    Shi, Z., Ganzenmüller, S., & Müller, H. J.
    (See online at https://doi.org/10.1371/journal.pone.0062065)
  • (2013). Transfer of contextual cueing in full-icon display remapping. Journal of Vision, 13(3), 2, 1–10
    Shi, Z., Zang, X., Jia, L., Geyer, T., & Müller, H. J.
    (See online at https://doi.org/10.1167/13.3.2)
  • (2014). Invariant Spatial Context Is Learned but Not Retrieved in Gaze-Contingent Tunnel-View Search. Journal of Experimental Psychology: Human Perception and Performance, 40(Oct), 1–41
    Zang, X., Jia, L., Müller, H. J., & Shi, Z.
    (See online at https://doi.org/10.1037/xlm0000060)
 
 
