Original Research Article
Distinct mechanism of audiovisual integration with informative and uninformative sound in a visual detection task: a DCM study
- 1School of Computer Science and Technology, Changchun University of Science and Technology, China
- 2School of Computer Science and Technology, Changchun University of Science and Technology, China
- 3School of Computer Science, Northeast Electric Power University, China
- 4Department of Radiology, China-Japan Union Hospital, Jilin University, China
- 5School of Psychology, Liaoning Normal University, China
Previous studies have shown that task-irrelevant auditory information can provide temporal cues for the detection of visual targets and thereby improve visual perception; such sounds are called informative sounds. The neural mechanisms underlying the integration of informative sounds and visual stimuli have been investigated extensively using behavioral measurements or neuroimaging methods such as functional magnetic resonance imaging (fMRI) and event-related potentials (ERPs), but these approaches cannot formally characterize the dynamic processes of audiovisual integration in terms of directed neuronal coupling. The present study adopts dynamic causal modelling (DCM) of fMRI data to identify changes in effective connectivity in the hierarchical brain networks that underwrite audiovisual integration and memory. This allows us to characterize context-sensitive changes in neuronal coupling and to show how visual processing is contextualized by the processing of informative and uninformative sounds. Our results show that audiovisual integration conforms to different optimal models under the informative and uninformative sound conditions, indicating distinct neural mechanisms of audiovisual integration. Specifically, the findings reveal that integration with uninformative sound relies on low-level automatic audiovisual integration, whereas integration with informative sound engages high-level cognitive processes.
Keywords: Audiovisual integration, fMRI, informativity of sound, DCM, effective connectivity
Received: 21 May 2019;
Accepted: 16 Aug 2019.
Copyright: © 2019 Li, Xi, Zhang, Liu and Tang. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
* Correspondence: Prof. Qi Li, Changchun University of Science and Technology, School of Computer Science and Technology, Changchun, China, firstname.lastname@example.org