Original Research Article
Lack of Cross-modal Effects in Dual-modality Implicit Statistical Learning
- 1 Department of Psychology, School of Education, Shanghai Normal University, China
- 2 Department of Psychology, Georgia State University, United States
- 3 Neuroscience Institute, Georgia State University, United States
A current controversy in the area of implicit statistical learning (ISL) is whether this process consists of a single, central mechanism or multiple modality-specific ones. To provide insight into this question, the current study used three ISL experiments to explore whether multimodal input sources are processed separately in each modality or are integrated across modalities. In Experiment 1, visual and auditory ISL were measured under unimodal conditions, with the results providing a baseline level of learning for the subsequent experiments. Visual and auditory sequences were presented separately, and the underlying grammar was the same for both modalities. In Experiment 2, visual and auditory sequences were presented simultaneously, with both modalities using the same artificial grammar, to investigate whether redundant multisensory information would produce a facilitative effect (i.e., increased learning) relative to baseline. In Experiment 3, visual and auditory sequences were again presented simultaneously, but this time with each modality employing a different artificial grammar, to investigate whether an interference effect (i.e., decreased learning) would be observed relative to baseline. Results showed neither a facilitative effect in Experiment 2 nor an interference effect in Experiment 3. These findings suggest that participants were able to simultaneously and independently track two sets of sequential regularities under dual-modality conditions. The results are consistent with theories positing the existence of multiple, modality-specific ISL mechanisms rather than a single, central one.
Keywords: implicit statistical learning, cross-modal learning, modality-specific, multimodal input, dual-modality
Received: 24 Sep 2017;
Accepted: 29 Jan 2018.
Edited by: Petko Kusev, Department of Management, Huddersfield Business School, University of Huddersfield, United Kingdom
Reviewed by: Valerio Santangelo, University of Perugia, Italy
Paulo Carvalho, Carnegie Mellon University, United States
Copyright: © 2018 Li, Zhao, Shi and Conway. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
Prof. Wendian Shi, Shanghai Normal University, Department of Psychology, School of Education, Shanghai, China, swd_nx@163.com
Prof. Christopher Conway, Georgia State University, Department of Psychology, Atlanta, United States, firstname.lastname@example.org