AUTHOR=Sanchez Kauyumari
TITLE=Cross-modal matching of monosyllabic and bisyllabic items varying in phonotactic probability and lexicality
JOURNAL=Frontiers in Language Sciences
VOLUME=4
YEAR=2025
URL=https://www.frontiersin.org/journals/language-sciences/articles/10.3389/flang.2025.1488399
DOI=10.3389/flang.2025.1488399
ISSN=2813-4605
ABSTRACT=In two experiments, English words and non-words varying in phonotactic probability were compared cross-modally in an AB matching task. Participants were presented with either visual-only (V) speech (a talker's speaking face) or auditory-only (A) speech (a talker's voice) in the A position; stimuli in the B position were of the opposing modality (counterbalanced). Experiment 1 employed monosyllabic items, while Experiment 2 employed bisyllabic items. Accuracy measures in Experiment 1 revealed main effects of phonotactic probability and presentation order (A-V vs. V-A), while Experiment 2 revealed main effects of lexicality and presentation order. Reaction time measures in Experiment 1 revealed an interaction between probability and lexicality, along with a main effect of presentation order. Reaction time measures in Experiment 2 revealed two two-way interactions (probability × lexicality and probability × presentation order), alongside significant main effects. Overall, the data suggest that (1) cross-modal research can be conducted with various presentation orders, (2) perception is guided by the most predictive components of a stimulus, and (3) more complex stimuli can support the results from experiments using simpler stimuli, but can also uncover new information.