
ORIGINAL RESEARCH article

Front. Hum. Neurosci.

Sec. Speech and Language

Volume 19 - 2025 | doi: 10.3389/fnhum.2025.1661010

This article is part of the Research Topic: The cortical representation of speech perception: Current developments.

Differential Electroencephalography Responses in Speech Perception Between Native and Non‐Native Speakers

Provisionally accepted
  • 1University of Ulsan, Ulsan, Republic of Korea
  • 2University of Iowa, Iowa, United States

The final, formatted version of the article will be published soon.

Native and non-native listeners rely on different neural strategies when processing speech, encoding distinct features from acoustic to linguistic content in different ways. This study used electroencephalography to investigate differences in neural responses between native English and native Korean speakers as they passively listened to speech in their native and non-native languages. Two approaches were employed to examine neural responses: temporal response functions (TRFs), which measure how the brain tracks continuous speech features (i.e., speech envelope, phoneme onset, phonemic surprisal, and semantic dissimilarity), and phoneme-related potentials (PRPs), which assess phonemic-level processing. Non-native speakers showed significantly stronger neural tracking of the speech envelope, but the TRF analyses revealed no group differences for higher-level linguistic features. The PRP analyses, however, revealed distinct response patterns across phoneme categories, with non-native speakers showing heightened peaks. These results suggest that non-native speakers rely more on bottom-up acoustic cues during passive listening. Together, TRFs and PRPs provide neural markers of how speech processing differs with the listener's native language and language experience.
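At its core, the TRF approach described above is a regularized linear mapping from time-lagged stimulus features (such as the speech envelope) to the EEG signal. The following is a minimal illustrative sketch of that idea, not the authors' analysis pipeline; the sampling rate, lag range, kernel shape, and regularization strength are assumptions chosen for the toy example.

```python
import numpy as np

def lagged_design(stim, lags):
    """Build a time-lagged design matrix: one column per lag,
    so column j holds stim shifted forward by lags[j] samples."""
    n = len(stim)
    X = np.zeros((n, len(lags)))
    for j, lag in enumerate(lags):
        X[lag:, j] = stim[:n - lag]
    return X

def fit_trf(stim, eeg, lags, alpha=1.0):
    """Estimate TRF weights by ridge regression:
    w = (X'X + alpha*I)^{-1} X'y."""
    X = lagged_design(stim, lags)
    XtX = X.T @ X
    return np.linalg.solve(XtX + alpha * np.eye(XtX.shape[0]), X.T @ eeg)

# Toy demo: simulate EEG as a known kernel convolved with a random
# "envelope" plus noise, then check the kernel can be recovered.
rng = np.random.default_rng(0)
fs = 64                                        # assumed sampling rate (Hz)
stim = rng.standard_normal(fs * 60)            # 60 s of simulated envelope
true_trf = np.exp(-np.arange(16) / 4.0)        # decaying response kernel
eeg = np.convolve(stim, true_trf)[:len(stim)] + 0.1 * rng.standard_normal(len(stim))

lags = np.arange(16)                           # 0-250 ms at 64 Hz
w = fit_trf(stim, eeg, lags, alpha=1.0)        # w should approximate true_trf
```

In practice, toolboxes such as the mTRF framework add cross-validated selection of `alpha` and evaluate how well the fitted TRF predicts held-out EEG, which is the "neural tracking" measure compared across groups in studies like this one.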

Keywords: temporal response function, phoneme-related potential, passive listening, neural tracking, bottom-up acoustic features

Received: 07 Jul 2025; Accepted: 07 Oct 2025.

Copyright: © 2025 Do Anh Quan, Thi Trang, Choi and Woo. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

* Correspondence: Jihwan Woo, jhwoo@ulsan.ac.kr

Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.