ORIGINAL RESEARCH article

Front. Neurosci., 29 April 2025

Sec. Auditory Cognitive Neuroscience

Volume 19 - 2025 | https://doi.org/10.3389/fnins.2025.1482828

Resting-state functional connectivity changes following audio-tactile speech training


Katarzyna Cieśla1,2,3*, Tomasz Wolak3, Amir Amedi1,2
  • 1The Baruch Ivcher Institute for Brain, Cognition, and Technology, The Baruch Ivcher School of Psychology, Reichman University, Herzliya, Israel
  • 2The Ruth and Meir Rosenthal Brain Imaging Center, Reichman University, Herzliya, Israel
  • 3World Hearing Centre, Institute of Physiology and Pathology of Hearing, Warsaw, Poland

Understanding speech in background noise is a challenging task, especially when the signal is also distorted. In a series of previous studies, we have shown that comprehension can improve if, simultaneously with auditory speech, the person receives speech-extracted low-frequency signals on their fingertips. The effect increases after short audio-tactile speech training. In this study, we used resting-state functional magnetic resonance imaging (rsfMRI) to measure spontaneous low-frequency oscillations in the brain while at rest to assess training-induced changes in functional connectivity. We observed enhanced functional connectivity (FC) within a right-hemisphere cluster corresponding to the middle temporal motion area (MT), the extrastriate body area (EBA), and the lateral occipital cortex (LOC), which, before the training, was found to be more connected to the bilateral dorsal anterior insula. Furthermore, early visual areas demonstrated a switch from increased connectivity with the auditory cortex before training to increased connectivity with a sensory/multisensory association parietal hub, contralateral to the palm receiving vibrotactile inputs, after training. In addition, the right sensorimotor cortex, including finger representations, was more connected internally after the training. The results altogether can be interpreted within two main complementary frameworks. The first, speech-specific, factor relates to the pre-existing brain connectivity for audio–visual speech processing, including early visual, motion, and body regions involved in lip-reading and gesture analysis under difficult acoustic conditions, upon which the new audio-tactile speech network might be built. The other framework refers to spatial/body awareness and audio-tactile integration, both of which are necessary for performing the task, including in the revealed parietal and insular regions. 
It is possible that an extended training period is necessary to directly strengthen functional connections between the auditory and the sensorimotor brain regions for the utterly novel multisensory task. The results contribute to a better understanding of the largely unknown neuronal mechanisms underlying tactile speech benefits for speech comprehension and may be relevant for rehabilitation in the hearing-impaired population.

Introduction

Resting-state functional magnetic resonance imaging (rsfMRI) is based on the premise that spontaneous low-frequency oscillations in the brain at rest can reveal its functional organization and alterations, without the need for the participant to engage in a specific task (Biswal et al., 1995; Raichle, 2009). At the same time, the signals obtained from task-fMRI and rsfMRI are likely to stem from similar structural connections and neuronal processes (Ngo et al., 2022). In addition, in resting-state protocols, the oscillations of the blood-oxygenation-level-dependent (BOLD) signal offer a signal-to-noise ratio up to three times higher than the signal increases observed during task-related activities (Fox and Greicius, 2010). Using a measure of functional connectivity (FC), which represents signal correlations among remote brain regions, rsfMRI has been shown to reveal language networks with high sensitivity (e.g., Lemée et al., 2019; Tie et al., 2014; Zhu et al., 2014; Branco et al., 2016; Sun et al., 2019; Liu et al., 2021). Considerable FC changes have been reported as a result of cognitive or motor interventions lasting from several minutes to several weeks, as well as of aging (e.g., Katsuno et al., 2022; Maruyama et al., 2018; Cao et al., 2014; Farrens et al., 2023; Kawata et al., 2022; Bamidis et al., 2014). It has been proposed that, to gain a deeper understanding of changes in brain organization, it is essential to consider functional brain networks.
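At its core, region-based FC reduces to correlating BOLD time courses between areas. A minimal Python sketch on synthetic data (purely illustrative; not part of the study's CONN pipeline) makes the measure concrete:

```python
import numpy as np

def functional_connectivity(ts: np.ndarray) -> np.ndarray:
    """Pairwise Pearson correlations between regional BOLD time series.

    ts: array of shape (n_timepoints, n_regions).
    Returns an (n_regions, n_regions) FC matrix.
    """
    return np.corrcoef(ts, rowvar=False)

# Toy example: two coupled "regions" and one independent one.
rng = np.random.default_rng(0)
shared = rng.standard_normal(400)             # common slow fluctuation
ts = np.column_stack([
    shared + 0.3 * rng.standard_normal(400),  # region A
    shared + 0.3 * rng.standard_normal(400),  # region B (coupled to A)
    rng.standard_normal(400),                 # region C (independent)
])
fc = functional_connectivity(ts)              # fc[0, 1] is high, fc[0, 2] near zero
```

In real analyses the time series are extracted per region or per independent component after preprocessing and denoising, but the correlation step itself is this simple.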

In the current study, we used the rsfMRI method specifically to investigate changes in functional connectivity following speech comprehension training, combining congruent auditory and vibrotactile inputs on the fingertips.

Audio-vibrotactile interactions are relatively common in our everyday lives, for example, when receiving tactile feedback from a ringing phone, when holding a vibrating steering wheel, or while playing computer games. The combination and integration of these two sensory signals seem natural and inherent (von Békésy, 1959), which is possibly due to the significant similarities between the two sensory modalities, including their ability to encode the very same oscillatory patterns within an overlapping frequency range using mechanoreceptors. Specifically, for the tactile sense, the Pacinian corpuscles in the skin respond to vibrations of up to 700–1,000 Hz, while the auditory system encodes vibrations in the air through inner hair cells between 20 Hz and 20 kHz (Bolanowski et al., 1988; Schnupp et al., 2011).

In several published studies, congruent vibrotactile inputs have been shown to successfully enhance music perception (e.g., Russo et al., 2012; Sharp et al., 2019), as well as the localization of sounds in space (Gori et al., 2014; Occelli et al., 2009; Snir et al., 2024). Another specific use of low-frequency vibrations has been to improve speech comprehension. This approach was originally developed to assist the deaf population, with tactile aids designed to convey different features of speech signals via the skin (e.g., Galvin et al., 1991; Weisenberger and Percy, 1995). In a series of recent studies by our lab and others, it has been shown that improvement of speech perception is also possible in both typically hearing and hearing-impaired individuals using cochlear implants, with continuous speech-extracted vibrations delivered on the fingertips (Cieśla et al., 2019, 2022; Fletcher et al., 2018; Fletcher et al., 2019; Huang et al., 2017; Schulte et al., 2023; Rǎutu et al., 2023). Most authors implemented some form of manipulation to the speech signal to mimic the acoustic conditions that are most challenging for the hearing-impaired population, such as speech distortions and/or simultaneous background noise.

This rsfMRI study complements our parallel task-fMRI study (manuscript under review; preprint at Cieśla et al., 2024), with participants performing speech comprehension tasks inside an MRI scanner before and after dedicated audio-tactile speech comprehension training. The speech signals consisted of distorted sentences (vocoded to resemble speech delivered through a cochlear implant system), presented alongside speech background noise. The tactile vibrations corresponded to low frequencies (<200 Hz) extracted directly from the speech inputs using an adjusted STRAIGHT algorithm (Kawahara et al., 1999). We demonstrated enhanced comprehension following audio-tactile training, which aligns with findings from several other studies that utilized audio–visual speech stimuli, as well as those showing improved intelligibility of distorted speech with practice time (e.g., Casserly and Pisoni, 2015; Erb et al., 2013; Guediche et al., 2014; Bernstein et al., 2013). The results suggest that the speech-extracted tactile signal can potentially work in a manner similar to the visual signal (such as lip-reading) through temporal synchronization with auditory inputs (O'Sullivan et al., 2021; Schulte et al., 2023).
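The STRAIGHT-based extraction is beyond the scope of this paper, but the core idea of the tactile signal, keeping only the low-frequency content of the speech waveform, can be sketched as a plain low-pass filter. This is a crude stand-in for illustration only, not the algorithm used in the study:

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def extract_low_freq(audio, sr, cutoff=200.0, order=4):
    """Keep only content below `cutoff` Hz: a crude stand-in for the
    adjusted STRAIGHT-based low-frequency extraction used in the study."""
    sos = butter(order, cutoff / (sr / 2), btype="low", output="sos")
    return sosfiltfilt(sos, audio)  # zero-phase, so timing is preserved

# Toy "speech": a 120 Hz fundamental plus a 1,000 Hz overtone.
sr = 8000
t = np.arange(sr) / sr                      # 1 s of signal
f0 = np.sin(2 * np.pi * 120 * t)
audio = f0 + 0.8 * np.sin(2 * np.pi * 1000 * t)
vibration = extract_low_freq(audio, sr)     # what would drive the tactile device
```

Zero-phase filtering matters here because the behavioral benefit is thought to rely on temporal synchronization between the vibration and the auditory signal.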

Here, we speculated that training-induced neuronal changes would also manifest in resting-state functional connectivity (FC). Specifically, brain networks were identified using independent component analysis (ICA) and tested for functional connectivity before and after audio-tactile training. Based on our previous behavioral findings indicating successful multisensory integration of speech stimuli (Cieśla et al., 2019, 2022), we expected to observe enhanced connectivity between the auditory and the somatosensory resting-state networks, as well as within the language network (Price, 2012; Hertrich et al., 2020). The latter includes multisensory areas such as the posterior superior temporal sulcus (pSTS), the angular gyrus/supramarginal gyrus (AG/SMG), and the lateral occipital cortex (LOC), which have been implicated in the literature in audio-visual and audio-tactile integration (e.g., Renier et al., 2009; Kassuba et al., 2013; Landelle et al., 2023; King et al., 2019; Nath and Beauchamp, 2012; Hickok et al., 2018). The applied outcome measure was the difference in functional connectivity patterns between two resting-state fMRI sessions, before and after audio-tactile speech comprehension training.

Materials and methods

The study took place at the ELSC Neuroimaging Unit in Jerusalem, Israel. All participants gave informed consent and received compensation for their participation. The study procedures complied with the Declaration of Helsinki (World Medical Association, 2013) and received approval from the Ethics Committee of the Hadassah Medical Center in Jerusalem (protocol no. 353-18.1.08).

Participants

The reported data were acquired from 20 adult healthy individuals (8 male/12 female; mean age 26.8 ± 4.2 years), all of whom were right-handed and had no history of neurological or neurodevelopmental impairments.

Experimental procedures

Behavioral experimental procedures

Between the two resting-state fMRI sessions, the participants were tested twice (before and after the training) on their vocoded speech-in-noise comprehension. There were three test conditions: (a) auditory sentences (A), (b) auditory sentences with congruent tactile vibrations delivered to two fingertips of the right hand (audio-tactile congruent, ATc), and (c) auditory sentences accompanied by non-congruent tactile vibrations on the fingertips (audio-tactile non-congruent, ATnc). For each test condition, an individual speech reception threshold (SRT) was determined, i.e., the SNR between the target sentence and the background noise required for 50% understanding. The sentences were derived from the Hearing-In-Noise-Test (HINT) database (Nilsson et al., 1994) and vocoded to resemble stimulation through a cochlear implant (Walkowiak et al., 2010). In the multisensory test conditions, low-frequency vibrations (below the fundamental frequency, f0, of the spoken sentence, i.e., <200 Hz) were extracted from either the concurrently presented auditory sentence (ATc) or another randomly selected sentence from the HINT database (ATnc) and delivered to the index and middle fingers of the right hand through an in-house developed device (Cieśla et al., 2019). Between the two SRT testing sessions, the participants took part in a 30–45 min training session. They were asked to repeat 148 HINT sentences, one by one, with (group 1) or without (group 2) accompanying congruent tactile vibrations delivered to the fingertips, at individually determined SRTs. The training session continued until all 148 sentences had been repeated; no feedback was provided. The results of the behavioral study, including pre- and post-training speech comprehension measurements, are published elsewhere (Cieśla et al., 2022). The resting-state fMRI study was conducted in a subgroup of participants who underwent audio-tactile speech comprehension training.
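The SRT is the 50%-intelligibility point on the psychometric function, and it is typically tracked adaptively. A generic one-up/one-down staircase (a sketch; the exact HINT adaptive rule, step sizes, and stopping criterion used in the study may differ) converges on that point:

```python
def estimate_srt(trial_correct, start_snr=10.0, step_db=2.0, n_trials=20):
    """One-up/one-down adaptive staircase: the SNR is lowered after a
    correct sentence repetition and raised after an incorrect one, so the
    track oscillates around the ~50%-correct point (the SRT)."""
    snr = start_snr
    history, reversals = [], []
    prev_correct = None
    for _ in range(n_trials):
        correct = trial_correct(snr)
        history.append(snr)
        if prev_correct is not None and correct != prev_correct:
            reversals.append(snr)          # track direction changes
        snr += -step_db if correct else step_db
        prev_correct = correct
    # Average the last reversals (or trials) as the threshold estimate.
    tail = reversals[-6:] if len(reversals) >= 6 else history[-6:]
    return sum(tail) / len(tail)

# A deterministic "listener" who understands whenever SNR >= -1 dB:
srt = estimate_srt(lambda snr: snr >= -1.0)
```

With the deterministic listener above, the track descends from the easy starting SNR and then oscillates around the threshold, so the estimate lands near −1 dB.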

Resting-state fMRI data acquisition

The resting-state fMRI data were acquired twice, before and after the training, during a 10:11 min session, using a 3T Siemens Skyra MRI scanner with a 32-channel head coil. The parameters were as follows: TR = 1.5 s, TE = 32.6 ms, voxel size = 2 × 2 × 2 mm, 72 slices, flip angle = 78°, FOV = 192 mm, and bandwidth = 1,736 Hz. The participants had their eyes open and were asked to fixate on a cross presented on the screen. In addition, a high-resolution T1 MR sequence was collected with the following parameters: TR = 2.3 s, TE = 2.98 ms, TI = 900 ms, matrix size = 256 × 256, voxel size = 1 × 1 × 1 mm, 160 slices, and bandwidth = 240 Hz/voxel.

Data analysis

Resting-state fMRI data analysis

To evaluate functional connectivity patterns between brain regions, the CONN 22a (Connectivity Toolbox, https://web.conn-toolbox.org/) software was used for data analysis. Preprocessing of the functional data included the following: slice timing correction, motion correction, scrubbing, linear detrending, band-pass filtering (0.008 Hz < f < 0.09 Hz), co-registration to individual T1 structural scans, surface-based spatial normalization (https://surfer.nmr.mgh.harvard.edu/), and spatial smoothing (6 mm Gaussian kernel).

Denoising was applied using default settings, including the regression of potential confounding effects characterized by white matter time series (five CompCor noise components), cerebrospinal fluid (CSF) time series (five CompCor noise components), motion parameters and their first-order derivatives (12 factors), outlier scans (up to 25 factors), and linear trends (two factors) within each functional run. This was followed by bandpass frequency filtering of the BOLD time series between 0.008 Hz and 0.09 Hz. The CompCor noise components within the white matter and CSF were estimated by computing the average BOLD signal and the largest principal components orthogonal to the BOLD average, motion parameters, and outlier scans within each participant's eroded segmentation masks. Based on the number of noise terms included in this denoising strategy, the effective degrees of freedom of the BOLD signal after denoising were estimated to range from 59.8 to 125 (average 94.7) across all participants.
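In essence, the CompCor approach derives noise regressors as principal components of white-matter/CSF time series and residualizes the data against them. A simplified numerical sketch of that idea (not the CONN implementation, which additionally orthogonalizes against motion and outlier regressors):

```python
import numpy as np

def compcor_components(noise_ts, n_comp=5):
    """Top principal components of noise-ROI time series
    (rows = time points, columns = white-matter/CSF voxels)."""
    x = noise_ts - noise_ts.mean(axis=0)
    u, s, _ = np.linalg.svd(x, full_matrices=False)
    return u[:, :n_comp] * s[:n_comp]          # component time courses

def regress_out(data, confounds):
    """Residualize each voxel's time series against the confounds."""
    c = np.column_stack([np.ones(len(data)), confounds])
    beta, *_ = np.linalg.lstsq(c, data, rcond=None)
    return data - c @ beta

# Synthetic demo: a gray-matter voxel carrying signal plus physiological
# noise that also dominates 30 white-matter voxels.
rng = np.random.default_rng(0)
noise = rng.standard_normal(200)
signal = rng.standard_normal(200)
wm_ts = noise[:, None] + 0.1 * rng.standard_normal((200, 30))
voxel = signal + 2.0 * noise
cleaned = regress_out(voxel[:, None], compcor_components(wm_ts))[:, 0]
```

After regression, the voxel's correlation with the shared physiological noise collapses while its correlation with the true signal is preserved, which is exactly the property that protects FC estimates from noise-driven correlations.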

Group independent component analysis (group ICA) was used for statistical analysis. ICA is a data-driven approach that investigates multiple voxel-to-voxel interactions of several networks in the brain simultaneously. Based on the acquired signal, ICA separates noise components from those representing neuronal activation in the form of resting-state networks. Networks represent sets of regions with voxel-by-voxel correlated BOLD signal fluctuations (Smitha et al., 2017; Rosazza et al., 2012). Using the CONN software, we applied a G1 FastICA + GICA3 back-projection procedure with dimensionality reduction set at 64 (64 principal components) and the number of independent components set at 40 (components optimized for orthogonality; 40 is the default value), based on the recorded BOLD signal combined across all participants. Components comprising the language (including auditory cortex), visual, and sensorimotor networks were selected for further analysis (Figure 1, ICs 1–6, 11, 16). For each of the eight selected components, functional connectivity (FC) was estimated with all other voxels in the brain. Next, FC was compared between the PRE and POST training sessions for each component using t-tests (see the subsequent steps of the analysis in Figure 2). Outcome brain regions were identified that showed changes in FC after the training compared to before the training (Figure 3). The statistical analysis used permutation testing with the following parameters: voxel threshold at a p-value of <0.01 and cluster-mass p-FDR-corrected p-value of <0.05. The regions were labeled using the Harvard-Oxford Atlas.
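The logic of an ICA decomposition (though not CONN's G1 FastICA + GICA3 group pipeline) can be illustrated with scikit-learn on toy data, where two "network" time courses are mixed into many simulated voxels and then recovered blindly:

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(1)
t = np.arange(300) * 1.5                       # 300 volumes at TR = 1.5 s
s1 = np.sin(2 * np.pi * 0.03 * t)              # "network 1" time course
s2 = np.sign(np.sin(2 * np.pi * 0.011 * t))    # "network 2", different waveform
sources = np.column_stack([s1, s2])
mixing = rng.standard_normal((2, 50))          # spatial weights over 50 "voxels"
observed = sources @ mixing + 0.05 * rng.standard_normal((300, 50))

ica = FastICA(n_components=2, random_state=0, max_iter=1000)
recovered = ica.fit_transform(observed)        # recovered component time courses

# Match each recovered component to its closest source
# (ICA returns components in arbitrary order and sign).
match = np.abs(np.corrcoef(recovered.T, sources.T)[:2, 2:])
```

In the real analysis the "sources" are unknown resting-state networks estimated across all participants, and the recovered component time courses are the ones subsequently correlated with every brain voxel to obtain FC maps.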

Figure 1

Figure 1. (A) Left-hemisphere and (B) right-hemisphere aspects of the 40 independent components (ICs) revealed in the IC analysis. The color bar represents voxel-to-voxel correlation z-values (threshold at z = 2); yellow to red in the color bar indicates a positive correlation, and green to blue indicates a negative correlation.

Figure 2

Figure 2. Subsequent steps of the statistical analysis: (A) extraction of the signal from an independent component in the session before (PRE) and after (POST) training, (B) functional connectivity (FC) analysis in the PRE and POST sessions separately with the signal in all brain voxels, (C) unthresholded FC comparison between PRE and POST, (D) thresholded FC comparison between PRE and POST. IC3 (early visual cortex) is used as an example.

Figure 3

Figure 3. Results of the statistical analysis. (A) For IC3, FC with a cluster in the left SPL/aSMG/PostCG was increased after training; (B) for IC6, FC with the right sensorimotor cortex was decreased after training; (C) for IC4, FC with the right primary visual cortex was decreased; (D) for IC2, FC was both increased with a cluster in the right LOC/pMTG (red-yellow) and decreased with the insula (purple-pink) after training. The color bars represent t-values of the statistical comparison of functional connectivity POST vs. PRE training; t-values above zero (red-yellow) indicate increased FC after training, while t-values below zero (purple-pink) indicate decreased FC after training. IC, independent component; LOC, lateral occipital cortex; MTG, middle temporal gyrus; pSTS, posterior superior temporal sulcus.

Results

Resting-state fMRI functional connectivity changes after training in the auditory, sensorimotor, and visual systems

Of the eight comparisons of FC before and after training, four were statistically significant (Figure 3). Specifically, resting-state functional connectivity increased after the audio-tactile speech comprehension training, compared to before, between bilateral clusters encompassing parts of the lateral occipital cortex (LOC), angular gyrus (AG), posterior middle temporal gyrus (pMTG), and posterior superior temporal sulcus (pSTS; IC2; including functional areas such as MT (±44, −68, 2) and the extrastriate body area (EBA; ±50, −70, 5); https://neurosynth.org/) and the LOC/pMTG/AG on the right side, indicating increased internal connectivity within this network. In addition, increased connectivity was observed between bilateral clusters encompassing parts of the occipital pole (OP), LOC, lingual gyrus (LG), and intracalcarine cortex (ICC; early visual cortex; IC3) and a hub comprising the left superior parietal lobe (SPL), anterior supramarginal gyrus (aSMG), and postcentral gyrus (PostCG). Increased connectivity was also observed between a right-hemisphere cluster encompassing parts of the PostCG, precentral gyrus (PreCG), and SPL (sensorimotor cortex and operculum; IC6) and an almost overlapping area. At the same time, decreased functional connectivity after training was observed between bilateral clusters encompassing parts of the iLOC, AG, pMTG, and pSTS (IC2) and the bilateral anterior insula, as well as between bilateral clusters encompassing parts of the superior temporal gyrus (STG), temporal pole (TP), planum polare (PP), and SMG (auditory cortex; IC4) and the OP and LOC on the right side (early visual cortex). The results are depicted in Figures 1–3.

Discussion

As reported in a series of our previous studies using an in-house setup, congruent low-frequency speech-extracted vibrations on the fingertips can improve auditory speech perception in difficult acoustic conditions by an average of 4–6 dB SNR (Cieśla et al., 2019, 2022). Interestingly, training did not increase this tactile benefit any further, suggesting that tactile information can be used naturally only up to a certain degree. In addition, audio-tactile training was found to be more efficient than auditory training, leading to comparable improvement but in more difficult acoustic conditions (Cieśla et al., 2022). In this study, we present resting-state fMRI findings from a subgroup of participants trained using audio-tactile speech stimulation. The results provide insights into the brain networks likely engaged in the new experience of audio-tactile speech perception and into the processes that promote training-related improvements in comprehension.

The functional connectivity (FC) results showed several effects when the pre- and post-training resting-state networks were compared, which can be summarized as follows. First, we observed a shift in the functional connectivity of the lateral occipital cortex (LOC) from increased FC with the bilateral dorsal anterior insula to increased internal connectivity (i.e., between the right LOC and itself). Second, there was a change in the functional connectivity of the early visual system, shifting from one with the auditory system before training to one with a hub in the left somatosensory/multisensory association cortex afterward. Third, the sensorimotor system in the right hemisphere showed increased internal FC after training. While the effects involving the auditory and sensorimotor systems were hypothesized following an audio-tactile intervention, the engagement of the early visual system was not. Interestingly, this system was also observed in a parallel task-fMRI study by our lab (under review; see also a preprint: Cieśla et al., 2024). In that study, the participants were asked to repeat distorted sentences inside the MRI scanner, before and after speech comprehension training, with and without tactile vibrations delivered to the fingertips. The effects observed for the trained audio-tactile speech condition partially correspond to those revealed in the current resting-state analysis, including significant involvement of the visual system in both learning and multisensory integration, especially in the early visual regions and LOC. This finding further suggests that the signals obtained from task-fMRI and rsfMRI are likely to stem from similar structural connections and neuronal processes (Ngo et al., 2022).

We speculate that the revealed “visual” regions (both early V1–V2 and LOC) might represent parts of a network typically reported for audio-visual speech comprehension (e.g., Hertrich et al., 2020; Beer et al., 2013; Nath and Beauchamp, 2012; Hauswald et al., 2018). The involvement of these regions indicates that the brain functional system engaged in the utterly novel task of processing audio-tactile speech might be first built upon an existing blueprint for connectivity. In the current study, the speech task required comprehension of sentences in a non-native language that were also vocoded (resembling cochlear implant stimulation) and presented against background speech noise, all rendering the task rather challenging. In everyday situations, when exposed to ambiguous speech, either distorted or occurring in a challenging acoustic environment, we naturally search for informative visual cues, such as from lip-reading. These audio-visual associations develop in the early years of life and are practiced on a regular basis throughout life (Hickok et al., 2018; Dopierała et al., 2023). Furthermore, the involvement of the early visual cortex has been previously reported for auditory and tactile (e.g., Braille reading) language tasks as well. Several studies have demonstrated this effect in both blind and sighted individuals, indicating that early visual regions have the capacity for amodal language processing (Amedi et al., 2003; Bedny et al., 2015; Reich et al., 2011; Seydell-Greenwald et al., 2023; Merabet et al., 2008). In this study, after the audio-tactile speech training, once it became apparent that no informative visual cues were available during task performance, the pre-existing (and potentially automatically engaged in the context of speech perception) connectivity between the auditory system and the early visual system decreased (Panel C in Figure 3).

The other identified “visual” hub in the lateral occipital/temporal lobe encompassed regions such as the middle temporal visual area (MT), EBA, and LOC (Panel D in Figure 3). These brain areas, among other functions, can play a role in non-verbal communication, specifically in encoding lip movements, body gestures, facial expressions, and other language-related spatial cues (Kitada et al., 2014; Hickok et al., 2018; Van der Stoep and Alais, 2020). All these cues can facilitate comprehension in difficult acoustic conditions. Furthermore, the revealed adjacent pSTS is recognized as a key hub for multisensory integration, particularly for audio-tactile and audio-visual inputs, and specializes in processing temporally changing signals, such as speech (King et al., 2019; Hertrich et al., 2020; Beer et al., 2013; Beauchamp et al., 2004). In a non-speech-specific context, the revealed area of the LOC has also been shown to have multisensory properties (although mainly audio-visual) and has been implicated in activities such as grasping and object recognition (e.g., Kassuba et al., 2013; Jao et al., 2015; Brand et al., 2020), as well as in response to passive vibrotactile stimulation on the hand, potentially related to the mechanism of object manipulation through increased connectivity with SI (Tal et al., 2016). During the speech comprehension training, the participants both listened to speech stimuli through headphones and had the fingertips of their right hand placed in the tactile device. This specific multisensory setup may also involve body awareness and reaching behavior, which could be reflected in resting-state functional network changes.

After training, the functional connectivity between the LOC and the bilateral anterior insular cortex diminished. This outcome may stem from the insula's role in various cognitive processes, particularly in the context of developing new skills or adapting to new environments (e.g., Gogolla, 2017; Zühlsdorff et al., 2023; Ardilla et al., 2014). Specifically, the revealed dorsal anterior part of the insula, through its multiple connections with the rest of the brain, has been found to participate in language processing and general learning (e.g., Ardilla et al., 2014; although see Woolnough et al., 2019), cognitive control, spatial attention tasks, and multimodal integration, as opposed to the social-emotional functions typically attributed to the posterior and ventral-anterior aspects of the insula (Kurth et al., 2010; Uddin et al., 2017). In addition, the anterior insula may play a role in bodily ownership, bodily self-awareness, and tactile perception (Wang et al., 2019). One suggested mechanism is through direct projections from the secondary somatosensory cortex to the insular cortex, which, in turn, innervates regions of the temporal lobe believed to be critical for tactile learning and memory (Kurth et al., 2010; Augustine, 2023). The decreased involvement of the insula possibly reflects progress in learning to use the newly available tactile inputs for improved speech comprehension.

Considering the role of the sensorimotor system, we demonstrate two additional effects. First, the applied audio-tactile speech training resulted in increased connectivity from the bilateral early visual cortex to a posterior parietal region on the side opposite to the vibrotactile stimulation. Part of the revealed hub can be considered the somatosensory association cortex, where the experienced low-frequency tactile signal may be analyzed for its temporal and speech features. Other tactile operations attributed to this area include Braille reading, tactile attention, shape recognition, and others (e.g., Kim et al., 2015; Bodegard et al., 2001; Li Hegner et al., 2010). At the same time, the superior parietal lobe, comprising 40% of the revealed cluster, receives inputs not only from the early somatosensory system, particularly from the hand, but also from other modalities, which are further combined to inform behaviors such as reaching or grasping (Sheth and Young, 2014; Goodale and Milner, 1992). Finally, the demonstrated left-hemisphere SMG (Panel A in Figure 3) plays a critical role in the phonological analysis of language inputs, while also having multisensory properties (e.g., Oberhuber et al., 2016).

The increased functional connectivity after training between the early visual and the sensorimotor/multisensory association system is not a straightforward result. It might reflect a mechanism similar to the one reported in some other studies on tactile perception, although in the opposite direction (Tal et al., 2016; Merabet et al., 2007). Specifically, Tal et al. showed deactivation of early visual regions (e.g., V1–V2) when participants experienced passive, delicate brushing of various body parts, although a single-case ECoG study by Gaglianese et al. (2020) showed positive involvement of V1 during brushing. In contrast, Merabet et al. found active engagement of V1 in estimating roughness or inter-dot spacing in tactile patterns. These studies indicate that the responsiveness of early visual regions during tactile stimulation might be task-specific and could reflect processes such as motion processing, spatial analysis, or tactile-to-visual remapping of inputs. As suggested by some authors, the increased FC with early visual areas after training, and the general role of these areas in the current experiment, might also be mediated by visualization of the content of the speech signal or of the vibrating devices (e.g., Reiner, 2008).

In addition, after training, we observed increased internal connectivity within the right-hemisphere sensorimotor system (ipsilateral to the tactile finger stimulation). The cluster encompasses the primary (SI), secondary (SII, operculum), and somatosensory association cortices, including areas representing the fingers, as well as the corresponding motor cortex. This finding was unexpected but may be related to several mechanisms. First, although responses to tactile stimuli are found to be mainly contralateral (and especially in SI), both animal and human neuroimaging studies have shown modulation of responses in ipsilateral SII (and to a lesser extent, also in SI), possibly through dense transcallosal connectivity. Moreover, the bilateral sensorimotor association cortex in the posterior parietal cortex is the area where inputs from both hands become integrated (Pala and Stanley, 2022; Tamè et al., 2012; Hlushchuk and Hari, 2006). The sensorimotor cortex is also densely connected with the bilateral motor cortex, specifically the regions representing hands, which enables the successful manipulation of objects. The sensorimotor interactions involve the motor cortex responding to tactile stimulation, as well as the sensory cortex receiving feedback from and regulating motor behavior (Mastria et al., 2023; Tamè et al., 2015; Bao et al., 2024). Further research is needed to understand why the observed effect was ipsilateral to the stimulated hand rather than contralateral.

In summary, the main results of the study highlight the role of the visual network, auditory system, sensorimotor system, and insula in the audio-tactile integration of speech information and in learning with our specific multisensory setup. The findings are in partial agreement with our hypotheses (e.g., showing engagement of the early sensory systems, as well as the pSTS and SMG, in multisensory integration), although no enhancement of direct connectivity was found between the auditory and the sensorimotor cortex. The lack of such an effect might reflect the comparable tactile benefits for speech perception before and after training (Cieśla et al., 2022). The involvement of the visual system (LOC, early regions), potentially mediating auditory-to-tactile communication and thus the integration of inputs, indicates that functional network changes following perceptual learning might initially be constrained by existing connections established through experience (Makin and Krakauer, 2023; Heimler and Amedi, 2020). Since the audio-tactile speech context is novel (never experienced in a lifetime or over evolution), unlike the well-trained audio-visual speech context, a longer training regime might be necessary to modify pre-existing connectivity patterns. This might, in turn, translate into greater benefits of adding tactile information. In addition, some of the observed results in the visual, sensorimotor, and insular networks can be interpreted within the framework of bodily awareness, grasping, and mental imagery, as the task required successful integration of inputs arriving from different spatial locations. One limitation of the current study is the lack of a comparison of functional connectivity changes following auditory-only versus multisensory audio-tactile speech training, especially given our behavioral results showing a significant but less profound effect of the former on learning (Cieśla et al., 2022).

Data availability statement

The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.

Ethics statement

The studies involving humans were approved by the Ethics Committee of the Hadassah Medical Center in Jerusalem. The studies were conducted in accordance with the local legislation and institutional requirements. The participants provided their written informed consent to participate in this study.

Author contributions

KC: Conceptualization, Data curation, Formal analysis, Funding acquisition, Investigation, Methodology, Software, Validation, Writing – original draft, Writing – review & editing. TW: Conceptualization, Methodology, Software, Visualization, Writing – review & editing. AA: Conceptualization, Funding acquisition, Resources, Supervision, Writing – review & editing.

Funding

The author(s) declare that financial support was received for the research and/or publication of this article. This research was supported by a Polish Ministry of Science and Higher Education MOBILNOSC PLUS V grant (1642/MOB/V/2017/0) to KC, an ERC Consolidator Grant (773121 NovelExperiSense) to AA, a Horizon GuestXR grant (101017884) to AA, and an Israel Science Foundation grant (3709/24) to AA.

Conflict of interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Publisher's note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

References

Amedi, A., Raz, N., Pianka, P., Malach, R., and Zohary, E. (2003). Early 'visual' cortex activation correlates with superior verbal memory performance in the blind. Nat. Neurosci. 6, 758–766. doi: 10.1038/nn1072

Ardila, A., Bernal, B., and Rosselli, M. (2014). Participation of the insula in language revisited: a meta-analytic connectivity study. J. Neurolinguistics 29, 31–41. doi: 10.1016/j.jneuroling.2014.02.001

Augustine, G. J. (ed.). (2023). Neuroscience. 7th Edn. Oxford: Sinauer Associates, Oxford University Press.

Bamidis, P. D., Vivas, A. B., Styliadis, C., Frantzidis, C., Klados, M., Schlee, W., et al. (2014). A review of physical and cognitive interventions in aging. Neurosci. Biobehav. Rev. 44, 206–220. doi: 10.1016/j.neubiorev.2014.03.019

Bao, S., Wang, Y., Escalante, Y. R., Li, Y., and Lei, Y. (2024). Modulation of motor cortical inhibition and facilitation by touch sensation from the glabrous skin of the human hand. eNeuro. 11:ENEURO.0410-23.2024. doi: 10.1523/ENEURO.0410-23.2024

Beauchamp, M. S., Lee, K. E., Argall, B. D., and Martin, A. (2004). Integration of auditory and visual information about objects in superior temporal sulcus. Neuron 41, 809–823. doi: 10.1016/S0896-6273(04)00070-4

Bedny, M., Richardson, H., and Saxe, R. (2015). “Visual” cortex responds to spoken language in blind children. J. Neurosci. 35, 11674–11681. doi: 10.1523/JNEUROSCI.0634-15.2015

Beer, A. L., Plank, T., Meyer, G., and Greenlee, M. W. (2013). Combined diffusion-weighted and functional magnetic resonance imaging reveals a temporal-occipital network involved in auditory-visual object processing. Front. Integr. Neurosci. 7:5. doi: 10.3389/fnint.2013.00005

Bernstein, L. E., Auer, E. T. Jr., Eberhardt, S. P., and Jiang, J. (2013). Auditory perceptual learning for speech perception can be enhanced by audiovisual training. Front. Neurosci. 7:34. doi: 10.3389/fnins.2013.00034

Biswal, B., Yetkin, F. Z., Haughton, V. M., and Hyde, J. S. (1995). Functional connectivity in the motor cortex of resting human brain using echo-planar MRI. Magn. Reson. Med. 34, 537–541. doi: 10.1002/mrm.1910340409

Bodegard, A., Geyer, S., Grefkes, C., Zilles, K., and Roland, P. E. (2001). Hierarchical processing of tactile shape in the human brain. Neuron 31, 317–328. doi: 10.1016/S0896-6273(01)00362-2

Bolanowski Jr, S. J., Gescheider, G. A., Verrillo, R. T., and Checkosky, C. M. (1988). Four channels mediate the mechanical aspects of touch. J. Acoust. Soc. Am. 84, 1680–1694. doi: 10.1121/1.397184

Branco, P., Seixas, D., Deprez, S., Kovacs, S., Peeters, R., Castro, S. L., et al. (2016). Resting-state functional magnetic resonance imaging for language preoperative planning. Front. Hum. Neurosci. 10:11. doi: 10.3389/fnhum.2016.00011

Brand, J., Piccirelli, M., Hepp-Reymond, M. C., Eng, K., and Michels, L. (2020). Brain activation during visually guided finger movements. Front. Hum. Neurosci. 14:309. doi: 10.3389/fnhum.2020.00309

Cao, W., Luo, C., Zhu, B., Zhang, D., Dong, L., Gong, J., et al. (2014). Resting-state functional connectivity in anterior cingulate cortex in normal aging. Front. Aging Neurosci. 6:280. doi: 10.3389/fnagi.2014.00280

Casserly, E. D., and Pisoni, D. B. (2015). Auditory learning using a portable real-time vocoder: preliminary findings. J. Speech Lang. Hear. Res. 58, 1001–1016. doi: 10.1044/2015_JSLHR-H-13-0216

Cieśla, K., Wolak, T., and Amedi, A. (2024). Neuronal basis of audio-tactile speech perception. bioRxiv [Preprint]. doi: 10.1101/2024.08.16.608369

Cieśla, K., Wolak, T., Lorens, A., Heimler, B., Skarżyński, H., Amedi, A., et al. (2019). Immediate improvement of speech-in-noise perception through multisensory stimulation via an auditory to tactile sensory substitution. Restor. Neurol. Neurosci. 37, 155–166. doi: 10.3233/RNN-190898

Cieśla, K., Wolak, T., Lorens, A., Mentzel, M., Skarżyński, H., Amedi, A., et al. (2022). Effects of training and using an audio-tactile sensory substitution device on speech-in-noise understanding. Sci. Rep. 12, 1–16. doi: 10.1038/s41598-022-06855-8

Dopierała, A. A. W., Pérez, D. L., Mercure, E., Pluta, A., Malinowska-Korczak, A., Evans, S., et al. (2023). The development of cortical responses to the integration of audiovisual speech in infancy. Brain Topogr. 36, 459–475. doi: 10.1007/s10548-023-00959-8

Erb, J., Molly, H., Eisner, F., and Obleser, J. (2013). The brain dynamics of rapid perceptual adaptation to adverse listening conditions. J. Neurosci. 33, 10688–10697. doi: 10.1523/JNEUROSCI.4596-12.2013

Farrens, A. J., Vahdat, S., and Sergi, F. (2023). Changes in resting state functional connectivity associated with dynamic adaptation of wrist movements. J. Neurosci. 43, 3520–3537. doi: 10.1523/JNEUROSCI.1916-22.2023

Fletcher, M. D., Hadeedi, A., Goehring, T., and Mills, S. R. (2019). Electro-haptic enhancement of speech-in-noise performance in cochlear implant users. Sci. Rep. 9, 1–8. doi: 10.1038/s41598-019-47718-z

Fletcher, M. D., Mills, S. R., and Goehring, T. (2018). Vibro-tactile enhancement of speech intelligibility in multi-talker noise for simulated cochlear implant listening. Trends Hear. 22:2331216518797838. doi: 10.1177/2331216518797838

Fox, M. D., and Greicius, M. (2010). Clinical applications of resting state functional connectivity. Front. Syst. Neurosci. 4:19. doi: 10.3389/fnsys.2010.00019

Gaglianese, A., Branco, M. P., Groen, I., Benson, N. C., Vansteensel, M. J., Murray, M. M., et al. (2020). Electrocorticography evidence of tactile responses in visual cortices. Brain Topogr. 33, 559–570. doi: 10.1007/s10548-020-00783-4

Galvin, K. L., Cowan, R. S. C., Sarant, J. Z., Alcantara, J., Blamey, P. J., Clark, G. M., et al. (1991). Use of a multichannel electrotactile speech processor by profoundly hearing-impaired children in a total communication environment. J. Am. Acad. Audiol. 12, 214–225.

Gogolla, N. (2017). The insular cortex. Curr. Biol. 27, R580–R586. doi: 10.1016/j.cub.2017.05.010

Goodale, M. A., and Milner, A. D. (1992). Separate visual pathways for perception and action. Trends Neurosci. 15, 20–25. doi: 10.1016/0166-2236(92)90344-8

Gori, M., Vercillo, T., Sandini, G., and Burr, D. (2014). Tactile feedback improves auditory spatial localization. Front. Psychol. 5:109563. doi: 10.3389/fpsyg.2014.01121

Guediche, S., Blumstein, S. E., Fiez, J. A., and Holt, L. L. (2014). Speech perception under adverse conditions: insights from behavioral, computational, and neuroscience research. Front. Syst. Neurosci. 7:126. doi: 10.3389/fnsys.2013.00126

Hauswald, A., Lithari, C., Collignon, O., Leonardelli, E., and Weisz, N. (2018). A visual cortical network for deriving phonological information from intelligible lip movements. Curr. Biol. 28, 1453–1459.e3. doi: 10.1016/j.cub.2018.03.044

Heimler, B., and Amedi, A. (2020). Are critical periods reversible in the adult brain? Insights on cortical specializations based on sensory deprivation studies. Neurosci. Biobehav. Rev. 116, 494–507. doi: 10.1016/j.neubiorev.2020.06.034

Hertrich, I., Dietrich, S., and Ackermann, H. (2020). The margins of the language network in the brain. Front. Commun. 5:519955. doi: 10.3389/fcomm.2020.519955

Hickok, G., Rogalsky, C., Matchin, W., Basilakos, A., Cai, J., Pillay, S., et al. (2018). Neural networks supporting audiovisual integration for speech: a large-scale lesion study. Cortex. 103, 360–371. doi: 10.1016/j.cortex.2018.03.030

Hlushchuk, Y., and Hari, R. (2006). Transient suppression of ipsilateral primary somatosensory cortex during tactile finger stimulation. J. Neurosci. 26, 5819–5824. doi: 10.1523/JNEUROSCI.5536-05.2006

Huang, J., Sheffield, B., Lin, P., and Zeng, F. G. (2017). Electro-tactile stimulation enhances cochlear implant speech recognition in noise. Sci. Rep. 7:2196. doi: 10.1038/s41598-017-02429-1

Jao, R. J., James, T. W., and James, K. H. (2015). Crossmodal enhancement in the LOC for visuohaptic object recognition over development. Neuropsychologia. 77, 76–89. doi: 10.1016/j.neuropsychologia.2015.08.008

Kassuba, T., Menz, M. M., Röder, B., and Siebner, H. R. (2013). Multisensory interactions between auditory and haptic object recognition. Cereb. Cortex. 23, 1097–1107. doi: 10.1093/cercor/bhs076

Katsuno, Y., Ueki, Y., Ito, K., Murakami, S., Aoyama, K., Oishi, N., et al. (2022). Effects of a new speech support application on intensive speech therapy and changes in functional brain connectivity in patients with post-stroke aphasia. Front. Hum Neurosci. 16:870733. doi: 10.3389/fnhum.2022.870733

Kawahara, H., Masuda-Katsuse, I., and de Cheveigne, A. (1999). Restructuring speech representations using a pitch-adaptive time-frequency smoothing and an instantaneous-frequency-based F0 extraction: possible role of a repetitive structure in sounds. Speech Commun. 27, 187–207. doi: 10.1016/S0167-6393(98)00085-5

Kawata, N. Y. S., Nouchi, R., Oba, K., Matsuzaki, Y., and Kawashima, R. (2022). Auditory cognitive training improves brain plasticity in healthy older adults: evidence from a randomized controlled trial. Front. Aging Neurosci. 14:826672. doi: 10.3389/fnagi.2022.826672

Kim, J., Müller, K. R., Chung, Y. G., Chung, S. C., Park, J. Y., Bülthoff, H. H., et al. (2015). Distributed functions of detection and discrimination of vibrotactile stimuli in the hierarchical human somatosensory system. Front. Hum. Neurosci. 8:1070. doi: 10.3389/fnhum.2014.01070

King, A. J., Hammond-Kenny, A., and Nodal, F. R. (2019). “Multisensory processing in the auditory cortex,” Multisensory Processes. Springer Handbook of Auditory Research (vol 68), eds. A. Lee, M. Wallace, A. Coffin, A. Popper, R. Fay. (Springer, Cham). doi: 10.1007/978-3-030-10461-0_6

Kitada, R., Yoshihara, K., Sasaki, A. T., Hashiguchi, M., Kochiyama, T., Sadato, N., et al. (2014). The brain network underlying the recognition of hand gestures in the blind: the supramodal role of the extrastriate body area. J. Neurosci. 34, 10096–10108. doi: 10.1523/JNEUROSCI.0500-14.2014

Kurth, F., Zilles, K., Fox, P. T., Laird, A. R., and Eickhoff, S. B. (2010). A link between the systems: functional differentiation and integration within the human insula revealed by meta-analysis. Brain Struct. Funct. 214, 519–534. doi: 10.1007/s00429-010-0255-z

Landelle, C., Caron-Guyon, J., Nazarian, B., Anton, J. L., Sein, J., Pruvost, L., et al. (2023). Beyond sense-specific processing: decoding texture in the brain from touch and sonified movement. iScience. 26:107965. doi: 10.1016/j.isci.2023.107965

Lemée, J. M., Berro, D. H., Bernard, F., Chinier, E., Leiber, L. M., Menei, P., et al. (2019). Resting-state functional magnetic resonance imaging versus task-based activity for language mapping and correlation with perioperative cortical mapping. Brain Behav. 9:e01362. doi: 10.1002/brb3.1362

Li Hegner, Y., Lee, Y., Grodd, Y. W., and Braun, C. (2010). Comparing tactile pattern and vibrotactile frequency discrimination: a human FMRI study. J. Neurophysiol. 103, 3115–3122. doi: 10.1152/jn.00940.2009

Liu, C., Jiao, L., Li, Z., Timmer, K., and Wang, R. (2021). Language control network adapts to second language learning: a longitudinal rs-fMRI study. Neuropsychologia 150:107688. doi: 10.1016/j.neuropsychologia.2020.107688

Makin, T. R., and Krakauer, J. W. (2023). Against cortical reorganization. eLife 12:e84716. doi: 10.7554/eLife.84716

Maruyama, T., Takeuchi, H., Taki, Y., Motoki, K., Jeong, H., Kotozaki, Y., et al. (2018). Effects of time-compressed speech training on multiple functional and structural neural mechanisms involving the left superior temporal Gyrus. Neural. Plast. 2018:6574178. doi: 10.1155/2018/6574178

Mastria, G., Scaliti, E., Mehring, C., Burdet, E., Becchio, C., Serino, A., et al. (2023). Morphology, connectivity, and encoding features of tactile and motor representations of the fingers in the human precentral and postcentral gyrus. J. Neurosci. 43, 1572–1589. doi: 10.1523/JNEUROSCI.1976-21.2022

Merabet, L. B., Hamilton, R., Schlaug, G., Swisher, J. D., Kiriakopoulos, E. T., Pitskel, N. B., et al. (2008). Rapid and reversible recruitment of early visual cortex for touch. PLoS ONE 3:e3046. doi: 10.1371/journal.pone.0003046

Merabet, L. B., Swisher, J. D., McMains, S. A., Halko, M. A., Amedi, A., Pascual-Leone, A., et al. (2007). Combined activation and deactivation of visual cortex during tactile sensory processing. J. Neurophysiol. 97, 1633–1641. doi: 10.1152/jn.00806.2006

Nath, A. R., and Beauchamp, M. S. (2012). A neural basis for interindividual differences in the McGurk effect, a multisensory speech illusion. Neuroimage 59, 781–787. doi: 10.1016/j.neuroimage.2011.07.024

Ngo, G. H., Khosla, M., Jamison, K., Kuceyeski, A., and Sabuncu, M. R. (2022). Predicting individual task contrasts from resting-state functional connectivity using a surface-based convolutional network. Neuroimage 248:118849. doi: 10.1016/j.neuroimage.2021.118849

Nilsson, M., Soli, S. D., and Sullivan, J. A. (1994). Development of the hearing in noise test for the measurement of speech reception thresholds in quiet and in noise. J. Acoust. Soc. Am. 95, 1085–1099. doi: 10.1121/1.408469

Oberhuber, M., Hope, T. M. H., Seghier, M. L., Parker Jones, O., Prejawa, S., Green, D. W., et al. (2016). Four functionally distinct regions in the left supramarginal gyrus support word processing. Cereb. Cortex. 26, 4212–4226. doi: 10.1093/cercor/bhw251

Occelli, V., Spence, C., and Zampini, M. (2009). Compatibility effects between sound frequency and tactile elevation. Neuroreport 20, 793–797. doi: 10.1097/WNR.0b013e32832b8069

O'Sullivan, A. E., Crosse, M. J., Liberto, G. M. D., de Cheveigné, A., and Lalor, E. C. (2021). Neurophysiological indices of audiovisual speech processing reveal a hierarchy of multisensory integration effects. J. Neurosci. 41, 4991–5003. doi: 10.1523/JNEUROSCI.0906-20.2021

Pala, A., and Stanley, G. B. (2022). Ipsilateral stimulus encoding in primary and secondary somatosensory cortex of awake mice. J. Neurosci. 42, 2701–2715. doi: 10.1523/JNEUROSCI.1417-21.2022

Price, C. J. (2012). A review and synthesis of the first 20 years of PET and fMRI studies of heard speech, spoken language and reading. Neuroimage 62, 816–847. doi: 10.1016/j.neuroimage.2012.04.062

Raichle, M. E. (2009). A paradigm shift in functional brain imaging. J. Neurosci. 29, 12729–12734. doi: 10.1523/JNEUROSCI.4366-09.2009

Răutu, I. S., De Tiège, X., Jousmäki, V., Bourguignon, M., and Bertels, J. (2023). Speech-derived haptic stimulation enhances speech recognition in a multi-talker background. Sci. Rep. 13:16621. doi: 10.1038/s41598-023-43644-3

Reich, L., Szwed, M., Cohen, L., and Amedi, A. (2011). A ventral visual stream reading center independent of visual experience. Curr. Biol. 21, 363–368. doi: 10.1016/j.cub.2011.01.040

Reiner, M. (2008). “Seeing through touch: the role of haptic information in visualization,” in Visualization: Theory and Practice in Science Education, eds. J. K. Gilbert, M. Reiner, M.B. Nakhleh (Springer: Netherlands), 73–84. doi: 10.1007/978-1-4020-5267-5_4

Renier, L. A., Anurova, I., De Volder, A. G., Carlson, S., VanMeter, J., Rauschecker, J. P., et al. (2009). Multisensory integration of sounds and vibrotactile stimuli in processing streams for “what” and “where”. J. Neurosci. 29, 10950–10960. doi: 10.1523/JNEUROSCI.0910-09.2009

Rosazza, C., Minati, L., Ghielmetti, F., Mandelli, M. L., and Bruzzone, M. G. (2012). Functional connectivity during resting-state functional MR imaging: study of the correspondence between independent component analysis and region-of-interest–based methods. Am. J. Neuroradiol. 33, 180–187 doi: 10.3174/ajnr.A2733

Russo, F. A., Ammirante, P., and Fels, D. I. (2012). Vibrotactile discrimination of musical timbre. J. Exp. Psychol. Hum. Percept. Perform. 38:822. doi: 10.1037/a0029046

Schnupp, J., Nelken, I., and King, A. (2011). Auditory Neuroscience: Making Sense of Sound. Cambridge, MA: MIT Press. doi: 10.7551/mitpress/7942.001.0001

Schulte, A., Marozeau, J., Ruhe, A., Büchner, A., Kral, A., Innes-Brown, H., et al. (2023). Improved speech intelligibility in the presence of congruent vibrotactile speech input. Sci. Rep. 13:22657. doi: 10.1038/s41598-023-48893-w

Seydell-Greenwald, A., Wang, X., Newport, E. L., Bi, Y., and Striem-Amit, E. (2023). Spoken language processing activates the primary visual cortex. PLoS ONE 18:e0289671. doi: 10.1371/journal.pone.0289671

Sharp, A., Houde, M. S., Maheu, M., Ibrahim, I., and Champoux, F. (2019). Improve tactile frequency discrimination in musicians. Exp. Brain Res. 237, 1–6. doi: 10.1007/s00221-019-05532-z

Sheth, V., and Young, R. (2014). Ventral and dorsal streams in cortex: focal vs. ambient processing/exploitation vs. exploration. J. Vision. 14:51. doi: 10.1167/14.10.51

Smitha, K. A., Akhil Raja, K., Arun, K. M., Rajesh, P. G., Thomas, B., Kapilamoorthy, T. R., et al. (2017). Resting state fMRI: a review on methods in resting state connectivity analysis and resting state networks. Neuroradiol. J. 30, 305–317. doi: 10.1177/1971400917697342

Snir, A., Cieśla, K., Ozdemir, G., Vekslar, R., and Amedi, A. (2024). Localizing 3D motion through the fingertips: following in the footsteps of elephants. iScience. 27:109820. doi: 10.1016/j.isci.2024.109820

Sun, X., Li, L., Ding, G., Wang, R., and Li, P. (2019). Effects of language proficiency on cognitive control: evidence from resting-state functional connectivity. Neuropsychologia 129, 263–275. doi: 10.1016/j.neuropsychologia.2019.03.020

Tal, Z., Geva, R., and Amedi, A. (2016). The origins of metamodality in visual object area LO: bodily topographical biases and increased functional connectivity to S1. NeuroImage 127, 363–375. doi: 10.1016/j.neuroimage.2015.11.058

Tamè, L., Braun, C., Lingnau, A., Schwarzbach, J., Demarchi, G., Li Hegner, Y., et al. (2012). The contribution of primary and secondary somatosensory cortices to the representation of body parts and body sides: an fMRI adaptation study. J. Cogn. Neurosci. 24, 2306–2320. doi: 10.1162/jocn_a_00272

Tamè, L., Pavani, F., Braun, C., Salemme, R., Farnè, A., Reilly, K. T., et al. (2015). Somatotopy and temporal dynamics of sensorimotor interactions: evidence from double afferent inhibition. Eur. J. Neurosci. 41, 1459–1465. doi: 10.1111/ejn.12890

Tie, Y., Rigolo, L., Norton, I. H., Huang, R. Y., Wu, W., Orringer, O., et al. (2014). Defining language networks from resting-state fMRI for surgical planning – a feasibility study. Hum. Brain Mapp. 35, 1018–1030. doi: 10.1002/hbm.22231

Uddin, L. Q., Nomi, J. S., Hébert-Seropian, B., Ghaziri, J., and Boucher, O. (2017). Structure and function of the human insula. J. Clin. Neurophysiol. 34, 300–306. doi: 10.1097/WNP.0000000000000377

von Békésy, G. (1959). Similarities between hearing and skin sensations. Psychol. Rev. 66, 1–22. doi: 10.1037/h0046967

Van der Stoep, N., and Alais, D. (2020). Motion perception: auditory motion encoded in a visual motion area. Curr. Biol. 30, R775–R778. doi: 10.1016/j.cub.2020.05.010

Walkowiak, A., Kostek, B., Lorens, A., Obrycka, A., Wasowski, A., Skarzynski, H., et al. (2010). Spread of excitation (SoE) - a non-invasive assessment of cochlear implant electrode placement. Cochlear Implants Int. 11, 479–481. doi: 10.1179/146701010X12671177204787

Wang, X., Wu, Q., Egan, L., Gu, X., Liu, P., Gu, H., et al. (2019). Anterior insular cortex plays a critical role in interoceptive attention. Elife 8:e42265. doi: 10.7554/eLife.42265.038

Weisenberger, J. M., and Percy, M. E. (1995). The transmission of phoneme-level information by multichannel tactile speech perception aids. Ear. Hear. 16, 392–406. doi: 10.1097/00003446-199508000-00006

Woolnough, O., Forseth, K. J., Rollo, P. S., and Tandon, N. (2019). Uncovering the functional anatomy of the human insula during speech. Elife 8:e53086. doi: 10.7554/eLife.53086.sa2

World Medical Association (2013). World Medical Association Declaration of Helsinki: ethical principles for medical research involving human subjects. JAMA 310, 2191–2194. doi: 10.1001/jama.2013.281053

Zhu, L., Fan, Y., Zou, Q., Wang, J., Gao, J. H., Niu, Z., et al. (2014). Temporal reliability and lateralization of the resting-state language network. PLoS ONE 9:e85880. doi: 10.1371/journal.pone.0085880

Zühlsdorff, K., Dalley, J. W., Robbins, T. W., and Morein-Zamir, S. (2023). Cognitive flexibility: neurobehavioral correlates of changing one's mind. Cereb. Cortex. 33, 5436–5446. doi: 10.1093/cercor/bhac431

Keywords: speech comprehension, tactile aid, multisensory training, fMRI, resting-state functional MRI, cochlear implants

Citation: Cieśla K, Wolak T and Amedi A (2025) Resting-state functional connectivity changes following audio-tactile speech training. Front. Neurosci. 19:1482828. doi: 10.3389/fnins.2025.1482828

Received: 18 August 2024; Accepted: 04 April 2025;
Published: 29 April 2025.

Edited by:

Corianne Rogalsky, Arizona State University, United States

Reviewed by:

Anna Dondzillo, University of Colorado Anschutz Medical Campus, United States
Alina Schulte, Technical University of Denmark, Denmark

Copyright © 2025 Cieśla, Wolak and Amedi. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Katarzyna Cieśla, kasia.j.ciesla@gmail.com
