General Commentary
Commentary: Oscillatory Neuronal Activity Reflects Lexical-Semantic Feature Integration within and across Sensory Modalities in Distributed Cortical Networks
- 1Laboratoire de Psychologie Cognitive, Aix-Marseille Université, Marseille, France
- 2Faculté de Psychologie et des Sciences de l'Éducation, University of Geneva, Geneva, Switzerland
A commentary on
Oscillatory Neuronal Activity Reflects Lexical-Semantic Feature Integration within and Across Sensory Modalities in Distributed Cortical Networks
by van Ackeren, M. J., Schneider, T. R., Musch, K., and Rueschemeyer, S.-A. (2014). J. Neurosci. 34, 14318–14323. doi: 10.1523/JNEUROSCI.0958-14.2014
Semantic knowledge relies on widely distributed brain networks. According to embodied semantics, retrieving the features associated with words reactivates the sensory systems that were engaged during the encoding of the objects or entities those words refer to (Barsalou, 2008). This does not preclude that semantic knowledge is organized hierarchically, from modality-specific brain regions to higher-level "convergence zones" or "hubs" (Meyer and Damasio, 2009). This organization could provide the dynamic neural network needed for flexible lexical-semantic representations (Willems and Casasanto, 2011), although it could alternatively be interpreted in favor of amodal, abstract representations (Mahon, 2015). Nevertheless, to what extent are these regions differentially activated, and how do they functionally participate in context-directed semantic integration? Finally, which parts of this network are fundamentally modal, cross-modal, or amodal, and what the specific role of the anterior temporal lobe (ATL) is, remain areas of debate.
Paradigm and Main Results
In a recent study published in the Journal of Neuroscience, van Ackeren et al. (2014) attempted to determine whether words are processed differently when read in the context of words strongly associated with particular sensory modalities. The authors used magnetoencephalographic (MEG) recordings and a dual property-verification task, in which two feature words were followed by a target word. Feature words either referred to a single modality (modality-specific, or MS; e.g., "red," "big," i.e., vision) or to different modalities (cross-modal, or CM; e.g., "red," "loud," i.e., vision/audition). The subjects' task was to evaluate whether the feature words were appropriate descriptors of the target.
Van Ackeren et al. (2014) focused on oscillatory activity across a wide range of frequencies (from 2 to 150 Hz) during the processing of the target word. In the high frequencies (gamma range: 80–120 Hz), enhanced power was observed early (150–350 ms) for the MS > CM contrast in a cluster of occipito-temporal sensors. In contrast, in the low frequencies (theta range: 2–8 Hz), late effects (580–1000 ms) were found, with enhanced power for CM > MS primes in a cluster of left-lateralized sensors.
First of all, the authors' underlying hypothesis is that if a target word automatically activates all of its associated semantic features, the prior integration of feature words, whether MS or CM, should not influence the processing of the target word. However, the theoretical assumption behind this paradigm, namely that target-word processing reflects time-locked integration of the feature words, needs to be handled carefully. In such a paradigm, the feature words are most probably processed during their own presentation. This presumably activates different networks, which in turn influence the processing of the subsequent word. Although enhanced theta oscillations in the CM condition might reflect residual far-reaching connections between areas (van Ackeren and Rueschemeyer, 2014), what goes on within these distributed networks as the feature words are being integrated with each other is left unexamined. One could already expect differences, whether evoked or induced, related to the presentation of the second feature word and to whether both words refer to the same modality or to different ones (e.g., the N400; Kutas and Federmeier, 2011).
The main result discussed in van Ackeren et al. (2014) is that the opposite gamma and theta effects both originate in the left ATL. This suggests that the ATL is sensitive to cross-modality, which is consistent with its anatomical connections to sensory areas. However, the ATL presents fine-grained subdivisions whose functions might differ (Bonner and Price, 2013). Which part of the ATL is functionally involved in combining specifically auditory-visual information, as opposed to general multimodal semantic information, thus remains to be clarified. Accordingly, how do these results relate to the "amodality" of the ATL? An amodal hub should be insensitive to the modality of a word's features (haptic, olfactory, etc.) and to the stimulus input modality (pictures, sounds, etc.), which was not tested here. In this regard, the ATL, but also the precuneus (Fairhall and Caramazza, 2013), the posterior part of the MTG, the angular gyrus (Binder et al., 2009), and the right ATL (Jefferies, 2013) are all valid candidates. Moreover, an amodal hub should be able to generalize across several instantiations of the same concept. This implies connecting with other brain areas to form a dynamic network within which relevant core features can be extracted. This "neuronal assembly" view of memory traces clearly contrasts with the task at hand, which deals with on-the-spot feature integration, i.e., with integrating features that can be related to the target concept but are probably not its core features. Still, van Ackeren et al. propose two distinct mechanisms for combining information, according to the modality(ies) it relates to. This promising mechanistic framework could well underlie the learning of new features.
However, any hypothesis about network dynamics falls short, as the authors did not explicitly study interactions between the brain regions involved. Effective connectivity analyses could shed light on the organization of the semantic network, from sensory systems toward potential hubs. If the ATL is indeed a "convergence zone," then its activity should be highly related to that of "feature zones" (Patterson et al., 2007; Coutanche and Thompson-Schill, 2014). In particular, the authors report similar gamma activity in sensory areas and the ATL. As information is progressively processed and integrated, these activities should become coherent, which could be assessed via the Phase-Locking Value or any other coherence measure. Such analyses could also help determine the direction of the late theta effect: integration from sensory systems (retrieving the concept) or reaching out to other brain networks (activating other features; McNorgan et al., 2011). Moreover, measures of cross-frequency phase-amplitude coupling in the ATL could add valuable insight into the relationship between the effects reported in the gamma and theta frequency bands (see Canolty et al., 2006).
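To make concrete the kind of coherence measure suggested above, here is a minimal sketch of a Phase-Locking Value computation on synthetic signals. It uses the standard definition (the magnitude of the time-averaged phase difference between two narrow-band signals, with instantaneous phase extracted via the Hilbert transform); the 6 Hz "ATL" and "sensory-area" time courses are purely hypothetical stand-ins, not data from the commented study.

```python
import numpy as np
from scipy.signal import hilbert

def phase_locking_value(x, y):
    """Phase-Locking Value between two narrow-band signals.

    PLV = |mean over time of exp(i * (phi_x - phi_y))|.
    1 indicates a perfectly constant phase relation; values near 0
    indicate no consistent phase relation.
    """
    phase_x = np.angle(hilbert(x))
    phase_y = np.angle(hilbert(y))
    return np.abs(np.mean(np.exp(1j * (phase_x - phase_y))))

# Synthetic example: two theta-range (6 Hz) signals, the second a
# phase-shifted copy of the first plus noise (hypothetical stand-ins
# for ATL and sensory-area time courses).
fs = 500                       # sampling rate (Hz)
t = np.arange(0, 2, 1 / fs)    # 2 s of signal
rng = np.random.default_rng(0)
atl = np.sin(2 * np.pi * 6 * t) + 0.3 * rng.standard_normal(t.size)
sensory = np.sin(2 * np.pi * 6 * t + np.pi / 4) + 0.3 * rng.standard_normal(t.size)
unrelated = rng.standard_normal(t.size)

print(phase_locking_value(atl, sensory))    # high: consistent phase lag
print(phase_locking_value(atl, unrelated))  # lower: no phase relation
```

In practice such an analysis would be run on band-pass-filtered source time courses across trials rather than on raw signals; the sketch only illustrates the computation itself.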
Van Ackeren et al.'s study of oscillatory patterns within and across neural networks provides valuable evidence on the functional dynamics of semantic integration. In particular, it proposes an interesting mechanistic framework for semantic integration through oscillatory modulations. Studying synchronization between putative hubs and feature zones could enrich the embodiment debate. The extent to which the observed mechanistic differences can be reconciled with the amodal properties of the ATL still requires further investigation. Whether feature names and the features themselves are integrated similarly should also be examined, to disentangle lexical-semantic from multi-sensory processing and shed light on the content of representations.
This work was supported by a doctoral MNRT grant from the Ministère de l'Enseignement et de la Recherche (France) and by the Swiss National Science Foundation (grant number 105319_146113).
Conflict of Interest Statement
The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
The reviewer Kristof Strijkers declared a shared affiliation, though no other collaboration, with one of the authors, Svetlana Pinet, to the handling Editor, who ensured that the process nevertheless met the standards of a fair and objective review.
We thank Patrick Watson for his thorough proofreading of an earlier version of this manuscript and the referees for their fruitful comments.
Binder, J. R., Desai, R. H., Graves, W. W., and Conant, L. L. (2009). Where is the semantic system? A critical review and meta-analysis of 120 functional neuroimaging studies. Cereb. Cortex 19, 2767–2796. doi: 10.1093/cercor/bhp055
Canolty, R. T., Edwards, E., Dalal, S. S., Soltani, M., Nagarajan, S. S., Kirsch, H. E., et al. (2006). High gamma power is phase-locked to theta oscillations in human neocortex. Science 313, 1626–1628. doi: 10.1126/science.1128115
Kutas, M., and Federmeier, K. D. (2011). Thirty years and counting: finding meaning in the N400 component of the event-related brain potential (ERP). Annu. Rev. Psychol. 62, 621–647. doi: 10.1146/annurev.psych.093008.131123
Patterson, K., Nestor, P. J., and Rogers, T. T. (2007). Where do you know what you know? The representation of semantic knowledge in the human brain. Nat. Rev. Neurosci. 8, 976–987. doi: 10.1038/nrn2277
van Ackeren, M. J., and Rueschemeyer, S. A. (2014). Cross-modal integration of lexical-semantic features during word processing: evidence from oscillatory dynamics during EEG. PLoS ONE 9:e101042. doi: 10.1371/journal.pone.0101042
van Ackeren, M. J., Schneider, T. R., Musch, K., and Rueschemeyer, S.-A. (2014). Oscillatory neuronal activity reflects lexical-semantic feature integration within and across sensory modalities in distributed cortical networks. J. Neurosci. 34, 14318–14323. doi: 10.1523/JNEUROSCI.0958-14.2014
Keywords: MEG, cortical oscillations, semantic integration, features verification, semantic hub, sensory modalities
Citation: Pinet S and Fargier R (2016) Commentary: Oscillatory Neuronal Activity Reflects Lexical-Semantic Feature Integration within and across Sensory Modalities in Distributed Cortical Networks. Front. Psychol. 6:2005. doi: 10.3389/fpsyg.2015.02005
Received: 05 October 2015; Accepted: 15 December 2015;
Published: 06 January 2016.
Edited by: Guillaume Thierry, Bangor University, UK
Reviewed by: Olaf Hauk, MRC Cognition and Brain Sciences Unit, UK
Kristof Strijkers, Aix-Marseille Université, France
Keiichi Kitajo, RIKEN Brain Science Institute, Japan
Copyright © 2016 Pinet and Fargier. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
*Correspondence: Svetlana Pinet, email@example.com