OPINION article

Front. Hum. Neurosci., 15 November 2023
Sec. Cognitive Neuroscience
Volume 17 - 2023 | https://doi.org/10.3389/fnhum.2023.1295431

Opinion on the event-related potential signature of automatic detection of violated regularity (visual mismatch negativity): non-perceptual but predictive

  • Institute of Cognitive Neuroscience and Psychology, Research Center of Natural Sciences, Budapest, Hungary

Introduction

I argue that the electrophysiological phenomenon considered to be the signature of automatic visual change detection, the visual mismatch negativity (vMMN), is post-perceptual.1 In other words, it emerges after the identification of the eliciting events. My argument is that the simple visual processes leading to stimulus identification are faster than the time range of the vMMN component of event-related potentials (ERPs).

When stimulus sequences are presented that contain frequent, physically or categorically identical visual stimuli (standards) and rare stimuli that violate the regularity of the standards (deviants), an ERP component, the vMMN, emerges, even if these stimuli are unrelated to the ongoing task (passive oddball paradigm). It is considered the counterpart of the auditory mismatch negativity (for reviews, see Stefanics et al., 2014 and Garrido et al., 2009, for the visual and auditory MMN, respectively). Sources of this component have been identified in various regions of the visual cortex as well as, in some cases, in the frontal cortex (e.g., Kimura et al., 2010). The vMMN latency range is 120–350 ms post-stimulus. VMMN is elicited by events unrelated to the ongoing task, outside the location of the task, and even by stimuli outside conscious awareness (Jack et al., 2017; Berti, 2011); therefore it is considered a result of automatic processes.
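The structure of such a passive oddball sequence can be sketched in a few lines. The probabilities and the spacing constraint below are illustrative choices, not parameters taken from any particular vMMN study:

```python
import random

def oddball_sequence(n_trials, p_deviant=0.1, min_standards_between=2, seed=0):
    """Generate a passive-oddball stimulus sequence.

    Returns a list of 'standard'/'deviant' labels. Deviants are separated by
    at least `min_standards_between` standards, a common design constraint.
    All parameters are illustrative, not from any specific study.
    """
    rng = random.Random(seed)
    seq, since_last_deviant = [], min_standards_between
    for _ in range(n_trials):
        if since_last_deviant >= min_standards_between and rng.random() < p_deviant:
            seq.append("deviant")
            since_last_deviant = 0
        else:
            seq.append("standard")
            since_last_deviant += 1
    return seq

seq = oddball_sequence(500)
# proportion of deviants: close to, but below, p_deviant due to the spacing rule
print(seq.count("deviant") / len(seq))
```

Spacing deviants apart is a common (though not universal) design choice; it prevents deviant repetitions from becoming local regularities themselves.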

VMMN can be elicited by deviant visual features such as color (Athanasopoulos et al., 2010), spatial frequency (Heslenfeld, 2003), movement direction (Lorenzo-López et al., 2004), and orientation (Kimura et al., 2009). VMMN is also elicited by higher-order deviant visual characteristics, like object-related deviancy (Müller et al., 2010), facial attributes (Astikainen and Hietanen, 2009; Kecskés-Kovács et al., 2013; Kreegipuu et al., 2013), and semantic categories (Hu et al., 2020).

The dominant account of the function of the processes underlying the (auditory) MMN and the vMMN is predictive coding theory. In summary, the mechanism underlying the (auditory) MMN, and, mutatis mutandis, the vMMN, is, as Garrido et al. (2009) clearly state, brain activity within perception: “Predictive coding (or, more generally, hierarchical inference in the brain) states that perception arises from integrating sensory information from the environment and our prediction based on a model of what caused that sensory information” (p. 459). Accordingly, vMMN is the signature of a cascade (hierarchy) of comparison processes between the representation of the actual input events and a memory store of expected events. The hierarchical predictive coding framework claims that veridical perception is supported by neural processes optimizing probabilistic representations of the causes of sensory inputs (Friston, 2010). In vMMN studies, the probabilistic representation is based on the high probability of the standard. The cascade terminates when the updated memory content matches the input representation. This account claims that vMMN depends on the characteristics of the memory content established by the regular event sequences. Presenting an expected event (a standard stimulus in the oddball paradigm) does not elicit vMMN, but a change in the event sequence (a deviant stimulus) initiates the cascade-type activity underlying vMMN. VMMN is an “error signal,” and the function of the “vMMN-generating system” is to update our predictive model of the world through prediction errors and to infer the likely causes of the sensory inputs. According to this account, vMMN is a signature of activity within the perceptual system, and the processes underlying vMMN serve the identification of deviant events.
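The prediction-error logic of this account can be shown with a toy simulation. This is an illustration of the bookkeeping only, not a model of cortical vMMN generation; the stimulus coding (standard = 0, deviant = 1) and the learning rate are arbitrary:

```python
def update(prediction, observation, lr=0.2):
    """One step of a toy predictive model: the prediction error drives
    the update of the internal model (illustration only)."""
    error = observation - prediction
    return prediction + lr * error, error

pred = 0.0
stimuli = [0.0] * 8 + [1.0] + [0.0] * 4   # standards, one deviant, standards
errors = []
for obs in stimuli:
    pred, err = update(pred, obs)
    errors.append(abs(err))
# the error stays at zero for the repeated standards and spikes at the deviant,
# then decays as the model re-adapts to the standard
```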

The time range of vMMN

VMMN is usually measured on difference potentials, in which the ERPs to the standard are subtracted from the ERPs to the deviant.2 Deviant minus standard differences may appear in an earlier (120–200 ms) and a later (200–300/350 ms) range (e.g., Maekawa et al., 2005; File et al., 2017). The earlier latency range corresponds to the N1/N170 ERP component, and this deviant-standard difference is frequently attributed to stimulus-specific adaptation (repetition suppression) of this component. The later component, sometimes called the “genuine vMMN,” is considered a consequence of additional processes, attributed to the cascade processes of the predictive coding account. The equal probability (EQ) control is used to separate the two effects. In the EQ sequence, stimuli physically identical to those in the oddball sequence are embedded among stimuli varying in the same parameter (e.g., orientation, but with different values). The probability of each stimulus type is equal to the probability of the oddball deviant. The ERPs elicited by the oddball deviant and the physically identical EQ stimuli are then compared (deviant minus control difference potential; Jacobsen and Schröger, 2001). In some studies this control eliminated the earlier part of the deviant minus standard difference (e.g., Kimura et al., 2009), but in other studies it resulted in an early “genuine vMMN” (e.g., for grating patterns, Susac et al., 2014; for windmill patterns, File et al., 2017, and also Kovarski et al., 2017; for dot patterns, Beck et al., 2021; for human faces, Li et al., 2012).
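The subtraction logic can be made concrete with a small sketch. The arrays below are random numbers standing in for single-trial ERP epochs; the trial counts, amplitudes, and sampling parameters are hypothetical:

```python
import numpy as np

def difference_wave(erps_a, erps_b):
    """Average across trials, then subtract: mean(a) - mean(b).
    erps_*: arrays of shape (n_trials, n_samples)."""
    return erps_a.mean(axis=0) - erps_b.mean(axis=0)

rng = np.random.default_rng(0)
n_samples = 300                                       # e.g., 600 ms at 500 Hz
standard = rng.normal(0.0, 1.0, (400, n_samples))     # frequent standards
deviant  = rng.normal(-0.5, 1.0, (50, n_samples))     # rare, more negative ERP
control  = rng.normal(-0.2, 1.0, (50, n_samples))     # equal-probability control

dev_minus_std  = difference_wave(deviant, standard)   # adaptation + additional effects
dev_minus_ctrl = difference_wave(deviant, control)    # isolates the "genuine" vMMN
```

The deviant-minus-control wave is smaller here than the deviant-minus-standard wave because the control stimuli already share the low probability (and hence the reduced adaptation) of the deviant.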

At the later time range, deviant minus standard ERP differences appear after 250 ms, e.g., for dot motion (Rowe et al., 2020), facial emotions (e.g., Xiong et al., 2022), spatial frequency-filtered faces (Lacroix et al., 2022), spatial frequency (Tales et al., 2008), color (Sultson et al., 2019), and line orientation (Kimura et al., 2009; but see Male et al., 2020).

In summary, vMMN emerges no earlier than 120 ms and frequently terminates later than 300 ms. I claim that this range (i.e., including the earlier vMMN range) is later than the time needed to identify images depicting single objects, objects within contexts, and scenes.

Data from reaction time for identifying visual events

The most direct method of measuring the time needed to identify (plus decide on and respond to) visual events is the “go/no-go categorization task,” first introduced by Thorpe et al. (1996). Participants performed animal/non-animal categorization in a reaction time (RT) situation with concomitant EEG recording. Unmasked pictures of natural scenes were presented for 20 ms. The earliest behavioral responses were shorter than 300 ms. Interestingly, the onset of differential ERP activity between the two categories started 150 ms after stimulus onset. The research group developed a measure they termed minRT: the first 10-ms time bin in which correct responses start to significantly outnumber incorrect ones (Fabre-Thorpe et al., 1998; VanRullen and Thorpe, 2001). MinRT for various categories (vehicles, animals) and in many variations of the paradigm (e.g., Rousselet et al., 2002; Macé et al., 2005; Joubert et al., 2007) was ~260 ms. Importantly, beyond stimulus processing, this duration includes the decision process, motor organization, and response execution. Furthermore, with another response mode, when a saccadic eye movement to the target category (natural scenes with or without an animal) was measured instead of a hand response, minRT was 120 ms (Kirchner and Thorpe, 2006).
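A simplified version of the minRT criterion can be sketched as follows: walk over 10-ms bins and return the first one in which correct responses significantly outnumber incorrect ones by a one-sided sign test. This is an illustrative reconstruction, not the authors' exact statistical procedure:

```python
from math import comb

def binom_p_onesided(k, n, p=0.5):
    """P(X >= k) for X ~ Binomial(n, p): one-sided sign test."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

def min_rt(correct_rts, incorrect_rts, bin_ms=10, alpha=0.05):
    """Simplified minRT: first `bin_ms` bin in which correct responses
    significantly outnumber incorrect ones. RTs in milliseconds.
    Returns the bin's lower edge, or None if no bin qualifies."""
    max_rt = max(correct_rts + incorrect_rts)
    for start in range(0, int(max_rt) + bin_ms, bin_ms):
        c = sum(start <= rt < start + bin_ms for rt in correct_rts)
        e = sum(start <= rt < start + bin_ms for rt in incorrect_rts)
        if c > e and binom_p_onesided(c, c + e) < alpha:
            return start
    return None
```

For example, a cluster of seven correct responses (and no errors) in the 260–270 ms bin would yield minRT = 260 ms, since the sign-test probability 0.5⁷ ≈ 0.008 falls below 0.05.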

Data from short-term conceptual memory

Potter and her colleagues conducted a series of studies presenting pictures with the rapid serial visual presentation (RSVP) method. In her now-classic study, Potter (1976) presented sequences of pictures with short exposure durations (the lowest being 113 ms) and an inter-stimulus interval of 0 ms. A picture was a target when it contained a scene described before the sequence was presented. Detection of the target pictures was better than chance, even at 113 ms. Note that each picture masked the previous one; therefore, processing time was restricted to the exposure duration. In a later study, Hagmann and Potter (2016) used even shorter exposure times. In sequences of six pictures, with the target defined after the sequence (i.e., without the possible involvement of the attentional blink after the target; see Martens and Wyble, 2010, for a review), performance above chance level was observed for color, greyscale, and blurred pictures below 80 ms. It should be noted that pictures with only low spatial frequencies were not detected significantly above chance at 80 ms exposure, showing that a presumed gist provided by the fast-conducting magnocellular system (e.g., Bar, 2009) was insufficient for the identification of the pictures. Concerning vMMN studies, in the case of oddball deviants like facial emotions (Kreegipuu et al., 2013), gender (Kecskés-Kovács et al., 2013), or right vs. left hands (Stefanics and Czigler, 2012), the contribution of parvocellular activity seems to be necessary; therefore, the fast identification indicated by the above studies holds for the processing of stimuli in vMMN studies.

Data from steady-state evoked potential

In a passive paradigm (participants reacted to color changes of a fixation dot), Stothart et al. (2017) presented pictures of objects for 80 ms, with an 80 ms inter-stimulus interval. Every fifth picture (the deviant) was from a different semantic category (e.g., non-living objects as deviants and animals as standards). According to the harmonic analysis, the amplitude of the steady-state visual evoked potential increased at the frequency of the deviants (1.25 Hz), showing that with this RSVP procedure, 160 ms (from stimulus onset to stimulus onset) was enough for the implicit detection of a semantic category. Using related methods and facial stimuli, Rossion and his colleagues obtained similar results (for a review, see Rossion et al., 2020).
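The frequencies in this design follow directly from the presentation parameters: an 80 ms stimulus plus an 80 ms inter-stimulus interval gives a 160 ms stimulus-onset asynchrony, i.e., a 6.25 Hz base rate, and with every fifth stimulus a deviant, a 1.25 Hz deviant rate:

```python
def oddball_frequencies(stim_ms, isi_ms, deviant_every_n):
    """Base and deviant stimulation frequencies of a periodic-oddball
    RSVP design (as in Stothart et al., 2017: 80 ms on, 80 ms off,
    every 5th stimulus a deviant)."""
    soa_s = (stim_ms + isi_ms) / 1000.0   # stimulus-onset asynchrony, seconds
    base_hz = 1.0 / soa_s                 # frequency of all stimulus onsets
    deviant_hz = base_hz / deviant_every_n
    return base_hz, deviant_hz

base, dev = oddball_frequencies(80, 80, 5)  # 6.25 Hz base, 1.25 Hz deviant
```

A response at the 1.25 Hz deviant frequency (and its harmonics) can only arise if the brain discriminates the deviant category from the standards; the 6.25 Hz base response reflects processing common to all stimuli.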

Conclusions

Visual events violating sequential regularities elicit a modality-specific event-related potential (ERP) component, the visual mismatch negativity (vMMN), in the 120–350 ms latency range. Data from visual discrimination, identification, and steady-state evoked potential studies show that its latency is longer than the time needed to identify even complex visual events. Therefore, it is improbable that vMMN is a signature of the early phases of stimulus identification. In the paradigms investigating vMMN (mainly the passive oddball situation), events appear in the context of other events; the context is the set of standard stimuli. Due to the regular appearance of the standards, the system is capable of developing predictions about forthcoming events. I agree that the processes underlying vMMN produce error signals, but these are signals of a mismatch between expected and unexpected (unpredicted, surprising) identified events. These events have potential importance, even in a passive oddball paradigm. This suggestion is not new; it fits the early interpretation of the (auditory) MMN by Risto Näätänen: “The mismatch negativity was interpreted by Näätänen et al. (1978) as reflecting a neural mismatch process, which represents an early, preattentive stage of change detection in the central nervous system and may, according to Näätänen (1979) lead to the elicitation of the orienting response…” (Lyytinen et al., 1992, p. 523). However, in the case of an insignificant mismatch, the other components of the orienting reaction (activation and motor components) do not emerge (Lyytinen et al., 1992).

Author contributions

IC: Conceptualization, Writing—original draft.

Funding

The author(s) declare financial support was received for the research, authorship, and/or publication of this article. Project no. 143178 has been supported by the Ministry of Innovation and Technology of Hungary from the National Research, Development and Innovation Fund, financed under the OTKA-K funding schema.

Acknowledgments

I thank Petia Kojouharova for her help in the preparation of this paper.

Conflict of interest

The author declares that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

The author(s) declared that they were an editorial board member of Frontiers, at the time of submission. This had no impact on the peer review process and the final decision.

Publisher's note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

Footnotes

1. ^In this study, the term visual perception (and perceptual) covers the whole set of processes arising from the activity of all structures that contribute to visual processing, i.e., both the lower and the higher levels of the visual brain. In functional terms, it involves all processes leading to the identification of visual events, with or without the involvement of consciousness.

2. ^Recent studies apply the reverse control method: the deviant of one sequence is identical to the standard of the other sequence, and vice versa. In this way, physically identical stimuli are compared in the roles of deviant and standard.

References

Astikainen, P., and Hietanen, J. K. (2009). Event-related potentials to task-irrelevant changes in facial expressions. Behav. Brain Funct. 5, 30. doi: 10.1186/1744-9081-5-30

Athanasopoulos, P., Dering, B., Wiggett, A., Kuipers, J.-R., and Thierry, G. (2010). Perceptual shift in bilingualism: brain potentials reveal plasticity in pre-attentive colour perception. Cognition 116, 437–443. doi: 10.1016/j.cognition.2010.05.016

Bar, M. (2009). The proactive brain: memory for predictions. Philos. Trans. R. Soc. London B 364, 1235–1243. doi: 10.1098/rstb.2008.0310

Beck, A.-K., Berti, S., Czernochowski, D., and Lachmann, T. (2021). Do categorical representations modulate early automatic visual processing? A visual mismatch-negativity study. Biol. Psychol. 163, 108139. doi: 10.1016/j.biopsycho.2021.108139

Berti, S. (2011). The attentional blink demonstrates automatic deviance processing in vision. Neuroreport 22, 664–667. doi: 10.1097/WNR.0b013e32834a8990

Fabre-Thorpe, M., Richard, G., and Thorpe, S. J. (1998). Rapid categorization of natural images by rhesus monkeys. Neuroreport 9, 303–308. doi: 10.1097/00001756-199801260-00023

File, D., Bodnar, F., Sulykos, I., Kecskes-Kovacs, K., and Czigler, I. (2017). Visual mismatch negativity (vMMN) for low- and high-level deviances: a control study. Atten. Percept. Psychophys 79, 2153–2170. doi: 10.3758/s13414-017-1373-y

Friston, K. J. (2010). The free-energy principle: a unified brain theory? Nat. Rev. Neurosci. 11, 127–138. doi: 10.1038/nrn2787

Garrido, M. I., Kilner, J. M., Stephan, K. E., and Friston, K. J. (2009). The mismatch negativity: a review of underlying mechanisms. Clin. Neurophysiol. 120, 453–463. doi: 10.1016/j.clinph.2008.11.029

Hagmann, C. E., and Potter, M. C. (2016). Ultrafast scene detection and recognition with limited visual information. Vis. Cogn. 24, 2–14. doi: 10.1080/13506285.2016.1170745

Heslenfeld, D. I. (2003). “Visual mismatch negativity,” in Detection of Change: Event-Related Potential and fMRI Findings, ed. J. Polich (Boston, MA: Kluwer Academic Press), 41–59. doi: 10.1007/978-1-4615-0294-4_3

Hu, A., Gu, F., Wong, L. L. N., Tong, X., and Zhang, X. (2020). Visual mismatch negativity elicited by semantic violations in visual words. Brain Res. 1746, 147010. doi: 10.1016/j.brainres.2020.147010

Jacobsen, T., and Schröger, E. (2001). Is there pre-attentive memory-based comparison of pitch? Psychophysiology 38, 723–727. doi: 10.1111/1469-8986.3840723

Jack, B. N., Widmann, A., O'Shea, R. P., Schröger, E., and Roeber, U. (2017). Brain activity from stimuli that are not perceived: visual mismatch negativity during binocular rivalry suppression. Psychophysiology 54, 755–763. doi: 10.1111/psyp.12831

Joubert, O. R., Rousselet, G. A., Fize, D., and Fabre-Thorpe, M. (2007). Processing scene context: fast categorization and object interference. Vision Res. 47, 3286–3297. doi: 10.1016/j.visres.2007.09.013

Kecskés-Kovács, K., Sulykos, I., and Czigler, I. (2013). Is it a face of a woman or a man? Visual mismatch negativity is sensitive to gender category. Front. Hum. Neurosci. 7, 532. doi: 10.3389/fnhum.2013.00532

Kimura, M., Katayama, J., Ohira, H., and Schröger, E. (2009). Visual mismatch negativity: new evidence from the equiprobable paradigm. Psychophysiology 46, 402–409. doi: 10.1111/j.1469-8986.2008.00767.x

Kimura, M., Ohira, H., and Schröger, E. (2010). Localizing sensory and cognitive systems for pre-attentive visual deviance detection: an sLORETA analysis of the data of Kimura et al. (2009). Neurosci. Lett. 485, 198–203. doi: 10.1016/j.neulet.2010.09.011

Kirchner, H., and Thorpe, S. J. (2006). Ultra-rapid object detection with saccadic eye movements: visual processing speed revisited. Vision Res. 46, 1762–1776. doi: 10.1016/j.visres.2005.10.002

Kovarski, K., Latinus, M., Charpentier, J., Cléry, H., Roux, S., Houy-Durand, E., et al. (2017). Facial expression related vMMN: disentangling emotional from neutral change detection. Front. Hum. Neurosci. 11, 18. doi: 10.3389/fnhum.2017.00018

Kreegipuu, K., Kuldkepp, N., Sibolt, O., Toom, M., Allik, J., and Näätänen, R. (2013). vMMN for schematic faces: automatic detection of change in emotional expression. Front. Hum. Neurosci. 7, 714. doi: 10.3389/fnhum.2013.00714

Lacroix, A., Harquel, S., Mermillod, M., Vercueil, L., Alleysson, D., Dutheil, F., Kovarski, K., and Gomot, M. (2022). The predictive role of low spatial frequencies in automatic face processing: a visual mismatch negativity investigation. Front. Hum. Neurosci. 16, 838454. doi: 10.3389/fnhum.2022.838454

Li, X., Lu, Y., Sun, G., Gao, L., and Zhao, L. (2012). Visual mismatch negativity elicited by facial expressions: new evidence from the equiprobable paradigm. Behav. Brain Funct. 8, 7. doi: 10.1186/1744-9081-8-7

Lorenzo-López, L., Amenedo, E., Pazo-Alvarez, P., and Cadaveira, F. (2004). Pre-attentive detection of motion direction changes in normal aging. Neuroreport 15, 2633–2636. doi: 10.1097/00001756-200412030-00015

Lyytinen, H., Blomberg, A.-P., and Näätänen, R. (1992). Event-related potentials and autonomic responses to a change in unattended auditory stimuli. Psychophysiology 29, 523–534. doi: 10.1111/j.1469-8986.1992.tb02025.x

Macé, M. J.-M., Thorpe, S. J., and Fabre-Thorpe, M. (2005). Rapid categorization of achromatic natural scenes: how robust at very low contrasts? Eur. J. Neurosci. 21, 2007–2018. doi: 10.1111/j.1460-9568.2005.04029.x

Maekawa, T., Goto, Y., Kinukawa, N., Taniwaki, T., Kanba, S., and Tobimatsu, S. (2005). Functional characterization of mismatch negativity to a visual stimulus. Clin. Neurophysiol. 116, 2392–2402. doi: 10.1016/j.clinph.2005.07.006

Male, A. G., O'Shea, R. P., Schröger, E., Müller, D., Roeber, U., and Widmann, A. (2020). The quest for the genuine visual mismatch negativity (vMMN): event-related potential indications of deviance detection for low-level visual features. Psychophysiology 57, e13576. doi: 10.1111/psyp.13576

Martens, S., and Wyble, B. (2010). The attentional blink: past, present, and future of a blind spot in perceptual awareness. Neurosci. Biobehav. Rev. 34, 947–957. doi: 10.1016/j.neubiorev.2009.12.005

Müller, D., Winkler, I., Roeber, U., Schaffer, S., Czigler, I., and Schröger, E. (2010). Visual object representations can be formed outside the focus of voluntary attention: evidence from event-related brain potentials. J. Cogn. Neurosci. 22, 1179–1188. doi: 10.1162/jocn.2009.21271

Näätänen, R. (1979). “Orienting and evoked potentials,” in The Orienting Reflex in Humans, eds H. D. Kimmel, E. H. von Olst, and J. F. Orlebeke (Hillsdale, NJ: Erlbaum), 61–75. doi: 10.4324/9781003171409-4

Näätänen, R., Gaillard, A. W. K., and Mäntysalo, S. (1978). Early selective attention effect on evoked potential reinterpreted. Acta Psychol. 42, 313–329. doi: 10.1016/0001-6918(78)90006-9

Potter, M. C. (1976). Short-term conceptual memory for pictures. J. Exp. Psychol. Hum. Learn. 2, 509–522. doi: 10.1037/0278-7393.2.5.509

Rossion, B., Retter, T. L., and Liu-Shuang, J. (2020). Understanding human individuation of unfamiliar faces with oddball fast periodic visual stimulation and electroencephalography. Eur. J. Neurosci. 10, 4283–4344. doi: 10.1111/ejn.14865

Rousselet, G. A., Fabre-Thorpe, M., and Thorpe, S. J. (2002). Parallel processing in high-level categorization of natural images. Nat. Neurosci. 5, 629–630. doi: 10.1038/nn866

Rowe, E. G., Tsuchiya, N., and Garrido, M. I. (2020). Detecting (un)seen change: the neural underpinnings of (un)conscious prediction errors. Front. Syst. Neurosci. 14, 541670. doi: 10.3389/fnsys.2020.541670

Stefanics, G., and Czigler, I. (2012). Automatic prediction error responses to hands with unexpected laterality: an electrophysiological study. Neuroimage 63, 253–261. doi: 10.1016/j.neuroimage.2012.06.068

Stefanics, G., Kremlácek, J., and Czigler, I. (2014). Visual mismatch negativity: a predictive coding view. Front. Hum. Neurosci. 8, 666. doi: 10.3389/fnhum.2014.00666

Stothart, G., Quadflieg, S., and Milton, A. A. (2017). A fast and implicit measure of semantic categorization using steady state visual evoked potentials. Neuropsychologia 102, 11–18. doi: 10.1016/j.neuropsychologia.2017.05.025

Sultson, H., Vainik, U., and Kreegipuu, K. (2019). Hunger enhances automatic processing of food and non-food stimuli: a visual mismatch negativity study. Appetite 133, 324–336. doi: 10.1016/j.appet.2018.11.031

Susac, A., Heslenfeld, D. J., Huonker, R., and Supek, S. (2014). Magnetic source localization of early visual mismatch response. Brain Topogr. 27, 648–651. doi: 10.1007/s10548-013-0340-8

Tales, A., Haworth, J., Wilcock, G., Newton, P., and Butler, S. (2008). Visual mismatch negativity highlights abnormal pre-attentive visual processing in mild cognitive impairment and Alzheimer's disease. Neuropsychologia 46, 1224–1232. doi: 10.1016/j.neuropsychologia.2007.11.017

Thorpe, S. J., Fize, D., and Marlot, C. (1996). Speed of processing in the human visual system. Nature 381, 520–522. doi: 10.1038/381520a0

VanRullen, R., and Thorpe, S. J. (2001). Is it a bird? Is it a plane? Ultrarapid visual categorisation of natural and artifactual objects. Perception 30, 655–668. doi: 10.1068/p3029

Xiong, M., Ding, X., Kang, T., Zhao, X., Zhao, J., and Liu, J. (2022). Automatic change detection of multiple facial expressions: a visual mismatch negativity study. Neuropsychologia 170, 108234. doi: 10.1016/j.neuropsychologia.2022.108234

Keywords: visual mismatch negativity, post-perceptual, predictive coding, automatic change detection, processing duration

Citation: Czigler I (2023) Opinion on the event-related potential signature of automatic detection of violated regularity (visual mismatch negativity): non-perceptual but predictive. Front. Hum. Neurosci. 17:1295431. doi: 10.3389/fnhum.2023.1295431

Received: 16 September 2023; Accepted: 25 October 2023;
Published: 15 November 2023.

Edited by:

Erich Schröger, Leipzig University, Germany

Reviewed by:

Stefan Berti, Johannes Gutenberg University Mainz, Germany
Ann-Kathrin Beck, University of Kaiserslautern, Germany

Copyright © 2023 Czigler. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: István Czigler, istvan.czigler@ttk.hu
