ORIGINAL RESEARCH article
Sec. Cognitive Neuroscience
Volume 15 - 2021 | https://doi.org/10.3389/fnhum.2021.707702
Older Adults Automatically Detect Age of Older Adults’ Photographs: A Visual Mismatch Negativity Study
- 1Institute of Cognitive Neuroscience and Psychology, Research Centre for Natural Sciences, Budapest, Hungary
- 2Doctoral School of Psychology (Cognitive Science), Budapest University of Technology and Economics, Budapest, Hungary
The human face is one of the most frequently used stimuli in vMMN (visual mismatch negativity) research. Previous studies showed that vMMN is sensitive to facial emotions and gender, but investigations of age-related vMMN differences are relatively rare. The aim of this study was to investigate whether the models’ age in photographs was automatically detected, even when the photographs were not part of the ongoing task. Furthermore, we investigated age-related differences, and the possibility of different sensitivity to photographs of the participants’ own age versus a different age. We recorded event-related potentials (ERPs) to faces of young and old models in a younger (N = 20; 18–30 years) and an older group (N = 20; 60–75 years). The faces appeared around the field of a central tracking task. Within a sequence, either the young or the old faces were frequent (standards) and the others infrequent (deviants). According to the results, a regular sequence of models’ ages is automatically registered, and faces violating this regularity elicited the vMMN component. However, in this study vMMN emerged only in the older group, and only to same-age deviants. This finding is explained by the less effective inhibition of irrelevant stimuli in the elderly, and corresponds to the own-age bias effect reported in recognition studies.
The information content of the human face encompasses various important pieces of information such as identity, gender, race, age, and emotional state. This set has utmost importance in interpersonal and social behavior. In this study our aim was to investigate the possibility of automatic registration of age by using the visual mismatch negativity (vMMN) component of the event-related potentials (ERPs) of the brain electric activity.
Visual mismatch negativity emerges to visual events that violate the regularities of a stimulus sequence, even if the eliciting stimuli are unrelated to an ongoing task (for reviews see Kimura et al., 2011; Stefanics et al., 2015). VMMN is usually investigated in the passive oddball paradigm. In this paradigm participants perform a visual (or sometimes auditory) task, while the vMMN-related events are presented outside the task’s context as unattended stimuli. The characteristics of the frequent (standard) events of stimulus sequences may acquire representation, even if the characteristics are simple visual features such as color, orientation, spatial frequency, etc. VMMN also emerges to perceptual categories like symmetry (Kecskés-Kovács et al., 2013b) and orderliness (Durant et al., 2017).
The human face is one of the most frequently used stimuli in vMMN research. VMMN is especially sensitive to facial emotions, i.e., to rare (deviant) faces expressing a different emotion from the frequent (standard) faces within the same sequence (e.g., Astikainen and Hietanen, 2009; Li et al., 2012; Stefanics et al., 2012; for a review see Kovarski et al., 2017). In the case of gender as another facial feature, Kecskés-Kovács et al. (2013a) recorded vMMN to faces of female models within sequences of male faces, and vice versa.
In the present study we investigated the possibility of a similar effect, the automatic detection of age, by showing photographs of faces of models of different ages. Furthermore, we compared vMMN differences between older and younger participants. Investigations of age-related vMMN differences are relatively rare. Nevertheless, this is an important topic, because vMMN provides direct evidence about the sensitivity of automatic registration of environmental regularities, and its putative change with aging. So far, in the context of age differences, the majority of vMMN studies have applied low-level deviancies, and the results are equivocal. Lorenzo-López et al. (2004) investigated vMMN to horizontally drifting sinusoidal gratings and obtained a long-lasting posterior negativity. Whereas in the older group the negativity was different from zero only at the Oz electrode site, it had a broader distribution in younger participants. Tales et al. (2002) presented single and double bars as standard and deviant stimuli. VMMN in the younger group emerged in the 250–400 ms range, but in the older group it emerged only in the later part of this range. However, using the same method, Stothart et al. (2013) obtained no age-related differences. Recently, we compared older and younger groups in three studies (Gaál et al., 2017; Sulykos et al., 2017, 2018). In our laboratory, Sulykos et al. (2017) investigated vMMN to the offset of parts of continuously presented objects. An age-related vMMN difference emerged in the 180–220 ms range, but there was no difference in the earlier part of this component. In the Sulykos et al. (2018) study checkerboard stimuli were presented. VMMN appeared in the 100–300 ms range in both age groups, but in the later part of the vMMN the amplitude was smaller in the older group. In contrast with the simple stimuli of the above studies, Gaál et al. (2017) investigated category-related vMMN, i.e., vMMN to letters and pseudo-letters.
The stimuli were presented in pairs of subsequent fragments, and the two fragments together constituted the stimuli as wholes. The main variable was the duration between the onset of the fragments, therefore the integration effects on vMMN were investigated in the two age groups. The integration period of the fragments was longer in the older group, showing longer stimulus persistence in the elderly. As this review of previous studies shows, with the exception of the Gaál et al. (2017) study, only low-level features were investigated in the context of age-related differences. One of the aims of the present study is to investigate age-related effects of automatic detection in the case of complex stimuli violating sequential regularities. As far as we know, this is the first vMMN study that investigated the sensitivity of an older and a younger group in the domain of human faces.
As another aim of this study, we investigated vMMNs to deviant photographs showing models of the same age as, or a different age than, the participants. This issue is related to the phenomenon of own-age bias (OAB). As a considerable body of research shows, people are more efficient in recognizing photographs of faces of their own age than faces of other ages (for reviews see Rhodes and Anastasi, 2012; Wiese et al., 2013). Theories about the OAB proposed that people have more practice in processing faces of others of an age similar to their own. This view emphasizes the importance of the different frequency of encounters between people of different ages (He et al., 2011). As an argument for the importance of encounter frequency, the OAB effect is reduced or even absent in groups with considerable experience with other age groups (Harrison and Hole, 2009; Wiese et al., 2012). It is possible that in a multidimensional system of perception (Valentine, 1991), as an effect of less frequent experience, other-age faces fall farther away from the more discriminative central regions on various dimensions. However, besides the frequency of encounters, motivational and social group relations have also been suggested as underlying mechanisms of OAB. This type of theory was originally proposed for the own-race bias (ORB) in face recognition, an effect stronger than the OAB (Mukudi and Hills, 2019). Sporer (2001) supposed that ingroup-outgroup differentiation is an automatic process. The categorization-individualization model (Hugenberg et al., 2010) proposed that in an initial processing stage face processing is categorical, and individualization is a process at a subsequent stage. In the case of faces of a different age, processing is frequently restricted to the first stage. However, across age groups the OAB is not perfectly symmetrical: according to the results of Bartlett and Leslie (1986) and Wiese et al. (2008), no OAB emerged in groups of older participants.
As the results of some OAB studies show, both stages of the hypothesized process are automatic. Following incidental learning of faces (attractiveness or friendliness ratings, age estimation, or search for a non-facial target feature), subsequent face recognition is similar to that after intentional (attended) learning (Randall et al., 2012; Neumann et al., 2014). To investigate the possibility of automaticity of OAB-related effects and of age-related sensitivity differences, we compared a younger and an older group of participants in a vMMN paradigm with sequences of young standard – old deviant and old standard – young deviant photographs. We applied the method developed by Stefanics et al. (2012) for emotion-related vMMN. Accordingly, we presented four photographs around a central task field. As a modification of the method, to ensure continuous attentional engagement with the task field, we introduced a tracking task. What did we expect in the present study? On a general level we expected the automatic perception of the models’ age, that is, the appearance of a negative deviant minus standard difference potential (vMMN) over the posterior locations within the 200–400 ms post-stimulus latency range. As a more specific possibility, we expected to find an OAB by registering a vMMN difference between the age groups in the young standard – old deviant and old standard – young deviant conditions. According to the categorization-individualization model of OAB (Hugenberg et al., 2010), only own-age photographs are processed at the level of individual features. Such an age-related difference may lead to increased sensitivity to own-age deviants, and accordingly, to a larger deviant minus standard ERP difference for photographs of models of the same age as the participants.
Age differences between photographed faces per se elicit ERP differences. As an example, in a gender categorization task a larger anterior positivity and a smaller anterior negativity emerged to old faces in a younger group, and in the same group a larger late positivity emerged to old faces in a later latency range (Ebner et al., 2011). Therefore, in the present study we compared the ERPs to stimuli of the same age in the roles of deviant and standard (inverse control procedure). Face processing depends on the orientation of the photographs: upside-down presentation decreases the effectiveness of face-specific processing (Yin, 1969; for a review see Rossion, 2009). Low-level visual differences are preserved in upside-down photographs; therefore, vMMN differences between original and upside-down presentation argue against a role of age-related low-level feature differences. Accordingly, we did not expect a deviant minus standard ERP difference for upside-down faces.
In summary, our main goal was to study the possibility of automatic registration of age and to investigate age related sensitivity differences by using the visual mismatch negativity. We compared a younger and an older group of participants in a passive oddball paradigm with sequences of young standard – old deviant and old standard – young deviant photographs. According to results of previous vMMN studies with facial features, we expected the appearance of a negative deviant minus standard difference potential (vMMN) over the posterior locations within the 200–400 ms post-stimulus latency range and we expected to find an OAB, a larger deviant minus standard ERP difference for photographs of models of the same age as that of the participants.
Materials and Methods
Twenty older (60–75 years) participants were selected from a larger pool of available participants. This selection was independent of any deviant minus standard ERP difference; the criterion was the presence of discernible P1 and N1/N170 exogenous components. In the younger (18–30 years) group, seven participants were excluded from a starting sample of 27 because they had no discernible exogenous components. This way there were 20 participants in each age group (younger adults: 10 women; mean age: 22.0 years, SD = 2.34 years; older adults: 11 women; mean age: 68.45 years, SD = 3.62 years). Cognitive functions were measured with four subtests (Similarities, Digit Span, Matrix Reasoning, and Digit Symbol-Coding) of the Hungarian version of the WAIS-IV (Rózsa et al., 2010). The aggregated mean scores were 43.65 (SD = 5.85) in the younger group and 52.45 (SD = 8.31) in the older group. All participants were right-handed, had normal or corrected-to-normal vision (measured with a Hungarian version of the Snellen chart), and were free of any neurological or psychiatric disease. Older adults were paid for participation. Younger adults participated for course credit, except for two paid participants who were no longer college students. Written informed consent was obtained from all participants prior to the experimental procedure.
The study was conducted in accordance with the Declaration of Helsinki and approved by the United Ethical Review Committee for Research in Psychology in Hungary (EPKEB).
Stimuli and Procedure
The stimuli were presented on a 24″ LCD monitor (Asus VS229na, 60-Hz refresh rate) on a gray (44.48 cd/m2) background at a viewing distance of 1.44 m. ERP-related stimuli consisted of black and white photographs of 16 young and 16 old male models taken from the database constructed by Minear and Park (2004). Using Adobe Photoshop CS3 Extended 10.0 (Adobe Systems Inc. San Jose, CA, United States) the photographs were converted to grayscale (8 bit) and inserted onto a gray background. Each stimulus screen consisted of images of four different individuals, either four young male faces or four old male faces. The photographs appeared on the upper-left, upper-right, lower-left, and lower-right sides from the center of the screen. The average luminance of the faces was 62 cd/m2 (SE = 1.2 cd/m2). The size of the images was 260 × 360 pixels (2.9° × 4.0°). The center of each image was at a 2.7° horizontal and 2.7° vertical viewing angle from the center of the screen. Stimulus duration was 150 ms, the inter-stimulus intervals were between 366 and 416 ms with a jitter in steps of 16.67 ms.
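For readers checking the display geometry, the visual angle of a stimulus of physical size s at viewing distance d is 2·arctan(s/2d). A minimal sketch (the ≈7.3 cm image width is our back-calculated assumption; the text reports only pixel sizes and degrees):

```python
import math

def visual_angle_deg(size_m: float, distance_m: float) -> float:
    """Visual angle (in degrees) subtended by a stimulus of physical size
    `size_m` viewed from `distance_m`: 2 * atan(size / (2 * distance))."""
    return math.degrees(2.0 * math.atan(size_m / (2.0 * distance_m)))

# An image ~7.3 cm wide at a 1.44 m viewing distance subtends ~2.9 degrees,
# matching the reported horizontal extent (the 7.3 cm figure is our assumption).
width_deg = visual_angle_deg(0.073, 1.44)
```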
There were four conditions in the experiment, presented in separate blocks (i.e., inverted and upright faces were presented in separate sequences). Photographs were presented either in the original position or inverted (Position: upright, inverted), and either the photographs of young or of old models were the deviant stimuli (Photographs: young, old). In the sequences 20% of the stimuli were deviants. The order of the conditions was counterbalanced across participants. There were 400 stimuli (320 standards and 80 deviants) within a condition. The presentation order of the models was random, with the restriction that a photograph of the same model was never presented on consecutive trials, that is, the faces changed trial-by-trial. (Within a condition, the photograph of a given model was repeated 80 times as a standard or 20 times as a deviant.)
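The sequence constraints described above (20% deviants, no model repeated on consecutive trials, inter-stimulus interval jittered in one-frame steps at 60 Hz) can be sketched as follows. The function and its parameters are hypothetical, and for brevity one model index is drawn per trial, whereas each actual stimulus screen contained four faces:

```python
import random

def make_oddball_sequence(n_trials=400, p_deviant=0.2, n_models=16, seed=1):
    """Sketch of one condition's trial list: 80% standards, 20% deviants,
    same model never shown on consecutive trials (hypothetical helper)."""
    rng = random.Random(seed)
    n_dev = round(n_trials * p_deviant)
    roles = ['deviant'] * n_dev + ['standard'] * (n_trials - n_dev)
    rng.shuffle(roles)
    trials, prev = [], None
    for role in roles:
        model = rng.randrange(n_models)       # which of the 16 models to show
        while (role, model) == prev:          # no model repeats back-to-back
            model = rng.randrange(n_models)
        # ISI jittered in one-frame (16.67 ms) steps at 60 Hz, ~366-417 ms
        isi_ms = rng.choice([22, 23, 24, 25]) * 1000.0 / 60.0
        trials.append((role, model, isi_ms))
        prev = (role, model)
    return trials
```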
The task-relevant stimuli appeared on the central area of the screen and consisted of two disks. A red disk served as a fixation point (0.19° visual angle), and a green disk (0.38°) made horizontal pseudorandom movements around the red disk. The participant’s task was to keep the green disk as close to the fixation point as possible using the S (left) and É (right) keys of the keyboard. Errors occurred when the distance of the two disks exceeded 0.77° in either direction. In case of an error, the color of the green disk changed to blue providing online visual feedback. Performance (the sum of the errors in one block) was reported on the screen at the end of each block. Figure 1 shows examples of the stimulus display. The experiment started with a practice block (252 trials) to ensure that the participant fully understood the task. In the practice sequence an equal number of young and old faces were mixed within the sequence. EEG was not recorded in this block.
Figure 1. Examples of the stimulus display. The task field, with the target and the moving disk, is at the center.
Measurement of Brain Electric Activity
Electrophysiological recording was performed in an electrically and acoustically shielded room. Electrical brain activity was recorded from 32 locations according to the extended 10–20 system (BrainVision Recorder 1.21.0303, ActiChamp amplifier, Ag/AgCl active electrodes, EasyCap (Brain Products GmbH), sampling rate: 1000 Hz, DC–70 Hz online filtering). The ground electrode was placed on the forehead (AFz) and the reference electrode on the nose tip. Horizontal and vertical electrooculogram signals (HEOG and VEOG) were recorded with bipolar configurations between two electrodes (placed lateral to the outer canthi of the two eyes, and above and below the left eye, respectively). The EEG signal was bandpass filtered offline with a non-causal Kaiser-windowed finite impulse response filter (low-pass filter parameters: 30 Hz cutoff frequency, beta of 12.2653, 10 Hz transition bandwidth; high-pass filter parameters: 0.1 Hz cutoff frequency, 0.2 Hz transition bandwidth). Epochs ranging from −100 to 600 ms relative to stimulus onset were extracted for all deviants and for those standards that immediately preceded a deviant. The first 100 ms of each epoch served as the baseline. Epochs with a voltage change larger than 100 μV or smaller than 2 μV were considered artifacts and rejected from further processing. ERPs were calculated by averaging the extracted epochs (separately for standards and deviants, and for young and old faces). Difference waveforms were created by subtracting the ERPs to standards from the ERPs to deviants, separately for the two age categories of the models (inverse control procedure), i.e., deviant and standard responses to physically identical stimuli were compared (deviant old face vs. standard old face, and deviant young face vs. standard young face).
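The offline filtering and epoching steps can be approximated with standard SciPy tools. The sketch below uses the parameter values quoted in the text, while the helper names and the details of the rejection logic are our assumptions. (A Kaiser beta of ~12.2653 is what `scipy.signal.kaiserord` returns for ~120 dB stop-band attenuation at these transition bandwidths.)

```python
import numpy as np
from scipy.signal import kaiserord, firwin

FS = 1000.0  # Hz, sampling rate used in the study

def design_filters(ripple_db=120.0, fs=FS):
    """Kaiser-windowed FIR filters with the parameters given in the text;
    120 dB attenuation yields the reported beta of ~12.2653."""
    n_lp, beta_lp = kaiserord(ripple_db, 10.0 / (0.5 * fs))   # 10 Hz transition
    lp = firwin(n_lp, 30.0, window=('kaiser', beta_lp), fs=fs)
    n_hp, beta_hp = kaiserord(ripple_db, 0.2 / (0.5 * fs))    # 0.2 Hz transition
    n_hp += 1 - (n_hp % 2)                                    # high-pass needs odd length
    hp = firwin(n_hp, 0.1, window=('kaiser', beta_hp), pass_zero=False, fs=fs)
    return lp, hp

def extract_epochs(data, onsets, fs=FS):
    """data: (n_channels, n_samples) in uV; onsets: stimulus-onset samples.
    Epochs run -100..600 ms; the first 100 ms is the baseline; epochs whose
    voltage change exceeds 100 uV or stays below 2 uV are rejected."""
    pre, post = int(0.1 * fs), int(0.6 * fs)
    kept = []
    for t in onsets:
        ep = data[:, t - pre:t + post].astype(float)
        ep = ep - ep[:, :pre].mean(axis=1, keepdims=True)   # baseline correction
        ptp = ep.max(axis=1) - ep.min(axis=1)
        if (ptp > 100).any() or (ptp < 2).any():
            continue
        kept.append(ep)
    return np.stack(kept) if kept else np.empty((0, data.shape[0], pre + post))
```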
Analyses and Comparisons
P1 latency was measured at the POz and Oz locations as the largest positivity within the 60–130 ms range, and P1 amplitude was measured as the average of a ±10 ms range around the group-average peak. Amplitude and latency values were analyzed in repeated-measures ANOVAs with the between-group factor Group (younger, older) and the within-group factors Photograph (young, old), Stimulus (deviant, standard), and Position (upright, inverted). N1/N170 latency was measured at the PO7 and PO8 locations as the largest negative/smallest positive value in the 100–200 ms range, and N1/N170 amplitude was measured as the average of a ±10 ms range around the group-average peak. In the ANOVAs on these latencies and amplitudes, Location (left, right) was included as an additional factor. P2 latency (as the largest positive value) was measured within the 170–270 and 190–290 ms latency ranges (younger and older group, respectively), and its amplitude was measured as the average of a ±10 ms range around the group-average peak at the P7 and P8 locations. In these ANOVAs the between-group factor was Group (younger, older), and the within-group factors were Photograph (young, old), Stimulus (deviant, standard), Position (upright, inverted), and Location (left, right). We report here only age-related differences, because other aspects of the exogenous activity are beyond the scope of this study1.
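The two-step measurement rule, finding the peak in the group average and then averaging each participant's data in a ±10 ms window around that latency, can be sketched for a single channel as follows (a hypothetical helper, not the authors' code):

```python
import numpy as np

def peak_latency_and_amplitude(grand_avg, erps, tmin, tmax,
                               fs=1000.0, t0=-0.1, polarity=+1):
    """Locate the peak (largest positivity if polarity=+1, largest negativity
    if polarity=-1) of `grand_avg` within [tmin, tmax] seconds, then return
    that group latency (ms) and each participant's mean amplitude in a
    +/-10 ms window around it.  `erps`: (n_subjects, n_samples) single-channel
    data; `t0` is the epoch start relative to stimulus onset (here -100 ms)."""
    i0 = int(round((tmin - t0) * fs))
    i1 = int(round((tmax - t0) * fs))
    i_peak = i0 + int(np.argmax(polarity * grand_avg[i0:i1]))
    latency_ms = (i_peak / fs + t0) * 1000.0
    w = int(round(0.010 * fs))                       # +/-10 ms window
    amplitudes = erps[:, i_peak - w:i_peak + w + 1].mean(axis=1)
    return latency_ms, amplitudes
```

For the P1 at POz, for example, one would call this with `tmin=0.060, tmax=0.130, polarity=+1`; for the N1/N170 at PO7/PO8, `tmin=0.100, tmax=0.200, polarity=-1`.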
To explore the possibility of deviant minus standard differences, as a first step we calculated consecutive t-tests (with difference from zero as the null hypothesis) at the PO7, PO3, POz, PO4, PO8, O1, Oz, and O2 locations on the deviant minus standard difference potentials at all points within the 200–400 ms latency range, i.e., in the expected range of vMMN. As criteria we required significant t-values (p < 0.05) over at least two adjacent locations and 20 consecutive significant points (20 ms) per location. Afterward we investigated the difference potentials in two epochs, 230–270 and 330–370 ms, i.e., the middle parts of the 200–300 and 300–400 ms latency ranges. These investigations were conducted in a posterior ROI containing the PO7, PO3, POz, PO4, PO8, O1, Oz, and O2 locations, and only if there were significant results in the exploratory analyses. In the two epochs we calculated Benjamini-Hochberg-corrected t-tests, comparing the difference potentials to zero. These calculations were performed in Statistica 13 (TIBCO Software Inc.). In the case of tendencies toward deviant minus standard differences, we conducted Bayesian analyses (JASP Team, 2018) to assess the reliability of the null effects (these calculations were not planned a priori). We used the default prior option for the t-tests, a Cauchy distribution with the spread r set to 0.707. All tests were two-tailed2.
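The point-wise screening and the Benjamini-Hochberg correction are standard procedures; a sketch under the criteria given in the text (p < 0.05 point-wise, runs of at least 20 consecutive significant points; the function names are ours):

```python
import numpy as np
from scipy import stats

def pointwise_sig(diff_waves, alpha=0.05):
    """diff_waves: (n_subjects, n_channels, n_samples) deviant-minus-standard
    difference potentials within the 200-400 ms window.  Returns a boolean
    (n_channels, n_samples) mask of point-wise one-sample t-test significance
    against zero; runs of >= 20 consecutive points on at least two adjacent
    channels would then meet the criterion described in the text."""
    _, p = stats.ttest_1samp(diff_waves, 0.0, axis=0)
    return p < alpha

def longest_run(mask_1d):
    """Length of the longest run of True values in a 1-D boolean sequence."""
    best = cur = 0
    for v in mask_1d:
        cur = cur + 1 if v else 0
        best = max(best, cur)
    return best

def benjamini_hochberg(pvals, q=0.05):
    """Benjamini-Hochberg step-up procedure; returns a boolean rejection mask."""
    p = np.asarray(pvals, dtype=float)
    order = np.argsort(p)
    m = p.size
    below = p[order] <= q * np.arange(1, m + 1) / m
    reject = np.zeros(m, dtype=bool)
    if below.any():
        k = int(np.nonzero(below)[0].max())
        reject[order[:k + 1]] = True
    return reject
```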
In case of reliable differences between the ERPs to deviant and standard stimuli, we conducted a source analysis using the sLORETA method. These results, along with the applied calculations are presented in Supplementary Materials.
The number of errors (the circle outside the target field) was larger in the older group (36.1, SE = 11.9) than in the younger group (2.10, SE = 0.60), according to a Mann-Whitney test, p < 0.001. The task was thus easier for the younger group; nevertheless, as we observed, participants in the older group attempted to concentrate on the task.
As Figure 2 shows, the ERPs differed between the two age groups. In the older group, the ERP returned to the baseline following the P1 component, and the N1 component was followed by the P2. In the younger group, N1 did not reach the baseline, because the negativity was superimposed on a positive wave, and this positivity peaked as the P2. Table 1 shows the latency and amplitude values of these components.
Figure 2. Event-related potentials in the younger and older groups to upright and inverted (-i) photographs of young and old models. For illustrative reasons the posterior ROI is divided into left (PO7, PO3, O1), middle (POz, Oz), and right (PO8, PO4, O2) parts.
Table 1. Mean amplitude (μV) and latency (ms) values of the P1, N1, and P2 components for upright and inverted photographs in the younger and older groups to the standard stimuli. P1 was measured at POz, N1 was measured at PO8 and P2 was measured at P7 (S.E.M. in parenthesis).
On the P1 latency values we obtained a significant main effect of Group, F(1,38) = 14.66, ηp2 = 0.28, p < 0.001, showing shorter P1 latency in the older group. For the P1 amplitude, despite the apparent difference, we obtained no significant age-related effect. For the N1 latency we obtained no age-related differences. It is worth noting that the latencies were below 150 ms, which is shorter than the usual N170 latency. As is evident from Figure 2, N1 amplitude was larger in the older group; accordingly, this difference was significant, F(1,38) = 11.42, ηp2 = 0.23, p < 0.01. P2 latency was longer in the older group, F(1,38) = 33.60, ηp2 = 0.47, p < 0.001, and P2 amplitude was larger in the younger group, F(1,38) = 10.15, ηp2 = 0.21, p = 0.003.
In the inverted condition the difference potentials failed to pass the criteria of the exploratory analysis; therefore we did not analyze this condition further. In the younger group the deviant minus standard difference just failed the criteria (for the photographs of young models, there were negativities of 28 and 16 ms long epochs at the O2 and Oz locations, respectively, in the 200–300 ms range); therefore we further analyzed the earlier range in this age group. In the older group significant negativity emerged within the 343–374 ms latency range at all locations for the photographs depicting old models.
Figure 3 shows the difference potentials, and Figure 4 shows the surface distribution of the difference potentials in the 230–270 and 330–370 ms ranges to upright photographs in the two age-groups for the two ages of models. Table 2 shows the mean amplitude values of the above ranges. In the t-tests significant differences appeared in the older group in the 330–370 ms range for the photographs of old models, t(19) = 3.76, d = 0.79, p < 0.05 (Benjamini-Hochberg corrected), and there was a tendency for the negativity for young models t(19) = 2.05, d = 0.46, p < 0.06 (uncorrected). In the younger group we obtained a tendency of young deviant-related negativity in the 230–270 ms range, t(19) = 1.85, d = 0.41, p < 0.08 (uncorrected). No other comparison approached significance.
Figure 3. Deviant minus standard difference potentials in the younger and older groups to upright photographs of young and old models. For illustrative reasons the posterior ROI is divided into left (PO7, PO3, O1), middle (POz, Oz), and right (PO8, PO4, O2) parts.
Figure 4. Surface distribution of the deviant minus standard difference potentials in the 230–270 and 330–370 ms latency ranges in the younger and older groups to upright photographs of young and old models.
Table 2. Mean amplitude of the difference potentials (μV) in the younger and older groups in the 230–270 and 330–370 ms ranges to upright photographs of young and old models (S.E.M. in parenthesis).
In the Bayesian analyses we obtained strong evidence for the negative difference potential in the older group for upright old models in the 330–370 ms range (BF10 = 15.93). In the same range only anecdotal evidence appeared for young models (BF10 = 1.31). In the younger group the apparent negativity for young models in the 230–270 ms range was unreliable (BF10 = 0.97).
The aim of the present study was to investigate the possibility of automatic identification of the models’ age in photographs. To this end, in a passive oddball sequence of photographs the models’ age in some photographs (deviants) differed from the frequent age of the models (standards). We investigated a group of younger and a group of older participants with deviant photographs of old and young models, and expected deviant minus standard event-related activity, the visual mismatch negativity (vMMN). As a specific expectation, we anticipated different effects for own-age vs. other-age deviancies.
Reliable deviant minus standard negativity (using both traditional and Bayesian methods) appeared only in the older group, to upright photographs of old models. This difference emerged in the 330–370 ms latency range, and it can be identified as vMMN. Although there was a tendency toward a similar posterior negativity to photographs of young models, the above results hint at an own-age effect, i.e., increased sensitivity to infrequent photographs of faces of an age similar to that of the participants. Another tendency in the younger group, a deviant minus standard difference for photographs of young models (in the 230–270 ms range), does not contradict the possibility of larger sensitivity to own-age faces. While the results in the older group corresponded to our expectation, as one of the reviewers noted, another line of reasoning leads to a different expectation: if participants have higher sensitivity to same-age faces, then it is likely that they form a more robust standard representation for same-age standards and, as a result, different-age deviants elicit a greater vMMN. However, we obtained no results in this direction.
Visual mismatch negativity to face-related stimuli has been reported in various post-stimulus latency ranges. The 330–370 ms range is relatively late, but it is within the range reported in previous studies (e.g., Susac et al., 2004, 2010; Gayle et al., 2012; Kimura, 2012; Vogel et al., 2015) and also within the vMMN range for other complex stimuli such as right vs. left hands (Stefanics and Czigler, 2012). Because the effect depended on the position of the photographs (upright vs. inverted), it seems to reflect holistic face processing rather than low-level physical differences (e.g., Yin, 1969; Maurer et al., 2002; for a review see Rossion, 2009). Our inverse control method, i.e., the comparison of faces of identical age in the roles of deviant and standard, underscores this conclusion.
Using a similar method (four photographs in eccentric positions), Stefanics et al. (2012) obtained a much more robust vMMN to emotional deviancy, suggesting that facial age is a less salient characteristic than facial emotion. The sensitivity to deviant photographs in the older group was an unexpected result and deserves discussion. As a specificity of the present design, four photographs were presented at eccentric locations, and the task at the center of the screen required continuous fixation on the task field. This arrangement required stronger focal attention than other studies in the field of age-related vMMN differences. As a possibility, younger participants concentrated more effectively on the task field, e.g., they were more efficient in inhibiting the task-irrelevant part of the visual field. On the one hand, this explanation corresponds to the compromised inhibitory processes reported in some fields of aging research (e.g., Hasher and Zacks, 1988), the larger effect of age-related distraction (e.g., Karthaus et al., 2020), and the increased ERP effects of irrelevant stimuli (Kojouharova et al., 2020). On the other hand, spatial attention is relatively preserved in the elderly (for a discussion see Lawrence et al., 2018); as an example, in the flanker task there is no robust age-related difference (De Bruin and Della Sala, 2018). Furthermore, the less effective processing of events appearing at parafoveal regions in older participants also argues against the above possibility. As an example, younger participants outperformed older participants in the detection of motion direction in parafoveal areas (Park et al., 2020). However, according to some results, irrelevant stimuli outside the focus of attention have larger effects in older adults (Porter et al., 2012; Tsvetanov et al., 2013). As for vMMN research, in a recent study with younger participants File and Czigler (2019) obtained considerable spatial attention effects on vMMN.
In the only study with complex stimuli (meaningful vs. meaningless letter strings; Gaál et al., 2017) the advantage of older participants was due to the longer aftereffect of stimulus appearance. Longer aftereffect may facilitate the more elaborate processing of stimuli. The relatively long vMMN latency supports this assumption. The less efficient filtering of the task-irrelevant stimuli together with the possible advantage in stimulus coding seems to be a favorable condition for our older group for the emergence of vMMN.
As the more specific aim of the present study, the investigation of own-age bias (OAB) in the field of automatic change detection, in the older group we obtained positive results. In this group the magnitude of the reliable vMMN to photographs of old models was similar to the vMMN amplitude in younger groups to emotional face deviants (e.g., Chang et al., 2010; Wang et al., 2014; Sel et al., 2016). As an apparent controversy, in some studies OAB was less pronounced or even absent in older participants (e.g., Harrison and Hole, 2009; Wiese et al., 2012). However, our automatic change-detection procedure is different from the recognition paradigm of OAB studies, apart from a methodological similarity of a certain task that required task-irrelevant coding of facial age (Randall et al., 2012; Neumann et al., 2014). However, even in these studies participants had to attend to other aspects of the faces (e.g., gender, aesthetic value). As results on object-related attention (Duncan, 1984; Scholl, 2001) indicate, even if the ages of the models were task-irrelevant, faces were not “unattended.” On the contrary, in the present study the faces were outside the focus of attention, therefore the faces were not only task-irrelevant, but they were also “unattended.” As the results of the present study show, in the age group with relevant vMMN (i.e., the older group), photographs of their own age were automatically registered as deviant stimuli among the photographs of models of other ages. This way our results show that OAB has a component of automatic sensitivity. On a theoretical (but speculative) level, vMMN is considered as an index of predictive coding mechanism (Stefanics et al., 2014). According to this account, the representation of incoming stimuli is compared to the model of expected events. In case of mismatch, an error signal is compared to gradually updated models throughout a cascade of processes. As Hugenberg et al. 
(2010) proposed, other-age photographs are processed only at a categorical level, whereas for own-age photographs there is an attempt at processing at the individual level. This attempt at deeper processing may contribute to a larger discrepancy (surprise) effect and, accordingly, to stronger activity of the match-mismatch mechanism.
Besides the deviant-related ERP differences, we obtained robust age-related differences in the exogenous ERP activities (P1, N1, and P2): earlier P1 in the older group, and larger and earlier P2 in the younger group. Previous results on age-related P1 differences are equivocal. Čeponiené et al. (2008) obtained a smaller visual P1 in the older group, a difference that was especially large over the occipital regions. In contrast, after controlling for visual acuity (as in the present study), Daffner et al. (2013) obtained a larger P1 in older participants. Our P1 results can be interpreted as preserved early processing in the older group. It is important to note that the stimuli of the present study were human faces. P1 sensitivity to faces, especially to non-cropped photographs, has been reported earlier (e.g., Dering et al., 2011). However, unlike in some studies (e.g., Pesciarelli et al., 2011), we found no P1 amplitude difference between upright and inverted faces, indicating that in the present study P1 had no strong connection to a face-specific processing stage. Facial stimuli typically elicit the posterior N170 component (e.g., Bentin et al., 1996). The N1 component of the present study peaked earlier than the usual N170 latency. Furthermore (as in the case of P1), we obtained no N1 difference between upright and inverted faces, unlike the longer N170 latency to inverted faces reported by Rossion et al. (2000). In an earlier study with similar stimulus presentation (four photographs at the four corners of the visual field), we obtained characteristic N170 components (Stefanics et al., 2012). As a marked difference between the studies, in Stefanics et al. (2012) the target stimuli appeared intermittently as a change of the fixation cross, whereas the tracking task of the present study required continuous attentive processing.
The strict attentional control might diminish the recordable negativity within the 100–200 ms latency range.
In the younger group, N1 was superimposed on a positivity that peaked in the usual P2 range. The function of the processes underlying P2 is unclear, but a role at different stages of face processing has been implicated (Itier and Taylor, 2004; Boutsen et al., 2006). Amplitude changes of P2 (P200) have appeared in studies investigating face-related decisions. Wiese et al. (2008) obtained an amplitude decrease for old faces in older participants and interpreted the difference as deeper or more extensive processing under stimulus ambiguity. Faerber et al. (2015) obtained a P2 (P200) amplitude reduction as a priming effect in younger participants, supporting this interpretation. In our passive task, i.e., without an intentional decision demand, we obtained no such amplitude difference.
In summary, sequences of photographs showing models of a particular age acquire a memory representation of this regularity, even if the photographs are task-irrelevant (unattended). Photographs violating this regularity (deviants) elicit the vMMN component. This process was more effective in older adults, especially for deviant photographs of old models. Exogenous visual components differed markedly between the younger and older groups, but little is known about the functional significance of these differences.
Data Availability Statement
The original contributions presented in the study are included in the article/Supplementary Material; further inquiries can be directed to the corresponding author/s.
Ethics Statement

The studies involving human participants were reviewed and approved by the United Ethical Review Committee for Research in Psychology (EPKEB). The patients/participants provided their written informed consent to participate in this study.
Author Contributions

PC, IC, and ZG designed the study. PC, BN, and ZG collected the data. PC, BP, PK, KS, and IC analyzed the data. PC, PK, and IC wrote the manuscript. All authors contributed to the article and approved the submitted version.
Funding

This study was supported by the National Research, Development and Innovation Office (Hungary), grant number: 119587.
Conflict of Interest
The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
The reviewer MZ declared a shared affiliation, with no collaboration, with several of the authors (PC and BN) to the handling editor at the time of the review.
Publisher’s Note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.
Acknowledgments

We thank Zsuzsanna D’Albini for her technical help.
Supplementary Material

The Supplementary Material for this article can be found online at: https://www.frontiersin.org/articles/10.3389/fnhum.2021.707702/full#supplementary-material
Footnotes

- ^ The complete data set is available in the Supplementary Material.
- ^ It should be noted that, in a formal sense, an ANOVA with factors of Group (younger, older), Photograph (young, old), Stimulus (deviant, standard), and ROI (parieto-occipital, occipital) corresponds to the design. However, due to the lack of significant differences in the younger group, the choice of a proper measurement latency range is equivocal. An ANOVA using the range of the significant difference between young and old photographs in the older group showed only a significant Group × Photograph interaction: F(1,38) = 4.13, p < 0.05, ηp2 = 0.10.
References

Astikainen, P., and Hietanen, J. K. (2009). Event-related potentials to irrelevant changes in facial expressions. Behav. Brain Funct. 5:30. doi: 10.1186/1744-9081-5-30
Bartlett, J. C., and Leslie, J. E. (1986). Aging and memory for faces versus single views of faces. Mem. Cognit. 14, 371–381. doi: 10.3758/bf03197012
Bentin, S., Allison, T., Puce, A., Perez, E., and McCarthy, G. (1996). Electrophysiological studies of face perception in humans. J. Cogn. Neurosci. 8, 551–565. doi: 10.1162/jocn.1996.8.6.551
Boutsen, L., Humphreys, G. W., Praamstra, P., and Warbrick, T. (2006). Comparing neural correlates of configural processing in faces and objects: an ERP study of the Thatcher illusion. Neuroimage 32, 352–367. doi: 10.1016/j.neuroimage.2006.03.023
Čeponiené, R., Westerfield, M., Torki, M., and Townsend, J. (2008). Modality-specificity of sensory aging in vision and audition: evidence from event-related potentials. Brain Res. 1215, 53–68. doi: 10.1016/j.brainres.2008.02.010
Chang, Y., Xu, J., Shi, B. N., Zhang, B., and Zhao, L. (2010). Dysfunction of processing task-irrelevant emotional faces in major depressive disorder patients revealed by expression-related visual MMN. Neurosci. Lett. 472, 33–77. doi: 10.1016/j.neulet.2010.01.050
Daffner, K. R., Haring, A. E., Alpert, B. R., Zhuravleva, T. Y., Mott, K. K., and Holcomb, P. J. (2013). The impact of visual acuity on age-related differences in neural markers of early visual processing. Neuroimage 15, 127–136. doi: 10.1016/j.neuroimage.2012.10.089
De Bruin, A., and Della Sala, S. (2018). Effects of age on inhibitory control are affected by task-specific features. Q. J. Exp. Psychol. 71, 1219–1233. doi: 10.1080/17470218.2017.1311352
Dering, B., Martin, C. D., Moro, S., Pegna, A. J., and Thierry, G. (2011). Face-sensitive processes one hundred milliseconds after picture onset. Front. Hum. Neurosci. 5:93. doi: 10.3389/fnhum.2011.00093
Duncan, J. (1984). Selective attention and the organization of visual information. J. Exp. Psychol. Gen. 113, 501–517. doi: 10.1037/0096-3445.113.4.501
Durant, S., Sulykos, I., and Czigler, I. (2017). Automatic detection of orientation variance. Neurosci. Lett. 658, 43–47. doi: 10.1016/j.neulet.2017.08.027
Ebner, N. C., He, Y., Fichtenholtz, H. M., McCarthy, G., and Johnson, M. K. (2011). Electrophysiological correlates of processing faces of younger and older individuals. Soc. Cogn. Affect. Neurosci. 6, 526–535. doi: 10.1093/scan/nsq074
Faerber, S. J., Kaufmann, J. M., and Schweinberger, S. R. (2015). Early temporal negativity is sensitive to perceived (rather than physical) facial identity. Neuropsychologia 75, 132–142. doi: 10.1016/j.neuropsychologia.2015.05.023
File, D., and Czigler, I. (2019). Automatic detection of violations of statistical regularities in the periphery is affected by the focus of spatial attention: a visual mismatch negativity study. Eur. J. Neurosci. 49, 1348–1356.
Gaál, Z. A., Bodnár, F., and Czigler, I. (2017). When elderly outperform young adults – integration in vision revealed by the visual mismatch negativity event-related component. Front. Aging Neurosci. 9:5. doi: 10.3389/fnagi.2017.00015
Gayle, L. C., Gal, D. E., and Kieffaber, P. D. (2012). Measuring affective reactivity in individuals with autism spectrum personality traits using the visual mismatch negativity event-related brain potential. Front. Hum. Neurosci. 6:334. doi: 10.3389/fnhum.2012.00334
Harrison, V., and Hole, G. (2009). Evidence for a contact-based explanation of the own-age bias in face recognition. Psychon. Bull. Rev. 16, 264–269. doi: 10.3758/pbr.16.2.264
Hasher, L., and Zacks, R. T. (1988). “Working memory, comprehension, and aging: a review and a new view,” in The Psychology of Learning and Motivation, ed. G. Bower (San Diego, CA: Academic Press), 193–225. doi: 10.1016/s0079-7421(08)60041-9
He, Y., Ebner, N., and Johnson, M. K. (2011). What predicts the own-age bias in face recognition? Soc. Cogn. 29, 97–109. doi: 10.1521/soco.2011.29.1.97
Hugenberg, K., Young, S. G., Bernstein, M. J., and Sacco, D. F. (2010). The categorization-individuation model: an integrative account of the other-race recognition deficit. Psychol. Rev. 117, 1168–1187. doi: 10.1037/a0020463
Itier, R. J., and Taylor, M. J. (2004). N170 or N1? Spatiotemporal differences between object and face processing using ERPs. Cereb. Cortex 14, 132–142. doi: 10.1093/cercor/bhg111
JASP Team (2018). JASP (Version 0.9.2) [Computer Software].
Karthaus, M., Wascher, E., Falkenstein, M., and Getzmann, S. (2020). The ability of young, middle-aged and older drivers to inhibit visual and auditory distraction in a driving simulator task. Transp. Res. F 68, 272–284. doi: 10.1016/j.trf.2019.11.007
Kecskés-Kovács, K., Sulykos, I., and Czigler, I. (2013b). Visual mismatch negativity is sensitive to symmetry as a perceptual category. Eur. J. Neurosci. 37, 662–667. doi: 10.1111/ejn.12061
Kecskés-Kovács, K., Sulykos, I., and Czigler, I. (2013a). Is it a face of a woman or a man? Visual mismatch negativity is sensitive to gender category. Front. Hum. Neurosci. 7:532. doi: 10.3389/fnhum.2013.00532
Kimura, M. (2012). Visual mismatch negativity and unintentional temporal context-based prediction in vision. Int. J. Psychophysiol. 83, 144–155. doi: 10.1016/j.ijpsycho.2011.11.010
Kimura, M., Schröger, E., and Czigler, I. (2011). Visual mismatch negativity and its importance in visual cognitive sciences. Neuroreport 22, 669–673. doi: 10.1097/wnr.0b013e32834973ba
Kojouharova, P., Gaál, Z. A., Nagy, B., and Czigler, I. (2020). Age effects on distraction in a visual task requiring fast reactions: an event-related potential study. Front. Aging Neurosci. 12:596047. doi: 10.3389/fnagi.2020.596047
Kovarski, K., Latinus, M., Charpentier, J., Clery, H., Roux, S., Houy-Durand, E., et al. (2017). Expression related vMMN: disentangling emotional from neutral change. Front. Hum. Neurosci. 11:18. doi: 10.3389/fnhum.2017.00018
Lawrence, R. K., Edwards, M., and Goodhew, S. C. (2018). Changes in the spatial spread of attention with ageing. Acta Psychol. 188, 188–199. doi: 10.1016/j.actpsy.2018.06.009
Li, X., Lu, Y., Sun, G., Gao, L., and Zhao, L. (2012). Visual mismatch negativity elicited by facial expressions: new evidence from the equiprobable paradigm. Behav. Brain Funct. 8:7. doi: 10.1186/1744-9081-8-7
Lorenzo-López, L., Amenedo, E., Pazo-Alvarez, P., and Cadaveira, F. (2004). Pre-attentive detection of motion direction changes in normal aging. Neuroreport 15, 2633–2636. doi: 10.1097/00001756-200412030-00015
Maurer, D., Le Grand, R., and Mondloch, C. J. (2002). The many faces of configural processing. Trends Cogn. Sci. 6, 255–260. doi: 10.1016/s1364-6613(02)01903-4
Minear, M., and Park, D. C. (2004). A lifespan database of adult facial stimuli. Behav. Res. Methods Instrum. Comput. 36, 630–633. doi: 10.3758/bf03206543
Mukudi, P. B. L., and Hills, P. J. (2019). The combined influence of the own-age, -gender, and -ethnicity biases on face recognition. Acta Psychol. 194, 1–6. doi: 10.1016/j.actpsy.2019.01.009
Neumann, M. F., End, E., Luttmann, S., Schweinberger, S. R., and Wiese, H. (2014). The own-age bias in face memory is unrelated to differences in attention – Evidence from event-related potentials. Cogn. Affect. Behav. Neurosci. 15, 180–194. doi: 10.3758/s13415-014-0306-7
Park, S., Nguyen, B. N., and McKendrick, A. M. (2020). Ageing elevates peripheral spatial suppression of motion regardless of divided attention. Ophthalmic Physiol. Opt. 40, 117–127. doi: 10.1111/opo.12674
Pesciarelli, F., Sarlo, M., and Leo, I. (2011). The time course of implicit processing of facial features: an event-related potential study. Neuropsychologia 49, 1154–1161. doi: 10.1016/j.neuropsychologia.2011.02.003
Porter, G., Wright, A., Tales, A., and Gilchrist, I. D. (2012). Stimulus onsets and distraction in younger and older adults. Psychol. Aging 27, 1111–1119. doi: 10.1037/a0028486
Randall, J. L., Tabernik, H. E., Aguilera, A. M., Anastasi, J. S., and Valk, K. V. (2012). Effects of encoding task on the own-age face recognition bias. J. Gen. Psychol. 139, 55–67. doi: 10.1080/00221309.2012.657266
Rhodes, M. G., and Anastasi, J. S. (2012). The own-age bias in face recognition: a meta-analytic and theoretical review. Psychol. Bull. 138, 146–174. doi: 10.1037/a0025750
Rossion, B. (2009). Distinguishing the cause and consequence of face inversion: the perceptual field hypothesis. Acta Psychol. (Amst.) 132, 300–312. doi: 10.1016/j.actpsy.2009.08.002
Rossion, B., Gauthier, I., Tarr, M. J., Despland, P., Bruyer, R., Linotte, S., et al. (2000). The N170 occipito-temporal component is delayed and enhanced to inverted faces but not to inverted objects: an electrophysiological account of face-specific processes in the human brain. Neuroreport 11, 69–74. doi: 10.1097/00001756-200001170-00014
Rózsa, S., Kő, N., Kuncz, E., Mészáros, A., and Mlinkó, R. (2010). WAIS-IV. Wechsler Adult Intelligence Scale – Fourth Edition: Administration and Scoring Manual [Tesztfelvételi és pontozási kézikönyv]. Hungarian adaptation: OS-Hungary Tesztfejlesztő Kft. Available online at: http://www.oshungary.hu/hu/tesztkatalogus-oshungary/wais-iv/
Scholl, B. J. (2001). Objects and attention: the state of the art. Cognition 80, 1–46. doi: 10.1016/s0010-0277(00)00152-9
Sel, A., Harding, R., and Tsakiris, M. (2016). Electrophysiological correlates of self-specific prediction error in the human brain. Neuroimage 125, 13–24. doi: 10.1016/j.neuroimage.2015.09.064
Sporer, S. L. (2001). Recognizing faces of other ethnic groups: an integration of theories. Psychol. Public Policy Law 7, 36–97. doi: 10.1037/1076-8971.7.1.36
Stefanics, G., Astikainen, P., and Czigler, I. (2015). Visual mismatch negativity (vMMN): a prediction error signal in the visual modality. Front. Hum. Neurosci. 8:1074. doi: 10.3389/fnhum.2014.01074
Stefanics, G., Csukly, G., Komlosi, S., Czobor, P., and Czigler, I. (2012). Processing of unattended facial emotions: a visual mismatch negativity study. Neuroimage 59, 3042–3049. doi: 10.1016/j.neuroimage.2011.10.041
Stefanics, G., and Czigler, I. (2012). Automatic prediction error responses to hands with unexpected laterality: an electrophysiological study. Neuroimage 63, 253–261. doi: 10.1016/j.neuroimage.2012.06.068
Stefanics, G., Kremlacek, J., and Czigler, I. (2014). Visual mismatch negativity: a predictive coding view. Front. Hum. Neurosci. 8:666. doi: 10.3389/fnhum.2014.00666
Stothart, G., Tales, A., and Kazanina, N. (2013). Evoked potentials reveal age-related compensatory mechanisms in early visual processing. Neurobiol. Aging 34, 1302–1308. doi: 10.1016/j.neurobiolaging.2012.08.012
Sulykos, I., Gaál, Z. A., and Czigler, I. (2017). Visual mismatch negativity to vanishing parts of objects in younger and older adults. PLoS One 12:e0188929. doi: 10.1371/journal.pone.0188929
Sulykos, I., Gaál, Z. A., and Czigler, I. (2018). Automatic change detection in older and younger women: a visual mismatch negativity study. Gerontology 64, 318–325. doi: 10.1159/000488588
Susac, A., Ilmoniemi, R. J., Pihko, E., Ranken, D., and Supek, S. (2010). Early cortical responses are sensitive to changes in face stimuli. Brain Res. 1346, 155–164. doi: 10.1016/j.brainres.2010.05.049
Susac, A., Ilmoniemi, R. J., Pihko, E., and Supek, S. (2004). Neurodynamic studies on emotional and inverted faces in an oddball paradigm. Brain Topogr. 16, 265–268. doi: 10.1023/b:brat.0000032863.39907.cb
Tales, A., Troscianko, T., Wilcock, G. K., Newton, P., and Butler, S. R. (2002). Age-related changes in the preattentional detection of visual change. Neuroreport 13, 969–972. doi: 10.1097/00001756-200205240-00014
Tsvetanov, K. A., Mevorach, C., Allen, H., and Humphreys, G. W. (2013). Age-related differences in selection by visual saliency. Atten. Percept. Psychophys. 75, 1382–1394. doi: 10.3758/s13414-013-0499-9
Valentine, T. (1991). A unified account of the effects of distinctiveness, inversion, and race in face recognition. Q. J. Exp. Psychol. 43A, 161–204. doi: 10.1080/14640749108400966
Vogel, B. O., Shen, C., and Neuhaus, A. H. (2015). Emotional context facilitates cortical prediction error responses. Hum. Brain Mapp. 36, 3641–3652. doi: 10.1002/hbm.22868
Wang, W., Miao, D., and Zhao, L. (2014). Automatic detection of orientation change of faces versus non-face objects: a visual MMN study. Biol. Psychol. 100, 71–78. doi: 10.1016/j.biopsycho.2014.05.004
Wiese, H., Komes, J., and Schweinberger, S. R. (2012). Daily-life contact affects the own-age bias and neural correlates of face memory in elderly participants. Neuropsychologia 50, 3496–3508. doi: 10.1016/j.neuropsychologia.2012.09.022
Wiese, H., Komes, J., and Schweinberger, S. R. (2013). Ageing faces in ageing minds: a review on the own-age bias in face recognition. Vis. Cogn. 21, 1337–1363. doi: 10.1080/13506285.2013.823139
Wiese, H., Schweinberger, S. R., and Hansen, K. (2008). The age of the beholder: ERP evidence of an own-age bias in face memory. Neuropsychologia 46, 2973–2985. doi: 10.1016/j.neuropsychologia.2008.06.007
Yin, R. K. (1969). Looking at upside-down faces. J. Exp. Psychol. 81, 141–145. doi: 10.1037/h0027474
Keywords: oddball, visual mismatch negativity (vMMN), facial stimuli, aging, own-age bias
Citation: Csizmadia P, Petro B, Kojouharova P, Gaál ZA, Scheiling K, Nagy B and Czigler I (2021) Older Adults Automatically Detect Age of Older Adults’ Photographs: A Visual Mismatch Negativity Study. Front. Hum. Neurosci. 15:707702. doi: 10.3389/fnhum.2021.707702
Received: 10 May 2021; Accepted: 21 July 2021;
Published: 20 August 2021.
Edited by: Juanita Todd, The University of Newcastle, Australia
Reviewed by: Motohiro Kimura, National Institute of Advanced Industrial Science and Technology (AIST), Japan
Marta Zimmer, Budapest University of Technology and Economics, Hungary
Copyright © 2021 Csizmadia, Petro, Kojouharova, Gaál, Scheiling, Nagy and Czigler. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
*Correspondence: Petra Csizmadia, email@example.com