<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<!DOCTYPE article PUBLIC "-//NLM//DTD Journal Publishing DTD v2.3 20070202//EN" "journalpublishing.dtd">
<article xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink" article-type="research-article">
<front>
<journal-meta>
<journal-id journal-id-type="publisher-id">Front. Aging Neurosci.</journal-id>
<journal-title>Frontiers in Aging Neuroscience</journal-title>
<abbrev-journal-title abbrev-type="pubmed">Front. Aging Neurosci.</abbrev-journal-title>
<issn pub-type="epub">1663-4365</issn>
<publisher>
<publisher-name>Frontiers Media S.A.</publisher-name>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="doi">10.3389/fnagi.2016.00187</article-id>
<article-categories>
<subj-group subj-group-type="heading">
<subject>Neuroscience</subject>
<subj-group>
<subject>Original Research</subject>
</subj-group>
</subj-group>
</article-categories>
<title-group>
<article-title>The Composite Effect Is Face-Specific in Young but Not Older Adults</article-title>
</title-group>
<contrib-group>
<contrib contrib-type="author" corresp="yes">
<name><surname>Meinhardt</surname> <given-names>G&#x000FC;nter</given-names></name>
<xref ref-type="author-notes" rid="fn001"><sup>&#x0002A;</sup></xref>
<uri xlink:href="http://loop.frontiersin.org/people/130097/overview"/>
</contrib>
<contrib contrib-type="author">
<name><surname>Persike</surname> <given-names>Malte</given-names></name>
<uri xlink:href="http://loop.frontiersin.org/people/228001/overview"/>
</contrib>
<contrib contrib-type="author">
<name><surname>Meinhardt-Injac</surname> <given-names>Bozana</given-names></name>
<uri xlink:href="http://loop.frontiersin.org/people/164726/overview"/>
</contrib>
</contrib-group>
<aff><institution>Department of Psychology, Johannes Gutenberg University Mainz</institution> <country>Mainz, Germany</country></aff>
<author-notes>
<fn fn-type="edited-by"><p>Edited by: Shin Murakami, Touro University California, USA</p></fn>
<fn fn-type="edited-by"><p>Reviewed by: Fiona Kumfor, Neuroscience Research Australia, Australia; Vidyaramanan Ganesan, Rowan University, USA</p></fn>
<fn fn-type="corresp" id="fn001"><p>&#x0002A;Correspondence: G&#x000FC;nter Meinhardt <email>meinharg&#x00040;uni-mainz.de</email></p></fn></author-notes>
<pub-date pub-type="epub">
<day>05</day>
<month>08</month>
<year>2016</year>
</pub-date>
<pub-date pub-type="collection">
<year>2016</year>
</pub-date>
<volume>8</volume>
<elocation-id>187</elocation-id>
<history>
<date date-type="received">
<day>20</day>
<month>03</month>
<year>2016</year>
</date>
<date date-type="accepted">
<day>20</day>
<month>07</month>
<year>2016</year>
</date>
</history>
<permissions>
<copyright-statement>Copyright &#x000A9; 2016 Meinhardt, Persike and Meinhardt-Injac.</copyright-statement>
<copyright-year>2016</copyright-year>
<copyright-holder>Meinhardt, Persike and Meinhardt-Injac</copyright-holder>
<license xlink:href="http://creativecommons.org/licenses/by/4.0/"><p>This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.</p></license>
</permissions>
<abstract><p>In studying holistic face processing across the life-span, few attempts have been made to separate face-specific from general aging effects. Here we used the complete design of the composite paradigm (Cheung et al., <xref ref-type="bibr" rid="B6">2008</xref>) with faces and novel non-face control objects (watches) to investigate composite effects in young (18&#x02013;32 years) and older adults (63&#x02013;78 years). We included cueing conditions that prompted either a narrow or a wide attentional focus when comparing the composite objects, and used both brief and relaxed exposure durations for stimulus presentation. Young adults showed large composite effects for faces, but none for watches. In contrast, older adults showed strong composite effects for both faces and watches, albeit larger for faces. Moreover, composite effects for faces were larger for the wide attentional focus in both age groups, while the composite effects for watches in older adults were alike for both cueing conditions. When attended and non-attended halves were incongruent, older adults showed equally low accuracy for both types of stimuli. Increasing presentation times improved performance strongly for congruent but not for incongruent composite objects. These findings suggest that the composite effects of older adults reflect a substantial decline in the ability to suppress irrelevant stimulus information, which takes effect both in non-face objects and in faces. In young adults, highly efficient attentional control largely precludes interference from irrelevant features in novel objects, so their composite effects reflect holistic integration specific to faces or objects of expertise.</p></abstract>
<kwd-group>
<kwd>age-related decline</kwd>
<kwd>holistic face perception</kwd>
<kwd>composite effect</kwd>
<kwd>interference</kwd>
<kwd>attentional control</kwd>
</kwd-group>
<counts>
<fig-count count="4"/>
<table-count count="5"/>
<equation-count count="4"/>
<ref-count count="55"/>
<page-count count="14"/>
<word-count count="10239"/>
</counts>
</article-meta>
</front>
<body>
<sec sec-type="intro" id="s1">
<title>1. Introduction</title>
<p>Many studies report age-related decline in tests of face recognition and face perception (Bartlett et al., <xref ref-type="bibr" rid="B1">1989</xref>; Fulton and Bartlett, <xref ref-type="bibr" rid="B13">1991</xref>; Crook and Larrabee, <xref ref-type="bibr" rid="B7">1992</xref>; Searcy et al., <xref ref-type="bibr" rid="B50">1999</xref>; Pfutze et al., <xref ref-type="bibr" rid="B37">2002</xref>; Chaby et al., <xref ref-type="bibr" rid="B4">2003</xref>; Hildebrandt et al., <xref ref-type="bibr" rid="B26">2010</xref>; Germine et al., <xref ref-type="bibr" rid="B22">2011</xref>). However, since there is also decline in other cognitive domains that are necessarily involved in tests of face cognition, it is unclear whether this age-related decline concerns face-specific mechanisms, or rests on impairments of general spatial vision (Sekuler and Sekuler, <xref ref-type="bibr" rid="B51">2000</xref>), processing speed (Salthouse, <xref ref-type="bibr" rid="B47">1996</xref>), memory functions (Rajah and D&#x00027;Esposito, <xref ref-type="bibr" rid="B40">2005</xref>), or attentional control (Gazzaley et al., <xref ref-type="bibr" rid="B18">2005a</xref>; Georgiou-Karistianis et al., <xref ref-type="bibr" rid="B21">2006</xref>).</p>
<p>Several studies have found that older adults suffer from deficits in tasks that require top-down suppression and attentional control (Gazzaley et al., <xref ref-type="bibr" rid="B18">2005a</xref>,<xref ref-type="bibr" rid="B19">b</xref>, <xref ref-type="bibr" rid="B17">2008</xref>). In perceptual tasks with simultaneous presentation of target and distractor stimuli, it is in particular the ability to ignore irrelevant information that suffers from aging (De Fockert et al., <xref ref-type="bibr" rid="B9">2009</xref>; Quigley et al., <xref ref-type="bibr" rid="B39">2010</xref>; Schmitz et al., <xref ref-type="bibr" rid="B48">2010</xref>; Hanring et al., <xref ref-type="bibr" rid="B24">2013</xref>; Geerligs et al., <xref ref-type="bibr" rid="B20">2014</xref>). The loss of attentional control corresponds to the frontal lobe hypothesis of aging (West, <xref ref-type="bibr" rid="B55">1996</xref>), since divided attention, attentional and executive control, and working memory function have been found to be mediated by frontal brain areas (Goldman-Rakic, <xref ref-type="bibr" rid="B23">1995</xref>; Cabeza et al., <xref ref-type="bibr" rid="B3">1997</xref>; Fink et al., <xref ref-type="bibr" rid="B11">1997</xref>; Rajah and D&#x00027;Esposito, <xref ref-type="bibr" rid="B40">2005</xref>; Prakash et al., <xref ref-type="bibr" rid="B38">2009</xref>).</p>
<p>Recent psychophysical studies reported no age-related decline for the composite face effect, a common index of holistic face processing (Konar et al., <xref ref-type="bibr" rid="B27">2013</xref>; Meinhardt-Injac et al., <xref ref-type="bibr" rid="B33">2014a</xref>). This led to the conclusion that holistic processing is preserved, or is even a preferred vision mode, at mature ages (ibid.). Evidence for intact holistic processing of faces is at odds with findings from tests of the ability to judge spatial-configural changes in faces, which is thought to be closely related to, or even an integral part of, holistic face processing (Rossion, <xref ref-type="bibr" rid="B46">2008</xref>). Several studies showed that older adults had difficulty recognizing two faces as different when the spatial distances between facial features were manipulated (Chaby et al., <xref ref-type="bibr" rid="B5">2011</xref>; Meinhardt-Injac et al., <xref ref-type="bibr" rid="B35">2015</xref>), which indicates a loss of spatial-configural processing of faces at mature ages (Daniel and Bentin, <xref ref-type="bibr" rid="B8">2012</xref>). Using a face categorization task, Schwarzer et al. (<xref ref-type="bibr" rid="B49">2010</xref>) found that older adults did not prefer holistic to feature-based strategies. The overall picture of face processing in later adulthood is thus somewhat mixed, with some studies supporting that face-specific abilities are maintained, while others report age-related decline in core capabilities of face processing (see also Hildebrandt et al., <xref ref-type="bibr" rid="B26">2010</xref>, <xref ref-type="bibr" rid="B25">2011</xref>).</p>
<p>The maintenance of the composite face effect at advanced ages deserves a second look, since the experimental measurement of the composite effect relies on the assumption that the observer has intact capabilities of attentional control, a domain that was shown to undergo strong age-related decline (see above). Generally, all experimental paradigms used to test whether objects are processed holistically or in a piecemeal manner share the common characteristic that holistic integration is concluded from the inability of the observer to judge a subset of object features (the attended or target features) independently of other object features (the unattended or context features; see Maurer et al., <xref ref-type="bibr" rid="B30">2002</xref> for an overview). Accordingly, holistic processing may be conceived of as a failure to selectively attend to object parts (Richler et al., <xref ref-type="bibr" rid="B45">2008</xref>; Richler and Gauthier, <xref ref-type="bibr" rid="B42">2014</xref>). In later versions of the composite face paradigm, holistic integration was concluded from the performance difference between matching face halves in congruent and incongruent target to no-target relationships (the congruency effect, CE), where only one half has to be attended and upper and lower halves either agree (congruent) or disagree (incongruent) with respect to target face identity (Gauthier and Bukach, <xref ref-type="bibr" rid="B15">2007</xref>; Cheung et al., <xref ref-type="bibr" rid="B6">2008</xref>). Only if the observer is, in principle, able to selectively attend to some object parts while ignoring others can the failure to do so with faces be interpreted as indicating a specific processing mode exclusively elicited by faces, or by objects of expertise after extensive training (Gauthier and Bukach, <xref ref-type="bibr" rid="B15">2007</xref>). Measurements of the composite effect for novel non-face objects have so far shown only moderate or no interaction between attended and non-attended object parts in healthy young adults (Farah et al., <xref ref-type="bibr" rid="B10">1998</xref>; Gauthier et al., <xref ref-type="bibr" rid="B16">2003</xref>; Richler et al., <xref ref-type="bibr" rid="B41">2009a</xref>, <xref ref-type="bibr" rid="B44">2011</xref>; Meinhardt et al., <xref ref-type="bibr" rid="B31">2014</xref>).</p>
<p>At mature ages, however, the composite effect has not yet been tested with non-face control objects. The finding of equal or even stronger face composite effects in older adults may therefore not reflect intact holistic integration, but could instead reflect a general age-related decline in the ability to suppress unattended object parts that provide conflicting target information. It is therefore mandatory to clarify whether the composite effect in older adults is face-specific, or also exists for novel non-face objects. Testing the face-specificity of the congruency effect by adding non-face control objects was a major aim of the present study.</p>
<p>In the methodological debate about the proper measurement of the composite effect, the design issue has become salient. As advocated by Richler and Gauthier (<xref ref-type="bibr" rid="B42">2014</xref>), it is important to use a fully balanced design with equal frequencies of same and different face half pairings (the &#x0201C;complete design,&#x0201D; CD) to prevent observers from showing response bias, i.e., a preference for either the &#x0201C;same&#x0201D; or the &#x0201C;different&#x0201D; response category, merely due to formal characteristics of the design. If the CD is used, the observation of response bias is informative, and can be attributed to characteristics of the observers, the stimulus material, and the experimental conditions. Recently, Meinhardt et al. (<xref ref-type="bibr" rid="B31">2014</xref>) suggested using the CD and analyzing response bias alongside accuracy in order to obtain a further clue to the origin of the congruency effect. Because in trials with only part-based agreement of the face halves (incongruent trials) the &#x0201C;wholes&#x0201D; formed by integrating upper and lower halves are <italic>always</italic> different in the CD, while there is parity of same and different wholes in congruent trials (see Figure <xref ref-type="fig" rid="F1">1</xref>), an observer who relies on representations of integrated whole objects rather than on independent representations of the two halves should respond &#x0201C;different&#x0201D; more frequently in incongruent than in congruent trials (&#x0201C;congruency bias,&#x0201D; CB). This prediction from holistic processing characterizes the congruency effect qualitatively: if the composite objects are processed holistically, then <italic>both</italic> a congruency effect and a CB should be observed. This was indeed found for faces in young adults (Gao et al., <xref ref-type="bibr" rid="B14">2011</xref>; Meinhardt et al., <xref ref-type="bibr" rid="B31">2014</xref>). A CE alongside no congruency bias would indicate part-based interference resulting from the inability to suppress the influence of the unattended halves, but no holistic integration. Characterizing the interaction among halves for faces and non-face control objects by means of the CE and the CB, for young and older adults, was a second major aim of this study.</p>
<fig id="F1" position="float">
<label>Figure 1</label>
<caption><p><bold>Overview of the complete design, according to Cheung et al. (<xref ref-type="bibr" rid="B6">2008</xref>)</bold>. The illustration shows the design for upper half matching. The dashed boxes mark the partial design as a subset of the complete design. In the partial design, &#x0201C;same&#x0201D; trials are always incongruent, agreeing in only the target halves, while &#x0201C;different&#x0201D; trials are always congruent, differing in both target and non-target halves. In the complete design, the number of same and different halve pairings is the same in congruent and in incongruent trials, and there is no confound of response alternative and congruency relation.</p></caption>
<graphic xlink:href="fnagi-08-00187-g0001.tif"/>
</fig>
<p>As outlined above, only if observers have intact attentional control can the failure to selectively attend to parts be attributed to a holistic processing strategy. The congruency effect should depend on the attentional constraints of a same/different discrimination task: if the observer knows from the beginning of the trial which halves, the upper or the lower, have to be compared, she/he can apply a narrow focus to the target parts, which should limit the influence of the irrelevant halves. If, on the other hand, the observer does not know the target half at the first composite image presentation, and is informed only later which halves are to be compared, she/he must encode the whole stimulus at study and try to narrow the focus at test. Hence, in the late cue condition, much more of the irrelevant halves is processed, which may interfere with the judgment about the target halves. As expected, face congruency effects increase in the late cue condition compared to the early cue condition (Meinhardt-Injac et al., <xref ref-type="bibr" rid="B33">2014a</xref>). However, it is unclear whether interference among non-face object parts depends on attentional focus conditions in the same way. Comparing the modulation of the CE by attentional focus conditions for faces and non-face objects can therefore give further valuable clues as to whether the interaction among object parts rests on the same or different mechanisms in the two object categories. We therefore added the early cue/late cue manipulation to the experiment.</p>
<p>Further important constraints for the composite effect derive from temporal conditions and task difficulty. Studies on the composite effect that included variation of presentation times have shown that composite effects are present from brief timings of about 50 ms in young adults (Richler et al., <xref ref-type="bibr" rid="B43">2009b</xref>). However, this has not been tested in older adults. Instead, most studies on holistic face perception used longer presentation times at which settled performance could be expected (Boutet and Faubert, <xref ref-type="bibr" rid="B2">2006</xref>; Konar et al., <xref ref-type="bibr" rid="B27">2013</xref>). We used both brief and relaxed presentation times to compare the temporal constraints of holistic processing in young and older adults. Further, brief presentation times are a means of increasing task difficulty considerably for older adults (Salthouse, <xref ref-type="bibr" rid="B47">1996</xref>). Recent studies revealed that older adults exhibit a strong overall &#x0201C;same&#x0201D; bias, which coincided with lower sensitivity (Meinhardt-Injac et al., <xref ref-type="bibr" rid="B33">2014a</xref>, <xref ref-type="bibr" rid="B35">2015</xref>). By varying presentation time we aimed at revealing how the composite effect and a potential response bias of older adults are linked to processing speed demands and higher task difficulty.</p>
<p>In this study we systematically compared face and non-face matching performance, composite effects, and bias for young and older adults, using the outlined variation of attentional and temporal conditions. The results give important clues with respect to potentially different origins of the composite effect in young and older adults.</p>
</sec>
<sec sec-type="materials and methods" id="s2">
<title>2. Materials and methods</title>
<sec>
<title>2.1. Experimental outline</title>
<p>We used a same/different forced choice task, which required matching of a composite study stimulus, presented for 800 ms, and a composite test stimulus, presented afterwards for one of four possible presentation times (34, 84, 250, 650 ms). Subjects were informed by a cue which halves, the upper or the lower ones, had to be compared. They decided by button press whether study and test stimulus agreed or disagreed in the target halves (upper or lower). In the early cue condition, a target cue marking the half to be attended, was shown with the study image. In the late cue condition, the cue appeared after the study image, together with its subsequent mask. The two cue conditions were run in separate experimental blocks. Separate experiments were done with faces and watches, in random sequence chosen for each subject.</p>
</sec>
<sec>
<title>2.2. Design</title>
<p>We employed the &#x0201C;complete design&#x0201D; (CD) of the composite task (Cheung et al., <xref ref-type="bibr" rid="B6">2008</xref>). In the CD, congruent and incongruent stimulus half pairings are balanced and, in contrast to the &#x0201C;partial&#x0201D; design, not confounded with response alternative (for details, see Richler and Gauthier, <xref ref-type="bibr" rid="B42">2014</xref>). The CD and the partial design are illustrated in Figure <xref ref-type="fig" rid="F1">1</xref>. In incongruent trials, the non-target halves disagree when the target halves agree (&#x0201C;same&#x0201D;-trial), and agree when the target halves disagree (&#x0201C;different&#x0201D;-trial). In congruent trials both the target halves and the non-target halves either agree (&#x0201C;same&#x0201D;-trial), or disagree (&#x0201C;different&#x0201D;-trial). As a result of just part-based agreement in incongruent trials, the wholes formed by integrating upper and lower halves are always different. In congruent trials, there is parity of same and different whole objects (see Figure <xref ref-type="fig" rid="F1">1</xref>). The number of congruent and incongruent trials, as well as of upper-half matching and lower-half matching trials, was the same. The study comprised 2 stimulus categories (face/non-face) &#x000D7; 2 congruency relations (congruent/incongruent) &#x000D7; 2 cueing conditions (early/late) &#x000D7; 4 presentation times (34, 84, 250, 650 ms) &#x0003D; 32 conditions, which were administered to each young and older adult. Hence Stimulus, Congruency, Cue and Presentation Time were repeated-measures (within-subjects) factors, while Age group was a between-subjects factor.</p>
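The factorial structure described above can be made concrete with a short sketch (a minimal illustration in Python; the level labels are ours, not taken from the study's materials):

```python
from itertools import product

# Within-subjects factors as listed in the Design section
stimulus = ["face", "watch"]                 # 2 stimulus categories
congruency = ["congruent", "incongruent"]    # 2 congruency relations
cue = ["early", "late"]                      # 2 cueing conditions
duration_ms = [34, 84, 250, 650]             # 4 presentation times

# Cartesian product of the factor levels: 2 x 2 x 2 x 4 = 32 conditions,
# each administered to every participant; Age group varies between subjects.
conditions = list(product(stimulus, congruency, cue, duration_ms))
print(len(conditions))  # 32
```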
</sec>
<sec>
<title>2.3. Stimuli</title>
<sec>
<title>2.3.1. Face stimuli</title>
<p>Face half stimuli were constructed from 20 pictures of male German and Swiss models, taken in a photo studio under controlled lighting conditions. Photos were frontal-view shots of the whole face. The original images were edited with Adobe Photoshop CS4 software to create face half sets. Photographs were first converted to 8-bit grayscale pictures and superimposed with an elliptical frame mask to obliterate all external facial features, such as hair, ears, or chin line. The elliptical cutouts were then split horizontally at the bridge of the nose, yielding 20 upper and 20 lower face halves. Each upper half was recombined with three lower halves to constitute a final set of 60 compound faces. The cutline between the face halves was hidden by a superimposed white bar 5 pixels in thickness. It was ensured that no upper face half was ever recombined with the lower half of the same original face, so no full original face was replicated in the experimental trials. Additionally, each of the 20 lower and upper halves appeared exactly three times in the final set of stimuli. Stimulus examples are shown in Figure <xref ref-type="fig" rid="F2">2</xref>.</p>
<fig id="F2" position="float">
<label>Figure 2</label>
<caption><p><bold>Stimulus examples for upper stimulus half comparison in incongruent trials (lower row of Figure <xref ref-type="fig" rid="F1">1</xref>), for faces (A) and watches (B)</bold>. The left composite stimulus pairs show same upper halves combined with different lower halves, the right ones show different upper halves combined with same lower halves. Note that the integrated wholes of both halves are different in both &#x0201C;same&#x0201D; and &#x0201C;different&#x0201D; trials.</p></caption>
<graphic xlink:href="fnagi-08-00187-g0002.tif"/>
</fig>
</sec>
<sec>
<title>2.3.2. Non-face stimuli</title>
<p>Twenty watches were sampled from internet sources, and selected such that they had high overall resemblance, showed the same time, and had no salient distinctive single features within the clock-face. The images were converted to grayscale and matched in lightness and contrast. The cutline for subdividing into upper and lower halves ran exactly through the midpoint of the clock-face. All external features were removed, and a circular frame identical for all stimuli was superimposed on the clock-face. Stimulus examples are shown in Figure <xref ref-type="fig" rid="F2">2</xref>. As for the faces, a final set of 60 composite watches was constructed.</p>
</sec>
</sec>
<sec>
<title>2.4. Subjects</title>
<p>Overall, 32 young adults and 28 older adults participated. All participants had normal or corrected-to-normal vision and reported normal neurological and psychiatric status. The young adults were undergraduate students; their mean age was 22.8 years (range 18&#x02013;32 years), and 69% were female. They received course credit points or payment for participation. The mean age of the older adults sample was 69.4 years (range 63&#x02013;78 years), and 53% were female. The older adults were recruited from a database of members of the &#x0201C;Studieren 50&#x0002B;&#x0201D; programme of the University of Mainz; accordingly, all had at least high-school level education. Two subjects were still employed, the others retired. All older adults lived independently in the Mainz area. They were paid for participation. The mini-mental state examination (MMSE; Folstein et al., <xref ref-type="bibr" rid="B12">1975</xref>) was used to evaluate mental status; all subjects passed the test with more than 27 of the 30 points.</p>
</sec>
<sec>
<title>2.5. Apparatus</title>
<p>The experiment was executed with Inquisit runtime units. Stimuli were displayed on NEC Spectra View 2040 TFT displays in 1280 &#x000D7; 1024 resolution at a refresh rate of 60 Hz. Screen mean luminance <italic>L</italic><sub>0</sub> was 100 cd/m<sup>2</sup> at a Michelson contrast of (<italic>L</italic><sub><italic>max</italic></sub> &#x02212; <italic>L</italic><sub><italic>min</italic></sub>)/(<italic>L</italic><sub><italic>max</italic></sub> &#x0002B; <italic>L</italic><sub><italic>min</italic></sub>) &#x0003D; 0.98 and a practically dark background (about 1.4 cd/m<sup>2</sup>). No gamma correction was used. The room was darkened so that the ambient illumination approximately matched the illumination on the screen. Stimulus size was 250 &#x000D7; 350 pixels (width &#x000D7; height). The stimuli were viewed binocularly at a distance of 70 cm. Subjects used a distance marker but no chin rest throughout the experiment. The subjects responded via an external key-pad (Cedrus RB-830 response pad).</p>
</sec>
<sec>
<title>2.6. Preparation and preliminary measurements</title>
<p>Preliminary measurements were taken, and former results for young and older adults (see Meinhardt et al., <xref ref-type="bibr" rid="B31">2014</xref>; Meinhardt-Injac et al., <xref ref-type="bibr" rid="B33">2014a</xref>) were used to guide parameter settings. The difficulty of matching watch stimuli was manipulated by exchanging stimulus objects until the young adults achieved a matching accuracy of 90% correct in congruent trials with early cue at stimulus durations of 250 ms. This matched the performance obtained with face stimuli fairly well. Previous results showed that older adults reached the 90% correct level with faces only at presentation times beyond 600 ms (Meinhardt-Injac et al., <xref ref-type="bibr" rid="B33">2014a</xref>). Because differential performance with the two stimulus classes is a potential age-related effect, we did not adjust the difficulty of watch matching by manipulating stimuli for the older adults group. The longest presentation time was set to 650 ms, since no further improvement was observed in previous testing of older adults even for longer durations. By adding the two brief timings of 34 and 84 ms to 250 and 650 ms, we expected to sample well both the rising and the saturating parts of the sensitivity vs. presentation time function, since this function is known to rise strongly over the first 100 ms and then to saturate gradually (see Richler et al., <xref ref-type="bibr" rid="B43">2009b</xref>).</p>
</sec>
<sec>
<title>2.7. Procedure</title>
<p>Subjects were instructed that only the cued halves had to be compared, but that the uncued halves could also agree or disagree. They were also instructed to judge as accurately as possible, and that there was no speed pressure for the response. The temporal order of events in a trial was: fixation mark (750 ms), blank (300 ms), study stimulus (800 ms), mask (400 ms), blank (800 ms), test stimulus (34, 84, 250, or 650 ms), mask (400 ms), and blank frame until response. In the early cue condition a rectangular bracket marking the target stimulus half was shown together with the study stimulus, and remained until the test stimulus was masked. In the late cue condition the cue presentation began with the mask of the study stimulus. Stimulus position jittered randomly within a region of &#x000B1;50 pixels around the center of the screen to preclude image region matching strategies between two subsequent stimulus presentations. Masks were constructed from scrambled 5 &#x000D7; 5 pixel blocks of the stimulus shown before. No feedback about correctness was given.</p>
<p>Young adults were made familiar with the task by responding to some randomly selected probe trials. Older adults were carefully prepared for the experiment. First, paper print examples of the stimulus pairings were explained to the subject. The experimenter displayed paper prints of 10 stimulus pairs, and asked participants to name the five pairs showing objects with the same upper halves and the five showing different upper halves. Subjects were given as much time as needed to label the 10 pairs. If errors occurred, the experimenter pointed to the wrongly labeled pairs and drew attention to just the halves to be compared. The first minutes at the computer were spent on congruent trials only, presented at the longest presentation time (650 ms), which all subjects could do with good accuracy. The subjects then responded to probe trials of the experiment with congruent and incongruent trials for about 8 min. After the preparation phase the experimental blocks started.</p>
<p>Two experiments were run, one with faces and one with watches. Each experiment comprised 16 conditions (see Design). Each condition was measured with 16 &#x0201C;same&#x0201D; and 16 &#x0201C;different&#x0201D; trials. Eight of these <italic>N</italic> &#x0003D; 16 replications were done with the upper half, and 8 with the lower half as the target. The 512 trials in total were subdivided into two blocks, 256 early cue and 256 late cue trials. Completing a block took about 20 min. Separated by a brief pause, the two blocks were administered on a single day, one with early cue and one with late cue, in random order across subjects. The two experiments were done on two consecutive days.</p>
</sec>
<sec>
<title>2.8. Performance measures and data analysis</title>
<p>Performance was assessed within the framework of the signal detection paradigm. Based on the relative frequencies for the two response categories in same and different trials, the sensitivity measure
<disp-formula id="E1"><label>(1)</label><mml:math id="M1"><mml:mrow><mml:mi>d</mml:mi><mml:mo>&#x02032;</mml:mo><mml:mo>=</mml:mo><mml:mi>z</mml:mi><mml:mo stretchy='false'>(</mml:mo><mml:mi>H</mml:mi><mml:mi>i</mml:mi><mml:mi>t</mml:mi><mml:mo stretchy='false'>)</mml:mo><mml:mo>&#x02212;</mml:mo><mml:mi>z</mml:mi><mml:mo stretchy='false'>(</mml:mo><mml:mi>F</mml:mi><mml:mi>A</mml:mi><mml:mo stretchy='false'>)</mml:mo></mml:mrow></mml:math></disp-formula>
and the estimate of the response criterion
<disp-formula id="E2"><label>(2)</label><mml:math id="M2"><mml:mrow><mml:mi>c</mml:mi><mml:mo>=</mml:mo><mml:mo>&#x02212;</mml:mo><mml:mfrac><mml:mrow><mml:mi>z</mml:mi><mml:mo stretchy='false'>(</mml:mo><mml:mi>H</mml:mi><mml:mi>i</mml:mi><mml:mi>t</mml:mi><mml:mo stretchy='false'>)</mml:mo><mml:mo>&#x0002B;</mml:mo><mml:mi>z</mml:mi><mml:mo stretchy='false'>(</mml:mo><mml:mi>F</mml:mi><mml:mi>A</mml:mi><mml:mo stretchy='false'>)</mml:mo></mml:mrow><mml:mn>2</mml:mn></mml:mfrac></mml:mrow></mml:math></disp-formula>
was calculated (see MacMillan and Creelman, <xref ref-type="bibr" rid="B29">2005</xref>, p. 8 and p. 29). The hit rate (Hit) was defined as the rate of correctly identifying same target halves, and the correct rejection rate (CR) as the rate of correctly identifying different target halves. The false alarm rate (FA) and the miss rate (Miss) were defined as the complementary rates to CR and Hit, respectively. Perfect or zero hit or false alarm rates were corrected before transforming to <italic>d</italic>&#x02032; by replacing the rate with <italic>p</italic> &#x0003D; 1 &#x02212; 1/(2<italic>N</italic>) or <italic>p</italic> &#x0003D; 1/(2<italic>N</italic>), respectively (see MacMillan and Creelman, <xref ref-type="bibr" rid="B29">2005</xref>, p. 8). For further analyses of the sensitivity measure, congruency effects were calculated as the difference measure <italic>CE</italic> &#x0003D; <italic>d</italic>&#x02032;(CC) &#x02212; <italic>d</italic>&#x02032;(IC). Here, congruent is abbreviated as CC (congruent composite), and incongruent as IC (incongruent composite). Congruency effects were also calculated for the response criterion according to <italic>CB</italic> &#x0003D; <italic>c</italic>(<italic>IC</italic>) &#x02212; <italic>c</italic>(<italic>CC</italic>) to measure the effect of congruency on response bias (see Section 1). Note that, with the given convention for defining the four events of the forced choice task, positive values of <italic>c</italic> indicate a &#x0201C;different&#x0201D; bias and negative values a &#x0201C;same&#x0201D; bias.</p>
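<p>As an illustration, the sensitivity and criterion computations of Equations (1) and (2), including the extreme-rate correction, can be sketched as follows. This is a minimal Python sketch; the function and variable names are ours, not part of the original analysis pipeline.</p>

```python
from statistics import NormalDist

_z = NormalDist().inv_cdf  # inverse of the standard normal CDF (z-transform)

def corrected_rate(count, n):
    """Proportion count/n, with perfect or zero rates replaced by
    1 - 1/(2N) or 1/(2N), respectively, before the z-transform."""
    if count == n:
        return 1 - 1 / (2 * n)
    if count == 0:
        return 1 / (2 * n)
    return count / n

def dprime_and_c(n_hits, n_same, n_fas, n_diff):
    """d' = z(Hit) - z(FA) (Eq. 1) and c = -(z(Hit) + z(FA))/2 (Eq. 2)."""
    hit = corrected_rate(n_hits, n_same)
    fa = corrected_rate(n_fas, n_diff)
    return _z(hit) - _z(fa), -(_z(hit) + _z(fa)) / 2

# Hypothetical condition with N = 16 same and 16 different trials,
# as in the design: 14 hits and 3 false alarms
d, c = dprime_and_c(14, 16, 3, 16)
```

The congruency effects then follow as plain differences of these per-condition values, CE = d&#x02032;(CC) &#x02212; d&#x02032;(IC) and CB = c(IC) &#x02212; c(CC).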
<p>Further, we provide a bias measure in terms of the error proportion of wrong &#x0201C;different&#x0201D; responses:
<disp-formula id="E3"><label>(3)</label><mml:math id="M3"><mml:mrow><mml:mi>q</mml:mi><mml:mo>=</mml:mo><mml:mfrac><mml:mrow><mml:mi>M</mml:mi><mml:mi>i</mml:mi><mml:mi>s</mml:mi><mml:mi>s</mml:mi></mml:mrow><mml:mrow><mml:mi>M</mml:mi><mml:mi>i</mml:mi><mml:mi>s</mml:mi><mml:mi>s</mml:mi><mml:mo>&#x0002B;</mml:mo><mml:mi>F</mml:mi><mml:mi>A</mml:mi></mml:mrow></mml:mfrac><mml:mo>.</mml:mo></mml:mrow></mml:math></disp-formula>
If <italic>q</italic> &#x0003D; 0.5, then both responses occur with equal likelihood. A ratio of <italic>q</italic> &#x0003E; 0.5 indicates a tendency to respond &#x0201C;different&#x0201D; while <italic>q</italic> &#x0003C; 0.5 indicates a preference toward &#x0201C;same&#x0201D; responses. To compare response preferences for congruent and incongruent trials we also calculated odds ratios for both types of errors, i.e.,
<disp-formula id="E4"><label>(4)</label><mml:math id="M4"><mml:mrow><mml:mi>O</mml:mi><mml:mi>R</mml:mi><mml:mo>=</mml:mo><mml:mfrac><mml:mrow><mml:mi>M</mml:mi><mml:mi>i</mml:mi><mml:mi>s</mml:mi><mml:mi>s</mml:mi><mml:mo>/</mml:mo><mml:mi>H</mml:mi><mml:mi>i</mml:mi><mml:mi>t</mml:mi></mml:mrow><mml:mrow><mml:mi>F</mml:mi><mml:mi>A</mml:mi><mml:mo>/</mml:mo><mml:mi>C</mml:mi><mml:mi>R</mml:mi></mml:mrow></mml:mfrac><mml:mo>.</mml:mo></mml:mrow></mml:math></disp-formula>
The odds ratio (Equation 4) indicates how much larger the odds are for wrong &#x0201C;different&#x0201D; responses compared to wrong &#x0201C;same&#x0201D; responses.</p>
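<p>In the same spirit, the bias measures of Equations (3) and (4) follow directly from the four event counts. A hedged sketch with illustrative names; note that with equal numbers of same and different trials per condition, as here, raw counts can stand in for rates.</p>

```python
def bias_measures(hit, miss, fa, cr):
    """q = Miss / (Miss + FA)  (Eq. 3): the proportion of errors that are
    wrong "different" responses; q > 0.5 signals a "different" tendency.
    OR = (Miss/Hit) / (FA/CR)  (Eq. 4): how much larger the odds of a wrong
    "different" response are than the odds of a wrong "same" response."""
    q = miss / (miss + fa)
    odds_ratio = (miss / hit) / (fa / cr)
    return q, odds_ratio

# Hypothetical event counts for one condition: 14 hits, 2 misses,
# 3 false alarms, 13 correct rejections
q, odds = bias_measures(hit=14, miss=2, fa=3, cr=13)
```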
<p>Both the <italic>d</italic>&#x02032; and the <italic>c</italic> measures were analysed with ANOVA.</p>
</sec>
</sec>
<sec sec-type="results" id="s3">
<title>3. Results</title>
<sec>
<title>3.1. Sensitivity measure</title>
<p>Figure <xref ref-type="fig" rid="F3">3</xref> shows <italic>d</italic>&#x02032; means as a function of presentation time for all experimental conditions. Generally, there were striking age-related differences in performance level, and its dependency on presentation time and stimulus category. Data analysis using ANOVA revealed significance of all main effects, i.e., age [<inline-formula><mml:math id="M5"><mml:msub><mml:mrow><mml:mi>F</mml:mi></mml:mrow><mml:mrow><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mn>1</mml:mn><mml:mo>,</mml:mo><mml:mn>58</mml:mn></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:mn>155</mml:mn><mml:mo>.</mml:mo><mml:mn>4</mml:mn><mml:mo>,</mml:mo><mml:mtext>&#x000A0;</mml:mtext><mml:mi>p</mml:mi><mml:mo>&#x0003C;</mml:mo><mml:mn>0</mml:mn><mml:mo>.</mml:mo><mml:mn>001</mml:mn><mml:mo>,</mml:mo><mml:msubsup><mml:mrow><mml:mi>&#x003B7;</mml:mi></mml:mrow><mml:mrow><mml:mi>p</mml:mi></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msubsup><mml:mo>=</mml:mo><mml:mn>0</mml:mn><mml:mo>.</mml:mo><mml:mn>73</mml:mn></mml:math></inline-formula>], stimulus [<inline-formula><mml:math id="M6"><mml:msub><mml:mrow><mml:mi>F</mml:mi></mml:mrow><mml:mrow><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mn>1</mml:mn><mml:mo>,</mml:mo><mml:mn>58</mml:mn></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:mn>37</mml:mn><mml:mo>.</mml:mo><mml:mn>0</mml:mn><mml:mo>,</mml:mo><mml:mi>p</mml:mi><mml:mo>&#x0003C;</mml:mo><mml:mn>0</mml:mn><mml:mo>.</mml:mo><mml:mn>001</mml:mn><mml:mo>,</mml:mo><mml:msubsup><mml:mrow><mml:mi>&#x003B7;</mml:mi></mml:mrow><mml:mrow><mml:mi>p</mml:mi></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msubsup><mml:mo>=</mml:mo><mml:mn>0</mml:mn><mml:mo>.</mml:mo><mml:mn>39</mml:mn></mml:math></inline-formula>], cue [<inline-formula><mml:math 
id="M7"><mml:msub><mml:mrow><mml:mi>F</mml:mi></mml:mrow><mml:mrow><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mn>1</mml:mn><mml:mo>,</mml:mo><mml:mn>58</mml:mn></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:mn>110</mml:mn><mml:mo>.</mml:mo><mml:mn>5</mml:mn><mml:mo>,</mml:mo><mml:mi>p</mml:mi><mml:mo>&#x0003C;</mml:mo><mml:mn>0</mml:mn><mml:mo>.</mml:mo><mml:mn>001</mml:mn><mml:mo>,</mml:mo><mml:msubsup><mml:mrow><mml:mi>&#x003B7;</mml:mi></mml:mrow><mml:mrow><mml:mi>p</mml:mi></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msubsup><mml:mo>=</mml:mo><mml:mn>0</mml:mn><mml:mo>.</mml:mo><mml:mn>66</mml:mn></mml:math></inline-formula>], congruency [<inline-formula><mml:math id="M8"><mml:msub><mml:mrow><mml:mi>F</mml:mi></mml:mrow><mml:mrow><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mn>1</mml:mn><mml:mo>,</mml:mo><mml:mn>58</mml:mn></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:mn>217</mml:mn><mml:mo>.</mml:mo><mml:mn>1</mml:mn><mml:mo>,</mml:mo><mml:mi>p</mml:mi><mml:mo>&#x0003C;</mml:mo><mml:mn>0</mml:mn><mml:mo>.</mml:mo><mml:mn>001</mml:mn><mml:mo>,</mml:mo><mml:mtext>&#x000A0;</mml:mtext><mml:msubsup><mml:mrow><mml:mi>&#x003B7;</mml:mi></mml:mrow><mml:mrow><mml:mi>p</mml:mi></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msubsup><mml:mo>=</mml:mo><mml:mn>0</mml:mn><mml:mo>.</mml:mo><mml:mn>79</mml:mn></mml:math></inline-formula>], and presentation time [<inline-formula><mml:math id="M9"><mml:msub><mml:mrow><mml:mi>F</mml:mi></mml:mrow><mml:mrow><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mn>3</mml:mn><mml:mo>,</mml:mo><mml:mn>174</mml:mn></mml:mrow><mml:mo 
stretchy="false">)</mml:mo></mml:mrow></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:mn>185</mml:mn><mml:mo>.</mml:mo><mml:mn>2</mml:mn><mml:mo>,</mml:mo><mml:mi>p</mml:mi><mml:mo>&#x0003C;</mml:mo><mml:mn>0</mml:mn><mml:mo>.</mml:mo><mml:mn>001</mml:mn><mml:mo>,</mml:mo><mml:msubsup><mml:mrow><mml:mi>&#x003B7;</mml:mi></mml:mrow><mml:mrow><mml:mi>p</mml:mi></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msubsup><mml:mo>=</mml:mo><mml:mn>0</mml:mn><mml:mo>.</mml:mo><mml:mn>76</mml:mn></mml:math></inline-formula>]. These effects were analysed in detail by considering first and higher order interactions. Because sensitivity was mostly settled for presentation times of 250 ms and beyond in both age groups, we provide tables with pairwise comparisons for data agglomerated over the last two presentation times. In these tables we report age-related performance differences, as well as congruency effects for stabilized performance levels.</p>
<fig id="F3" position="float">
<label>Figure 3</label>
<caption><p><bold>The <italic>d</italic>&#x02032; measure as a function of presentation time for the two age groups with faces (upper panels) and watches (lower panels), and target half cue given at study image (early cue, left panels) and before test image (late cue, right panels)</bold>. Data for the congruent trials are shown as open black circles, gray symbols indicate data for incongruent trials. Error bars indicate 95% confidence limits of the means.</p></caption>
<graphic xlink:href="fnagi-08-00187-g0003.tif"/>
</fig>
<sec>
<title>3.1.1. Stimulus effects</title>
<p>There was an age &#x000D7; stimulus interaction [<inline-formula><mml:math id="M10"><mml:msub><mml:mrow><mml:mi>F</mml:mi></mml:mrow><mml:mrow><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mn>1</mml:mn><mml:mo>,</mml:mo><mml:mn>58</mml:mn></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:mn>24</mml:mn><mml:mo>.</mml:mo><mml:mn>2</mml:mn><mml:mo>,</mml:mo><mml:mi>p</mml:mi><mml:mo>&#x0003C;</mml:mo><mml:mn>0</mml:mn><mml:mo>.</mml:mo><mml:mn>001</mml:mn><mml:mo>,</mml:mo><mml:msubsup><mml:mrow><mml:mi>&#x003B7;</mml:mi></mml:mrow><mml:mrow><mml:mi>p</mml:mi></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msubsup><mml:mo>=</mml:mo><mml:mn>0</mml:mn><mml:mo>.</mml:mo><mml:mn>29</mml:mn></mml:math></inline-formula>], which indicated different performance with both stimulus categories in either age group. Comparing sensitivity across stimuli for young adults showed that performance was at equal levels with both stimulus categories [<italic>F</italic><sub>(1, 58)</sub> &#x0003D; 0.72, <italic>p</italic> &#x0003D; 0.398], while older adults performed notably worse with watches compared to faces [<italic>F</italic><sub>(1, 58)</sub> &#x0003D; 56.75, <italic>p</italic> &#x0003C; 0.001]. These effects did not depend on presentation time [presentation time &#x000D7; stimulus interaction <italic>F</italic><sub>(3, 174)</sub> &#x0003D; 1.10, <italic>p</italic> &#x0003D; 0.351], and were also present in the data for the two longest timings [young adults: <italic>F</italic><sub>(1, 58)</sub> &#x0003D; 0.01, <italic>p</italic> &#x0003D; 0.937; older adults: <italic>F</italic><sub>(1, 58)</sub> &#x0003D; 56.47, <italic>p</italic> &#x0003C; 0.001]. More detailed analysis revealed how stimulus effects differed in congruent and incongruent trials. 
For young adults there was better performance with watches in incongruent trials [<inline-formula><mml:math id="M11"><mml:mi>&#x00394;</mml:mi><mml:msup><mml:mrow><mml:mi>d</mml:mi></mml:mrow><mml:mrow><mml:mi>&#x02032;</mml:mi></mml:mrow></mml:msup><mml:mo>=</mml:mo><mml:mo>-</mml:mo><mml:mn>0</mml:mn><mml:mo>.</mml:mo><mml:mn>33</mml:mn><mml:mo>,</mml:mo><mml:msub><mml:mrow><mml:mi>t</mml:mi></mml:mrow><mml:mrow><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mn>31</mml:mn></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:mo>-</mml:mo><mml:mn>2</mml:mn><mml:mo>.</mml:mo><mml:mn>38</mml:mn><mml:mo>,</mml:mo><mml:mi>p</mml:mi><mml:mo>&#x0003C;</mml:mo><mml:mn>0</mml:mn><mml:mo>.</mml:mo><mml:mn>03</mml:mn></mml:math></inline-formula>], but better performance with faces in congruent trials [<inline-formula><mml:math id="M12"><mml:mi>&#x00394;</mml:mi><mml:msup><mml:mrow><mml:mi>d</mml:mi></mml:mrow><mml:mrow><mml:mi>&#x02032;</mml:mi></mml:mrow></mml:msup><mml:mo>=</mml:mo><mml:mn>0</mml:mn><mml:mo>.</mml:mo><mml:mn>47</mml:mn><mml:mo>,</mml:mo><mml:msub><mml:mrow><mml:mi>t</mml:mi></mml:mrow><mml:mrow><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mn>31</mml:mn></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:mn>7</mml:mn><mml:mo>.</mml:mo><mml:mn>44</mml:mn><mml:mo>,</mml:mo><mml:mi>p</mml:mi><mml:mo>&#x0003C;</mml:mo><mml:mn>0</mml:mn><mml:mo>.</mml:mo><mml:mn>001</mml:mn></mml:math></inline-formula>]. Hence, agglomerated across congruency, both effects canceled out. 
For older adults, in contrast, performance was much better with faces than with watches in congruent trials [<inline-formula><mml:math id="M13"><mml:mi>&#x00394;</mml:mi><mml:msup><mml:mrow><mml:mi>d</mml:mi></mml:mrow><mml:mrow><mml:mi>&#x02032;</mml:mi></mml:mrow></mml:msup><mml:mo>=</mml:mo><mml:mn>1</mml:mn><mml:mo>.</mml:mo><mml:mn>05</mml:mn><mml:mo>,</mml:mo><mml:msub><mml:mrow><mml:mi>t</mml:mi></mml:mrow><mml:mrow><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mn>27</mml:mn></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:mn>7</mml:mn><mml:mo>.</mml:mo><mml:mn>44</mml:mn><mml:mo>,</mml:mo><mml:mi>p</mml:mi><mml:mo>&#x0003C;</mml:mo><mml:mn>0</mml:mn><mml:mo>.</mml:mo><mml:mn>001</mml:mn></mml:math></inline-formula>], but not significantly different in incongruent trials [<inline-formula><mml:math id="M14"><mml:mi>&#x00394;</mml:mi><mml:msup><mml:mrow><mml:mi>d</mml:mi></mml:mrow><mml:mrow><mml:mi>&#x02032;</mml:mi></mml:mrow></mml:msup><mml:mo>=</mml:mo><mml:mn>0</mml:mn><mml:mo>.</mml:mo><mml:mn>30</mml:mn><mml:mo>,</mml:mo><mml:msub><mml:mrow><mml:mi>t</mml:mi></mml:mrow><mml:mrow><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mn>27</mml:mn></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:mn>2</mml:mn><mml:mo>.</mml:mo><mml:mn>00</mml:mn><mml:mo>,</mml:mo><mml:mi>p</mml:mi><mml:mo>=</mml:mo><mml:mn>0</mml:mn><mml:mo>.</mml:mo><mml:mn>055</mml:mn></mml:math></inline-formula>]. 
Again, these results did not change when only the last two timings were considered [young adults, congruent: <inline-formula><mml:math id="M15"><mml:mi>&#x00394;</mml:mi><mml:msup><mml:mrow><mml:mi>d</mml:mi></mml:mrow><mml:mrow><mml:mi>&#x02032;</mml:mi></mml:mrow></mml:msup><mml:mo>=</mml:mo><mml:mn>0</mml:mn><mml:mo>.</mml:mo><mml:mn>42</mml:mn><mml:mo>,</mml:mo><mml:msub><mml:mrow><mml:mi>t</mml:mi></mml:mrow><mml:mrow><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mn>31</mml:mn></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:mn>9</mml:mn><mml:mo>.</mml:mo><mml:mn>39</mml:mn><mml:mo>,</mml:mo><mml:mi>p</mml:mi><mml:mo>&#x0003C;</mml:mo><mml:mn>0</mml:mn><mml:mo>.</mml:mo><mml:mn>001</mml:mn></mml:math></inline-formula>; young adults, incongruent: <inline-formula><mml:math id="M16"><mml:mi>&#x00394;</mml:mi><mml:msup><mml:mrow><mml:mi>d</mml:mi></mml:mrow><mml:mrow><mml:mi>&#x02032;</mml:mi></mml:mrow></mml:msup><mml:mo>=</mml:mo><mml:mo>-</mml:mo><mml:mn>0</mml:mn><mml:mo>.</mml:mo><mml:mn>41</mml:mn><mml:mo>,</mml:mo><mml:msub><mml:mrow><mml:mi>t</mml:mi></mml:mrow><mml:mrow><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mn>31</mml:mn></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:mo>-</mml:mo><mml:mn>2</mml:mn><mml:mo>.</mml:mo><mml:mn>28</mml:mn><mml:mo>,</mml:mo><mml:mi>p</mml:mi><mml:mo>&#x0003C;</mml:mo><mml:mn>0</mml:mn><mml:mo>.</mml:mo><mml:mn>03</mml:mn></mml:math></inline-formula>; older adults, congruent: <inline-formula><mml:math id="M17"><mml:mi>&#x00394;</mml:mi><mml:msup><mml:mrow><mml:mi>d</mml:mi></mml:mrow><mml:mrow><mml:mi>&#x02032;</mml:mi></mml:mrow></mml:msup><mml:mo>=</mml:mo><mml:mn>1</mml:mn><mml:mo>.</mml:mo><mml:mn>12</mml:mn><mml:mo>,</mml:mo><mml:msub><mml:mrow><mml:mi>t</mml:mi></mml:mrow><mml:mrow><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mn>27</mml:mn></mml:mrow><mml:mo 
stretchy="false">)</mml:mo></mml:mrow></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:mn>23</mml:mn><mml:mo>.</mml:mo><mml:mn>18</mml:mn><mml:mo>,</mml:mo><mml:mi>p</mml:mi><mml:mo>&#x0003C;</mml:mo><mml:mn>0</mml:mn><mml:mo>.</mml:mo><mml:mn>001</mml:mn></mml:math></inline-formula>; older adults, incongruent: <inline-formula><mml:math id="M18"><mml:mi>&#x00394;</mml:mi><mml:msup><mml:mrow><mml:mi>d</mml:mi></mml:mrow><mml:mrow><mml:mi>&#x02032;</mml:mi></mml:mrow></mml:msup><mml:mo>=</mml:mo><mml:mn>0</mml:mn><mml:mo>.</mml:mo><mml:mn>39</mml:mn><mml:mo>,</mml:mo><mml:msub><mml:mrow><mml:mi>t</mml:mi></mml:mrow><mml:mrow><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mn>27</mml:mn></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:mn>2</mml:mn><mml:mo>.</mml:mo><mml:mn>02</mml:mn><mml:mo>,</mml:mo><mml:mi>p</mml:mi><mml:mo>=</mml:mo><mml:mn>0</mml:mn><mml:mo>.</mml:mo><mml:mn>053</mml:mn></mml:math></inline-formula>].</p>
</sec>
<sec>
<title>3.1.2. Age-related performance differences</title>
<p>Young adults showed higher matching accuracy in all conditions of the experiment. The age &#x000D7; stimulus interaction [<inline-formula><mml:math id="M19"><mml:msub><mml:mrow><mml:mi>F</mml:mi></mml:mrow><mml:mrow><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mn>1</mml:mn><mml:mo>,</mml:mo><mml:mn>58</mml:mn></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:mn>24</mml:mn><mml:mo>.</mml:mo><mml:mn>2</mml:mn><mml:mo>,</mml:mo><mml:mi>p</mml:mi><mml:mo>&#x0003C;</mml:mo><mml:mn>0</mml:mn><mml:mo>.</mml:mo><mml:mn>001</mml:mn><mml:mo>,</mml:mo><mml:msubsup><mml:mrow><mml:mi>&#x003B7;</mml:mi></mml:mrow><mml:mrow><mml:mi>p</mml:mi></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msubsup><mml:mo>=</mml:mo><mml:mn>0</mml:mn><mml:mo>.</mml:mo><mml:mn>29</mml:mn></mml:math></inline-formula>] reflected that age-related performance differences were much stronger with watches than with faces. The strong age &#x000D7; stimulus interaction was maintained when only the last two presentation times were considered [<inline-formula><mml:math id="M20"><mml:msub><mml:mrow><mml:mi>F</mml:mi></mml:mrow><mml:mrow><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mn>1</mml:mn><mml:mo>,</mml:mo><mml:mn>58</mml:mn></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:mn>16</mml:mn><mml:mo>.</mml:mo><mml:mn>0</mml:mn><mml:mo>,</mml:mo><mml:mi>p</mml:mi><mml:mo>&#x0003C;</mml:mo><mml:mn>0</mml:mn><mml:mo>.</mml:mo><mml:mn>001</mml:mn><mml:mo>,</mml:mo><mml:msubsup><mml:mrow><mml:mi>&#x003B7;</mml:mi></mml:mrow><mml:mrow><mml:mi>p</mml:mi></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msubsup><mml:mo>=</mml:mo><mml:mn>0</mml:mn><mml:mo>.</mml:mo><mml:mn>22</mml:mn></mml:math></inline-formula>]. Table <xref ref-type="table" rid="T1">1</xref> lists the results of pairwise tests for these data. 
For faces, the age-related performance difference averaged 0.68 <italic>d</italic>&#x02032; units, with an effect size of <italic>d</italic> &#x0003D; 0.98. For watches, a difference of 1.43 <italic>d</italic>&#x02032; units was obtained, with an effect size of <italic>d</italic> &#x0003D; 2.32. Table <xref ref-type="table" rid="T1">1</xref> also shows that age-related sensitivity differences, measured in <italic>d</italic>&#x02032; units, were at least twice as large in incongruent as in congruent trials. However, owing to the much larger standard errors in incongruent trials, this pattern was not reflected in the effect size measure <italic>d</italic>.</p>
<table-wrap position="float" id="T1">
<label>Table 1</label>
<caption><p><bold>Age-related performance differences agglomerated across the two longer presentation times</bold>.</p></caption>
<table frame="hsides" rules="groups">
<thead>
<tr>
<th valign="top" align="left"><bold>Stimulus</bold></th>
<th valign="top" align="left"><bold>Cue</bold></th>
<th valign="top" align="left"><bold>Congruency</bold></th>
<th valign="top" align="center"><bold>&#x00394;<italic>d</italic>&#x02032;</bold></th>
<th valign="top" align="center"><bold><italic>s<sub>e</sub></italic></bold></th>
<th valign="top" align="center"><bold><italic>t</italic></bold></th>
<th valign="top" align="center"><bold><italic>df</italic></bold></th>
<th valign="top" align="center"><bold><italic>p</italic></bold></th>
<th valign="top" align="center"><bold><italic>d</italic></bold></th>
</tr>
</thead>
<tbody>
<tr>
<td valign="top" align="left">Faces</td>
<td valign="top" align="left">Early</td>
<td valign="top" align="left">CC</td>
<td valign="top" align="center">0.36</td>
<td valign="top" align="center">0.09</td>
<td valign="top" align="center">4.00</td>
<td valign="top" align="center">58</td>
<td valign="top" align="center">0.001</td>
<td valign="top" align="center">1.04</td>
</tr>
<tr>
<td valign="top" align="left">Faces</td>
<td valign="top" align="left">Early</td>
<td valign="top" align="left">IC</td>
<td valign="top" align="center">0.99</td>
<td valign="top" align="center">0.27</td>
<td valign="top" align="center">3.68</td>
<td valign="top" align="center">58</td>
<td valign="top" align="center">0.001</td>
<td valign="top" align="center">0.95</td>
</tr>
<tr>
<td valign="top" align="left">Faces</td>
<td valign="top" align="left">Late</td>
<td valign="top" align="left">CC</td>
<td valign="top" align="center">0.38</td>
<td valign="top" align="center">0.12</td>
<td valign="top" align="center">3.07</td>
<td valign="top" align="center">58</td>
<td valign="top" align="center">0.003</td>
<td valign="top" align="center">0.79</td>
</tr>
<tr style="border-bottom: thin solid #000000;">
<td valign="top" align="left">Faces</td>
<td valign="top" align="left">Late</td>
<td valign="top" align="left">IC</td>
<td valign="top" align="center">0.99</td>
<td valign="top" align="center">0.23</td>
<td valign="top" align="center">4.39</td>
<td valign="top" align="center">58</td>
<td valign="top" align="center">0.001</td>
<td valign="top" align="center">1.14</td>
</tr>
<tr style="border-bottom: thin solid #000000;">
<td valign="top" align="left">Mean</td>
<td/>
<td/>
<td valign="top" align="center">0.68</td>
<td/>
<td/>
<td/>
<td/>
<td valign="top" align="center">0.98</td>
</tr>
<tr>
<td valign="top" align="left">Watches</td>
<td valign="top" align="left">Early</td>
<td valign="top" align="left">CC</td>
<td valign="top" align="center">0.97</td>
<td valign="top" align="center">0.14</td>
<td valign="top" align="center">6.95</td>
<td valign="top" align="center">58</td>
<td valign="top" align="center">0.001</td>
<td valign="top" align="center">1.80</td>
</tr>
<tr>
<td valign="top" align="left">Watches</td>
<td valign="top" align="left">Early</td>
<td valign="top" align="left">IC</td>
<td valign="top" align="center">1.79</td>
<td valign="top" align="center">0.23</td>
<td valign="top" align="center">7.66</td>
<td valign="top" align="center">58</td>
<td valign="top" align="center">0.001</td>
<td valign="top" align="center">1.98</td>
</tr>
<tr>
<td valign="top" align="left">Watches</td>
<td valign="top" align="left">Late</td>
<td valign="top" align="left">CC</td>
<td valign="top" align="center">1.16</td>
<td valign="top" align="center">0.11</td>
<td valign="top" align="center">10.11</td>
<td valign="top" align="center">58</td>
<td valign="top" align="center">0.001</td>
<td valign="top" align="center">2.62</td>
</tr>
<tr style="border-bottom: thin solid #000000;">
<td valign="top" align="left">Watches</td>
<td valign="top" align="left">Late</td>
<td valign="top" align="left">IC</td>
<td valign="top" align="center">1.78</td>
<td valign="top" align="center">0.16</td>
<td valign="top" align="center">11.08</td>
<td valign="top" align="center">58</td>
<td valign="top" align="center">0.001</td>
<td valign="top" align="center">2.87</td>
</tr>
<tr>
<td valign="top" align="left">Mean</td>
<td/>
<td/>
<td valign="top" align="center">1.43</td>
<td/>
<td/>
<td/>
<td/>
<td valign="top" align="center">2.32</td>
</tr>
</tbody>
</table>
<table-wrap-foot>
<p><italic>The table shows d&#x02032; difference, its standard error, t-value, degrees of freedom, significance level, and Cohen&#x00027;s d</italic>.</p>
</table-wrap-foot>
</table-wrap>
<p>There were further significant interactions of factors with age, which involved congruency and presentation time (see below).</p>
</sec>
<sec>
<title>3.1.3. Congruency effects</title>
<p>There were large congruency effects, which were notably larger for faces than for watches [congruency &#x000D7; stimulus, <inline-formula><mml:math id="M21"><mml:msub><mml:mrow><mml:mi>F</mml:mi></mml:mrow><mml:mrow><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mn>1</mml:mn><mml:mo>,</mml:mo><mml:mn>58</mml:mn></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:mn>62</mml:mn><mml:mo>.</mml:mo><mml:mn>62</mml:mn><mml:mo>,</mml:mo><mml:mi>p</mml:mi><mml:mo>&#x0003C;</mml:mo><mml:mn>0</mml:mn><mml:mo>.</mml:mo><mml:mn>001</mml:mn><mml:mo>,</mml:mo><mml:msubsup><mml:mrow><mml:mi>&#x003B7;</mml:mi></mml:mrow><mml:mrow><mml:mi>p</mml:mi></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msubsup><mml:mo>=</mml:mo><mml:mn>0</mml:mn><mml:mo>.</mml:mo><mml:mn>52</mml:mn></mml:math></inline-formula>]. Congruency effects were also modulated by age, having larger CEs for older than for younger adults [congruency &#x000D7; age, <inline-formula><mml:math id="M22"><mml:msub><mml:mrow><mml:mi>F</mml:mi></mml:mrow><mml:mrow><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mn>1</mml:mn><mml:mo>,</mml:mo><mml:mn>58</mml:mn></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:mn>11</mml:mn><mml:mo>.</mml:mo><mml:mn>24</mml:mn><mml:mo>,</mml:mo><mml:mi>p</mml:mi><mml:mo>&#x0003C;</mml:mo><mml:mn>0</mml:mn><mml:mo>.</mml:mo><mml:mn>002</mml:mn><mml:mo>,</mml:mo><mml:msubsup><mml:mrow><mml:mi>&#x003B7;</mml:mi></mml:mrow><mml:mrow><mml:mi>p</mml:mi></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msubsup><mml:mo>=</mml:mo><mml:mn>0</mml:mn><mml:mo>.</mml:mo><mml:mn>16</mml:mn></mml:math></inline-formula>]. 
The congruency effect also depended on cueing, with larger CEs for the late compared to the early cue [congruency &#x000D7; cue, <inline-formula><mml:math id="M23"><mml:msub><mml:mrow><mml:mi>F</mml:mi></mml:mrow><mml:mrow><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mn>1</mml:mn><mml:mo>,</mml:mo><mml:mn>58</mml:mn></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:mn>6</mml:mn><mml:mo>.</mml:mo><mml:mn>71</mml:mn><mml:mo>,</mml:mo><mml:mi>p</mml:mi><mml:mo>&#x0003C;</mml:mo><mml:mn>0</mml:mn><mml:mo>.</mml:mo><mml:mn>02</mml:mn><mml:mo>,</mml:mo><mml:msubsup><mml:mrow><mml:mi>&#x003B7;</mml:mi></mml:mrow><mml:mrow><mml:mi>p</mml:mi></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msubsup><mml:mo>=</mml:mo><mml:mn>0</mml:mn><mml:mo>.</mml:mo><mml:mn>1</mml:mn></mml:math></inline-formula>]. The congruency effect also depended on presentation time, but in different ways for the two age groups (see below).</p>
<p>When analysing the data of only the last two presentation times, all the reported interactions were maintained [congruency &#x000D7; stimulus, <inline-formula><mml:math id="M24"><mml:msub><mml:mrow><mml:mi>F</mml:mi></mml:mrow><mml:mrow><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mn>1</mml:mn><mml:mo>,</mml:mo><mml:mn>58</mml:mn></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:mn>19</mml:mn><mml:mo>.</mml:mo><mml:mn>07</mml:mn><mml:mo>,</mml:mo><mml:mi>p</mml:mi><mml:mo>&#x0003C;</mml:mo><mml:mn>0</mml:mn><mml:mo>.</mml:mo><mml:mn>001</mml:mn><mml:mo>,</mml:mo><mml:msubsup><mml:mrow><mml:mi>&#x003B7;</mml:mi></mml:mrow><mml:mrow><mml:mi>p</mml:mi></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msubsup><mml:mo>=</mml:mo><mml:mn>0</mml:mn><mml:mo>.</mml:mo><mml:mn>25</mml:mn></mml:math></inline-formula>; congruency &#x000D7; age, <inline-formula><mml:math id="M25"><mml:msub><mml:mrow><mml:mi>F</mml:mi></mml:mrow><mml:mrow><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mn>1</mml:mn><mml:mo>,</mml:mo><mml:mn>58</mml:mn></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:mn>4</mml:mn><mml:mo>.</mml:mo><mml:mn>30</mml:mn><mml:mo>,</mml:mo><mml:mi>p</mml:mi><mml:mo>&#x0003C;</mml:mo><mml:mn>0</mml:mn><mml:mo>.</mml:mo><mml:mn>05</mml:mn><mml:mo>,</mml:mo><mml:msubsup><mml:mrow><mml:mi>&#x003B7;</mml:mi></mml:mrow><mml:mrow><mml:mi>p</mml:mi></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msubsup><mml:mo>=</mml:mo><mml:mn>0</mml:mn><mml:mo>.</mml:mo><mml:mn>07</mml:mn></mml:math></inline-formula>; congruency &#x000D7; cue, <inline-formula><mml:math id="M26"><mml:msub><mml:mrow><mml:mi>F</mml:mi></mml:mrow><mml:mrow><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mn>1</mml:mn><mml:mo>,</mml:mo><mml:mn>58</mml:mn></mml:mrow><mml:mo 
stretchy="false">)</mml:mo></mml:mrow></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:mn>8</mml:mn><mml:mo>.</mml:mo><mml:mn>95</mml:mn><mml:mo>,</mml:mo><mml:mi>p</mml:mi><mml:mo>&#x0003C;</mml:mo><mml:mn>0</mml:mn><mml:mo>.</mml:mo><mml:mn>01</mml:mn><mml:mo>,</mml:mo><mml:msubsup><mml:mrow><mml:mi>&#x003B7;</mml:mi></mml:mrow><mml:mrow><mml:mi>p</mml:mi></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msubsup><mml:mo>=</mml:mo><mml:mn>0</mml:mn><mml:mo>.</mml:mo><mml:mn>13</mml:mn></mml:math></inline-formula>], but two further higher order interactions emerged. For settled performance levels, there was a significant congruency &#x000D7; cue &#x000D7; stimulus interaction [<inline-formula><mml:math id="M27"><mml:msub><mml:mrow><mml:mi>F</mml:mi></mml:mrow><mml:mrow><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mn>1</mml:mn><mml:mo>,</mml:mo><mml:mn>58</mml:mn></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:mn>7</mml:mn><mml:mo>.</mml:mo><mml:mn>68</mml:mn><mml:mo>,</mml:mo><mml:mi>p</mml:mi><mml:mo>&#x0003C;</mml:mo><mml:mn>0</mml:mn><mml:mo>.</mml:mo><mml:mn>01</mml:mn><mml:mo>,</mml:mo><mml:msubsup><mml:mrow><mml:mi>&#x003B7;</mml:mi></mml:mrow><mml:mrow><mml:mi>p</mml:mi></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msubsup><mml:mo>=</mml:mo><mml:mn>0</mml:mn><mml:mo>.</mml:mo><mml:mn>11</mml:mn></mml:math></inline-formula>], and a significant congruency &#x000D7; stimulus &#x000D7; age interaction [<inline-formula><mml:math id="M28"><mml:msub><mml:mrow><mml:mi>F</mml:mi></mml:mrow><mml:mrow><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mn>1</mml:mn><mml:mo>,</mml:mo><mml:mn>58</mml:mn></mml:mrow><mml:mo 
stretchy="false">)</mml:mo></mml:mrow></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:mn>7</mml:mn><mml:mo>.</mml:mo><mml:mn>68</mml:mn><mml:mo>,</mml:mo><mml:mi>p</mml:mi><mml:mo>&#x0003C;</mml:mo><mml:mn>0</mml:mn><mml:mo>.</mml:mo><mml:mn>01</mml:mn><mml:mo>,</mml:mo><mml:msubsup><mml:mrow><mml:mi>&#x003B7;</mml:mi></mml:mrow><mml:mrow><mml:mi>p</mml:mi></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msubsup><mml:mo>=</mml:mo><mml:mn>0</mml:mn><mml:mo>.</mml:mo><mml:mn>11</mml:mn></mml:math></inline-formula>].</p>
<p>Table <xref ref-type="table" rid="T2">2</xref> lists the results of testing congruency effects for the last two presentation times, which illuminate these interactions. With faces, both age groups showed large congruency effects: about one <italic>d</italic>&#x02032; unit in young adults, and more than 1.3 <italic>d</italic>&#x02032; units in older adults. With watches, young adults showed no substantial CEs; there was a modest congruency effect only in the late cue condition (0.26 <italic>d</italic>&#x02032; units), and none in the early cue condition. Older adults, in contrast, showed strong congruency effects for watches of nearly one <italic>d</italic>&#x02032; unit in both cue conditions. The CEs for watches had large effect sizes of more than one Cohen&#x00027;s <italic>d</italic>, comparable to the CE found for young adults with faces in the early cue condition. Pairwise comparisons across age showed that older adults had significantly larger CEs than young adults for both faces and watches, consistently for early and late cueing (see last column of Table <xref ref-type="table" rid="T2">2</xref>). The significant congruency &#x000D7; cue &#x000D7; stimulus interaction was reflected in larger CEs for the late compared to the early cue with faces, but not with watches.</p>
<table-wrap position="float" id="T2">
<label>Table 2</label>
<caption><p><bold>Congruency effects (CEs), for both age groups and stimulus classes, agglomerated across the two longer presentation times</bold>.</p></caption>
<table frame="hsides" rules="groups">
<thead><tr>
<th valign="top" align="left"><bold>Age group</bold></th>
<th valign="top" align="left"><bold>Stimulus</bold></th>
<th valign="top" align="left"><bold>Cue</bold></th>
<th valign="top" align="center"><bold><italic>CE</italic></bold></th>
<th valign="top" align="center"><bold><italic>s<sub>e</sub></italic></bold></th>
<th valign="top" align="center"><bold><italic>t</italic></bold></th>
<th valign="top" align="center"><bold><italic>df</italic></bold></th>
<th valign="top" align="center"><bold><italic>p</italic></bold></th>
<th valign="top" align="center"><bold><italic>d</italic></bold></th>
<th valign="top" align="left"><bold>Effect size</bold></th>
<th valign="top" align="center"><bold>Older-younger</bold></th>
</tr>
</thead>
<tbody>
<tr>
<td valign="top" align="left">Young adults</td>
<td valign="top" align="left">Faces</td>
<td valign="top" align="left">Early</td>
<td valign="top" align="center">0.75</td>
<td valign="top" align="center">0.18</td>
<td valign="top" align="center">4.19</td>
<td valign="top" align="center">31</td>
<td valign="top" align="center">0.001</td>
<td valign="top" align="center">0.74</td>
<td valign="top" align="left">Large</td>
<td valign="top" align="center"><xref ref-type="table-fn" rid="TN1"><sup>&#x0002A;</sup></xref></td>
</tr>
<tr>
<td valign="top" align="left">Young adults</td>
<td valign="top" align="left">Faces</td>
<td valign="top" align="left">Late</td>
<td valign="top" align="center">1.26</td>
<td valign="top" align="center">0.17</td>
<td valign="top" align="center">7.28</td>
<td valign="top" align="center">31</td>
<td valign="top" align="center">0.001</td>
<td valign="top" align="center">1.29</td>
<td valign="top" align="left">Large</td>
<td valign="top" align="center"><xref ref-type="table-fn" rid="TN1"><sup>&#x0002A;</sup></xref></td>
</tr>
<tr>
<td valign="top" align="left">Young adults</td>
<td valign="top" align="left">Watches</td>
<td valign="top" align="left">Early</td>
<td valign="top" align="center">0.08</td>
<td valign="top" align="center">0.15</td>
<td valign="top" align="center">0.52</td>
<td valign="top" align="center">31</td>
<td valign="top" align="center">0.605</td>
<td valign="top" align="center">0.09</td>
<td valign="top" align="left">&#x02013;</td>
<td valign="top" align="center"><xref ref-type="table-fn" rid="TN2"><sup>&#x0002A;&#x0002A;&#x0002A;</sup></xref></td>
</tr>
<tr style="border-bottom: thin solid #000000;">
<td valign="top" align="left">Young adults</td>
<td valign="top" align="left">Watches</td>
<td valign="top" align="left">Late</td>
<td valign="top" align="center">0.26</td>
<td valign="top" align="center">0.12</td>
<td valign="top" align="center">2.19</td>
<td valign="top" align="center">31</td>
<td valign="top" align="center">0.036</td>
<td valign="top" align="center">0.39</td>
<td valign="top" align="left">Medium</td>
<td valign="top" align="center"><xref ref-type="table-fn" rid="TN2"><sup>&#x0002A;&#x0002A;&#x0002A;</sup></xref></td>
</tr> <tr>
<td valign="top" align="left">Older adults</td>
<td valign="top" align="left">Faces</td>
<td valign="top" align="left">Early</td>
<td valign="top" align="center">1.38</td>
<td valign="top" align="center">0.19</td>
<td valign="top" align="center">7.27</td>
<td valign="top" align="center">27</td>
<td valign="top" align="center">0.001</td>
<td valign="top" align="center">1.37</td>
<td valign="top" align="left">Large</td>
<td/>
</tr>
<tr>
<td valign="top" align="left">Older adults</td>
<td valign="top" align="left">Faces</td>
<td valign="top" align="left">Late</td>
<td valign="top" align="center">1.87</td>
<td valign="top" align="center">0.18</td>
<td valign="top" align="center">10.10</td>
<td valign="top" align="center">27</td>
<td valign="top" align="center">0.001</td>
<td valign="top" align="center">1.91</td>
<td valign="top" align="left">Large</td>
<td/>
</tr>
<tr>
<td valign="top" align="left">Older adults</td>
<td valign="top" align="left">Watches</td>
<td valign="top" align="left">Early</td>
<td valign="top" align="center">0.91</td>
<td valign="top" align="center">0.16</td>
<td valign="top" align="center">5.51</td>
<td valign="top" align="center">27</td>
<td valign="top" align="center">0.001</td>
<td valign="top" align="center">1.04</td>
<td valign="top" align="left">Large</td>
<td/>
</tr>
<tr>
<td valign="top" align="left">Older adults</td>
<td valign="top" align="left">Watches</td>
<td valign="top" align="left">Late</td>
<td valign="top" align="center">0.89</td>
<td valign="top" align="center">0.13</td>
<td valign="top" align="center">6.90</td>
<td valign="top" align="center">27</td>
<td valign="top" align="center">0.001</td>
<td valign="top" align="center">1.30</td>
<td valign="top" align="left">Large</td>
<td/>
</tr>
</tbody>
</table>
<table-wrap-foot>
<p><italic>The table shows the CE, its standard error, t-value, degrees of freedom, significance level, and Cohen&#x00027;s d with classification of effect size. The last column indicates the significance level for comparing CEs in the same conditions across age (</italic></p>
<fn id="TN1">
<label>&#x0002A;</label>
<p><italic>&#x003B1; &#x0003D; 0.05</italic>,</p></fn>
<fn id="TN2">
<label>&#x0002A;&#x0002A;&#x0002A;</label>
<p><italic>&#x003B1; &#x0003D; 0.001)</italic>.</p></fn>
</table-wrap-foot>
</table-wrap>
</sec>
<sec>
<title>3.1.4. Effects of early or late target cue</title>
<p>The cueing manipulation modulated performance strongly, yielding better performance for the early compared to the late target cue. These effects were similar for both age groups [cue &#x000D7; age, <italic>F</italic><sub>(1, 58)</sub> &#x0003D; 2.08, <italic>p</italic> &#x0003D; 0.155] and both stimulus classes [cue &#x000D7; stimulus, <italic>F</italic><sub>(1, 58)</sub> &#x0003D; 0.27, <italic>p</italic> &#x0003D; 0.606]. Early vs. late cueing affected the CE (see above), and its effects depended on presentation time (see below).</p>
</sec>
<sec>
<title>3.1.5. Effects of presentation time</title>
<p>The strong effect of presentation time (see above) differed between the two age groups [presentation time &#x000D7; age, <inline-formula><mml:math id="M29"><mml:msub><mml:mrow><mml:mi>F</mml:mi></mml:mrow><mml:mrow><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mn>3</mml:mn><mml:mo>,</mml:mo><mml:mn>174</mml:mn></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:mn>10</mml:mn><mml:mo>.</mml:mo><mml:mn>51</mml:mn><mml:mo>,</mml:mo><mml:mi>p</mml:mi><mml:mo>&#x0003C;</mml:mo><mml:mn>0</mml:mn><mml:mo>.</mml:mo><mml:mn>001</mml:mn><mml:mo>,</mml:mo><mml:msubsup><mml:mrow><mml:mi>&#x003B7;</mml:mi></mml:mrow><mml:mrow><mml:mi>p</mml:mi></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msubsup><mml:mo>=</mml:mo><mml:mn>0</mml:mn><mml:mo>.</mml:mo><mml:mn>15</mml:mn></mml:math></inline-formula>]. Older adults showed more improvement with increasing presentation time, while young adults were closer to their settled performance even at brief timings. The effect of cueing was also modulated by presentation time, with larger performance differences between the early and late cue at the two longer presentation times [presentation time &#x000D7; cue, <inline-formula><mml:math id="M30"><mml:msub><mml:mrow><mml:mi>F</mml:mi></mml:mrow><mml:mrow><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mn>3</mml:mn><mml:mo>,</mml:mo><mml:mn>174</mml:mn></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:mn>4</mml:mn><mml:mo>.</mml:mo><mml:mn>60</mml:mn><mml:mo>,</mml:mo><mml:mi>p</mml:mi><mml:mo>&#x0003C;</mml:mo><mml:mn>0</mml:mn><mml:mo>.</mml:mo><mml:mn>01</mml:mn><mml:mo>,</mml:mo><mml:msubsup><mml:mrow><mml:mi>&#x003B7;</mml:mi></mml:mrow><mml:mrow><mml:mi>p</mml:mi></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msubsup><mml:mo>=</mml:mo><mml:mn>0</mml:mn><mml:mo>.</mml:mo><mml:mn>07</mml:mn></mml:math></inline-formula>].</p>
<p>The congruency effect also depended on presentation time [presentation time &#x000D7; congruency, <inline-formula><mml:math id="M31"><mml:msub><mml:mrow><mml:mi>F</mml:mi></mml:mrow><mml:mrow><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mn>3</mml:mn><mml:mo>,</mml:mo><mml:mn>174</mml:mn></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:mn>5</mml:mn><mml:mo>.</mml:mo><mml:mn>62</mml:mn><mml:mo>,</mml:mo><mml:mi>p</mml:mi><mml:mo>&#x0003C;</mml:mo><mml:mn>0</mml:mn><mml:mo>.</mml:mo><mml:mn>001</mml:mn><mml:mo>,</mml:mo><mml:msubsup><mml:mrow><mml:mi>&#x003B7;</mml:mi></mml:mrow><mml:mrow><mml:mi>p</mml:mi></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msubsup><mml:mo>=</mml:mo><mml:mn>0</mml:mn><mml:mo>.</mml:mo><mml:mn>09</mml:mn></mml:math></inline-formula>]. This effect was moderated by age. While the congruency effect was constant across presentation time for young adults, it increased with increasing presentation time for older adults [presentation time &#x000D7; congruency &#x000D7; age, <inline-formula><mml:math id="M32"><mml:msub><mml:mrow><mml:mi>F</mml:mi></mml:mrow><mml:mrow><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mn>3</mml:mn><mml:mo>,</mml:mo><mml:mn>174</mml:mn></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:mn>12</mml:mn><mml:mo>.</mml:mo><mml:mn>10</mml:mn><mml:mo>,</mml:mo><mml:mi>p</mml:mi><mml:mo>&#x0003C;</mml:mo><mml:mn>0</mml:mn><mml:mo>.</mml:mo><mml:mn>001</mml:mn><mml:mo>,</mml:mo><mml:msubsup><mml:mrow><mml:mi>&#x003B7;</mml:mi></mml:mrow><mml:mrow><mml:mi>p</mml:mi></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msubsup><mml:mo>=</mml:mo><mml:mn>0</mml:mn><mml:mo>.</mml:mo><mml:mn>17</mml:mn></mml:math></inline-formula>]. Among all the differential effects involving presentation time, this effect had the largest effect size.
This age-differential effect arose because older adults improved more with increasing presentation time in congruent trials, but could not improve at the same rate in incongruent trials (see Figure <xref ref-type="fig" rid="F3">3</xref>). Young adults, in contrast, improved at similar rates in both congruency conditions, with a marginal tendency toward stronger improvement in incongruent trials.</p>
</sec>
</sec>
<sec>
<title>3.2. Response bias</title>
<p>Figure <xref ref-type="fig" rid="F4">4</xref> shows the mean estimates of the response criterion <italic>c</italic> as a function of presentation time for all experimental conditions. ANOVA revealed main effects of presentation time [<inline-formula><mml:math id="M33"><mml:msub><mml:mrow><mml:mi>F</mml:mi></mml:mrow><mml:mrow><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mn>3</mml:mn><mml:mo>,</mml:mo><mml:mn>174</mml:mn></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:mn>9</mml:mn><mml:mo>.</mml:mo><mml:mn>64</mml:mn><mml:mo>,</mml:mo><mml:mi>p</mml:mi><mml:mo>&#x0003C;</mml:mo><mml:mn>0</mml:mn><mml:mo>.</mml:mo><mml:mn>001</mml:mn><mml:mo>,</mml:mo><mml:msubsup><mml:mrow><mml:mi>&#x003B7;</mml:mi></mml:mrow><mml:mrow><mml:mi>p</mml:mi></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msubsup><mml:mo>=</mml:mo><mml:mn>0</mml:mn><mml:mo>.</mml:mo><mml:mn>14</mml:mn></mml:math></inline-formula>], congruency [<inline-formula><mml:math id="M34"><mml:msub><mml:mrow><mml:mi>F</mml:mi></mml:mrow><mml:mrow><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mn>1</mml:mn><mml:mo>,</mml:mo><mml:mn>58</mml:mn></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:mn>73</mml:mn><mml:mo>.</mml:mo><mml:mn>51</mml:mn><mml:mo>,</mml:mo><mml:mi>p</mml:mi><mml:mo>&#x0003C;</mml:mo><mml:mn>0</mml:mn><mml:mo>.</mml:mo><mml:mn>001</mml:mn><mml:mo>,</mml:mo><mml:msubsup><mml:mrow><mml:mi>&#x003B7;</mml:mi></mml:mrow><mml:mrow><mml:mi>p</mml:mi></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msubsup><mml:mo>=</mml:mo><mml:mn>0</mml:mn><mml:mo>.</mml:mo><mml:mn>559</mml:mn></mml:math></inline-formula>], stimulus [<inline-formula><mml:math id="M35"><mml:msub><mml:mrow><mml:mi>F</mml:mi></mml:mrow><mml:mrow><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mn>1</mml:mn><mml:mo>,</mml:mo><mml:mn>58</mml:mn></mml:mrow><mml:mo 
stretchy="false">)</mml:mo></mml:mrow></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:mn>56</mml:mn><mml:mo>.</mml:mo><mml:mn>80</mml:mn><mml:mo>,</mml:mo><mml:mi>p</mml:mi><mml:mo>&#x0003C;</mml:mo><mml:mn>0</mml:mn><mml:mo>.</mml:mo><mml:mn>001</mml:mn><mml:mo>,</mml:mo><mml:msubsup><mml:mrow><mml:mi>&#x003B7;</mml:mi></mml:mrow><mml:mrow><mml:mi>p</mml:mi></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msubsup><mml:mo>=</mml:mo><mml:mn>0</mml:mn><mml:mo>.</mml:mo><mml:mn>49</mml:mn></mml:math></inline-formula>], and age [<inline-formula><mml:math id="M36"><mml:msub><mml:mrow><mml:mi>F</mml:mi></mml:mrow><mml:mrow><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mn>1</mml:mn><mml:mo>,</mml:mo><mml:mn>58</mml:mn></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:mn>50</mml:mn><mml:mo>.</mml:mo><mml:mn>46</mml:mn><mml:mo>,</mml:mo><mml:mi>p</mml:mi><mml:mo>&#x0003C;</mml:mo><mml:mn>0</mml:mn><mml:mo>.</mml:mo><mml:mn>001</mml:mn><mml:mo>,</mml:mo><mml:msubsup><mml:mrow><mml:mi>&#x003B7;</mml:mi></mml:mrow><mml:mrow><mml:mi>p</mml:mi></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msubsup><mml:mo>=</mml:mo><mml:mn>0</mml:mn><mml:mo>.</mml:mo><mml:mn>465</mml:mn></mml:math></inline-formula>], but no effect of cueing [<italic>F</italic><sub>(1, 58)</sub> &#x0003D; 2.66, <italic>p</italic> &#x0003D; 0.11]. The main effect of age indicated that older adults had consistently lower values in the response criterion than young adults in all experimental conditions (see Figure <xref ref-type="fig" rid="F4">4</xref>). 
However, there was a strong age &#x000D7; stimulus interaction [<inline-formula><mml:math id="M37"><mml:msub><mml:mrow><mml:mi>F</mml:mi></mml:mrow><mml:mrow><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mn>1</mml:mn><mml:mo>,</mml:mo><mml:mn>58</mml:mn></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:mn>15</mml:mn><mml:mo>.</mml:mo><mml:mn>99</mml:mn><mml:mo>,</mml:mo><mml:mi>p</mml:mi><mml:mo>&#x0003C;</mml:mo><mml:mn>0</mml:mn><mml:mo>.</mml:mo><mml:mn>001</mml:mn><mml:mo>,</mml:mo><mml:msubsup><mml:mrow><mml:mi>&#x003B7;</mml:mi></mml:mrow><mml:mrow><mml:mi>p</mml:mi></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msubsup><mml:mo>=</mml:mo><mml:mn>0</mml:mn><mml:mo>.</mml:mo><mml:mn>22</mml:mn></mml:math></inline-formula>], which indicated that the response criterion used by young adults for watches was only marginally smaller than for faces, while older adults strongly preferred &#x0201C;same&#x0201D; responses for watches, but not for faces (see Figure <xref ref-type="fig" rid="F4">4</xref>).</p>
<fig id="F4" position="float">
<label>Figure 4</label>
<caption><p><bold>Estimated response criterion <italic>c</italic> as a function of presentation time for the two age groups for faces (upper panels) and watches (lower panels), and target half cue given at study image (early cue, left panels) and before test image (late cue, right panels)</bold>. Conventions as in Figure <xref ref-type="fig" rid="F3">3</xref>.</p></caption>
<graphic xlink:href="fnagi-08-00187-g0004.tif"/>
</fig>
<p>A further striking difference between young and older adults was the strong differential effect of presentation time [presentation time &#x000D7; age, <inline-formula><mml:math id="M38"><mml:msub><mml:mrow><mml:mi>F</mml:mi></mml:mrow><mml:mrow><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mn>3</mml:mn><mml:mo>,</mml:mo><mml:mn>234</mml:mn></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:mn>26</mml:mn><mml:mo>.</mml:mo><mml:mn>21</mml:mn><mml:mo>,</mml:mo><mml:mi>p</mml:mi><mml:mo>&#x0003C;</mml:mo><mml:mn>0</mml:mn><mml:mo>.</mml:mo><mml:mn>001</mml:mn><mml:mo>,</mml:mo><mml:msubsup><mml:mrow><mml:mi>&#x003B7;</mml:mi></mml:mrow><mml:mrow><mml:mi>p</mml:mi></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msubsup><mml:mo>=</mml:mo><mml:mn>0</mml:mn><mml:mo>.</mml:mo><mml:mn>31</mml:mn></mml:math></inline-formula>]. For older adults the response criterion increased with presentation time, i.e., the strong &#x0201C;same&#x0201D; bias diminished steadily as presentation time increased. For young adults, in contrast, the response criterion decreased slightly with increasing presentation time, or remained roughly constant around zero.</p>
<p>Figure <xref ref-type="fig" rid="F4">4</xref> also shows that the response criterion <italic>c</italic> reached settled values for the two longer presentation times. For these timings we compared the response criterion across age for both stimuli, cues and congruency relations (see Table <xref ref-type="table" rid="T3">3</xref>). The pairwise tests reveal that there were no significant age-related differences in the response criterion for faces. For watches, there were strong age-related differences, which were quite constant across congruency and cueing. These effects were due to the strong &#x0201C;same&#x0201D; bias of older adults for watches, which did not vanish even at longer presentation times.</p>
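The response criterion analyzed here is the standard signal detection measure. A minimal sketch of its computation (illustrative only, not the authors' code; whether negative values count as a &#x0201C;same&#x0201D; bias depends on mapping &#x0201C;same&#x0201D; responses to the signal category, which we assume here):

```python
from statistics import NormalDist

def criterion_c(hit_rate, fa_rate):
    """Response criterion c = -(z(hit rate) + z(false-alarm rate)) / 2.
    c < 0: liberal bias toward the signal ('same') response;
    c > 0: conservative bias toward the 'different' response."""
    z = NormalDist().inv_cdf
    return -(z(hit_rate) + z(fa_rate)) / 2.0

# Symmetric performance yields an unbiased criterion of zero;
# a high false-alarm rate relative to misses pushes c below zero.
unbiased = criterion_c(0.90, 0.10)
liberal = criterion_c(0.95, 0.30)
```

Under this convention, the persistently negative criterion of older adults for watches corresponds to the &#x0201C;same&#x0201D; bias described above.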
<table-wrap position="float" id="T3">
<label>Table 3</label>
<caption><p><bold>Age-related differences in the response criterion, <italic>c</italic>, agglomerated across the two longer presentation times</bold>.</p></caption>
<table frame="hsides" rules="groups">
<thead><tr>
<th valign="top" align="left"><bold>Stimulus</bold></th>
<th valign="top" align="left"><bold>Cue</bold></th>
<th valign="top" align="left"><bold>Congruency</bold></th>
<th valign="top" align="center"><bold><italic>&#x00394;c</italic></bold></th>
<th valign="top" align="center"><bold><italic>s<sub>e</sub></italic></bold></th>
<th valign="top" align="center"><bold><italic>t</italic></bold></th>
<th valign="top" align="center"><bold><italic>df</italic></bold></th>
<th valign="top" align="center"><bold><italic>p</italic></bold></th>
<th valign="top" align="center"><bold><italic>d</italic></bold></th>
</tr>
</thead>
<tbody>
<tr>
<td valign="top" align="left">Faces</td>
<td valign="top" align="left">Early</td>
<td valign="top" align="left">CC</td>
<td valign="top" align="center">0.05</td>
<td valign="top" align="center">0.05</td>
<td valign="top" align="center">1.03</td>
<td valign="top" align="center">58</td>
<td valign="top" align="center">0.307</td>
<td valign="top" align="center">0.27</td>
</tr>
<tr>
<td valign="top" align="left">Faces</td>
<td valign="top" align="left">Early</td>
<td valign="top" align="left">IC</td>
<td valign="top" align="center">0.07</td>
<td valign="top" align="center">0.09</td>
<td valign="top" align="center">0.74</td>
<td valign="top" align="center">58</td>
<td valign="top" align="center">0.462</td>
<td valign="top" align="center">0.19</td>
</tr>
<tr>
<td valign="top" align="left">Faces</td>
<td valign="top" align="left">Late</td>
<td valign="top" align="left">CC</td>
<td valign="top" align="center">0.07</td>
<td valign="top" align="center">0.06</td>
<td valign="top" align="center">1.12</td>
<td valign="top" align="center">58</td>
<td valign="top" align="center">0.265</td>
<td valign="top" align="center">0.29</td>
</tr>
<tr style="border-bottom: thin solid #000000;">
<td valign="top" align="left">Faces</td>
<td valign="top" align="left">Late</td>
<td valign="top" align="left">IC</td>
<td valign="top" align="center">0.08</td>
<td valign="top" align="center">0.09</td>
<td valign="top" align="center">0.92</td>
<td valign="top" align="center">58</td>
<td valign="top" align="center">0.361</td>
<td valign="top" align="center">0.24</td>
</tr>
<tr style="border-bottom: thin solid #000000;">
<td valign="top" align="left">Mean</td>
<td/>
<td/>
<td valign="top" align="center">0.07</td>
<td/>
<td/>
<td/>
<td/>
<td valign="top" align="center">0.25</td>
</tr>
<tr>
<td valign="top" align="left">Watches</td>
<td valign="top" align="left">Early</td>
<td valign="top" align="left">CC</td>
<td valign="top" align="center">0.39</td>
<td valign="top" align="center">0.08</td>
<td valign="top" align="center">5.11</td>
<td valign="top" align="center">58</td>
<td valign="top" align="center">0.001</td>
<td valign="top" align="center">1.32</td>
</tr>
<tr>
<td valign="top" align="left">Watches</td>
<td valign="top" align="left">Early</td>
<td valign="top" align="left">IC</td>
<td valign="top" align="center">0.39</td>
<td valign="top" align="center">0.10</td>
<td valign="top" align="center">4.06</td>
<td valign="top" align="center">58</td>
<td valign="top" align="center">0.001</td>
<td valign="top" align="center">1.05</td>
</tr>
<tr>
<td valign="top" align="left">Watches</td>
<td valign="top" align="left">Late</td>
<td valign="top" align="left">CC</td>
<td valign="top" align="center">0.47</td>
<td valign="top" align="center">0.09</td>
<td valign="top" align="center">5.25</td>
<td valign="top" align="center">58</td>
<td valign="top" align="center">0.001</td>
<td valign="top" align="center">1.36</td>
</tr>
<tr style="border-bottom: thin solid #000000;">
<td valign="top" align="left">Watches</td>
<td valign="top" align="left">Late</td>
<td valign="top" align="left">IC</td>
<td valign="top" align="center">0.53</td>
<td valign="top" align="center">0.10</td>
<td valign="top" align="center">5.28</td>
<td valign="top" align="center">58</td>
<td valign="top" align="center">0.001</td>
<td valign="top" align="center">1.37</td>
</tr>
<tr>
<td valign="top" align="left">Mean</td>
<td/>
<td/>
<td valign="top" align="center">0.44</td>
<td/>
<td/>
<td/>
<td/>
<td valign="top" align="center">1.27</td>
</tr>
</tbody>
</table>
<table-wrap-foot>
<p><italic>The table shows c difference, its standard error, t-value, degrees of freedom, significance level, and Cohen&#x00027;s d</italic>.</p>
</table-wrap-foot>
</table-wrap>
<sec>
<title>3.2.1. Congruency bias (CB)</title>
<p>The strong modulation of the response criterion by the congruency relation indicated larger values of <italic>c</italic> in incongruent trials, compared to congruent trials, i.e., a CB effect. The CB was moderated by age [congruency &#x000D7; age, <inline-formula><mml:math id="M39"><mml:msub><mml:mrow><mml:mi>F</mml:mi></mml:mrow><mml:mrow><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mn>1</mml:mn><mml:mo>,</mml:mo><mml:mn>58</mml:mn></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:mn>4</mml:mn><mml:mo>.</mml:mo><mml:mn>25</mml:mn><mml:mo>,</mml:mo><mml:mi>p</mml:mi><mml:mo>&#x0003C;</mml:mo><mml:mn>0</mml:mn><mml:mo>.</mml:mo><mml:mn>05</mml:mn><mml:mo>,</mml:mo><mml:msubsup><mml:mrow><mml:mi>&#x003B7;</mml:mi></mml:mrow><mml:mrow><mml:mi>p</mml:mi></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msubsup><mml:mo>=</mml:mo><mml:mn>0</mml:mn><mml:mo>.</mml:mo><mml:mn>07</mml:mn></mml:math></inline-formula>] and, notably, by stimulus [congruency &#x000D7; stimulus, <inline-formula><mml:math id="M40"><mml:msub><mml:mrow><mml:mi>F</mml:mi></mml:mrow><mml:mrow><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mn>1</mml:mn><mml:mo>,</mml:mo><mml:mn>58</mml:mn></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:mn>19</mml:mn><mml:mo>.</mml:mo><mml:mn>45</mml:mn><mml:mo>,</mml:mo><mml:mi>p</mml:mi><mml:mo>&#x0003C;</mml:mo><mml:mn>0</mml:mn><mml:mo>.</mml:mo><mml:mn>001</mml:mn><mml:mo>,</mml:mo><mml:msubsup><mml:mrow><mml:mi>&#x003B7;</mml:mi></mml:mrow><mml:mrow><mml:mi>p</mml:mi></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msubsup><mml:mo>=</mml:mo><mml:mn>0</mml:mn><mml:mo>.</mml:mo><mml:mn>25</mml:mn></mml:math></inline-formula>]. The congruency &#x000D7; age interaction indicated that the CB was larger for younger than for older adults. 
However, this was because younger adults showed a CB already at brief timings, while, for older adults, the CB emerged only at longer presentation times.</p>
<p>Analysing the response criterion data at only the last two presentation times showed that the congruency &#x000D7; age interaction vanished [<inline-formula><mml:math id="M41"><mml:msub><mml:mrow><mml:mi>F</mml:mi></mml:mrow><mml:mrow><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mn>1</mml:mn><mml:mo>,</mml:mo><mml:mn>58</mml:mn></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:mn>0</mml:mn><mml:mo>.</mml:mo><mml:mn>30</mml:mn><mml:mo>,</mml:mo><mml:mi>p</mml:mi><mml:mo>=</mml:mo><mml:mn>0</mml:mn><mml:mo>.</mml:mo><mml:mn>587</mml:mn><mml:mo>,</mml:mo><mml:msubsup><mml:mrow><mml:mi>&#x003B7;</mml:mi></mml:mrow><mml:mrow><mml:mi>p</mml:mi></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msubsup><mml:mo>=</mml:mo><mml:mn>0</mml:mn><mml:mo>.</mml:mo><mml:mn>01</mml:mn></mml:math></inline-formula>], indicating equal CB effects at relaxed timings. Table <xref ref-type="table" rid="T4">4</xref> shows the CB agglomerated across the last two presentation times, broken down by stimulus, cue, and age group. For watches, there was no CB in either age group, under either early or late cueing. For faces, there were significant CB effects, reflecting a similar congruency-modulated criterion shift in both age groups, for both the early and the late cue. The CB effect sizes ranged from <italic>d</italic> &#x0003D; 0.63 to 0.82, indicating similar CB effects for faces in both age groups and with both cues.</p>
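The CB entries in Table 4 follow from the criterion measure: per participant, the CB is the criterion in incongruent minus congruent trials, tested against zero with a paired t-test. A minimal, self-contained sketch (illustrative names and data, not the authors' code):

```python
from statistics import NormalDist, mean, stdev
from math import sqrt

def criterion_c(hit_rate, fa_rate):
    """Response criterion c = -(z(hit rate) + z(false-alarm rate)) / 2."""
    z = NormalDist().inv_cdf
    return -(z(hit_rate) + z(fa_rate)) / 2.0

def congruency_bias(incongruent, congruent):
    """Per-participant CB: criterion in incongruent minus congruent trials."""
    return [criterion_c(*i) - criterion_c(*c) for i, c in zip(incongruent, congruent)]

def one_sample_t(diffs):
    """t statistic for testing the mean CB against zero (df = n - 1)."""
    n = len(diffs)
    return mean(diffs) / (stdev(diffs) / sqrt(n))

# Illustrative (hit, false-alarm) rates for three participants
incongruent = [(0.82, 0.10), (0.78, 0.12), (0.85, 0.08)]
congruent = [(0.95, 0.07), (0.93, 0.09), (0.96, 0.06)]
cbs = congruency_bias(incongruent, congruent)
```

A positive mean CB, as in the face conditions of Table 4, means the criterion shifted toward &#x0201C;different&#x0201D; responses in incongruent trials.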
<table-wrap position="float" id="T4">
<label>Table 4</label>
<caption><p><bold>Congruency bias effects (CBs), for both age groups and stimulus classes, agglomerated across the two longer presentation times</bold>.</p></caption>
<table frame="hsides" rules="groups">
<thead><tr>
<th valign="top" align="left"><bold>Age group</bold></th>
<th valign="top" align="left"><bold>Stimulus</bold></th>
<th valign="top" align="left"><bold>Cue</bold></th>
<th valign="top" align="center"><bold><italic>CB</italic></bold></th>
<th valign="top" align="center"><bold><italic>s<sub>e</sub></italic></bold></th>
<th valign="top" align="center"><bold><italic>t</italic></bold></th>
<th valign="top" align="center"><bold><italic>df</italic></bold></th>
<th valign="top" align="center"><bold><italic>p</italic></bold></th>
<th valign="top" align="center"><bold><italic>d</italic></bold></th>
<th valign="top" align="left"><bold>Effect size</bold></th>
<th valign="top" align="left"><bold>Older-younger</bold></th>
</tr>
</thead>
<tbody>
<tr>
<td valign="top" align="left">Young adults</td>
<td valign="top" align="left">Faces</td>
<td valign="top" align="left">Early</td>
<td valign="top" align="center">0.20</td>
<td valign="top" align="center">0.05</td>
<td valign="top" align="center">3.92</td>
<td valign="top" align="center">31</td>
<td valign="top" align="center">0.001</td>
<td valign="top" align="center">0.69</td>
<td valign="top" align="left">Large</td>
<td valign="top" align="left">n.s.</td>
</tr>
<tr>
<td valign="top" align="left">Young adults</td>
<td valign="top" align="left">Faces</td>
<td valign="top" align="left">Late</td>
<td valign="top" align="center">0.24</td>
<td valign="top" align="center">0.05</td>
<td valign="top" align="center">4.65</td>
<td valign="top" align="center">31</td>
<td valign="top" align="center">0.001</td>
<td valign="top" align="center">0.82</td>
<td valign="top" align="left">Large</td>
<td valign="top" align="left">n.s.</td>
</tr>
<tr>
<td valign="top" align="left">Young adults</td>
<td valign="top" align="left">Watches</td>
<td valign="top" align="left">Early</td>
<td valign="top" align="center">&#x02212;0.01</td>
<td valign="top" align="center">0.04</td>
<td valign="top" align="center">&#x02212;0.28</td>
<td valign="top" align="center">31</td>
<td valign="top" align="center">0.784</td>
<td valign="top" align="center">0.05</td>
<td valign="top" align="left">&#x02013;</td>
<td valign="top" align="left">n.s.</td>
</tr>
<tr style="border-bottom: thin solid #000000;">
<td valign="top" align="left">Young adults</td>
<td valign="top" align="left">Watches</td>
<td valign="top" align="left">Late</td>
<td valign="top" align="center">0.09</td>
<td valign="top" align="center">0.05</td>
<td valign="top" align="center">1.88</td>
<td valign="top" align="center">31</td>
<td valign="top" align="center">0.069</td>
<td valign="top" align="center">0.33</td>
<td valign="top" align="left">&#x02013;</td>
<td valign="top" align="left">n.s.</td>
</tr>
<tr>
<td valign="top" align="left">Older adults</td>
<td valign="top" align="left">Faces</td>
<td valign="top" align="left">Early</td>
<td valign="top" align="center">0.18</td>
<td valign="top" align="center">0.05</td>
<td valign="top" align="center">3.33</td>
<td valign="top" align="center">27</td>
<td valign="top" align="center">0.003</td>
<td valign="top" align="center">0.63</td>
<td valign="top" align="left">Medium</td>
<td/>
</tr>
<tr>
<td valign="top" align="left">Older adults</td>
<td valign="top" align="left">Faces</td>
<td valign="top" align="left">Late</td>
<td valign="top" align="center">0.22</td>
<td valign="top" align="center">0.05</td>
<td valign="top" align="center">4.12</td>
<td valign="top" align="center">27</td>
<td valign="top" align="center">0.001</td>
<td valign="top" align="center">0.78</td>
<td valign="top" align="left">Large</td>
<td/>
</tr>
<tr>
<td valign="top" align="left">Older adults</td>
<td valign="top" align="left">Watches</td>
<td valign="top" align="left">Early</td>
<td valign="top" align="center">&#x02212;0.01</td>
<td valign="top" align="center">0.05</td>
<td valign="top" align="center">&#x02212;0.17</td>
<td valign="top" align="center">27</td>
<td valign="top" align="center">0.869</td>
<td valign="top" align="center">0.03</td>
<td valign="top" align="left">&#x02013;</td>
<td/>
</tr>
<tr>
<td valign="top" align="left">Older adults</td>
<td valign="top" align="left">Watches</td>
<td valign="top" align="left">Late</td>
<td valign="top" align="center">0.04</td>
<td valign="top" align="center">0.05</td>
<td valign="top" align="center">0.74</td>
<td valign="top" align="center">27</td>
<td valign="top" align="center">0.468</td>
<td valign="top" align="center">0.14</td>
<td valign="top" align="left">&#x02013;</td>
<td/>
</tr>
</tbody>
</table>
<table-wrap-foot>
<p><italic>The table shows the CB, its standard error, t-value, degrees of freedom, significance level, and Cohen&#x00027;s d with classification of effect size. The last column indicates the significance level for comparing CBs in the same conditions across age; n.s., not significant</italic>.</p>
</table-wrap-foot>
</table-wrap>
<p>To illuminate the different kinds of errors made at the last two presentation times, we calculated the error proportion <italic>q</italic> and report the odds ratio for wrong &#x0201C;different&#x0201D; compared to wrong &#x0201C;same&#x0201D; responses. Table <xref ref-type="table" rid="T5">5</xref> shows the results. Although older adults made many more errors of both kinds in incongruent trials than young adults, the error proportion <italic>q</italic> for faces was modulated by the congruency relation similarly in both age groups. The odds ratios indicate that the risk of wrong &#x0201C;different&#x0201D; responses to faces roughly doubled in incongruent compared to congruent trials in both age groups. For watches, there was only a marginally higher risk of wrong &#x0201C;different&#x0201D; responses in incongruent trials for both young and older adults. This illustrates the quite similar effect of congruency on response bias in both age groups, even though the overall response bias for watches differed strongly between the age groups (see the <italic>q</italic> measure in Table <xref ref-type="table" rid="T5">5</xref>). Further, the proportion correct rates shown in Table <xref ref-type="table" rid="T5">5</xref> illustrate that older adults reached good performance with faces in congruent contexts (92% correct judgments), coming close to the performance of young adults (95% correct judgments).</p>
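The <italic>q</italic> and odds-ratio measures can be reconstructed from the error rates. A minimal sketch of our reading of these measures (the mapping of misses to wrong &#x0201C;different&#x0201D; and false alarms to wrong &#x0201C;same&#x0201D; responses is an assumption inferred from the table entries, not stated code from the authors):

```python
def error_odds_ratio(miss_rate, fa_rate):
    """Odds of a wrong 'different' response (miss of a 'same' pair)
    relative to the odds of a wrong 'same' response (false alarm).
    The miss/false-alarm mapping is an assumption for illustration."""
    odds_wrong_different = miss_rate / (1.0 - miss_rate)
    odds_wrong_same = fa_rate / (1.0 - fa_rate)
    return odds_wrong_different / odds_wrong_same

def error_proportion_q(miss_rate, fa_rate):
    """Share of all errors that are wrong 'different' responses."""
    return miss_rate / (miss_rate + fa_rate)

# Illustrative rates in the range of Table 5's congruent face condition
or_cc = error_odds_ratio(0.05, 0.06)
q_cc = error_proportion_q(0.05, 0.06)
```

A ratio of the incongruent over the congruent odds ratio above 1 then corresponds to the elevated risk of wrong &#x0201C;different&#x0201D; responses in incongruent trials.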
<table-wrap position="float" id="T5">
<label>Table 5</label>
<caption><p><bold>Bias measure <italic>c</italic>, error rates and error proportion <italic>q</italic>, for both age groups and stimulus classes, agglomerated across the two longer presentation times</bold>.</p></caption>
<table frame="hsides" rules="groups">
<thead><tr>
<th valign="top" align="left"><bold>Age group</bold></th>
<th valign="top" align="left"><bold>Stimulus</bold></th>
<th valign="top" align="left"><bold>Congruency</bold></th>
<th valign="top" align="center"><bold><italic>c</italic></bold></th>
<th valign="top" align="center"><bold><italic>CR</italic></bold></th>
<th valign="top" align="center"><bold><italic>FA</italic></bold></th>
<th valign="top" align="center"><bold><italic>Hit</italic></bold></th>
<th valign="top" align="center"><bold><italic>Miss</italic></bold></th>
<th valign="top" align="center"><bold><italic>p<sub>c</sub></italic></bold></th>
<th valign="top" align="center"><bold><italic>q</italic></bold></th>
<th valign="top" align="center"><bold><italic>OR</italic></bold></th>
<th valign="top" align="center"><bold><italic>q<sub>OR</sub></italic></bold></th>
</tr>
</thead>
<tbody>
<tr>
<td valign="top" align="left">Young adults</td>
<td valign="top" align="left">Faces</td>
<td valign="top" align="left">CC</td>
<td valign="top" align="center">&#x02212;0.04</td>
<td valign="top" align="center">0.94</td>
<td valign="top" align="center">0.06</td>
<td valign="top" align="center">0.95</td>
<td valign="top" align="center">0.05</td>
<td valign="top" align="center">0.95</td>
<td valign="top" align="center">0.46</td>
<td valign="top" align="center">0.85</td>
<td valign="top" align="center">2.29</td>
</tr>
<tr>
<td valign="top" align="left">Young adults</td>
<td valign="top" align="left">Faces</td>
<td valign="top" align="left">IC</td>
<td valign="top" align="center">0.18</td>
<td valign="top" align="center">0.90</td>
<td valign="top" align="center">0.10</td>
<td valign="top" align="center">0.83</td>
<td valign="top" align="center">0.17</td>
<td valign="top" align="center">0.87</td>
<td valign="top" align="center">0.64</td>
<td valign="top" align="center">1.94</td>
<td/>
</tr>
<tr>
<td valign="top" align="left">Young adults</td>
<td valign="top" align="left">Watches</td>
<td valign="top" align="left">CC</td>
<td valign="top" align="center">&#x02212;0.08</td>
<td valign="top" align="center">0.91</td>
<td valign="top" align="center">0.09</td>
<td valign="top" align="center">0.93</td>
<td valign="top" align="center">0.07</td>
<td valign="top" align="center">0.92</td>
<td valign="top" align="center">0.43</td>
<td valign="top" align="center">0.73</td>
<td valign="top" align="center">1.19</td>
</tr>
<tr style="border-bottom: thin solid #000000;">
<td valign="top" align="left">Young adults</td>
<td valign="top" align="left">Watches</td>
<td valign="top" align="left">IC</td>
<td valign="top" align="center">&#x02212;0.04</td>
<td valign="top" align="center">0.90</td>
<td valign="top" align="center">0.10</td>
<td valign="top" align="center">0.91</td>
<td valign="top" align="center">0.09</td>
<td valign="top" align="center">0.91</td>
<td valign="top" align="center">0.47</td>
<td valign="top" align="center">0.87</td>
<td/>
</tr>
<tr>
<td valign="top" align="left">Older adults</td>
<td valign="top" align="left">Faces</td>
<td valign="top" align="left">CC</td>
<td valign="top" align="center">&#x02212;0.10</td>
<td valign="top" align="center">0.91</td>
<td valign="top" align="center">0.09</td>
<td valign="top" align="center">0.94</td>
<td valign="top" align="center">0.06</td>
<td valign="top" align="center">0.92</td>
<td valign="top" align="center">0.41</td>
<td valign="top" align="center">0.68</td>
<td valign="top" align="center">2.10</td>
</tr>
<tr>
<td valign="top" align="left">Older adults</td>
<td valign="top" align="left">Faces</td>
<td valign="top" align="left">IC</td>
<td valign="top" align="center">0.10</td>
<td valign="top" align="center">0.77</td>
<td valign="top" align="center">0.23</td>
<td valign="top" align="center">0.70</td>
<td valign="top" align="center">0.30</td>
<td valign="top" align="center">0.73</td>
<td valign="top" align="center">0.56</td>
<td valign="top" align="center">1.42</td>
<td/>
</tr>
<tr>
<td valign="top" align="left">Older adults</td>
<td valign="top" align="left">Watches</td>
<td valign="top" align="left">CC</td>
<td valign="top" align="center">&#x02212;0.51</td>
<td valign="top" align="center">0.64</td>
<td valign="top" align="center">0.36</td>
<td valign="top" align="center">0.92</td>
<td valign="top" align="center">0.08</td>
<td valign="top" align="center">0.78</td>
<td valign="top" align="center">0.19</td>
<td valign="top" align="center">0.16</td>
<td valign="top" align="center">1.20</td>
</tr>
<tr>
<td valign="top" align="left">Older adults</td>
<td valign="top" align="left">Watches</td>
<td valign="top" align="left">IC</td>
<td valign="top" align="center">&#x02212;0.49</td>
<td valign="top" align="center">0.48</td>
<td valign="top" align="center">0.52</td>
<td valign="top" align="center">0.82</td>
<td valign="top" align="center">0.18</td>
<td valign="top" align="center">0.65</td>
<td valign="top" align="center">0.25</td>
<td valign="top" align="center">0.20</td>
<td/>
</tr>
</tbody>
</table>
<table-wrap-foot>
<p><italic>The table shows c, the rates for CR, FA, Hit, and Miss, proportion correct, p<sub>c</sub>, the error proportion measure, q, the odds ratio for Miss compared to FA, OR, and the ratio of the OR for incongruent, compared to congruent trials, q<sub>OR</sub></italic>.</p>
</table-wrap-foot>
</table-wrap>
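<p>The quantities in Table 5 follow directly from the response rates. As a minimal sketch (an assumption about the computation, not the authors' analysis code), with &#x0201C;same&#x0201D; treated as the signal category (so a miss is a wrong &#x0201C;different&#x0201D; response and a false alarm a wrong &#x0201C;same&#x0201D; response), c, q, and OR can be computed as follows; small discrepancies from the table arise because the tabled values were derived from unrounded rates.</p>

```python
# Sketch of the signal-detection quantities reported in Table 5.
# Convention assumed here: "same" is the signal category, so
#   Hit  = correct "same",      Miss = wrong "different",
#   CR   = correct "different", FA   = wrong "same".
from statistics import NormalDist

z = NormalDist().inv_cdf  # inverse standard normal (z-transform)

def bias_c(hit, fa):
    """Decision criterion c; negative values indicate a 'same' bias."""
    return -0.5 * (z(hit) + z(fa))

def error_proportion_q(miss, fa):
    """Share of wrong 'different' responses among all errors."""
    return miss / (miss + fa)

def odds_ratio(miss, hit, fa, cr):
    """Odds of a wrong 'different' response relative to the odds
    of a wrong 'same' response."""
    return (miss / hit) / (fa / cr)

# Young adults, faces, congruent (CC) row of Table 5:
hit, miss, fa, cr = 0.95, 0.05, 0.06, 0.94
print(round(bias_c(hit, fa), 2))               # -0.05 (table: -0.04)
print(round(error_proportion_q(miss, fa), 2))  # 0.45  (table: 0.46)
print(round(odds_ratio(miss, hit, fa, cr), 2)) # 0.82  (table: 0.85)
```

With the rounded rates of the first row, all three values land within rounding distance of the tabled entries, and the ratio q_OR in the table is simply OR for incongruent divided by OR for congruent trials.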
</sec>
</sec>
</sec>
<sec sec-type="discussion" id="s4">
<title>4. Discussion</title>
<p>We studied face and non-face object perception with the complete design of the composite paradigm to test the face-specificity of congruency effects in young and older adults. We found that congruency effects were face-specific in young, but not in older adults. The congruency effects of older adults increased with presentation time and were substantial even for novel non-face objects at relaxed exposure durations, where performance had reached settled levels. In the following we discuss these results with respect to the potentially different origins of the congruency effect in young and older adults, and in the context of other recent findings.</p>
<sec>
<title>4.1. The effect of presentation time on the congruency effect for young and older adults</title>
<p>An important characteristic of the CEs of older adults is their dependency on presentation time. The CEs of older adults increased with longer exposure durations, while the CEs of young adults were strong even at the shortest presentation times and tended to decline thereafter. The CEs of older adults at longer presentation times result from differential improvement in congruent and incongruent trials. With incongruent composites there was hardly any improvement, while performance improved markedly for congruent composites, and more strongly for faces than for watches. This means that older adults could benefit from larger temporal processing resources only in the condition that did not require attentional control of the irrelevant halves and selective focus on the target parts. There, face processing benefited more strongly than watch processing, finally reaching good levels close to those of young adults.</p>
<p>Stimulus processing in trials that required controlling irrelevant information stayed at the same modest levels for faces and watches. This gives important clues to the origin of the congruency effect. Apparently, older adults had difficulty controlling the effects of the irrelevant halves, independent of object category. Young adults had no particular problems in this respect. Their performance increased with presentation time in incongruent trials at least at the rate found in congruent trials, and reached levels far above those reached by older adults (see Figure <xref ref-type="fig" rid="F3">3</xref> and Table <xref ref-type="table" rid="T1">1</xref>). These results confirm earlier findings that congruency effects are already present at brief timings for young adults (Richler et al., <xref ref-type="bibr" rid="B43">2009b</xref>). Further, young adults were able to use additional processing resources to limit the influence of irrelevant features (Meinhardt-Injac et al., <xref ref-type="bibr" rid="B32">2011</xref>). In this study, young adults could control incongruent watch halves fairly well at longer presentation times, and even in the late cue condition; thus congruency effects were truly face-specific in this age group. Good control of irrelevant information is further indicated by the fact that young adults reached better performance in incongruent trials with watches than with faces (see Section 3.2). This points to different origins of performance with incongruent composite faces and watches in young adults: incongruent features could be controlled fairly well with watches, while irrelevant face halves could not be ignored. This result clearly suggests a specific, integrative mode of processing exclusively for faces, but not for watches.</p>
<p>In contrast to young adults, the stimulus-unspecific performance loss of older adults with incongruent composites points to a general impairment in controlling irrelevant features. This result adds to the age-differential results found in other studies in which interference of non-attended scenes on attended faces, and vice versa, was measured (Gazzaley et al., <xref ref-type="bibr" rid="B17">2008</xref>; Quigley et al., <xref ref-type="bibr" rid="B39">2010</xref>; Schmitz et al., <xref ref-type="bibr" rid="B48">2010</xref>). Moreover, age-related decline in controlling irrelevant features as a determinant of worse performance in incongruent trials is supported by the results obtained for the bias measure.</p>
</sec>
<sec>
<title>4.2. The role of response bias</title>
<p>Analysis of response bias revealed that the congruency effects for faces in both age groups were accompanied by more wrong &#x0201C;different&#x0201D; responses in incongruent compared to congruent trials (CB effect). For young adults, this was consistently observed for all presentation times, while for older adults the CB emerged only for settled performance at relaxed presentation times. The CB shows that observers were more strongly biased to respond &#x0201C;different,&#x0201D; in agreement with the prediction from holistic processing (see Introduction). Hence, for relaxed timings, we found both a CE and a CB for young and for older adults, which is in agreement with holistic processing of faces in both age groups.</p>
<p>The CE for watches of the older adults, however, was not accompanied by a CB. Detailed analysis of the kinds of errors showed that wrong &#x0201C;same&#x0201D; responses were more likely than wrong &#x0201C;different&#x0201D; responses with watches, and this was not modulated by the congruency relation. Hence, the fact that all whole watches were different in incongruent trials did not influence the response behavior of older adults. This means that the errors made in incongruent trials with watches root in part-based interference rather than in holistic integration of upper and lower halves.</p>
<p>For older adults we found significant CEs for watches, but also <italic>larger</italic> CEs for faces, compared to young adults (see Table <xref ref-type="table" rid="T2">2</xref>). The differential pattern of CBs for faces and watches gives a clue to interpreting this result. Note that the CE increases when more errors are made in incongruent compared to congruent trials, irrespective of the kind of error. Although young and older adults have a similar CB for faces, older adults made many more errors of <italic>both</italic> kinds, i.e., the frequency of wrong &#x0201C;same&#x0201D; responses also increased in incongruent trials. That is, for faces, too, there were more errors that were induced not by the non-identity of the wholes, but by part-based interference. Hence, the stronger CEs of older adults for faces do not indicate stronger reliance on holistic processing strategies, but additional interference from parts. This supports the conclusion that the larger CE of older adults for faces and the CE for watches have a common ground in larger part-based interference from the non-attended parts of composite objects. The stronger susceptibility to part-based interference indicates a loss of efficient attentional control that applies to both object categories.</p>
<p>A further striking observation was the strong general &#x0201C;same&#x0201D; bias of older adults, much stronger for watches than for faces, which diminished with increasing presentation time. Both the stimulus dependency and the dependency on presentation time indicate that the &#x0201C;same&#x0201D; bias of older adults was performance related. That is, older adults preferred to respond &#x0201C;same&#x0201D; when they experienced high uncertainty about the correct judgment, i.e., when experienced task difficulty was high. This is in agreement with earlier findings of a tendency to overlook diagnostic differences (Daniel and Bentin, <xref ref-type="bibr" rid="B8">2012</xref>; Meinhardt-Injac et al., <xref ref-type="bibr" rid="B34">2014b</xref>, <xref ref-type="bibr" rid="B35">2015</xref>), and corresponds to the typical failure of older adults to categorize new objects as known ones (Fulton and Bartlett, <xref ref-type="bibr" rid="B13">1991</xref>; Lee et al., <xref ref-type="bibr" rid="B28">2014</xref>).</p>
</sec>
<sec>
<title>4.3. Age-related decline in face-specific processing</title>
<p>The findings of the present study raise the question of whether the observation of comparable face composite effects for young and older adults justifies the conclusion that the specific mechanisms of holistic face processing are intact and do not undergo age-related decline, as claimed recently (Konar et al., <xref ref-type="bibr" rid="B27">2013</xref>; Meinhardt-Injac et al., <xref ref-type="bibr" rid="B33">2014a</xref>). As the face-unspecific CEs in older adults, as well as the differential CB effects, show, comparable face composite effects for young and older adults are no solid grounds for this conclusion. A significant proportion of the face congruency effects observed for older adults may root in face-unspecific, part-based interference rather than in integrative processing specific to faces. While CEs associated with CB effects were observed in both older and young adults, which indicates holistic integration for faces in both age groups, the observation of face-unspecific CEs for older adults poses severe constraints on the interpretation of the face CEs in this group, since it can hardly be determined to what extent these effects reflect part interference on the one hand and holistic integration on the other.</p>
<p>We therefore conclude that strong composite effects for faces are not sufficient evidence for concluding that face-specific processing is intact at advanced ages. In this study, older adults performed much better with faces than with watches, but only for congruent composites, where attending target parts and attending non-target parts yield the same result. This supports the idea that older adults used a global viewing strategy, which is advantageous for faces but not for non-face objects, which differ in single features but hardly in global appearance. We think the relatively good performance reached with congruent face composites is not sufficient proof of intact and efficient holistic processing of faces. A major advantage of holistic processing is that changes in inner face details have strong effects on the overall facial appearance. However, there is evidence that older adults rely on global face shape (Schwarzer et al., <xref ref-type="bibr" rid="B49">2010</xref>) and have difficulty judging the inner face details (Meinhardt-Injac et al., <xref ref-type="bibr" rid="B34">2014b</xref>). Holistic integration of facial cues from different facial areas is also important in emotion recognition. Using the bubbles technique, Smith et al. (<xref ref-type="bibr" rid="B52">2005</xref>) revealed that happiness and anger may be readily recognized from single face regions (happiness from the mouth and anger from the eyes region), but that correct categorization of the remaining four emotions requires integrating cues from more than one area. However, aging studies of emotion recognition consistently report age-related deficits in identifying particularly anger, fear, and sadness (Sullivan and Ruffman, <xref ref-type="bibr" rid="B53">2004</xref>), as well as declined ability to infer emotions from the eyes region and the whole face (Sullivan et al., <xref ref-type="bibr" rid="B54">2007</xref>). Further, the reported strong age-related decline in using spatial-configural cues (Chaby et al., <xref ref-type="bibr" rid="B5">2011</xref>; Meinhardt-Injac et al., <xref ref-type="bibr" rid="B35">2015</xref>), and findings of reduced ability to group face fragments into whole intact faces (Norton et al., <xref ref-type="bibr" rid="B36">2009</xref>), point toward impairment in core capabilities of holistic processing.</p>
</sec>
<sec>
<title>4.4. Conclusion</title>
<p>Using the complete design of the composite paradigm with faces and novel non-face objects revealed face-specific congruency effects for young adults, but a loss of face-specificity in the congruency effects of older adults. This is a critical observation, since it was the specificity of the contextual interaction between attended and non-attended parts for faces, or for objects of expertise, that has so far led authors to conclude a specific integrative processing mode from the observation of congruency effects (composite effects). The magnitude of the congruency effects, as well as their association with a response bias toward &#x0201C;different&#x0201D; responses for incongruent composites, supports the view that a specific holistic processing mode is not lost at advanced ages. However, since the congruency effect also reflects part-based interference in older adults, as verified with non-face control objects, the face congruency effect may confound both origins, part interference and holistic integration, and these sources can hardly be disentangled. It can therefore not be judged whether there is age-related decline in the face-specific component. We recommend not grounding conclusions about holistic face perception of older adults in a single measure, combining several experimental paradigms that target different aspects of holistic processing, and using non-face control objects to assess face-specificity.</p>
</sec>
</sec>
<sec id="s5">
<title>Ethics statement</title>
<p>The study was conducted in accordance with the Declaration of Helsinki. In detail, subjects participated voluntarily and gave written informed consent to their participation. In addition, participants were informed that they were free to stop the experiment at any time without negative consequences. The data were analyzed anonymously. All procedures, including personal treatment, data handling, and the reasonableness of experimental routines, were approved by the local ethics committee of the Johannes Gutenberg University Mainz.</p>
</sec>
<sec id="s6">
<title>Author contributions</title>
<p>All authors contributed equally to the conceptualization of the study. BM set up the basic design. MP conducted the experiments and data preparation. GM contributed data analysis and interpretation. All authors were involved in writing, preparation of the manuscript and final approval. All authors agree to be accountable for all aspects of the work in ensuring that questions related to the accuracy or integrity of any part of the work are investigated and resolved appropriately.</p>
</sec>
<sec id="s7">
<title>Funding</title>
<p>This study was supported by the university research fund of Johannes Gutenberg University Mainz. Funding was granted to BM for project &#x0201C;Visual perception across the life-span.&#x0201D;</p>
<sec>
<title>Conflict of interest statement</title>
<p>The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.</p>
</sec>
</sec>
</body>
<back>
<ref-list>
<title>References</title>
<ref id="B1">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Bartlett</surname> <given-names>J. C.</given-names></name> <name><surname>Leslie</surname> <given-names>J. E.</given-names></name> <name><surname>Tubbs</surname> <given-names>A.</given-names></name> <name><surname>Fulton</surname> <given-names>A.</given-names></name></person-group> (<year>1989</year>). <article-title>Aging and memory for pictures of faces</article-title>. <source>Psychol. Aging</source> <volume>4</volume>, <fpage>276</fpage>&#x02013;<lpage>283</lpage>. <pub-id pub-id-type="pmid">2803620</pub-id></citation>
</ref>
<ref id="B2">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Boutet</surname> <given-names>I.</given-names></name> <name><surname>Faubert</surname> <given-names>J.</given-names></name></person-group> (<year>2006</year>). <article-title>Recognition of faces and complex objects in younger and older adults</article-title>. <source>Mem. Cogn.</source> <volume>34</volume>, <fpage>854</fpage>&#x02013;<lpage>864</lpage>. <pub-id pub-id-type="doi">10.3758/BF03193432</pub-id><pub-id pub-id-type="pmid">17063916</pub-id></citation>
</ref>
<ref id="B3">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Cabeza</surname> <given-names>R.</given-names></name> <name><surname>Grady</surname> <given-names>C. L.</given-names></name> <name><surname>Nyberg</surname> <given-names>L.</given-names></name> <name><surname>McIntosh</surname> <given-names>A. R.</given-names></name> <name><surname>Tulving</surname> <given-names>E.</given-names></name> <name><surname>Kapur</surname> <given-names>S.</given-names></name> <etal/></person-group>. (<year>1997</year>). <article-title>Age-related differences in neural activity during memory encoding and retrieval: a positron emission tomography study</article-title>. <source>J. Neurosci.</source> <volume>17</volume>, <fpage>391</fpage>&#x02013;<lpage>400</lpage>. <pub-id pub-id-type="pmid">8987764</pub-id></citation>
</ref>
<ref id="B4">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Chaby</surname> <given-names>L.</given-names></name> <name><surname>George</surname> <given-names>N.</given-names></name> <name><surname>Renault</surname> <given-names>B.</given-names></name> <name><surname>Fiori</surname> <given-names>N.</given-names></name></person-group> (<year>2003</year>). <article-title>Age-related changes in brain responses to personally known faces: an event-related potential (ERP) study in humans</article-title>. <source>Neurosci. Lett.</source> <volume>349</volume>, <fpage>125</fpage>&#x02013;<lpage>129</lpage>. <pub-id pub-id-type="doi">10.1016/S0304-3940(03)00800-0</pub-id><pub-id pub-id-type="pmid">12946568</pub-id></citation>
</ref>
<ref id="B5">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Chaby</surname> <given-names>L.</given-names></name> <name><surname>Narme</surname> <given-names>P.</given-names></name> <name><surname>George</surname> <given-names>N.</given-names></name></person-group> (<year>2011</year>). <article-title>Older adults&#x00027; configural processing of faces: role of second-order information</article-title>. <source>Psychol. Aging</source> <volume>26</volume>, <fpage>71</fpage>&#x02013;<lpage>79</lpage>. <pub-id pub-id-type="doi">10.1037/a0020873</pub-id><pub-id pub-id-type="pmid">20973603</pub-id></citation>
</ref>
<ref id="B6">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Cheung</surname> <given-names>O. S.</given-names></name> <name><surname>Richler</surname> <given-names>J. J.</given-names></name> <name><surname>Palmeri</surname> <given-names>T. J.</given-names></name> <name><surname>Gauthier</surname> <given-names>I.</given-names></name></person-group> (<year>2008</year>). <article-title>Revisiting the role of spatial frequencies in the holistic processing of faces</article-title>. <source>J. Exp. Psychol. Hum. Percept. Perform.</source> <volume>34</volume>, <fpage>1327</fpage>&#x02013;<lpage>1336</lpage>. <pub-id pub-id-type="doi">10.1037/a0011752</pub-id><pub-id pub-id-type="pmid">19045978</pub-id></citation>
</ref>
<ref id="B7">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Crook</surname> <given-names>T. H.</given-names></name> <name><surname>Larrabee</surname> <given-names>G. J.</given-names></name></person-group> (<year>1992</year>). <article-title>Changes in face recognition memory across the adult life span</article-title>. <source>J. Gerontol.</source> <volume>47</volume>, <fpage>138</fpage>&#x02013;<lpage>141</lpage>.</citation>
</ref>
<ref id="B8">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Daniel</surname> <given-names>S.</given-names></name> <name><surname>Bentin</surname> <given-names>S.</given-names></name></person-group> (<year>2012</year>). <article-title>Age-related changes in processing faces from detection to identification: ERP evidence</article-title>. <source>Neurobiol. Aging</source> <volume>33</volume>, <fpage>206</fpage>.<fpage>e1</fpage>&#x02013;<lpage>e28</lpage>. <pub-id pub-id-type="doi">10.1016/j.neurobiolaging.2010.09.001</pub-id><pub-id pub-id-type="pmid">20961658</pub-id></citation>
</ref>
<ref id="B9">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>De Fockert</surname> <given-names>J. W.</given-names></name> <name><surname>Ramchurn</surname> <given-names>A.</given-names></name> <name><surname>van Velzen</surname> <given-names>J.</given-names></name> <name><surname>Bergstrom</surname> <given-names>Z.</given-names></name> <name><surname>Bunce</surname> <given-names>D.</given-names></name></person-group> (<year>2009</year>). <article-title>Behavioral and ERP evidence of greater distractor processing in old age</article-title>. <source>Brain Res.</source> <volume>1282</volume>, <fpage>67</fpage>&#x02013;<lpage>73</lpage>. <pub-id pub-id-type="doi">10.1016/j.brainres.2009.05.060</pub-id><pub-id pub-id-type="pmid">19497314</pub-id></citation>
</ref>
<ref id="B10">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Farah</surname> <given-names>M. J.</given-names></name> <name><surname>Wilson</surname> <given-names>K. D.</given-names></name> <name><surname>Drain</surname> <given-names>M.</given-names></name> <name><surname>Tanaka</surname> <given-names>J. N.</given-names></name></person-group> (<year>1998</year>). <article-title>What is &#x02018;special&#x02019; about face perception?</article-title> <source>Psychol. Rev.</source> <volume>105</volume>, <fpage>482</fpage>&#x02013;<lpage>498</lpage>. <pub-id pub-id-type="pmid">18375769</pub-id></citation>
</ref>
<ref id="B11">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Fink</surname> <given-names>G.</given-names></name> <name><surname>Halligan</surname> <given-names>P.</given-names></name> <name><surname>Marshall</surname> <given-names>J.</given-names></name> <name><surname>Frith</surname> <given-names>C.</given-names></name> <name><surname>Frackowiak</surname> <given-names>R.</given-names></name> <name><surname>Dolan</surname> <given-names>R.</given-names></name></person-group> (<year>1997</year>). <article-title>Neural mechanisms involved in the processing of global and local aspects of hierarchically organized visual stimuli</article-title>. <source>Brain</source> <volume>120</volume>, <fpage>1779</fpage>&#x02013;<lpage>1791</lpage>. <pub-id pub-id-type="pmid">9365370</pub-id></citation>
</ref>
<ref id="B12">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Folstein</surname> <given-names>M. F.</given-names></name> <name><surname>Folstein</surname> <given-names>S. E.</given-names></name> <name><surname>McHugh</surname> <given-names>P. R.</given-names></name></person-group> (<year>1975</year>). <article-title>Mini-mental state. A practical method for grading the cognitive state of patients for the clinician</article-title>. <source>J. Psychiatr. Res</source>. <volume>12</volume>, <fpage>189</fpage>&#x02013;<lpage>198</lpage>. <pub-id pub-id-type="pmid">1202204</pub-id></citation>
</ref>
<ref id="B13">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Fulton</surname> <given-names>A.</given-names></name> <name><surname>Bartlett</surname> <given-names>J. C.</given-names></name></person-group> (<year>1991</year>). <article-title>Young and old faces in young and old heads: the factor of age in face recognition</article-title>. <source>Psychol. Aging</source> <volume>6</volume>, <fpage>623</fpage>&#x02013;<lpage>630</lpage>. <pub-id pub-id-type="pmid">1777151</pub-id></citation>
</ref>
<ref id="B14">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Gao</surname> <given-names>Z.</given-names></name> <name><surname>Flevaris</surname> <given-names>A. V.</given-names></name> <name><surname>Robertson</surname> <given-names>L. C.</given-names></name> <name><surname>Bentin</surname> <given-names>S.</given-names></name></person-group> (<year>2011</year>). <article-title>Priming global and local processing of composite faces: revisiting the processing-bias effect on face perception</article-title>. <source>Atten. Percept. Psychophys.</source> <volume>73</volume>, <fpage>1477</fpage>&#x02013;<lpage>1486</lpage>. <pub-id pub-id-type="doi">10.3758/s13414-011-0109-7</pub-id><pub-id pub-id-type="pmid">21359683</pub-id></citation>
</ref>
<ref id="B15">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Gauthier</surname> <given-names>I.</given-names></name> <name><surname>Bukach</surname> <given-names>C.</given-names></name></person-group> (<year>2007</year>). <article-title>Should we reject the expertise hypothesis?</article-title> <source>Cognition</source> <volume>103</volume>, <fpage>322</fpage>&#x02013;<lpage>330</lpage>. <pub-id pub-id-type="doi">10.1016/j.cognition.2006.05.003</pub-id><pub-id pub-id-type="pmid">16780825</pub-id></citation>
</ref>
<ref id="B16">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Gauthier</surname> <given-names>I.</given-names></name> <name><surname>Curran</surname> <given-names>T.</given-names></name> <name><surname>Curby</surname> <given-names>K. M.</given-names></name> <name><surname>Collins</surname> <given-names>D.</given-names></name></person-group> (<year>2003</year>). <article-title>Perceptual interference supports a non-modular account of face processing</article-title>. <source>Nat. Neurosci.</source> <volume>6</volume>, <fpage>428</fpage>&#x02013;<lpage>432</lpage>. <pub-id pub-id-type="doi">10.1038/nn1029</pub-id><pub-id pub-id-type="pmid">12627167</pub-id></citation>
</ref>
<ref id="B17">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Gazzaley</surname> <given-names>A.</given-names></name> <name><surname>Clapp</surname> <given-names>W.</given-names></name> <name><surname>Kelley</surname> <given-names>J.</given-names></name> <name><surname>McEvoy</surname> <given-names>K.</given-names></name> <name><surname>Knight</surname> <given-names>R. T.</given-names></name> <name><surname>D&#x00027;Esposito</surname> <given-names>M.</given-names></name></person-group> (<year>2008</year>). <article-title>Age-related top-down suppression deficit in the early stages of cortical visual memory processing</article-title>. <source>Proc. Natl. Acad. Sci. U.S.A.</source> <volume>105</volume>, <fpage>13122</fpage>&#x02013;<lpage>13126</lpage>. <pub-id pub-id-type="doi">10.1073/pnas.0806074105</pub-id><pub-id pub-id-type="pmid">18765818</pub-id></citation>
</ref>
<ref id="B18">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Gazzaley</surname> <given-names>A.</given-names></name> <name><surname>Cooney</surname> <given-names>J. W.</given-names></name> <name><surname>McEvoy</surname> <given-names>K.</given-names></name> <name><surname>Knight</surname> <given-names>R. T.</given-names></name> <name><surname>D&#x00027;Esposito</surname> <given-names>M.</given-names></name></person-group> (<year>2005a</year>). <article-title>Top-down enhancement and suppression of the magnitude and speed of neural activity</article-title>. <source>J. Cogn. Neurosci.</source> <volume>17</volume>, <fpage>505</fpage>&#x02013;<lpage>517</lpage>. <pub-id pub-id-type="doi">10.1162/0898929053279522</pub-id><pub-id pub-id-type="pmid">15814009</pub-id></citation>
</ref>
<ref id="B19">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Gazzaley</surname> <given-names>A.</given-names></name> <name><surname>Cooney</surname> <given-names>J. W.</given-names></name> <name><surname>Rissman</surname> <given-names>J.</given-names></name> <name><surname>D&#x00027;Esposito</surname> <given-names>M.</given-names></name></person-group> (<year>2005b</year>). <article-title>Top-down suppression deficit underlies working memory impairment in normal aging</article-title>. <source>Nat. Neurosci.</source> <volume>8</volume>, <fpage>1298</fpage>&#x02013;<lpage>1300</lpage>. <pub-id pub-id-type="doi">10.1038/nn1543</pub-id><pub-id pub-id-type="pmid">16158065</pub-id></citation>
</ref>
<ref id="B20">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Geerligs</surname> <given-names>L.</given-names></name> <name><surname>Saliasi</surname> <given-names>E.</given-names></name> <name><surname>Maurits</surname> <given-names>N. M.</given-names></name> <name><surname>Renken</surname> <given-names>R. J.</given-names></name> <name><surname>Lorist</surname> <given-names>M. M.</given-names></name></person-group> (<year>2014</year>). <article-title>Brain mechanisms underlying the effects of aging on different aspects of selective attention</article-title>. <source>Neuroimage</source> <volume>91</volume>, <fpage>52</fpage>&#x02013;<lpage>62</lpage>. <pub-id pub-id-type="doi">10.1016/j.neuroimage.2014.01.029</pub-id><pub-id pub-id-type="pmid">24473095</pub-id></citation>
</ref>
<ref id="B21">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Georgiou-Karistianis</surname> <given-names>N.</given-names></name> <name><surname>Tang</surname> <given-names>J.</given-names></name> <name><surname>Mehmedbegovic</surname> <given-names>F.</given-names></name> <name><surname>Farrow</surname> <given-names>M.</given-names></name> <name><surname>Bradshaw</surname> <given-names>J.</given-names></name> <name><surname>Sheppard</surname> <given-names>D.</given-names></name></person-group> (<year>2006</year>). <article-title>Age-related differences in cognitive function using a global local hierarchical paradigm</article-title>. <source>Brain Res.</source> <volume>1124</volume>, <fpage>86</fpage>&#x02013;<lpage>95</lpage>. <pub-id pub-id-type="doi">10.1016/j.brainres.2006.09.070</pub-id><pub-id pub-id-type="pmid">17069772</pub-id></citation>
</ref>
<ref id="B22">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Germine</surname> <given-names>L. T.</given-names></name> <name><surname>Duchaine</surname> <given-names>B.</given-names></name> <name><surname>Nakayama</surname> <given-names>K.</given-names></name></person-group> (<year>2011</year>). <article-title>Where cognitive development and aging meet: face learning ability peaks after age 30</article-title>. <source>Cognition</source> <volume>118</volume>, <fpage>201</fpage>&#x02013;<lpage>210</lpage>. <pub-id pub-id-type="doi">10.1016/j.cognition.2010.11.002</pub-id><pub-id pub-id-type="pmid">21130422</pub-id></citation>
</ref>
<ref id="B23">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Goldman-Rakic</surname> <given-names>P. S.</given-names></name></person-group> (<year>1995</year>). <article-title>Cellular basis of working memory</article-title>. <source>Neuron</source> <volume>14</volume>, <fpage>477</fpage>&#x02013;<lpage>485</lpage>. <pub-id pub-id-type="pmid">7695894</pub-id></citation>
</ref>
<ref id="B24">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Haring</surname> <given-names>A. E.</given-names></name> <name><surname>Zhuravleva</surname> <given-names>T. Y.</given-names></name> <name><surname>Alperin</surname> <given-names>B. R.</given-names></name> <name><surname>Rentz</surname> <given-names>D. M.</given-names></name> <name><surname>Holcomb</surname> <given-names>P. J.</given-names></name> <name><surname>Daffner</surname> <given-names>K. R.</given-names></name></person-group> (<year>2013</year>). <article-title>Age-related differences in enhancement and suppression of neural activity underlying selective attention in matched young and old adults</article-title>. <source>Brain Res.</source> <volume>1499</volume>, <fpage>69</fpage>&#x02013;<lpage>79</lpage>. <pub-id pub-id-type="doi">10.1016/j.brainres.2013.01.003</pub-id><pub-id pub-id-type="pmid">23313874</pub-id></citation>
</ref>
<ref id="B25">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Hildebrandt</surname> <given-names>A.</given-names></name> <name><surname>Wilhelm</surname> <given-names>O.</given-names></name> <name><surname>Schmiedek</surname> <given-names>F.</given-names></name> <name><surname>Herzmann</surname> <given-names>G.</given-names></name> <name><surname>Sommer</surname> <given-names>W.</given-names></name></person-group> (<year>2011</year>). <article-title>On the specificity of face cognition compared with general cognitive functioning across adult age</article-title>. <source>Psychol. Aging</source> <volume>26</volume>, <fpage>701</fpage>&#x02013;<lpage>715</lpage>. <pub-id pub-id-type="doi">10.1037/a0023056</pub-id><pub-id pub-id-type="pmid">21480718</pub-id></citation>
</ref>
<ref id="B26">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Hildebrandt</surname> <given-names>A.</given-names></name> <name><surname>Sommer</surname> <given-names>W.</given-names></name> <name><surname>Herzmann</surname> <given-names>G.</given-names></name> <name><surname>Wilhelm</surname> <given-names>O.</given-names></name></person-group> (<year>2010</year>). <article-title>Structural invariance and age-related performance differences in face cognition</article-title>. <source>Psychol. Aging</source> <volume>25</volume>, <fpage>794</fpage>&#x02013;<lpage>810</lpage>. <pub-id pub-id-type="doi">10.1037/a0019774</pub-id><pub-id pub-id-type="pmid">20822255</pub-id></citation>
</ref>
<ref id="B27">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Konar</surname> <given-names>Y.</given-names></name> <name><surname>Bennett</surname> <given-names>P. J.</given-names></name> <name><surname>Sekuler</surname> <given-names>A. B.</given-names></name></person-group> (<year>2013</year>). <article-title>Effects of aging on face identification and holistic face processing</article-title>. <source>Vis. Res.</source> <volume>88</volume>, <fpage>38</fpage>&#x02013;<lpage>46</lpage>. <pub-id pub-id-type="doi">10.1016/j.visres.2013.06.003</pub-id><pub-id pub-id-type="pmid">23806271</pub-id></citation>
</ref>
<ref id="B28">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Lee</surname> <given-names>Y.</given-names></name> <name><surname>Smith</surname> <given-names>C. R.</given-names></name> <name><surname>Grady</surname> <given-names>C. L.</given-names></name> <name><surname>Hoang</surname> <given-names>N.</given-names></name> <name><surname>Moscovitch</surname> <given-names>M.</given-names></name></person-group> (<year>2014</year>). <article-title>Broadly tuned face representation in older adults assessed by categorical perception</article-title>. <source>J. Exp. Psychol. Hum. Percept. Perform.</source> <volume>40</volume>, <fpage>1060</fpage>&#x02013;<lpage>1071</lpage>. <pub-id pub-id-type="doi">10.1037/a0035710</pub-id><pub-id pub-id-type="pmid">24490946</pub-id></citation>
</ref>
<ref id="B29">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Macmillan</surname> <given-names>N. A.</given-names></name> <name><surname>Creelman</surname> <given-names>C. D.</given-names></name></person-group> (<year>2005</year>). <source>Detection Theory: A User's Guide,</source> <edition>2nd</edition> Edn. <publisher-loc>Mahwah, NJ</publisher-loc>: <publisher-name>Lawrence Erlbaum Associates</publisher-name></citation>
</ref>
<ref id="B30">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Maurer</surname> <given-names>D.</given-names></name> <name><surname>Le Grand</surname> <given-names>R.</given-names></name> <name><surname>Mondloch</surname> <given-names>C. J.</given-names></name></person-group> (<year>2002</year>). <article-title>The many faces of configural processing</article-title>. <source>Trends Cogn. Sci.</source> <volume>6</volume>, <fpage>255</fpage>&#x02013;<lpage>260</lpage>. <pub-id pub-id-type="doi">10.1016/S1364-6613(02)01903-4</pub-id><pub-id pub-id-type="pmid">12039607</pub-id></citation>
</ref>
<ref id="B31">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Meinhardt</surname> <given-names>G.</given-names></name> <name><surname>Persike</surname> <given-names>M.</given-names></name> <name><surname>Meinhardt-Injac</surname> <given-names>B.</given-names></name></person-group> (<year>2014</year>). <article-title>The complete design in the composite face paradigm: role of response bias, target certainty, and feedback</article-title>. <source>Front. Hum. Neurosci.</source> <volume>8</volume>:<issue>885</issue>. <pub-id pub-id-type="doi">10.3389/fnhum.2014.00885</pub-id><pub-id pub-id-type="pmid">25400573</pub-id></citation>
</ref>
<ref id="B32">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Meinhardt-Injac</surname> <given-names>B.</given-names></name> <name><surname>Persike</surname> <given-names>M.</given-names></name> <name><surname>Meinhardt</surname> <given-names>G.</given-names></name></person-group> (<year>2011</year>). <article-title>The time course of face matching for featural and relational image manipulations</article-title>. <source>Acta Psychol.</source> <volume>137</volume>, <fpage>48</fpage>&#x02013;<lpage>55</lpage>. <pub-id pub-id-type="doi">10.1016/j.actpsy.2011.02.005</pub-id><pub-id pub-id-type="pmid">21420660</pub-id></citation>
</ref>
<ref id="B33">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Meinhardt-Injac</surname> <given-names>B.</given-names></name> <name><surname>Persike</surname> <given-names>M.</given-names></name> <name><surname>Meinhardt</surname> <given-names>G.</given-names></name></person-group> (<year>2014a</year>). <article-title>Holistic face perception in young and older adults: effects of feedback and attentional demand</article-title>. <source>Front. Aging Neurosci.</source> <volume>6</volume>:<issue>291</issue>. <pub-id pub-id-type="doi">10.3389/fnagi.2014.00291</pub-id><pub-id pub-id-type="pmid">25386138</pub-id></citation>
</ref>
<ref id="B34">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Meinhardt-Injac</surname> <given-names>B.</given-names></name> <name><surname>Persike</surname> <given-names>M.</given-names></name> <name><surname>Meinhardt</surname> <given-names>G.</given-names></name></person-group> (<year>2014b</year>). <article-title>Holistic processing and reliance on global viewing strategies in older adults&#x00027; face perception</article-title>. <source>Acta Psychol.</source> <volume>151</volume>, <fpage>155</fpage>&#x02013;<lpage>163</lpage>. <pub-id pub-id-type="doi">10.1016/j.actpsy.2014.06.001</pub-id><pub-id pub-id-type="pmid">24977938</pub-id></citation>
</ref>
<ref id="B35">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Meinhardt-Injac</surname> <given-names>B.</given-names></name> <name><surname>Persike</surname> <given-names>M.</given-names></name> <name><surname>Meinhardt</surname> <given-names>G.</given-names></name></person-group> (<year>2015</year>). <article-title>The sensitivity to replacement and displacement of the eyes region in early adolescence, young and later adulthood</article-title>. <source>Front. Psychol.</source> <volume>6</volume>:<issue>1164</issue>. <pub-id pub-id-type="doi">10.3389/fpsyg.2015.01164</pub-id><pub-id pub-id-type="pmid">26321984</pub-id></citation>
</ref>
<ref id="B36">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Norton</surname> <given-names>D.</given-names></name> <name><surname>McBain</surname> <given-names>R.</given-names></name> <name><surname>Chen</surname> <given-names>Y.</given-names></name></person-group> (<year>2009</year>). <article-title>Reduced ability to detect facial configuration in middle-aged and elderly individuals: associations with spatiotemporal visual processing</article-title>. <source>J. Gerontol. Psychol. Sci.</source> <volume>64B</volume>, <fpage>328</fpage>&#x02013;<lpage>334</lpage>. <pub-id pub-id-type="doi">10.1093/geronb/gbp008</pub-id></citation>
</ref>
<ref id="B37">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Pfutze</surname> <given-names>E. M.</given-names></name> <name><surname>Sommer</surname> <given-names>W.</given-names></name> <name><surname>Schweinberger</surname> <given-names>S. R.</given-names></name></person-group> (<year>2002</year>). <article-title>Age-related slowing in face and name recognition: evidence from event-related brain potentials</article-title>. <source>Psychol. Aging</source> <volume>17</volume>, <fpage>140</fpage>&#x02013;<lpage>160</lpage>. <pub-id pub-id-type="doi">10.1037/0882-7974.17.1.140</pub-id><pub-id pub-id-type="pmid">11931282</pub-id></citation>
</ref>
<ref id="B38">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Prakash</surname> <given-names>R. S.</given-names></name> <name><surname>Erickson</surname> <given-names>K. I.</given-names></name> <name><surname>Colcombe</surname> <given-names>S. J.</given-names></name> <name><surname>Kim</surname> <given-names>J. S.</given-names></name> <name><surname>Voss</surname> <given-names>M. W.</given-names></name> <name><surname>Kramer</surname> <given-names>A. F.</given-names></name></person-group> (<year>2009</year>). <article-title>Age-related differences in the involvement of the prefrontal cortex in attentional control</article-title>. <source>Brain Cognit.</source> <volume>71</volume>, <fpage>328</fpage>&#x02013;<lpage>335</lpage>. <pub-id pub-id-type="doi">10.1016/j.bandc.2009.07.005</pub-id><pub-id pub-id-type="pmid">19699019</pub-id></citation>
</ref>
<ref id="B39">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Quigley</surname> <given-names>C.</given-names></name> <name><surname>Andersen</surname> <given-names>S.</given-names></name> <name><surname>Schulze</surname> <given-names>L.</given-names></name> <name><surname>Grunwald</surname> <given-names>M.</given-names></name> <name><surname>M&#x000FC;ller</surname> <given-names>M.</given-names></name></person-group> (<year>2010</year>). <article-title>Feature-selective attention: evidence for a decline in old age</article-title>. <source>Neurosci. Lett.</source> <volume>474</volume>, <fpage>5</fpage>&#x02013;<lpage>8</lpage>. <pub-id pub-id-type="doi">10.1016/j.neulet.2010.02.053</pub-id><pub-id pub-id-type="pmid">20219631</pub-id></citation>
</ref>
<ref id="B40">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Rajah</surname> <given-names>M. N.</given-names></name> <name><surname>D&#x00027;Esposito</surname> <given-names>M.</given-names></name></person-group> (<year>2005</year>). <article-title>Region-specific changes in prefrontal function with age: a review on PET and fMRI studies on working and episodic memory</article-title>. <source>Brain</source> <volume>128</volume>, <fpage>1964</fpage>&#x02013;<lpage>1983</lpage>. <pub-id pub-id-type="doi">10.1093/brain/awh608</pub-id><pub-id pub-id-type="pmid">16049041</pub-id></citation>
</ref>
<ref id="B41">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Richler</surname> <given-names>J. J.</given-names></name> <name><surname>Bukach</surname> <given-names>C. M.</given-names></name> <name><surname>Gauthier</surname> <given-names>I.</given-names></name></person-group> (<year>2009a</year>). <article-title>Context influences holistic processing of nonface objects in the composite task</article-title>. <source>Atten. Percept. Psychophys.</source> <volume>71</volume>, <fpage>530</fpage>&#x02013;<lpage>540</lpage>. <pub-id pub-id-type="doi">10.3758/APP.71.3.530</pub-id><pub-id pub-id-type="pmid">19304644</pub-id></citation>
</ref>
<ref id="B42">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Richler</surname> <given-names>J. J.</given-names></name> <name><surname>Gauthier</surname> <given-names>I.</given-names></name></person-group> (<year>2014</year>). <article-title>A meta-analysis and review of holistic face processing</article-title>. <source>Psychol. Bull.</source> <volume>140</volume>, <fpage>1281</fpage>&#x02013;<lpage>1302</lpage>. <pub-id pub-id-type="doi">10.1037/a0037004</pub-id><pub-id pub-id-type="pmid">24956123</pub-id></citation>
</ref>
<ref id="B43">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Richler</surname> <given-names>J. J.</given-names></name> <name><surname>Mack</surname> <given-names>M. L.</given-names></name> <name><surname>Gauthier</surname> <given-names>I.</given-names></name> <name><surname>Palmeri</surname> <given-names>T. J.</given-names></name></person-group> (<year>2009b</year>). <article-title>Holistic processing happens at a glance</article-title>. <source>Vis. Res.</source> <volume>49</volume>, <fpage>2856</fpage>&#x02013;<lpage>2861</lpage>. <pub-id pub-id-type="doi">10.1016/j.visres.2009.08.025</pub-id><pub-id pub-id-type="pmid">19716376</pub-id></citation>
</ref>
<ref id="B44">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Richler</surname> <given-names>J. J.</given-names></name> <name><surname>Mack</surname> <given-names>M. L.</given-names></name> <name><surname>Palmeri</surname> <given-names>T. J.</given-names></name> <name><surname>Gauthier</surname> <given-names>I.</given-names></name></person-group> (<year>2011</year>). <article-title>Inverted faces are eventually processed holistically</article-title>. <source>Vis. Res.</source> <volume>51</volume>, <fpage>333</fpage>&#x02013;<lpage>342</lpage>. <pub-id pub-id-type="doi">10.1016/j.visres.2010.11.014</pub-id><pub-id pub-id-type="pmid">21130798</pub-id></citation>
</ref>
<ref id="B45">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Richler</surname> <given-names>J. J.</given-names></name> <name><surname>Tanaka</surname> <given-names>J. W.</given-names></name> <name><surname>Brown</surname> <given-names>D. D.</given-names></name> <name><surname>Gauthier</surname> <given-names>I.</given-names></name></person-group> (<year>2008</year>). <article-title>Why does selective attention to parts fail in face processing?</article-title> <source>J. Exp. Psychol. Learn. Mem. Cognit.</source> <volume>34</volume>, <fpage>1356</fpage>&#x02013;<lpage>1368</lpage>. <pub-id pub-id-type="doi">10.1037/a0013080</pub-id><pub-id pub-id-type="pmid">18980400</pub-id></citation>
</ref>
<ref id="B46">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Rossion</surname> <given-names>B.</given-names></name></person-group> (<year>2008</year>). <article-title>Picture-plane inversion leads to qualitative changes of face perception</article-title>. <source>Acta Psychol.</source> <volume>128</volume>, <fpage>274</fpage>&#x02013;<lpage>289</lpage>. <pub-id pub-id-type="doi">10.1016/j.actpsy.2008.02.003</pub-id><pub-id pub-id-type="pmid">18396260</pub-id></citation>
</ref>
<ref id="B47">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Salthouse</surname> <given-names>T. A.</given-names></name></person-group> (<year>1996</year>). <article-title>The processing-speed theory of adult age differences in cognition</article-title>. <source>Psychol. Rev.</source> <volume>103</volume>, <fpage>403</fpage>&#x02013;<lpage>428</lpage>. <pub-id pub-id-type="pmid">8759042</pub-id></citation>
</ref>
<ref id="B48">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Schmitz</surname> <given-names>T.</given-names></name> <name><surname>Cheng</surname> <given-names>F.</given-names></name> <name><surname>De Rosa</surname> <given-names>E.</given-names></name></person-group> (<year>2010</year>). <article-title>Failing to ignore: paradoxical neural effects of perceptual load on early attentional selection in normal aging</article-title>. <source>J. Neurosci.</source> <volume>30</volume>, <fpage>14750</fpage>&#x02013;<lpage>14758</lpage>. <pub-id pub-id-type="doi">10.1523/JNEUROSCI.2687-10.2010</pub-id><pub-id pub-id-type="pmid">21048134</pub-id></citation>
</ref>
<ref id="B49">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Schwarzer</surname> <given-names>G.</given-names></name> <name><surname>Kretzer</surname> <given-names>M.</given-names></name> <name><surname>Wimmer</surname> <given-names>D.</given-names></name> <name><surname>Jovanovic</surname> <given-names>B.</given-names></name></person-group> (<year>2010</year>). <article-title>Holistic face processing among school children, younger and older adults</article-title>. <source>Eur. J. Dev. Psychol.</source> <volume>7</volume>, <fpage>511</fpage>&#x02013;<lpage>528</lpage>. <pub-id pub-id-type="doi">10.1080/17405620903003697</pub-id></citation>
</ref>
<ref id="B50">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Searcy</surname> <given-names>J. H.</given-names></name> <name><surname>Bartlett</surname> <given-names>J. C.</given-names></name> <name><surname>Memon</surname> <given-names>A.</given-names></name></person-group> (<year>1999</year>). <article-title>Age differences in accuracy and choosing in eyewitness identification and face recognition</article-title>. <source>Mem. Cognit.</source> <volume>27</volume>, <fpage>538</fpage>&#x02013;<lpage>552</lpage>. <pub-id pub-id-type="doi">10.3758/BF03211547</pub-id><pub-id pub-id-type="pmid">10355242</pub-id></citation>
</ref>
<ref id="B51">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Sekuler</surname> <given-names>R.</given-names></name> <name><surname>Sekuler</surname> <given-names>A. B.</given-names></name></person-group> (<year>2000</year>). <article-title>Visual perception and cognition</article-title>, in <source>Oxford Textbook of Geriatric Medicine, 2nd Edn.</source>, eds <person-group person-group-type="editor"><name><surname>Evans</surname> <given-names>J. G.</given-names></name> <name><surname>Williams</surname> <given-names>T. F.</given-names></name> <name><surname>Beattie</surname> <given-names>B. L.</given-names></name> <name><surname>Michel</surname> <given-names>J.-P.</given-names></name> <name><surname>Wilcock</surname> <given-names>G. K.</given-names></name></person-group> (<publisher-loc>New York, NY</publisher-loc>: <publisher-name>Oxford University Press</publisher-name>), <fpage>874</fpage>&#x02013;<lpage>880</lpage>.</citation>
</ref>
<ref id="B52">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Smith</surname> <given-names>M. L.</given-names></name> <name><surname>Cottrell</surname> <given-names>G. W.</given-names></name> <name><surname>Gosselin</surname> <given-names>F.</given-names></name> <name><surname>Schyns</surname> <given-names>P. G.</given-names></name></person-group> (<year>2005</year>). <article-title>Transmitting and decoding facial expressions</article-title>. <source>Psychol. Sci.</source> <volume>16</volume>, <fpage>184</fpage>&#x02013;<lpage>189</lpage>. <pub-id pub-id-type="doi">10.1111/j.0956-7976.2005.00801.x</pub-id><pub-id pub-id-type="pmid">15733197</pub-id></citation>
</ref>
<ref id="B53">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Sullivan</surname> <given-names>S.</given-names></name> <name><surname>Ruffman</surname> <given-names>T.</given-names></name></person-group> (<year>2004</year>). <article-title>Emotion recognition deficits in the elderly</article-title>. <source>Int. J. Neurosci.</source> <volume>114</volume>, <fpage>403</fpage>&#x02013;<lpage>432</lpage>. <pub-id pub-id-type="doi">10.1080/00207450490270901</pub-id><pub-id pub-id-type="pmid">14754664</pub-id></citation>
</ref>
<ref id="B54">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Sullivan</surname> <given-names>S.</given-names></name> <name><surname>Ruffman</surname> <given-names>T.</given-names></name> <name><surname>Hutton</surname> <given-names>S. B.</given-names></name></person-group> (<year>2007</year>). <article-title>Age differences in emotion recognition skills and the visual scanning of emotion faces</article-title>. <source>J. Gerontol. Psychol. Sci.</source> <volume>62B</volume>, <fpage>53</fpage>&#x02013;<lpage>60</lpage>. <pub-id pub-id-type="doi">10.1093/geronb/62.1.P53</pub-id></citation>
</ref>
<ref id="B55">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>West</surname> <given-names>R.</given-names></name></person-group> (<year>1996</year>). <article-title>An application of prefrontal cortex function theory to cognitive aging</article-title>. <source>Psychol. Bull.</source> <volume>120</volume>, <fpage>272</fpage>&#x02013;<lpage>292</lpage>. <pub-id pub-id-type="doi">10.1037/0033-2909.120.2.272</pub-id><pub-id pub-id-type="pmid">8831298</pub-id></citation>
</ref>
</ref-list>
</back>
</article>