<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<!DOCTYPE article PUBLIC "-//NLM//DTD Journal Publishing DTD v2.3 20070202//EN" "journalpublishing.dtd">
<article xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink" article-type="research-article">
<front>
<journal-meta>
<journal-id journal-id-type="publisher-id">Front. Psychol.</journal-id>
<journal-title>Frontiers in Psychology</journal-title>
<abbrev-journal-title abbrev-type="pubmed">Front. Psychol.</abbrev-journal-title>
<issn pub-type="epub">1664-1078</issn>
<publisher>
<publisher-name>Frontiers Media S.A.</publisher-name>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="doi">10.3389/fpsyg.2015.01937</article-id>
<article-categories>
<subj-group subj-group-type="heading">
<subject>Psychology</subject>
<subj-group>
<subject>Original Research</subject>
</subj-group>
</subj-group>
</article-categories>
<title-group>
<article-title>The Change of Expression Configuration Affects Identity-Dependent Expression Aftereffect but Not Identity-Independent Expression Aftereffect</article-title>
</title-group>
<contrib-group>
<contrib contrib-type="author" corresp="yes">
<name><surname>Song</surname> <given-names>Miao</given-names></name>
<xref ref-type="aff" rid="aff1"><sup>1</sup></xref>
<xref ref-type="aff" rid="aff2"><sup>2</sup></xref>
<xref ref-type="author-notes" rid="fn001"><sup>&#x0002A;</sup></xref>
<uri xlink:href="http://loop.frontiersin.org/people/183934/overview"/>
</contrib>
<contrib contrib-type="author">
<name><surname>Shinomori</surname> <given-names>Keizo</given-names></name>
<xref ref-type="aff" rid="aff2"><sup>2</sup></xref>
<uri xlink:href="http://loop.frontiersin.org/people/202232/overview"/>
</contrib>
<contrib contrib-type="author">
<name><surname>Qian</surname> <given-names>Qian</given-names></name>
<xref ref-type="aff" rid="aff3"><sup>3</sup></xref>
</contrib>
<contrib contrib-type="author">
<name><surname>Yin</surname> <given-names>Jun</given-names></name>
<xref ref-type="aff" rid="aff1"><sup>1</sup></xref>
</contrib>
<contrib contrib-type="author">
<name><surname>Zeng</surname> <given-names>Weiming</given-names></name>
<xref ref-type="aff" rid="aff1"><sup>1</sup></xref>
</contrib>
</contrib-group>
<aff id="aff1"><sup>1</sup><institution>College of Information Engineering, Shanghai Maritime University</institution> <country>Shanghai, China</country></aff>
<aff id="aff2"><sup>2</sup><institution>School of Information, Kochi University of Technology</institution> <country>Kochi, Japan</country></aff>
<aff id="aff3"><sup>3</sup><institution>Yunnan Key Laboratory of Computer Technology Applications, Kunming University of Science and Technology</institution> <country>Kunming, China</country></aff>
<author-notes>
<fn fn-type="edited-by"><p>Edited by: Wenfeng Chen, Institute of Psychology, Chinese Academy of Sciences, China</p></fn>
<fn fn-type="edited-by"><p>Reviewed by: Marianne Latinus, Aix Marseille Universit&#x000E9;, France; Jan Van den Stock, KU Leuven, Belgium</p></fn>
<fn fn-type="corresp" id="fn001"><p>&#x0002A;Correspondence: Miao Song <email>songmiaolm&#x00040;gmail.com</email></p></fn>
<fn fn-type="other" id="fn002"><p>This article was submitted to Emotion Science, a section of the journal Frontiers in Psychology</p></fn> 
</author-notes>
<pub-date pub-type="epub">
<day>22</day>
<month>12</month>
<year>2015</year>
</pub-date>
<pub-date pub-type="collection">
<year>2015</year>
</pub-date>
<volume>6</volume>
<elocation-id>1937</elocation-id>
<history>
<date date-type="received">
<day>03</day>
<month>01</month>
<year>2015</year>
</date>
<date date-type="accepted">
<day>02</day>
<month>12</month>
<year>2015</year>
</date>
</history>
<permissions>
<copyright-statement>Copyright &#x000A9; 2015 Song, Shinomori, Qian, Yin and Zeng.</copyright-statement>
<copyright-year>2015</copyright-year>
<copyright-holder>Song, Shinomori, Qian, Yin and Zeng</copyright-holder>
<license xlink:href="http://creativecommons.org/licenses/by/4.0/"><p>This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.</p></license>
</permissions>
<abstract><p>The present study examined the influence of expression configuration on the cross-identity expression aftereffect. Expression configuration refers to the spatial arrangement of facial features in a face conveying an emotion, e.g., an open-mouth smile vs. a closed-mouth smile. In the first of two experiments, the expression aftereffect was measured using a cross-identity/cross-expression-configuration factorial design. The facial identities of the test faces were the same as or different from the adaptor, while, orthogonally, the expression configurations on those facial identities were also the same or different. The results showed that a change of expression configuration impaired the expression aftereffect when the facial identities of the adaptor and tests were the same; however, this impairment disappeared when the facial identities were different, indicating that the identity-independent expression representation is more robust to changes in expression configuration than the identity-dependent expression representation. In the second experiment, we used schematic line faces as adaptors and real faces as tests to minimize the similarity between the adaptor and tests, which is expected to exclude the contribution of the identity-dependent expression representation to the expression aftereffect. The second experiment yielded a result similar to the identity-independent expression aftereffect observed in Experiment 1. The findings indicate different neural sensitivities to expression configuration for the identity-dependent and identity-independent expression systems.</p></abstract>
<kwd-group>
<kwd>facial expression</kwd>
<kwd>adaptation</kwd>
<kwd>aftereffect</kwd>
<kwd>visual representation</kwd>
<kwd>vision</kwd>
</kwd-group>
<counts>
<fig-count count="6"/>
<table-count count="1"/>
<equation-count count="1"/>
<ref-count count="66"/>
<page-count count="12"/>
<word-count count="9245"/>
</counts>
</article-meta>
</front>
<body>
<sec sec-type="intro" id="s1">
<title>Introduction</title>
<p>One key issue in face research is to understand how emotional expression is represented in the human visual system. According to the classical cognitive model (Bruce and Young, <xref ref-type="bibr" rid="B5">1986</xref>) and neural model (Haxby et al., <xref ref-type="bibr" rid="B32">2000</xref>), emotional expression is considered to be represented and processed independently of facial identity. This view is supported by several lines of evidence. First, neuropathological studies have reported that some brain-injured patients exhibit selective impairments in the ability to recognize expression but not identity, or vice versa (e.g., Bruyer et al., <xref ref-type="bibr" rid="B7">1983</xref>; Young et al., <xref ref-type="bibr" rid="B65">1993</xref>; Palermo et al., <xref ref-type="bibr" rid="B44">2011</xref>), indicating a double dissociation between the representations of identity and expression. Second, early behavioral studies found no difference in reaction times when making expression-matching decisions on familiar and unfamiliar faces (e.g., Bruce and Young, <xref ref-type="bibr" rid="B5">1986</xref>; Ellis et al., <xref ref-type="bibr" rid="B20">1990</xref>), suggesting that expression processing does not depend on the familiarity of facial identity. Finally, single-unit recording and fMRI studies have found that expression may be processed in the superior temporal sulcus, located in lateral occipitotemporal cortex, whereas the processing of facial identity preferentially involves the fusiform face area (FFA), located in inferior occipitotemporal cortex (e.g., Hasselmo et al., <xref ref-type="bibr" rid="B31">1989</xref>; Haxby et al., <xref ref-type="bibr" rid="B32">2000</xref>; Winston et al., <xref ref-type="bibr" rid="B61">2004</xref>).</p>
<p>Despite substantial evidence supporting independent representation of expression, a growing number of studies suggest that expression representation is related to facial identity. Using Garner&#x00027;s speeded-classification task, variation in identity was found to interfere with the performance of expression classification (Schweinberger and Soukup, <xref ref-type="bibr" rid="B51">1998</xref>; Schweinberger et al., <xref ref-type="bibr" rid="B50">1999</xref>) or vice versa (Ganel and Goshen-Gottstein, <xref ref-type="bibr" rid="B23">2004</xref>; for a recent further discussion, see Wang et al., <xref ref-type="bibr" rid="B58">2013</xref>). On the other hand, a happy expression was reported to improve the familiarity ratings of faces (Baudouin et al., <xref ref-type="bibr" rid="B2">2000</xref>) and the speed of identity recognition (e.g., Kaufmann and Schweinberger, <xref ref-type="bibr" rid="B37">2004</xref>). Furthermore, Chen et al. (<xref ref-type="bibr" rid="B11">2011</xref>) compared the effects of all basic facial expressions on identity recognition and observed different reaction times when faces were matched under different expression conditions, indicating that expression modulates the processing of facial identity. In fact, besides emotional expression in the face, body expressions and emotional cues in the background scene can also affect the processing of facial identity (Van den Stock and de Gelder, <xref ref-type="bibr" rid="B55">2012</xref>, <xref ref-type="bibr" rid="B56">2014</xref>), suggesting a general interaction mechanism underlying the processing of emotion and facial identity. In support of these behavioral studies, single-cell studies have identified certain cells in the monkey STS (e.g., Sugase et al., <xref ref-type="bibr" rid="B53">1999</xref>) and amygdala (e.g., Gothard et al., <xref ref-type="bibr" rid="B27">2007</xref>) that are responsive to both identity and expression stimuli, indicating that the representations of emotional expression and facial identity may overlap.</p>
<p>In fact, concerning the relationship between expression and identity representations, besides the radical view of complete independence and the opposing view of complete dependence, there is a third view: there are two types of expression representations, one dependent on facial identity and the other independent of it. In a pioneering study, Fox and Barton (<xref ref-type="bibr" rid="B22">2007</xref>) provided evidence for this view of partial dependence using cross-identity expression adaptation. Adaptation refers to the phenomenon in which prolonged exposure to a given stimulus results in a subsequent perceptual bias (i.e., an aftereffect). For instance, after viewing a unidirectionally moving stimulus, an observer will perceive a subsequent stationary stimulus as moving in the opposite direction. As visual adaptation can reflect short-term responsiveness changes in the human neural system, it has been widely used to study the processing of low-level visual properties (for review see Durgin and Proffitt, <xref ref-type="bibr" rid="B15">1996</xref>; Anstis et al., <xref ref-type="bibr" rid="B1">1998</xref>; Clifford, <xref ref-type="bibr" rid="B12">2002</xref>) and of complex visual stimuli, such as various facial dimensions (e.g., Leopold et al., <xref ref-type="bibr" rid="B39">2001</xref>; Zhao and Chubb, <xref ref-type="bibr" rid="B66">2001</xref>; Rhodes et al., <xref ref-type="bibr" rid="B48">2003</xref>; Hsu and Young, <xref ref-type="bibr" rid="B33">2004</xref>; Yamashita et al., <xref ref-type="bibr" rid="B64">2005</xref>; for review see Webster et al., <xref ref-type="bibr" rid="B59">2004</xref>; Webster and MacLeod, <xref ref-type="bibr" rid="B60">2011</xref>). As far as expression adaptation is concerned, within a series of ambiguous expression images morphing between two expressions (e.g., happy and angry faces), adaptation to one expression (e.g., a happy face) biases observers&#x00027; perception toward perceiving the ambiguous images as the other expression (i.e., an angry expression). The magnitude of the expression aftereffect can be indexed by the difference in subjects&#x00027; response proportions before and after adaptation.</p>
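<p>The aftereffect index described above can be sketched in a few lines of code. The following Python snippet is purely illustrative (it is not the authors&#x00027; analysis code, and all response data in it are hypothetical): it computes the proportion of trials on which ambiguous morphs are judged as one expression, before and after adaptation, and takes the mean shift as the aftereffect magnitude.</p>

```python
# Illustrative sketch of the aftereffect index: the change in the
# proportion of "angry" judgments of ambiguous morphs after adapting
# to a happy face. All response data below are hypothetical.

def response_proportions(responses):
    """Proportion of 'angry' judgments at each morph level.

    responses: dict mapping morph level (% angry) -> list of 0/1
    judgments (1 = judged 'angry', 0 = judged 'happy').
    """
    return {level: sum(r) / len(r) for level, r in responses.items()}

def aftereffect_magnitude(baseline, adapted):
    """Mean shift in 'angry' response proportion across morph levels."""
    levels = sorted(baseline)
    return sum(adapted[lv] - baseline[lv] for lv in levels) / len(levels)

# Hypothetical judgments at three ambiguous morph levels:
baseline = {40: [0, 0, 1, 0], 50: [0, 1, 1, 0], 60: [1, 1, 1, 0]}
adapted = {40: [0, 1, 1, 1], 50: [1, 1, 1, 0], 60: [1, 1, 1, 1]}

base_p = response_proportions(baseline)   # {40: 0.25, 50: 0.5, 60: 0.75}
adapt_p = response_proportions(adapted)   # {40: 0.75, 50: 0.75, 60: 1.0}
print(aftereffect_magnitude(base_p, adapt_p))
```

A positive magnitude indicates the perceptual bias toward the non-adapted expression that defines the aftereffect.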
<p>In Fox and Barton&#x00027;s adaptation experiments, the expression aftereffect was measured using adaptors and tests of both the same and different identities. The expression aftereffect was strongest when the identities of the adaptor and test faces were the same; it was still present, but much reduced, when the adaptor and test faces differed in facial identity. These results were interpreted as evidence that there are two different neural representations in the expression system, i.e., identity-dependent and identity-independent expression representations. If the adaptor and test faces are congruent, both identity-dependent and identity-independent expression representations are adapted and contribute to the expression aftereffect; if the adaptor and test faces are incongruent, only the identity-independent expression representation contributes to the expression aftereffect. This explains why the expression aftereffect in the incongruent condition is much weaker than that in the congruent condition.</p>
<p>In support of Fox and Barton, several subsequent studies replicated this cross-identity expression adaptation with a variety of experimental paradigms and stimuli (Ellamil et al., <xref ref-type="bibr" rid="B19">2008</xref>; Campbell and Burke, <xref ref-type="bibr" rid="B10">2009</xref>; Vida and Mondloch, <xref ref-type="bibr" rid="B57">2009</xref>; Pell and Richards, <xref ref-type="bibr" rid="B45">2013</xref>). The cross-identity expression aftereffect was found to occur generally across five basic expressions with approximately the same extent of transfer, suggesting that different expression representations depend on identity in a similar way (Campbell and Burke, <xref ref-type="bibr" rid="B10">2009</xref>). A recent study further reported that overlapping expression representations (e.g., the representation of the emotional features shared by disgusted and angry faces) are also identity-dependent: adaptation to a disgusted face biased perception away from an angry face, and this cross-emotion expression adaptation generalized across identity (Pell and Richards, <xref ref-type="bibr" rid="B45">2013</xref>). Additionally, Vida and Mondloch (<xref ref-type="bibr" rid="B57">2009</xref>) showed that children (5&#x02013;9-year-olds) display an adult-like transfer of the expression aftereffect across identities.</p>
<p>The above studies provided strong evidence for the existence of identity-dependent and identity-independent expression representations; little is known, however, about the nature of these two types of expression representations, especially about their sensitivity to expression configuration. Expression can be viewed as stereotyped geometrical variations in facial configuration that correspond to well-defined action patterns (Webster and MacLeod, <xref ref-type="bibr" rid="B60">2011</xref>). Although there is only a small set of six basic expressions, a single expression can be conveyed through a variety of expression configurations within an emotion category, carrying subtle emotional information (Rozin et al., <xref ref-type="bibr" rid="B49">1994</xref>). For instance, humans can open their lips and show their teeth to express a high-intensity smile, or they can simply crinkle the corners of their eyes to express a weak-intensity smile. So far, relatively little is known about the sensitivity of the identity-dependent and identity-independent expression representations to expression configuration; addressing this issue, however, would provide a better understanding of the processing mechanisms of the human expression system, as well as of the functional difference between the identity-dependent and identity-independent expression systems.</p>
<p>The expression aftereffect is an effective measure for assessing the sensitivity of the expression system to expression configuration. Fox and Barton (<xref ref-type="bibr" rid="B22">2007</xref>) showed that the magnitude of the expression aftereffect was largely unaffected when the adaptor was changed to a picture of the same expression in the same individual that differed from the image used to create the morphed test stimuli, indicating that the expression aftereffect is, to some extent, insensitive to subtle changes in expression configuration. However, different images of the same expression posed by the same person are still highly similar, so this finding cannot fully exclude an influence of expression configuration on expression aftereffects. In contrast, several other studies have suggested that the expression aftereffect depends on expression configuration. Butler et al. (<xref ref-type="bibr" rid="B8">2008</xref>) showed that the expression aftereffect can be produced by emotional facial features, but not by the same emotional features in a hybrid face with an inconsistent expression configuration. On the other hand, Skinner and Benton (<xref ref-type="bibr" rid="B52">2010</xref>) reported that adaptation to an anti-expression, which is defined by the physically complementary expression configuration, induces a significant expression aftereffect on the original emotional expression matched to the anti-expression (see also Cook et al., <xref ref-type="bibr" rid="B13">2011</xref>; Juricevic and Webster, <xref ref-type="bibr" rid="B35">2012</xref>). As the anti-expression does not convey emotional information, this observation also indicates that the expression aftereffect is closely related to the spatial configuration of expression.</p>
<p>Although previous studies have provided clues about how expression configuration influences the expression aftereffect, no study has directly examined this issue within a framework comprising identity-dependent and identity-independent components. This is important because the identity-dependent and identity-independent expression representations may reside in different brain areas (Fox and Barton, <xref ref-type="bibr" rid="B22">2007</xref>) and have different sensitivities to expression configuration (Harris et al., <xref ref-type="bibr" rid="B29">2012</xref>, <xref ref-type="bibr" rid="B30">2014</xref>). Morris et al. (<xref ref-type="bibr" rid="B42">1996</xref>) and Thielscher and Pessoa (<xref ref-type="bibr" rid="B54">2007</xref>) previously reported that responses in the amygdala can be modulated by changes in the emotional intensity of expression images, suggesting that the amygdala contains neuron populations tuned to expression configuration. In contrast, recent studies found that the amygdala is relatively robust to expression changes within the same emotion category compared with the posterior superior temporal sulcus (pSTS), using static expression images (Harris et al., <xref ref-type="bibr" rid="B29">2012</xref>) or dynamic movies (Harris et al., <xref ref-type="bibr" rid="B30">2014</xref>) as stimuli. Although the results of these studies are not completely consistent, the current evidence indicates that different brain areas may have different neural sensitivities to expression configuration.</p>
<p>In the current study, we used cross-identity expression adaptation (Fox and Barton, <xref ref-type="bibr" rid="B22">2007</xref>) to investigate the sensitivities of the identity-dependent and identity-independent expression representations, respectively. The experimental logic of adaptation is as follows. Given that adaptation through repetition of the identical stimulus reduces the neural response and induces the maximal aftereffect, a reduction of the aftereffect should be observed when a particular stimulus dimension is changed, provided that the underlying neural system codes that dimension. The aftereffect becomes weaker because the altered stimulus activates a new, non-adapted neural representation. In contrast, the aftereffect should remain the same if the underlying neural system is insensitive to differences along the altered stimulus dimension (Grill-Spector and Malach, <xref ref-type="bibr" rid="B28">2001</xref>). As far as our research question is concerned, if the identity-dependent neural representation is sensitive to expression configuration, the identity-dependent expression aftereffect should be reduced when the adaptor is changed to an expression configuration different from that of the tests; if the identity-dependent neural representation does not code expression configuration, the expression aftereffect should be robust to changes in expression configuration. The logic for the identity-independent expression representation is analogous.</p>
<p>Two experiments were performed in this study. In Experiment 1, we measured the expression aftereffect in an orthogonal 2 &#x000D7; 2 design, manipulating whether the adaptor and test faces exhibited the same or different facial identities and, independently, whether the expression configurations on those identities were the same or different. Experiment 1 thus included the following 2 &#x000D7; 2 combinations (same or different facial identity and expression configuration between adaptor and test faces): same identity/same configuration, same identity/different configuration, different identity/same configuration, and different identity/different configuration. Given this factorial design, if the identity-dependent and identity-independent expression representations have similar neural sensitivities to expression configuration, a change in expression configuration should have a similar effect on the expression aftereffect regardless of the identities of adaptors and tests. In other words, there should be no significant interaction between facial identity and expression configuration. On the contrary, if the sensitivities of these two expression representations differ, an interaction is expected. More specifically, if the identity-dependent expression representation is sensitive to expression configuration, the expression aftereffect in the same identity/different configuration condition should be weaker than that in the same identity/same configuration condition; otherwise, the aftereffect in the two conditions should be approximately the same. Similarly, if the identity-independent expression representation is sensitive to expression configuration, the expression aftereffect in the different identity/different configuration condition is expected to be weaker than that in the different identity/same configuration condition; otherwise, we expect to observe approximately the same aftereffect in these two conditions.</p>
<p>In Experiment 2, we repeated the different identity/different configuration condition using a schematic line face as the adaptor and real faces as tests, which minimized the similarity in facial features between the adaptor and tests. With these dissimilar stimuli, contributions from the identity-dependent expression representation are expected to be excluded, so that the effect of expression configuration on the identity-independent expression representation can be accurately evaluated.</p>
</sec>
<sec id="s2">
<title>Experiment 1</title>
<p>The expression aftereffect was measured in the following way. We created morph series using two expression images of the same person. The images in the middle range of a morph series thus showed a recognizable identity but an ambiguous expression cue. After adapting to one expression, the subject was instructed to judge which expression the ambiguous morphed face resembled. Generally, adaptation to one expression increased the probability that the ambiguous expression was judged as the other expression.</p>
<p>Experiment 1 included the four adapting conditions described in the Introduction, in which the test faces were always the same across conditions, while the identity and/or the expression configuration of the adaptor were manipulated to create the different adapting conditions. The four conditions are termed SI/SC, SI/DC, DI/SC, and DI/DC, where the first two letters indicate whether the adaptor and test face share the same identity (SI: Same Identity) or not (DI: Different Identity), and the last two letters indicate whether the expression configuration of the adaptor is the same as that of the test face (SC: Same Configuration) or not (DC: Different Configuration). For instance, SI/SC refers to the condition in which both the identity and the expression configuration of the adaptor are identical to those of the test face, whereas DI/DC means that the adaptor differs from the test face in both identity and expression configuration.</p>
<sec>
<title>Subjects</title>
<p>The subjects were sixteen paid students (five from Kochi University of Technology and eleven from Shanghai Maritime University; mean age: 19.6, <italic>SD</italic> &#x0003D; 2.8), and each subject completed eight 40-min sessions. All subjects had normal or corrected-to-normal vision and were na&#x000EF;ve to the purpose of the experiment. Naivety was confirmed during the debriefing that took place once they had finished. The protocol of the two experiments was approved by the review boards of the Chinese Ethics Committee of Registering Clinical Trials, and informed consent was obtained in accordance with the principles of the Declaration of Helsinki.</p>
</sec>
<sec>
<title>Stimuli and apparatus</title>
<p>The face stimuli used in Experiment 1 were selected from the affiliated image set of the Facial Action Coding System (Ekman et al., <xref ref-type="bibr" rid="B17">2002</xref>) and the Cohn-Kanade AU-Coded Facial Expression Database (Kanade et al., <xref ref-type="bibr" rid="B36">2000</xref>). The expression images in the two databases are coded with the Facial Action Coding System (Ekman and Oster, <xref ref-type="bibr" rid="B18">1979</xref>; Ekman et al., <xref ref-type="bibr" rid="B17">2002</xref>) and given an emotion label, which enabled us to select the same expression configuration for different photographic subjects in terms of their emotion labels.</p>
<p>Happy, angry, surprised, and disgusted expressions were used in Experiment 1, constituting two expression pairs, i.e., the happy-angry and surprised-disgusted pairs. Taking the surprised-disgusted pair as an example, we selected two female photographic subjects (F01 and F02) depicting two expression configurations (C01 and C02) of the surprised and disgusted expressions, resulting in four combinations of identity and expression configuration (F01 with C01, F01 with C02, F02 with C01, and F02 with C02) (see Figure <xref ref-type="fig" rid="F1">1A</xref>). These photographic subjects were selected because they posed the most easily recognizable, high-intensity expressions in the databases. The same method was used to select the adaptors of the happy-angry pair, except that the two photographic subjects showing happy and angry expressions were male.</p>
<fig id="F1" position="float">
<label>Figure 1</label>
<caption><p><bold>Examples of the face stimuli used in experiment 1. (A)</bold> The adaptors in four adapting conditions. <bold>(B)</bold> The surprised-disgusted expression pair as tests, which were created by morphing between two adapting faces used in the SI/SC condition. These test stimuli were always the same for four adapting conditions.</p></caption>
<graphic xlink:href="fpsyg-06-01937-g0001.tif"/>
</fig>
<p>Using two expression images of the same photographic subject, we morphed a series of test faces with Abrosoft FantaMorph 5.0 for the surprised-disgusted and happy-angry expression pairs, respectively, with the expression strength varying from 0 to 100% in steps of 5% on the scale of the morphing software. The nine middle ambiguous images, which varied from 30 to 70%, served as test stimuli (see Figure <xref ref-type="fig" rid="F1">1B</xref>).</p>
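<p>As a quick check on the morph-series arithmetic (an illustrative sketch, not the stimulus-generation code): levels from 0 to 100% in 5% steps give 21 images, of which the nine levels between 30 and 70% serve as tests.</p>

```python
# Morph levels: 0-100% in 5% steps; the nine middle levels (30-70%)
# serve as the ambiguous test stimuli.
levels = list(range(0, 101, 5))
tests = [lv for lv in levels if 30 <= lv <= 70]
print(len(levels), len(tests), tests)
```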
<p>We used the same test faces but different adaptors in the four experimental conditions. As shown in Figure <xref ref-type="fig" rid="F1">1</xref>, the test faces were always the ambiguous expression images morphed between F01 with the surprised expression (C01) and F01 with the disgusted expression (C01). In the same identity/same configuration condition, the same images used to construct the test faces served as the adaptors (i.e., F01 with surprised expression C01, and F01 with disgusted expression C01). In the same identity/different configuration condition, the adaptors were face images that had the same identity but an expression configuration that differed from the test faces (i.e., F01 with surprised expression C02, and F01 with disgusted expression C02). In the different identity/same configuration condition, we used face images that had a different identity but the same expression configuration as the test faces (i.e., F02 with surprised expression C01, and F02 with disgusted expression C01). Finally, in the different identity/different configuration condition, the adaptors were face images with a different identity and an expression configuration that differed from the test faces (i.e., F02 with surprised expression C02, and F02 with disgusted expression C02).</p>
<p>All face stimuli used in Experiment 1 were cropped with an oval frame (leaving only internal features and the external jaw), resized to 400 &#x000D7; 550 pixels, and set on a black background. Distinguishing features, such as moles or scars, were removed with the Spot Healing Brush tool. Luminance and contrast were manually adjusted to be comparable across all face images. The experiments were run with the Cogent Psychophysics Toolbox extensions, and the visual stimuli were presented on a 19-inch LCD controlled by a DELL computer, with the vertical refresh rate set to 85 Hz and the spatial resolution set to 1024 &#x000D7; 768 pixels. Subjects viewed the monitor from a distance of 50 cm, at which the visual stimuli subtended a visual angle of approximately 12&#x000B0; (horizontal) by 15&#x000B0; (vertical).</p>
<p>To better control the expression configuration, we chose Caucasian faces from two FACS-coded facial expression databases. The subjects, however, were Asian. One might wonder whether the other-race effect could influence subjects&#x00027; recognition performance or contaminate the data. We argue that the other-race effect was not a major confounding factor for the following reasons. First, the experimental task was to recognize facial expression rather than facial identity. It is well established that the six basic expressions are generally universal across races (Ekman, <xref ref-type="bibr" rid="B16">1993</xref>), and observers can reliably discriminate among the six basic facial expressions on other-race faces (Ekman, <xref ref-type="bibr" rid="B16">1993</xref>; Biehl et al., <xref ref-type="bibr" rid="B3">1997</xref>; Lee et al., <xref ref-type="bibr" rid="B38">2011</xref>). Although other studies have shown that participants recognize the facial expressions of a person of their own race better than those of a person of another race (Izard, <xref ref-type="bibr" rid="B34">1971</xref>; Ducci et al., <xref ref-type="bibr" rid="B14">1982</xref>), this race effect generally occurs when a large number of subtle expressions or micro-expressions are used as tests. In the current experiment, only two expression images were used in each condition, and we carefully selected easily recognizable, high-intensity expression images. A recognition test showed that subjects could easily recognize the expressions in these selected images with a 100% correct rate. Finally, to confirm this further, we ran a pilot experiment with six subjects to compare the just noticeable differences (JNDs) of the happy-angry expression pair of the two Caucasian faces used in our experiment with those of two Asian faces<xref ref-type="fn" rid="fn0001"><sup>1</sup></xref>. The results did not show a significant difference between these two conditions, indicating approximately equal sensitivity of subjects to expression perception in Asian and Caucasian faces.</p>
</sec>
<sec>
<title>Procedure</title>
<p>Each subject was tested individually. To familiarize them with the expression images used in the experiment and with the experimental procedure, subjects were given oral instructions and short training blocks. Training stimuli included the unaltered versions (0 and 100%, also used as adaptors in the main experiment) plus two morphed versions (30 and 70%, also used as tests in the main experiment) for the happy-angry and disgusted-surprised expression pairs. A training block of eight trials was performed to help subjects form the correct association between each expression and the corresponding response button, and it was repeated until subjects reached a 100% correct rate.</p>
<p>Each trial consisted of an adaptor image presented for 4 s, followed by the test face for 200 ms (after a 100 ms noise mask). These time parameters were selected to obtain strong aftereffects based on the study of the dynamics of facial adaptation (Leopold et al., <xref ref-type="bibr" rid="B40">2005</xref>). After the presentation of the test image, subjects performed a two-alternative forced choice (2-AFC) task to classify it into one of two categories (i.e., the two expression images used to create the morphed series). Feedback was given after each trial to confirm subjects&#x00027; button responses. The subjects were instructed to attend to the face stimuli, but no fixation point was given; this was to prevent subjects from overly attending to local facial features near the fixation point.</p>
<p>There were 20 trials for each test face, resulting in 360 (9 test faces <sup>&#x0002A;</sup> 2 adaptors <sup>&#x0002A;</sup> 20 trials) trials per block. The experiment consisted of eight blocks, one for each of the eight experimental conditions (2 expression pairs &#x000D7; 4 adapting conditions), performed in an order that was randomized across subjects. The trials for different tests within a block were also randomized. The duration of one block was approximately 40 min. To reduce fatigue, subjects were allowed to take a 5-min break after every 15 min of the experiment and were encouraged to take a break whenever they felt tired. The subjects participated in one block every other day and finished all experiments within 16 days. Figure <xref ref-type="fig" rid="F2">2</xref> shows the procedure of Experiment 1.</p>
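The block structure described above (every adaptor-test combination repeated 20 times, with trial order randomized within a block) can be sketched as follows. This is an illustrative reconstruction, not the authors' actual experiment code; `build_block` and the level values are our own naming.

```python
import random
from collections import Counter

def build_block(n_trials=20, adaptors=("happy", "angry"), seed=None):
    """Build one randomized block: each of the 9 test morph levels is paired
    with each adaptor and repeated n_trials times (9 * 2 * 20 = 360 trials)."""
    test_levels = [30 + 5 * i for i in range(9)]   # 30%..70% morph strength
    trials = [(adaptor, level)
              for adaptor in adaptors
              for level in test_levels
              for _ in range(n_trials)]
    rng = random.Random(seed)
    rng.shuffle(trials)                            # randomize trial order within the block
    return trials

block = build_block(seed=1)
# len(block) == 360; each of the 18 adaptor-test combinations occurs 20 times
```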
<fig id="F2" position="float">
<label>Figure 2</label>
<caption><p><bold>Illustration of the visual adaptation and experimental procedure in Experiment 1</bold>.</p></caption>
<graphic xlink:href="fpsyg-06-01937-g0002.tif"/>
</fig>
</sec>
<sec>
<title>Data analysis</title>
<p>For each condition, we first determined the proportion of responses given for one of the two choices (e.g., how many times the subject responded &#x0201C;surprised&#x0201D;) at each test level for each subject. Then, we fitted the response proportions using a maximum likelihood procedure (Meeker and Escobar, <xref ref-type="bibr" rid="B41">1995</xref>) with the following logistic function:
<disp-formula id="E1"><mml:math id="M1"><mml:mrow><mml:mi>F</mml:mi><mml:mo stretchy='false'>(</mml:mo><mml:mi>x</mml:mi><mml:mo>;</mml:mo><mml:mi>&#x003B1;</mml:mi><mml:mo>,</mml:mo><mml:mi>&#x003B2;</mml:mi><mml:mo stretchy='false'>)</mml:mo><mml:mo>=</mml:mo><mml:mfrac><mml:mn>1</mml:mn><mml:mrow><mml:mn>1</mml:mn><mml:mo>+</mml:mo><mml:mtext>exp</mml:mtext><mml:mo stretchy='false'>(</mml:mo><mml:mo>&#x02212;</mml:mo><mml:mi>&#x003B2;</mml:mi><mml:mo stretchy='false'>(</mml:mo><mml:mi>x</mml:mi><mml:mo>&#x02212;</mml:mo><mml:mi>&#x003B1;</mml:mi><mml:mo stretchy='false'>)</mml:mo><mml:mo stretchy='false'>)</mml:mo></mml:mrow></mml:mfrac></mml:mrow></mml:math></disp-formula></p>
<p>where <italic>x</italic> is the morphing strength, <italic>F</italic>(<italic>x</italic>) is the probability of response, parameter &#x003B1; corresponds to the point of subjective equality [PSE, <italic>F</italic>(<italic>x</italic> &#x0003D; &#x003B1;; &#x003B1;, &#x003B2;) &#x0003D; 0.5], and parameter &#x003B2; determines the slope of the psychometric function. From these fits, the aftereffect magnitude was quantified as the difference (in morphing strength) between each subject&#x00027;s PSE after adaptation to one expression in a pair (e.g., the happy expression) and that after adaptation to the other expression (i.e., the angry expression).</p>
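As a concrete illustration, the maximum likelihood fit and the PSE-difference measure described above can be sketched in Python. The response counts below are fabricated for illustration only, and `fit_logistic` is our own helper, not the authors' analysis code.

```python
import numpy as np
from scipy.optimize import minimize

def fit_logistic(x, n_resp, n_total):
    """Maximum likelihood fit of F(x) = 1 / (1 + exp(-beta * (x - alpha)))
    to binomial response counts; returns (alpha, beta), where alpha is the
    PSE [F(alpha) = 0.5] and beta the slope."""
    x = np.asarray(x, float)
    k = np.asarray(n_resp, float)
    n = np.asarray(n_total, float)

    def nll(params):
        alpha, beta = params
        p = 1.0 / (1.0 + np.exp(-beta * (x - alpha)))
        p = np.clip(p, 1e-9, 1 - 1e-9)          # avoid log(0)
        return -np.sum(k * np.log(p) + (n - k) * np.log(1 - p))

    res = minimize(nll, x0=[np.mean(x), 0.1], method="Nelder-Mead")
    return res.x

# Aftereffect magnitude: difference between the PSEs after the two adaptors.
levels = [30, 35, 40, 45, 50, 55, 60, 65, 70]   # morph strengths (%)
resp_a = [1, 2, 4, 8, 12, 16, 18, 19, 20]       # illustrative counts after adaptor A
resp_b = [3, 6, 10, 14, 17, 19, 20, 20, 20]     # illustrative counts after adaptor B
pse_a, _ = fit_logistic(levels, resp_a, [20] * 9)
pse_b, _ = fit_logistic(levels, resp_b, [20] * 9)
aftereffect = pse_a - pse_b                      # in morphing-strength units
```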
<p>All statistical analyses were run in SPSS 19.0, and the significance level for all tests was set at <italic>p</italic> &#x0003C; 0.05. As the Kolmogorov-Smirnov normality test showed that no data violated the assumption of normality, we performed a three-way repeated measures ANOVA with the PSE difference of each subject as the dependent variable and facial identity (2 levels, same or different between adaptor and tests), expression configuration (2 levels, same or different between adaptor and tests), and expression type (2 levels, happy-angry or disgusted-surprised expression pair) as within-subject factors. Significant main effects and interactions were followed up with simple effect analyses. Paired samples t-tests were run on each condition to determine whether it generated a significant aftereffect, with each subject&#x00027;s PSE after adaptation to one expression in a pair (e.g., the happy expression) and that after adaptation to the other expression (i.e., the angry expression) as paired variables. Finally, <italic>post-hoc</italic> power analyses were performed for the three-way repeated measures ANOVA using SPSS 19.0 and for the paired samples t-tests using G<sup>&#x0002A;</sup>Power 3.1.</p>
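The paired samples t-test on per-subject PSEs can be sketched as follows; the PSE values here are simulated stand-ins (the actual analyses used SPSS and G*Power, as stated above).

```python
import numpy as np
from scipy.stats import ttest_rel

# Hypothetical per-subject PSEs (morph %, 16 subjects) after adaptation to
# each expression in a pair; the numbers are illustrative only.
rng = np.random.default_rng(0)
pse_after_happy = 50 + rng.normal(8, 3, size=16)   # curve shifted toward "angry"
pse_after_angry = 50 - rng.normal(8, 3, size=16)   # curve shifted toward "happy"

# Paired samples t-test on the two PSEs, one pair per subject.
t_stat, p_value = ttest_rel(pse_after_happy, pse_after_angry)

# The per-subject aftereffect magnitude is the PSE difference.
aftereffect = pse_after_happy - pse_after_angry
```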
</sec>
<sec>
<title>Result</title>
<p>We plotted the response proportions as a function of the expression strength of the test faces under the four adapting conditions for the happy-angry (Figure <xref ref-type="fig" rid="F3">3A</xref>) and surprised-disgusted (Figure <xref ref-type="fig" rid="F3">3B</xref>) expression pairs. All adapting conditions generated significant aftereffects (Figure <xref ref-type="fig" rid="F4">4</xref>; see also Table <xref ref-type="table" rid="T1">1</xref> for details). After adaptation to one expression, subjects tended to see the test face as the other expression within the pair, and the psychometric curve shifted in the direction opposite to the adaptor. These results confirm the expression aftereffect reported in the previous literature (Webster et al., <xref ref-type="bibr" rid="B59">2004</xref>; Butler et al., <xref ref-type="bibr" rid="B8">2008</xref>; Xu et al., <xref ref-type="bibr" rid="B63">2012</xref>).</p>
<fig id="F3" position="float">
<label>Figure 3</label>
<caption><p><bold>The response proportion as a function of expression strength in Experiment 1 for the happy-angry expression pair (A) and the surprised-disgusted expression pair (B)</bold>. The data were fitted with logistic functions averaged across the sixteen subjects.</p></caption>
<graphic xlink:href="fpsyg-06-01937-g0003.tif"/>
</fig>
<fig id="F4" position="float">
<label>Figure 4</label>
<caption><p><bold>The aftereffect magnitude in the four adapting conditions in Experiment 1</bold>. Bars indicate the magnitude of the expression aftereffect, and error bars denote SEM.</p></caption>
<graphic xlink:href="fpsyg-06-01937-g0004.tif"/>
</fig>
<table-wrap position="float" id="T1">
<label>Table 1</label>
<caption><p><bold>The expression aftereffect (Mean and SEM) for happy-angry and disgusted-surprised expression pairs in four adapting conditions</bold>.</p></caption>
<table frame="hsides" rules="groups">
<thead>
<tr>
<th valign="top" align="left"><bold>Adapting conditions</bold></th>
<th valign="top" align="left"><bold>Expression pairs</bold></th>
<th valign="top" align="center"><bold>Mean (%)</bold></th>
<th valign="top" align="center"><bold>SEM (%)</bold></th>
<th valign="top" align="center"><bold>Statistic tests (<italic>df</italic> &#x0003D; 15)</bold></th>
</tr>
</thead>
<tbody>
<tr>
<td valign="top" align="left">SI/SC</td>
<td valign="top" align="left">Happy-angry</td>
<td valign="top" align="center">17.26</td>
<td valign="top" align="center">1.08</td>
<td valign="top" align="center"><italic>t</italic> &#x0003D; 16.03, <italic>p</italic> &#x0003C; 0.001</td>
</tr>
<tr>
<td/>
<td valign="top" align="left">Disgusted-surprised</td>
<td valign="top" align="center">13.94</td>
<td valign="top" align="center">1.05</td>
<td valign="top" align="center"><italic>t</italic> &#x0003D; 13.32, <italic>p</italic> &#x0003C; 0.001</td>
</tr>
<tr>
<td valign="top" align="left">SI/DC</td>
<td valign="top" align="left">Happy-angry</td>
<td valign="top" align="center">7.41</td>
<td valign="top" align="center">0.81</td>
<td valign="top" align="center"><italic>t</italic> &#x0003D; 9.10, <italic>p</italic> &#x0003C; 0.001</td>
</tr>
<tr>
<td/>
<td valign="top" align="left">Disgusted-surprised</td>
<td valign="top" align="center">9.08</td>
<td valign="top" align="center">0.59</td>
<td valign="top" align="center"><italic>t</italic> &#x0003D; 15.35, <italic>p</italic> &#x0003C; 0.001</td>
</tr>
<tr>
<td valign="top" align="left">DI/SC</td>
<td valign="top" align="left">Happy-angry</td>
<td valign="top" align="center">3.81</td>
<td valign="top" align="center">0.69</td>
<td valign="top" align="center"><italic>t</italic> &#x0003D; 5.49, <italic>p</italic> &#x0003C; 0.003</td>
</tr>
<tr>
<td/>
<td valign="top" align="left">Disgusted-surprised</td>
<td valign="top" align="center">3.65</td>
<td valign="top" align="center">0.73</td>
<td valign="top" align="center"><italic>t</italic> &#x0003D; 5.04, <italic>p</italic> &#x0003C; 0.009</td>
</tr>
<tr>
<td valign="top" align="left">DI/DC</td>
<td valign="top" align="left">Happy-angry</td>
<td valign="top" align="center">3.54</td>
<td valign="top" align="center">0.54</td>
<td valign="top" align="center"><italic>t</italic> &#x0003D; 6.57, <italic>p</italic> &#x0003C; 0.002</td>
</tr>
<tr>
<td/>
<td valign="top" align="left">Disgusted-surprised</td>
<td valign="top" align="center">4.18</td>
<td valign="top" align="center">0.97</td>
<td valign="top" align="center"><italic>t</italic> &#x0003D; 4.33, <italic>p</italic> &#x0003C; 0.026</td>
</tr>
</tbody>
</table>
</table-wrap>
<p>The results show no significant main effect of expression pair [<italic>F</italic><sub>(1, 15)</sub> &#x0003D; 0.21, <italic>p</italic> &#x0003D; 0.65], indicating that a similar expression aftereffect was obtained for both expression pairs. There were also no significant two- or three-way interactions involving expression type. Consequently, the data were collapsed across expression pairs in further analyses.</p>
<p>The main effect of facial identity on the expression aftereffect was significant [<italic>F</italic><sub>(1, 15)</sub> &#x0003D; 228.63, <italic>p</italic> &#x0003C; 0.001, power &#x0003D; 1.00], indicating smaller expression aftereffects when the adaptor and tests were different persons (<italic>M</italic> &#x0003D; 3.80%, SEM &#x0003D; 0.37%) than when they were the same person (<italic>M</italic> &#x0003D; 11.92%, SEM &#x0003D; 0.66%), averaged across the two expression configuration groups. This result is consistent with previous observations that a change in facial identity reduces the expression aftereffect, suggesting that the expression representation depends on facial identity (Fox and Barton, <xref ref-type="bibr" rid="B22">2007</xref>; Ellamil et al., <xref ref-type="bibr" rid="B19">2008</xref>; Campbell and Burke, <xref ref-type="bibr" rid="B10">2009</xref>; Vida and Mondloch, <xref ref-type="bibr" rid="B57">2009</xref>; Pell and Richards, <xref ref-type="bibr" rid="B45">2013</xref>).</p>
<p>There was a main effect of expression configuration on the expression aftereffect [<italic>F</italic><sub>(1, 15)</sub> &#x0003D; 53.07, <italic>p</italic> &#x0003C; 0.001, power &#x0003D; 1.00]. We also found a significant interaction between facial identity and expression configuration [<italic>F</italic><sub>(1, 15)</sub> &#x0003D; 29.05, <italic>p</italic> &#x0003C; 0.001, power &#x0003D; 0.99], suggesting that expression configuration influences the identity-dependent and identity-independent expression aftereffects in different ways. Simple effect analyses were performed to further explore the source of this interaction. The expression aftereffect in the same identity/different configuration condition (<italic>M</italic> &#x0003D; 8.25%, SEM &#x0003D; 0.52%) was much weaker than that in the same identity/same configuration condition (<italic>M</italic> &#x0003D; 15.60%, SEM &#x0003D; 0.80%) [<italic>F</italic><sub>(1, 15)</sub> &#x0003D; 71.13, <italic>p</italic> &#x0003C; 0.001, power &#x0003D; 1.00], suggesting that a change of expression configuration impairs the identity-dependent expression aftereffect. In contrast, the aftereffect magnitude in the different identity/same configuration condition (<italic>M</italic> &#x0003D; 3.73%, SEM &#x0003D; 0.49%) was approximately the same as that in the different identity/different configuration condition (<italic>M</italic> &#x0003D; 3.86%, SEM &#x0003D; 0.55%) [<italic>F</italic><sub>(1, 15)</sub> &#x0003D; 0.02, <italic>p</italic> &#x0003D; 0.88, power &#x0003D; 0.046]. This indicates that the identity-independent expression aftereffect is robust to variance in expression configuration in comparison to the identity-dependent expression aftereffect. It should be noted that the statistical power for this contrast is relatively low, and one may argue that the robustness of the identity-independent aftereffect is simply due to statistical error. A control experiment is required to exclude this possibility (see Experiment 2). 
Finally, the expression aftereffect in the different identity/same configuration condition was weaker than that in the same identity/same configuration condition [<italic>F</italic><sub>(1, 15)</sub> &#x0003D; 59.98, <italic>p</italic> &#x0003C; 0.001, power &#x0003D; 1.00], indicating that the reduction of the expression aftereffect across identities holds even when the adaptor and test faces have the same expression configuration. This suggests that the reduction of the cross-identity expression aftereffect cannot be attributed entirely to the change in expression configuration, further confirming the role of facial identity in the expression aftereffect.</p>
</sec>
</sec>
<sec id="s3">
<title>Experiment 2</title>
<p>Experiment 1 showed no significant difference in aftereffect size between the different identity/different configuration and different identity/same configuration conditions, indicating that the identity-independent expression aftereffect is robust to expression configuration change. The statistical power for this observation was, however, not high, owing to the interference of the identity-dependent expression representation. Here, we further examined this observation using schematic line faces as adaptors and real faces as tests. It is well established that the adaptation effect is modulated by the perceptual similarity between adaptor and test. As a line face is completely dissimilar to a real face and thus conveys only the emotional information of an expression, using a line face as the adaptor should exclude any contribution from the identity-dependent neural representation to the expression aftereffect. In addition, the line faces enabled us to manipulate the expression configuration and/or expression intensity more precisely. If the identity-independent expression aftereffect is robust to variance in expression configuration, the magnitude of the expression aftereffect should be approximately equal even across different expression configurations; otherwise, the aftereffect magnitude should differ significantly between configurations.</p>
<sec>
<title>Subject</title>
<p>The subjects were twenty paid students (mean age: 22.5, <italic>SD</italic> &#x0003D; 2.9) from Shanghai Maritime University, and each completed one 40-min session. All subjects had normal or corrected-to-normal vision, and all were na&#x000EF;ve to the aims and purpose of the experiment.</p>
</sec>
<sec>
<title>Stimuli and apparatus</title>
<p>As line faces cannot vividly express complex expressions, such as surprise or disgust, we used happy and sad expressions in Experiment 2. The four line faces displaying happy or sad expressions were created using Adobe Photoshop CS 6.0. Each line face was made of an ellipse with a white background for the face area, two black dots for the eyes, and a curved line for the mouth. The eyes and mouth were located at one-third and two-thirds of the major axis of the face, respectively. The center-to-center distance between the two eyes was one-third of the minor axis of the face. We gradually changed the curvature of the mouth from concave to convex to express a happy or sad expression.</p>
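The geometry of the schematic faces can be expressed parametrically. The following sketch computes the feature positions from the proportions given above; the function name and the curvature-sign convention in the comment are our own assumptions, not taken from the stimulus files.

```python
def line_face_features(width, height):
    """Feature positions for a schematic line face: eyes at one-third and the
    mouth at two-thirds of the major (vertical) axis; the center-to-center eye
    distance is one-third of the minor (horizontal) axis."""
    cx = width / 2.0
    eye_y = height / 3.0                  # eyes: 1/3 down the major axis
    mouth_y = 2.0 * height / 3.0          # mouth: 2/3 down the major axis
    eye_sep = width / 3.0                 # center-to-center eye distance
    return {
        "left_eye": (cx - eye_sep / 2.0, eye_y),
        "right_eye": (cx + eye_sep / 2.0, eye_y),
        "mouth": (cx, mouth_y),
    }

# The sign of the mouth curvature (convex vs. concave) switches the face between
# happy and sad, and its magnitude codes expression intensity (assumed convention).
# Example with the 400*550 pixel image size used for the stimuli:
feats = line_face_features(400.0, 550.0)
```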
<p>There were two adapting conditions in Experiment 2: faces with large mouth curvatures were used as adaptors in the high-intensity adapting condition, while those with small curvatures were used as adaptors in the low-intensity adapting condition (Figure <xref ref-type="fig" rid="F5">5</xref>). Ten subjects were instructed to rate the expression intensity of these adapting faces on a 9-point Likert scale (1 to 9). As expected, the happy face with a large curvature (<italic>M</italic> &#x0003D; 6.9, SEM &#x0003D; 0.48) was rated higher in expression intensity than that with a small curvature (<italic>M</italic> &#x0003D; 3.8, SEM &#x0003D; 0.49) [<italic>t</italic><sub>(9)</sub> &#x0003D; 6.15, <italic>p</italic> &#x0003C; 0.001, power &#x0003D; 1.00], and the sad face with a large curvature (<italic>M</italic> &#x0003D; 6.1, SEM &#x0003D; 0.50) was also rated higher than that with a small curvature (<italic>M</italic> &#x0003D; 4.1, SEM &#x0003D; 0.35) [<italic>t</italic><sub>(9)</sub> &#x0003D; 7.75, <italic>p</italic> &#x0003C; 0.001, power &#x0003D; 1.00].</p>
<fig id="F5" position="float">
<label>Figure 5</label>
<caption><p><bold>The line faces as adaptors used in Experiment 2</bold>.</p></caption>
<graphic xlink:href="fpsyg-06-01937-g0005.tif"/>
</fig>
<p>For these two conditions, the adaptors differed but the tests were the same. A male photographic subject displaying happy and sad expressions was selected from the affiliated image set of the Facial Action Coding System to construct the test faces. As in Experiment 1, we created ambiguous expression images with sadness strength from 0% (happiest) to 100% (saddest) in steps of 5% using Abrosoft FantaMorph 5.0, and the nine faces with sadness strengths from 30 to 70% were used as test faces. The image size, display apparatus, and presentation method were the same as in Experiment 1.</p>
</sec>
<sec>
<title>Procedure</title>
<p>The procedure is identical to that of Experiment 1 with the following exceptions. First, the adaptors were computer-generated line faces instead of real faces. Second, we randomly interleaved the catch trials using the inversion of cartoon faces or real faces as the adaptation stimuli, which was to prevent subjects from simply using the local mouth area instead of whole face to recognize the expression (Xu et al., <xref ref-type="bibr" rid="B62">2008</xref>). These catch trials were not further analyzed.</p>
</sec>
<sec>
<title>Result</title>
<p>We plotted the response proportions as a function of the expression strength of the test faces under the two adapting conditions (Figure <xref ref-type="fig" rid="F6">6</xref>). The line faces generated significant aftereffects in both adapting conditions, although the aftereffect strength was relatively weak compared to that induced by the real-face adaptors in Experiment 1. The results are consistent with prior studies (Butler et al., <xref ref-type="bibr" rid="B8">2008</xref>; Xu et al., <xref ref-type="bibr" rid="B62">2008</xref>), which found that a simple curvature and a cartoon face can generate an observable aftereffect on real faces.</p>
<fig id="F6" position="float">
<label>Figure 6</label>
<caption><p><bold>The response proportion as a function of the expression strength in Experiment 2</bold>. The data were fitted with logistic functions averaged across all twenty subjects.</p></caption>
<graphic xlink:href="fpsyg-06-01937-g0006.tif"/>
</fig>
<p>The aftereffect magnitude was 2.89% (SEM &#x0003D; 0.57%) in the high-intensity adapting condition and 2.60% (SEM &#x0003D; 0.78%) in the low-intensity adapting condition. A paired samples t-test showed no significant difference between these two conditions [<italic>t</italic><sub>(19)</sub> &#x0003D; 0.383, <italic>p</italic> &#x0003D; 0.706, power &#x0003D; 0.92]. This observation is similar to the result obtained with real faces in Experiment 1, providing converging evidence that the identity-independent expression aftereffect is robust to changes in expression configuration, in contrast to the identity-dependent expression aftereffect.</p>
</sec>
</sec>
<sec sec-type="discussion" id="s4">
<title>Discussion</title>
<p>Our results first confirmed that the expression aftereffect is reduced when the facial identity is changed, a finding reported in several previous studies. This observation does not depend on the specific expression pair, as a similar aftereffect pattern was observed for both the happy-angry and the surprised-disgusted expression pairs. Campbell and Burke (<xref ref-type="bibr" rid="B10">2009</xref>) also found that the extent of the aftereffect reduction across identities was almost the same for five basic emotional expressions, although the processing of these different expressions appears to involve different neural mechanisms and visual areas (e.g., Posamentier and Abdi, <xref ref-type="bibr" rid="B47">2003</xref>). This suggests that the processing of individual facial expressions depends similarly on facial identity.</p>
<p>The reduction in the cross-identity expression aftereffect is not simply due to the change in expression configuration between adaptor and test faces, although a change in expression configuration does influence the expression aftereffect. The results of Experiment 1 show that the expression aftereffect is still reduced even when the adaptor and test faces have identical expression configurations, suggesting a functional influence of facial identity on the expression aftereffect. This observation is consistent with Ellamil et al.&#x00027;s (<xref ref-type="bibr" rid="B19">2008</xref>) work, which used artificial faces with angry and surprised expressions as adaptors and found a similar reduction in the adaptation effect when the adaptor and test faces had the same expression morphing prototype but different facial textures and contours. Together, our observations and those of Ellamil et al. (<xref ref-type="bibr" rid="B19">2008</xref>) consolidate Fox&#x00027;s proposal of an identity-dependent neural representation.</p>
<p>We further determined how a change in expression configuration influences the identity-dependent and identity-independent expression aftereffects, respectively, using real faces as adaptors in Experiment 1 and line faces as adaptors in Experiment 2. The results were consistent across the two experiments. Experiment 1 found that a change in expression configuration impaired the identity-dependent expression aftereffect, but not the identity-independent expression aftereffect. Experiment 2 confirmed this observation using line faces as adaptors: the identity-independent expression aftereffect was consistently robust to variance in the expression configuration relative to the identity-dependent expression aftereffect.</p>
<p>Why do the identity-dependent and identity-independent expression systems show different sensitivities to expression configuration? A possible explanation is that the two systems rely on different facial components to process emotional expressions. As the structural reference hypothesis states (Ganel and Goshen-Gottstein, <xref ref-type="bibr" rid="B23">2004</xref>), face structure information is not only important for identity discrimination but is also used by observers as a reference to compute and recognize expressions. The identity-dependent expression system may rely more on facial shape and/or structure information (e.g., local edges, facial contour) to perform an expression configuration analysis, and it is thus sensitive to changes in the spatial configuration of expressions. In support of this notion, Neth and Martinez (<xref ref-type="bibr" rid="B43">2009</xref>) demonstrated that simple structural changes in emotionless faces could induce the illusory perception of facial expression, with a shorter vertical distance between the eyes and nose resulting in the perception of anger and a larger distance leading to the perception of sadness. On the other hand, the identity-independent expression system seems to depend on emotion-category information and is thus more robust to detailed variance in the configural properties of the face. In daily life, although a happy expression can be displayed with different configurations or intensities on the face, all of these expressions are perceived as a happy emotion within a certain range of expression intensities and configurations. It is therefore possible that the identity-dependent expression system interprets the spatial configuration of an expression, while the identity-independent system receives input from the identity-dependent system and extracts the emotional information from the facial expression.</p>
<p>There is a long-standing debate about whether facial expressions are represented as belonging to discrete categories of emotion or as continuous dimensions (see Bruce and Young, <xref ref-type="bibr" rid="B6">2012</xref>; Harris et al., <xref ref-type="bibr" rid="B29">2012</xref>). The categorical model is supported by evidence that faces within a category are discriminated more poorly than faces in different categories that differ by an equal physical amount (Etcoff and Magee, <xref ref-type="bibr" rid="B21">1992</xref>; Calder et al., <xref ref-type="bibr" rid="B9">1996</xref>). In contrast, the observation that humans can accurately perceive differences in the intensity (Calder et al., <xref ref-type="bibr" rid="B9">1996</xref>) and variation (Rozin et al., <xref ref-type="bibr" rid="B49">1994</xref>) of a given emotional expression is consistent with the continuous model. Our results support a synthesis of these two models by showing that expression is represented in both categorical and continuous ways. Specifically, the identity-independent expression system is robust to variance in expression configurations within the same emotion category, suggesting that it perceives emotional expressions in a categorical manner; in contrast, the high sensitivity to expression configuration supports a continuous representation in the identity-dependent expression system.</p>
<p>We suggest two candidate brain areas for the identity-dependent expression representation. The first is the posterior superior temporal sulcus (STS). A recent fMRI adaptation study investigated adaptation to both facial identity and expression (Winston et al., <xref ref-type="bibr" rid="B61">2004</xref>). Adaptation to identity but not expression was found in the fusiform face area (FFA), and adaptation to expression but not identity was found in the middle STS. In the posterior STS, there were large adaptation effects for identity and a smaller adaptation effect for expression. These observations suggest that the posterior STS may encode both identity and expression. Future research should examine whether the adaptation effect in the posterior STS is sensitive to a change in expression configuration within the same person. The second candidate for the identity-dependent expression representation is the FFA. Although this area is generally believed to be responsible for processing static facial information in identity discrimination (Haxby et al., <xref ref-type="bibr" rid="B32">2000</xref>; Winston et al., <xref ref-type="bibr" rid="B61">2004</xref>), the FFA has also been found to be sensitive to variations in expression even when attention was directed to identity (Ganel et al., <xref ref-type="bibr" rid="B24">2005</xref>). These findings fit with our behavioral data and suggest an interactive system for the processing of expression and identity in the FFA.</p>
<p>It is relatively difficult to infer the brain area underlying the identity-independent expression representation. Winston et al. (<xref ref-type="bibr" rid="B61">2004</xref>) found that repeating emotional expressions across different face pairs led to a reduced signal in the middle STS, indicating identity-independent processing of expressions there. However, the middle STS is sensitive to dynamic and transient changes in facial features, which contrasts with the robustness of the identity-independent expression aftereffect to expression configuration observed in the current study. We suggest that the amygdala is a possible candidate. This area is highly related to the processing of emotional expressions and has previously been found to be insensitive to expression changes within the same emotion category (Harris et al., <xref ref-type="bibr" rid="B29">2012</xref>), which fits well with the identity-independent expression aftereffect observed here. On the other hand, it is worth pointing out that the amygdala may not be the only area involved in identity-independent processing of expressions. The amygdala is generally related to the processing of fearful and happy expressions (Morris et al., <xref ref-type="bibr" rid="B42">1996</xref>; Blair et al., <xref ref-type="bibr" rid="B4">1999</xref>), and other expressions, such as disgust, may not induce amygdala activation (Phillips et al., <xref ref-type="bibr" rid="B46">1997</xref>). The disgusted expression, however, showed an aftereffect pattern similar to that of the other expressions in our first experiment. This suggests that brain areas besides the amygdala are also involved in identity-independent expression processing. It would be interesting for future studies to investigate whether the activation of other expression-related brain areas is robust to changes in expression configuration, using an fMRI adaptation paradigm.</p>
<sec>
<title>Conflict of interest statement</title>
<p>The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.</p></sec>
</sec>
</body>
<back>
<ack>
<p>This work was supported by the School Foundation of Shanghai Maritime University under Grant No. 20130468; by the Shanghai Municipal Natural Science Foundation under Grants No. 14ZR1419300, No. 14ZR1419700, and No. 13ZR1455600; and partially by the National Natural Science Foundation of China under Grants No. 61403251, No. 3147095, and No. 31300938. It was also supported by the Japan Society for the Promotion of Science, KAKENHI, under Grant No. 24300085 to KS.</p>
</ack>
<ref-list>
<title>References</title>
<ref id="B1">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Anstis</surname> <given-names>S.</given-names></name> <name><surname>Verstraten</surname> <given-names>F. A.</given-names></name> <name><surname>Mather</surname> <given-names>G.</given-names></name></person-group> (<year>1998</year>). <article-title>The motion aftereffect</article-title>. <source>Trends Cogn. Sci.</source> <volume>2</volume>, <fpage>111</fpage>&#x02013;<lpage>117</lpage>. <pub-id pub-id-type="doi">10.1016/S1364-6613(98)01142-5</pub-id><pub-id pub-id-type="pmid">12678580</pub-id></citation>
</ref>
<ref id="B2">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Baudouin</surname> <given-names>J. Y.</given-names></name> <name><surname>Gilibert</surname> <given-names>D.</given-names></name> <name><surname>Sansone</surname> <given-names>S.</given-names></name> <name><surname>Tiberghien</surname> <given-names>G.</given-names></name></person-group> (<year>2000</year>). <article-title>When the smile is a cue to familiarity</article-title>. <source>Memory</source> <volume>8</volume>, <fpage>285</fpage>&#x02013;<lpage>292</lpage>. <pub-id pub-id-type="doi">10.1080/09658210050117717</pub-id><pub-id pub-id-type="pmid">11045237</pub-id></citation>
</ref>
<ref id="B3">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Biehl</surname> <given-names>M.</given-names></name> <name><surname>Matsumoto</surname> <given-names>D.</given-names></name> <name><surname>Ekman</surname> <given-names>P.</given-names></name> <name><surname>Hearn</surname> <given-names>V.</given-names></name> <name><surname>Heider</surname> <given-names>K.</given-names></name> <name><surname>Kudoh</surname> <given-names>T.</given-names></name> <etal/></person-group>. (<year>1997</year>). <article-title>Matsumoto and Ekman&#x00027;s Japanese and Caucasian facial expressions of emotion (JACFEE): reliability data and cross-national differences</article-title>. <source>J. Nonverbal Behav.</source> <volume>21</volume>, <fpage>3</fpage>&#x02013;<lpage>21</lpage>. <pub-id pub-id-type="doi">10.1023/A:1024902500935</pub-id></citation>
</ref>
<ref id="B4">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Blair</surname> <given-names>R. J.</given-names></name> <name><surname>Morris</surname> <given-names>J. S.</given-names></name> <name><surname>Frith</surname> <given-names>C. D.</given-names></name> <name><surname>Perrett</surname> <given-names>D. I.</given-names></name> <name><surname>Dolan</surname> <given-names>J. R.</given-names></name></person-group> (<year>1999</year>). <article-title>Dissociable neural responses to facial expressions of sadness and anger</article-title>. <source>Brain</source> <volume>122</volume>, <fpage>883</fpage>&#x02013;<lpage>893</lpage>. <pub-id pub-id-type="doi">10.1093/brain/122.5.883</pub-id><pub-id pub-id-type="pmid">10355673</pub-id></citation>
</ref>
<ref id="B5">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Bruce</surname> <given-names>V.</given-names></name> <name><surname>Young</surname> <given-names>A.</given-names></name></person-group> (<year>1986</year>). <article-title>Understanding face recognition</article-title>. <source>Br. J. Psychol.</source> <volume>77</volume>, <fpage>305</fpage>&#x02013;<lpage>327</lpage>. <pub-id pub-id-type="doi">10.1111/j.2044-8295.1986.tb02199.x</pub-id><pub-id pub-id-type="pmid">3756376</pub-id></citation>
</ref>
<ref id="B6">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Bruce</surname> <given-names>V.</given-names></name> <name><surname>Young</surname> <given-names>A.</given-names></name></person-group> (<year>2012</year>). <source>Face Perception</source>. <publisher-loc>Hove</publisher-loc>: <publisher-name>Psychology Press</publisher-name>.</citation>
</ref>
<ref id="B7">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Bruyer</surname> <given-names>R.</given-names></name> <name><surname>Laterre</surname> <given-names>C.</given-names></name> <name><surname>Seron</surname> <given-names>X.</given-names></name> <name><surname>Feyereisen</surname> <given-names>P.</given-names></name> <name><surname>Strypstein</surname> <given-names>E.</given-names></name> <name><surname>Pierrard</surname> <given-names>E.</given-names></name> <etal/></person-group>. (<year>1983</year>). <article-title>A case of prosopagnosia with some preserved covert remembrance of familiar faces</article-title>. <source>Brain Cogn.</source> <volume>2</volume>, <fpage>257</fpage>&#x02013;<lpage>284</lpage>. <pub-id pub-id-type="doi">10.1016/0278-2626(83)90014-3</pub-id><pub-id pub-id-type="pmid">6546027</pub-id></citation>
</ref>
<ref id="B8">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Butler</surname> <given-names>A.</given-names></name> <name><surname>Oruc</surname> <given-names>I.</given-names></name> <name><surname>Fox</surname> <given-names>C. J.</given-names></name> <name><surname>Barton</surname> <given-names>J. J. S.</given-names></name></person-group> (<year>2008</year>). <article-title>Factors contributing to the adaptation aftereffects of facial expression</article-title>. <source>Brain Res.</source> <volume>1191</volume>, <fpage>116</fpage>&#x02013;<lpage>126</lpage>. <pub-id pub-id-type="doi">10.1016/j.brainres.2007.10.101</pub-id><pub-id pub-id-type="pmid">18096142</pub-id></citation>
</ref>
<ref id="B9">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Calder</surname> <given-names>A. J.</given-names></name> <name><surname>Young</surname> <given-names>A. W.</given-names></name> <name><surname>Perrett</surname> <given-names>D. I.</given-names></name> <name><surname>Etcoff</surname> <given-names>N. L.</given-names></name> <name><surname>Rowland</surname> <given-names>D.</given-names></name></person-group> (<year>1996</year>). <article-title>Categorical perception of morphed facial expressions</article-title>. <source>Vis. Cogn.</source> <volume>3</volume>, <fpage>81</fpage>&#x02013;<lpage>118</lpage>. <pub-id pub-id-type="doi">10.1080/713756735</pub-id></citation>
</ref>
<ref id="B10">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Campbell</surname> <given-names>J.</given-names></name> <name><surname>Burke</surname> <given-names>D.</given-names></name></person-group> (<year>2009</year>). <article-title>Evidence that identity-dependent and identity-independent neural populations are recruited in the perception of five basic emotional facial expressions</article-title>. <source>Vision Res.</source> <volume>49</volume>, <fpage>1532</fpage>&#x02013;<lpage>1540</lpage>. <pub-id pub-id-type="doi">10.1016/j.visres.2009.03.009</pub-id><pub-id pub-id-type="pmid">19303422</pub-id></citation>
</ref>
<ref id="B11">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Chen</surname> <given-names>W.</given-names></name> <name><surname>Lander</surname> <given-names>K.</given-names></name> <name><surname>Liu</surname> <given-names>C. H.</given-names></name></person-group> (<year>2011</year>). <article-title>Matching faces with emotional expressions</article-title>. <source>Front. Psychol.</source> <volume>2</volume>:<issue>206</issue>. <pub-id pub-id-type="doi">10.3389/fpsyg.2011.00206</pub-id><pub-id pub-id-type="pmid">21909332</pub-id></citation>
</ref>
<ref id="B12">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Clifford</surname> <given-names>C. W.</given-names></name></person-group> (<year>2002</year>). <article-title>Perceptual adaptation: motion parallels orientation</article-title>. <source>Trends Cogn. Sci.</source> <volume>6</volume>, <fpage>136</fpage>&#x02013;<lpage>143</lpage>. <pub-id pub-id-type="doi">10.1016/S1364-6613(00)01856-8</pub-id><pub-id pub-id-type="pmid">11861192</pub-id></citation>
</ref>
<ref id="B13">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Cook</surname> <given-names>R.</given-names></name> <name><surname>Matei</surname> <given-names>M.</given-names></name> <name><surname>Johnston</surname> <given-names>A.</given-names></name></person-group> (<year>2011</year>). <article-title>Exploring expression space: adaptation to orthogonal and anti-expressions</article-title>. <source>J. Vis.</source> <volume>11</volume>:<fpage>2</fpage>. <pub-id pub-id-type="doi">10.1167/11.4.2</pub-id><pub-id pub-id-type="pmid">21464438</pub-id></citation>
</ref>
<ref id="B14">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Ducci</surname> <given-names>L.</given-names></name> <name><surname>Arcuri</surname> <given-names>L.</given-names></name> <name><surname>Georgis</surname> <given-names>T.</given-names></name> <name><surname>Sineshaw</surname> <given-names>T.</given-names></name></person-group> (<year>1982</year>). <article-title>Emotion recognition in Ethiopia: the effect of familiarity with western culture on accuracy of recognition</article-title>. <source>J. Cross Cult. Psychol.</source> <volume>13</volume>, <fpage>340</fpage>&#x02013;<lpage>351</lpage>. <pub-id pub-id-type="doi">10.1177/0022002182013003005</pub-id></citation>
</ref>
<ref id="B15">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Durgin</surname> <given-names>F. H.</given-names></name> <name><surname>Proffitt</surname> <given-names>D. R.</given-names></name></person-group> (<year>1996</year>). <article-title>Visual learning in the perception of texture: simple and contingent aftereffects of texture density</article-title>. <source>Spat. Vis.</source> <volume>9</volume>, <fpage>423</fpage>&#x02013;<lpage>474</lpage>. <pub-id pub-id-type="doi">10.1163/156856896X00204</pub-id><pub-id pub-id-type="pmid">8774089</pub-id></citation>
</ref>
<ref id="B16">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Ekman</surname> <given-names>P.</given-names></name></person-group> (<year>1993</year>). <article-title>Facial expression and emotion</article-title>. <source>Am. Psychol.</source> <volume>48</volume>, <fpage>384</fpage>&#x02013;<lpage>392</lpage>. <pub-id pub-id-type="doi">10.1037/0003-066X.48.4.384</pub-id><pub-id pub-id-type="pmid">19945472</pub-id></citation>
</ref>
<ref id="B17">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Ekman</surname> <given-names>P.</given-names></name> <name><surname>Friesen</surname> <given-names>W. V.</given-names></name> <name><surname>Hager</surname> <given-names>J. C.</given-names></name></person-group> (<year>2002</year>). <source>Facial Action Coding System: The Manual on CD ROM</source>. <publisher-loc>Salt Lake City</publisher-loc>: <publisher-name>A Human Face</publisher-name>.</citation>
</ref>
<ref id="B18">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Ekman</surname> <given-names>P.</given-names></name> <name><surname>Oster</surname> <given-names>H.</given-names></name></person-group> (<year>1979</year>). <article-title>Facial expressions of emotion</article-title>. <source>Annu. Rev. Psychol.</source> <volume>30</volume>, <fpage>527</fpage>&#x02013;<lpage>554</lpage>. <pub-id pub-id-type="doi">10.1146/annurev.ps.30.020179.002523</pub-id><pub-id pub-id-type="pmid">26073552</pub-id></citation>
</ref>
<ref id="B19">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Ellamil</surname> <given-names>M.</given-names></name> <name><surname>Susskind</surname> <given-names>J. M.</given-names></name> <name><surname>Anderson</surname> <given-names>A. K.</given-names></name></person-group> (<year>2008</year>). <article-title>Examinations of identity invariance in facial expression adaptation</article-title>. <source>Cogn. Affect. Behav. Neurosci.</source> <volume>8</volume>, <fpage>273</fpage>&#x02013;<lpage>281</lpage>. <pub-id pub-id-type="doi">10.3758/CABN.8.3.273</pub-id><pub-id pub-id-type="pmid">18814464</pub-id></citation>
</ref>
<ref id="B20">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Ellis</surname> <given-names>A. W.</given-names></name> <name><surname>Young</surname> <given-names>A. W.</given-names></name> <name><surname>Flude</surname> <given-names>B. M.</given-names></name></person-group> (<year>1990</year>). <article-title>Repetition priming and face processing: priming occurs within the system that responds to the identity of a face</article-title>. <source>Q. J. Exp. Psychol. Hum. Exp. Psychol.</source> <volume>42</volume>, <fpage>495</fpage>&#x02013;<lpage>512</lpage>. <pub-id pub-id-type="doi">10.1080/14640749008401234</pub-id><pub-id pub-id-type="pmid">2236632</pub-id></citation>
</ref>
<ref id="B21">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Etcoff</surname> <given-names>N. L.</given-names></name> <name><surname>Magee</surname> <given-names>J. J.</given-names></name></person-group> (<year>1992</year>). <article-title>Categorical perception of facial expressions</article-title>. <source>Cognition</source> <volume>44</volume>, <fpage>227</fpage>&#x02013;<lpage>240</lpage>. <pub-id pub-id-type="doi">10.1016/0010-0277(92)90002-Y</pub-id><pub-id pub-id-type="pmid">1424493</pub-id></citation>
</ref>
<ref id="B22">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Fox</surname> <given-names>C. J.</given-names></name> <name><surname>Barton</surname> <given-names>J. J. S.</given-names></name></person-group> (<year>2007</year>). <article-title>What is adapted in face adaptation? The neural representations of expression in the human visual system</article-title>. <source>Brain Res.</source> <volume>1127</volume>, <fpage>80</fpage>&#x02013;<lpage>89</lpage>. <pub-id pub-id-type="doi">10.1016/j.brainres.2006.09.104</pub-id><pub-id pub-id-type="pmid">17109830</pub-id></citation>
</ref>
<ref id="B23">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Ganel</surname> <given-names>T.</given-names></name> <name><surname>Goshen-Gottstein</surname> <given-names>Y.</given-names></name></person-group> (<year>2004</year>). <article-title>Effects of familiarity on the perceptual integrality of the identity and expression of faces: the parallel route hypothesis revisited</article-title>. <source>J. Exp. Psychol. Hum. Percept. Perform.</source> <volume>30</volume>, <fpage>583</fpage>&#x02013;<lpage>597</lpage>. <pub-id pub-id-type="doi">10.1037/0096-1523.30.3.583</pub-id><pub-id pub-id-type="pmid">15161388</pub-id></citation>
</ref>
<ref id="B24">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Ganel</surname> <given-names>T.</given-names></name> <name><surname>Valyear</surname> <given-names>K. F.</given-names></name> <name><surname>Goshen-Gottstein</surname> <given-names>Y.</given-names></name> <name><surname>Goodale</surname> <given-names>M.</given-names></name></person-group> (<year>2005</year>). <article-title>The involvement of the &#x0201C;fusiform face area&#x0201D; in processing facial expression</article-title>. <source>Neuropsychologia</source> <volume>43</volume>, <fpage>1645</fpage>&#x02013;<lpage>1654</lpage>. <pub-id pub-id-type="doi">10.1016/j.neuropsychologia.2005.01.012</pub-id><pub-id pub-id-type="pmid">16009246</pub-id></citation>
</ref>
<ref id="B25">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Gao</surname> <given-names>W.</given-names></name> <name><surname>Cao</surname> <given-names>B.</given-names></name> <name><surname>Shan</surname> <given-names>S. G.</given-names></name> <name><surname>Chen</surname> <given-names>X. L.</given-names></name> <name><surname>Zhou</surname> <given-names>D. L.</given-names></name> <name><surname>Zhang</surname> <given-names>X. H.</given-names></name> <etal/></person-group>. (<year>2008</year>). <article-title>The CAS-PEAL large-scale Chinese face database and baseline evaluations</article-title>. <source>IEEE Trans. Syst.Man Cybernet. A</source> <volume>38</volume>, <fpage>149</fpage>&#x02013;<lpage>161</lpage>. <pub-id pub-id-type="doi">10.1109/TSMCA.2007.909557</pub-id></citation>
</ref>
<ref id="B26">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Gescheider</surname> <given-names>G.</given-names></name></person-group> (<year>1997</year>). <source>Psychophysics: The Fundamentals, 3rd Edn.</source> <publisher-loc>Mahwah, NJ</publisher-loc>: <publisher-name>Lawrence Erlbaum Associates</publisher-name>.</citation>
</ref>
<ref id="B27">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Gothard</surname> <given-names>K. M.</given-names></name> <name><surname>Battaglia</surname> <given-names>F. P.</given-names></name> <name><surname>Erickson</surname> <given-names>C. A.</given-names></name> <name><surname>Spitler</surname> <given-names>K. M.</given-names></name> <name><surname>Amaral</surname> <given-names>D. G.</given-names></name></person-group> (<year>2007</year>). <article-title>Neural responses to facial expression and face identity in the monkey amygdala</article-title>. <source>J. Neurophysiol.</source> <volume>97</volume>, <fpage>1671</fpage>&#x02013;<lpage>1683</lpage>. <pub-id pub-id-type="doi">10.1152/jn.00714.2006</pub-id><pub-id pub-id-type="pmid">17093126</pub-id></citation>
</ref>
<ref id="B28">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Grill-Spector</surname> <given-names>K.</given-names></name> <name><surname>Malach</surname> <given-names>R.</given-names></name></person-group> (<year>2001</year>). <article-title>fMR-adaptation: a tool for studying the functional properties of human cortical neurons</article-title>. <source>Acta Psychol.</source> <volume>107</volume>, <fpage>293</fpage>&#x02013;<lpage>321</lpage>. <pub-id pub-id-type="doi">10.1016/S0001-6918(01)00019-1</pub-id><pub-id pub-id-type="pmid">11388140</pub-id></citation>
</ref>
<ref id="B29">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Harris</surname> <given-names>R. J.</given-names></name> <name><surname>Young</surname> <given-names>A. W.</given-names></name> <name><surname>Andrews</surname> <given-names>T. J.</given-names></name></person-group> (<year>2012</year>). <article-title>Morphing between expressions dissociates continuous from categorical representations of facial expression in the human brain</article-title>. <source>Proc. Natl. Acad. Sci. U.S.A.</source> <volume>109</volume>, <fpage>21164</fpage>&#x02013;<lpage>21169</lpage>. <pub-id pub-id-type="doi">10.1073/pnas.1212207110</pub-id><pub-id pub-id-type="pmid">23213218</pub-id></citation>
</ref>
<ref id="B30">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Harris</surname> <given-names>R. J.</given-names></name> <name><surname>Young</surname> <given-names>A. W.</given-names></name> <name><surname>Andrews</surname> <given-names>T. J.</given-names></name></person-group> (<year>2014</year>). <article-title>Dynamic stimuli demonstrate a categorical representation of facial expression in the amygdala</article-title>. <source>Neuropsychologia</source> <volume>56</volume>, <fpage>47</fpage>&#x02013;<lpage>52</lpage>. <pub-id pub-id-type="doi">10.1016/j.neuropsychologia.2014.01.005</pub-id><pub-id pub-id-type="pmid">24447769</pub-id></citation>
</ref>
<ref id="B31">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Hasselmo</surname> <given-names>M. E.</given-names></name> <name><surname>Rolls</surname> <given-names>E. T.</given-names></name> <name><surname>Baylis</surname> <given-names>G. C.</given-names></name></person-group> (<year>1989</year>). <article-title>The role of expression and identity in the face-selective response of neurons in the temporal visual cortex of the monkey</article-title>. <source>Behav. Brain Res.</source> <volume>32</volume>, <fpage>203</fpage>&#x02013;<lpage>218</lpage>. <pub-id pub-id-type="doi">10.1016/S0166-4328(89)80054-3</pub-id><pub-id pub-id-type="pmid">2713076</pub-id></citation>
</ref>
<ref id="B32">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Haxby</surname> <given-names>J. V.</given-names></name> <name><surname>Hoffman</surname> <given-names>E. A.</given-names></name> <name><surname>Gobbini</surname> <given-names>M. I.</given-names></name></person-group> (<year>2000</year>). <article-title>The distributed human neural system for face perception</article-title>. <source>Trends Cogn. Sci.</source> <volume>4</volume>, <fpage>223</fpage>&#x02013;<lpage>233</lpage>. <pub-id pub-id-type="doi">10.1016/S1364-6613(00)01482-0</pub-id><pub-id pub-id-type="pmid">10827445</pub-id></citation>
</ref>
<ref id="B33">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Hsu</surname> <given-names>S. M.</given-names></name> <name><surname>Young</surname> <given-names>A. W.</given-names></name></person-group> (<year>2004</year>). <article-title>Adaptation effects in facial expression recognition</article-title>. <source>Vis. Cogn.</source> <volume>11</volume>, <fpage>871</fpage>&#x02013;<lpage>899</lpage>. <pub-id pub-id-type="doi">10.1080/13506280444000030</pub-id></citation>
</ref>
<ref id="B34">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Izard</surname> <given-names>C. E.</given-names></name></person-group> (<year>1971</year>). <source>The Face of Emotion</source>. <publisher-loc>New York, NY</publisher-loc>: <publisher-name>Appleton-Century-Crofts</publisher-name>.</citation>
</ref>
<ref id="B35">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Juricevic</surname> <given-names>I.</given-names></name> <name><surname>Webster</surname> <given-names>M. A.</given-names></name></person-group> (<year>2012</year>). <article-title>Selectivity of face aftereffects for expressions and anti-expressions</article-title>. <source>Front. Psychol.</source> <volume>3</volume>:<issue>4</issue>. <pub-id pub-id-type="doi">10.3389/fpsyg.2012.00004</pub-id><pub-id pub-id-type="pmid">22291677</pub-id></citation>
</ref>
<ref id="B36">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Kanade</surname> <given-names>T.</given-names></name> <name><surname>Cohn</surname> <given-names>J. F.</given-names></name> <name><surname>Tian</surname> <given-names>Y.</given-names></name></person-group> (<year>2000</year>). <article-title>Comprehensive database for facial expression analysis</article-title>, in <source>Proceedings of the Fourth IEEE International Conference on Automatic Face and Gesture Recognition</source> (<publisher-loc>Grenoble</publisher-loc>), <fpage>46</fpage>&#x02013;<lpage>53</lpage>.</citation>
</ref>
<ref id="B37">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Kaufmann</surname> <given-names>J. M.</given-names></name> <name><surname>Schweinberger</surname> <given-names>S. R.</given-names></name></person-group> (<year>2004</year>). <article-title>Expression influences the recognition of familiar faces</article-title>. <source>Perception</source> <volume>33</volume>, <fpage>399</fpage>&#x02013;<lpage>408</lpage>. <pub-id pub-id-type="doi">10.1068/p5083</pub-id><pub-id pub-id-type="pmid">15222388</pub-id></citation>
</ref>
<ref id="B38">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Lee</surname> <given-names>R. K.</given-names></name> <name><surname>B&#x000FC;lthoff</surname> <given-names>I.</given-names></name> <name><surname>Armann</surname> <given-names>R.</given-names></name> <name><surname>Wallraven</surname> <given-names>C.</given-names></name> <name><surname>B&#x000FC;lthoff</surname> <given-names>H.</given-names></name></person-group> (<year>2011</year>). <article-title>Investigating the other-race effect in different face recognition tasks</article-title>. <source>i-Perception</source> <volume>2</volume>:<fpage>355</fpage>. <pub-id pub-id-type="doi">10.1068/ic355</pub-id></citation>
</ref>
<ref id="B39">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Leopold</surname> <given-names>D. A.</given-names></name> <name><surname>O&#x00027;Toole</surname> <given-names>A. J.</given-names></name> <name><surname>Vetter</surname> <given-names>T.</given-names></name> <name><surname>Blanz</surname> <given-names>V.</given-names></name></person-group> (<year>2001</year>). <article-title>Prototype-referenced shape encoding revealed by high-level aftereffects</article-title>. <source>Nat. Neurosci.</source> <volume>4</volume>, <fpage>89</fpage>&#x02013;<lpage>94</lpage>. <pub-id pub-id-type="doi">10.1038/82947</pub-id><pub-id pub-id-type="pmid">11135650</pub-id></citation>
</ref>
<ref id="B40">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Leopold</surname> <given-names>D. A.</given-names></name> <name><surname>Rhodes</surname> <given-names>G.</given-names></name> <name><surname>M&#x000FC;ller</surname> <given-names>K. M.</given-names></name> <name><surname>Jeffery</surname> <given-names>L.</given-names></name></person-group> (<year>2005</year>). <article-title>The dynamics of visual adaptation to faces</article-title>. <source>Proc. R. Soc. Lond. B</source> <volume>272</volume>, <fpage>897</fpage>&#x02013;<lpage>904</lpage>. <pub-id pub-id-type="doi">10.1098/rspb.2004.3022</pub-id><pub-id pub-id-type="pmid">16024343</pub-id></citation>
</ref>
<ref id="B41">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Meeker</surname> <given-names>W. Q.</given-names></name> <name><surname>Escobar</surname> <given-names>L. A.</given-names></name></person-group> (<year>1995</year>). <article-title>Teaching about approximate confidence regions based on maximum likelihood estimation</article-title>. <source>Am. Stat.</source> <volume>49</volume>, <fpage>48</fpage>&#x02013;<lpage>53</lpage>.</citation>
</ref>
<ref id="B42">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Morris</surname> <given-names>J. S.</given-names></name> <name><surname>Frith</surname> <given-names>C. D.</given-names></name> <name><surname>Perrett</surname> <given-names>D. I.</given-names></name> <name><surname>Rowland</surname> <given-names>D.</given-names></name> <name><surname>Young</surname> <given-names>A. W.</given-names></name> <name><surname>Calder</surname> <given-names>A. J.</given-names></name> <etal/></person-group>. (<year>1996</year>). <article-title>A differential neural response in the human amygdala to fearful and happy facial expressions</article-title>. <source>Nature</source> <volume>383</volume>, <fpage>812</fpage>&#x02013;<lpage>815</lpage>. <pub-id pub-id-type="doi">10.1038/383812a0</pub-id><pub-id pub-id-type="pmid">8893004</pub-id></citation>
</ref>
<ref id="B43">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Neth</surname> <given-names>D.</given-names></name> <name><surname>Martinez</surname> <given-names>A. M.</given-names></name></person-group> (<year>2009</year>). <article-title>Emotion perception in emotionless face images suggests a norm-based representation</article-title>. <source>J. Vis.</source> <volume>9</volume>, <fpage>5.1</fpage>&#x02013;<lpage>11</lpage>. <pub-id pub-id-type="doi">10.1167/9.1.5</pub-id><pub-id pub-id-type="pmid">19271875</pub-id></citation>
</ref>
<ref id="B44">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Palermo</surname> <given-names>R.</given-names></name> <name><surname>Willis</surname> <given-names>M. L.</given-names></name> <name><surname>Rivolta</surname> <given-names>D.</given-names></name> <name><surname>McKone</surname> <given-names>E.</given-names></name> <name><surname>Wilson</surname> <given-names>C. E.</given-names></name> <name><surname>Andrew</surname> <given-names>J. C.</given-names></name></person-group> (<year>2011</year>). <article-title>Impaired holistic coding of facial expression and facial identity in congenital prosopagnosia</article-title>. <source>Neuropsychology</source> <volume>49</volume>, <fpage>1226</fpage>&#x02013;<lpage>1235</lpage>. <pub-id pub-id-type="doi">10.1016/j.neuropsychologia.2011.02.021</pub-id><pub-id pub-id-type="pmid">21333662</pub-id></citation>
</ref>
<ref id="B45">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Pell</surname> <given-names>J. P.</given-names></name> <name><surname>Richards</surname> <given-names>A.</given-names></name></person-group> (<year>2013</year>). <article-title>Overlapping facial expression representations are identity-dependent</article-title>. <source>Vision Res.</source> <volume>79</volume>, <fpage>1</fpage>&#x02013;<lpage>7</lpage>. <pub-id pub-id-type="doi">10.1016/j.visres.2012.12.009</pub-id><pub-id pub-id-type="pmid">23274648</pub-id></citation>
</ref>
<ref id="B46">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Phillips</surname> <given-names>M. L.</given-names></name> <name><surname>Young</surname> <given-names>A. W.</given-names></name> <name><surname>Senior</surname> <given-names>C.</given-names></name> <name><surname>Brammer</surname> <given-names>M.</given-names></name> <name><surname>Andrew</surname> <given-names>C.</given-names></name> <name><surname>Calder</surname> <given-names>A. J.</given-names></name> <etal/></person-group>. (<year>1997</year>). <article-title>A specific neural substrate for perceiving facial expressions of disgust</article-title>. <source>Nature</source> <volume>389</volume>, <fpage>495</fpage>&#x02013;<lpage>498</lpage>. <pub-id pub-id-type="doi">10.1038/39051</pub-id><pub-id pub-id-type="pmid">9333238</pub-id></citation>
</ref>
<ref id="B47">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Posamentier</surname> <given-names>M. T.</given-names></name> <name><surname>Abdi</surname> <given-names>H.</given-names></name></person-group> (<year>2003</year>). <article-title>Processing faces and facial expressions</article-title>. <source>Neuropsychol. Rev.</source> <volume>13</volume>, <fpage>113</fpage>&#x02013;<lpage>143</lpage>. <pub-id pub-id-type="doi">10.1023/A:1025519712569</pub-id><pub-id pub-id-type="pmid">14584908</pub-id></citation>
</ref>
<ref id="B48">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Rhodes</surname> <given-names>G.</given-names></name> <name><surname>Jeffery</surname> <given-names>L.</given-names></name> <name><surname>Watson</surname> <given-names>T. L.</given-names></name> <name><surname>Clifford</surname> <given-names>C. W. G.</given-names></name> <name><surname>Nakayama</surname> <given-names>K.</given-names></name></person-group> (<year>2003</year>). <article-title>Fitting the mind to the world: face adaptation and attractiveness aftereffects</article-title>. <source>Psychol. Sci.</source> <volume>14</volume>, <fpage>558</fpage>&#x02013;<lpage>566</lpage>. <pub-id pub-id-type="doi">10.1046/j.0956-7976.2003.psci_1465.x</pub-id><pub-id pub-id-type="pmid">14629686</pub-id></citation>
</ref>
<ref id="B49">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Rozin</surname> <given-names>P.</given-names></name> <name><surname>Lowery</surname> <given-names>L.</given-names></name> <name><surname>Ebert</surname> <given-names>R.</given-names></name></person-group> (<year>1994</year>). <article-title>Varieties of disgust faces and the structure of disgust</article-title>. <source>J. Pers. Soc. Psychol.</source> <volume>66</volume>, <fpage>870</fpage>&#x02013;<lpage>881</lpage>. <pub-id pub-id-type="doi">10.1037/0022-3514.66.5.870</pub-id><pub-id pub-id-type="pmid">8014832</pub-id></citation>
</ref>
<ref id="B50">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Schweinberger</surname> <given-names>S. R.</given-names></name> <name><surname>Burton</surname> <given-names>A. M.</given-names></name> <name><surname>Kelly</surname> <given-names>S. W.</given-names></name></person-group> (<year>1999</year>). <article-title>Asymmetric dependencies in perceiving identity and emotion: experiments with morphed faces</article-title>. <source>Percept. Psychophys.</source> <volume>61</volume>, <fpage>1102</fpage>&#x02013;<lpage>1115</lpage>. <pub-id pub-id-type="doi">10.3758/BF03207617</pub-id><pub-id pub-id-type="pmid">10497431</pub-id></citation>
</ref>
<ref id="B51">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Schweinberger</surname> <given-names>S. R.</given-names></name> <name><surname>Soukup</surname> <given-names>G. R.</given-names></name></person-group> (<year>1998</year>). <article-title>Asymmetric relationships among perceptions of facial identity, emotion, and facial speech</article-title>. <source>J. Exp. Psychol. Hum. Percept. Perform.</source> <volume>24</volume>, <fpage>1748</fpage>&#x02013;<lpage>1765</lpage>. <pub-id pub-id-type="doi">10.1037/0096-1523.24.6.1748</pub-id><pub-id pub-id-type="pmid">9861721</pub-id></citation>
</ref>
<ref id="B52">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Skinner</surname> <given-names>A. L.</given-names></name> <name><surname>Benton</surname> <given-names>C. P.</given-names></name></person-group> (<year>2010</year>). <article-title>Anti-expression aftereffects reveal prototype-referenced coding of facial expressions</article-title>. <source>Psychol. Sci.</source> <volume>21</volume>, <fpage>1248</fpage>&#x02013;<lpage>1253</lpage>. <pub-id pub-id-type="doi">10.1177/0956797610380702</pub-id><pub-id pub-id-type="pmid">20713632</pub-id></citation>
</ref>
<ref id="B53">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Sugase</surname> <given-names>Y.</given-names></name> <name><surname>Yamane</surname> <given-names>S. Y.</given-names></name> <name><surname>Ueno</surname> <given-names>S.</given-names></name> <name><surname>Kawano</surname> <given-names>K.</given-names></name></person-group> (<year>1999</year>). <article-title>Global and fine information coded by single neurons in the temporal visual cortex</article-title>. <source>Nature</source> <volume>400</volume>, <fpage>869</fpage>&#x02013;<lpage>873</lpage>. <pub-id pub-id-type="doi">10.1038/23703</pub-id><pub-id pub-id-type="pmid">10476965</pub-id></citation>
</ref>
<ref id="B54">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Thielscher</surname> <given-names>A.</given-names></name> <name><surname>Pessoa</surname> <given-names>L.</given-names></name></person-group> (<year>2007</year>). <article-title>Neural correlates of perceptual choice and decision making during fear-disgust discrimination</article-title>. <source>J. Neurosci.</source> <volume>27</volume>, <fpage>2908</fpage>&#x02013;<lpage>2917</lpage>. <pub-id pub-id-type="doi">10.1523/JNEUROSCI.3024-06.2007</pub-id><pub-id pub-id-type="pmid">17360913</pub-id></citation>
</ref>
<ref id="B55">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Van den Stock</surname> <given-names>J.</given-names></name> <name><surname>de Gelder</surname> <given-names>B.</given-names></name></person-group> (<year>2012</year>). <article-title>Emotional information in body and background hampers recognition memory for faces</article-title>. <source>Neurobiol. Learn Mem.</source> <volume>97</volume>, <fpage>321</fpage>&#x02013;<lpage>325</lpage>. <pub-id pub-id-type="doi">10.1016/j.nlm.2012.01.007</pub-id><pub-id pub-id-type="pmid">22406473</pub-id></citation>
</ref>
<ref id="B56">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Van den Stock</surname> <given-names>J.</given-names></name> <name><surname>de Gelder</surname> <given-names>B.</given-names></name></person-group> (<year>2014</year>). <article-title>Face identity matching is influenced by emotions conveyed by face and body</article-title>. <source>Front. Hum. Neurosci.</source> <volume>8</volume>:<issue>53</issue>. <pub-id pub-id-type="doi">10.3389/fnhum.2014.00053</pub-id><pub-id pub-id-type="pmid">24574994</pub-id></citation>
</ref>
<ref id="B57">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Vida</surname> <given-names>M. A.</given-names></name> <name><surname>Mondloch</surname> <given-names>C. J.</given-names></name></person-group> (<year>2009</year>). <article-title>Children&#x00027;s representations of facial expression and identity: identity-contingent expression aftereffects</article-title>. <source>J. Exp. Child Psychol.</source> <volume>104</volume>, <fpage>326</fpage>&#x02013;<lpage>345</lpage>. <pub-id pub-id-type="doi">10.1016/j.jecp.2009.06.003</pub-id><pub-id pub-id-type="pmid">19632689</pub-id></citation>
</ref>
<ref id="B58">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Wang</surname> <given-names>Y. M.</given-names></name> <name><surname>Fu</surname> <given-names>X. L.</given-names></name> <name><surname>Johnston</surname> <given-names>R. A.</given-names></name> <name><surname>Yan</surname> <given-names>Z.</given-names></name></person-group> (<year>2013</year>). <article-title>Discriminability effect on Garner interference: evidence from recognition of facial identity and expression</article-title>. <source>Front. Psychol.</source> <volume>4</volume>:<issue>943</issue>. <pub-id pub-id-type="doi">10.3389/fpsyg.2013.00943</pub-id><pub-id pub-id-type="pmid">24391609</pub-id></citation>
</ref>
<ref id="B59">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Webster</surname> <given-names>M. A.</given-names></name> <name><surname>Kaping</surname> <given-names>D.</given-names></name> <name><surname>Mizokami</surname> <given-names>Y.</given-names></name> <name><surname>Duhamel</surname> <given-names>P.</given-names></name></person-group> (<year>2004</year>). <article-title>Adaptation to natural facial categories</article-title>. <source>Nature</source> <volume>428</volume>, <fpage>557</fpage>&#x02013;<lpage>561</lpage>. <pub-id pub-id-type="doi">10.1038/nature02420</pub-id><pub-id pub-id-type="pmid">15058304</pub-id></citation>
</ref>
<ref id="B60">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Webster</surname> <given-names>M. A.</given-names></name> <name><surname>MacLeod</surname> <given-names>D. I.</given-names></name></person-group> (<year>2011</year>). <article-title>Visual adaptation and face perception</article-title>. <source>Philos. Trans. R. Soc. Lond. B. Biol. Sci.</source> <volume>366</volume>, <fpage>1702</fpage>&#x02013;<lpage>1725</lpage>. <pub-id pub-id-type="doi">10.1098/rstb.2010.0360</pub-id><pub-id pub-id-type="pmid">21536555</pub-id></citation>
</ref>
<ref id="B61">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Winston</surname> <given-names>J. S.</given-names></name> <name><surname>Henson</surname> <given-names>R. N. A.</given-names></name> <name><surname>Fine-Goulden</surname> <given-names>M. R.</given-names></name> <name><surname>Dolan</surname> <given-names>R. J.</given-names></name></person-group> (<year>2004</year>). <article-title>fMRI adaptation reveals dissociable neural representations of identity and expression in face perception</article-title>. <source>J. Neurophysiol.</source> <volume>92</volume>, <fpage>1830</fpage>&#x02013;<lpage>1839</lpage>. <pub-id pub-id-type="doi">10.1152/jn.00155.2004</pub-id><pub-id pub-id-type="pmid">15115795</pub-id></citation>
</ref>
<ref id="B62">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Xu</surname> <given-names>H.</given-names></name> <name><surname>Dayan</surname> <given-names>P.</given-names></name> <name><surname>Lipkin</surname> <given-names>R. M.</given-names></name> <name><surname>Qian</surname> <given-names>N.</given-names></name></person-group> (<year>2008</year>). <article-title>Adaptation across the cortical hierarchy: low-level curve adaptation affects high-level facial expressions</article-title>. <source>J. Neurosci.</source> <volume>28</volume>, <fpage>3374</fpage>&#x02013;<lpage>3383</lpage>. <pub-id pub-id-type="doi">10.1523/JNEUROSCI.0182-08.2008</pub-id><pub-id pub-id-type="pmid">18367604</pub-id></citation>
</ref>
<ref id="B63">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Xu</surname> <given-names>H.</given-names></name> <name><surname>Liu</surname> <given-names>P.</given-names></name> <name><surname>Dayan</surname> <given-names>P.</given-names></name> <name><surname>Qian</surname> <given-names>N.</given-names></name></person-group> (<year>2012</year>). <article-title>Multi-level visual adaptation: dissociating curvature and facial-expression aftereffects produced by the same adapting stimuli</article-title>. <source>Vision Res.</source> <volume>72</volume>, <fpage>42</fpage>&#x02013;<lpage>53</lpage>. <pub-id pub-id-type="doi">10.1016/j.visres.2012.09.003</pub-id><pub-id pub-id-type="pmid">23000272</pub-id></citation>
</ref>
<ref id="B64">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Yamashita</surname> <given-names>J. A.</given-names></name> <name><surname>Hardy</surname> <given-names>J. L.</given-names></name> <name><surname>De Valois</surname> <given-names>K. K.</given-names></name> <name><surname>Webster</surname> <given-names>M. A.</given-names></name></person-group> (<year>2005</year>). <article-title>Stimulus selectivity of figural aftereffects for faces</article-title>. <source>J. Exp. Psychol. Hum. Percept. Perform.</source> <volume>31</volume>, <fpage>420</fpage>&#x02013;<lpage>437</lpage>. <pub-id pub-id-type="doi">10.1037/0096-1523.31.3.420</pub-id><pub-id pub-id-type="pmid">15982123</pub-id></citation>
</ref>
<ref id="B65">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Young</surname> <given-names>A. W.</given-names></name> <name><surname>Newcombe</surname> <given-names>F.</given-names></name> <name><surname>de Haan</surname> <given-names>E. H. F.</given-names></name> <name><surname>Small</surname> <given-names>M.</given-names></name> <name><surname>Hay</surname> <given-names>D. C.</given-names></name></person-group> (<year>1993</year>). <article-title>Face perception after brain injury: selective impairments affecting identity and expression</article-title>. <source>Brain</source> <volume>116</volume>, <fpage>941</fpage>&#x02013;<lpage>959</lpage>. <pub-id pub-id-type="doi">10.1093/brain/116.4.941</pub-id><pub-id pub-id-type="pmid">8353717</pub-id></citation>
</ref>
<ref id="B66">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Zhao</surname> <given-names>L.</given-names></name> <name><surname>Chubb</surname> <given-names>C.</given-names></name></person-group> (<year>2001</year>). <article-title>The size-tuning of the face-distortion aftereffect</article-title>. <source>Vision Res.</source> <volume>41</volume>, <fpage>2979</fpage>&#x02013;<lpage>2994</lpage>. <pub-id pub-id-type="doi">10.1016/S0042-6989(01)00202-4</pub-id><pub-id pub-id-type="pmid">11704237</pub-id></citation>
</ref>
</ref-list>
<fn-group>
<fn id="fn0001"><p><sup>1</sup>The JNDs of Caucasian and Asian faces were measured using the Method of Adjustment (Gescheider, <xref ref-type="bibr" rid="B26">1997</xref>). Two Asian faces displaying a happy or an angry expression were selected from the CAS-PEAL Face Database (Gao et al., <xref ref-type="bibr" rid="B25">2008</xref>) and visually matched in expression intensity with the Caucasian faces. For the Asian and Caucasian faces, respectively, we generated 101 images spanning the happy&#x02013;angry continuum from 0% (happiest) to 100% (angriest) in steps of 1%. Among these images, those at the 20, 40, 50, 60, and 80% levels served as references. In each trial, the test face and one of the references were presented side by side at the center of the screen, and the subject adjusted the expression strength of the test face in steps of 1% until it appeared perceptually identical to the reference face. The standard deviation of the distribution of adjustments over 20 trials was taken as an estimate of the difference threshold (JND) for each reference level. The JNDs averaged across the five reference levels were 6.72% (SEM &#x0003D; 0.33%) for the Caucasian faces and 6.56% (SEM &#x0003D; 0.40%) for the Asian faces. A two-way repeated-measures ANOVA with reference level and race as within-subject factors showed neither a main effect of race [<italic>F</italic><sub>(1, 5)</sub> &#x0003D; 0.78, <italic>p</italic> &#x0003D; 0.42] nor an interaction between race and reference level [<italic>F</italic><sub>(4, 20)</sub> &#x0003D; 2.12, <italic>p</italic> &#x0003D; 0.11].</p></fn>
</fn-group>
</back>
</article>
