<?xml version="1.0" encoding="utf-8"?>
<!DOCTYPE article PUBLIC "-//NLM//DTD Journal Publishing DTD v2.3 20070202//EN" "journalpublishing.dtd">
<article xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" article-type="research-article" dtd-version="2.3">
<front>
<journal-meta>
<journal-id journal-id-type="publisher-id">Front. Psychol.</journal-id>
<journal-title>Frontiers in Psychology</journal-title>
<abbrev-journal-title abbrev-type="pubmed">Front. Psychol.</abbrev-journal-title>
<issn pub-type="epub">1664-1078</issn>
<publisher>
<publisher-name>Frontiers Media S.A.</publisher-name>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="doi">10.3389/fpsyg.2020.565169</article-id>
<article-categories>
<subj-group subj-group-type="heading">
<subject>Psychology</subject>
<subj-group>
<subject>Original Research</subject>
</subj-group>
</subj-group>
</article-categories>
<title-group>
<article-title>Multiple Exposures Enhance Both Item Memory and Contextual Memory Over Time</article-title>
</title-group>
<contrib-group>
<contrib contrib-type="author">
<name>
<surname>Chen</surname>
<given-names>Haoyu</given-names>
</name>
<xref rid="aff1" ref-type="aff"><sup>1</sup></xref>
<xref rid="aff2" ref-type="aff"><sup>2</sup></xref>
<uri xlink:href="https://loop.frontiersin.org/people/610388/overview"/>
</contrib>
<contrib contrib-type="author" corresp="yes">
<name>
<surname>Yang</surname>
<given-names>Jiongjiong</given-names>
</name>
<xref rid="aff1" ref-type="aff"><sup>1</sup></xref>
<xref rid="aff2" ref-type="aff"><sup>2</sup></xref>
<xref rid="c001" ref-type="corresp"><sup>&#x002A;</sup></xref>
<uri xlink:href="https://loop.frontiersin.org/people/70551/overview"/>
</contrib>
</contrib-group>
<aff id="aff1"><sup>1</sup><institution>School of Psychological and Cognitive Sciences, Peking University</institution>, <addr-line>Beijing</addr-line>, <country>China</country></aff>
<aff id="aff2"><sup>2</sup><institution>Beijing Key Laboratory of Behavior and Mental Health, Peking University</institution>, <addr-line>Beijing</addr-line>, <country>China</country></aff>
<author-notes>
<fn id="fn1" fn-type="edited-by"><p>Edited by: Timothy L. Hubbard, Arizona State University, United States</p></fn>
<fn id="fn2" fn-type="edited-by"><p>Reviewed by: Valerio Santangelo, University of Perugia, Italy; Daniele Saraulli, LUMSA University, Italy</p></fn>
<corresp id="c001">&#x002A;Correspondence: Jiongjiong Yang, <email>yangjj@pku.edu.cn</email></corresp>
<fn id="fn3" fn-type="other"><p>This article was submitted to Cognition, a section of the journal Frontiers in Psychology</p></fn>
</author-notes>
<pub-date pub-type="epub">
<day>01</day>
<month>12</month>
<year>2020</year>
</pub-date>
<pub-date pub-type="collection">
<year>2020</year>
</pub-date>
<volume>11</volume>
<elocation-id>565169</elocation-id>
<history>
<date date-type="received">
<day>24</day>
<month>05</month>
<year>2020</year>
</date>
<date date-type="accepted">
<day>03</day>
<month>11</month>
<year>2020</year>
</date>
</history>
<permissions>
<copyright-statement>Copyright &#x00A9; 2020 Chen and Yang.</copyright-statement>
<copyright-year>2020</copyright-year>
<copyright-holder>Chen and Yang</copyright-holder>
<license xlink:href="http://creativecommons.org/licenses/by/4.0/">
<p>This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.</p>
</license>
</permissions>
<abstract>
<p>Repetition learning is an efficient way to enhance memory performance in our daily lives and in educational practice. However, it is unclear to what extent repetition, or multiple exposures, modulates different types of memory over time. Inconsistent findings on this question may be associated with encoding strategy. In this study, participants were presented with pairs of pictures (same, similar, and different) once (see section &#x201C;Experiment 1&#x201D;) or three times (see section &#x201C;Experiment 2&#x201D;) and were asked to make a same/similar/different judgment. In this way, elaborative encoding was required more for the &#x201C;same&#x201D; and &#x201C;similar&#x201D; conditions than for the &#x201C;different&#x201D; condition. Then, after intervals of 10 min, 1 day, and 1 week, participants performed a recognition test in which they discriminated repeated pictures from similar ones, followed by a remember/know/guess assessment and a contextual judgment. The results showed that after learning the objects three times, both item memory and contextual memory improved. Multiple exposures enhanced the hit rate for the &#x201C;same&#x201D; and &#x201C;similar&#x201D; conditions but did not change the false alarm rate significantly. Recollection, rather than familiarity, contributed to the repetition effect. In addition, the memory enhancement was manifested in each encoding condition and retention interval, especially for the &#x201C;same&#x201D; condition and at the 10-min and 1-day intervals. These results clarify how repetition influences item and contextual memories during discriminative learning and suggest that, when elaborative encoding is emphasized, multiple exposures make details more vividly remembered and better retained over time.</p>
</abstract>
<kwd-group>
<kwd>episodic memory</kwd>
<kwd>repetition</kwd>
<kwd>contextual memory</kwd>
<kwd>recollection</kwd>
<kwd>discriminative learning</kwd>
</kwd-group>
<contract-num rid="cn1">31571114</contract-num>
<contract-sponsor id="cn1">National Natural Science Foundation of China<named-content content-type="fundref-id">10.13039/501100001809</named-content>
</contract-sponsor>
<counts>
<fig-count count="6"/>
<table-count count="2"/>
<equation-count count="0"/>
<ref-count count="58"/>
<page-count count="14"/>
<word-count count="10883"/>
</counts>
</article-meta>
</front>
<body>
<sec id="sec1" sec-type="intro">
<title>Introduction</title>
<p>In our everyday lives, we usually have to remember a large number of events and general knowledge. How to improve our memory ability is one of the central issues in memory research. Repetition learning is an efficient way to enhance memory performance (<xref ref-type="bibr" rid="ref14">Ebbinghaus, 1964</xref>). Episodic memory is enhanced when an event is exposed repetitively, with detailed and vivid information remaining (<xref ref-type="bibr" rid="ref5">Cabeza and St Jacques, 2007</xref>; <xref ref-type="bibr" rid="ref54">Yang et al., 2016</xref>). Semantic memory could also be established after the knowledge is learned multiple times in the same or different contexts (<xref ref-type="bibr" rid="ref48">Vargha-Khadem et al., 1997</xref>; <xref ref-type="bibr" rid="ref16">Elward and Vargha-Khadem, 2018</xref>).</p>
<p>Generally, repetition learning triggers a reactivation process, which results in changes in memory traces (<xref ref-type="bibr" rid="ref32">Nadel and Moscovitch, 1997</xref>). However, how repetition learning changes memory representations is still under intensive debate. There are two views that account for the effect of repetition learning, or multiple exposures, on memory. One view emphasizes that through multiple exposures, memory representation transforms from hippocampus-dependent to neocortex-dependent, thus becoming more semanticized and losing fine-grained details. For example, based on the multiple trace theory (<xref ref-type="bibr" rid="ref32">Nadel and Moscovitch, 1997</xref>; <xref ref-type="bibr" rid="ref31">Moscovitch et al., 2006</xref>), <xref ref-type="bibr" rid="ref55">Yassa and Reagh (2013)</xref> further posit that each time an event is reactivated, a similar but not identical memory trace is established. The overlapping elements of these traces are assumed to become strengthened, leaving a core representation of the event (<xref ref-type="bibr" rid="ref37">Reagh and Yassa, 2014</xref>). Hence, repetition learning leads to a more general memory, with contextual details forgotten and false memory emerging over time (<xref ref-type="bibr" rid="ref55">Yassa and Reagh, 2013</xref>; <xref ref-type="bibr" rid="ref37">Reagh and Yassa, 2014</xref>; <xref ref-type="bibr" rid="ref22">Kim et al., 2019</xref>). In support of this view, enhanced memory performance is usually shown as increased hit rates accompanied by increased false alarm (FA) rates after multiple exposures (<xref ref-type="bibr" rid="ref21">Jacoby, 1999</xref>; <xref ref-type="bibr" rid="ref35">Poppenk et al., 2010</xref>; <xref ref-type="bibr" rid="ref37">Reagh and Yassa, 2014</xref>; <xref ref-type="bibr" rid="ref36">Reagh et al., 2017</xref>).
In an investigation by <xref ref-type="bibr" rid="ref37">Reagh and Yassa (2014)</xref>, after participants learned a series of objects presented once or three times, they were asked to make an old/new judgment for old, similar, and new objects. The results showed that after three exposures, general recognition was enhanced, but the ability to discriminate similar from new objects (at moderate similarity) was diminished.</p>
<p>The other view is that multiple exposures not only lead to a generalized memory representation but also enhance detailed memory by facilitating elaborative encoding. Here, the enhanced memory performance is shown as increased hit rates and decreased FA rates (e.g., <xref ref-type="bibr" rid="ref54">Yang et al., 2016</xref>; <xref ref-type="bibr" rid="ref29">McCormick-Huhn et al., 2018</xref>). In addition, the underlying processes may contribute differentially to the repetition effect. According to the dual-process model of recognition memory, both recollection and familiarity contribute to discriminating between old and new items during the test (<xref ref-type="bibr" rid="ref57">Yonelinas and Jacoby, 1995</xref>; <xref ref-type="bibr" rid="ref47">Tulving, 2002</xref>; <xref ref-type="bibr" rid="ref56">Yonelinas, 2002</xref>; <xref ref-type="bibr" rid="ref15">Eichenbaum et al., 2007</xref>; but see <xref ref-type="bibr" rid="ref13">Dunn, 2004</xref>; <xref ref-type="bibr" rid="ref50">Williams et al., 2013</xref>). In support of the elaborative view, some studies have suggested that multiple exposures enhance the recollection contribution and hence the ability to discriminate between targets and lures (e.g., <xref ref-type="bibr" rid="ref54">Yang et al., 2016</xref>; <xref ref-type="bibr" rid="ref29">McCormick-Huhn et al., 2018</xref>). The familiarity process remained relatively stable (e.g., <xref ref-type="bibr" rid="ref54">Yang et al., 2016</xref>) or was greater for older adults (<xref ref-type="bibr" rid="ref29">McCormick-Huhn et al., 2018</xref>). For example, in a study by <xref ref-type="bibr" rid="ref29">McCormick-Huhn et al. (2018)</xref>, participants learned single words from different categories once or twice. Recognition memory (for old words, lures from the same category, and new words) was then tested with a remember/know/new task 24 h later.
The results showed that, compared to learning once, repetition learning in either the same or a different context led to significant memory enhancement, and the repetition effect was driven by the recollection process.</p>
<p>One main difference between the two views is whether detailed memory is established after repetition. The inconsistent findings may be associated with encoding strategy. Studies have confirmed that different encoding tasks have significant effects on subsequent memory performance (<xref ref-type="bibr" rid="ref9">Craik and Lockhart, 1972</xref>; <xref ref-type="bibr" rid="ref8">Craik, 2002</xref>). Deeper or more elaborative encoding is associated with higher levels of memory retention. For example, when participants were asked to pay attention to specific perceptual features of an object during encoding, distinctive features of each picture were elaboratively processed, and subsequent old/new recognition performance improved (<xref ref-type="bibr" rid="ref23">Koutstaal et al., 1999</xref>). Differences in encoding strategy may also influence the effect of repetitive learning. When the encoding strategy is elaborative, repetitive learning gives participants more opportunity to process the stimuli deeply, leading to higher subsequent memory performance, including memory for detailed information. In this study, we adopted a discriminative learning paradigm to test this possibility.</p>
<p>The discriminative learning paradigm has been suggested to be an efficient means of inducing elaborative processing. In this paradigm (<xref ref-type="bibr" rid="ref58">Zhou et al., 2018</xref>), two similar objects (e.g., two dog pictures) or two different objects (e.g., a dog picture and a flower picture) were presented simultaneously, and participants were asked to make a similar/different judgment. In this case, discriminative learning refers to the process of distinguishing between two pictures, whether they belong to the same concept or to different concepts. During the test, participants were presented with an old and a similar picture and performed a two-alternative forced choice (2AFC) task. If they judged that a picture was old, they further decided under which encoding condition (similar or different) the picture had been learned (i.e., contextual memory; <xref ref-type="bibr" rid="ref58">Zhou et al., 2018</xref>). The results showed that objects in the &#x201C;similar&#x201D; condition were better remembered, in terms of both item details and contextual features, than those in the &#x201C;different&#x201D; condition. When eye movements were measured, more saccades between the two similar objects during encoding predicted higher item and contextual memory performance in the &#x201C;similar&#x201D; condition. This suggests that discriminating between &#x201C;similar&#x201D; objects triggers more elaborative encoding, which facilitates subsequent item and contextual memories.</p>
<p>An important feature of discriminative learning is that item memory is enhanced in both its detailed and general aspects when two similar objects are compared. After discriminative learning, <xref ref-type="bibr" rid="ref7">Chen et al. (2019)</xref> adopted a recognition test in which participants were presented with an old picture or a new but similar picture and asked to make an old/new judgment. The results showed that both the hit rate and the FA rate were higher for the &#x201C;similar&#x201D; than for the &#x201C;different&#x201D; condition. As similar objects were used as lures in the test, the participants had to discriminate between the old and the similar pictures; thus, memory after discriminative learning reflects a detailed representation. On the other hand, when participants have difficulty discriminating lure pictures from old ones in terms of details but still remember an object&#x2019;s concept, they judge the lures as &#x201C;old.&#x201D; A higher FA rate therefore indicates a more gist-based memory representation (<xref ref-type="bibr" rid="ref36">Reagh et al., 2017</xref>; <xref ref-type="bibr" rid="ref25">Lee et al., 2018</xref>). As discriminative learning of similar objects enhanced both detailed and gist-based memory representations, and both item and contextual memories, this paradigm is appropriate for exploring how different types of memory change after repetitive exposures.</p>
<p>In addition, when participants discriminated similar objects only once, their enhanced item and contextual memories were retained for up to 1 week (<xref ref-type="bibr" rid="ref58">Zhou et al., 2018</xref>; <xref ref-type="bibr" rid="ref7">Chen et al., 2019</xref>). However, there are intensive debates on whether multiple exposures produce slower forgetting (e.g., <xref ref-type="bibr" rid="ref43">Slamecka and McElree, 1983</xref>; <xref ref-type="bibr" rid="ref2">Bogartz, 1990</xref>; <xref ref-type="bibr" rid="ref17">Gardiner and Java, 1991</xref>; <xref ref-type="bibr" rid="ref19">Hockley and Consoli, 1999</xref>; <xref ref-type="bibr" rid="ref54">Yang et al., 2016</xref>). For example, <xref ref-type="bibr" rid="ref43">Slamecka and McElree (1983)</xref> showed that learning three times led to greater memory performance for words and word pairs than learning once, but the forgetting rate remained stable from an immediate test to 5 days later. Other studies showed that learning multiple times led to slower forgetting at shorter intervals, mainly driven by the recollection process (e.g., <xref ref-type="bibr" rid="ref54">Yang et al., 2016</xref>). Whether multiple exposures influence the forgetting rate after discriminative learning remains to be clarified. Forgetting is usually measured as the interaction between retention interval and other factors (e.g., <xref ref-type="bibr" rid="ref43">Slamecka and McElree, 1983</xref>; <xref ref-type="bibr" rid="ref17">Gardiner and Java, 1991</xref>; <xref ref-type="bibr" rid="ref19">Hockley and Consoli, 1999</xref>). However, this interaction may be influenced by initial memory performance, so the forgetting rate should be measured with initial performance controlled.</p>
<p>In summary, we applied the discriminative learning paradigm (<xref ref-type="bibr" rid="ref58">Zhou et al., 2018</xref>) to explore the effect of multiple exposures on subsequent memory over time. During encoding, two groups of participants learned the object pairs once (see section &#x201C;Experiment 1&#x201D;) or three times (see section &#x201C;Experiment 2&#x201D;) by making a same/similar/different judgment. Then, after intervals of 10 min, 1 day, and 1 week, their item memory [followed by a remember/know/guess (RKG) judgment] and contextual memory were tested. In addition to the &#x201C;similar&#x201D; and &#x201C;different&#x201D; conditions, a &#x201C;same&#x201D; condition was included. The three conditions differed in how much elaborative encoding they required. In the &#x201C;different&#x201D; condition, the two objects were conceptually different, so elaborative processing of detailed information was not necessary to make a &#x201C;different&#x201D; response. In the &#x201C;same&#x201D; and &#x201C;similar&#x201D; conditions, the two objects shared the same concept, so elaborative encoding was required to discriminate between similar objects or to confirm that the two objects were exactly the same. The three intervals were included to explore whether the repetition effect after discriminative learning persisted over time. The forgetting rate was calculated for each condition while controlling for initial memory performance.</p>
<p>The two views outlined above make different predictions about how repetition during discriminative learning affects critical parameters such as hit/FA rates and recollection/familiarity contributions. Based on the view of generalized representation, when participants discriminate objects multiple times, a more stable semantic representation of each object is established, leading to higher FA rates and a greater familiarity contribution, and contextual memory should not improve. As the memory representation becomes more semanticized and more stable, forgetting should be slower after repetition. In contrast, based on the view of elaborative encoding, multiple exposures lead to greater memory for details related to the objects and their contexts, especially in the &#x201C;same&#x201D; condition. Accordingly, the recollection contribution is enhanced. In this case, memory representations decay over time, and multiple exposures should not influence forgetting significantly.</p>
</sec>
<sec id="sec2">
<title>Experiment 1</title>
<sec id="sec3">
<title>Materials and Methods</title>
<sec id="sec4">
<title>Participants</title>
<p>Twenty-five healthy, right-handed participants (eight males) with a mean age of 20.80 &#x00B1; 2.22 years were recruited for section &#x201C;Experiment 1.&#x201D; The overall sample size was based on an <italic>a priori</italic> power analysis (G&#x002A;Power 3.1.9.6; University of Kiel, Germany) and previous studies (e.g., <xref ref-type="bibr" rid="ref54">Yang et al., 2016</xref>; <xref ref-type="bibr" rid="ref58">Zhou et al., 2018</xref>). To obtain adequate power (i.e., <italic>&#x03B1;</italic> = 0.05, 1 &#x2212; <italic>&#x03B2;</italic> = 0.95) and detect a moderate effect size (i.e., <italic>f</italic> = 0.25) for the interaction of encoding condition (3) and retention interval (3), a total sample of at least 22 participants was needed for each learning group. All of the participants were native Chinese speakers, and all provided written informed consent in accordance with the procedures and protocols approved by the Review Board of the School of Psychological and Cognitive Sciences, Peking University.</p>
</sec>
<sec id="sec5">
<title>Materials</title>
<p>Two within-subjects factors were included in the study: encoding (same, similar, and different) and retention interval (10 min, 1 day, and 1 week).</p>
<p>Seven hundred twenty objects (240 triplets) were selected from Hemera Photo Clipart and the Internet. Each triplet included three similar color pictures with the same basic concept/name (e.g., dog, tomato). The three pictures differed in dimensions such as shape, color, orientation, and number. All pictures were 640 &#x00D7; 480 pixels in size and had a white background. The 720 pictures were rated by a group of 23 participants (12 males, mean age of 22.83 &#x00B1; 2.67 years) who did not participate in the experiments. These participants named the pictures and rated their familiarity (i.e., how familiar the object felt; 1 for most familiar, 5 for most unfamiliar) and the similarity within each triplet (i.e., how similar two pictures were; 1 for most dissimilar, 5 for most similar). As each concept triplet had three similar pictures, three similarity ratings (one for each pair of pictures within a triplet) were acquired and averaged into one similarity score for each concept (<xref ref-type="bibr" rid="ref58">Zhou et al., 2018</xref>; <xref ref-type="bibr" rid="ref7">Chen et al., 2019</xref>). The naming accuracy for the pictures was 0.91 &#x00B1; 0.12, the familiarity score was 1.81 &#x00B1; 0.33, and the similarity score was 2.93 &#x00B1; 0.51.</p>
<p>All triplets were first randomly assigned to four groups (Groups A&#x2013;D), with one group used for the &#x201C;same&#x201D; condition, one for the &#x201C;similar&#x201D; condition, and the other two for the &#x201C;different&#x201D; condition (<xref ref-type="bibr" rid="ref7">Chen et al., 2019</xref>). Next, each group was divided into three sets (S1, S2, and S3), one for each retention interval. The three pictures within a triplet were used differentially during encoding and test. For the &#x201C;same&#x201D; condition (A1-A1), one picture within a triplet was learned during encoding and presented as the old picture during the test; one of the other two pictures was randomly used as the lure picture during the test (A2 or A3). For the &#x201C;similar&#x201D; condition (B1-B2), two pictures within a triplet were paired during encoding; one of them was randomly used as the old picture during the test, and the third picture was used as the lure picture (B3). For the &#x201C;different&#x201D; condition (C1-D1), one picture from each of two groups was randomly paired during encoding; one of them was used as the old picture during the test, and a picture similar to the other member of the pair was used as the lure picture (C2 or D2). The materials in groups and sets were counterbalanced (<italic>p</italic> &#x003E; 0.60) so that each picture had the same chance of being used in each condition.</p>
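The assignment scheme described above can be sketched in a few lines of Python. This is an illustrative reconstruction, not the authors' code; the function name, seed, and data structures are our assumptions, and the actual counterbalancing across participants is omitted.

```python
import random

def assign_triplets(triplet_ids, seed=0):
    """Randomly split triplet IDs into four groups (A-D), then split each
    group into three interval sets (S1-S3), as in the design above.

    Groups map onto conditions (e.g., A = "same", B = "similar",
    C + D = "different"); sets map onto the 10-min, 1-day, and 1-week tests.
    """
    rng = random.Random(seed)
    ids = list(triplet_ids)
    rng.shuffle(ids)
    per_group = len(ids) // 4
    groups = {g: ids[i * per_group:(i + 1) * per_group]
              for i, g in enumerate("ABCD")}
    # Split each group's members into three equally sized interval sets.
    sets = {g: {f"S{j + 1}": members[j::3] for j in range(3)}
            for g, members in groups.items()}
    return groups, sets

# 240 triplets -> 60 triplets per group, 20 per group per interval set.
groups, sets = assign_triplets(range(240))
```

With 240 triplets this yields 60 triplets per group and 20 per interval set, matching the 20 pairs per encoding condition tested at each interval.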
</sec>
<sec id="sec6">
<title>Procedure</title>
<p>During the encoding phase (<xref rid="fig1" ref-type="fig">Figure 1A</xref>), each picture pair was presented on the screen for 4 s, and the participants judged whether the two pictures were the same, similar, or different. All pairs were pseudorandomly ordered during the encoding phase so that no more than three stimuli from the same condition were presented consecutively. The position of the target/old pictures and the order of the three response buttons were counterbalanced across participants.</p>
<fig position="float" id="fig1">
<label>Figure 1</label>
<caption>
<p>Experimental procedure. During the encoding phase, participants were asked to judge whether the two pictures were the same, similar, or different <bold>(A)</bold>. During the test phase, the participants completed an old/new recognition task. If the judgment was &#x201C;old,&#x201D; they further made a remember/know/guess judgment and a contextual judgment <bold>(B)</bold>.</p>
</caption>
<graphic xlink:href="fpsyg-11-565169-g001.tif"/>
</fig>
<p>During the test phase, each picture was presented on the screen for 2 s, and the participants performed an old/new recognition (i.e., item memory) test and a contextual memory test. During the item memory test, they judged whether the picture was old or new as accurately and quickly as possible (<xref rid="fig1" ref-type="fig">Figure 1B</xref>). Half of the pictures were old, and the other half were new but similar to the old ones (i.e., lures). If a picture was judged as &#x201C;old,&#x201D; the participants made an RKG assessment and a contextual judgment. They responded &#x201C;remember&#x201D; (R) if they could retrieve stimulus-related details. They responded &#x201C;know&#x201D; (K) if they only felt that the picture was familiar, without any detailed information. They responded &#x201C;guess&#x201D; if they retrieved the stimulus by neither of the two aforementioned processes. During the contextual judgment task, the participants determined whether the picture had appeared in the &#x201C;same,&#x201D; &#x201C;similar,&#x201D; or &#x201C;different&#x201D; condition, followed by a confidence rating. The old and new pictures were pseudorandomly presented at each retention interval so that no more than three pictures from the same condition were presented consecutively.</p>
<p>The stimuli were presented and the participants&#x2019; responses recorded using MATLAB (MathWorks Co.) with the free Psychophysics Toolbox-3 extension. The participants learned the 180 pairs once and then performed the item memory and contextual memory tests at the three retention intervals (60 objects per interval, 20 pairs per encoding condition). Before each test phase, to avoid rehearsal from the study phase, the participants were asked to count backward by sevens from 1,000 for 5 min. The participants had separate opportunities to practice encoding and test trials before the formal phases. In particular, to ensure that they followed the instructions for the RKG procedure, they specifically practiced this part with feedback from the experimenters.</p>
</sec>
<sec id="sec7">
<title>Data Analysis</title>
<p>The hit rate, FA rate, corrected recognition (hit &#x2212; FA), and the accuracy of contextual memory were analyzed using repeated-measures analyses of variance (ANOVAs) with retention interval (10 min, 1 day, and 1 week) and encoding condition (same, similar, and different) as within-subjects factors in SPSS. The accuracy of contextual memory was calculated as the number of correct contextual judgment trials out of the total number of trials in each condition (<xref ref-type="bibr" rid="ref7">Chen et al., 2019</xref>). The forgetting rate was estimated by the interaction between retention interval and encoding condition (<xref ref-type="bibr" rid="ref43">Slamecka and McElree, 1983</xref>; <xref ref-type="bibr" rid="ref17">Gardiner and Java, 1991</xref>; <xref ref-type="bibr" rid="ref19">Hockley and Consoli, 1999</xref>). As the results for the corrected recognition and the <italic>d</italic>&#x2032; value were similar, and those for contextual memory with all trials and with high-confidence trials only were similar, only the former are reported. Partial <italic>&#x03B7;</italic><sup>2</sup> was calculated to estimate the effect size of each analysis. <italic>Post hoc</italic> pairwise comparisons were Bonferroni-corrected (<italic>p</italic> &#x003C; 0.05, two-tailed).</p>
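As a concrete illustration of the effect-size measure above, partial eta-squared can be recovered from any reported F value and its degrees of freedom. This is a generic sketch, not the authors' analysis script; the function name is ours.

```python
def partial_eta_squared(f_value, df_effect, df_error):
    """Partial eta-squared from an F statistic and its degrees of freedom:

        eta_p^2 = (F * df_effect) / (F * df_effect + df_error)
    """
    return (f_value * df_effect) / (f_value * df_effect + df_error)

# Example: an effect reported as F(2, 48) = 26.76 corresponds to
# eta_p^2 of about 0.53.
eta_example = partial_eta_squared(26.76, 2, 48)
```

This relation lets a reader check reported effect sizes directly from the F statistics given in the Results.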
<p>Recollection and familiarity processes were estimated using the independent K (IRK) procedure (<xref ref-type="bibr" rid="ref57">Yonelinas and Jacoby, 1995</xref>; <xref ref-type="bibr" rid="ref56">Yonelinas, 2002</xref>), in which R responses are assumed to estimate recollection, whereas familiarity is estimated as the proportion of K responses divided by the proportion of non-R responses. In this way, although R and K responses are mutually exclusive, recollection and familiarity are estimated independently. Both estimates were corrected for the FA rate: recollection = <italic>p</italic>(R, hit) &#x2212; <italic>p</italic>(R, FA); familiarity = <italic>p</italic>(K, hit)/[1 &#x2212; <italic>p</italic>(R, hit)] &#x2212; <italic>p</italic>(K, FA)/[1 &#x2212; <italic>p</italic>(R, FA)]. Repeated-measures ANOVA tests were applied separately to recollection and familiarity with encoding condition and retention interval as within-subjects factors (<italic>p</italic> &#x003C; 0.05, two-tailed). Partial <italic>&#x03B7;</italic><sup>2</sup> was calculated to estimate the effect size of each analysis. <italic>Post hoc</italic> pairwise comparisons were Bonferroni-corrected (<italic>p</italic> &#x003C; 0.05, two-tailed).</p>
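The corrected recognition and IRK formulas above can be written directly as a short Python sketch. The function names and example proportions are our own, chosen only to illustrate the arithmetic.

```python
def corrected_recognition(hit_rate, fa_rate):
    """Corrected recognition: hit rate minus false alarm rate."""
    return hit_rate - fa_rate

def irk_estimates(p_r_hit, p_k_hit, p_r_fa, p_k_fa):
    """Recollection and familiarity under the IRK procedure:

        recollection = p(R, hit) - p(R, FA)
        familiarity  = p(K, hit)/[1 - p(R, hit)] - p(K, FA)/[1 - p(R, FA)]
    """
    recollection = p_r_hit - p_r_fa
    familiarity = p_k_hit / (1 - p_r_hit) - p_k_fa / (1 - p_r_fa)
    return recollection, familiarity

# Hypothetical example: 50% R-hits, 20% K-hits, 10% R-FAs, 10% K-FAs.
rec, fam = irk_estimates(p_r_hit=0.50, p_k_hit=0.20, p_r_fa=0.10, p_k_fa=0.10)
```

Note that familiarity is undefined when p(R, hit) or p(R, FA) equals 1, so an analysis script would typically guard against that edge case.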
</sec>
</sec>
<sec id="sec8">
<title>Results</title>
<p>During the encoding phase, the participants judged the pairs of objects accurately (0.97 &#x00B1; 0.04) and quickly (1.31 &#x00B1; 0.21 s). The effect of encoding condition was not significant for accuracy [<italic>F</italic>(2,48) = 2.75, <italic>p</italic> = 0.11, <italic>&#x03B7;</italic><sup>2</sup> = 0.10] but was significant for reaction times [RTs; <italic>F</italic>(2,48) = 16.27, <italic>p</italic> &#x003C; 0.001, <italic>&#x03B7;</italic><sup>2</sup> = 0.40]. This was because the &#x201C;different&#x201D; pairs were judged most quickly (1.12 &#x00B1; 0.19 s) and the &#x201C;same&#x201D; pairs most slowly (1.48 &#x00B1; 0.38 s; <italic>p</italic> &#x003C; 0.001).</p>
<p>For the corrected recognition, there was a significant effect of encoding condition [<italic>F</italic>(2,48) = 26.76, <italic>p</italic> &#x003C; 0.001, <italic>&#x03B7;</italic><sup>2</sup> = 0.53; <xref rid="fig2" ref-type="fig">Figure 2A</xref>]. Further analysis showed that memory performance was highest for the &#x201C;same&#x201D; condition, followed by the &#x201C;similar&#x201D; condition, and lowest for the &#x201C;different&#x201D; condition (<italic>p</italic> &#x003C; 0.001; <xref rid="tab1" ref-type="table">Table 1</xref>). In addition, memory accuracy decreased over time [<italic>F</italic>(2,48) = 16.40, <italic>p</italic> &#x003C; 0.001, <italic>&#x03B7;</italic><sup>2</sup> = 0.41; <xref rid="fig2" ref-type="fig">Figure 2B</xref>], but the interaction between condition and interval was not significant [<italic>F</italic>(4,96) = 1.41, <italic>p</italic> = 0.24, <italic>&#x03B7;</italic><sup>2</sup> = 0.06]. These results suggest that after discriminative learning of &#x201C;same&#x201D; and &#x201C;similar&#x201D; pictures, memory performance remained at a high level over time.</p>
<fig position="float" id="fig2">
<label>Figure 2</label>
<caption>
<p>Results of the corrected recognition. Multiple exposures enhanced item memory <bold>(A,B)</bold> for each condition, with the greatest enhancement for the &#x201C;same&#x201D; condition and at the 10-min and 1-day intervals. The accuracies were averaged across intervals <bold>(A)</bold> and across encoding conditions <bold>(B)</bold> to illustrate the interactions of group and condition, and of group and interval. The error bars represent the standard errors of the means.</p>
</caption>
<graphic xlink:href="fpsyg-11-565169-g002.tif"/>
</fig>
<table-wrap position="float" id="tab1">
<label>Table 1</label>
<caption>
<p>Results for group L1.</p>
</caption>
<table frame="hsides" rules="groups">
<thead>
<tr>
<th rowspan="2"/>
<th rowspan="2"/>
<th align="center" valign="top" colspan="3">10 min</th>
<th align="center" valign="top" colspan="3">1 day</th>
<th align="center" valign="top" colspan="3">1 week</th>
</tr>
<tr>
<th align="center" valign="top">Same</th>
<th align="center" valign="top">Similar</th>
<th align="center" valign="top">Diff</th>
<th align="center" valign="top">Same</th>
<th align="center" valign="top">Similar</th>
<th align="center" valign="top">Diff</th>
<th align="center" valign="top">Same</th>
<th align="center" valign="top">Similar</th>
<th align="center" valign="top">Diff</th>
</tr>
</thead>
<tbody>
<tr>
<td align="left" valign="bottom" rowspan="2">Hit-FA</td>
<td align="left" valign="bottom">Mean</td>
<td align="left" valign="bottom">0.49</td>
<td align="left" valign="bottom">0.33</td>
<td align="left" valign="bottom">0.24</td>
<td align="left" valign="bottom">0.35</td>
<td align="left" valign="bottom">0.28</td>
<td align="left" valign="bottom">0.15</td>
<td align="left" valign="bottom">0.24</td>
<td align="left" valign="bottom">0.20</td>
<td align="left" valign="bottom">0.09</td>
</tr>
<tr>
<td align="left" valign="bottom">SD</td>
<td align="left" valign="bottom">0.23</td>
<td align="left" valign="bottom">0.22</td>
<td align="left" valign="bottom">0.24</td>
<td align="left" valign="bottom">0.20</td>
<td align="left" valign="bottom">0.13</td>
<td align="left" valign="bottom">0.16</td>
<td align="left" valign="bottom">0.19</td>
<td align="left" valign="bottom">0.20</td>
<td align="left" valign="bottom">0.13</td>
</tr>
<tr>
<td align="left" valign="bottom" rowspan="2">Hit</td>
<td align="left" valign="bottom">Mean</td>
<td align="left" valign="bottom">0.77</td>
<td align="left" valign="bottom">0.75</td>
<td align="left" valign="bottom">0.51</td>
<td align="left" valign="bottom">0.64</td>
<td align="left" valign="bottom">0.70</td>
<td align="left" valign="bottom">0.43</td>
<td align="left" valign="bottom">0.49</td>
<td align="left" valign="bottom">0.49</td>
<td align="left" valign="bottom">0.31</td>
</tr>
<tr>
<td align="left" valign="bottom">SD</td>
<td align="left" valign="bottom">0.14</td>
<td align="left" valign="bottom">0.10</td>
<td align="left" valign="bottom">0.17</td>
<td align="left" valign="bottom">0.15</td>
<td align="left" valign="bottom">0.15</td>
<td align="left" valign="bottom">0.17</td>
<td align="left" valign="bottom">0.19</td>
<td align="left" valign="bottom">0.19</td>
<td align="left" valign="bottom">0.20</td>
</tr>
<tr>
<td align="left" valign="bottom" rowspan="2">FA</td>
<td align="left" valign="bottom">Mean</td>
<td align="left" valign="bottom">0.28</td>
<td align="left" valign="bottom">0.42</td>
<td align="left" valign="bottom">0.27</td>
<td align="left" valign="bottom">0.30</td>
<td align="left" valign="bottom">0.41</td>
<td align="left" valign="bottom">0.28</td>
<td align="left" valign="bottom">0.25</td>
<td align="left" valign="bottom">0.29</td>
<td align="left" valign="bottom">0.22</td>
</tr>
<tr>
<td align="left" valign="bottom">SD</td>
<td align="left" valign="bottom">0.15</td>
<td align="left" valign="bottom">0.18</td>
<td align="left" valign="bottom">0.15</td>
<td align="left" valign="bottom">0.17</td>
<td align="left" valign="bottom">0.15</td>
<td align="left" valign="bottom">0.18</td>
<td align="left" valign="bottom">0.15</td>
<td align="left" valign="bottom">0.16</td>
<td align="left" valign="bottom">0.18</td>
</tr>
<tr>
<td align="left" valign="bottom" rowspan="2">Contextual memory</td>
<td align="left" valign="bottom">Mean</td>
<td align="left" valign="bottom">0.54</td>
<td align="left" valign="bottom">0.65</td>
<td align="left" valign="bottom">0.34</td>
<td align="left" valign="bottom">0.29</td>
<td align="left" valign="bottom">0.55</td>
<td align="left" valign="bottom">0.21</td>
<td align="left" valign="bottom">0.14</td>
<td align="left" valign="bottom">0.31</td>
<td align="left" valign="bottom">0.11</td>
</tr>
<tr>
<td align="left" valign="bottom">SD</td>
<td align="left" valign="bottom">0.22</td>
<td align="left" valign="bottom">0.13</td>
<td align="left" valign="bottom">0.21</td>
<td align="left" valign="bottom">0.18</td>
<td align="left" valign="bottom">0.17</td>
<td align="left" valign="bottom">0.15</td>
<td align="left" valign="bottom">0.11</td>
<td align="left" valign="bottom">0.16</td>
<td align="left" valign="bottom">0.09</td>
</tr>
<tr>
<td align="left" valign="middle" rowspan="2">Recollection</td>
<td align="left" valign="bottom">Mean</td>
<td align="left" valign="bottom">0.38</td>
<td align="left" valign="bottom">0.28</td>
<td align="left" valign="bottom">0.14</td>
<td align="left" valign="bottom">0.20</td>
<td align="left" valign="bottom">0.18</td>
<td align="left" valign="bottom">0.05</td>
<td align="left" valign="bottom">0.05</td>
<td align="left" valign="bottom">0.06</td>
<td align="left" valign="bottom">0.00</td>
</tr>
<tr>
<td align="left" valign="bottom">SD</td>
<td align="left" valign="bottom">0.23</td>
<td align="left" valign="bottom">0.18</td>
<td align="left" valign="bottom">0.15</td>
<td align="left" valign="bottom">0.17</td>
<td align="left" valign="bottom">0.14</td>
<td align="left" valign="bottom">0.09</td>
<td align="left" valign="bottom">0.08</td>
<td align="left" valign="bottom">0.09</td>
<td align="left" valign="bottom">0.03</td>
</tr>
<tr>
<td align="left" valign="middle" rowspan="2">Familiarity</td>
<td align="left" valign="bottom">Mean</td>
<td align="left" valign="bottom">0.25</td>
<td align="left" valign="bottom">0.10</td>
<td align="left" valign="bottom">0.11</td>
<td align="left" valign="bottom">0.19</td>
<td align="left" valign="bottom">0.14</td>
<td align="left" valign="bottom">0.04</td>
<td align="left" valign="bottom">0.15</td>
<td align="left" valign="bottom">0.15</td>
<td align="left" valign="bottom">0.05</td>
</tr>
<tr>
<td align="left" valign="bottom">SD</td>
<td align="left" valign="bottom">0.20</td>
<td align="left" valign="bottom">0.19</td>
<td align="left" valign="bottom">0.20</td>
<td align="left" valign="bottom">0.19</td>
<td align="left" valign="bottom">0.16</td>
<td align="left" valign="bottom">0.12</td>
<td align="left" valign="bottom">0.20</td>
<td align="left" valign="bottom">0.15</td>
<td align="left" valign="bottom">0.09</td>
</tr>
</tbody>
</table>
</table-wrap>
<p>There was a significant effect of encoding condition on the hit rate [<italic>F</italic>(2,48) = 72.28, <italic>p</italic> &#x003C; 0.001, <italic>&#x03B7;</italic><sup>2</sup> = 0.75; <xref rid="fig3" ref-type="fig">Figure 3A</xref>]. The hit rates for the &#x201C;same&#x201D; and &#x201C;similar&#x201D; conditions were higher than that for the &#x201C;different&#x201D; condition (<italic>p</italic> &#x003C; 0.005) but did not differ from each other (<italic>p</italic> = 0.51; <xref rid="tab1" ref-type="table">Table 1</xref>). There was also a significant effect of encoding condition on the FA rate [<italic>F</italic>(2,48) = 27.39, <italic>p</italic> &#x003C; 0.001, <italic>&#x03B7;</italic><sup>2</sup> = 0.53]: the FA rates of the &#x201C;same&#x201D; and &#x201C;different&#x201D; conditions were lower than that of the &#x201C;similar&#x201D; condition (<italic>p</italic> &#x003C; 0.005) but did not differ from each other (<italic>p</italic> = 0.31; <xref rid="tab1" ref-type="table">Table 1</xref>; <xref rid="fig3" ref-type="fig">Figure 3C</xref>). Both the hit rate [<italic>F</italic>(2,48) = 40.27, <italic>p</italic> &#x003C; 0.001, <italic>&#x03B7;</italic><sup>2</sup> = 0.63] and the FA rate [<italic>F</italic>(2,48) = 6.20, <italic>p</italic> = 0.004, <italic>&#x03B7;</italic><sup>2</sup> = 0.21] decreased significantly over time (<xref rid="fig3" ref-type="fig">Figures 3B</xref>,<xref rid="fig3" ref-type="fig">D</xref>). The interactions between condition and time interval were not significant for the hit rate [<italic>F</italic>(4,96) = 1.33, <italic>p</italic> = 0.27, <italic>&#x03B7;</italic><sup>2</sup> = 0.05] or the FA rate [<italic>F</italic>(4,96) = 1.43, <italic>p</italic> = 0.23, <italic>&#x03B7;</italic><sup>2</sup> = 0.06]. These results suggest that discriminating &#x201C;similar&#x201D; pictures leads to both a higher hit rate and a higher FA rate.</p>
<fig position="float" id="fig3">
<label>Figure 3</label>
<caption>
<p>Results of the hit and FA rates. Multiple exposures significantly increased the hit rate for the &#x201C;same&#x201D; and &#x201C;similar&#x201D; conditions <bold>(A)</bold> and kept the FA rate relatively stable <bold>(C)</bold>. There were no significant interactions between group and retention interval for the hit and FA rates <bold>(B,D)</bold>. The accuracies were averaged across intervals <bold>(A,C)</bold> and across encoding conditions <bold>(B,D)</bold> to illustrate the interactions of group and condition, and group and interval. The error bars represent the standard errors of the means.</p>
</caption>
<graphic xlink:href="fpsyg-11-565169-g003.tif"/>
</fig>
<p>Regarding the contribution of recollection, there was a significant effect of encoding condition [<italic>F</italic>(2,48) = 25.58, <italic>p</italic> &#x003C; 0.001, <italic>&#x03B7;</italic><sup>2</sup> = 0.54; <xref rid="fig4" ref-type="fig">Figure 4A</xref>] and a significant interaction between condition and time interval [<italic>F</italic>(4,96) = 6.28, <italic>p</italic> &#x003C; 0.001, <italic>&#x03B7;</italic><sup>2</sup> = 0.21; <xref rid="tab1" ref-type="table">Table 1</xref>]. Further analysis showed that its contribution was higher for the &#x201C;same&#x201D; and &#x201C;similar&#x201D; conditions than for the &#x201C;different&#x201D; condition (<italic>p</italic> &#x003C; 0.001), but the &#x201C;same&#x201D; and &#x201C;similar&#x201D; conditions did not differ significantly (<italic>p</italic> = 0.10) except at 10 min (<italic>p</italic> = 0.01). Regarding the contribution of familiarity, there was a significant effect of encoding condition [<italic>F</italic>(2,48) = 8.14, <italic>p</italic> = 0.001, <italic>&#x03B7;</italic><sup>2</sup> = 0.25; <xref rid="fig4" ref-type="fig">Figure 4C</xref>]. Further analysis showed that its contribution was higher for the &#x201C;same&#x201D; and &#x201C;similar&#x201D; conditions than for the &#x201C;different&#x201D; condition (<italic>p</italic> &#x003C; 0.05), but the &#x201C;same&#x201D; and &#x201C;similar&#x201D; conditions did not differ (<italic>p</italic> = 0.10). The effect of time interval [<italic>F</italic>(2,48) = 1.16, <italic>p</italic> = 0.32, <italic>&#x03B7;</italic><sup>2</sup> = 0.05; <xref rid="fig4" ref-type="fig">Figure 4D</xref>] and the interaction were not significant [<italic>F</italic>(4,96) = 1.90, <italic>p</italic> = 0.12, <italic>&#x03B7;</italic><sup>2</sup> = 0.07]. 
Both the recollection and familiarity contributions were significantly higher than chance level (0) for each condition (<italic>p</italic> &#x003C; 0.05), except for the &#x201C;different&#x201D; condition at 1 week for recollection (<italic>p</italic> = 0.52; <xref rid="tab1" ref-type="table">Table 1</xref>; <xref rid="fig4" ref-type="fig">Figures 4A</xref>,<xref rid="fig4" ref-type="fig">B</xref>). The proportion of guess responses did not show significant effects of retention interval [<italic>F</italic>(2,48) = 0.74, <italic>p</italic> = 0.48, <italic>&#x03B7;</italic><sup>2</sup> = 0.03] or encoding condition [<italic>F</italic>(2,48) = 0.87, <italic>p</italic> = 0.42, <italic>&#x03B7;</italic><sup>2</sup> = 0.04]. These results suggest that after discriminative learning, both recollection and familiarity contribute to the enhanced memory under the &#x201C;same&#x201D; and &#x201C;similar&#x201D; conditions over time.</p>
<fig position="float" id="fig4">
<label>Figure 4</label>
<caption>
<p>Results of recollection and familiarity contributions. Multiple exposures enhanced the recollection process, especially for the &#x201C;same&#x201D; condition <bold>(A)</bold> and at the 10-min and 1-day intervals <bold>(B)</bold>. The interactions between group and encoding condition and between group and retention interval were not significant for the familiarity contribution <bold>(C,D)</bold>. The accuracies were averaged across intervals <bold>(A,C)</bold> and across encoding conditions <bold>(B,D)</bold> to illustrate the interactions of group and condition, and group and interval. The error bars represent the standard errors of the means.</p>
</caption>
<graphic xlink:href="fpsyg-11-565169-g004.tif"/>
</fig>
<p>For contextual memory, the effect of encoding condition was significant [<italic>F</italic>(2,48) = 40.47, <italic>p</italic> &#x003C; 0.001, <italic>&#x03B7;</italic><sup>2</sup> = 0.63], showing that the source memory was highest for the &#x201C;similar&#x201D; condition, intermediate for the &#x201C;same&#x201D; condition, and lowest for the &#x201C;different&#x201D; condition (<xref rid="tab1" ref-type="table">Table 1</xref>; <xref rid="fig5" ref-type="fig">Figure 5A</xref>). The accuracy decreased over time [<italic>F</italic>(2,48) = 76.20, <italic>p</italic> &#x003C; 0.001, <italic>&#x03B7;</italic><sup>2</sup> = 0.76; <xref rid="fig5" ref-type="fig">Figure 5B</xref>]. There was a significant interaction between time interval and encoding condition [<italic>F</italic>(4,96) = 5.64, <italic>p</italic> = 0.001, <italic>&#x03B7;</italic><sup>2</sup> = 0.20], as the difference between the &#x201C;same&#x201D; and &#x201C;similar&#x201D; conditions increased over time (<italic>p</italic> &#x003C; 0.002). The contextual memory for the &#x201C;same&#x201D; condition was higher than that for the &#x201C;different&#x201D; condition at the 10-min interval (<italic>p</italic> &#x003C; 0.001), but the two were comparable at the 1-day and 1-week intervals (<italic>p</italic> &#x003E; 0.30; <xref rid="tab1" ref-type="table">Table 1</xref>). This suggests that discriminative learning of similar objects significantly enhances contextual memory and that this enhancement persists over time. Although the pictures in the &#x201C;same&#x201D; condition were well recognized, their contextual memory was lower than that in the &#x201C;similar&#x201D; condition and comparable to that in the &#x201C;different&#x201D; condition.</p>
<fig position="float" id="fig5">
<label>Figure 5</label>
<caption>
<p>Results of contextual memory. Multiple exposures enhanced contextual memory <bold>(A,B)</bold> for each condition, with the greatest enhancement for the &#x201C;same&#x201D; condition and at intervals of 10 min and 1 day. The accuracies were averaged across intervals <bold>(A)</bold> and across encoding conditions <bold>(B)</bold> to illustrate the interactions of group and condition, and group and interval. The error bars represent the standard errors of the means.</p>
</caption>
<graphic xlink:href="fpsyg-11-565169-g005.tif"/>
</fig>
<p>Overall, the main result of section &#x201C;Experiment 1&#x201D; was that recognition memory for pictures was greater when two same or similar pictures were learned together than when two different pictures were learned together. The contextual memory was higher for the &#x201C;similar&#x201D; condition than for the other conditions, and this advantage persisted over time. Thus, after discriminating between &#x201C;similar&#x201D; objects, the memories for both the objects and the contexts were significantly improved, whereas discriminating between the &#x201C;same&#x201D; objects significantly enhanced item memory only. The results of section &#x201C;Experiment 1&#x201D; also showed that the RTs during encoding were significantly slower for the &#x201C;same&#x201D; and &#x201C;similar&#x201D; conditions than for the &#x201C;different&#x201D; condition and that the memory enhancement relied on both recollection and familiarity processes. This suggests that discriminating same and similar objects facilitates elaborative encoding, consistent with <xref ref-type="bibr" rid="ref58">Zhou et al. (2018)</xref>, in which the 2AFC task was employed. We further investigated the effect of multiple exposures on memory of item and contextual information in section &#x201C;Experiment 2.&#x201D;</p>
</sec>
</sec>
<sec id="sec9">
<title>Experiment 2</title>
<sec id="sec10">
<title>Materials and Methods</title>
<sec id="sec11">
<title>Participants</title>
<p>Twenty-five healthy, right-handed participants (eight males) with a mean age of 21.76 &#x00B1; 1.92 years were recruited for section &#x201C;Experiment 2.&#x201D; The sample size was determined by a power analysis using G&#x002A;Power 3.1.9.6 (University of Kiel, Germany) and with reference to previous studies (e.g., <xref ref-type="bibr" rid="ref54">Yang et al., 2016</xref>; <xref ref-type="bibr" rid="ref58">Zhou et al., 2018</xref>). An a priori power analysis revealed that a total sample size of at least 22 participants would provide 95% power to detect the expected effects. All of the participants were native Chinese speakers, and they all provided written informed consent in accordance with the procedures and protocols approved by the Review Board of the School of Psychological and Cognitive Sciences, Peking University.</p>
</sec>
<sec id="sec12">
<title>Materials and Procedures</title>
<p>The materials and procedures were the same as those in section &#x201C;Experiment 1,&#x201D; except that the participants learned the picture pairs three times. The pairs were presented in a block-wise manner; i.e., all the picture pairs were presented once before being presented for the second and third times. The order of the pairs in the three presentations was different for each participant.</p>
</sec>
<sec id="sec13">
<title>Data Analysis</title>
<p>Data analysis was the same as that in section &#x201C;Experiment 1.&#x201D; In addition, we compared the parameters when learning once and three times. The ANOVAs were performed with group (once, three times) as a between-subjects factor and encoding condition (same, similar, and different) and time interval (10 min, 1 day, and 1 week) as within-subjects factors. In addition, to control for the initial memory performance, the forgetting rate was calculated for each condition as follows: (memory at 10 min &#x2212; memory at 1 week)/(memory at 10 min). A mixed-design ANOVA was then conducted on the forgetting rate with group as a between-subjects factor and encoding condition as a within-subjects factor for both item and contextual memories. <italic>Post hoc</italic> pairwise comparisons were Bonferroni-corrected (<italic>p</italic> &#x003C; 0.05, two-tailed).</p>
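As a minimal illustration of the two derived measures (a sketch, not the authors' analysis code), the corrected recognition (hit rate minus FA rate) and the forgetting rate defined above can be computed as follows; the example values are the &#x201C;same&#x201D;-condition means from Table 1:

```python
def corrected_recognition(hit_rate, fa_rate):
    """Corrected recognition = hit rate minus false-alarm (FA) rate."""
    return hit_rate - fa_rate

def forgetting_rate(memory_10min, memory_1week):
    """(memory at 10 min - memory at 1 week) / (memory at 10 min)."""
    return (memory_10min - memory_1week) / memory_10min

# "Same" condition, group L1 (Table 1): Hit/FA at 10 min and 1 week
same_10min = corrected_recognition(0.77, 0.28)  # Hit-FA = 0.49
same_1week = corrected_recognition(0.49, 0.25)  # Hit-FA = 0.24
print(round(forgetting_rate(same_10min, same_1week), 2))  # prints 0.51
```

Dividing by the 10-min score normalizes the loss to each condition's starting level, which is why the forgetting rate controls for initial memory performance.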
</sec>
</sec>
<sec id="sec14">
<title>Results</title>
<sec id="sec15">
<title>Learning Three Times</title>
<p>During the encoding phase, the participants judged the object pairs accurately (0.98 &#x00B1; 0.01) and quickly (1.34 &#x00B1; 0.28 s). The effect of encoding condition was significant for both accuracy [<italic>F</italic>(2,48) = 3.38, <italic>p</italic> = 0.04, <italic>&#x03B7;</italic><sup>2</sup> = 0.13] and RTs [<italic>F</italic>(2,48) = 13.81, <italic>p</italic> &#x003C; 0.001, <italic>&#x03B7;</italic><sup>2</sup> = 0.37]. This was because the &#x201C;different&#x201D; pairs were judged most quickly (1.19 &#x00B1; 0.25 s) and most accurately (0.99 &#x00B1; 0.01), and the &#x201C;same&#x201D; pairs most slowly (1.51 &#x00B1; 0.43 s) and least accurately (0.98 &#x00B1; 0.02; <italic>p</italic> &#x003C; 0.001).</p>
<p>For the corrected recognition, there was a significant effect of encoding condition [<italic>F</italic>(2,48) = 95.21, <italic>p</italic> &#x003C; 0.001, <italic>&#x03B7;</italic><sup>2</sup> = 0.80; <xref rid="fig2" ref-type="fig">Figure 2A</xref>] and a significant interaction between condition and time interval [<italic>F</italic>(4,96) = 2.79, <italic>p</italic> = 0.03, <italic>&#x03B7;</italic><sup>2</sup> = 0.10; <xref rid="tab2" ref-type="table">Table 2</xref>]. In addition, memory accuracy decreased over time [<italic>F</italic>(2,48) = 43.07, <italic>p</italic> &#x003C; 0.001, <italic>&#x03B7;</italic><sup>2</sup> = 0.64; <xref rid="fig2" ref-type="fig">Figure 2B</xref>]. As in section &#x201C;Experiment 1,&#x201D; memory performance was highest for the &#x201C;same&#x201D; condition, intermediate for the &#x201C;similar&#x201D; condition, and lowest for the &#x201C;different&#x201D; condition (<italic>p</italic> &#x003C; 0.001; <xref rid="tab2" ref-type="table">Table 2</xref>).</p>
<table-wrap position="float" id="tab2">
<label>Table 2</label>
<caption>
<p>Results for group L3.</p>
</caption>
<table frame="hsides" rules="groups">
<thead>
<tr>
<th rowspan="2"/>
<th rowspan="2"/>
<th align="center" valign="top" colspan="3">10 min</th>
<th align="center" valign="top" colspan="3">1 day</th>
<th align="center" valign="top" colspan="3">1 week</th>
</tr>
<tr>
<th align="center" valign="top">Same</th>
<th align="center" valign="top">Similar</th>
<th align="center" valign="top">Diff</th>
<th align="center" valign="top">Same</th>
<th align="center" valign="top">Similar</th>
<th align="center" valign="top">Diff</th>
<th align="center" valign="top">Same</th>
<th align="center" valign="top">Similar</th>
<th align="center" valign="top">Diff</th>
</tr>
</thead>
<tbody>
<tr>
<td align="left" valign="bottom" rowspan="2">Hit-FA</td>
<td align="left" valign="bottom">Mean</td>
<td align="left" valign="bottom">0.79</td>
<td align="left" valign="bottom">0.51</td>
<td align="left" valign="bottom">0.39</td>
<td align="left" valign="bottom">0.55</td>
<td align="left" valign="bottom">0.38</td>
<td align="left" valign="bottom">0.23</td>
<td align="left" valign="bottom">0.40</td>
<td align="left" valign="bottom">0.29</td>
<td align="left" valign="bottom">0.11</td>
</tr>
<tr>
<td align="left" valign="bottom">SD</td>
<td align="left" valign="bottom">0.14</td>
<td align="left" valign="bottom">0.19</td>
<td align="left" valign="bottom">0.22</td>
<td align="left" valign="bottom">0.16</td>
<td align="left" valign="bottom">0.20</td>
<td align="left" valign="bottom">0.16</td>
<td align="left" valign="bottom">0.16</td>
<td align="left" valign="bottom">0.13</td>
<td align="left" valign="bottom">0.17</td>
</tr>
<tr>
<td align="left" valign="bottom" rowspan="2">Hit</td>
<td align="left" valign="bottom">Mean</td>
<td align="left" valign="bottom">0.93</td>
<td align="left" valign="bottom">0.83</td>
<td align="left" valign="bottom">0.66</td>
<td align="left" valign="bottom">0.79</td>
<td align="left" valign="bottom">0.76</td>
<td align="left" valign="bottom">0.46</td>
<td align="left" valign="bottom">0.61</td>
<td align="left" valign="bottom">0.55</td>
<td align="left" valign="bottom">0.27</td>
</tr>
<tr>
<td align="left" valign="bottom">SD</td>
<td align="left" valign="bottom">0.06</td>
<td align="left" valign="bottom">0.12</td>
<td align="left" valign="bottom">0.16</td>
<td align="left" valign="bottom">0.14</td>
<td align="left" valign="bottom">0.14</td>
<td align="left" valign="bottom">0.21</td>
<td align="left" valign="bottom">0.19</td>
<td align="left" valign="bottom">0.19</td>
<td align="left" valign="bottom">0.13</td>
</tr>
<tr>
<td align="left" valign="bottom" rowspan="2">FA</td>
<td align="left" valign="bottom">Mean</td>
<td align="left" valign="bottom">0.14</td>
<td align="left" valign="bottom">0.33</td>
<td align="left" valign="bottom">0.27</td>
<td align="left" valign="bottom">0.24</td>
<td align="left" valign="bottom">0.38</td>
<td align="left" valign="bottom">0.23</td>
<td align="left" valign="bottom">0.21</td>
<td align="left" valign="bottom">0.26</td>
<td align="left" valign="bottom">0.18</td>
</tr>
<tr>
<td align="left" valign="bottom">SD</td>
<td align="left" valign="bottom">0.13</td>
<td align="left" valign="bottom">0.15</td>
<td align="left" valign="bottom">0.15</td>
<td align="left" valign="bottom">0.16</td>
<td align="left" valign="bottom">0.18</td>
<td align="left" valign="bottom">0.14</td>
<td align="left" valign="bottom">0.14</td>
<td align="left" valign="bottom">0.14</td>
<td align="left" valign="bottom">0.12</td>
</tr>
<tr>
<td align="left" valign="bottom" rowspan="2">Contextual memory</td>
<td align="left" valign="bottom">Mean</td>
<td align="left" valign="bottom">0.85</td>
<td align="left" valign="bottom">0.73</td>
<td align="left" valign="bottom">0.59</td>
<td align="left" valign="bottom">0.61</td>
<td align="left" valign="bottom">0.62</td>
<td align="left" valign="bottom">0.30</td>
<td align="left" valign="bottom">0.25</td>
<td align="left" valign="bottom">0.30</td>
<td align="left" valign="bottom">0.10</td>
</tr>
<tr>
<td align="left" valign="bottom">SD</td>
<td align="left" valign="bottom">0.11</td>
<td align="left" valign="bottom">0.19</td>
<td align="left" valign="bottom">0.19</td>
<td align="left" valign="bottom">0.17</td>
<td align="left" valign="bottom">0.19</td>
<td align="left" valign="bottom">0.18</td>
<td align="left" valign="bottom">0.20</td>
<td align="left" valign="bottom">0.20</td>
<td align="left" valign="bottom">0.11</td>
</tr>
<tr>
<td align="left" valign="middle" rowspan="2">Recollection</td>
<td align="left" valign="bottom">Mean</td>
<td align="left" valign="bottom">0.76</td>
<td align="left" valign="bottom">0.44</td>
<td align="left" valign="bottom">0.27</td>
<td align="left" valign="bottom">0.43</td>
<td align="left" valign="bottom">0.30</td>
<td align="left" valign="bottom">0.13</td>
<td align="left" valign="bottom">0.16</td>
<td align="left" valign="bottom">0.10</td>
<td align="left" valign="bottom">0.02</td>
</tr>
<tr>
<td align="left" valign="bottom">SD</td>
<td align="left" valign="bottom">0.14</td>
<td align="left" valign="bottom">0.23</td>
<td align="left" valign="bottom">0.18</td>
<td align="left" valign="bottom">0.17</td>
<td align="left" valign="bottom">0.18</td>
<td align="left" valign="bottom">0.12</td>
<td align="left" valign="bottom">0.14</td>
<td align="left" valign="bottom">0.12</td>
<td align="left" valign="bottom">0.05</td>
</tr>
<tr>
<td align="left" valign="middle" rowspan="2">Familiarity</td>
<td align="left" valign="bottom">Mean</td>
<td align="left" valign="bottom">0.29</td>
<td align="left" valign="bottom">0.19</td>
<td align="left" valign="bottom">0.19</td>
<td align="left" valign="bottom">0.26</td>
<td align="left" valign="bottom">0.17</td>
<td align="left" valign="bottom">0.10</td>
<td align="left" valign="bottom">0.18</td>
<td align="left" valign="bottom">0.15</td>
<td align="left" valign="bottom">0.04</td>
</tr>
<tr>
<td align="left" valign="bottom">SD</td>
<td align="left" valign="bottom">0.31</td>
<td align="left" valign="bottom">0.22</td>
<td align="left" valign="bottom">0.15</td>
<td align="left" valign="bottom">0.22</td>
<td align="left" valign="bottom">0.21</td>
<td align="left" valign="bottom">0.13</td>
<td align="left" valign="bottom">0.15</td>
<td align="left" valign="bottom">0.16</td>
<td align="left" valign="bottom">0.08</td>
</tr>
</tbody>
</table>
</table-wrap>
<p>For the hit rate, there was a significant effect of encoding condition [<italic>F</italic>(2,48) = 155.77, <italic>p</italic> &#x003C; 0.001, <italic>&#x03B7;</italic><sup>2</sup> = 0.87; <xref rid="fig3" ref-type="fig">Figure 3A</xref>]. The hit rate for the &#x201C;same&#x201D; condition was significantly higher than those in the &#x201C;similar&#x201D; and &#x201C;different&#x201D; conditions (<italic>p</italic> &#x003C; 0.01; <xref rid="tab2" ref-type="table">Table 2</xref>). For the FA rate, there was a significant effect of encoding condition [<italic>F</italic>(2,48) = 24.25, <italic>p</italic> &#x003C; 0.001, <italic>&#x03B7;</italic><sup>2</sup> = 0.50; <xref rid="fig3" ref-type="fig">Figure 3C</xref>]. The FA rate in the &#x201C;similar&#x201D; condition was higher than those in the &#x201C;same&#x201D; and &#x201C;different&#x201D; conditions (<italic>p</italic>'s &#x003C; 0.05). This pattern was similar to that in section &#x201C;Experiment 1.&#x201D; Both the hit rate [<italic>F</italic>(2,48) = 57.85, <italic>p</italic> &#x003C; 0.001, <italic>&#x03B7;</italic><sup>2</sup> = 0.71] and the FA rate [<italic>F</italic>(2,48) = 4.26, <italic>p</italic> = 0.02, <italic>&#x03B7;</italic><sup>2</sup> = 0.15] changed significantly over time (<xref rid="fig3" ref-type="fig">Figures 3B</xref>,<xref rid="fig3" ref-type="fig">D</xref>). The interaction between condition and time interval was significant for both the hit rate [<italic>F</italic>(4,96) = 3.33, <italic>p</italic> = 0.01, <italic>&#x03B7;</italic><sup>2</sup> = 0.12] and the FA rate [<italic>F</italic>(4,96) = 7.28, <italic>p</italic> &#x003C; 0.001, <italic>&#x03B7;</italic><sup>2</sup> = 0.23]: the hit rate of the &#x201C;same&#x201D; condition was higher than that of the &#x201C;similar&#x201D; condition only at 10 min (<italic>p</italic> &#x003C; 0.001), and the FA rate of the &#x201C;same&#x201D; condition was lower than that of the &#x201C;different&#x201D; condition at 10 min (<italic>p</italic> &#x003C; 0.001), but the two conditions were comparable at the 1-day and 1-week intervals (<xref rid="tab2" ref-type="table">Table 2</xref>).</p>
<p>Regarding the contribution of recollection, there was a significant effect of encoding condition [<italic>F</italic>(2,48) = 84.04, <italic>p</italic> &#x003C; 0.001, <italic>&#x03B7;</italic><sup>2</sup> = 0.78] and a significant interaction between condition and time interval [<italic>F</italic>(4,96) = 15.06, <italic>p</italic> &#x003C; 0.001, <italic>&#x03B7;</italic><sup>2</sup> = 0.39; <xref rid="fig4" ref-type="fig">Figure 4A</xref>]. This was because the difference between the &#x201C;same&#x201D; and &#x201C;similar&#x201D; conditions was largest at 10 min, decreased over time, and was no longer significant at 1 week (<italic>p</italic> = 0.33; <xref rid="tab2" ref-type="table">Table 2</xref>). Regarding the contribution of familiarity, there was a significant effect of encoding condition [<italic>F</italic>(2,48) = 10.48, <italic>p</italic> &#x003C; 0.001, <italic>&#x03B7;</italic><sup>2</sup> = 0.30; <xref rid="fig4" ref-type="fig">Figure 4C</xref>]. The effect of time interval was significant [<italic>F</italic>(2,48) = 4.75, <italic>p</italic> = 0.01, <italic>&#x03B7;</italic><sup>2</sup> = 0.15; <xref rid="fig4" ref-type="fig">Figure 4D</xref>], but the interaction was not [<italic>F</italic>(4,96) = 1.03, <italic>p</italic> = 0.40, <italic>&#x03B7;</italic><sup>2</sup> = 0.04]. Further analysis showed that the familiarity contribution was greatest for the &#x201C;same&#x201D; condition, intermediate for the &#x201C;similar&#x201D; condition, and least for the &#x201C;different&#x201D; condition (<italic>p</italic> &#x003C; 0.05). The proportion of guess responses increased over time [<italic>F</italic>(2,48) = 7.40, <italic>p</italic> = 0.002, <italic>&#x03B7;</italic><sup>2</sup> = 0.24] but did not show a significant effect of encoding condition [<italic>F</italic>(2,48) = 0.64, <italic>p</italic> = 0.53, <italic>&#x03B7;</italic><sup>2</sup> = 0.03]. These results suggest that both the recollection and familiarity processes contribute to the enhanced memory after discriminative learning, especially for the &#x201C;same&#x201D; and &#x201C;similar&#x201D; conditions.</p>
<p>For contextual memory, the effect of encoding condition was significant [<italic>F</italic>(2,48) = 46.33, <italic>p</italic> &#x003C; 0.001, <italic>&#x03B7;</italic><sup>2</sup> = 0.66; <xref rid="fig5" ref-type="fig">Figure 5A</xref>], showing that the contextual memory for the &#x201C;same&#x201D; and &#x201C;similar&#x201D; conditions was comparable (<italic>p</italic> = 0.52), but both were higher than that for the &#x201C;different&#x201D; condition (<italic>p</italic> &#x003C; 0.001; <xref rid="tab2" ref-type="table">Table 2</xref>). There was a significant interaction between time interval and encoding condition [<italic>F</italic>(4,96) = 8.81, <italic>p</italic> &#x003C; 0.001, <italic>&#x03B7;</italic><sup>2</sup> = 0.27]. This was because the contextual memory for the &#x201C;same&#x201D; condition was greater than that for the &#x201C;similar&#x201D; condition at the 10-min interval (<italic>p</italic> = 0.01) but was comparable to that for the &#x201C;similar&#x201D; condition afterwards (<italic>p</italic> = 1.0).</p>
<p>Overall, the patterns of corrected recognition, hit rate, FA rate, and contributions of recollection and familiarity were generally similar to those in section &#x201C;Experiment 1,&#x201D; except that the difference between the &#x201C;same&#x201D; and &#x201C;similar&#x201D; conditions was significant for the hit rate and for the contributions of recollection and familiarity. In addition, contextual memory for the &#x201C;same&#x201D; condition was significantly enhanced.</p>
</sec>
</sec>
<sec id="sec16">
<title>Comparison of Experiments 1 and 2</title>
<p>For the corrected recognition, the ANOVA showed a significant group effect [<italic>F</italic>(1,48) = 21.11, <italic>p</italic> &#x003C; 0.001, <italic>&#x03B7;</italic><sup>2</sup> = 0.31] and a significant interaction between group and encoding condition [<italic>F</italic>(2,96) = 7.85, <italic>p</italic> = 0.001, <italic>&#x03B7;</italic><sup>2</sup> = 0.14]. Further analysis demonstrated that the &#x201C;same&#x201D; condition benefited from multiple exposures most evidently (<italic>p</italic> &#x003C; 0.001), followed by the &#x201C;similar&#x201D; condition (<italic>p</italic> = 0.003), and the &#x201C;different&#x201D; condition benefited least (<italic>p</italic> = 0.04; <xref rid="fig2" ref-type="fig">Figure 2A</xref>). The interaction between group and interval was also significant [<italic>F</italic>(2,96) = 3.75, <italic>p</italic> = 0.03, <italic>&#x03B7;</italic><sup>2</sup> = 0.07], showing that learning three times led to better memory performance at each interval, most pronouncedly at the 10-min and 1-day intervals (<italic>p</italic> &#x003C; 0.001; <xref rid="fig2" ref-type="fig">Figure 2B</xref>). These findings suggest that the &#x201C;same&#x201D; condition benefits most from multiple exposures and that the enhancement is most obvious at shorter intervals. The results of the forgetting rate showed a significant effect of condition [<italic>F</italic>(2,96) = 13.54, <italic>p</italic> &#x003C; 0.001, <italic>&#x03B7;</italic><sup>2</sup> = 0.23; <xref rid="fig6" ref-type="fig">Figure 6A</xref>], with slower forgetting for the &#x201C;same&#x201D; and &#x201C;similar&#x201D; conditions than for the &#x201C;different&#x201D; condition (<italic>p</italic> &#x003C; 0.002). 
However, neither the effect of group [<italic>F</italic>(1,48) = 0.37, <italic>p</italic> = 0.55, <italic>&#x03B7;</italic><sup>2</sup> = 0.008] nor the interaction [<italic>F</italic>(2,96) = 0.25, <italic>p</italic> = 0.78, <italic>&#x03B7;</italic><sup>2</sup> = 0.006] was significant, suggesting that multiple exposures do not influence the forgetting of item memory.</p>
<fig position="float" id="fig6">
<label>Figure 6</label>
<caption>
<p>Results of the forgetting rate. Multiple exposures did not significantly influence the forgetting of item memory <bold>(A)</bold> and contextual memory <bold>(B)</bold>. The forgetting rates were estimated by controlling the initial memory performance for each condition. The error bars represent the standard errors of the means.</p>
</caption>
<graphic xlink:href="fpsyg-11-565169-g006.tif"/>
</fig>
<p>Regarding the hit rate, the group effect was significant [<italic>F</italic>(1,48) = 8.28, <italic>p</italic> = 0.006, <italic>&#x03B7;</italic><sup>2</sup> = 0.15]. In addition, there was a significant interaction between group and encoding condition [<italic>F</italic>(2,96) = 6.29, <italic>p</italic> = 0.003, <italic>&#x03B7;</italic><sup>2</sup> = 0.12]. Multiple exposures significantly increased the hit rate for the &#x201C;same&#x201D; (<italic>p</italic> &#x003C; 0.001) and &#x201C;similar&#x201D; (<italic>p</italic> = 0.04) conditions, with no significant change for the &#x201C;different&#x201D; condition (<italic>p</italic> = 0.19; <xref rid="fig3" ref-type="fig">Figure 3A</xref>). The interaction between group and retention interval was not significant [<italic>F</italic>(2,96) = 2.04, <italic>p</italic> = 0.14, <italic>&#x03B7;</italic><sup>2</sup> = 0.04; <xref rid="fig3" ref-type="fig">Figure 3B</xref>]. Regarding the FA rate, the group effect was not significant [<italic>F</italic>(1,48) = 2.94, <italic>p</italic> = 0.10, <italic>&#x03B7;</italic><sup>2</sup> = 0.06], and there was no significant interaction between group and encoding condition [<italic>F</italic>(2,96) = 1.84, <italic>p</italic> = 0.16, <italic>&#x03B7;</italic><sup>2</sup> = 0.04] or between group and retention interval [<italic>F</italic>(2,96) = 0.77, <italic>p</italic> = 0.46, <italic>&#x03B7;</italic><sup>2</sup> = 0.02; <xref rid="fig3" ref-type="fig">Figures 3C</xref>,<xref rid="fig3" ref-type="fig">D</xref>]. These results suggest that multiple exposures enhance the hit rate for the &#x201C;same&#x201D; and &#x201C;similar&#x201D; conditions, whereas the FA rate remains relatively stable.</p>
<p>Regarding the contribution of recollection, there was a significant interaction between group and encoding condition [<italic>F</italic>(2,96) = 15.32, <italic>p</italic> &#x003C; 0.001, <italic>&#x03B7;</italic><sup>2</sup> = 0.24; <xref rid="fig4" ref-type="fig">Figure 4A</xref>] and between group and interval [<italic>F</italic>(2,96) = 9.58, <italic>p</italic> = 0.001, <italic>&#x03B7;</italic><sup>2</sup> = 0.17; <xref rid="fig4" ref-type="fig">Figure 4B</xref>]. The greater contribution of recollection for L3 than for L1 was most obvious for the &#x201C;same&#x201D; condition and at the 10-min and 1-day intervals, although the group contrasts were all significant (<italic>p</italic> &#x003C; 0.01). Regarding the contribution of familiarity, it was comparable for L1 and L3 [<italic>F</italic>(1,48) = 2.55, <italic>p</italic> = 0.12, <italic>&#x03B7;</italic><sup>2</sup> = 0.05], and there was no significant interaction between group and encoding condition [<italic>F</italic>(2,96) = 0.06, <italic>p</italic> = 0.94, <italic>&#x03B7;</italic><sup>2</sup> = 0.001] or between group and retention interval [<italic>F</italic>(2,96) = 1.21, <italic>p</italic> = 0.30, <italic>&#x03B7;</italic><sup>2</sup> = 0.03; <xref rid="fig4" ref-type="fig">Figures 4C</xref>,<xref rid="fig4" ref-type="fig">D</xref>]. These results suggest that multiple exposures enhance the recollection process rather than the familiarity process.</p>
<p>For contextual memory, there was a significant interaction between group and encoding condition [<italic>F</italic>(2,96) = 11.40, <italic>p</italic> &#x003C; 0.001, <italic>&#x03B7;</italic><sup>2</sup> = 0.19] and between group and retention interval [<italic>F</italic>(2,96) = 12.05, <italic>p</italic> &#x003C; 0.001, <italic>&#x03B7;</italic><sup>2</sup> = 0.20]. Multiple exposures enhanced contextual memory for each condition, with the greatest enhancement for the &#x201C;same&#x201D; condition (<xref rid="fig5" ref-type="fig">Figure 5A</xref>) and at the 10-min and 1-day intervals (<italic>p</italic> &#x003C; 0.05; <xref rid="fig5" ref-type="fig">Figure 5B</xref>). There was also a significant three-way interaction [<italic>F</italic>(4,192) = 3.55, <italic>p</italic> = 0.008, <italic>&#x03B7;</italic><sup>2</sup> = 0.07]: the advantage of the &#x201C;similar&#x201D; condition over the &#x201C;same&#x201D; condition was obvious at each interval after L1 (<italic>p</italic> &#x003C; 0.05) but disappeared at 1 day and 1 week after L3 (<italic>p</italic> &#x003E; 0.50). This indicates that discriminative learning of similar objects significantly enhances contextual memory when the stimuli are learned only once, whereas multiple exposures increase contextual memory especially for the &#x201C;same&#x201D; condition. The results of the forgetting rate showed a significant effect of condition [<italic>F</italic>(2,96) = 4.66, <italic>p</italic> = 0.01, <italic>&#x03B7;</italic><sup>2</sup> = 0.09; <xref rid="fig6" ref-type="fig">Figure 6B</xref>], with slower forgetting for the &#x201C;similar&#x201D; condition than for the &#x201C;same&#x201D; and &#x201C;different&#x201D; conditions (<italic>p</italic> &#x003C; 0.05). 
However, neither the effect of group [<italic>F</italic>(1,48) = 2.04, <italic>p</italic> = 0.16, <italic>&#x03B7;</italic><sup>2</sup> = 0.05] nor the interaction [<italic>F</italic>(2,96) = 2.19, <italic>p</italic> = 0.12, <italic>&#x03B7;</italic><sup>2</sup> = 0.05] was significant, suggesting that multiple exposures do not significantly influence the forgetting of contextual memory.</p>
</sec>
</sec>
<sec id="sec17" sec-type="discussions">
<title>Discussion</title>
<p>In this study, we investigated how multiple exposures modulated item memory and contextual memory over time after discriminative learning. There were three main findings. First, after learning three times, both item memory and contextual memory performance increased. Second, the hit rate for the &#x201C;same&#x201D; and &#x201C;similar&#x201D; conditions was enhanced after multiple exposures, whereas the FA rate did not change significantly. In addition, recollection contributed to the repetition effect, with no significant change in the contribution of familiarity. Third, memory enhancement was manifested in each encoding condition and at all retention intervals, especially for the &#x201C;same&#x201D; condition and at the 10-min and 1-day intervals. These results suggest that through multiple rounds of discriminative learning, the stimuli are more elaboratively encoded, making the details and contexts more vividly remembered and better retained over time.</p>
<sec id="sec18">
<title>Enhanced Memory and Multiple Exposures</title>
<p>Previous studies have suggested two possible outcomes of repetition learning for subsequent memory. On the one hand, repetition learning decreases the capacity to discriminate between targets and lures, leading to enhanced general memory but impaired detailed memory (e.g., <xref ref-type="bibr" rid="ref37">Reagh and Yassa, 2014</xref>; <xref ref-type="bibr" rid="ref36">Reagh et al., 2017</xref>). On the other hand, repetition learning enhances elaborative encoding and hence the recollection process, leading to improved subsequent memory for details (e.g., <xref ref-type="bibr" rid="ref23">Koutstaal et al., 1999</xref>; <xref ref-type="bibr" rid="ref54">Yang et al., 2016</xref>; <xref ref-type="bibr" rid="ref29">McCormick-Huhn et al., 2018</xref>). The results of our study supported the second view: the enhanced memory was manifested in both item and contextual memory. In the present study, after learning three times, the increased discrimination ability for item memory was based on an increased hit rate and a relatively stable FA rate. Previous studies have shown that item-specific memory was enhanced and false recognition was reduced when participants were asked to notice perceptual features of pictures during encoding (<xref ref-type="bibr" rid="ref23">Koutstaal et al., 1999</xref>). Similarly, discriminative learning of similar objects enabled participants to adopt elaborative processing, comparing detailed information between the objects to make a judgment (<xref ref-type="bibr" rid="ref58">Zhou et al., 2018</xref>). By learning three times, the participants had more opportunities to deliberately compare the two objects, and the differences between them were strengthened. In contrast, the FA rate remained relatively stable after multiple exposures. Although the FA rate was higher for the &#x201C;similar&#x201D; condition than for the other two conditions after learning once, it did not change significantly after learning three times. It is possible that general memory for the concept is quickly acquired and stabilized by a single exposure to two similar objects, so that additional exposures are not necessary. Therefore, more exposures facilitate elaborative encoding of detailed information, which in turn leads to an enhanced hit rate and better memory performance for pictures.</p>
<p>Consistent with the changes in hit and FA rates, our findings showed that the higher item memory after repetition was driven more by the recollection process than by familiarity. As the participants had to distinguish between the old and similar pictures, item memory reflected memory for detailed information. Detailed item memory after multiple exposures was accompanied by more vivid subjective feelings and perceptual information (<xref ref-type="bibr" rid="ref29">McCormick-Huhn et al., 2018</xref>). This is consistent with the finding that the hit rate increased after repetition. Elaborative encoding after multiple exposures helped the participants remember more detailed information about the objects, thereby rendering the recollection process more of a contributor. In contrast, there was no significant group effect or group-related interaction for the familiarity process, suggesting that the enhanced memory is not driven by a feeling of familiarity with the objects.</p>
<p>In addition to item memory, the results demonstrated that contextual memory was also enhanced. During the retrieval phase, the participants were asked to judge the condition (same/similar/different) in which the objects were learned. Although contextual memory is associated with the recollection process (<xref ref-type="bibr" rid="ref11">Davachi et al., 2003</xref>; <xref ref-type="bibr" rid="ref10">Davachi, 2006</xref>; <xref ref-type="bibr" rid="ref15">Eichenbaum et al., 2007</xref>), it differs from item memory in that it relies more on information about the spatial or temporal source of the object than on detailed information about the object itself. Thus, with multiple exposures, the relations between objects and their contexts are significantly strengthened.</p>
<p>Although our findings supported the view of elaborative processing, this does not mean that the generalization view is incorrect. It is possible that core content and details of the memory are selectively strengthened and connected (<xref ref-type="bibr" rid="ref32">Nadel and Moscovitch, 1997</xref>), whereas variable contextual details associated with each reactivation of the memory are weakened (<xref ref-type="bibr" rid="ref55">Yassa and Reagh, 2013</xref>). We consider the encoding task to be important for the effect of repetition on subsequent memory. For example, <xref ref-type="bibr" rid="ref37">Reagh and Yassa (2014)</xref> showed that the FA rate was greater when pictures were presented three times rather than just once; they asked participants to make an indoor/outdoor judgment for single objects. In contrast, discriminative learning emphasizes the relationship between the two pictures, especially when the pictures are the same or similar. This encoding difference may render memory for detailed information more distinctive and contextualized after discriminative learning, and the distinctive representations are reactivated and strengthened after multiple exposures. Future studies should include both single pictures and picture pairs to clarify the boundary conditions for the different effects of repetition learning. These two effects may be mediated by different brain mechanisms (<xref ref-type="bibr" rid="ref49">Wagner et al., 2000</xref>; <xref ref-type="bibr" rid="ref27">Manelis et al., 2013</xref>; <xref ref-type="bibr" rid="ref24">Kremers et al., 2014</xref>; <xref ref-type="bibr" rid="ref7">Chen et al., 2019</xref>). In addition, unlike <xref ref-type="bibr" rid="ref37">Reagh and Yassa (2014)</xref>, who included old, similar, and new objects during the test, we included only old and similar objects to ensure that the participants focused on detailed memory without changing strategy through the test (<xref ref-type="bibr" rid="ref3">Brainerd and Reyna, 1993</xref>, <xref ref-type="bibr" rid="ref4">2015</xref>). It would be interesting to include new pictures during the test to assess gist/conceptual memory in addition to detailed/perceptual memory.</p>
</sec>
<sec id="sec19">
<title>Discriminative Learning and Multiple Exposures</title>
<p>One important feature of discriminative learning is that elaborative encoding is required to discriminate between the two objects in the &#x201C;same&#x201D; and &#x201C;similar&#x201D; conditions. In section &#x201C;Experiment 1,&#x201D; item memory in the &#x201C;same&#x201D; and &#x201C;similar&#x201D; conditions was higher than that in the &#x201C;different&#x201D; condition. Multiple exposures enhanced the performance of item and contextual memory, as well as the contribution of recollection, most strongly in the &#x201C;same&#x201D; condition. The longer encoding time indicated that discriminating between &#x201C;same&#x201D; objects took more time than discriminating between &#x201C;similar&#x201D; or &#x201C;different&#x201D; objects. Although the two pictures were identical in the &#x201C;same&#x201D; condition, the participants had to adopt an elaborative strategy and process every detail to ensure that the two objects were the same. Thus, the &#x201C;same&#x201D; condition was analogous to the &#x201C;similar&#x201D; condition but required more elaborative processing. Multiple exposures to the same objects gave the participants more opportunities to elaborately process the detailed and contextual information, which led to higher item and contextual memory.</p>
<p>After the participants learned the similar objects once, both the hit rate and the FA rate were higher than those for the pairs in the &#x201C;different&#x201D; condition. This suggests that presenting two similar objects of a concept facilitates general memory for the concept, in addition to enhancing detailed memory for the objects (<xref ref-type="bibr" rid="ref7">Chen et al., 2019</xref>). Multiple exposures enhanced item and contextual memory for the &#x201C;similar&#x201D; condition, although the effect was smaller than that for the &#x201C;same&#x201D; condition. Note that the FA rate was higher for the &#x201C;similar&#x201D; condition irrespective of repetition, which made the corrected recognition lower for the &#x201C;similar&#x201D; (vs. &#x201C;same&#x201D;) condition. In addition, contextual memory was higher for the &#x201C;similar&#x201D; than for the &#x201C;same&#x201D; condition after learning once, whereas this difference disappeared after learning three times. Thus, the advantage of the &#x201C;similar&#x201D; (vs. &#x201C;same&#x201D;) condition was that the enhanced memory effects appeared right after learning once, especially for contextual memory. This suggests that discriminative learning of similar objects quickly and efficiently improves memory for both details and contexts, and that repetition is not necessary.</p>
</sec>
<sec id="sec20">
<title>Retention Interval and Multiple Exposures</title>
<p>The enhanced item and contextual memory for the &#x201C;similar&#x201D; condition over the &#x201C;different&#x201D; condition was shown from 10 min to 1 week. Multiple exposures further enhanced memory at shorter intervals. There was a significant interaction between group and retention interval for both item memory and contextual memory, showing that the group difference was significant at each interval but more obvious at 10 min and 1 day. With repetition, the stimuli are more elaboratively processed, which leads to more stable memory representations (<xref ref-type="bibr" rid="ref53">Xue et al., 2011</xref>) and higher memory accuracy over time (<xref ref-type="bibr" rid="ref26">Litman and Davachi, 2008</xref>; <xref ref-type="bibr" rid="ref54">Yang et al., 2016</xref>). The greater enhancement at shorter intervals may be mainly due to the massed learning mode (<xref ref-type="bibr" rid="ref6">Cepeda et al., 2006</xref>; <xref ref-type="bibr" rid="ref28">Mazza et al., 2016</xref>). In this study, the objects were presented repeatedly in three blocks, a typical manipulation of massed learning (<xref ref-type="bibr" rid="ref6">Cepeda et al., 2006</xref>). Previous studies have also found that, compared to distributed learning, massed learning significantly enhances recent memory, whereas distributed learning improves associative memory at longer intervals (<xref ref-type="bibr" rid="ref26">Litman and Davachi, 2008</xref>; <xref ref-type="bibr" rid="ref54">Yang et al., 2016</xref>).</p>
<p>On the other hand, as memory performance increased after repetition, it is necessary to control for initial memory (at the 10-min interval) to clarify the effect of multiple exposures on memory forgetting. The results showed that the forgetting rates of item and contextual memory had no significant group effects or interactions between group and encoding condition. These results thus suggest that multiple exposures at encoding do not modulate the subsequent memory consolidation process (<xref ref-type="bibr" rid="ref43">Slamecka and McElree, 1983</xref>). Although memory performance in each condition declined from 10 min to 1 week, multiple exposures did not change the forgetting pattern. The significant interaction between group and interval for hit-FA was mainly due to higher memory accuracy rather than a stronger memory consolidation process.</p>
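As an illustrative sketch only (the exact estimator is specified in the Methods and may differ; a simple proportional loss score normalized by the 10-min baseline is assumed here), a forgetting rate that controls for initial performance can be written as:

```latex
% Hypothetical proportional forgetting rate relative to the 10-min baseline.
% P_t denotes memory performance (e.g., hit rate minus FA rate) at retention interval t.
\text{forgetting rate}(t) = \frac{P_{10\,\mathrm{min}} - P_{t}}{P_{10\,\mathrm{min}}},
\qquad t \in \{\text{1 day},\ \text{1 week}\}
```

Under such a normalization, equal rates across the L1 and L3 groups indicate parallel relative decline even when the L3 group starts from higher absolute accuracy.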
<p>The influence of multiple exposures on memory performance has important practical implications. Memory impairments are common in elderly people and patients with brain lesions. Some people with memory impairments, such as patients with amnesia and severely deficient autobiographical memory, are characterized by deficits in encoding processes (<xref ref-type="bibr" rid="ref33">Palombo et al., 2016</xref>, <xref ref-type="bibr" rid="ref34">2018</xref>). As multiple exposures enhance elaborative encoding, repetitive learning could serve as an efficient way to improve memory retention in memory-impaired patients (e.g., <xref ref-type="bibr" rid="ref18">Green et al., 2014</xref>). Future neuroimaging studies could also help clarify to what extent the hippocampus and cortical regions (<xref ref-type="bibr" rid="ref40">Santangelo et al., 2018</xref>, <xref ref-type="bibr" rid="ref41">2020</xref>) are involved in enhanced encoding after repetition in patients with memory deficits.</p>
<p>Furthermore, the decline was driven by the recollection process, as shown by a significant interaction between retention interval and group for recollection but not for familiarity. The results support the view that the contribution of recollection is associated with memory forgetting over time. As proposed by <xref ref-type="bibr" rid="ref39">Sadeh et al. (2014)</xref>, memories relying on recollection are more sensitive to decay but relatively resistant to interference from irrelevant information (<xref ref-type="bibr" rid="ref17">Gardiner and Java, 1991</xref>; <xref ref-type="bibr" rid="ref19">Hockley and Consoli, 1999</xref>; <xref ref-type="bibr" rid="ref54">Yang et al., 2016</xref>). Through multiple exposures, the recollection contribution significantly increased, making the forgetting rate slower.</p>
<p>The RKG procedure is widely used to estimate the underlying processes during recognition (<xref ref-type="bibr" rid="ref46">Tulving, 1985</xref>; <xref ref-type="bibr" rid="ref12">Donaldson, 1996</xref>; <xref ref-type="bibr" rid="ref56">Yonelinas, 2002</xref>; <xref ref-type="bibr" rid="ref52">Wixted and Stretch, 2004</xref>). Some may argue that the distinction between recollection and familiarity reflects the difference between strong and weak memory and confidence experience (<xref ref-type="bibr" rid="ref13">Dunn, 2004</xref>; <xref ref-type="bibr" rid="ref44">Squire et al., 2007</xref>; <xref ref-type="bibr" rid="ref50">Williams et al., 2013</xref>). According to this proposal, forgetting is a change from a stronger to a weaker memory representation, and multiple exposures lead to stronger memory. If so, the recollection process should increase with repetition and decrease at longer intervals, whereas familiarity should decrease with repetition and increase with the passage of time. Contrary to this prediction, we found that familiarity remained unchanged between L1 and L3 and decreased over time. The recollection/familiarity distinction explained the current findings on forgetting and the learning effect well. We therefore suggest that it is an appropriate way to account for the underlying processes of recognition in this study.</p>
<p>Although detailed memory is susceptible to rapid forgetting (e.g., <xref ref-type="bibr" rid="ref45">Tuckey and Brewer, 2003</xref>; <xref ref-type="bibr" rid="ref20">Huebner and Gegenfurtner, 2012</xref>; <xref ref-type="bibr" rid="ref1">Andermane and Bowers, 2015</xref>; <xref ref-type="bibr" rid="ref42">Sekeres et al., 2016</xref>), through discriminative learning of similar objects, some detailed and contextual memory remained at 1 week. This pattern was observed for both the L1 and L3 conditions. Trace transformation theory states that with the passage of time, memory representations can take both gist and detailed forms and can be transformed into one another under certain conditions (<xref ref-type="bibr" rid="ref51">Winocur and Moscovitch, 2011</xref>; <xref ref-type="bibr" rid="ref30">Moscovitch et al., 2016</xref>; <xref ref-type="bibr" rid="ref38">Robin and Moscovitch, 2017</xref>). In addition, the recollection contribution decreases more rapidly than that of familiarity (<xref ref-type="bibr" rid="ref17">Gardiner and Java, 1991</xref>; <xref ref-type="bibr" rid="ref19">Hockley and Consoli, 1999</xref>; <xref ref-type="bibr" rid="ref39">Sadeh et al., 2014</xref>; <xref ref-type="bibr" rid="ref54">Yang et al., 2016</xref>). Thus, at 1 week, both the recollection and familiarity contributions helped the participants make correct judgments to discriminate between the old and lure objects.</p>
</sec>
</sec>
<sec id="sec21" sec-type="conclusions">
<title>Conclusion</title>
<p>After learning three times, both item memory and contextual memory performance increased over time. The enhanced item memory was shown as a higher hit rate rather than a higher FA rate, with a greater contribution of the recollection process than of familiarity. In addition, multiple exposures enhanced memory performance especially for the &#x201C;same&#x201D; condition and at the 10-min and 1-day intervals. Overall, these results suggest that when elaborative processing is emphasized during encoding, multiple exposures strengthen recollection more pronouncedly, rendering the details and contexts more vividly remembered and better retained over time. Therefore, strategies combining elaborative encoding and multiple exposures could be applied to elderly adults and patients with brain lesions who have memory impairments, helping them improve their memory abilities over time.</p>
</sec>
<sec id="sec22">
<title>Data Availability Statement</title>
<p>The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.</p>
</sec>
<sec id="sec23">
<title>Ethics Statement</title>
<p>The studies involving human participants were reviewed and approved by the Ethics Committee of School of Psychological and Cognitive Sciences, Peking University. Participants received written and oral information of the study before they gave their written consent. The patients/participants provided their written informed consent to participate in this study.</p>
</sec>
<sec id="sec24">
<title>Author Contributions</title>
<p>HC designed and performed the research, and analyzed the data. JY designed the research, analyzed the data, and wrote the paper. All authors contributed to the article and approved the submitted version.</p>
<sec sec-type="COI-statement" id="conf1">
<title>Conflict of Interest</title>
<p>The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.</p>
</sec>
</sec>
</body>
<back>
<ack>
<p>We thank Pan Zhou at Peking University for the help in data collection.</p>
</ack>
<ref-list>
<title>References</title>
<ref id="ref1"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Andermane</surname> <given-names>N.</given-names></name> <name><surname>Bowers</surname> <given-names>J. S.</given-names></name></person-group> (<year>2015</year>). <article-title>Detailed and gist-like visual memories are forgotten at similar rates over the course of a week</article-title>. <source>Psychon. Bull. Rev.</source> <volume>22</volume>, <fpage>1358</fpage>&#x2013;<lpage>1363</lpage>. doi: <pub-id pub-id-type="doi">10.3758/s13423-015-0800-0</pub-id>, PMID: <pub-id pub-id-type="pmid">26391175</pub-id></citation></ref>
<ref id="ref2"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Bogartz</surname> <given-names>R. S.</given-names></name></person-group> (<year>1990</year>). <article-title>Evaluating forgetting curves psychologically</article-title>. <source>J. Exp. Psychol. Learn. Mem. Cogn.</source> <volume>16</volume>, <fpage>138</fpage>&#x2013;<lpage>148</lpage>. doi: <pub-id pub-id-type="doi">10.1037/0278-7393.16.1.138</pub-id></citation></ref>
<ref id="ref3"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Brainerd</surname> <given-names>C. J.</given-names></name> <name><surname>Reyna</surname> <given-names>V. F.</given-names></name></person-group> (<year>1993</year>). <article-title>Memory independence and memory interference in cognitive-development</article-title>. <source>Psychol. Rev.</source> <volume>100</volume>, <fpage>42</fpage>&#x2013;<lpage>67</lpage>. doi: <pub-id pub-id-type="doi">10.1037/0033-295X.100.1.42</pub-id>, PMID: <pub-id pub-id-type="pmid">8426881</pub-id></citation></ref>
<ref id="ref4"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Brainerd</surname> <given-names>C. J.</given-names></name> <name><surname>Reyna</surname> <given-names>V. E.</given-names></name></person-group> (<year>2015</year>). <article-title>Fuzzy-trace theory and lifespan cognitive development</article-title>. <source>Dev. Rev.</source> <volume>38</volume>, <fpage>89</fpage>&#x2013;<lpage>121</lpage>. doi: <pub-id pub-id-type="doi">10.1016/j.dr.2015.07.006</pub-id>, PMID: <pub-id pub-id-type="pmid">26644632</pub-id></citation></ref>
<ref id="ref5"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Cabeza</surname> <given-names>R.</given-names></name> <name><surname>St Jacques</surname> <given-names>P.</given-names></name></person-group> (<year>2007</year>). <article-title>Functional neuroimaging of autobiographical memory</article-title>. <source>Trends Cogn. Sci.</source> <volume>11</volume>, <fpage>219</fpage>&#x2013;<lpage>227</lpage>. doi: <pub-id pub-id-type="doi">10.1016/j.tics.2007.02.005</pub-id>, PMID: <pub-id pub-id-type="pmid">17382578</pub-id></citation></ref>
<ref id="ref6"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Cepeda</surname> <given-names>N. J.</given-names></name> <name><surname>Pashler</surname> <given-names>H.</given-names></name> <name><surname>Vul</surname> <given-names>E.</given-names></name> <name><surname>Wixted</surname> <given-names>J. T.</given-names></name> <name><surname>Rohrer</surname> <given-names>D.</given-names></name></person-group> (<year>2006</year>). <article-title>Distributed practice in verbal recall tasks: a review and quantitative synthesis</article-title>. <source>Psychol. Bull.</source> <volume>132</volume>, <fpage>354</fpage>&#x2013;<lpage>380</lpage>. doi: <pub-id pub-id-type="doi">10.1037/0033-2909.132.3.354</pub-id>, PMID: <pub-id pub-id-type="pmid">16719566</pub-id></citation></ref>
<ref id="ref7"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Chen</surname> <given-names>H.</given-names></name> <name><surname>Zhou</surname> <given-names>W.</given-names></name> <name><surname>Yang</surname> <given-names>J.</given-names></name></person-group> (<year>2019</year>). <article-title>Dissociation of the perirhinal cortex and hippocampus during discriminative learning of similar objects</article-title>. <source>J. Neurosci.</source> <volume>39</volume>, <fpage>6190</fpage>&#x2013;<lpage>6201</lpage>. doi: <pub-id pub-id-type="doi">10.1523/JNEUROSCI.3181-18.2019</pub-id>, PMID: <pub-id pub-id-type="pmid">31167939</pub-id></citation></ref>
<ref id="ref8"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Craik</surname> <given-names>F. I. M.</given-names></name></person-group> (<year>2002</year>). <article-title>Levels of processing: past, present&#x2026; and future?</article-title> <source>Memory</source> <volume>10</volume>, <fpage>305</fpage>&#x2013;<lpage>318</lpage>. doi: <pub-id pub-id-type="doi">10.1080/09658210244000135</pub-id>, PMID: <pub-id pub-id-type="pmid">12396643</pub-id></citation></ref>
<ref id="ref9"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Craik</surname> <given-names>F. I. M.</given-names></name> <name><surname>Lockhart</surname> <given-names>R. S.</given-names></name></person-group> (<year>1972</year>). <article-title>Levels of processing: a framework for memory research</article-title>. <source>J. Verbal Learn. Verbal Behav.</source> <volume>11</volume>, <fpage>671</fpage>&#x2013;<lpage>684</lpage>. doi: <pub-id pub-id-type="doi">10.1016/S0022-5371(72)80001-X</pub-id></citation></ref>
<ref id="ref10"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Davachi</surname> <given-names>L.</given-names></name></person-group> (<year>2006</year>). <article-title>Item, context and relational episodic encoding in humans</article-title>. <source>Curr. Opin. Neurobiol.</source> <volume>16</volume>, <fpage>693</fpage>&#x2013;<lpage>700</lpage>. doi: <pub-id pub-id-type="doi">10.1016/j.conb.2006.10.012</pub-id>, PMID: <pub-id pub-id-type="pmid">17097284</pub-id></citation></ref>
<ref id="ref11"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Davachi</surname> <given-names>L.</given-names></name> <name><surname>Mitchell</surname> <given-names>J. P.</given-names></name> <name><surname>Wagner</surname> <given-names>A. D.</given-names></name></person-group> (<year>2003</year>). <article-title>Multiple routes to memory: distinct medial temporal lobe processes build item and source memories</article-title>. <source>Proc. Natl. Acad. Sci. U. S. A.</source> <volume>100</volume>, <fpage>2157</fpage>&#x2013;<lpage>2162</lpage>. doi: <pub-id pub-id-type="doi">10.1073/pnas.0337195100</pub-id>, PMID: <pub-id pub-id-type="pmid">12578977</pub-id></citation></ref>
<ref id="ref12"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Donaldson</surname> <given-names>W.</given-names></name></person-group> (<year>1996</year>). <article-title>The role of decision processes in remembering and knowing</article-title>. <source>Mem. Cogn.</source> <volume>24</volume>, <fpage>523</fpage>&#x2013;<lpage>533</lpage>.</citation></ref>
<ref id="ref13"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Dunn</surname> <given-names>J. C.</given-names></name></person-group> (<year>2004</year>). <article-title>Remember-know: a matter of confidence</article-title>. <source>Psychol. Rev.</source> <volume>111</volume>, <fpage>524</fpage>&#x2013;<lpage>542</lpage>. doi: <pub-id pub-id-type="doi">10.1037/0033-295X.111.2.524</pub-id>, PMID: <pub-id pub-id-type="pmid">15065921</pub-id></citation></ref>
<ref id="ref14"><citation citation-type="book"><person-group person-group-type="author"><name><surname>Ebbinghaus</surname> <given-names>H.</given-names></name></person-group> (<year>1964</year>). <source>Memory</source> (Trans. H. A. Ruger and C. E. Bussenius). <publisher-loc>New York</publisher-loc>: <publisher-name>Dover</publisher-name> (Original work published 1885).</citation></ref>
<ref id="ref15"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Eichenbaum</surname> <given-names>H.</given-names></name> <name><surname>Yonelinas</surname> <given-names>A. P.</given-names></name> <name><surname>Ranganath</surname> <given-names>C.</given-names></name></person-group> (<year>2007</year>). <article-title>The medial temporal lobe and recognition memory</article-title>. <source>Annu. Rev. Neurosci.</source> <volume>30</volume>, <fpage>123</fpage>&#x2013;<lpage>152</lpage>. doi: <pub-id pub-id-type="doi">10.1146/annurev.neuro.30.051606.094328</pub-id>, PMID: <pub-id pub-id-type="pmid">17417939</pub-id></citation></ref>
<ref id="ref16"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Elward</surname> <given-names>R. L.</given-names></name> <name><surname>Vargha-Khadem</surname> <given-names>F.</given-names></name></person-group> (<year>2018</year>). <article-title>Semantic memory in developmental amnesia</article-title>. <source>Neurosci. Lett.</source> <volume>680</volume>, <fpage>23</fpage>&#x2013;<lpage>30</lpage>. doi: <pub-id pub-id-type="doi">10.1016/j.neulet.2018.04.040</pub-id>, PMID: <pub-id pub-id-type="pmid">29715544</pub-id></citation></ref>
<ref id="ref17"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Gardiner</surname> <given-names>J. M.</given-names></name> <name><surname>Java</surname> <given-names>R. I.</given-names></name></person-group> (<year>1991</year>). <article-title>Forgetting in recognition memory with and without recollective experience</article-title>. <source>Mem. Cogn.</source> <volume>19</volume>, <fpage>617</fpage>&#x2013;<lpage>623</lpage>. doi: <pub-id pub-id-type="doi">10.3758/BF03197157</pub-id></citation></ref>
<ref id="ref18"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Green</surname> <given-names>J. L.</given-names></name> <name><surname>Weston</surname> <given-names>T.</given-names></name> <name><surname>Wiseheart</surname> <given-names>M.</given-names></name> <name><surname>Rosenbaum</surname> <given-names>R. S.</given-names></name></person-group> (<year>2014</year>). <article-title>Long-term spacing effect benefits in developmental amnesia: case experiments in rehabilitation</article-title>. <source>Neuropsychology</source> <volume>28</volume>, <fpage>685</fpage>&#x2013;<lpage>694</lpage>. doi: <pub-id pub-id-type="doi">10.1037/neu0000070</pub-id>, PMID: <pub-id pub-id-type="pmid">24749729</pub-id></citation></ref>
<ref id="ref19"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Hockley</surname> <given-names>W. E.</given-names></name> <name><surname>Consoli</surname> <given-names>A.</given-names></name></person-group> (<year>1999</year>). <article-title>Familiarity and recollection in item and associative recognition</article-title>. <source>Mem. Cogn.</source> <volume>27</volume>, <fpage>657</fpage>&#x2013;<lpage>664</lpage>. doi: <pub-id pub-id-type="doi">10.3758/BF03211559</pub-id></citation></ref>
<ref id="ref20"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Huebner</surname> <given-names>G. M.</given-names></name> <name><surname>Gegenfurtner</surname> <given-names>K. R.</given-names></name></person-group> (<year>2012</year>). <article-title>Conceptual and visual features contribute to visual memory for natural images</article-title>. <source>PLoS One</source> <volume>7</volume>:<fpage>e37575</fpage>. doi: <pub-id pub-id-type="doi">10.1371/journal.pone.0037575</pub-id>, PMID: <pub-id pub-id-type="pmid">22719842</pub-id></citation></ref>
<ref id="ref21"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Jacoby</surname> <given-names>L. L.</given-names></name></person-group> (<year>1999</year>). <article-title>Ironic effects of repetition: measuring age-related differences in memory</article-title>. <source>J. Exp. Psychol. Learn. Mem. Cogn.</source> <volume>25</volume>, <fpage>3</fpage>&#x2013;<lpage>22</lpage>. doi: <pub-id pub-id-type="doi">10.1037/0278-7393.25.1.3</pub-id></citation></ref>
<ref id="ref22"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Kim</surname> <given-names>G.</given-names></name> <name><surname>Norman</surname> <given-names>K. A.</given-names></name> <name><surname>Turk-Browne</surname> <given-names>N. B.</given-names></name></person-group> (<year>2019</year>). <article-title>Neural overlap in item representations across episodes impairs context memory</article-title>. <source>Cereb. Cortex</source> <volume>29</volume>, <fpage>2682</fpage>&#x2013;<lpage>2693</lpage>. doi: <pub-id pub-id-type="doi">10.1093/cercor/bhy137</pub-id>, PMID: <pub-id pub-id-type="pmid">29897407</pub-id></citation></ref>
<ref id="ref23"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Koutstaal</surname> <given-names>W.</given-names></name> <name><surname>Schacter</surname> <given-names>D. L.</given-names></name> <name><surname>Galluccio</surname> <given-names>L.</given-names></name> <name><surname>Stofer</surname> <given-names>K. A.</given-names></name></person-group> (<year>1999</year>). <article-title>Reducing gist-based false recognition in older adults: encoding and retrieval manipulations</article-title>. <source>Psychol. Aging</source> <volume>14</volume>, <fpage>220</fpage>&#x2013;<lpage>237</lpage>.</citation></ref>
<ref id="ref24"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Kremers</surname> <given-names>N. A. W.</given-names></name> <name><surname>Deuker</surname> <given-names>L.</given-names></name> <name><surname>Kranz</surname> <given-names>T. A.</given-names></name> <name><surname>Oehrn</surname> <given-names>C.</given-names></name> <name><surname>Fell</surname> <given-names>J.</given-names></name> <name><surname>Axmacher</surname> <given-names>N.</given-names></name></person-group> (<year>2014</year>). <article-title>Hippocampal control of repetition effects for associative stimuli</article-title>. <source>Hippocampus</source> <volume>24</volume>, <fpage>892</fpage>&#x2013;<lpage>902</lpage>. doi: <pub-id pub-id-type="doi">10.1002/hipo.22278</pub-id>, PMID: <pub-id pub-id-type="pmid">24753358</pub-id></citation></ref>
<ref id="ref25"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Lee</surname> <given-names>H.</given-names></name> <name><surname>Samide</surname> <given-names>R.</given-names></name> <name><surname>Richter</surname> <given-names>F. R.</given-names></name> <name><surname>Kuhl</surname> <given-names>B. A.</given-names></name></person-group> (<year>2018</year>). <article-title>Decomposing parietal memory reactivation to predict consequences of remembering</article-title>. <source>Cereb. Cortex</source> <volume>29</volume>, <fpage>3305</fpage>&#x2013;<lpage>3318</lpage>. doi: <pub-id pub-id-type="doi">10.1093/cercor/bhy200</pub-id>, PMID: <pub-id pub-id-type="pmid">30137255</pub-id></citation></ref>
<ref id="ref26"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Litman</surname> <given-names>L.</given-names></name> <name><surname>Davachi</surname> <given-names>L.</given-names></name></person-group> (<year>2008</year>). <article-title>Distributed learning enhances relational memory consolidation</article-title>. <source>Learn. Mem.</source> <volume>15</volume>, <fpage>711</fpage>&#x2013;<lpage>716</lpage>. doi: <pub-id pub-id-type="doi">10.1101/lm.1132008</pub-id>, PMID: <pub-id pub-id-type="pmid">18772260</pub-id></citation></ref>
<ref id="ref27"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Manelis</surname> <given-names>A.</given-names></name> <name><surname>Paynter</surname> <given-names>C. A.</given-names></name> <name><surname>Wheeler</surname> <given-names>M. E.</given-names></name> <name><surname>Reder</surname> <given-names>L. M.</given-names></name></person-group> (<year>2013</year>). <article-title>Repetition related changes in activation and functional connectivity in hippocampus predict subsequent memory</article-title>. <source>Hippocampus</source> <volume>23</volume>, <fpage>53</fpage>&#x2013;<lpage>65</lpage>. doi: <pub-id pub-id-type="doi">10.1002/hipo.22053</pub-id>, PMID: <pub-id pub-id-type="pmid">22807169</pub-id></citation></ref>
<ref id="ref28"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Mazza</surname> <given-names>S.</given-names></name> <name><surname>Gerbier</surname> <given-names>E.</given-names></name> <name><surname>Gustin</surname> <given-names>M. P.</given-names></name> <name><surname>Kasikci</surname> <given-names>Z.</given-names></name> <name><surname>Koenig</surname> <given-names>O.</given-names></name> <name><surname>Toppino</surname> <given-names>T. C.</given-names></name> <etal/></person-group>. (<year>2016</year>). <article-title>Relearn faster and retain longer</article-title>. <source>Psychol. Sci.</source> <volume>27</volume>, <fpage>1321</fpage>&#x2013;<lpage>1330</lpage>. doi: <pub-id pub-id-type="doi">10.1177/0956797616659930</pub-id>, PMID: <pub-id pub-id-type="pmid">27530500</pub-id></citation></ref>
<ref id="ref29"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>McCormick-Huhn</surname> <given-names>J. M.</given-names></name> <name><surname>Bowman</surname> <given-names>C. R.</given-names></name> <name><surname>Dennis</surname> <given-names>N. A.</given-names></name></person-group> (<year>2018</year>). <article-title>Repeated study of items with and without repeated context: aging effects on memory discriminability</article-title>. <source>Memory</source> <volume>26</volume>, <fpage>603</fpage>&#x2013;<lpage>609</lpage>. doi: <pub-id pub-id-type="doi">10.1080/09658211.2017.1387267</pub-id>, PMID: <pub-id pub-id-type="pmid">29039240</pub-id></citation></ref>
<ref id="ref30"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Moscovitch</surname> <given-names>M.</given-names></name> <name><surname>Cabeza</surname> <given-names>R.</given-names></name> <name><surname>Winocur</surname> <given-names>G.</given-names></name> <name><surname>Nadel</surname> <given-names>L.</given-names></name></person-group> (<year>2016</year>). <article-title>Episodic memory and beyond: the hippocampus and neocortex in transformation</article-title>. <source>Annu. Rev. Psychol.</source> <volume>67</volume>, <fpage>105</fpage>&#x2013;<lpage>134</lpage>. doi: <pub-id pub-id-type="doi">10.1146/annurev-psych-113011-143733</pub-id>, PMID: <pub-id pub-id-type="pmid">26726963</pub-id></citation></ref>
<ref id="ref31"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Moscovitch</surname> <given-names>M.</given-names></name> <name><surname>Nadel</surname> <given-names>L.</given-names></name> <name><surname>Winocur</surname> <given-names>G.</given-names></name> <name><surname>Gilboa</surname> <given-names>A.</given-names></name> <name><surname>Rosenbaum</surname> <given-names>R. S.</given-names></name></person-group> (<year>2006</year>). <article-title>The cognitive neuroscience of remote episodic, semantic and spatial memory</article-title>. <source>Curr. Opin. Neurobiol.</source> <volume>16</volume>, <fpage>179</fpage>&#x2013;<lpage>190</lpage>. doi: <pub-id pub-id-type="doi">10.1016/j.conb.2006.03.013</pub-id>, PMID: <pub-id pub-id-type="pmid">16564688</pub-id></citation></ref>
<ref id="ref32"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Nadel</surname> <given-names>L.</given-names></name> <name><surname>Moscovitch</surname> <given-names>M.</given-names></name></person-group> (<year>1997</year>). <article-title>Memory consolidation, retrograde amnesia and the hippocampal complex</article-title>. <source>Curr. Opin. Neurobiol.</source> <volume>7</volume>, <fpage>217</fpage>&#x2013;<lpage>227</lpage>. doi: <pub-id pub-id-type="doi">10.1016/S0959-4388(97)80010-4</pub-id>, PMID: <pub-id pub-id-type="pmid">9142752</pub-id></citation></ref>
<ref id="ref33"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Palombo</surname> <given-names>D. J.</given-names></name> <name><surname>Keane</surname> <given-names>M. M.</given-names></name> <name><surname>Verfaellie</surname> <given-names>M.</given-names></name></person-group> (<year>2016</year>). <article-title>Does the hippocampus keep track of time?</article-title> <source>Hippocampus</source> <volume>26</volume>, <fpage>372</fpage>&#x2013;<lpage>379</lpage>. doi: <pub-id pub-id-type="doi">10.1002/hipo.22528</pub-id>, PMID: <pub-id pub-id-type="pmid">26343544</pub-id></citation></ref>
<ref id="ref34"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Palombo</surname> <given-names>D. J.</given-names></name> <name><surname>Sheldon</surname> <given-names>S.</given-names></name> <name><surname>Levine</surname> <given-names>B.</given-names></name></person-group> (<year>2018</year>). <article-title>Individual differences in autobiographical memory</article-title>. <source>Trends Cogn. Sci.</source> <volume>22</volume>, <fpage>583</fpage>&#x2013;<lpage>597</lpage>. doi: <pub-id pub-id-type="doi">10.1016/j.tics.2018.04.007</pub-id>, PMID: <pub-id pub-id-type="pmid">29807853</pub-id></citation></ref>
<ref id="ref35"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Poppenk</surname> <given-names>J.</given-names></name> <name><surname>McIntosh</surname> <given-names>A. R.</given-names></name> <name><surname>Craik</surname> <given-names>F. I. M.</given-names></name> <name><surname>Moscovitch</surname> <given-names>M.</given-names></name></person-group> (<year>2010</year>). <article-title>Past experience modulates the neural mechanisms of episodic memory formation</article-title>. <source>J. Neurosci.</source> <volume>30</volume>, <fpage>4707</fpage>&#x2013;<lpage>4716</lpage>. doi: <pub-id pub-id-type="doi">10.1523/JNEUROSCI.5466-09.2010</pub-id>, PMID: <pub-id pub-id-type="pmid">20357121</pub-id></citation></ref>
<ref id="ref36"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Reagh</surname> <given-names>Z. M.</given-names></name> <name><surname>Murray</surname> <given-names>E. A.</given-names></name> <name><surname>Yassa</surname> <given-names>M. A.</given-names></name></person-group> (<year>2017</year>). <article-title>Repetition reveals ups and downs of hippocampal, thalamic, and neocortical engagement during mnemonic decisions</article-title>. <source>Hippocampus</source> <volume>27</volume>, <fpage>169</fpage>&#x2013;<lpage>183</lpage>. doi: <pub-id pub-id-type="doi">10.1002/hipo.22681</pub-id>, PMID: <pub-id pub-id-type="pmid">27859884</pub-id></citation></ref>
<ref id="ref37"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Reagh</surname> <given-names>Z. M.</given-names></name> <name><surname>Yassa</surname> <given-names>M. A.</given-names></name></person-group> (<year>2014</year>). <article-title>Repetition strengthens target recognition but impairs similar lure discrimination: evidence for trace competition</article-title>. <source>Learn. Mem.</source> <volume>21</volume>, <fpage>342</fpage>&#x2013;<lpage>346</lpage>. doi: <pub-id pub-id-type="doi">10.1101/lm.034546.114</pub-id>, PMID: <pub-id pub-id-type="pmid">24934334</pub-id></citation></ref>
<ref id="ref38"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Robin</surname> <given-names>J.</given-names></name> <name><surname>Moscovitch</surname> <given-names>M.</given-names></name></person-group> (<year>2017</year>). <article-title>Details, gist and schema: hippocampal-neocortical interactions underlying recent and remote episodic and spatial memory</article-title>. <source>Curr. Opin. Behav. Sci.</source> <volume>17</volume>, <fpage>114</fpage>&#x2013;<lpage>123</lpage>. doi: <pub-id pub-id-type="doi">10.1016/j.cobeha.2017.07.016</pub-id></citation></ref>
<ref id="ref39"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Sadeh</surname> <given-names>T.</given-names></name> <name><surname>Ozubko</surname> <given-names>J. D.</given-names></name> <name><surname>Winocur</surname> <given-names>G.</given-names></name> <name><surname>Moscovitch</surname> <given-names>M.</given-names></name></person-group> (<year>2014</year>). <article-title>How we forget may depend on how we remember</article-title>. <source>Trends Cogn. Sci.</source> <volume>18</volume>, <fpage>26</fpage>&#x2013;<lpage>36</lpage>. doi: <pub-id pub-id-type="doi">10.1016/j.tics.2013.10.008</pub-id>, PMID: <pub-id pub-id-type="pmid">24246135</pub-id></citation></ref>
<ref id="ref40"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Santangelo</surname> <given-names>V.</given-names></name> <name><surname>Cavallina</surname> <given-names>C.</given-names></name> <name><surname>Colucci</surname> <given-names>P.</given-names></name> <name><surname>Santori</surname> <given-names>A.</given-names></name> <name><surname>Macri</surname> <given-names>S.</given-names></name> <name><surname>McGaugh</surname> <given-names>J. L.</given-names></name> <etal/></person-group>. (<year>2018</year>). <article-title>Enhanced brain activity associated with memory access in highly superior autobiographical memory</article-title>. <source>Proc. Natl. Acad. Sci. U. S. A.</source> <volume>115</volume>, <fpage>7795</fpage>&#x2013;<lpage>7800</lpage>. doi: <pub-id pub-id-type="doi">10.1073/pnas.1802730115</pub-id>, PMID: <pub-id pub-id-type="pmid">29987025</pub-id></citation></ref>
<ref id="ref41"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Santangelo</surname> <given-names>V.</given-names></name> <name><surname>Pedale</surname> <given-names>T.</given-names></name> <name><surname>Macri</surname> <given-names>S.</given-names></name> <name><surname>Campolongo</surname> <given-names>P.</given-names></name></person-group> (<year>2020</year>). <article-title>Enhanced cortical specialization to distinguish older and newer memories in highly superior autobiographical memory</article-title>. <source>Cortex</source> <volume>129</volume>, <fpage>476</fpage>&#x2013;<lpage>483</lpage>. doi: <pub-id pub-id-type="doi">10.1016/j.cortex.2020.04.029</pub-id>, PMID: <pub-id pub-id-type="pmid">32599463</pub-id></citation></ref>
<ref id="ref42"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Sekeres</surname> <given-names>M. J.</given-names></name> <name><surname>Bonasia</surname> <given-names>K.</given-names></name> <name><surname>St-Laurent</surname> <given-names>M.</given-names></name> <name><surname>Pishdadian</surname> <given-names>S.</given-names></name> <name><surname>Winocur</surname> <given-names>G.</given-names></name> <name><surname>Grady</surname> <given-names>C.</given-names></name> <etal/></person-group>. (<year>2016</year>). <article-title>Recovering and preventing loss of detailed memory: differential rates of forgetting for detail types in episodic memory</article-title>. <source>Learn. Mem.</source> <volume>23</volume>, <fpage>72</fpage>&#x2013;<lpage>82</lpage>. doi: <pub-id pub-id-type="doi">10.1101/lm.039057.115</pub-id>, PMID: <pub-id pub-id-type="pmid">26773100</pub-id></citation></ref>
<ref id="ref43"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Slamecka</surname> <given-names>N. J.</given-names></name> <name><surname>McElree</surname> <given-names>B.</given-names></name></person-group> (<year>1983</year>). <article-title>Normal forgetting of verbal lists as a function of their degree of learning</article-title>. <source>J. Exp. Psychol. Learn. Mem. Cogn.</source> <volume>9</volume>, <fpage>384</fpage>&#x2013;<lpage>397</lpage>.</citation></ref>
<ref id="ref44"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Squire</surname> <given-names>L. R.</given-names></name> <name><surname>Wixted</surname> <given-names>J. T.</given-names></name> <name><surname>Clark</surname> <given-names>R. E.</given-names></name></person-group> (<year>2007</year>). <article-title>Recognition memory and the medial temporal lobe: a new perspective</article-title>. <source>Nat. Rev. Neurosci.</source> <volume>8</volume>, <fpage>872</fpage>&#x2013;<lpage>883</lpage>. doi: <pub-id pub-id-type="doi">10.1038/nrn2154</pub-id>, PMID: <pub-id pub-id-type="pmid">17948032</pub-id></citation></ref>
<ref id="ref45"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Tuckey</surname> <given-names>M. R.</given-names></name> <name><surname>Brewer</surname> <given-names>N.</given-names></name></person-group> (<year>2003</year>). <article-title>How schemas affect eyewitness memory over repeated retrieval attempts</article-title>. <source>Appl. Cogn. Psychol.</source> <volume>17</volume>, <fpage>785</fpage>&#x2013;<lpage>800</lpage>. doi: <pub-id pub-id-type="doi">10.1002/acp.906</pub-id></citation></ref>
<ref id="ref46"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Tulving</surname> <given-names>E.</given-names></name></person-group> (<year>1985</year>). <article-title>Ebbinghaus&#x2019;s memory: what did he learn and remember?</article-title> <source>J. Exp. Psychol. Learn. Mem. Cogn.</source> <volume>11</volume>, <fpage>485</fpage>&#x2013;<lpage>490</lpage>. doi: <pub-id pub-id-type="doi">10.1037/0278-7393.11.3.485</pub-id></citation></ref>
<ref id="ref47"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Tulving</surname> <given-names>E.</given-names></name></person-group> (<year>2002</year>). <article-title>Episodic memory: from mind to brain</article-title>. <source>Annu. Rev. Psychol.</source> <volume>53</volume>, <fpage>1</fpage>&#x2013;<lpage>25</lpage>. doi: <pub-id pub-id-type="doi">10.1146/annurev.psych.53.100901.135114</pub-id>, PMID: <pub-id pub-id-type="pmid">11752477</pub-id></citation></ref>
<ref id="ref48"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Vargha-Khadem</surname> <given-names>F.</given-names></name> <name><surname>Gadian</surname> <given-names>D. G.</given-names></name> <name><surname>Watkins</surname> <given-names>K. E.</given-names></name> <name><surname>Connelly</surname> <given-names>A.</given-names></name> <name><surname>Vanpaesschen</surname> <given-names>W.</given-names></name> <name><surname>Mishkin</surname> <given-names>M.</given-names></name></person-group> (<year>1997</year>). <article-title>Differential effects of early hippocampal pathology on episodic and semantic memory</article-title>. <source>Science</source> <volume>277</volume>, <fpage>376</fpage>&#x2013;<lpage>380</lpage>. doi: <pub-id pub-id-type="doi">10.1126/science.277.5324.376</pub-id>, PMID: <pub-id pub-id-type="pmid">9219696</pub-id></citation></ref>
<ref id="ref49"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Wagner</surname> <given-names>A. D.</given-names></name> <name><surname>Maril</surname> <given-names>A.</given-names></name> <name><surname>Schacter</surname> <given-names>D. L.</given-names></name></person-group> (<year>2000</year>). <article-title>Interactions between forms of memory: when priming hinders new episodic learning</article-title>. <source>J. Cogn. Neurosci.</source> <volume>12</volume>, <fpage>52</fpage>&#x2013;<lpage>60</lpage>. doi: <pub-id pub-id-type="doi">10.1162/089892900564064</pub-id>, PMID: <pub-id pub-id-type="pmid">11506647</pub-id></citation></ref>
<ref id="ref50"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Williams</surname> <given-names>H. L.</given-names></name> <name><surname>Conway</surname> <given-names>M. A.</given-names></name> <name><surname>Moulin</surname> <given-names>C. J. A.</given-names></name></person-group> (<year>2013</year>). <article-title>Remembering and knowing: using another&#x2019;s subjective report to make inferences about memory strength and subjective experience</article-title>. <source>Conscious. Cogn.</source> <volume>22</volume>, <fpage>572</fpage>&#x2013;<lpage>588</lpage>. doi: <pub-id pub-id-type="doi">10.1016/j.concog.2013.03.009</pub-id>, PMID: <pub-id pub-id-type="pmid">23619311</pub-id></citation></ref>
<ref id="ref51"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Winocur</surname> <given-names>G.</given-names></name> <name><surname>Moscovitch</surname> <given-names>M.</given-names></name></person-group> (<year>2011</year>). <article-title>Memory transformation and systems consolidation</article-title>. <source>J. Int. Neuropsychol. Soc.</source> <volume>17</volume>, <fpage>766</fpage>&#x2013;<lpage>780</lpage>. doi: <pub-id pub-id-type="doi">10.1017/S1355617711000683</pub-id>, PMID: <pub-id pub-id-type="pmid">21729403</pub-id></citation></ref>
<ref id="ref52"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Wixted</surname> <given-names>J. T.</given-names></name> <name><surname>Stretch</surname> <given-names>V.</given-names></name></person-group> (<year>2004</year>). <article-title>In defense of the signal detection interpretation of remember/know judgments</article-title>. <source>Psychon. Bull. Rev.</source> <volume>11</volume>, <fpage>616</fpage>&#x2013;<lpage>641</lpage>. doi: <pub-id pub-id-type="doi">10.3758/BF03196616</pub-id>, PMID: <pub-id pub-id-type="pmid">15581116</pub-id></citation></ref>
<ref id="ref53"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Xue</surname> <given-names>G.</given-names></name> <name><surname>Mei</surname> <given-names>L.</given-names></name> <name><surname>Chen</surname> <given-names>C.</given-names></name> <name><surname>Lu</surname> <given-names>Z.-L.</given-names></name> <name><surname>Poldrack</surname> <given-names>R.</given-names></name> <name><surname>Dong</surname> <given-names>Q.</given-names></name></person-group> (<year>2011</year>). <article-title>Spaced learning enhances subsequent recognition memory by reducing neural repetition suppression</article-title>. <source>J. Cogn. Neurosci.</source> <volume>23</volume>, <fpage>1624</fpage>&#x2013;<lpage>1633</lpage>. doi: <pub-id pub-id-type="doi">10.1162/jocn.2010.21532</pub-id>, PMID: <pub-id pub-id-type="pmid">20617892</pub-id></citation></ref>
<ref id="ref54"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Yang</surname> <given-names>J.</given-names></name> <name><surname>Zhan</surname> <given-names>L.</given-names></name> <name><surname>Wang</surname> <given-names>Y.</given-names></name> <name><surname>Du</surname> <given-names>X.</given-names></name> <name><surname>Zhou</surname> <given-names>W.</given-names></name> <name><surname>Ning</surname> <given-names>X.</given-names></name> <etal/></person-group>. (<year>2016</year>). <article-title>Effects of learning experience on forgetting rates of item and associative memories</article-title>. <source>Learn. Mem.</source> <volume>23</volume>, <fpage>365</fpage>&#x2013;<lpage>378</lpage>. doi: <pub-id pub-id-type="doi">10.1101/lm.041210.115</pub-id>, PMID: <pub-id pub-id-type="pmid">27317197</pub-id></citation></ref>
<ref id="ref55"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Yassa</surname> <given-names>M. A.</given-names></name> <name><surname>Reagh</surname> <given-names>Z. M.</given-names></name></person-group> (<year>2013</year>). <article-title>Competitive trace theory: a role for the hippocampus in contextual interference during retrieval</article-title>. <source>Front. Behav. Neurosci.</source> <volume>7</volume>:<fpage>107</fpage>. doi: <pub-id pub-id-type="doi">10.3389/fnbeh.2013.00107</pub-id>, PMID: <pub-id pub-id-type="pmid">23964216</pub-id></citation></ref>
<ref id="ref56"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Yonelinas</surname> <given-names>A. P.</given-names></name></person-group> (<year>2002</year>). <article-title>The nature of recollection and familiarity: a review of 30 years of research</article-title>. <source>J. Mem. Lang.</source> <volume>46</volume>, <fpage>441</fpage>&#x2013;<lpage>517</lpage>. doi: <pub-id pub-id-type="doi">10.1006/jmla.2002.2864</pub-id></citation></ref>
<ref id="ref57"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Yonelinas</surname> <given-names>A. P.</given-names></name> <name><surname>Jacoby</surname> <given-names>L. L.</given-names></name></person-group> (<year>1995</year>). <article-title>The relation between remembering and knowing as bases for recognition: effects of size congruency</article-title>. <source>J. Mem. Lang.</source> <volume>34</volume>, <fpage>622</fpage>&#x2013;<lpage>643</lpage>. doi: <pub-id pub-id-type="doi">10.1006/jmla.1995.1028</pub-id></citation></ref>
<ref id="ref58"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Zhou</surname> <given-names>W.</given-names></name> <name><surname>Chen</surname> <given-names>H.</given-names></name> <name><surname>Yang</surname> <given-names>J.</given-names></name></person-group> (<year>2018</year>). <article-title>Discriminative learning of similar objects enhances memory for the objects and contexts</article-title>. <source>Learn. Mem.</source> <volume>25</volume>, <fpage>601</fpage>&#x2013;<lpage>610</lpage>. doi: <pub-id pub-id-type="doi">10.1101/lm.047514.118</pub-id>, PMID: <pub-id pub-id-type="pmid">30442768</pub-id></citation></ref></ref-list>
<fn-group><fn fn-type="financial-disclosure"><p><bold>Funding.</bold> This research was supported by grants from the National Natural Science Foundation of China (31571114 and 32071027 to JY). The funder had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.</p></fn></fn-group>
</back>
</article>